
A dd solution is provided:

unzip -p 2015-11-21-raspbian-jessie.zip 2015-11-21-raspbian-jessie.img | dd of=/dev/sdb bs=1M

However, dd is contraindicated when cat is available. I would like to constrain the question to answers that do not use dd. Solutions using cat (good) or pv (better) to write to the SD card are preferred over dd.

An image of an SD card was successfully copied to sdcard.image with the following pv command and then burned to a second SD card:

sudo sh -c 'pv /dev/disk2 >sdcard.image' 

/dev/disk2 is the SD card.

The SD card image is 32GB and was zipped to an 8GB file: sdcard.image.zip. From the command line, how does one unzip and burn the file? Assume there is not enough space to unzip the image to the Macbook's SSD.

The goal is to avoid writing the 32GB file to the laptop's SSD and burn the compressed image directly to the SD card.
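
A minimal sketch of that pipeline, assuming the member inside the archive is named sdcard.image and /dev/disk2 is the SD card (both taken from above), would be:

# On macOS the card's volumes usually have to be unmounted before the raw
# device can be written; diskutil unmountDisk does that in one step.
diskutil unmountDisk /dev/disk2

# Stream the decompressed image straight onto the card; pv reports throughput.
unzip -p sdcard.image.zip sdcard.image | sudo sh -c 'pv >/dev/disk2'

No intermediate 32GB file is created: unzip decompresses to stdout and pv copies that stream to the device.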

gatorback
  • @Freddy Very close, the idea of unzipping and piping the contents into a write command is what I had envisioned. I should have initially constrained the question to preclude dd solutions given that they are contra-indicated in favor of cat. – gatorback Jan 27 '20 at 21:27
  • With unzip as the input you probably do want dd in the pipeline to coalesce part block reads into single block-sized writes (iflag=fullblock); see the sketch after these comments. – Chris Davies Jan 27 '20 at 21:39
  • @roaima: I would guess that the SD card driver would make this unnecessary.  What am I overlooking? – G-Man Says 'Reinstate Monica' Jan 28 '20 at 04:21
  • @G-Man I'd prefer to write entire blocks rather than dribbles. Efficiency, and reducing the number of consecutive partial writes to the same SSD disk block. – Chris Davies Jan 28 '20 at 07:49
  • @roaima: Well, we agree that doing multiple physical writes to the same physical block of a storage device is a bad thing.  But (1) when I started using Unix, there were two kinds of device nodes (/dev entries→inodes) for storage devices: block special devices (a.k.a. “cooked”), e.g., hd0, and character special devices (a.k.a. “raw”), e.g., rhd0.  The raw device interface permitted more-or-less direct access to the hardware, which was sometimes beneficial (especially in the case of large block sizes) and often deleterious (especially in the case of small block sizes).  … (Cont’d) – G-Man Says 'Reinstate Monica' Feb 03 '20 at 08:58
  • (Cont’d) …  By contrast, the cooked device driver put a buffer between userland and the hardware.  If a process wrote 42 bytes to /dev/hd0, the driver would keep those bytes in a buffer in kernel memory, and write (or “flush”) them out to the hardware only when the process had written another 470 bytes (i.e., for a total of 512) or a long period of time had elapsed.  (I apologize if this comes across as pedantic.  I assume that you already know this, but I wanted to be sure that we were on the same page.)  … (Cont’d) – G-Man Says 'Reinstate Monica' Feb 03 '20 at 08:58
  • (Cont’d) …  In the past ten or twenty years, raw disk devices have been going away, leaving us with only the cooked version.   So my point is that I don’t see much benefit in adding a userland process to do what the kernel is already doing.   That seems less efficient.   Also, (2) it’s my understanding that SSD controllers (i.e., the firmware in the hardware) doesn’t really allow multiple writes to the same block.   That you never really overwrite a block; rather, it allocates a new block, writes that, remaps the block number** to the new block, … (Cont’d) – G-Man Says 'Reinstate Monica' Feb 03 '20 at 08:59
  • (Cont’d) …  and then (eventually) erases the original block.  That’s still no excuse for making the hardware do more work than it needs to; I’m just suggesting that it’s less harmful than you make it sound.   (And, again, I assume that you know about wear leveling, but that it merely slipped your mind.) – G-Man Says 'Reinstate Monica' Feb 03 '20 at 08:59
  • @G-ManSays'ReinstateMonica' I also remember raw and cooked disk devices. And hard drives that had a head-park switch as well as a power switch... :-/ Maybe I'm being overcautious, but I have had recent (last year or so) issues with uncompressed streams, partial blocks, and short (incomplete) writes. As a result I prefer to ensure my data is explicitly aggregated into whole blocks. – Chris Davies Feb 03 '20 at 09:05
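
As suggested in the comments above, dd can sit at the end of the pipeline purely to coalesce the short reads coming out of unzip into full block-sized writes. A sketch, assuming GNU dd (iflag=fullblock is a GNU option and may not exist in the BSD dd shipped with macOS):

# dd here acts only as a re-blocking buffer in front of the device.
unzip -p sdcard.image.zip sdcard.image |
  sudo dd of=/dev/disk2 bs=1M iflag=fullblock

In this form dd is not the copying tool the question objects to so much as a re-blocker between unzip and the card.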

1 Answer

unzip -p sdcard.image.zip sdcard.image >/dev/disk2

man unzip provides an explanation of unzip with options shown for MacOS X, and this link is the equivalent for Debian.
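
Writing to the raw device normally requires root, and since the question prefers pv, a variant along these lines gives a progress bar with a percentage and ETA. This is a sketch: the awk extraction of the uncompressed size from the unzip -l listing is an assumption about its column layout and worth verifying first.

# Uncompressed size of the image inside the zip, so pv can show % done / ETA.
size=$(unzip -l sdcard.image.zip sdcard.image | awk '$NF == "sdcard.image" {print $1; exit}')

# Extract to stdout and stream it onto the card through pv.
unzip -p sdcard.image.zip sdcard.image | sudo sh -c "pv -s $size >/dev/disk2"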

FYI, data is 'written' to SD cards because they are an inherently read-write medium; 'burning' is a term used for optical media. If you are copying data out of an archive (zip or otherwise), you are 'extracting' it.

K7AAY
  • The OP has been updated to indicate the SD card is at /dev/disk2/. If /dev/sdb is replaced with /dev/disk2/ should I expect the command to write the .image file to the SD card as it is unzipped? Does the ... act as a wildcard or do I need to replace this with the image filename? Thank you – gatorback Jan 27 '20 at 21:16
  • ... just means "gimme the fully qualified path name" a la https://en.wikipedia.org/wiki/Fully_qualified_name#Path_names – K7AAY Jan 27 '20 at 21:27
  • The suggested command has been updated with a concrete example: my best guess at what I think you are suggesting. Please adjust as necessary. – gatorback Jan 27 '20 at 21:32
  • How about unzip -p sdcard.image.zip > /dev/disk2/sdcard.image – K7AAY Jan 28 '20 at 18:11