8

I am building a disk image for an embedded system (to be placed on a 4GB SD card). I want the system to have two partitions: a 'Root' partition (200MB) and a 'Data' partition (800MB).

I create an empty 1GB file with dd. Then I use parted to set up the partitions. I map each of them to a loop device and format them: ext2 for 'Root', ext4 for 'Data'. I add my root file system to the 'Root' partition and leave 'Data' empty.
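
Roughly, the commands look like this (sizes, paths, and names here are illustrative, not my exact invocations):

    dd if=/dev/zero of=disk.img bs=1M count=1024      # empty 1GB image
    parted -s disk.img mklabel msdos \
        mkpart primary ext2 1MiB 201MiB \
        mkpart primary ext4 201MiB 1001MiB
    LOOP=$(losetup --show -Pf disk.img)               # e.g. /dev/loop0
    mkfs.ext2 "${LOOP}p1"                             # 'Root'
    mkfs.ext4 "${LOOP}p2"                             # 'Data'
    mkdir -p /mnt/root && mount "${LOOP}p1" /mnt/root
    cp -a rootfs/. /mnt/root/                         # copy in the root file system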

Here's where the problem is: I am now stuck with a 1GB image with only 200MB of data on it. Shouldn't I, in theory, be able to truncate the image down to, say, 201MB and still have the file system mountable? Unfortunately, I have not found this to be the case.

I recall in the past having used a build environment from Freescale that created 30MB images, which contained partitions laid out to utilize an entire 4GB SD card. Unfortunately, at this time, I cannot find how they were doing that.

I have read up on the on-disk format of the ext file systems, and since there is no data past the first superblock (except for backup superblocks and unused block tables), I thought I could truncate there.

Unfortunately, when I do this, the mount fails. I can then run fsck to restore the superblocks and block tables, after which it mounts with no problem. I just don't think that should be necessary.

Perhaps a different file system could work? Any ideas?

thanks,

edit

Changed 'partition' to read 'file system'. The partition is still there and doesn't change, but the file system is getting destroyed after truncating the image.

edit

I have found that when I truncate the file to a size just larger than the 'Data' partition's first set of superblock and inode/block tables (somewhere in the data-block range), the file system becomes unmountable without running fsck to restore the rest of the superblocks and block/inode tables.

AllenKll
  • I don't understand the title. I think it should read “How do I shrink a file-system and partition?” – ctrl-alt-delor Jul 16 '15 at 21:20
  • Would you mind clarifying when you say ...I have not found this to be the case... What have you found to be the case? And you should probably stop using parted - that program is not about partition tables, but about some weird amalgam of partition tables + filesystems. In general, trying to combine the two is bad juju - partition tables and filesystems exist entirely independent of one another. – mikeserv Jul 16 '15 at 21:21
  • Are you wanting to create this image with 800MB of space, so you can copy it to SD cards, but not wanting to waste space when storing the image? – ctrl-alt-delor Jul 16 '15 at 21:48
  • The partition table is at the head of the disk - not anywhere else. You can get the partition table with wipefs and dd and you can just tar up the fs data or whatever and unpack when creating the image. – mikeserv Jul 16 '15 at 21:50
  • @richard I am not looking to shrink the file system, just the image of it. I want it to remain the actual size, but since there is nothing in 800MB of it, there should be no reason to transfer the extra 800MB – AllenKll Jul 16 '15 at 21:54
  • @mikeserv okay.. stop using parted. what command line tool should I replace it with? fdisk? – AllenKll Jul 16 '15 at 22:01
  • Yes, that would be better. I'm doing some tests to show how to do this definitively, but here's some fairly exhaustive demo on how and where partition tables go when you create them. – mikeserv Jul 16 '15 at 22:11

4 Answers

6

The easiest way to do this is to create your backing file as a sparse file; that is, make it 1GB with truncate -s 1G disk.img instead of dd if=/dev/zero bs=1048576 count=1024 of=disk.img (or whatever). Nicely, truncate is also far faster.

If you do an ls -l on the file, it'll show as 1GB—but that's only its apparent size. du disk.img will give the actual size.
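
For example (sizes illustrative; the exact du output depends on your filesystem):

    truncate -s 1G disk.img   # apparent size 1.0G, nothing allocated yet
    ls -lh disk.img           # reports the apparent size: 1.0G
    du -h disk.img            # reports the allocated size: 0 until data is written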

(Note—you'll need to have your image on a filesystem that supports sparse files. All the common Unix ones do. Ext2/3/4 all do. FAT32 does not. HFS+ doesn't, either.)

NOTE: Sparse files are still, logically, the full size; it's just that the never-written sections aren't physically stored on disk. For the most part, the not-actually-stored parts are hidden from programs, though a few utilities have explicit support for sparseness (e.g., dd conv=sparse, cp --sparse=auto/always, etc.). An actual USB stick or SD card can't be sparse. If you use dd conv=sparse to write the image out, it'll probably be much faster, but it will leave whatever data was there before in place of the blocks full of NULs (0x00) that would otherwise be written. The filesystem should work fine (it's all free space as far as it is concerned), but the old data remains on the stick, which could be a security concern.
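
For instance, a rough sketch of writing the image out, where /dev/sdX stands in for your SD card or USB stick:

    # Plain copy: writes every byte, holes included, as zeroes.
    dd if=disk.img of=/dev/sdX bs=1M

    # Sparse copy: skips the all-zero blocks, so it's usually much faster,
    # but whatever data was previously in those regions of the card remains.
    dd if=disk.img of=/dev/sdX bs=1M conv=sparse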

derobert
  • @mikeserv yep, I realized that and added a note about it. You probably saw an older version... – derobert Jul 16 '15 at 21:31
  • I did some experimenting, and yes, du does show it as a much smaller size, but dd-ing to the card still showed 1GB copied, and scp-ing to another system showed 1 Gig copied. I'm not sure what truncate -s really does but while it may look smaller on disk, any use of the file leaves it at its original size. – AllenKll Jul 16 '15 at 21:48
  • @mikeserv you're correct, I meant to write /dev/zero not null. Oops! Will fix. – derobert Jul 17 '15 at 02:58
  • Oh, yeah, /dev/zero will definitely not do a sparse file. – mikeserv Jul 17 '15 at 03:10
  • @AllenKll I've updated the answer to explain what's going on there. – derobert Jul 17 '15 at 04:56
  • Thanks derobert. I'd have used fallocate, though that would create a larger on-disk object. Also, by using truncate -s for file creation, the time required is instant. – David Favor Jul 17 '15 at 12:46
  • @DavidFavor - the time required is instant with any of fallocate -l... img, truncate -s... img, dd seek=... of=img </dev/null, all of which are equivalent. – mikeserv Jul 17 '15 at 15:54
5

Firstly, writing a sparse image to a disk will still cover the disk with the whole size of that image file - holes and all. This is because handling of sparse files is a quality of the filesystem, and a raw device (such as the one to which you write the image) has no filesystem on it yet. A sparse file can be stored safely and compactly on a medium controlled by a filesystem which understands sparse files (such as an ext4 device), but as soon as you write it out to a raw device it will occupy its full, apparent size. And so what you should do is either:

  1. Simply store it on an fs which understands sparse files until you are prepared to write it.

  2. Make it two layers deep...

    • Which is to say, write out your main image to a file, create another parent image with an fs which understands sparse files, then copy your image to the parent image, and...

    • When it comes time to write the image, first write your parent image, then write your main image.

Here's how to do 2:

  • Create a 1GB sparse file...

    dd bs=1kx1k seek=1k of=img </dev/null
    
  • Write two Linux partitions to its partition table: partition 1 of 200MB and partition 2 filling the rest (~800MB)...

    printf '%b\n\n\n\n' n '+200M\nn\n' 'w\n\c' | fdisk img
    
  • Create two ext4 filesystems on a -Partitioned loop device and put a copy of the second on the first...

    sudo sh -c '
        for p in "$(losetup --show -Pf img)p"*        ### the for loop will iterate
        do    mkfs.ext4 "$p"                          ### over fdisks two partitions
              mkdir -p ./mnt/"${p##*/}"               ### and mkfs, then mount each
              mount "$p" ./mnt/"${p##*/}"             ### on dirs created for them
        done; sync; cd ./mnt/*/                       ### next we cp a sparse image
        cp --sparse=always "$p" ./part2               ### of part2 onto part1
        dd bs=1kx1k count=175 </dev/zero >./zero_fill ### fill out part1 w/ zeroes
        sync; cd ..; ls -Rhls .                       ### sync, and list contents
        umount */; losetup -d "${p%p*}"               ### last umount, destroy
        rm -rf loop*p[12]/ '                          ### loop devs and mount dirs
    

    mke2fs 1.42.12 (29-Aug-2014)
    Discarding device blocks: done
    Creating filesystem with 204800 1k blocks and 51200 inodes
    Filesystem UUID: 2f8ae02f-4422-4456-9a8b-8056a40fab32
    Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729
    
    Allocating group tables: done
    Writing inode tables: done
    Creating journal (4096 blocks): done
    Writing superblocks and filesystem accounting information: done
    
    mke2fs 1.42.12 (29-Aug-2014)
    Discarding device blocks: done
    Creating filesystem with 210688 4k blocks and 52752 inodes
    Filesystem UUID: fa14171c-f591-4067-a39a-e5d0dac1b806
    Superblock backups stored on blocks:
        32768, 98304, 163840
    
    Allocating group tables: done
    Writing inode tables: done
    Creating journal (4096 blocks): done
    Writing superblocks and filesystem accounting information: done
    
    175+0 records in
    175+0 records out
    183500800 bytes (184 MB) copied, 0.365576 s, 502 MB/s
    ./:
    total 1.0K
    1.0K drwxr-xr-x 3 root root 1.0K Jul 16 20:49 loop0p1
       0 drwxr-xr-x 2 root root   40 Jul 16 20:42 loop0p2
    
    ./loop0p1:
    total 176M
     12K drwx------ 2 root root  12K Jul 16 20:49 lost+found
     79K -rw-r----- 1 root root 823M Jul 16 20:49 part2
    176M -rw-r--r-- 1 root root 175M Jul 16 20:49 zero_fill
    
    ./loop0p1/lost+found:
    total 0
    
    ./loop0p2:
    total 0
    
  • Now that's a lot of output - mostly from mkfs.ext4 - but notice especially the ls bits at the bottom. ls -s will show the actual size of a file on disk - and it is always displayed in the first column.

  • Now we can basically reduce our image to only the first partition...

    fdisk -l img
    

    Disk img: 1 GiB, 1073741824 bytes, 2097152 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0xc455ed35
    
    Device Boot  Start     End Sectors  Size Id Type
    img1          2048  411647  409600  200M 83 Linux
    img2        411648 2097151 1685504  823M 83 Linux
    
  • There fdisk tells us the first partition of img ends at sector 411647, so the image up to and including that partition spans 411647 + 1 = 411648 512-byte sectors (a scripted sketch of this cut appears after this list)...

    dd seek=411648 of=img </dev/null
    
  • That truncates the img file to only its first partition. See?

    ls -hls img
    

    181M -rw-r--r-- 1 mikeserv mikeserv 201M Jul 16 21:37 img
    
  • ...but we can still mount that partition...

    sudo mount "$(sudo losetup -Pf --show img)p"*1 ./mnt
    
  • ...and here are its contents...

    ls -hls ./mnt
    

    total 176M
     12K drwx------ 2 root root  12K Jul 16 21:34 lost+found
     79K -rw-r----- 1 root root 823M Jul 16 21:34 part2
    176M -rw-r--r-- 1 root root 175M Jul 16 21:34 zero_fill
    
  • And we can append the stored image of the second partition to the first...

    sudo sh -c '
        dd seek=411648 if=./mnt/part2 of=img
        umount ./mnt; losetup -D
        mount "$(losetup -Pf --show img)p"*2 ./mnt
        ls ./mnt; umount ./mnt; losetup -D'
    

    1685504+0 records in
    1685504+0 records out
    862978048 bytes (863 MB) copied, 1.96805 s, 438 MB/s
    lost+found
    
  • Now that has grown our img file: it's no longer sparse...

    ls -hls img
    

    1004M -rw-r--r-- 1 mikeserv mikeserv 1.0G Jul 16 21:58 img
    
  • ...but removing that is as simple the second time as it was the first, of course...

    dd seek=411648 of=img </dev/null
    ls -hls img
    

    181M -rw-r--r-- 1 mikeserv mikeserv 201M Jul 16 22:01 img
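
If you want to script that truncation step (for an automated build or upgrade), the same cut can be expressed with truncate instead of dd; here's a minimal sketch using the end sector from the fdisk -l output above (adjust the number for your own layout):

    # Partition 1 ends at sector 411647 and sectors are 512 bytes, so keep
    # the first (411647 + 1) * 512 bytes of the image.
    end_sector=411648
    truncate -s $(( end_sector * 512 )) img   # equivalent to the dd seek trick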
    
mikeserv
  • I had to read this four times to fully understand what you've presented, and I think it's brilliant, and well documented. Since this is going to be an image on the device, I can have the startup script check to see if there is a second formatted partition; in the case of first boot the answer is no, so it will extract the partition. In the case of an upgrade it should leave it alone... and then there is also the added possibility of forcing an extraction over an existing Data file system to reset to factory defaults. Again... Brilliant! – AllenKll Jul 17 '15 at 18:21
  • Not exactly what I was asking for but it certainly solves the problem I have... so I'm marking this one correct. Kudos to you sir. – AllenKll Jul 17 '15 at 18:22
  • @AllenKll - I'm pleased it pleased you. If you're going to incorporate this into an automated upgrade process, consider a little optimization of dd block-sizes. When doing the truncation thing it doesn't matter - dd seeks instantly to the offset and truncates the file right there. But when doing the append the 512 byte blocksize will be pretty slow. Actually, probably the fastest way to do it is to allow dd to seek and allow cat to write. { dd if=/dev/null seek=411648; cat; } <part2 1<>/dev/mmcblk1 or something, if you get my drift. – mikeserv Jul 17 '15 at 19:03
0

Are you wanting to create this image with 800MB of space so you can copy it to SD cards, but not wanting to waste space when storing the image? If so, may I suggest compression, such as bzip2. It will depend on unused blocks being initialised to zero, though.
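
For example, a rough sketch, assuming the 1GB image is mostly zero-filled and the card shows up as /dev/sdX (an illustrative name):

    bzip2 -k disk.img                          # keeps disk.img, writes disk.img.bz2
    # ...store or transfer the much smaller disk.img.bz2, then later:
    bzcat disk.img.bz2 | dd of=/dev/sdX bs=1M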

  • Yes, mostly. I've seen it done without zipping, and I just want to know how to do it. Also for transferring to the card, to save time, as well as for doing remote upgrades, to save bandwidth. – AllenKll Jul 16 '15 at 21:51
-1

Hope I understand the question properly:

To shrink a file-system and partition, use gparted.
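
The non-GUI equivalent is roughly the following sketch (device names illustrative; the partition entry itself would still need shrinking afterwards with fdisk or parted):

    LOOP=$(losetup --show -Pf disk.img)   # attach the image, expose its partitions
    e2fsck -f "${LOOP}p2"                 # resize2fs requires a clean filesystem
    resize2fs "${LOOP}p2" 200M            # shrink the ext filesystem to 200MB
    losetup -d "$LOOP"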

  • This will change the size of the partition. I do not want that. I want the partition to stay the same size. just the image of that partition to be smaller. Plus I need an automated way to do it. GUI tools will not work. – AllenKll Jul 16 '15 at 21:29
  • Then just use the file-system shrinker (that gparted/parted use). But why do we want most of the partition to be empty? – ctrl-alt-delor Jul 16 '15 at 21:39