A couple of things here. First, the command you listed probably won't work as you're expecting. It looks like you're trying to hit 150GB, but you need to factor in both the block size and the count (count is the number of blocks of block size). So if you want 150GB, you might do bs=1GB count=150. You could then pick up where you left off by adding skip=150 to skip 150 blocks (each of 1GB) on the second run. Also, passing gzip the -c option makes it write to standard out (it does that by default when reading from a pipe, but being explicit doesn't hurt).
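If you want to sanity-check those numbers first, blockdev (part of util-linux on most distros) can report the partition's size in bytes, which makes it easy to work out how many 1GB blocks you actually need; a quick sketch:
sudo blockdev --getsize64 /dev/sdf1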
However, before you do that, a couple of other questions: are you using dd here because the filesystem is corrupted/damaged and you need a bit-for-bit exact disk image of it, or are you just trying to get the data off? A filesystem-aware tool might be more effective, particularly if the source filesystem isn't full. Options include tar, or something like Clonezilla, Partclone, or Partimage; for a more Windows-specific way to directly access a Windows filesystem, there's Linux-NTFS (note the previously mentioned tools can handle Windows filesystems to various degrees, too).
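As a concrete sketch: if that partition happens to be NTFS and ntfsclone (shipped with ntfs-3g/ntfsprogs) is installed, you could save only the used clusters to a compact "special image" and compress that instead of the raw device:
sudo ntfsclone --save-image --output sdf1.ntfsclone /dev/sdf1
gzip sdf1.ntfsclone
Because unused clusters are skipped entirely, this is usually far smaller than a raw dd image even before gzip gets to it, and ntfsclone --restore-image writes it back later.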
If you are set on operating on the partition with a non-filesystem-aware program, then your dd line (as modified above) will likely work. It's hard to say how well it will compress, but it should end up smaller than the original filesystem. If you have read-write access to the original filesystem, it's worth filling the free space with a file written from /dev/zero before taking the image; zeroing out the unused space greatly improves how well gzip can compress it.
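A minimal sketch of that zero-fill step, assuming the filesystem mounts cleanly at /mnt (zerofill is just a throwaway filename, and the dd is expected to die with "No space left on device" once the free space is used up):
sudo mount /dev/sdf1 /mnt
sudo dd if=/dev/zero of=/mnt/zerofill bs=1M   # runs until the disk is full, then errors out - that's the point
sudo rm /mnt/zerofill
sudo umount /mnt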
To operate on the second chunk, just add a skip=XXX bit to your second dd invocation, where XXX equals the count= value you gave it the first time. If you wanted to do 150GB on your first one and 40 on your second, you might do:
sudo dd if=/dev/sdf1 bs=1GB count=150 | gzip -c > img1.gz
followed by:
sudo dd if=/dev/sdf1 bs=1GB skip=150 count=40 | gzip -c > img2.gz
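When it comes time to restore (or just to verify the pieces), gunzip handles concatenated gzip streams, so you can feed the two files back in order; assuming you're writing to the same (or an identically sized) partition:
cat img1.gz img2.gz | gunzip -c | sudo dd of=/dev/sdf1 bs=1M
Swapping the dd for wc -c is a cheap way to confirm the decompressed total matches the size you expect before you overwrite anything.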