Is it "better" for flash storage to fill with 1's instead of 0's?

0

When I back up a drive, I like to compress the image, so I first fill the free space with a single repeated value so that it collapses to almost nothing:

cat /dev/zero > ~/zeros   # fill all free space with zero bytes (stops when the disk is full)
sync                      # make sure the zeros actually reach the device
rm ~/zeros                # delete the file, leaving the now-zeroed space free again

For a mechanical/magnetic drive, this simply writes 0's to all the free space in a single operation per bit, but for a solid-state/flash device like an SD card or SSD, writing 0's actually takes two operations:

  • A mass erase, which sets an entire block to all 1's at once
  • An individual write, which puts selected bits back to 0

So, by filling a solid-state/flash device with 1's instead of 0's, as this answer describes how to do, would I speed the process up or extend the life of the device by avoiding the second step of the write process?
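
For illustration, a fill with all-1 bytes might look something like this (a sketch only: tr translates the zero bytes from /dev/zero into 0xFF bytes, and ~/ones is just a placeholder filename):

tr '\0' '\377' < /dev/zero > ~/ones   # translate 0x00 to 0xFF, so every bit written is a 1
sync                                  # flush the file to the device
rm ~/ones                             # free the space again, now reading back as all 1's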

AaronD

Posted 2017-01-14T01:48:25.327

Reputation: 149

What is the goal? – Ramhound – 2017-01-14T01:53:23.587

@Ramhound: Basically the questions at the end. Specifically, I'm doing a lot of trial-and-error trying to build a Pi the way I want it and thrashing the SD card a lot in the process. Backing up partial successes, then trying a completely different approach, etc. I didn't think the specific application was relevant except that I'm writing to it a lot. – AaronD – 2017-01-14T01:58:17.353

I think erasures just unlink entries to file locations or partition tables, and leave the data untouched. If during a restore the drive decided not to do a write because the bit happened to be the right one already, I would guess that is a feature of the installer or drive controller and not the way flash storage works. – Louis – 2017-01-14T02:06:24.400

@Louis: Yes, rm <file> does that, but I'm using dd for the backup/restore, not a file-copy that misses things that aren't files. The point of the full-drive file that is immediately deleted is to set all the free space to the same value so the drive image can be compressed efficiently. ~600MB for an 8GB SD card with ~2GB used, for example. Despite that file only being ~600MB, it must be restored to an 8GB card or bigger. The overall point is to read back a bunch of the same value when backing up, regardless of how that works underneath...except that there aren't THAT many spare blocks. – AaronD – 2017-01-14T02:19:33.650
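
(For reference, the dd backup/restore described above looks roughly like this; /dev/mmcblk0 and the file names are assumptions, substitute your own device and paths:

sudo dd if=/dev/mmcblk0 bs=4M | gzip > pi-backup.img.gz      # read the whole card and compress on the fly
gunzip -c pi-backup.img.gz | sudo dd of=/dev/mmcblk0 bs=4M   # restore to a card of the same size or larger
)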

Your question isn't clear, so clarify it, what is your end goal of writing 0's or 1's to the device? – Ramhound – 2017-01-14T02:35:02.790

@Ramhound: I'm not sure I changed much, but I reworded it anyway. Is this better? – AaronD – 2017-01-14T02:47:25.587

Most SSDs have a bulk erase that's so fast it would be a better solution than dd – Ramhound – 2017-01-14T03:17:07.190

@Ramhound: I don't want to erase EVERYTHING!!! Just set the free space to an easily-compressible pattern (all the same value) so I can then read it with dd and compress it. – AaronD – 2017-01-14T03:21:49.547

(okay, so I'm using the word "bulk" in a more general sense in the question than what the spec uses; changing that) – AaronD – 2017-01-14T03:23:41.900

You understand that SSDs have limited writes, right? Do you have any benchmarks that show your typical actions will actually be helpful with an SSD? – Ramhound – 2017-01-14T03:48:20.783

@Ramhound: Yes, that's half the point of the question - avoiding actual, physical writes by the specific value that I use, considering how the technology works. Other than reading an entire 8GB SD card into a 0.6GB compressed image, restoring the entire 8GB, and running it without errors, no I don't have a benchmark for my typical action of filling the drive with the same value. Personally, I consider that by itself to be a pretty good benchmark. – AaronD – 2017-01-14T03:52:41.430

Answers

0

This feels like an XY problem - the short answer is that it probably does not matter at all, but it makes more sense to do the zeroing after imaging rather than on the card.

Most SSDs encrypt or scramble data for wear-levelling purposes, so all you're likely doing is wearing out the drive a little faster, I suspect. SD cards I'm not sure about; it's pretty uncommon to use them as boot drives outside scenarios with very few writes.

If it's a relatively small backup, I suspect the 'smart' way is to image the drive first, then mount the image, zero out the free space in it, and compress. You work on relatively fast storage, minimise rewrites on flash storage, and you're reading out the whole drive and starting with an uncompressed image anyway. Imaging first, then zeroing out, then compressing saves on wear.
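
A minimal sketch of that workflow, assuming the card appears as /dev/mmcblk0, that losetup -P is available to map the image's partitions, and that the root filesystem is the second partition (all of these are assumptions to adapt):

sudo dd if=/dev/mmcblk0 of=backup.img bs=4M                    # image the whole card first
LOOP=$(sudo losetup -P -f --show backup.img)                   # e.g. /dev/loop0, with partitions loop0p1, loop0p2
sudo mount "${LOOP}p2" /mnt                                    # mount the image's root partition
sudo sh -c 'cat /dev/zero > /mnt/zeros; sync; rm /mnt/zeros'   # zero the image's free space, not the card's
sudo umount /mnt
sudo losetup -d "$LOOP"
gzip backup.img                                                # the zeroed free space now compresses away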

It's also worth considering that, if I remember correctly, NOOBS works off disk images on a FAT32 partition anyway, and if so, you can just mount the SSD, copy over the disk image inside, and perform those operations there, then replace the disk images as needed. Or just copy over the contents of the drive, compress it as needed, and replace the contents of the old drive with the new one.

Journeyman Geek

Posted 2017-01-14T01:48:25.327

Reputation: 119 122

Ooo, that sounds attractive! I would never have thought of that! Okay, so on a Lubuntu laptop, how can I mount an uncompressed image file with multiple partitions (made with dd from the entire physical card) and fill its free space with compressible stuff? – AaronD – 2017-01-14T04:48:08.527

yup. I believe kpartx -a -v backup.img will add a series of devices in /dev/mapper, which you can then mount as if they were a physical disk – Journeyman Geek – 2017-01-14T04:49:44.747

I had to install that first, and it needs to run as sudo, but then it mounted a raw SD image as if I had inserted a physical card. Unfortunately, the directory structure appears to be readonly. (any attempt to create a file, even as sudo, results in Permission denied) The manpage does mention a readonly option (-r), but I didn't use it; only copied/pasted yours so far, with backup.img changed to what it really is of course. – AaronD – 2017-01-14T05:16:21.617

Erf. I usually use it read-only - you are treating the things in /dev/mapper as devices, right? So you'd need to mount them to actually work on them; you can run dd on them like an sdX device... – Journeyman Geek – 2017-01-14T05:22:41.073

I'm not sure what you mean there. I'm sure I'm missing a lot under the hood, but from what I can actually see: I get a new bookmark/quick-link in the filesystem browser (PCManFM) for each partition in the image, just like when I plug a card in. I click on one, its eject icon appears just like it does for a physical card, and I can read it and change existing files; I just can't create a file, even with sudo. – AaronD – 2017-01-14T06:16:04.973

Ooh, that's weird. Sounds like it's automatically mounting them with odd permissions. Try working out the mount points, unmounting them, then mounting them again – Journeyman Geek – 2017-01-14T06:23:06.610

sudo mount /dev/mapper/loop0p2 /path/that/I/own doesn't make any difference. Still can't create a file. /proc/mounts says that both attempts ended up with the same options: ext4 rw,nosuid,nodev,relatime,block_validity,dealloc,barrier,user_xattr,acl 0 0 Any of that stand out to you? – AaronD – 2017-01-14T06:57:44.033

0

@AaronD

Are you trying to ask whether wiping free space is better done with a pattern of 0's or 1's, because there will be fewer blocks being used and refilled?

Thinking of 0 as empty and 1 as used, and asking whether by that logic it will increase or decrease the life of an SSD? And the second line of reasoning: that it will collapse blocks to an unused state, instead of leaving free space full of deleted or removed items, which is still data taking up space and more blocks, causing more writes, which makes degradation faster.

You're missing the point of why Ramhound does not understand your question.

It's your logic: wiping the free space by creating a file and piping data into it with

cat /dev/zero > ~/zeros

Creating a write in order to clear the free blocks is still a write on that drive, meaning it's a bad idea: a good thought, but bad logic, regardless of whether it helps to have a smaller backup image or not. This process will help create a smaller compressed backup image, but you're killing your drive in the process.

By piping the output into a file on the same drive to clear its free space, you are creating a large file with > on that same drive, because that still writes data to the drive even though it is free space. A write to the drive is a write, and a massive write at that.

I think you're asking a loaded question, meaning the logic you're describing makes sense, but it's wrong, 100% counterproductive, and possibly wearing out your drive 10x faster.

Keeping the drive's free space compressible by writing a file of huge size still means writes: a write is a write. So even though I get the entire logic, and your thinking that the smallest possible backup image helps is true, wiping the free space creates far more writes than simply leaving the free space as-is in your backup image would, especially when you repeat the process over and over. The untouched free space would only add a small amount to the image, whereas writing a file across the entire free space of the drive creates a massive write. The drive will be worse off because it's writing a file: erasing clears the file, but you're writing a file.

Jason Swartz

Posted 2017-01-14T01:48:25.327

Reputation: 41

It's not so much that the blocks are used or not. They're all used when I write the file and freed when I delete the file, regardless of what value I fill the file with. The only top-level difference is the residual value that is read back from what's technically free space now, and I was hoping to save a little bit of wear and maybe even some speed by cutting out the second half of a generic write cycle while filling the file. That's really all it is - just the standard fill-with-compressible-stuff-before-compression routine, but with a different fill value to try and optimize the process. – AaronD – 2017-01-14T06:31:33.340

That being said though, I really like where Journeyman Geek is going. That promises to side-step the issue altogether, if we can just work out the final bug or two. – AaronD – 2017-01-14T06:32:33.403