My two cents, a gift for all of you, is my own experience: yes, it helps, but with caution.
I have had a lot of SSDs, and based on my own tests I recommend filling them completely with zeros before rewriting the partition table, and recreating the partition table instead of just deleting partitions.
I will explain why later, but the steps would be: use `dd` to fill the entire SSD, with `bs=1M` (much faster than `bs=1`), and omit the `count` parameter so it runs to the end (it will report a "no space left on device" error when it reaches the end; that is expected, so don't worry when you see it, it must appear). After the full fill, use GParted or whatever tool you want to write a new partition table (MBR/GPT/etc.) as needed; this effectively trims the whole disk. Then create partitions with the desired format, and so on.
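The fill step above can be sketched as follows. `/dev/sdX` is a placeholder for your actual device; triple-check the name before running anything like this, because it destroys all data on the target. A safe way to see the behaviour is against an image file instead of a real device:

```shell
# DESTRUCTIVE on a real device! There (placeholder name) it would be:
#   sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress
# with no count=, so dd stops only at the "No space left on device"
# error -- that error is the expected end of the run.

# Safe demonstration against an image file standing in for the device
# (count= is needed here only because a file, unlike a device, grows):
truncate -s 16M disk.img
dd if=/dev/zero of=disk.img bs=1M count=16 status=none
cmp -n 16777216 disk.img /dev/zero && echo "device is now all zeros"
rm disk.img
```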
Why fill it with zeros? The short answer is that, in my experience, some SSDs that were giving 2-24 unreadable blocks got fixed after I filled them with zeros: no more unreadable blocks.
Now the first thing I do when I receive a new SSD, before using it, is to fill it completely with zeros, to make sure I will not suffer the common random errors of unreadable 1 KiB blocks again.
My experience: using software that reads and tests the whole SSD (it tells you how long it takes to read each "sector"), I was getting a lot of pairs of 512-byte sectors (1 KiB blocks) that were unreadable; their positions changed randomly, and the number of failures varied from 2 to 24. After a full fill with zeros and recreating the partition table (which causes a TRIM), there were no more unreadable sectors.
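A hedged sketch of such a whole-disk read test: `badblocks` (from e2fsprogs) does a read-only scan by default, and plain `dd` can stream the whole device and surface I/O errors. The device name is an example; an image file keeps the demo safe:

```shell
# On a real device (example name), a read-only surface scan would be:
#   sudo badblocks -sv /dev/sdX     # read-only by default, prints bad blocks
# The same brute-force sequential read with dd, shown on an image file:
truncate -s 8M disk.img             # stand-in for the device
dd if=disk.img of=/dev/null bs=1M   # a healthy device reads to the end
rm disk.img                         # with no I/O errors reported
```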
My crash test: instead of filling one SSD with zeros to recover from such errors, I let it keep being used. After a few hours, and with less than one terabyte written to it (a 120 GiB SSD), it died miserably: it no longer allows any access at all, the motherboard BIOS cannot see it, USB enclosures freeze when accessing it, so neither Windows nor Linux fdisk can see it.
It was a "die" test with multiple identical SSDs I had bought at the same time: all the ones I did not zero-fill have died, while the rest have a lot of reallocated blocks but no unreadable errors anymore.
My conclusion, of course, is that no SSD is truly reliable, no matter the brand or capacity.
So the first thing to do with them, in my experience, is to force a full fill at least once, preferably with zeros rather than random data (it is faster).
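The speed difference is easy to demonstrate: `/dev/zero` costs essentially nothing to read, while `/dev/urandom` has to generate every byte. A small comparison (the sizes are arbitrary):

```shell
# Write the same amount of data from each source and compare timings:
time dd if=/dev/zero    of=zeros.img  bs=1M count=64 status=none
time dd if=/dev/urandom of=random.img bs=1M count=64 status=none
rm zeros.img random.img
```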
Moreover, most SSDs perform an internal TRIM when written with zeros (garbage collection algorithms, etc.).
Also, if you fill them once first, any block that gives a write error gets reallocated. It is much better that this happens while no vital data is on the disk: if data is lost while writing zeros, it does not matter (everything was zeros), but if the data is vital to the operating system, it is very bad.
Most SSDs do reallocate, but they lose the data in the block that gave the write error; only "enterprise" drives (they cost >10€ per GiB) correctly retry the write after reallocating. Some SSDs will also lose all the other "sectors" in the failed block (as if doing a "discard").
So it is best to try this first; after the full fill, check the SMART data to see how many reallocations can still be done.
It is not so important how many reallocations have already been done; most SSDs leave the factory with some blocks already reallocated, and finding one with zero is a less than 1% chance. What matters is the ratio: reallocations already done versus possible future reallocations.
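A sketch of that SMART check, assuming the `smartmontools` package is installed; the attribute line below is a made-up sample, and real output varies by vendor. SMART attribute 5 (`Reallocated_Sector_Ct`) is the usual one to watch: the normalized VALUE counts down toward THRESH as spare blocks are consumed, which is the remaining-headroom ratio described above.

```shell
# On a real drive (example device name) you would run:
#   sudo smartctl -A /dev/sdX | grep -i realloc
# Extracting the interesting columns from a saved report line
# (this sample line is illustrative, not from a real drive):
sample='  5 Reallocated_Sector_Ct   0x0033   099   099   010    Pre-fail  Always       -       4'
echo "$sample" | awk '{print "normalized:", $4, "threshold:", $6, "raw count:", $10}'
# prints: normalized: 099 threshold: 010 raw count: 4
```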
This is my experience after hundreds of SSDs dying over five years: some died in the first hour of use, others within a week, others within a month; but all the ones I zero-filled lived for 2 to 3 years, with about 13 GiB written each day. 3 × 365 × 13 GiB ≈ 13.9 TiB written, much less than what manufacturers claim (>100 TiB).
But speed matters, mostly on Windows (on Linux, a good 2×HDD LVM2 striped setup gives nearly the same boot times, and those HDDs have not failed in >25 years), so using an SSD at a price of 0.21€ per gigabyte (120 GiB = 25€) is worth it (for Windows), even though they must be replaced after 2 or 3 years; I hope the technology will improve in reliability.
For Linux I do not want SSDs anymore until they become more reliable, but for Windows (Vista, 7 and 10) an SSD system partition is a must (boot times ten times lower in some cases: with Windows Vista, instead of a >30-minute boot, it boots in about 4 minutes on my old laptop).
Yes, a full fill with zeros is a must, given my experience.
But only when you first receive the SSD, before using it for anything.
Tip: if the SSD does not do garbage collection well and the operating system does not tell it to TRIM everything, it is better to do a full fill with zeros; in the end, that is what happens internally in the SSD when it discards blocks. Also, writing zeros refreshes the cells electrically, which in my experience is why it helps recover blocks that fail to read.
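One way to apply that tip to a drive already in use, without wiping it, is the common trick of zeroing only the free space from inside the mounted filesystem. The mount point here is an example, and `count=` is only there to keep the demo small; on a real run you omit it so `dd` stops when the disk is full:

```shell
# Zero the free space of a mounted filesystem:
MOUNTPOINT=${MOUNTPOINT:-.}       # example: set this to your SSD's mount point
dd if=/dev/zero of="$MOUNTPOINT/zerofile" bs=1M count=16 status=none || true
sync                              # make sure the zeros actually hit the drive
rm "$MOUNTPOINT/zerofile"         # free the space again
```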
And also, every time you change data on it, try to make a clone. The SSD will report that a write was OK even on unreadable sectors (they can be written OK but not read back), and no operating system is designed to handle that condition; they all assume that if the write was OK, the data can be read back. Do not confuse "unreadable" with "reads back different data than what was written".
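A minimal cloning sketch. For a drive that already has unreadable sectors, `ddrescue` is the better-suited tool, but plain `dd` with `conv=noerror,sync` illustrates the idea; the device names would be placeholders, and image files keep the demonstration safe:

```shell
# Real cloning (placeholder names) would be:
#   sudo dd if=/dev/sdX of=/dev/sdY bs=1M conv=noerror,sync status=progress
# conv=noerror,sync skips unreadable blocks and pads them with zeros
# instead of aborting the whole copy.

# Safe demonstration with image files:
truncate -s 8M source.img
dd if=source.img of=clone.img bs=1M conv=noerror,sync status=none
cmp source.img clone.img && echo "clone matches source"
rm source.img clone.img
```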
That is my experience with SSDs and HDDs. For Windows boot and applications I use an SSD, but always with a clone kept on normal HDDs, since SSDs die in less than 3 years; for Linux I use 2× or 3× good 2.5" HDDs to get similar times in normal use to what an SSD would give, but lasting much longer (>25 years).
I am not willing to pay >1000€ for a 100 GiB enterprise SSD that works well for 10 years; I prefer to pay 25€ for a 120 GiB one every 2 or 3 years. Price matters: 100€ per year (enterprise) versus 10€ per year (Yucon, Samsung, etc.); just do the maths.
8 No advantage to having zeros to start with, as it will overwrite anything that is there to begin with. – Moab – 2015-10-16T22:08:37.327
7 If it was an SSD, the answer would be "hell no", because mkfs.ext4 actually uses TRIM to discard the entire partition's contents, so manually writing zeros on top of that would slightly reduce performance. – user1686 – 2015-10-16T22:56:09.333
2 @grawity I have been wondering if there is any SSD which automatically turns writes of all zeros into TRIM instead. – kasperd – 2015-10-17T09:06:28.027
@kasperd: good follow-up question! – liori – 2015-10-17T18:00:35.263
Another possible follow-up question: if said disk was a VM volume mounted on a SAN that did low-level disk deduplication, could that help? If data deleted at the VM level isn't communicated via some VM-aware API, the SAN would think all of those deleted blocks need to stick around, whereas setting them to all zeros would end up with a bunch of duplicate blocks all pointing to a single all-zero one. – Foon – 2015-10-17T19:10:50.847
@Foon That's a very different question from what the OP is asking. It may be a valid question (it probably needs more details to be answerable) but it isn't this question. – a CVn – 2015-10-17T21:14:50.817
Why use `bs=1`? – user253751 – 2015-10-18T06:14:47.173

Not sure why no one has mentioned this: particularly on a hard drive, you want to defragment the disk. This will re-order files so they are in contiguous blocks, which makes consecutive reads faster, in some cases dramatically so. – davidgo – 2016-03-11T08:28:33.123