Samsung SSD "Wear_Leveling_Count" meaning

I have Samsung SSDs on my own laptop and on some servers.

When I do:

smartctl -a /dev/sda | grep 177

I get results that I cannot understand. Here are some examples:

# my laptop Samsung SSD 850 EVO 500GB (new)
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
177 Wear_Leveling_Count     0x0013   100   100   000    Pre-fail  Always       -       0

# server 256 GB, SAMSUNG MZ7TE256HMHP-00000
177 Wear_Leveling_Count     0x0013   095   095   000    Pre-fail  Always       -       95

# server 512 GB, SAMSUNG MZ7TE512HMHP-00000 (1 year old)
177 Wear_Leveling_Count     0x0013   099   099   000    Pre-fail  Always       -       99

# server 512 GB, SAMSUNG MZ7TE512HMHP-00000 (supposed to be new)
177 Wear_Leveling_Count     0x0013   099   099   000    Pre-fail  Always       -       99

# server 480 GB, SAMSUNG MZ7KM480HAHP-0E005
177 Wear_Leveling_Count     0x0013   099   099   005    Pre-fail  Always       -       3

# server 240 GB, SAMSUNG MZ7KM240HAGR-0E005
177 Wear_Leveling_Count     0x0013   099   099   005    Pre-fail  Always       -       11

Any idea how to read Wear_Leveling_Count?

Some of the raw values are at the minimum, some look like they are at the maximum.

If I consider the "laptop" Samsung SSD 850 EVO 500GB, it is at 0 and will probably go up to 100, and then the drive will fail.

If I consider the first "server" drive, the 256 GB SAMSUNG MZ7TE256HMHP-00000, is it already at the maximum? Will it go down to zero?

Nick

Answers

Kingston describe this SMART attribute as follows:

Number of erase/program cycles per block on average. This attribute is intended to be an indicator of imminent wear-out. Normalized Equation: 100 – ( 100 * Average Erase Count / NAND max rated number of erase cycles)
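As a quick sanity check of that formula, here is a minimal sketch (the figures are hypothetical, since vendors rarely publish the rated P/E count):

# hypothetical: blocks averaging 150 erases on NAND rated for 3,000 P/E cycles
awk 'BEGIN { print 100 - (100 * 150 / 3000) }'
# prints 95, which is the normalized VALUE smartctl would show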

Ignore the Raw Data in these instances (manufacturers are free to define what it means and how it behaves), and look at the Current Value column.

This AnandTech source gives us a good indication of how to use this figure:

The Wear Leveling Count (WLC) SMART value gives us all the data we need. The current value stands for the remaining endurance of the drive in percentage, meaning that it starts from 100 and decreases linearly as the drive is written to. The raw WLC value counts the consumed P/E cycles, so if these two values are monitored while writing to the drive, sooner than later we will find the spot where the normalized value drops by one.
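If you want to watch those two values yourself, here is a minimal polling sketch (assuming the device is /dev/sda, as in the question; adjust the path and interval to taste):

# log the normalized VALUE (column 4) and RAW_VALUE (column 10) of attribute 177
while true; do
    date
    sudo smartctl -A /dev/sda | awk '$1 == 177 { print "value:", $4, "raw:", $10 }'
    sleep 600   # every 10 minutes
done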

All of your drives are between 95 and 100, and will eventually drop to 0. The value is an estimate of how many program/erase (P/E) cycles each block can go through before failing, and at the moment one of your drives is estimated to have used 5% of its expected life span. Again, the key word here is estimated.

Note also that your drives may use different NAND technology, hence the differences in perceived life. Some NAND is rated for around 1,000 P/E cycles per block; other types are rated for as much as 30,000.

Jonno

I attached the table "header". What is the "current" value? Is it the "VALUE" column? – Nick – 2016-02-09T17:36:54.717

@Nick Yes, exactly. – Jonno – 2016-02-09T17:42:45.947

That's the exact opposite of my experience. My new drives (Samsung 850 Pro, Samsung 840 Pro) started at a Raw Value of 0 and went up from there. In fact, my current 840 Pro was at 97 about a month ago, and it's now at 99. (This is from looking at SMART data through the Samsung Magician software.) – Granger – 2017-01-09T15:13:50.523

@Granger Do you have a 'Value' or 'Current' column? Raw values are typically up to the OEM to decide what they do with, and aren't necessarily legible data. Notice in the example the OP provided, the 'VALUE' is 100, and 'RAW_VALUE' is 0 for their 850 EVO. – Jonno – 2017-01-09T16:24:53.160

Ah. That makes more sense if I completely ignore the "Raw Value" column. – Granger – 2017-01-09T18:21:55.277

So it turns out gnome-disk-utility reports the raw value as "Value" and the value as "Normalized". – Rodney – 2017-07-20T08:53:09.380

I have a two-year-old Samsung SSD 850 PRO with a Wear Leveling Count of 098 (value) and 118 (raw value). Is that bad? – casolorz – 2017-12-04T16:37:59.527

@casolorz Far from it, you've used 2% of the anticipated life of your drive. Enjoy another potential 98 years of use ;) (Note that I say that in jest, of course these are just approximations) – Jonno – 2017-12-08T23:18:18.567

On my Samsung SSD 840 EVO 250GB, Wear_Leveling_Count is 43 on a not heavily used SSD after the final firmware update to fix the slow speed. The update has definitely sped up the wear. – sdaffa23fdsf – 2017-12-25T11:08:11.737

Samsung SSD 850 EVO 500GB, 11 months of usage, Wear_Leveling_Count is 061. It seems to wear out quite fast. – NeverEndingQueue – 2018-10-09T07:27:57.410

@sdaffa23fdsf The 840 EVO fixed its problem by rewriting/updating cells, if I recall correctly. I'm not surprised that the Wear_Leveling_Count is so poor on the 840 EVO. I had that drive for two or three years. I feel your pain. – D-Klotz – 2019-04-26T19:54:36.473

Just an FYI, as we're doing a fair amount of research into this. Our SSDs were warrantied to 500TB written, and the Wear Leveling Count went to 0 when we reached that. We are now at around 2.5PB written and we still have no bad blocks or reallocations at all. I suspect that this number is pretty arbitrary, and is simply there to make people buy new SSDs earlier than they need to. – Reverend Tim – 2019-08-14T08:16:21.463

@ReverendTim Yes, there's really no way to know for sure; we're just using fairly meaningless estimated values. I'd be interested to see your results as and when you have any, if they're being made public. – Jonno – 2019-08-14T08:58:17.120

@Jonno they will indeed. A colleague of mine will be publishing a blog about it once we've blown up all the drives :) – Reverend Tim – 2019-08-15T09:54:27.943

We've got a bunch of old 840 EVOs here that all have a VALUE of 001, but still appear to be working. YMMV. – Mike Andrews – 2019-10-02T21:57:32.350

SMART reports a PREFAILED condition for my Samsung SM951 (AHCI) 128GB, reported in Linux as SAMSUNG MZHPV128HDGM-00000 (BXW2500Q).

But in my case I think it's a firmware bug in the drive:

  • because the total-bytes-written property is reported as 1.1 TB while the drive has a specified Total Bytes Written (TBW) rating of 75 TB! That rating is probably on the (very) safe side, because similar (MLC NAND) drives all reached a multiple of it (600 TB) in a real endurance test,
  • and apart from the Wear_Leveling_Count warning, no other pre-fail or old-age errors or warnings are reported,
  • while the reallocated-sector-count, which according to that test is a good pre-fail indicator, is still 0.

So my advice would be to examine those values for your drive/system and base your conclusions on that.

I prefer the low-level utility skdump, which is supplied with libatasmart, the same library that is used by GNOME Disks.

Use the following command, replacing /dev/sdc with the path to your block device:

sudo skdump /dev/sdc
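To pull out just the attributes discussed above, something along these lines should work (a sketch; the dashed attribute names are libatasmart's and may vary by drive):

sudo skdump /dev/sdc | grep -E 'wear-leveling-count|reallocated-sector-count|total-lbas-written'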

Ronald
