
I have a server running with two SSD disks in RAID1, and both drives report a Media_Wearout_Indicator value of 043 in smartctl.

233 Media_Wearout_Indicator 0x0032   043   043   000    Old_age   Always       -       0

Two months ago it was at 44. I am not sure how to interpret this or whether I should be worried. Will it realistically be fine until it reaches zero, or when would be a good time to get a replacement?

jwl
  • Do you have a RAID controller? – ewwhite Nov 03 '14 at 08:48
  • No, software raid. – jwl Nov 03 '14 at 08:52
    You lost 1% in 2 months, which means the remaining 43% are good for about 86 months. What do you worry about? (And note, drives can often last a lot longer than they guarantee.) Also note: if you run end-user SSDs they may have SERIOUS write limitations, making them hardly usable for many server scenarios. – TomTom Nov 03 '14 at 09:29

1 Answer


Replace the drive when it fails. That said, some info about your OS, the brand of SSDs and the hardware would be helpful, as would the age of the disks, how long they've been in use, and the workload on them. What are you doing to them?

But see: How to check the life left in SSD or the medium's wear level?

Media_Wearout_Indicator is a normalized percentage, so your SSDs' write-cycle life is showing 43% remaining.
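If you want to watch this value over time rather than eyeball the smartctl table, here is a minimal Python sketch (my own illustration, not from the original answer) that pulls the normalized VALUE column for attribute 233 out of `smartctl -A` output; the sample line matches the one in the question:

```python
import re
from typing import Optional

def wearout_pct(smartctl_output: str) -> Optional[int]:
    """Return the normalized Media_Wearout_Indicator value (percent life left),
    or None if attribute 233 is not present in the output."""
    for line in smartctl_output.splitlines():
        # Attribute table columns: ID# NAME FLAG VALUE WORST THRESH TYPE ...
        m = re.match(r"\s*233\s+Media_Wearout_Indicator\s+\S+\s+(\d+)", line)
        if m:
            return int(m.group(1))  # normalized VALUE column, e.g. 043 -> 43
    return None

sample = "233 Media_Wearout_Indicator 0x0032   043   043   000    Old_age   Always       -       0"
print(wearout_pct(sample))  # -> 43
```

In practice you would feed it the output of something like `subprocess.run(["smartctl", "-A", "/dev/sda"], capture_output=True, text=True).stdout` and log the result from cron, so you can see the actual decay rate instead of guessing from two data points.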

How to determine number of write cycles or expected life for SSD under Linux?

And understand that S.M.A.R.T. checks are not the only determinants of a drive's health.

ewwhite