Why has my Samsung 970 PRO NVMe SSD had a significant slowdown in performance?

0

My 1TB NVMe M.2 SSD has become VERY slow over the last month or two, and I would really like to restore the original performance. This is a request for debugging suggestions and advice. Below is some system information plus benchmarks (sysbench random read/write tests, followed by a sysbench random read benchmark).

This Gentoo Linux system was built last fall and is based on a SuperMicro C9X299-PG300 motherboard with three drives:

System drive: Samsung 970 PRO NVMe M.2 SSD
Home and data: Crucial MX500 SATA2 SSD
Backup: Seagate BarraCuda Pro SATA2 HDD

Current kernel: 4.20.7-gentoo
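One early thing worth ruling out on any NVMe slowdown is the PCIe link itself: if the controller has retrained to fewer lanes or a lower generation than it supports, throughput is capped. A sketch of the check (not from the original post; assumes pciutils is installed, and selects the device by PCI class 0108, the NVMe class code):

```shell
# Compare LnkCap (what the link can do) against LnkSta (what it is
# actually running at) for the NVMe controller; a mismatch in speed or
# width means the link has degraded. Needs root for full LnkSta output.
if command -v lspci >/dev/null 2>&1; then
    lspci -d ::0108 -vv 2>/dev/null | grep -E 'LnkCap:|LnkSta:' \
        || echo "could not read PCIe link status (run as root?)"
else
    echo "lspci not installed"
fi
```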

All partitions on both the NVMe M.2 SSD and the SATA2 SSD are ext4, /sbin/fstrim runs on them daily, and each drive was originally formatted with parted. 10GB on each SSD was left unformatted (over-provisioning). Currently each partition on the NVMe is using less than 50% of its total space.
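A daily fstrim only helps if the kernel actually sees discard support on the devices; a quick sanity check (device names are assumptions, adjust to your system):

```shell
# Non-zero DISC-GRAN / DISC-MAX columns mean the kernel sees
# TRIM/discard support on the device. Device names are assumptions.
if command -v lsblk >/dev/null 2>&1; then
    lsblk --discard /dev/nvme0n1 /dev/sda 2>/dev/null || lsblk --discard || true
else
    echo "lsblk not available"
fi

# Then, as root, "fstrim -v /mountpoint" reports how many bytes each
# run actually trimmed, confirming the daily job is doing real work.
```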

There are no obvious errors that I see in /var/log/messages.
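Besides /var/log/messages, the kernel ring buffer can hold NVMe resets, timeouts, or PCIe AER errors that never reach syslog; a simple sketch of that check:

```shell
# Grep the kernel ring buffer for NVMe trouble, PCIe AER reports, or
# thermal throttling messages that may not appear in /var/log/messages.
dmesg 2>/dev/null | grep -iE 'nvme|aer|throttl' \
    || echo "no matching kernel messages (or dmesg requires root here)"
```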

I have smartd run a short test each week on all drives and no errors are reported.
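Beyond smartd's weekly short test, the NVMe-specific SMART log reports wear and thermal state, either of which can explain a gradual slowdown (thermal throttling, or a drive running low on spare blocks). A hedged sketch (the device name /dev/nvme0 is an assumption; needs root and smartmontools):

```shell
# Pull the wear and thermal attributes from the NVMe SMART/health log.
if command -v smartctl >/dev/null 2>&1; then
    smartctl -a /dev/nvme0 2>/dev/null \
        | grep -iE 'temperature|percentage used|available spare|media|warning' \
        || echo "smartctl could not read /dev/nvme0 (run as root?)"
else
    echo "smartmontools not installed"
fi
```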

Benchmark results:

The SLOW Samsung 970 PRO NVMe M.2 partition on the SuperMicro MB:

=====
sysbench fileio --file-total-size=128G prepare
sysbench fileio --file-total-size=128G --file-test-mode=rndrw --time=120 --max-requests=0 run
sysbench 1.0.15 (using system LuaJIT 2.0.5)

Running the test with following options:
Number of threads: 1
Initializing random number generator from current time

Extra file open flags: (none)
128 files, 1GiB each
128GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...

Threads started!

File operations:
    reads/s:                      93.14
    writes/s:                     62.09
    fsyncs/s:                     199.69

Throughput:
    read, MiB/s:                  1.46
    written, MiB/s:               0.97

General statistics:
    total time:                          120.4662s
    total number of events:              42629

Latency (ms):
         min:                                  0.00
         avg:                                  2.81
         max:                                232.93
         95th percentile:                      8.28
         sum:                             119588.52

Threads fairness:
    events (avg/stddev):           42629.0000/0.00
    execution time (avg/stddev):   119.5885/0.00

=====

The Crucial SATA2 SSD partition on the SuperMicro MB (not the NVMe; this drive does not seem to have a performance issue):

=====
sysbench fileio --file-total-size=128G prepare
sysbench fileio --file-total-size=128G --file-test-mode=rndrw --time=120 --max-requests=0 run
sysbench 1.0.15 (using system LuaJIT 2.0.5)

Running the test with following options:
Number of threads: 1
Initializing random number generator from current time

Extra file open flags: (none)
128 files, 1GiB each
128GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...

Threads started!

File operations:
    reads/s:                      171.96
    writes/s:                     114.64
    fsyncs/s:                     367.84

Throughput:
    read, MiB/s:                  2.69
    written, MiB/s:               1.79

General statistics:
    total time:                          120.0254s
    total number of events:              78423

Latency (ms):
         min:                                  0.01
         avg:                                  1.52
         max:                                 76.46
         95th percentile:                      4.41
         sum:                             119243.66

Threads fairness:
    events (avg/stddev):           78423.0000/0.00
    execution time (avg/stddev):   119.2437/0.00

=====

The random read throughput of the SATA2 SSD is about 1.8 times that of the NVMe SSD on the same system (2.69 vs. 1.46 MiB/s in the combined random read/write test).
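The ratio quoted above can be checked directly from the two read-throughput figures (a trivial sketch using the MiB/s numbers from the two rndrw runs):

```shell
# 2.69 MiB/s (SATA2 SSD) divided by 1.46 MiB/s (NVMe SSD).
awk 'BEGIN { printf "SATA/NVMe read ratio: %.2f\n", 2.69 / 1.46 }'
```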

The same test on the SuperMicro MB's spinning HD:

=====

sysbench fileio --file-total-size=128G prepare
sysbench fileio --file-total-size=128G --file-test-mode=rndrw --time=120 --max-requests=0 run
sysbench 1.0.15 (using system LuaJIT 2.0.5)

Running the test with following options:
Number of threads: 1
Initializing random number generator from current time

Extra file open flags: (none)
128 files, 1GiB each
128GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...

Threads started!

File operations:
    reads/s:                      81.04
    writes/s:                     54.02
    fsyncs/s:                     173.62

Throughput:
    read, MiB/s:                  1.27
    written, MiB/s:               0.84

General statistics:
    total time:                          120.1649s
    total number of events:              36966

Latency (ms):
         min:                                  0.01
         avg:                                  3.24
         max:                                242.17
         95th percentile:                     12.98
         sum:                             119633.22

Threads fairness:
    events (avg/stddev):           36966.0000/0.00
    execution time (avg/stddev):   119.6332/0.00

=====

The NVMe SSD's read and write throughput is only slightly better than the spinning HD's (1.46 vs. 1.27 MiB/s read).

Finally, here is the same test on the NVMe partition, but random reads only (no writes):

=====

sysbench fileio --file-total-size=128G prepare
sysbench fileio --file-total-size=128G --time=120 --max-requests=0 --file-test-mode=rndrd run
sysbench 1.0.15 (using system LuaJIT 2.0.5)

Running the test with following options:
Number of threads: 1
Initializing random number generator from current time

Extra file open flags: (none)
128 files, 1GiB each
128GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Initializing worker threads...

Threads started!

File operations:
    reads/s:                      644.97
    writes/s:                     0.00
    fsyncs/s:                     0.00

Throughput:
    read, MiB/s:                  10.08
    written, MiB/s:               0.00

General statistics:
    total time:                          120.0028s
    total number of events:              77401

Latency (ms):
         min:                                  0.00
         avg:                                  1.54
         max:                                251.64
         95th percentile:                      3.55
         sum:                             119217.50

Threads fairness:
    events (avg/stddev):           77401.0000/0.00
    execution time (avg/stddev):   119.2175/0.00

=====


Much better read performance than the reads in the combined random read/write test (10.08 vs. 1.46 MiB/s).
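The combined test runs synchronous I/O on a single thread with an fsync() every 100 requests, so its numbers are likely dominated by fsync latency rather than raw read speed; notably, all three drives land in the same low single-digit MiB/s range on that test. One way to separate the two effects (a sketch, not from the original post; --file-fsync-freq=0 disables the periodic fsync) would be:

```shell
# Rerun the combined random r/w test with periodic fsync disabled to see
# how much of the NVMe slowdown is fsync latency vs. raw random I/O.
# Guarded: "prepare" creates 128G of test files, so this only runs when
# explicitly requested via RUN_BENCH=1.
if command -v sysbench >/dev/null 2>&1 && [ "${RUN_BENCH:-0}" = "1" ]; then
    sysbench fileio --file-total-size=128G prepare
    sysbench fileio --file-total-size=128G --file-test-mode=rndrw \
             --file-fsync-freq=0 --time=120 --max-requests=0 run
    sysbench fileio --file-total-size=128G cleanup
else
    echo "skipping: sysbench missing or RUN_BENCH not set"
fi
```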

Jagdpanther

Posted 2019-02-18T18:48:00.373


Temperature? smartctl -a /dev/nvme0 | grep Temp – duanev – 2019-10-29T23:26:17.920

No answers