Why does my hard drive perform well in benchmarks but slow in applications?

2

My hard drive has been acting slow lately, taking several seconds to load an application. It's the Hitachi HTS723232L9SA62 (7200 RPM). I ran a benchmark and the speeds were in the 70 MB/s range. However, when I open an application (e.g. Google Chrome) and run iotop (I use Linux), it reports hard drive utilization of around 500 KB/s to 1 MB/s. Sometimes it peaks around 15 MB/s for a few seconds, then drops back to 1-2 MB/s. There are no SMART errors.
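In case it is useful, this is roughly how I was watching the disk; iotop's batch mode prints a few samples of per-process and total disk read/write (the sample count below is arbitrary):

    # print 5 one-second samples, only showing processes that actually do I/O
    sudo iotop -o -b -n 5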

How is this possible? Why do benchmarks show my hard drive as roughly 60 times faster than the speeds I am actually getting? How can speeds be this low? This sounds like fragmentation, but I am using Linux with an ext2 file system, so fragmentation should be minor or nonexistent. I have over 200 GB free out of the 320 GB hard drive.
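(As a fragmentation sanity check, filefrag can report how many extents a given file occupies on an ext2/ext3 filesystem; the Chrome binary path below is only an example and may differ on your system:)

    # show how many extents (fragments) a file occupies on disk
    sudo filefrag -v /opt/google/chrome/chrome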

My computer specs

  • Thinkpad x61
  • Intel Core2Duo 2.00GHz
  • 2GB DDR2-667 Memory
  • Hitachi HTS723232L9SA62 320GB 7200RPM Internal Hard Drive

Thanks for your help!

Albert Z.

Posted 2015-06-18T03:31:29.287

Reputation: 91

Two words: "bad sectors". Your benchmark writes to good sectors, while your applications already sit on the disk and have to be read from wherever they are. Replace the HDD and rejoice. Based on its size, the HDD is old - older than most paintings in a museum, adjusted for age. – Ramhound – 2015-06-18T03:40:26.433

@Ramhound Is there any way to probe for these bad sectors? I checked and there were no SMART errors. Does this mean no bad sectors? – Albert Z. – 2015-06-18T05:05:11.590
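(For probing this directly, the SMART attribute table and a non-destructive surface scan can be checked from Linux; the device name below is an assumption:)

    # look at Reallocated_Sector_Ct, Current_Pending_Sector, Offline_Uncorrectable
    sudo smartctl -A /dev/sda
    # read-only surface scan (badblocks is non-destructive in its default mode)
    sudo badblocks -sv /dev/sda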

With 2 GB of memory, is it possible that at that time (e.g. having Chrome open with many tabs) you are already paging out to the same disk you are reading from? The usual HD benchmarks do not test program launch and paging to disk, although there are tests that will. For the most part, HD benchmarks are small-footprint programs doing only specific I/O, mostly unimpeded by everything else going on, to test only the HD itself (plus controller and bus). – Psycogeek – 2015-06-18T09:06:21.330
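(A quick way to check whether paging is happening during the slow period, assuming the standard procps/util-linux tools are installed:)

    # si/so columns show pages swapped in/out per second while you launch the app
    vmstat 1 10
    # current memory and swap usage
    free -m
    swapon -s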

Answers

0

In a benchmark, you write a contiguous stream of sectors and then read that stream back. When launching an application, the head seeks all over the disk loading many, many small files - this is your seek time.
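(One way to see this difference directly is ioping, if it is installed: -R runs a random request-rate test and -RL the sequential equivalent. The device name is a guess:)

    # small random requests - dominated by seek time, expect well under 1 MB/s
    sudo ioping -R /dev/sda
    # sequential requests - should be close to the benchmark figure
    sudo ioping -RL /dev/sda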

To reduce this, I recommend a defragmentation.

The system will also load something into memory (the 15 MB/s peak) and then actually process it once it is loaded (the drop back to almost nothing).

In the Windows Task Manager you can view your HDD statistics.
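(On Linux, iostat from the sysstat package gives an equivalent per-device view; the %util and await columns show how busy the disk is and how long requests wait:)

    # extended per-device statistics, refreshed every second
    iostat -dx 1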

Matt Clark

Posted 2015-06-18T03:31:29.287

Reputation: 1 819

Thank you for your prompt response. I am running Linux (3.19.0-20) with an ext2 filesystem, so defragmentation probably is not the solution. Could this be due to a bottleneck somewhere else? – Albert Z. – 2015-06-18T04:16:25.117

0

It is hard to answer without knowing your benchmarking software and how comprehensive and "real world" its tests are. A synthetic benchmark may measure activity that does not really occur during normal tasks; for example, it may read and write one contiguous stretch of disk, which is often not the case in practice. Maybe try another benchmarking package and see how it goes from there.
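(As a rough sketch of a more application-like test, fio can issue random 4 KiB reads against a scratch file instead of one long sequential stream; the file path and size here are arbitrary:)

    fio --name=randread --filename=/tmp/fio.test --size=1G \
        --rw=randread --bs=4k --direct=1 --runtime=30 --time_based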

Frankeex

Posted 2015-06-18T03:31:29.287

Reputation: 101

I used 'dd' for my benchmark according to the instructions here. – Albert Z. – 2015-06-18T16:24:17.670
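(For comparison, a typical dd-based test like that only measures the sequential throughput of one large stream, which is exactly the case where a 7200 RPM drive looks fast; the device name below is an assumption:)

    # sequential read of the raw device, bypassing the page cache
    sudo dd if=/dev/sda of=/dev/null bs=1M count=1024 iflag=direct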