Achieve maximum write speed on a hard disk

0

I have an i5 6th-gen processor with 8 GB RAM, a 4 TB secondary hard disk and a 500 GB primary hard disk. The 4 TB disk is a SATA 7200 RPM drive formatted with NTFS. My goal is to write a huge number of files to the disk and then test a compression algorithm on those written files. The files are small: compressed with zlib, they will be around 12-20 KB each. For testing, I wrote a bash script to make 500,000 copies of the same file, but found that only 7-8 files per second were being written to that directory, which is about 100 KB/s, while the advertised speed of the disk is much higher. I want to achieve something like 100 files per second. I don't know what to do. Please suggest how to achieve the highest write speed.
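
For reference, a minimal sketch of the kind of copy loop described above (the file and directory names are placeholders, not taken from the original script):

```bash
#!/bin/bash
# Minimal sketch of the test described in the question: copy one small
# file many times into a target directory and measure files per second.
# "sample.bin" and "/mnt/data/testdir" are placeholder names.
src=sample.bin
dst=/mnt/data/testdir
count=500000

mkdir -p "$dst"
start=$(date +%s)
for i in $(seq 1 "$count"); do
    cp "$src" "$dst/file_$i"
done
end=$(date +%s)
echo "$count files in $((end - start)) s"
```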

M A SIDDIQUI

Posted 2017-08-28T17:30:01.890

Reputation: 101

    don't use NTFS... – Tetsujin – 2017-08-28T17:31:11.583

    Please suggest which format I should use so the disk can be used with Linux as well as Windows. – M A SIDDIQUI – 2017-08-28T17:33:47.650

    I think [though I'm not 100% certain] the only [appropriate] file system they can both use natively would be FAT32. You need to be aware that Windows itself is notoriously slow at file moves, though. – Tetsujin – 2017-08-28T17:40:55.000

    Buy an SSD if you want more IOPS than a 7K2 disk can achieve. – Eugen Rieck – 2017-08-28T17:44:32.353

    @Tetsujin FAT32 has the limitation that a file cannot be larger than 4 GB, while in my case there will be files larger than that. – M A SIDDIQUI – 2017-08-28T17:53:04.147

    @Eugen Yes, an SSD is the better option, but I am not even able to utilize the full speed of SATA. It would be good to utilize the full speed first; if I then want more, an SSD will be the better choice. – M A SIDDIQUI – 2017-08-28T17:54:38.153

    compressing files over 4GB is very probably not worth the effort – Tetsujin – 2017-08-28T17:54:39.383

    This has nothing to do with SATA. It is about the roughly 300 IOPS such a workload needs (each small-file create costs several I/Os for metadata, journal and data), which a 7K2 mechanical disk simply can't give you. – Eugen Rieck – 2017-08-28T18:24:42.537

Answers

2

Your bottleneck is the filesystem, not the disk. How well a filesystem (and its implementation) scales on file operations (creation/deletion/etc.) varies greatly, depending on implementation and design. You would likely achieve significantly better throughput writing to a single file sequentially instead of writing the same amount of data to many separate files, which requires a lot of filesystem operations (open/create), as the sketch below illustrates.
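
As a rough illustration of this difference, one could time a single sequential write against the same amount of data spread over many small files (the mount point and sizes below are assumptions):

```bash
# Same amount of data (~1.6 GB) written two ways; /mnt/data is an
# assumed mount point for the test disk.

# 1) One big sequential file: measures raw streaming throughput.
dd if=/dev/zero of=/mnt/data/big.bin bs=1M count=1600 conv=fsync

# 2) 100000 files of 16 KB each: dominated by per-file open/create work.
mkdir -p /mnt/data/small
time for i in $(seq 1 100000); do
    head -c 16384 /dev/zero > "/mnt/data/small/f_$i"
done
```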

If you must do lots of file operations, you need to choose a filesystem which scales better on Linux than NTFS does. XFS and ext4 are solid choices with good performance.
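
A minimal sketch of moving the disk to ext4 (or XFS), assuming it appears as /dev/sdb1; note this erases the partition:

```bash
# WARNING: mkfs destroys existing data. /dev/sdb1 is an assumed device
# name; verify with lsblk before running.
sudo mkfs.ext4 /dev/sdb1        # or: sudo mkfs.xfs /dev/sdb1
sudo mkdir -p /mnt/data
sudo mount /dev/sdb1 /mnt/data
```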

There are plenty of benchmarks comparing the differences in performance which point out the same.

sebasth

Posted 2017-08-28T17:30:01.890

Reputation: 670

1

If you write small files, you are mostly testing the speed at which the filesystem can open/close files (and possibly some head-move latency). And by using NTFS on Linux you are not using the best-performing filesystem around. If you want to speed-test your algorithm, use a native filesystem (ext4...) and big files. Then, if you get slower results on NTFS, you will know where they come from.
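
A minimal sketch of that comparison, assuming the NTFS and ext4 volumes are mounted at /mnt/ntfs and /mnt/ext4:

```bash
# Write the same 4 GB file to each mount point and compare the
# throughput dd reports. Both mount points are assumed names.
for mnt in /mnt/ntfs /mnt/ext4; do
    echo "== $mnt =="
    dd if=/dev/zero of="$mnt/big.bin" bs=1M count=4096 conv=fsync
    rm "$mnt/big.bin"
done
```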

xenoid

Posted 2017-08-28T17:30:01.890

Reputation: 7552