NTFS write speed really slow (<15MB/s) on Ubuntu

18

8

When copying large files or testing write speed with dd, the maximum write speed I can get is about 12-15 MB/s on drives using the NTFS filesystem. I tested multiple drives (all connected via SATA) which all reach write speeds of 100 MB/s+ on Windows or when formatted with ext4, so it's not an alignment or drive issue.
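For reference, this is roughly how I'm testing (the target directory is just an example; conv=fdatasync makes dd flush to disk before reporting a rate, so the page cache doesn't inflate the figure):

```shell
#!/bin/sh
# Sequential write test; pass the target directory as $1
# (defaults to the current directory).
DIR=${1:-.}
# conv=fdatasync flushes data to disk before dd reports a rate,
# so the figure is not inflated by the page cache.
dd if=/dev/zero of="$DIR/ddtest.bin" bs=1M count=64 conv=fdatasync
rm "$DIR/ddtest.bin"
```

Run it against the NTFS mount point, e.g. `sh ddtest.sh /media/ntfsdisk`.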

top shows high CPU usage for the mount.ntfs process.

AMD dual core processor (2.2 GHz)
Kernel version: 3.5.0-23-generic
Ubuntu 12.04
ntfs-3g version: both 2012.1.15AR.1 (Ubuntu default version) and 2013.1.13AR.2

How can I fix the write speed?

Zulakis

Posted 2013-06-30T17:05:42.630

Reputation: 1 316

1With the same Ubuntu 15.04 laptop, I formatted a 320 GB external hard disk and a 32 GB USB stick to NTFS. Copying 2 GB of pictures to the first one was taking forever (6 hours left estimated after 30 minutes), but to the second one (the USB stick) it only took a minute or two. I did not change any settings between the two. – Nicolas Raoul – 2015-12-15T17:45:00.953

Have you tried testing dd with raw drive access (on the drive or partition, doesn't matter)? Note that testing that way will destroy the filesystem and will lose any data on it. It will bypass the NTFS drivers entirely. – Bob – 2013-06-30T17:17:07.813

Yep I just did, the result is 149MB/s. – Zulakis – 2013-06-30T17:20:57.487

Just out of curiosity I have to ask if this drive is one of those 4k drives and if therefore your filesystem might be unaligned somehow?! – Waxhead – 2013-06-30T18:05:07.143

try bonnie++ and what kernel are you using? uname -r – cybernard – 2013-06-30T18:25:31.650

bonnie++ produced similar results. Read speed is faster than write (about 60 MB/s), still not nearly the possible 150 MB/s though. I added kernel and ntfs-3g versions to my question. – Zulakis – 2013-06-30T18:47:46.550

What options did you try for dd? The block size should be at least 65536, with a large count. If dd runs in less than 2 minutes the sample size is too low. Try: dd if=/dev/sda of=/dev/null bs=65536 count=10000 – cybernard – 2013-06-30T19:23:15.130

dd if=/dev/random of=/dev/sda bs=65536 count=10000 – cybernard – 2013-06-30T19:25:47.610

655360000 bytes (655 MB) copied, 49.2048 s, 13.3 MB/s – Zulakis – 2013-06-30T19:32:44.737

If you double the block size to 131072 and 262144 do the speeds increase at all? – cybernard – 2013-06-30T19:50:51.887

Yes, it increases a little bit, but only by about 1-3 MB/s. Increasing the block size further doesn't increase the speed anymore though. – Zulakis – 2013-06-30T20:14:17.130

How large are the files you're moving? The overhead of file creation will dominate when transferring small files. – HABO – 2013-07-01T12:51:26.967

I am copying single files with sizes between 10-15GB. No small files overhead. Also, when testing with dd (which ultimately writes one single file), the write rates are as bad as when copying. – Zulakis – 2013-07-01T12:53:12.777

4I believe that the free version of NTFS-3G is crippled so that it uses 4 KiB writes with no caching, causing extremely slow write performance on SSDs and USB drives. The company behind the driver suggests buying the commercial version for better performance. Apparently no-one cares enough to actually fix (and if necessary, fork) the open source version because this problem has been around for almost a decade, ever since NTFS-3G was first released. – Tronic – 2014-03-09T02:23:38.623

Answers

18

A previous post was on the right track with the reference provided:

perhaps check here for ideas on what could be causing it. http://www.tuxera.com/community/ntfs-3g-faq/#slow

The original question mentions noticing the issue with large file transfers. In my experience with copying media files or doing backups, the key option in the above FAQ was:

Workaround: using the mount option “big_writes” generally reduces the CPU usage, provided the software requesting the writes supports big blocks.

Simply add the big_writes option, e.g.

sudo mount -o big_writes /dev/<device> /media/<mount_dir>
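To make the option persistent across reboots it can be added to /etc/fstab; a sketch, where the UUID and mount point are placeholders (look up the real UUID with blkid):

```
# /etc/fstab entry enabling big_writes for an ntfs-3g mount
# (UUID and mount point are placeholders)
UUID=0123-4567  /media/ntfsdisk  ntfs-3g  defaults,big_writes  0  0
```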

My Linux NAS with a low-spec CPU now manages NTFS large-file writes about three times faster: it improved from ~17 MB/s to 50 MB/s+. I've even seen it peak at about 90 MB/s in iotop, which is probably near the external drive's capability (a 2.5" USB 3.0 HDD).

From the NTFS-3G man page:

 big_writes
              This option prevents fuse from splitting write buffers  into  4K
              chunks,  enabling  big  write buffers to be transferred from the
              application in a single step (up to some system limit, generally
              128K bytes).

Closing notes:

  • the big_writes option probably won't help a 4K random write benchmark ;-)
  • While Tuxera seems to be reserving the pro NTFS driver for embedded-system partners, Paragon offers an alternative NTFS driver that is free for personal use, called NTFS&HFS for Linux 9.0 Express, plus a professional version. I don't vouch for this product, however; when I tried a previous version (v8.5), I could not get it to work with my Linux kernel version at the time.

JPvRiel

Posted 2013-06-30T17:05:42.630

Reputation: 871

The big_writes option made my disk go from 300 KB/s to 35 MB/s! Thanks! – JosFabre – 2019-05-24T15:01:08.800

10 characters made a world of difference, thank you very much! – João Miguel Brandão – 2019-12-11T09:49:30.113

big_writes was deprecated in 2016, however, 3 years later some distros are still using an even older version of libfuse. – Dmitry Grigoryev – 2019-12-21T16:34:01.780

2

perhaps check here for ideas on what could be causing it. http://www.tuxera.com/community/ntfs-3g-faq/#slow

This sounds a bit like the 'old days' when file I/O did not use DMA by default. It's unlikely these days, but is the BIOS using IDE emulation for the SATA drives? If it is emulating IDE, then it may also be emulating non-DMA mode.
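One quick way to check from a running system (just a sketch; drivers compiled into the kernel rather than loaded as modules won't show up here):

```shell
#!/bin/sh
# If the SATA controller is in AHCI mode the "ahci" driver should be
# active; a pata_* driver instead hints at IDE-compatibility mode.
mods=$(grep -oE 'ahci|pata_[a-z0-9_]+' /proc/modules 2>/dev/null | sort -u)
if [ -n "$mods" ]; then
    echo "$mods"
else
    echo "no ahci/pata modules listed (drivers may be built into the kernel)"
fi
```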

Another potential slowdown is NTFS file compression. Is compression enabled on the folder you are writing to? If it is, any new files created in that folder will be compressed as well.
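If compression does turn out to be enabled, ntfs-3g accepts a nocompression mount option that stops new files from being created compressed; a sketch (the device and mount point are placeholders):

```shell
# Remount with creation of compressed files disabled; existing
# compressed files are still read (decompressed) normally.
# Device and mount point are placeholders.
sudo umount /media/ntfsdisk
sudo mount -t ntfs-3g -o nocompression /dev/sdb1 /media/ntfsdisk
```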

BeowulfNode42

Posted 2013-06-30T17:05:42.630

Reputation: 1 629

How can I test if it is using DMA? Apart from this, I have already tried out all the suggestions on the page. – Zulakis – 2013-07-01T13:53:18.247

Uhm, from what I have read, DMA is only relevant for IDE drives? I am only using SATA drives. – Zulakis – 2013-07-01T13:58:32.040

According to http://en.wikipedia.org/wiki/Serial_ATA#Transport_layer it sounds like DMA is the only option for SATA. Let's find out if this BIOS is using IDE emulation. – BeowulfNode42 – 2013-07-02T03:11:14.477

0

big_writes was deprecated in 2016; the corresponding behavior is always enabled when using libfuse version 3.0.0 or later. On a modern Linux system, poor NTFS performance usually means that:

  • the disk is fragmented
  • NTFS disk compression is enabled
  • inadequate mount options such as sync are used
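The options actually in effect are easy to check against /proc/mounts, where ntfs-3g mounts appear with filesystem type fuseblk:

```shell
#!/bin/sh
# List FUSE block mounts and their options; an explicit "sync" option
# here forces synchronous writes and badly hurts throughput.
grep fuseblk /proc/mounts || echo "no fuseblk (ntfs-3g) mounts found"
```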

Dmitry Grigoryev

Posted 2013-06-30T17:05:42.630

Reputation: 7 505

0

This is an old thread, but for people looking for a solution to the same problem: do you have cpuspeed active? ntfs-3g is CPU-hungry, and in my case cpuspeed mistakenly detected a low load for processes with lots of I/O waits, eventually throttling down the core and starving the driver.

Try disabling cpuspeed (e.g. if it is running as a service) and test again.
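To see whether frequency scaling might be interfering, the active governor can be read from sysfs (the path is standard for the cpufreq subsystem, but absent on systems without it):

```shell
#!/bin/sh
# Print the active frequency governor per CPU where the cpufreq
# interface exists; "ondemand" or "powersave" can under-clock the CPU
# during I/O-bound ntfs-3g workloads.
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    if [ -r "$g" ]; then
        echo "$g: $(cat "$g")"
    fi
done
```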

irisx

Posted 2013-06-30T17:05:42.630

Reputation: 1

How do I determine that cpuspeed is active? Is that a daemon or a setting? – Daniel – 2017-02-01T21:30:36.860

-1

This patch improves write performance for embedded devices: https://www.lysator.liu.se/~nietzsche/ntfs/

Nihilus

Posted 2013-06-30T17:05:42.630

Reputation: 1