Faster way to mount a remote file system than sshfs?

77

25

I have been using sshfs to work remotely, but it is really slow and annoying, particularly when I use Eclipse on it.

Is there any faster way to mount the remote file system locally? My No. 1 priority is speed.

The remote machine is Fedora 15 and the local machine is Ubuntu 10.10. I can also use Windows XP locally if necessary.

Vendetta

Posted 2011-10-08T02:36:44.757

Reputation: 1 571

Answers

18

sshfs uses the SSH file transfer protocol, which means everything is encrypted.

If you just mount via NFS instead, it is of course faster, because the traffic is not encrypted.

Are you trying to mount volumes on the same network? Then use NFS.
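For example, a minimal sketch; the export path, network range, and hostname are placeholders, and it assumes the NFS server packages are installed and running on the Fedora side:

# On the remote (Fedora) machine: add an export line to /etc/exports, e.g.
#   /home/you/project  192.168.1.0/24(rw,sync,no_subtree_check)
# then reload the exports:
sudo exportfs -ra
# On the local (Ubuntu) machine, mount it:
sudo mkdir -p /mnt/project
sudo mount -t nfs fedora-host:/home/you/project /mnt/project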

Tilo

Posted 2011-10-08T02:36:44.757

Reputation: 382

35It's not slow because of the encryption, it's slow because it's FUSE and it keeps checking the file system state. – w00t – 2013-05-19T13:40:42.690

3@w00t I'm not convinced it's FUSE slowing it down rather than the encryption. Changing the encryption to arcfour sped it up for me, whereas using scp was just as slow as sshfs. – Sparhawk – 2013-09-28T04:57:49.053

21@Sparhawk there's a difference between throughput and latency. FUSE gives you pretty high latency because it has to check the filesystem state a lot using some pretty inefficient means. arcfour gives you good throughput because the encryption is simpler. In this case latency is most important because that's what causes the editor to be slow at listing and loading files. – w00t – 2013-09-29T11:16:40.123

3@w00t. Ah okay. Good points. – Sparhawk – 2013-09-29T12:42:08.293

45

If you need to improve the speed for sshfs connections, try these options:

oauto_cache,reconnect,defer_permissions,noappledouble,nolocalcaches,no_readahead

The command would be:

sshfs remote:/path/to/folder local -oauto_cache,reconnect,defer_permissions
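Per the comments below, defer_permissions and noappledouble are macOS-only options; on Linux a trimmed-down variant might look like this (host and paths are placeholders):

sshfs user@remote:/path/to/folder ~/mnt -o auto_cache,reconnect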

Meetai.com

Posted 2011-10-08T02:36:44.757

Reputation: 611

1Thanks, worked for me! Had to remove defer_permissions though (unknown option). – Mathieu Rodic – 2015-03-10T11:35:43.203

4Won't nolocalcaches decrease performance by forcing lookups every operation? Does this contradict auto_cache? – earthmeLon – 2015-06-15T18:13:26.927

The way I read the docs, nolocalcaches only disables the kernel side of things, sshfs still has its own cache. I could imagine that the kernel level checks are tuned for "real" file systems and as such more extensive. On the sshfs side "cache_timeout" looks promising, too. Here's a list: http://www.saltycrane.com/blog/2010/04/notes-sshfs-ubuntu/ ... lots of good stuff. :-)

– Someone – 2015-10-29T17:21:07.623

2nolocalcaches and defer_permissions don't seem valid (anymore?) on Debian Jessie. – Someone – 2015-10-29T17:31:24.873

I find that "kernel_cache" is faster than "auto_cache", but afaik it assumes exclusive access, so only use it if nothing else is changing that data. – Someone – 2016-05-30T14:52:50.610

4Why no_readahead? – studgeek – 2016-08-09T00:40:59.357

1What do you mean by "oauto_cache"? – ManuelSchneid3r – 2017-03-15T14:35:02.087

1Removed 'defer_permissions' as I think that is Mac-specific, not Linux. – Elijah Lynn – 2018-02-10T09:35:39.053

1@ManuelSchneid3r I know it's a bit late, but it's the same as -o auto_cache, because the option and its parameter do not need to be separated by a space. – Abandoned Cart – 2019-04-30T03:46:17.223

On macOS Catalina, the local folder just disappears when it gets mounted and you can't see anything in it when trying to ls it: "No such file or directory". Unmount it and the folder becomes visible again. Any thoughts? – LewlSauce – 2019-12-26T18:49:04.167

The defer_permissions option fixes some issues with translating filesystem permissions when mounting an SSH filesystem from macOS, but the option does not exist in Linux. – ThankYee – 2020-02-12T20:24:19.397

20

Besides the already proposed solutions of using Samba/NFS, which are perfectly valid, you could also get some speed boost while sticking with sshfs by using quicker encryption (authentication would be as safe as usual, but the transferred data itself would be easier to decrypt) by supplying the -o Ciphers=arcfour option to sshfs. It is especially useful if your machine has a weak CPU.
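A minimal sketch; host and paths are placeholders, and note that arcfour has been removed from newer OpenSSH releases, so the server may reject it:

sshfs -o Ciphers=arcfour user@remote:/path/to/project ~/mnt/project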

aland

Posted 2011-10-08T02:36:44.757

Reputation: 2 644

4

The chacha20-poly1305@openssh.com cipher is also an option worth considering now arcfour is obsolete. Chacha20 is faster on ARM processors than AES but far worse on x86 processors with AES instructions (which all modern desktop CPUs have as standard these days). https://klingt.net/blog/ssh-cipher-performance-comparision/

You can list supported ciphers with "ssh -Q cipher"

– TimSC – 2017-11-20T20:48:26.637

-oCipher=arcfour made no difference in my tests with a 141 MB file created from random data. – Sparhawk – 2013-09-28T04:39:14.167

6That's because there were multiple typos in the command. I've edited it. I noticed a 15% speedup from my raspberry pi server. (+1) – Sparhawk – 2013-09-28T04:56:14.077

14

I do not have any alternatives to recommend, but I can provide suggestions for how to speed up sshfs:

sshfs -o cache_timeout=115200 -o attr_timeout=115200 ...

This should avoid some of the round trip requests when you are trying to read content or permissions for files that you already retrieved earlier in your session.

sshfs simulates deletes and changes locally, so new changes made on the local machine should appear immediately, despite the large timeouts, as cached data is automatically dropped.

But these options are not recommended if the remote files might be updated without the local machine knowing, e.g. by a different user, or a remote ssh shell. In that case, lower timeouts would be preferable.

Here are some more options I experimented with, although I am not sure if any of them made a difference:

sshfs_opts="-o auto_cache -o cache_timeout=115200 -o attr_timeout=115200   \
-o entry_timeout=1200 -o max_readahead=90000 -o large_read -o big_writes   \
-o no_remote_lock"
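A usage sketch assuming the variable above, with a placeholder host and paths (the variable is left unquoted deliberately so the shell splits it into separate options):

sshfs $sshfs_opts user@remote:/path/to/project ~/mnt/project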

You should also check out the options recommended by Meetai in his answer.

Recursion

The biggest problem in my workflow is when I try to read many folders, for example in a deep tree, because sshfs performs a round trip request for each folder separately. This may also be the bottleneck that you experience with Eclipse.

Making requests for multiple folders in parallel could help with this, but most apps don't do that: they were designed for low-latency filesystems with read-ahead caching, so they wait for one file stat to complete before moving on to the next.

Precaching

But something sshfs could do would be to look ahead at the remote file system, collect folder stats before I request them, and send them to me when the connection is not immediately occupied. This would use more bandwidth (from lookahead data that is never used) but could improve speed.

We can force sshfs to do some read-ahead caching, by running this before you get started on your task, or even in the background when your task is already underway:

find project/folder/on/mounted/fs > /dev/null &

That should pre-cache all the directory entries, reducing some of the later overhead from round trips. (Of course, you need to use the large timeouts like those I provided earlier, or this cached data will be cleared before your app accesses it.)

But that find will take a long time. Like other apps, it waits for the results from one folder before requesting the next one.

It might be possible to reduce the overall time by asking multiple find processes to look into different folders. I haven't tested to see if this really is more efficient. It depends whether sshfs allows requests in parallel. (I think it does.)

find project/folder/on/mounted/fs/A > /dev/null &
find project/folder/on/mounted/fs/B > /dev/null &
find project/folder/on/mounted/fs/C > /dev/null &

If you also want to pre-cache file contents, you could try this:

tar c project/folder/on/mounted/fs > /dev/null &

Obviously this will take much longer, will transfer a lot of data, and requires you to have a huge cache size. But when it's done, accessing the files should feel nice and fast.

joeytwiddle

Posted 2011-10-08T02:36:44.757

Reputation: 1 346

4

After some searching and trial and error, I found that adding -o Compression=no speeds it up a lot. The delay may be caused by the compression and decompression process. Besides that, using 'Ciphers=aes128-ctr' seems faster than other ciphers; some posts have run experiments on this. My command then looks something like this:

sshfs -o allow_other,transform_symlinks,follow_symlinks,IdentityFile=/Users/maple/.ssh/id_rsa -o auto_cache,reconnect,defer_permissions -o Ciphers=aes128-ctr -o Compression=no maple@123.123.123.123:/home/maple ~/mntpoint

maple

Posted 2011-10-08T02:36:44.757

Reputation: 140

4

SSHFS is really slow because it transfers the file contents even when it does not have to (for example when doing cp). I reported this upstream and to Debian, but got no response :/
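A hedged workaround sketch: if both source and destination live on the remote machine, do the copy there over ssh so the data never has to travel through the mount (host and paths are placeholders):

ssh user@remote 'cp -a /path/to/source /path/to/destination'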

Daniel Milde

Posted 2011-10-08T02:36:44.757

Reputation: 41

3It is efficient with mv. Unfortunately when you run cp locally, FUSE only sees requests to open files for reading and writing. It does not know that you are making a copy of a file. To FUSE it looks no different from a general file write. So I fear this cannot be fixed unless the local cp is made more FUSE-aware/FUSE-friendly. (Or FUSE might be able to send block hashes instead of entire blocks when it suspects a cp, like rsync does, but that would be complex and might slow other operations down.) – joeytwiddle – 2016-09-08T05:00:07.857

2

I found that turning off my zsh theme's git file status checking helped enormously: just entering the directory was taking 10+ minutes. Likewise, turning off git status checkers in Vim.

bloke_zero

Posted 2011-10-08T02:36:44.757

Reputation: 121

Wow, this is a really good tip! – Dmitri – 2019-08-07T05:16:11.250

2

NFS should be faster. How remote is the filesystem? If it's over the WAN, you might be better off just syncing the files back and forth, as opposed to direct remote access.
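For example, a minimal syncing sketch with rsync (host and paths are placeholders): pull a copy, work on it locally, then push the changes back:

rsync -az user@remote:/path/to/project/ ~/project/
# ... work locally ...
rsync -az ~/project/ user@remote:/path/to/project/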

Adam Wagner

Posted 2011-10-08T02:36:44.757

Reputation: 121

1

Either NFS or Samba if you have large files. Using NFS with something like 720p movies and the like is really a PITA. Samba will do a better job, though I dislike Samba for a number of other reasons and I wouldn't usually recommend it.

For small files, NFS should be fine.
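For reference, a minimal CIFS client mount sketch; server, share, and username are placeholders, it requires cifs-utils on Ubuntu, and it assumes a share is already configured on the server:

sudo mount -t cifs //server/share /mnt/share -o username=you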

Franz Bettag

Posted 2011-10-08T02:36:44.757

Reputation: 239

-4

Log in as root.

Go to the top-level directory with "cd /".

Then ensure that you have a mount folder, or create one with "mkdir folder_name".

After that, simply use "mount x.x.x.x:/remote_mount_directory /local_mount_directory".

If everything worked on your end, you should now have a successful mount. You might want to check that the remote directory is actually shared by using the "exportfs" command, to guarantee it can be found.
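The steps above, condensed into commands (the IP address and directory names are placeholders):

mkdir /local_mount_directory
mount x.x.x.x:/remote_mount_directory /local_mount_directory
# On the remote machine, list the current exports to confirm the directory is shared:
exportfs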

Hope this helps. This is not from a live environment; it has been tested on a LAN using VMware and Fedora 16.

JasonN

Posted 2011-10-08T02:36:44.757

Reputation: 1

5This does not answer the question… – Léo Lam – 2015-05-01T15:15:35.397