Mounting large remote directory using SSHFS


I have a large remote directory with 600 GB of content, split across 6 million MP3 files with no subdirectories.

  1. Will the large number of files in a single directory cause any performance issues with SSHFS?

  2. Would it be better to shard the files into multiple subdirectories based on the first two characters of the filename, similar to how Git stores objects? (The filenames are MD5 hashes.) For example, 098f6bcd4621d373cade4e832627b4f6.mp3 would become 09/8f6bcd4621d373cade4e832627b4f6.mp3, as in the sketch below. Or would that further slow down file reads, since SSHFS needs an extra round trip for each directory level?
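
A minimal sketch of that sharding scheme, purely for illustration; the mount point and helper names below are assumptions, not part of the question:

    import os

    REMOTE_ROOT = "/mnt/audio-cache"  # hypothetical SSHFS mount point

    def sharded_path(md5_name: str) -> str:
        """Map 098f6bcd...b4f6.mp3 to 09/8f6bcd...b4f6.mp3 under the mount."""
        return os.path.join(REMOTE_ROOT, md5_name[:2], md5_name[2:])

    def store(md5_name: str, data: bytes) -> str:
        """Write a file into its shard, creating the shard directory on demand."""
        path = sharded_path(md5_name)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            f.write(data)
        return path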

Update: The directory will be used as a cache for generated audio files. Files will be written to and read from the directory, and a monthly cron job deletes less frequently used files. There won't be any directory scans; files are accessed and deleted by their exact paths, which are stored in a local database.
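
A rough sketch of that read/delete flow, assuming a hypothetical local SQLite index keyed by the MD5 filename (the database name, table, and columns are illustrative, not taken from the question):

    import os
    import sqlite3

    db = sqlite3.connect("cache-index.db")  # hypothetical local database
    db.execute("""CREATE TABLE IF NOT EXISTS files
                  (md5 TEXT PRIMARY KEY, path TEXT, last_used INTEGER)""")

    def read_cached(md5_name):
        """Look up the exact remote path locally, then do a single open() over SSHFS."""
        row = db.execute("SELECT path FROM files WHERE md5 = ?", (md5_name,)).fetchone()
        if row is None:
            return None
        with open(row[0], "rb") as f:
            return f.read()

    def evict(md5_name):
        """Delete one file by its exact path; no directory listing is needed."""
        row = db.execute("SELECT path FROM files WHERE md5 = ?", (md5_name,)).fetchone()
        if row is not None:
            os.remove(row[0])
            db.execute("DELETE FROM files WHERE md5 = ?", (md5_name,))
            db.commit()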

Joyce Babu

Posted 2019-05-27T08:27:11.883

Reputation: 282

Do your programs expect to list/scan the directory, or do they directly access known paths only? (Directory listing is most impacted, even for local access; direct path lookups much less so.) – user1686 – 2019-05-27T08:40:59.653

@grawity There won't be any directory scan. I have updated the question with more details. Thank you. – Joyce Babu – 2019-05-27T09:05:28.317

No answers