
We are planning to migrate to AWS. How do you migrate shared NFS mount points to AWS when they hold our file systems? Is S3 a good choice, or EBS? Is there any other way to do this, and how have people traditionally done it?

chandra

1 Answer


There is the option of EFS as well, which is effectively NFS in AWS with "unlimited" storage.
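For a sense of what provisioning EFS involves, here is a minimal boto3 sketch; the region, subnet ID, and security group ID are placeholders, and the security group must allow NFS (TCP 2049) from your instances. Once the mount target is available, instances mount the file system over NFSv4.1 just like any other NFS export.

```python
# Minimal sketch: create an EFS file system and one mount target with boto3.
# Subnet and security group IDs are placeholders; use values from your VPC.
import time
import boto3

efs = boto3.client("efs", region_name="us-east-1")

# CreationToken makes the create call idempotent.
fs = efs.create_file_system(CreationToken="shared-nfs-migration")
fs_id = fs["FileSystemId"]

# Wait until the file system is available before adding mount targets.
while efs.describe_file_systems(FileSystemId=fs_id)["FileSystems"][0]["LifeCycleState"] != "available":
    time.sleep(5)

# One mount target per Availability Zone subnet that needs access.
efs.create_mount_target(
    FileSystemId=fs_id,
    SubnetId="subnet-0123456789abcdef0",      # placeholder
    SecurityGroups=["sg-0123456789abcdef0"],  # placeholder, must allow TCP 2049
)
```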

cduffin
  • EFS is fairly new, and I heard it's causing latency issues. Were there any other solutions before EFS came into the picture? – chandra Sep 21 '16 at 15:23
  • It really depends on how much effort you are willing to go to and what you're going to be hosting. If the application expects files to be locally accessible across multiple machines, you could look at setting up your own NFS server using an EC2 instance with EBS volumes in an LVM setup (see the sketch after these comments). If you are re-engineering the application, S3 would be my suggestion. One benefit EFS offers is that it has redundancy built in. – cduffin Sep 21 '16 at 15:43
  • @chandra the reason EFS was introduced was that there previously wasn't a solid solution, other than building your own NFS server in EC2 with EBS volumes, which of course has single points of failure. *"I heard it's causing latency issues."* Where did you hear this? – Michael - sqlbot Sep 21 '16 at 19:58
  • @Michael.. I have been going through the AWS dev forums, and many consumers are complaining about latency issues. – chandra Sep 22 '16 at 12:08
  • I see. Yes, there is a small number of posts along those lines, but for now my assumption is that those are most likely to be simple issues, such as using an NFS 4 client that can't speak 4.1, which still works but isn't optimal. I have not experienced any similar issues, and have been using EFS since the preview, so my assumption for the moment is that these are isolated cases. One system does a large number of very small, unaligned, but largely sequential reads from very large files on EFS and it has performed exceptionally well. – Michael - sqlbot Sep 22 '16 at 15:03
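As mentioned in the comments, the traditional alternative was a self-managed NFS server on EC2 backed by EBS volumes. A rough boto3 sketch of provisioning and attaching the volumes is below; the instance ID, Availability Zone, sizes, and device names are placeholders, and the LVM striping plus NFS export configuration still has to be done inside the instance. Note that this design leaves the single EC2 instance as a point of failure, which is exactly what EFS's built-in redundancy avoids.

```python
# Rough sketch: create and attach EBS volumes for a self-managed NFS server
# on EC2. Instance ID, AZ, sizes, and device names are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"  # placeholder NFS server instance

volume_ids = []
for device in ("/dev/sdf", "/dev/sdg"):
    vol = ec2.create_volume(
        AvailabilityZone="us-east-1a",  # must match the instance's AZ
        Size=500,                       # GiB, placeholder
        VolumeType="gp2",
    )
    # Wait for the volume to become available, then attach it.
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    ec2.attach_volume(VolumeId=vol["VolumeId"],
                      InstanceId=instance_id,
                      Device=device)
    volume_ids.append(vol["VolumeId"])

print("Attached volumes:", volume_ids)
```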