31

I have a server that is outside of AWS. I'd like to be able to mount an EFS volume to it, but I am not sure if that is possible.

Perhaps if you create a VPC and tunnel in over a VPN?

Does anybody know if this is possible?

Adam
  • It's definitely possible... I've been using EFS from outside AWS over a TLS tunnel for a while now... but there is a bit of a "trick" that I believe you'll need to implement in order to make it work. I'll confirm whether the way I'm doing it is actually necessary (it's been a while since I set it up) or whether it works without it, and I'll post an answer once I can confirm. – Michael - sqlbot Aug 25 '16 at 19:02
  • EFS is meant to be a shared file system for multiple EC2 instances. Externally you should consider using S3 (which is similar to a file system, though it's really an object store) or perhaps a small EC2 instance with an EBS volume. Either would likely be cheaper than EFS - EBS on SSD is 1/3 the price of EFS, EBS on magnetic is 1/6 the cost of EFS, and S3 is 1/10th the cost of EFS. What exactly are you trying to achieve that makes EFS the best option? – Tim Aug 25 '16 at 19:27
  • I thought that because it is called ELASTIC file system it would be easy to connect to from outside of AWS. Also - if I wanted to back up files to somewhere outside of AWS, it would be hard if not impossible to do from S3. From EFS I can just mount it on an EC2 instance and perform the backup. But if they both require a VPN, I guess it makes very little difference... – Adam Aug 25 '16 at 19:37
  • S3 is easily accessible from outside AWS, by design, much easier for integration / backup / anything really - super flexible. EFS is designed as a shared file system between EC2 instances, so will likely be more difficult to use outside AWS, probably requiring an EC2 instance as a proxy. Neither require a VPN. Suggest you need to discuss your use cases with someone qualified / experienced rather than making assumptions and jumping in. – Tim Aug 25 '16 at 21:31

2 Answers

48

Important updates:

In October, 2018, AWS expanded the capabilities of the network technology underpinning EFS so that it now natively works across managed VPN connections and cross-region VPC peering, without resorting to the proxy workaround detailed below.

https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-efs-now-supports-aws-vpn-and-inter-region-vpc-peering/

EFS added support for connectivity via AWS Direct Connect circuits in late 2016.

https://aws.amazon.com/blogs/aws/amazon-efs-update-on-premises-access-via-direct-connect-vpc/


Comments have raised some interesting issues, since in my initial reading of the question, I may have assumed more familiarity with EFS than you have.

So, first, a bit of background:

The "Elastic" in Elastic File System refers primarily to the automatic scaling of storage space and throughput -- not external access flexibility.

EFS does not seem to have any meaningful limits on the amount of data you can store. The documented maximum size of any single file on an EFS volume is 52,673,613,135,872 bytes (≈47.9 TiB). Most of the other limits are similarly generous.

EFS is particularly "elastic" in the way it is billed. Unlike filesystems on EBS volumes, space is not preallocated on EFS, and you only pay for what you store on an hourly average basis. Your charges grow and shrink (they're "elastic") based on how much you've stored. When you delete files, you stop paying for the space they occupied within an hour. If you store 1 GB for 750 hours (≅1 month) and then delete it, or if you store 375 GB for 2 hours and then delete it, your monthly bill would be the same... $0.30. This is of course quite different from EBS, which will happily bill you $37.50 for storing 375 GB of 0x00 for the remaining hours in the month.
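To sanity-check that arithmetic (a quick back-of-envelope version, using this answer's example figures of $0.30/GB-month and a 750-hour month):

$ echo '1   * 750 / 750 * 0.30' | bc -l   # 1 GB for 750 hours  -> .30
$ echo '375 * 2   / 750 * 0.30' | bc -l   # 375 GB for 2 hours  -> .30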

S3's storage pricing model is much the same as EFS's, since billing for storage stops as soon as you delete an object, and the cost is ~1/10 that of EFS, but as I and others have mentioned many times, S3 is not a filesystem. Utilities like s3fs-fuse attempt to provide an "impedance bridge," but there are inherent difficulties in trying to treat something that isn't truly a filesystem as though it were (eventual consistency for overwrites being not the least of them). So, if a real "filesystem" is what you need, and it's for an application where access needs to be shared, or the storage space required is difficult to determine, or you want it to scale on demand, EFS may be useful.

And, it looks cool when you have 8.0 EiB of free space.

$ df -h | egrep '^Filesystem|efs'
Filesystem                                            Size  Used Avail Use% Mounted on
us-west-2a.fs-5ca1ab1e.efs.us-west-2.amazonaws.com:/  8.0E  121G  8.0E   1% /srv/efs/fs-5ca1ab1e
us-west-2a.fs-acce55ed.efs.us-west-2.amazonaws.com:/  8.0E  7.2G  8.0E   1% /srv/efs/fs-acce55ed

But it is, of course, important to use the storage service most appropriate to your applications. Each of the options has its valid use cases. EFS is probably the most specialized of the storage solutions offered by AWS, having a narrower set of use cases than EBS or S3.


But can you use it from outside the VPC?

The official answer is No:

Mounting a file system over VPC private connectivity mechanisms such as a VPN connection, VPC peering, and AWS Direct Connect is not supported.

http://docs.aws.amazon.com/efs/latest/ug/limits.html

EFS is currently limited to only EC2 Linux access only. That too within the VPC. More features would be added soon. You can keep an eye on AWS announcements for new features launched.

https://forums.aws.amazon.com/thread.jspa?messageID=732749

However, the practical answer is Yes, even though this isn't an officially supported configuration. To make it work, some special steps are required.

Each EFS filesystem is assigned endpoint IP addresses in your VPC using elastic network interfaces (ENI), typically one per availability zone, and you want to be sure you mount the one in the availability zone matching the instance, not only for performance reasons, but also because bandwidth charges apply when transporting data across availability zone boundaries.
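For example, from an instance in us-east-1b, mounting the matching-AZ endpoint of the example filesystem used later in this answer would look something like this (/mnt/efs is an arbitrary mount point, and the options are the NFSv4.1 settings commonly recommended for EFS):

$ sudo mkdir -p /mnt/efs
$ sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 us-east-1b.fs-8d06f00d.efs.us-east-1.amazonaws.com:/ /mnt/efs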

The interesting thing about these ENIs is that they do not appear to use the route tables for the subnets to which they are attached. They seem to be able to respond only to instances inside the VPC, regardless of security group settings (each EFS filesystem has its own security group to control access).

Since no external routes are accessible, I can't access the EFS endpoints directly over my hardware VPN... so I turned to my old pal HAProxy, which indeed (as @Tim predicted) is necessary to make this work. It's a straightforward configuration, since EFS uses only TCP port 2049.

I'm using HAProxy on a t2.nano (HAProxy is very efficient), with a configuration that looks something like this:

# NFS uses a single TCP port, 2049, so one tcp-mode listener suffices
listen fs-8d06f00d-us-east-1
    bind :2049
    mode tcp
    option tcplog
    # allow tunneled connections to idle for up to 5 minutes (300000 ms)
    timeout tunnel 300000
    # primary: the endpoint in this instance's own availability zone (us-east-1b)
    server fs-8d06f00d-us-east-1b us-east-1b.fs-8d06f00d.efs.us-east-1.amazonaws.com:2049 check inter 60000 fastinter 15000 downinter 5000
    # backups: used only if the us-east-1b endpoint fails its health check
    server fs-8d06f00d-us-east-1c us-east-1c.fs-8d06f00d.efs.us-east-1.amazonaws.com:2049 check inter 60000 fastinter 15000 downinter 5000 backup
    server fs-8d06f00d-us-east-1d us-east-1d.fs-8d06f00d.efs.us-east-1.amazonaws.com:2049 check inter 60000 fastinter 15000 downinter 5000 backup

This server is in us-east-1b so it uses the us-east-1b endpoint as primary, the other two as backups if the endpoint in 1b ever fails a health check.

If you have a VPN into your VPC, you then mount the volume using the IP address of this proxy instance as the target (instead of using the EFS endpoint directly), and voilà, you have mounted the EFS filesystem from outside the VPC.
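On the external machine, that mount might look something like this (10.0.0.100 is a hypothetical private IP for the HAProxy instance, reachable over the VPN; substitute your own):

# mount EFS from outside the VPC by pointing at the proxy, not an EFS endpoint
$ sudo mkdir -p /mnt/efs
$ sudo mount -t nfs4 -o nfsvers=4.1,hard,timeo=600,retrans=2 10.0.0.100:/ /mnt/efs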

I've mounted it successfully on external Ubuntu machines as well as Solaris¹ servers (where EFS has proven very handy for hastening their decommissioning by making it easier to migrate services away from them).

For certain situations, like moving data into AWS or running legacy and cloud systems in parallel on specific data during a migration, EFS seems like a winner.

Of course, the legacy systems, having higher round-trip times, will not perform as well as EC2 instances, but that's to be expected -- there aren't exceptions to the laws of physics. In spite of that, EFS and the HAProxy gateway seem to be a stable solution for making it work externally.

If you don't have a VPN, then a pair of HAProxy machines, one in AWS and one in your data center, can also tunnel EFS over TLS, wrapping each individual EFS TCP connection in TLS for transport across the Internet. Not technically a VPN, but encrypted tunneling of connections. This also seems to perform quite well.
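A minimal sketch of the data-center side of such a tunnel, assuming the AWS-side HAProxy terminates TLS on port 2049 and forwards plain TCP to the EFS endpoint as shown above (the hostname and certificate paths are placeholders, and this fragment is appended to an existing config):

$ sudo tee -a /etc/haproxy/haproxy.cfg <<'EOF'
listen efs-tls-client
    bind 127.0.0.1:2049
    mode tcp
    option tcplog
    timeout tunnel 300000
    # aws-proxy.example.com stands in for the AWS-side proxy's public address;
    # "ssl verify required" wraps each connection in TLS and checks its certificate
    server aws-proxy aws-proxy.example.com:2049 ssl verify required ca-file /etc/haproxy/ca.pem
EOF

Local machines then mount 127.0.0.1:/ exactly as in the VPN case above.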


¹Solaris 10 is (not surprisingly) somewhat broken by default -- initially, root didn't appear to have special privileges -- files on the EFS volume created by root are owned by root but can't be chowned to another user from the Solaris machine (Operation not permitted), even though everything works as expected from Ubuntu clients. The solution, in this case, is to defeat the NFS ID mapping daemon on the Solaris machine using svcadm disable svc:/network/nfs/mapid:default. Stopping this service makes everything work as expected. Additionally, the invocation of /usr/sbin/quota on each login needs to be disabled in /etc/profile. There may be better or more correct solutions, but it's Solaris, so I'm not curious enough to investigate.
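For reference, the Solaris-side fix boils down to the following (run as root; the exact quota invocation in /etc/profile varies by system, so that part is shown as a comment):

# stop the NFSv4 ID-mapping daemon so uid/gid values pass through numerically
svcadm disable svc:/network/nfs/mapid:default
# ...then comment out the /usr/sbin/quota invocation in /etc/profile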

Michael - sqlbot
0

As of 20 Dec 2016, Amazon announced support for mounting EFS filesystems on on-premises servers over AWS Direct Connect. So, basically, there is now a native feature which allows you to use AWS EFS outside the VPC.

As a prerequisite, you will have to enable and establish the AWS Direct Connect connection, and then use nfs-utils just as you would when mounting EFS from within an EC2 instance.
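A sketch of what that looks like on the on-premises server once the Direct Connect link is up (10.0.1.25 is a hypothetical mount-target IP in your VPC; the AWS walkthrough mounts by mount-target IP address, since the EFS DNS names don't resolve outside the VPC):

$ sudo yum install -y nfs-utils     # Debian/Ubuntu: sudo apt-get install nfs-common
$ sudo mkdir -p /mnt/efs
$ sudo mount -t nfs4 -o nfsvers=4.1,hard,timeo=600,retrans=2 10.0.1.25:/ /mnt/efs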

More information can be found in Amazon EFS Update – On-Premises Access via Direct Connect. I'm posting this, as I had searched for this feature too, so that others are aware there is a native solution for EFS connectivity outside the VPC.

Alan Kis