
The main problem is as follows:

Infrastructure:

  • Auto Scaling group (min 1, max 3)

  • RDS

  • ELB

  • ElastiCache (Redis)

  • Elasticsearch

I want to share a volume (EFS / S3) containing my application code; it is about 1.3 GB in size.

With EFS:

On my first attempt I mounted it with the right permissions, uid, gid, umask, etc., and it works, but EFS is really slow, even with the performance mode and the dedicated 10 MB/s of data transfer.

Apache tries to read the content on that path (EFS) and the response is slow as hell.

mount -t nfs4 efs-amazonaws.com:/  /var/www/filesystem/custom/
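
For reference, a fuller version of that mount with the NFS client options the EFS docs recommend looks roughly like this (the filesystem DNS name and region are placeholders, not my real values):

# EFS mount with the recommended NFS client options
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
    fs-XXXXXXXX.efs.us-east-1.amazonaws.com:/ /var/www/filesystem/custom/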

With S3

It works faster than EFS, but the problem is that when Apache reads the content from the bucket (mounted the same way as the EFS), the application fails to connect to its resources, for example MySQL functions.

s3fs bucket-name /var/www/filesystem/custom/ -o allow_other,uid=33,gid=33,mp_umask=002
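
One variation I could also try (not my current setup) is enabling s3fs's local disk cache, which should help with repeated reads of the same files; the cache directory below is just an example path:

# same mount, with a local cache directory added
s3fs bucket-name /var/www/filesystem/custom/ -o allow_other,uid=33,gid=33,mp_umask=002,use_cache=/tmp/s3fs-cache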

The other alternative that I have is:

1. Mount the S3 bucket or EFS at another location on the server.

2. Use lsyncd to replicate the changes from that mount to the real path of the server app (see the sketch below).
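
A rough sketch of step 2, assuming the S3 bucket (or EFS) is mounted at a hypothetical /var/www/s3mount path:

# watch the mounted copy and rsync any change to the real app path
lsyncd -rsync /var/www/s3mount /var/www/filesystem/custom/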

What I need are alternatives for sharing a volume across the instances in my Auto Scaling group.

Thanks!

  • s3fs-fuse is not known for speed, so it's unexpected (at least to me) that it should be faster than EFS, unless EFS is saturated or there's some other problem. As I mentioned, below, please check your `BurstCreditBalance` and other CloudWatch metrics for your EFS filesystem and let us know what that looks like. – Michael - sqlbot Apr 16 '19 at 17:22

1 Answer


EFS performance depends on how much data you have on the volume: the more you store, the higher the performance. That's probably why it's slow with just 1.3 GB.
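
If you want to check whether you are simply running out of burst credits, something along these lines (the filesystem ID and time range are placeholders) shows the BurstCreditBalance metric that the comments mention:

aws cloudwatch get-metric-statistics --namespace AWS/EFS --metric-name BurstCreditBalance \
    --dimensions Name=FileSystemId,Value=fs-XXXXXXXX \
    --start-time 2019-04-15T00:00:00Z --end-time 2019-04-16T00:00:00Z \
    --period 3600 --statistics Average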

You can, however, pay for EFS provisioned IOPS, which will increase the performance at an extra cost.
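
For the record, switching to provisioned throughput can be done with something like this (the filesystem ID and the 10 MiB/s figure are placeholders; adjust to what you actually need):

aws efs update-file-system --file-system-id fs-XXXXXXXX \
    --throughput-mode provisioned --provisioned-throughput-in-mibps 10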

Alternatively, you can simply store a couple of big files (e.g. 10 × 50 GB) to increase the size-related baseline performance of the volume.
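
If you try the big-files route, a rough sketch (the mount point is a placeholder) would be:

# create ten ~50 GB ballast files so the stored data, and with it the baseline throughput, grows
for i in $(seq 1 10); do
  dd if=/dev/zero of=/mnt/efs/ballast-$i bs=1M count=51200
done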

Test both approaches and see how you go.

MLu
  • Hey, thanks for your answer. I have enabled **EFS provisioned IOPS** with 10 MB/s dedicated; the problem is that if I have to create "dummy files" just to make it work faster, that is not the kind of solution I need :/ – sysalam0 Apr 15 '19 at 22:34
  • @Alanmunizrdz as you are probably aware, the strategy of creating dummy files was just a workaround before the [provisioned throughput](https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-efs-now-supports-provisioned-throughput/) was introduced for EFS. The suggestion here seems only to be *try it, just to verify* that it doesn't help -- it shouldn't. The impact of provisioned throughput of 10 MB/s should be equivalent to claiming 200 GB of storage. Please check your `BurstCreditBalance` in CloudWatch. – Michael - sqlbot Apr 16 '19 at 17:09
  • @Alanmunizrdz also note that "performance mode" should probably have been called "distributed performance tradeoff mode" and would likely not be recommended for your use case. Whether it has a negative throughput impact at smaller scale is difficult to say, but it is not supposed to be used unless the `PercentIOLimit` metric on a General Purpose EFS system indicates saturation of whatever exactly that metric is measuring. (The metric, and the mode, appear to be related to internal index IO, rather than file IO.) – Michael - sqlbot Apr 16 '19 at 17:15
  • @MLu a slightly pedantic note, EFS uses the term provisioned *throughput* rather than provisioned *IOPS*. The burst capacity appears to be IO-size agnostic, thus the number of IOs is less significant than the actual number of bytes being written and read. – Michael - sqlbot Apr 16 '19 at 17:18