
Given an auto-scaling web server farm that needs access to a very large number of image files, we are currently using Google Cloud Storage with a FUSE folder mounted on each web server so that all servers can access the same set of shared files.
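For reference, the current setup looks roughly like this (the bucket name and mount point below are placeholders, not our actual names):

```shell
# Mount a GCS bucket on each web server via Cloud Storage FUSE.
# "my-image-bucket" and /mnt/images are placeholder names.
gcsfuse my-image-bucket /mnt/images

# Or persist the mount across reboots via /etc/fstab:
# my-image-bucket /mnt/images gcsfuse rw,allow_other
```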

Read performance for publicly shared GCS files is excellent; however, any write operation, whether through gsutil or through FUSE (which of course uses the same API), is painfully slow by comparison. This is also noted here: Google Cloud Storage Fuse vs GlusterFS, pros, cons and costs

We are tempted to set up an NFS server VM with a large disk, but the shared image folder is eventually expected to reach terabyte sizes.

What is the best way to host a very large number of image files on Google Cloud while maintaining fast write performance? Preferably one that scales well.

sean2078

1 Answer


On Google Compute Engine, you can run a single NFS server; the instance type you choose determines the egress cap, which in turn limits write throughput.
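A minimal single-server sketch, assuming a Debian/Ubuntu image; the export path, subnet CIDR, and hostname below are placeholders:

```shell
# On the NFS server VM (Debian/Ubuntu):
sudo apt-get install -y nfs-kernel-server
sudo mkdir -p /export/images
# Allow the web servers' subnet (placeholder CIDR) to mount read-write:
echo '/export/images 10.128.0.0/20(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On each web server, mount the shared export (placeholder hostname):
sudo mount -t nfs nfs-server:/export/images /mnt/images
```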

If you're looking for scalability, you might consider GlusterFS, which scales to hundreds of terabytes, or Avere vFXT, which scales to petabytes of storage and millions of IOPS.
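For illustration, a two-node replicated GlusterFS volume could be set up roughly as follows (hostnames and brick paths are placeholders; a real deployment would size the brick count and replica factor to your capacity and durability needs):

```shell
# On one Gluster node, after installing glusterfs-server on both nodes:
sudo gluster peer probe gluster2

# Create a 2-way replicated volume from one brick per node:
sudo gluster volume create images replica 2 \
    gluster1:/data/brick1/images gluster2:/data/brick1/images
sudo gluster volume start images

# On each web server, mount the volume via the native FUSE client:
sudo mount -t glusterfs gluster1:/images /mnt/images
```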

Marilu