
I have a shared WordPress hosting server with approximately 50 sites on it, fully configured via Ansible.

I'm trying to find a good boundary between

a) updating AMIs every time a site is added or configuration is changed (for example, changes affecting virtual hosts and PHP-FPM pool files)

b) running some of these configuration changes with a custom startup script, and simply giving the instance more time at startup before it receives traffic from the load balancer

Currently there is a single instance, and auto scaling is unlikely to be needed for a while to cope with large swings in traffic. Rather, the demand is for auto scaling to automatically replace the instance should it fail out of hours.

Based on my current scale, the best option for cost management would be to run one c5.large at night and schedule auto scaling up to two c5.large instances during the day. This would give me the added benefit of multi-AZ reliability at the same cost as a single c5.xlarge.
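The day/night pattern described above maps onto two scheduled scaling actions on the Auto Scaling group. A minimal boto3 sketch, assuming a group named `wordpress-asg` (the group name, action names and UTC times here are placeholders for illustration, not anything from the question):

```python
def schedule_params(asg_name):
    """Build the two scheduled actions: scale out in the morning,
    scale back in at night. Recurrence is a cron expression in UTC."""
    return [
        {
            "AutoScalingGroupName": asg_name,
            "ScheduledActionName": "daytime-scale-out",
            "Recurrence": "0 7 * * *",   # 07:00 UTC every day
            "MinSize": 2,
            "MaxSize": 2,
            "DesiredCapacity": 2,
        },
        {
            "AutoScalingGroupName": asg_name,
            "ScheduledActionName": "nighttime-scale-in",
            "Recurrence": "0 22 * * *",  # 22:00 UTC every day
            "MinSize": 1,
            "MaxSize": 1,
            "DesiredCapacity": 1,
        },
    ]


def apply_schedules(asg_name):
    # boto3 imported lazily so the sketch can be read/tested without AWS access
    import boto3

    asg = boto3.client("autoscaling")
    for action in schedule_params(asg_name):
        asg.put_scheduled_update_group_action(**action)
```

Because both actions pin min and max, the group holds exactly two instances by day and one by night regardless of any other scaling policy.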

I'm planning to use EFS to share all WordPress files and will likely use Redis for shared session management; however, this is not the topic of this question.

My concern with solution a) is that every time a new site is added, a staging site is created, or any other configuration change is needed, I need to launch a new instance, make the change, create an AMI and rotate the AMIs into my auto scaling group. Even if fully automated, I would expect this to take too long to be an acceptable turnaround.

Instead I could make these changes on a small number of instances and update the AMI programmatically. The AMI would then only be used:

  • in a failure scenario, or
  • when a restore test is being done, or
  • when development is tested on a test stack

Is this a good approach to manage a shared hosting environment?

jdog

1 Answer


I'm planning to use EFS to share all WordPress files and will likely use Redis for shared session management; however, this is not the topic of this question.

Hmm, that's a pity. I was just about to suggest that you offload all the configuration and user data from the instances to durable, shared storage and use the instances purely as stateless web servers: easy to scale, easy to replace. Conversion from local storage to EFS is easy (it's not really re-architecting the system, only moving some directories to EFS) and can be done with very little downtime.

All the Apache / Nginx / PHP and WordPress config files, as well as the uploaded user media files, will then be stored on the shared filesystem, and the instances will self-configure from there.


Anyway, if you dismiss the best and most obvious solution straight away, we're left with suggesting some inferior options. I've got a CloudFormation template that does something close to what you want:

  • The EC2 instance is in an Auto Scaling group of min=1/max=1, i.e. if it dies it automatically restarts.
  • Every night a Lambda creates a snapshot of the instance as a new AMI and updates the ASG's launch configuration with the new AMI ID, i.e. if the instance dies the next day, it will be spun up from last night's snapshot.
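A sketch of what such a nightly Lambda might look like. This version assumes the ASG uses a launch template rather than a launch configuration (launch configurations are immutable, so with one of those you would create a new configuration and repoint the group instead); the instance ID source, template name and AMI name prefix are all assumptions for illustration:

```python
import datetime

AMI_PREFIX = "wordpress-host"     # assumed AMI naming convention
LAUNCH_TEMPLATE = "wordpress-lt"  # assumed launch template name


def ami_name(prefix, now=None):
    """Timestamped AMI name, e.g. wordpress-host-20240102-0304."""
    now = now or datetime.datetime.utcnow()
    return f"{prefix}-{now:%Y%m%d-%H%M}"


def handler(event, context):
    # boto3 imported lazily so the pure helper above is testable offline
    import boto3

    ec2 = boto3.client("ec2")

    # 1. Snapshot the running instance as a new AMI. The instance ID is
    #    assumed to arrive in the event; you could also look it up by tag.
    image = ec2.create_image(
        InstanceId=event["instance_id"],
        Name=ami_name(AMI_PREFIX),
        NoReboot=True,  # skip the nightly reboot, accept crash-consistent disks
    )

    # 2. Add a launch template version pointing at the new AMI and make it
    #    the default, so replacements spin up from last night's snapshot.
    ec2.create_launch_template_version(
        LaunchTemplateName=LAUNCH_TEMPLATE,
        SourceVersion="$Latest",
        LaunchTemplateData={"ImageId": image["ImageId"]},
    )
    ec2.modify_launch_template(
        LaunchTemplateName=LAUNCH_TEMPLATE,
        DefaultVersion="$Latest",
    )
```

Old AMIs and their snapshots accumulate cost, so a real version would also deregister images older than some retention window.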

This works well for instances that don't change very often. E.g. our CMS has all its data in a database and all the app config and user files on EFS; only some packages and system config files are updated on the instance from time to time.

Might something like that work for you? However, the effort to implement it is quite likely higher than migrating to EFS in the first place, and the result is neither as good nor as resilient.

Hope that helps :)

MLu