This has happened a couple of times since we moved our cluster project from Google Cloud to AWS.
We have an EFS volume mounted on a load-balanced cluster in an Elastic Beanstalk environment.
I will be in the middle of setting something up, either uploading a large ZIP file to that EFS volume (via an instance in the load-balanced cluster) or unzipping one from an SSH session on a cluster instance, when the instance is suddenly ripped out from under me: the cluster has launched two (or more) new instances and is terminating the one I was working on.
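To see why the environment decided to replace the instance, I'm planning to pull the Auto Scaling activity history, since the "Cause" field usually says whether it was a failed health check or a scaling policy. A rough sketch of what I have in mind (the group name is a placeholder for whatever name Beanstalk generated):

```python
import boto3

asg = boto3.client("autoscaling")

# "my-eb-asg" is a placeholder; the real name comes from the
# Beanstalk environment's Auto Scaling group.
resp = asg.describe_scaling_activities(
    AutoScalingGroupName="my-eb-asg",
    MaxRecords=20,
)

# Print recent scaling events with the reason each one fired.
for activity in resp["Activities"]:
    print(activity["StartTime"], activity["StatusCode"])
    print("  ", activity["Description"])
    print("  ", activity.get("Cause", ""))
```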
What is going on here? The instances are all "t2.micro" instances; are they inadequate for this kind of sustained load, running out of CPU burst credits and failing health checks? Has anybody seen anything like this?
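If the burst-credit theory is right, I'd expect the CPUCreditBalance metric to be near zero around the time an instance gets replaced. A minimal way to check that via CloudWatch, assuming the metric is what I think it is (the instance ID below is a placeholder):

```python
import boto3
from datetime import datetime, timedelta, timezone

cw = boto3.client("cloudwatch")

# Placeholder instance ID; substitute the instance that was terminated.
resp = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=6),
    EndTime=datetime.now(timezone.utc),
    Period=300,  # 5-minute buckets
    Statistics=["Average"],
)

# A balance that drains toward zero during the upload/unzip window
# would support the "out of burst capacity" explanation.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
```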