Services like DynamoDB (not DynamoDB specifically; it is just the first example that came to mind) provide dynamic scaling of read/write capacity (i.e. compute) as well as storage capacity.
This means you can have a DynamoDB table terabytes in size with zero provisioned read/write capacity (e.g. using on-demand billing). Importantly, you then pay only for the storage, since no reads or writes are being performed.
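To make that billing model concrete, here is a minimal sketch using Python/boto3 (the table and key names are hypothetical): a table created in on-demand mode has no provisioned read/write capacity at all, so with zero traffic the only recurring charge is storage.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="big-mostly-idle-table",  # hypothetical table name
    AttributeDefinitions=[
        {"AttributeName": "pk", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "pk", "KeyType": "HASH"},
    ],
    # On-demand billing: no RCUs/WCUs are provisioned, and you are
    # charged per request plus storage rather than for reserved capacity.
    BillingMode="PAY_PER_REQUEST",
)
```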
If DynamoDB nodes use locally attached storage (presumably they need to for latency reasons), what do they do with the idle CPUs of those nodes?
The motivation for this question is that I am currently running a data store on AWS EC2 instances, already on the instance types with the highest SSD capacity (the i3 family), where storage capacity needs dramatically exceed compute/memory/network needs. As a result, most of the nodes' CPUs sit idle, i.e. wasted money (for example, an i3.16xlarge bundles 64 vCPUs with 15.2 TB of NVMe, so a purely storage-bound workload pays for 64 mostly idle vCPUs).
How do you provision storage and compute resources efficiently without losing the benefits of locally attached storage? How do established systems like AWS DynamoDB do it?