There are several different approaches to containers, and the currently accepted answer only seems to account for OCI-style (Docker-like) containers. There are many other kinds of containers, such as LXC and BSD jails, which take different approaches.
LXC, for example, can easily contain several applications and is mutable by default. It also runs a full init system and system daemons (systemd, etc.).
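As a rough sketch of what that looks like in practice (using the LXD client here; the `images:` remote and the container name `web` are just illustrative choices):

```shell
# Launch a full Ubuntu system container; it boots a real init system
lxc launch images:ubuntu/22.04 web

# systemd is PID 1 inside the container, so you can inspect it directly
lxc exec web -- systemctl status --no-pager

# The container is mutable: install several services side by side
lxc exec web -- apt-get update
lxc exec web -- apt-get install -y nginx postgresql
```

Classic LXC offers the same workflow with `lxc-create`, `lxc-start`, and `lxc-attach`.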
> Perhaps it is harder to allocate CPU resources without using VMs? And as a result, users would fight over each other to take the available CPU?
CPU, RAM, and disk space can be allocated just as easily with containers.
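For example, with Docker's standard `docker run` resource flags (the limits below are arbitrary values for illustration):

```shell
# Cap a container at 1.5 CPU cores and 512 MiB of RAM
docker run -d --cpus="1.5" --memory="512m" nginx

# Disk quotas work too, though --storage-opt size= requires a storage
# driver with quota support (e.g. overlay2 on xfs mounted with pquota)
docker run -d --storage-opt size=10G nginx
```

Under the hood these map onto the same kernel cgroup controls that LXC exposes through its own configuration keys.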
> The upside would be that the compute instance would spin up very quickly.
Provisioning a container is not an instant task (though it can be faster than "60-90 seconds"), as you still have to pull an image, extract it, and start it up.
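You can see the difference yourself; the first run pays the pull-and-extract cost, later runs only pay for startup (actual timings will vary with your network and disk):

```shell
# First run: the image must be pulled and extracted before the
# container can start, so most of the time goes to the download
time docker run --rm alpine:3.19 true

# Subsequent runs reuse the locally cached image and start
# in a small fraction of the first run's time
time docker run --rm alpine:3.19 true
```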
> Or perhaps there's some security concern?
Security is a major concern in all of the container solutions I mentioned, as they all share the host's kernel. While many security measures are in place, vulnerabilities are still found occasionally. If you shared a server with your friends and you all ran containers on it, you'd probably be mostly safe; but at the scale of a large provider such as Amazon, where countless businesses use the same infrastructure, it can be a significant security concern.
If you check the AWS Fargate website, for example, it states that many resources are not shared between containers; in that respect it is much closer to a VM than a traditional self-hosted container:
> Individual ECS tasks or EKS pods each run in their own dedicated kernel runtime environment and do not share CPU, memory, storage, or network resources with other tasks and pods. This ensures workload isolation and improved security for each task or pod.
One final concern I'd like to note is compatibility. Because your access to the kernel (and potentially your syscalls) is limited, you can't do certain things, such as loading DKMS modules or changing sysctl settings. Not every application will run under these restrictions, but such applications tend to be the exception rather than the norm.
There are many valid use cases for containers (both OCI-like and LXC-like), and it's definitely not a one-size-fits-all solution. Not having to run a whole kernel or virtualize other hardware (graphics, audio, network, etc.) results in much less overhead, but you also have to weigh the downsides of containers, some of which I've mentioned in this answer.