
We have a project where our hosting clients are offered a MySQL database as part of their plan, and I'm curious how one would orchestrate this. We want to run the databases on different machines from the ones running the web servers.

I was thinking of using Kubernetes with the new StatefulSet feature, but I'm intrigued by the way compose.io does it. They seem to guarantee IOPS and CPU, with limits applied only to RAM and disk, across all the databases they offer. One option in Kubernetes would be to allocate a size-limited persistent volume and run the container with a RAM limit but no CPU limit (a sketch of that pod shape follows below). However, that would make for a highly unstable cluster: containers/pods would start eating CPU, and Kubernetes would constantly shift them onto nodes where the required resources are available. Couple that with persistent volumes, which slow things down because a pod must be killed before its disk can be attached elsewhere, and customers could see a lot of downtime.
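For reference, the pod shape I'm describing would be roughly this (a sketch only; all names, sizes, and the secret are placeholders):

```yaml
# Sketch of the approach described above: memory capped, CPU left
# unbounded. Names and sizes are placeholder assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: customer-db
spec:
  containers:
    - name: mysql
      image: mysql:8.0
      env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: customer-db-secret   # placeholder secret
              key: password
      resources:
        limits:
          memory: 1Gi        # RAM limit as described
        # No cpu request/limit: pods compete freely for CPU, which is
        # exactly the instability concern raised above.
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: customer-db-data   # pre-provisioned, size-limited PVC
```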

I'm just curious: what is your guess on how they are orchestrating their databases?

Romeo Mihalcea

1 Answer


WARNING: Stateful workloads are very hard to run well inside Kubernetes. Do more homework than this answer before putting anything even near production!

Each scenario is different, but some best practices are:

  1. A multi-instance cluster using StatefulSets
  2. A dedicated volume per pod in the set (via volumeClaimTemplates)
  3. Inter-pod anti-affinity, so replicas are spread across nodes (see the sketch after this list)
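Put together, a minimal sketch of those three points might look like the following. All names, sizes, the secret, and the storage class are placeholder assumptions, not a production config:

```yaml
# Minimal StatefulSet sketch combining the three points above.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql            # headless Service giving each pod stable DNS
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      affinity:
        podAntiAffinity:        # point 3: keep replicas on different nodes
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: mysql
              topologyKey: kubernetes.io/hostname
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret   # placeholder secret
                  key: password
          resources:
            requests:           # requests == limits -> Guaranteed QoS class,
              cpu: "1"          # so the scheduler never over-commits the node
              memory: 2Gi
            limits:
              cpu: "1"
              memory: 2Gi
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:         # point 2: one PersistentVolumeClaim per pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: ssd   # placeholder storage class
        resources:
          requests:
            storage: 20Gi
```

Note that setting the CPU/memory requests equal to the limits puts the pods in the Guaranteed QoS class, which avoids the CPU-contention instability the question worries about.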
ConnorJC