As I mentioned in the comments, this can be done with the help of StatefulSets.
According to the Kubernetes documentation on StatefulSets:
Using StatefulSets
StatefulSets are valuable for applications that require one or more of the following:
- Stable, unique network identifiers.
- Stable, persistent storage.
- Ordered, graceful deployment and scaling.
- Ordered, automated rolling updates.
In the above, stable is synonymous with persistence across Pod (re)scheduling. If an application doesn’t require any stable identifiers or ordered deployment, deletion, or scaling, you should deploy your application using a workload object that provides a set of stateless replicas. Deployment or ReplicaSet may be better suited to your stateless needs.
Limitations
- The storage for a given Pod must either be provisioned by a PersistentVolume Provisioner based on the requested storage class, or pre-provisioned by an admin.
- Deleting and/or scaling a StatefulSet down will not delete the volumes associated with the StatefulSet. This is done to ensure data safety, which is generally more valuable than an automatic purge of all related StatefulSet resources.
- StatefulSets currently require a Headless Service to be responsible for the network identity of the Pods. You are responsible for creating this Service.
- StatefulSets do not provide any guarantees on the termination of pods when a StatefulSet is deleted. To achieve ordered and graceful termination of the pods in the StatefulSet, it is possible to scale the StatefulSet down to 0 prior to deletion.
- When using Rolling Updates with the default Pod Management Policy (OrderedReady), it’s possible to get into a broken state that requires manual intervention to repair.
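For example, the ordered, graceful termination mentioned in the limitations can be done with kubectl; a sketch, assuming a StatefulSet named web with Pods labeled app: nginx (as in the manifest below):

```shell
# Scale down to 0 first, so Pods terminate one at a time
# in reverse ordinal order (web-2, then web-1, then web-0)
kubectl scale statefulset web --replicas=0

# Wait until all Pods are actually gone
kubectl wait --for=delete pod -l app=nginx --timeout=120s

# Only then delete the StatefulSet object itself
kubectl delete statefulset web
```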
The example below demonstrates the components of a StatefulSet.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi
In the above example:
- A Headless Service, named nginx, is used to control the network domain.
- The StatefulSet, named web, has a Spec that indicates that 3 replicas of the nginx container will be launched in unique Pods.
- The volumeClaimTemplates will provide stable storage using PersistentVolumes provisioned by a PersistentVolume Provisioner.
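Once applied, each Pod gets a stable, ordinal-based name and a matching DNS record under the headless Service. A quick way to verify this (assuming the default namespace):

```shell
# Pods are created in order with stable names: web-0, web-1, web-2
kubectl get pods -l app=nginx

# Each Pod is reachable at a stable DNS name of the form
# <pod-name>.<service-name>.<namespace>.svc.cluster.local,
# e.g. web-0.nginx.default.svc.cluster.local
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup web-0.nginx
```

These DNS names survive Pod rescheduling, which is exactly the "stable, unique network identifiers" property quoted above.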
Also from the same documentation page:
Stable Storage
Kubernetes creates one PersistentVolume for each VolumeClaimTemplate. In the nginx example above, each Pod will receive a single PersistentVolume with a StorageClass of my-storage-class and 1 GiB of provisioned storage. If no StorageClass is specified, then the default StorageClass will be used. When a Pod is (re)scheduled onto a node, its volumeMounts mount the PersistentVolumes associated with its PersistentVolumeClaims. Note that the PersistentVolumes associated with the Pods’ PersistentVolumeClaims are not deleted when the Pods, or StatefulSet, are deleted. This must be done manually.
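In practice, that manual cleanup looks roughly like this for the example above (the generated PVC names follow the pattern <volumeClaimTemplate-name>-<pod-name>):

```shell
# List the claims created from the volumeClaimTemplates
# (www-web-0, www-web-1, www-web-2 in this example)
kubectl get pvc -l app=nginx

# After the StatefulSet is deleted, remove the PVCs explicitly;
# whether the backing PVs are also deleted depends on the
# StorageClass's reclaim policy (Delete vs Retain)
kubectl delete pvc www-web-0 www-web-1 www-web-2
```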