Kubernetes provides a flexible mechanism for automated task workloads: Jobs for one-time processes and CronJobs for periodic ones, both part of the batch/v1
Kubernetes API object model, so the solution from @Tim is quite reasonable in my view.
I assume you could spin up a Pod with aws-cli
on board in order to trigger a sync between the PVC mounted into that container and the target S3 storage. For that purpose you can build your own image containing the necessary binary, or use a ready-made solution such as the docker-kubectl-awscli
image maintained by @Expert360.
The following Job executes the aws s3 sync
command inside the container to sync data between the two targets. Note that as written it pulls from the bucket into the PVC (the first argument to aws s3 sync is the source, the second the destination); swap the arguments to push the PVC contents to S3 instead:
apiVersion: batch/v1
kind: Job
metadata:
  name: backup-s3
spec:
  template:
    spec:
      containers:
      - name: kubectl-aws
        image: expert360/kubectl-awscli:v1.11.2
        env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: aws-s3-key
              key: aws-access-key-id
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: aws-s3-access-key
              key: aws-secret-access-key
        command: [ "/bin/bash", "-c", "aws s3 sync s3://my-bucket/ /data/backup" ]
        volumeMounts:
        - name: backup-aws
          mountPath: /data/backup
      volumes:
      - name: backup-aws
        persistentVolumeClaim:
          claimName: backup-aws-claim
      restartPolicy: Never
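If you want the sync to run periodically rather than once, the same Pod template can be wrapped in a CronJob. The sketch below assumes Kubernetes ≥ 1.21, where CronJob graduated to batch/v1 (on older clusters use batch/v1beta1), and the daily schedule is just an example:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-s3-cron
spec:
  # Run every day at 02:00; adjust to your needs
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: kubectl-aws
            image: expert360/kubectl-awscli:v1.11.2
            envFrom: []  # set AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from Secrets as in the Job above
            command: [ "/bin/bash", "-c", "aws s3 sync s3://my-bucket/ /data/backup" ]
            volumeMounts:
            - name: backup-aws
              mountPath: /data/backup
          volumes:
          - name: backup-aws
            persistentVolumeClaim:
              claimName: backup-aws-claim
          restartPolicy: Never
```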
You have to supply aws-cli
with the corresponding AWS_ACCESS_KEY_ID
and AWS_SECRET_ACCESS_KEY
environment variables, stored in the respective Kubernetes Secret objects.
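For example, the two Secrets referenced by the Job above could be created with manifests like these (the credential values are placeholders you have to replace with your own; stringData lets you supply them without base64-encoding):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-s3-key
type: Opaque
stringData:
  # Placeholder - put your real AWS access key ID here
  aws-access-key-id: <your-access-key-id>
---
apiVersion: v1
kind: Secret
metadata:
  name: aws-s3-access-key
type: Opaque
stringData:
  # Placeholder - put your real AWS secret access key here
  aws-secret-access-key: <your-secret-access-key>
```

Apply them with kubectl apply -f before running the Job, so the secretKeyRef lookups resolve.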