
I tried to set up a RabbitMQ cluster in a Kubernetes environment that has NFS PVs with the help of this tutorial. Unfortunately it seems that RabbitMQ wants to change the owner of /var/lib/rabbitmq, and when I have an NFS directory mounted there, I get an error:

 $ kubectl logs rabbitmq-0 -f
chown: /var/lib/rabbitmq: Operation not permitted
chown: /var/lib/rabbitmq: Operation not permitted

I guess I have two options: fork RabbitMQ, remove the chown, and build my own images; or make Kubernetes/NFS play nicely together. I would not like to maintain my own fork, and getting Kubernetes/NFS to work nicely does not sound like it should be my problem. Any other ideas?

asked by Al Hoo
  • Please see https://www.rabbitmq.com/production-checklist.html and https://groups.google.com/d/msgid/rabbitmq-users/CAGcLz6V4af9_TNU4HSH9x5eKEgXDKqeQ3%3DBL%2Br%3DR%2B%3DM8ghwR0Q%40mail.gmail.com?utm_medium=email&utm_source=footer – Martin Schröder Dec 02 '19 at 06:30

1 Answer


Here is what I did to reproduce this issue. I installed a Kubernetes cluster using kubeadm on Red Hat 7; the cluster and node details are below.

ENVIRONMENT DETAILS:

[root@master tmp]# kubectl cluster-info
Kubernetes master is running at https://192.168.56.4:6443
KubeDNS is running at https://192.168.56.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use kubectl cluster-info dump.

[root@master tmp]#

[root@master tmp]# kubectl get no
NAME         STATUS     ROLES    AGE     VERSION
master.k8s   Ready      master   8d      v1.16.2
node1.k8s    Ready      <none>   7d22h   v1.16.3
node2.k8s    Ready      <none>   7d21h   v1.16.3
[root@master tmp]#

First I set up NFS by running the steps below on the relevant nodes. Here the master node is the NFS server and both worker nodes are NFS clients.

NFS SETUP:

yum install nfs-utils nfs-utils-lib    # on the NFS server and both clients
yum install portmap                    # on the NFS server and both clients
mkdir /nfsroot                         # on the NFS server

[root@master ~]# cat /etc/exports      # on the NFS server
/nfsroot 192.168.56.5/255.255.255.0(rw,sync,no_root_squash)
/nfsroot 192.168.56.6/255.255.255.0(rw,sync,no_root_squash)

exportfs -r                            # on the NFS server
service nfs start                      # on the NFS server and both clients
showmount -e                           # on the NFS server and both clients
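Before wiring the export into Kubernetes, a quick manual mount from one of the worker nodes verifies that it is reachable and writable (a sanity check; /mnt is just a scratch mount point):

mount -t nfs 192.168.56.4:/nfsroot /mnt      # temporary test mount on a worker node
touch /mnt/nfs-test && ls -l /mnt/nfs-test   # with no_root_squash in effect the file is owned by root, not nfsnobody
rm /mnt/nfs-test
umount /mnt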

Now the NFS setup is ready and we can apply the RabbitMQ Kubernetes setup.

RABBITMQ K8S SETUP:

The first step is to create PersistentVolumes backed by the NFS export we created in the previous step.

[root@master tmp]# cat /root/rabbitmq-pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: rabbitmq-pv-1
spec:
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  nfs:
    server: 192.168.56.4
    path: /nfsroot
  capacity:
    storage: 1Mi
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: rabbitmq-pv-2
spec:
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  nfs:
    server: 192.168.56.4
    path: /nfsroot
  capacity:
    storage: 1Mi
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: rabbitmq-pv-3
spec:
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  nfs:
    server: 192.168.56.4
    path: /nfsroot
  capacity:
    storage: 1Mi
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: rabbitmq-pv-4
spec:
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  nfs:
    server: 192.168.56.4
    path: /nfsroot
  capacity:
    storage: 1Mi
  persistentVolumeReclaimPolicy: Recycle

After applying the above manifest, the PVs were created as shown below:

[root@master ~]# kubectl apply -f rabbitmq-pv.yaml
persistentvolume/rabbitmq-pv-1 created
persistentvolume/rabbitmq-pv-2 created
persistentvolume/rabbitmq-pv-3 created
persistentvolume/rabbitmq-pv-4 created
[root@master ~]# kubectl get pv
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
rabbitmq-pv-1   1Mi        RWO,ROX        Recycle          Available                                   5s
rabbitmq-pv-2   1Mi        RWO,ROX        Recycle          Available                                   5s
rabbitmq-pv-3   1Mi        RWO,ROX        Recycle          Available                                   5s
rabbitmq-pv-4   1Mi        RWO,ROX        Recycle          Available                                   5s
[root@master ~]#
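One caveat worth noting: all four PVs point at the same /nfsroot path, so every replica ends up writing into one shared directory; for anything beyond a demo you would export a separate directory per volume. If a PV fails to bind later, its source and events can be inspected with:

[root@master ~]# kubectl describe pv rabbitmq-pv-1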

There is no need to create PersistentVolumeClaims manually, since they are created automatically by the volumeClaimTemplates section of the StatefulSet manifest. Now let's create the secret you mentioned:

[root@master tmp]# kubectl create secret generic rabbitmq-config --from-literal=erlang-cookie=c-is-for-cookie-thats-good-enough-for-me
secret/rabbitmq-config created
[root@master tmp]#

[root@master tmp]# kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-vjsmd   kubernetes.io/service-account-token   3      8d
jp-token-cfdzx        kubernetes.io/service-account-token   3      5d2h
rabbitmq-config       Opaque                                1      39m
[root@master tmp]#
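To double-check that the cookie made it into the secret intact, it can be decoded back out (the bracket form of jsonpath is used here to be safe with the hyphen in the key name):

[root@master tmp]# kubectl get secret rabbitmq-config -o jsonpath="{.data['erlang-cookie']}" | base64 -d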

Now let's submit your RabbitMQ manifest, with a few changes: replace every LoadBalancer service type with NodePort, since we are not running in a cloud-provider environment; rename the volumes to rabbitmq-pv, matching the PVs we created in the PV step; and reduce the size from 1Gi to 1Mi, since this is just a test demo.

apiVersion: v1
kind: Service
metadata:
  # Expose the management HTTP port on each node
  name: rabbitmq-management
  labels:
    app: rabbitmq
spec:
  ports:
  - port: 15672
    name: http
  selector:
    app: rabbitmq
  sessionAffinity: ClientIP
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  # The required headless service for StatefulSets
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  ports:
  - port: 5672
    name: amqp
  - port: 4369
    name: epmd
  - port: 25672
    name: rabbitmq-dist
  clusterIP: None
  selector:
    app: rabbitmq
---
apiVersion: v1
kind: Service
metadata:
  # The required headless service for StatefulSets
  name: rabbitmq-cluster
  labels:
    app: rabbitmq
spec:
  ports:
  - port: 5672
    name: amqp
  - port: 4369
    name: epmd
  - port: 25672
    name: rabbitmq-dist
  type: NodePort
  selector:
    app: rabbitmq
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: "rabbitmq"
  selector:
    matchLabels:
      app: rabbitmq
  replicas: 4
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: rabbitmq
        image: rabbitmq:3.6.6-management-alpine
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh
              - -c
              - >
                if [ -z "$(grep rabbitmq /etc/resolv.conf)" ]; then
                  sed "s/^search \([^ ]\+\)/search rabbitmq.\1 \1/" /etc/resolv.conf > /etc/resolv.conf.new;
                  cat /etc/resolv.conf.new > /etc/resolv.conf;
                  rm /etc/resolv.conf.new;
                fi;
                until rabbitmqctl node_health_check; do sleep 1; done;
                if [[ "$HOSTNAME" != "rabbitmq-0" && -z "$(rabbitmqctl cluster_status | grep rabbitmq-0)" ]]; then
                  rabbitmqctl stop_app;
                  rabbitmqctl join_cluster rabbit@rabbitmq-0;
                  rabbitmqctl start_app;
                fi;
                rabbitmqctl set_policy ha-all "." '{"ha-mode":"exactly","ha-params":3,"ha-sync-mode":"automatic"}'
        env:
        - name: RABBITMQ_ERLANG_COOKIE
          valueFrom:
            secretKeyRef:
              name: rabbitmq-config
              key: erlang-cookie
        ports:
        - containerPort: 5672
          name: amqp
        - containerPort: 25672
          name: rabbitmq-dist
        volumeMounts:
        - name: rabbitmq-pv
          mountPath: /var/lib/rabbitmq
  volumeClaimTemplates:
  - metadata:
      name: rabbitmq-pv
      annotations:
        volume.alpha.kubernetes.io/storage-class: default
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Mi # make this bigger in production

The postStart hook in the manifest patches /etc/resolv.conf so that bare peer names like rabbitmq-0 resolve through the headless service, then joins every pod except rabbitmq-0 to the cluster and sets a queue-mirroring policy. After submitting the manifest, we can see that the StatefulSet and pods were created.

[root@master tmp]# kubectl apply -f rabbitmq.yaml
service/rabbitmq-management created
service/rabbitmq created
service/rabbitmq-cluster created
statefulset.apps/rabbitmq created

[root@master tmp]# kubectl get pods
NAME                         READY   STATUS                       RESTARTS   AGE
rabbitmq-0                   1/1     Running                      0          18m
rabbitmq-1                   1/1     Running                      0          17m
rabbitmq-2                   1/1     Running                      0          13m
rabbitmq-3                   1/1     Running                      0          13m
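With all four pods running, the resolv.conf tweak from the postStart hook can be sanity-checked: a bare peer name such as rabbitmq-0 should now resolve from any other pod (assuming the busybox nslookup shipped in the alpine image):

[root@master tmp]# kubectl exec rabbitmq-1 -- nslookup rabbitmq-0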

[root@master ~]# kubectl get pvc
NAME                     STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
rabbitmq-pv-rabbitmq-0   Bound    rabbitmq-pv-1   1Mi        RWO,ROX                       49m
rabbitmq-pv-rabbitmq-1   Bound    rabbitmq-pv-3   1Mi        RWO,ROX                       48m
rabbitmq-pv-rabbitmq-2   Bound    rabbitmq-pv-2   1Mi        RWO,ROX                       44m
rabbitmq-pv-rabbitmq-3   Bound    rabbitmq-pv-4   1Mi        RWO,ROX                       43m
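The PVC names follow the <volumeClaimTemplate name>-<pod name> pattern, and each claim simply bound whichever Available PV satisfied its size and access-mode request, which is why the pairing order (pv-1, pv-3, pv-2, pv-4) is arbitrary. The volume behind any claim can be read back with:

[root@master ~]# kubectl get pvc rabbitmq-pv-rabbitmq-0 -o jsonpath='{.spec.volumeName}'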

[root@master ~]# kubectl get svc
NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                         AGE
rabbitmq              ClusterIP   None             <none>        5672/TCP,4369/TCP,25672/TCP                     49m
rabbitmq-cluster      NodePort    10.102.250.172   <none>        5672:30574/TCP,4369:31757/TCP,25672:31854/TCP   49m
rabbitmq-management   NodePort    10.108.131.46    <none>        15672:31716/TCP                                 49m
[root@master ~]#
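The cluster membership can also be verified from inside the seed pod; rabbitmqctl ships with the image, and cluster_status should list all four rabbit@rabbitmq-N nodes:

[root@master ~]# kubectl exec rabbitmq-0 -- rabbitmqctl cluster_status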

Now I hit the RabbitMQ management page through the NodePort service at http://192.168.56.6:31716 and was able to reach the management login page.

[screenshot: RabbitMQ management login page]

[screenshot: cluster status]

Please let me know if you still face the chown issue after trying the above, so that we can investigate further, for example by checking whether any PodSecurityPolicies are applied.
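If it does persist, two things are worth checking before forking the image: that the export really has no_root_squash in effect (root_squash maps the container's root to nobody, which is exactly what makes chown fail with "Operation not permitted" on NFS), and whether a restrictive PodSecurityPolicy forces the container to run as non-root. A quick diagnostic sketch:

exportfs -v       # on the NFS server: the active export options should include no_root_squash
kubectl get psp   # on the master: lists any PodSecurityPolicies present in the cluster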

answered by JPNagarajan (edited by PjoterS)