
I recently reinstalled a Kubernetes cluster using kubeadm, and while trying out some Helm charts I encountered a strange behavior that I don't understand.

ClusterIP services are supposed to be accessible only from within the cluster, and so far all my services have worked that way. But after installing kube-prometheus from Bitnami, I ended up with a few ClusterIP services that somehow hijack the host IP and are publicly exposed. I did not think that was possible without an Ingress/NodePort/etc.
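As a sanity check, listing the service types makes it easy to verify that nothing is declared as NodePort or LoadBalancer (assuming kubectl is pointed at this cluster):

kubectl get svc -n kube-prometheus
kubectl get svc --all-namespaces | grep -v ClusterIP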

Here is the actual service:

kind: Service
apiVersion: v1
metadata:
  name: kube-prometheus-node-exporter
  namespace: kube-prometheus
  selfLink: /api/v1/namespaces/kube-prometheus/services/kube-prometheus-node-exporter
  uid: 0c4cd5a1-4849-4635-a656-238a5c3b4b78
  resourceVersion: '20388'
  creationTimestamp: '2020-09-08T18:08:17Z'
  labels:
    app.kubernetes.io/instance: kube-prometheus
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: node-exporter
    app.kubernetes.io/version: 1.0.1
    helm.sh/chart: node-exporter-1.1.0
    jobLabel: node-exporter
  annotations:
    meta.helm.sh/release-name: kube-prometheus
    meta.helm.sh/release-namespace: kube-prometheus
    prometheus.io/scrape: 'true'
  managedFields:
    - manager: Go-http-client
      operation: Update
      apiVersion: v1
      time: '2020-09-08T18:08:17Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:meta.helm.sh/release-name': {}
            'f:meta.helm.sh/release-namespace': {}
            'f:prometheus.io/scrape': {}
          'f:labels':
            .: {}
            'f:app.kubernetes.io/instance': {}
            'f:app.kubernetes.io/managed-by': {}
            'f:app.kubernetes.io/name': {}
            'f:app.kubernetes.io/version': {}
            'f:helm.sh/chart': {}
            'f:jobLabel': {}
        'f:spec':
          'f:ports':
            .: {}
            'k:{"port":9100,"protocol":"TCP"}':
              .: {}
              'f:name': {}
              'f:port': {}
              'f:protocol': {}
              'f:targetPort': {}
          'f:selector':
            .: {}
            'f:app.kubernetes.io/instance': {}
            'f:app.kubernetes.io/name': {}
          'f:sessionAffinity': {}
          'f:type': {}
spec:
  ports:
    - name: metrics
      protocol: TCP
      port: 9100
      targetPort: metrics
  selector:
    app.kubernetes.io/instance: kube-prometheus
    app.kubernetes.io/name: node-exporter
  clusterIP: 10.107.104.65
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}

Nothing stands out as to why this one would end up public (e.g. it can be accessed via the host IP: http://xx.yy.150.105:9100/metrics).
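From what I understand, a ClusterIP Service by itself never opens a port on the node, so whatever is answering on :9100 must be listening directly on the host. A minimal check to run on the node itself (assuming ss is available; netstat works too):

ss -tlnp | grep 9100

If the listening process turns out to be the node-exporter binary rather than kube-proxy, then something is binding the port on the host network, independently of this Service.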

Any idea what I could be missing?

Olivier
  • Your question mentions an IP address that doesn't appear anywhere except in your example. Is that the IP address of the Node, and does that mean your Node has a public IP address? (I suspect one of the Pods has `hostNetwork: true` but it's hard to know if that's what's happening to you based on the vagueness of your question) – mdaniel Sep 10 '20 at 03:21
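For anyone checking the hostNetwork theory from the comment above, a quick way to inspect the pods (the label selector is taken from the Service's spec.selector):

kubectl get pods -n kube-prometheus \
  -l app.kubernetes.io/name=node-exporter \
  -o jsonpath='{.items[*].spec.hostNetwork}'

If this prints true, each node-exporter pod shares its node's network namespace, so port 9100 is bound on the node's own interfaces and is reachable from outside regardless of the Service type being ClusterIP.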

0 Answers