
I am deploying an ELK stack on a Kubernetes (v1.4.5) cluster on Azure. This is the configuration that creates the Kibana Service and Deployment:

# deployment.yml
---
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
# elasticsearch deployment and Service
---
# logstash Deployment and Service
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: kibana
    spec:
      containers:
      - name: kibana
        image: sebp/elk:521
        env:
          - name: "LOGSTASH_START"
            value: "0"
          - name: "ELASTICSEARCH_START"
            value: "0"
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
        volumeMounts:
          - name: config-volume
            mountPath: '/opt/kibana/config/'
      volumes:
      - name: config-volume
        configMap:
          name: kibana-config
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    component: kibana
spec:
  type: LoadBalancer
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 30257
  selector:
    component: kibana

I deploy via kubectl apply -f deployment.yml. I clear out the whole ELK stack via kubectl delete namespace logging.
There's a load balancer on Azure, with backend pool the k8s agents. When I deploy, a public ip and a rule are added to the load balancer. And I can access kibana in my browser by using its IP address from the Front end pool of addresses of the load balancer.

Here's the problem (1) and what I want to achieve (2):

  1. Every time I kubectl apply, a new IP address and rule are added to the front-end pool of the load balancer. Strangely enough, the previous IP addresses and rules are still there, even though I run kubectl delete namespace logging before deploying, which should release the previously used IP addresses and rules. (I checked the code here, and as far as I can see there are functions that ensure there are no stale public IP addresses and load balancing rules.) IP addresses added by previous deployments can't be used to access the currently deployed Kibana service.
  2. I want a DNS name that clients of the ELK stack (e.g. my browser <-> Kibana, a log-emitting server <-> Logstash) can use to refer to the ELK services without hardcoding IP addresses, so that re-deployments of the ELK stack are transparent to the clients.

What I've tried so far: I manually created a public IP address with a DNS name via Azure's dashboard. I added load balancing rules to the load balancer that look like the ones created automatically on kubectl apply. Then I tried to use this manually created public IP in the Kibana Service spec, first under externalIPs and then under loadBalancerIP (k8s docs).
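Concretely, the two variants of the Service spec look roughly like this (a sketch; the IP stands in for the manually created one):

spec:
  type: LoadBalancer
  externalIPs:
  - 52.xxx.xxx.xxx            # variant 1: the manually created public IP

spec:
  type: LoadBalancer
  loadBalancerIP: 52.xxx.xxx.xxx   # variant 2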
When externalIPs is set to the public IP, kubectl describe service --namespace=logging returns:

Name:                   kibana
Namespace:              logging
Labels:                 component=kibana
Selector:               component=kibana
Type:                   LoadBalancer
IP:                     10.x.xxx.xx
External IPs:           52.xxx.xxx.xxx   <- this is the IP I manually created
LoadBalancer Ingress:   52.xxx.xxx.xxx
Port:                   <unset> 5601/TCP
NodePort:               <unset> 30257/TCP
Endpoints:              10.xxx.x.xxx:5601
Session Affinity:       None
Events:
  FirstSeen   LastSeen   Count   From                    SubobjectPath   Type     Reason                 Message
  ---------   --------   -----   ----                    -------------   ------   ------                 -------
  15m         15m        1       {service-controller }                   Normal   CreatingLoadBalancer   Creating load balancer
  14m         14m        1       {service-controller }                   Normal   CreatedLoadBalancer    Created load balancer

However, requests to the DNS name or directly to the external IP time out. When I instead set loadBalancerIP in the Service spec, kubectl describe service returns similar output, but without the External IPs row, and again a new public IP address + rules are created on the load balancer. Once again it's not possible to use the DNS name/IP of the public IP I created manually.

Any help would be super appreciated :)

1 Answer


Ah. The easiest thing to do would be to avoid deleting the service before every deployment. In my experience, services tend to be very long lived; they provide a nice, fixed way to refer to things without having to worry about dynamic values for ports, ips, dns, etc.

In the Kibana Service spec, remove the nodePort entry from the ports configuration so that the Service can do its own thing; one less thing to think about. Don't set values for loadBalancerIP or externalIPs. The same rules apply to the other Services.
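A minimal sketch of the Kibana Service after those changes, using the same names as the question (nodePort, loadBalancerIP, and externalIPs are simply left out):

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    component: kibana
spec:
  type: LoadBalancer
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    component: kibana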

For the ELK stack config files (I don't recall off the top of my head what they look like), refer to the other components by their Service names: no need to hardcode IPs or anything. (No idea if you were doing this, but just in case.)
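For example, assuming an elasticsearch Service exists in the same logging namespace (the Service name is an assumption here), Kibana 5.x's kibana.yml could reference it by name instead of by IP:

elasticsearch.url: "http://elasticsearch:9200"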

Allow the services to be created; get the loadbalancer external IP and plug it into your DNS config.
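Something like this (standard kubectl) shows the external IP once it has been provisioned; the EXTERNAL-IP column is what you'd plug into DNS:

kubectl --namespace=logging get service kibana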

You can continue using namespaces if that's how you prefer to do things, but don't delete the whole namespace to clear out the Deployments for ELK components.

Split your ELK stack spec into separate files for Deployments and Services (technically, I'm not sure if this is required; you might be able to get away with a single file), so that you can use:

kubectl delete -f logging-deployments.yaml
kubectl apply -f logging-deployments.yaml

or a similar command to update the deployments without bothering the services.

If you need (or prefer) to delete the ELK stack in another manner before creating a new one, you can also use:

kubectl -n logging delete deployments --all

to delete all of the deployments within the logging namespace. To me, this option seems a little more dangerous than it needs to be.

A second option would be:

kubectl -n logging delete deployments kibana
kubectl -n logging delete deployments elasticsearch
kubectl -n logging delete deployments logstash

That works too, if you don't mind the extra typing.

Another option would be to add a new label, something like:

role: application

or

stack: ELK

to each of the Deployment specs. Then you can use:

kubectl -n logging delete deployments -l stack=ELK

to limit the scope of the deletion... but again, this seems dangerous.
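For reference, the label would go in the Deployment's metadata, e.g. (a sketch based on the Kibana Deployment from the question):

metadata:
  name: kibana
  namespace: logging
  labels:
    stack: ELK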


My preference, unless there is some overriding reason not to, would be to split the config into two or more files and use:

kubectl create -f svc-logging.yaml
kubectl create -f deploy-logging.yaml
kubectl delete -f deploy-logging.yaml
kubectl apply -f deploy-logging.yaml
...  
etc

in order to help prevent any nasty typo-induced accidents.

I break things down a little bit further, with a separate folder for each component containing a Deployment and Service, nested together as makes sense (easier to keep in a repo, and easier if more than one person needs to make changes to related but separate components), usually with bash create/destroy scripts to provide something like documentation... but you get the idea.
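As a rough illustration (the file and script names here are just an example), that layout might look like:

logging/
  elasticsearch/
    deployment.yaml
    service.yaml
  logstash/
    deployment.yaml
    service.yaml
  kibana/
    deployment.yaml
    service.yaml
  create.sh
  destroy.sh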

Set up this way, you should be able to update any or all deployment components without breaking your DNS/loadbalancing configuration.

(Of course, this all sort of assumes that having everything in one file is not some kind of hard requirement... in that case, I don't have a good answer for you off the top of my head...)

mdavids
  • Thanks a lot for the detailed advice. One of the problems turned out to be that I installed kubectl via `az acs kubernetes install-cli`, which for some reason installed an older version of kubectl than that of the cluster. – Georgi Tenev Mar 10 '17 at 09:59
  • @JoroTenev ah can you perhaps elaborate on that - do you mean updating `kubectl` now allows you to use Azure static IPs through setting `loadBalancerIP`? Thanks. – yungchin May 29 '17 at 11:32