I am deploying an ELK stack on a Kubernetes (v1.4.5) cluster on Azure. This is the configuration that creates the Kibana Service and Deployment:
# deployment.yml
---
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
# elasticsearch Deployment and Service
---
# logstash Deployment and Service
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: kibana
    spec:
      containers:
      - name: kibana
        image: sebp/elk:521
        env:
        - name: "LOGSTASH_START"
          value: "0"
        - name: "ELASTICSEARCH_START"
          value: "0"
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
        volumeMounts:
        - name: config-volume
          mountPath: '/opt/kibana/config/'
      volumes:
      - name: config-volume
        configMap:
          name: kibana-config
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    component: kibana
spec:
  type: LoadBalancer
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 30257
  selector:
    component: kibana
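The kibana-config ConfigMap referenced above is created separately and not included here; a minimal sketch of such a ConfigMap would look like this (the kibana.yml contents below are illustrative placeholders, not my exact config):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana-config
  namespace: logging
data:
  kibana.yml: |
    # placeholder settings; the elasticsearch URL depends on the elasticsearch Service name
    server.host: "0.0.0.0"
    elasticsearch.url: "http://elasticsearch:9200"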
I deploy via kubectl apply -f deployment.yml. I clear out the whole ELK stack via kubectl delete namespace logging.
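For reference, the full cycle looks roughly like this (the -w flag just watches until the LoadBalancer ingress IP shows up):

kubectl apply -f deployment.yml
kubectl get svc --namespace=logging -w    # wait for the kibana Service to get an external IP
kubectl delete namespace logging          # tear the whole stack down again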
There's a load balancer on Azure, with the k8s agents as its backend pool. When I deploy, a public IP and a rule are added to the load balancer, and I can access Kibana in my browser by using that IP address from the load balancer's frontend pool.
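The public IPs and rules can also be seen from the CLI with something like the following (the resource group and load balancer names are placeholders for my cluster's resources, assuming the az CLI):

az network lb frontend-ip list --resource-group <k8s-resource-group> --lb-name <k8s-load-balancer> -o table
az network lb rule list --resource-group <k8s-resource-group> --lb-name <k8s-load-balancer> -o table
az network public-ip list --resource-group <k8s-resource-group> -o table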
Here's the problem (1) and what I want to achieve (2):
- Every time I kubectl apply, a new IP address and rule are added to the frontend pool of the load balancer. Strangely enough, the previous IP addresses and rules are still there (even though I run kubectl delete namespace logging before deploying, which would suggest that the previously used IP addresses and rules should be released; I checked the code here and as far as I can see there are functions that ensure there are no stale public IP addresses and load balancing rules). IP addresses added from previous deployments can't reach the currently deployed Kibana service.
- I want a DNS name that the clients of the ELK stack (e.g. my browser <-> Kibana, a log-emitting server <-> Logstash) can use to refer to the ELK services without hardcoding IP addresses in the clients, so that re-deployments of the ELK stack are transparent to them.
What I've tried so far: I manually created a Public IP address with a DNS name via Azure's dashboard. I added load balancing rules to the load balancer that look like the ones created automatically on kubectl apply. I then tried to use this manually created public IP in the Kibana Service spec, first under externalIPs and then under loadBalancerIP (k8s docs).
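Roughly, the two Service variants I tried look like this (only the spec is shown, the rest is identical to the manifest above; 52.xxx.xxx.xxx stands for the manually created public IP):

# variant 1: externalIPs
spec:
  type: LoadBalancer
  externalIPs:
  - 52.xxx.xxx.xxx
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 30257
  selector:
    component: kibana

# variant 2: loadBalancerIP
spec:
  type: LoadBalancer
  loadBalancerIP: 52.xxx.xxx.xxx
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 30257
  selector:
    component: kibana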
When externalIPs is set to the public IP, kubectl describe service --namespace=logging returns
Name: kibana
Namespace: logging
Labels: component=kibana
Selector: component=kibana
Type: LoadBalancer
IP: 10.x.xxx.xx
External IPs: 52.xxx.xxx.xxx <- this is the IP I manually created
LoadBalancer Ingress: 52.xxx.xxx.xxx
Port: <unset> 5601/TCP
NodePort: <unset> 30257/TCP
Endpoints: 10.xxx.x.xxx:5601
Session Affinity: None
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
15m 15m 1 {service-controller } Normal CreatingLoadBalancer Creating load balancer
14m 14m 1 {service-controller } Normal CreatedLoadBalancer Created load balancer
However, requests to the DNS name or directly to the external IP time out.
When I set loadBalancerIP in the Service spec instead, kubectl describe service returns similar output, but without the External IPs row, and again a new public IP address and rules are created on the load balancer. It's again not possible to use the DNS name/IP of the public IP I created manually.
Any help would be super appreciated :)