
I'm trying to aggregate logs from my Kubernetes cluster into an Elasticsearch server.
To do that, I've deployed Filebeat on the cluster, but I don't think it can work: the /var/lib/docker/containers directories contain no log files.

I'm able to see container logs via kubectl logs, but I have no idea how to make Filebeat reach them.
Here is a fragment of the docker inspect output:

    "LogPath": "",
    "Name": "/k8s_POD_checkit-incubator-6bd48754c5-s64bk_checkit-incubator_2cb40353-c7b4-11e8-9574-005056b1f077_1",
    "RestartCount": 0,
    "Driver": "devicemapper",
    "MountLabel": "",
    "ProcessLabel": "",
    "AppArmorProfile": "",
    "ExecIDs": null,
    "HostConfig": {
        "Binds": null,
        "ContainerIDFile": "",
        "LogConfig": {
            "Type": "journald",
            "Config": {}
        },
        "NetworkMode": "none",
        "PortBindings": {},
        "RestartPolicy": {
            "Name": "",
            "MaximumRetryCount": 0
        },

Any clues how I can figure out a valid Filebeat configuration?


1 Answer


Your configuration ships logs to journald, so journalctl is your tool.
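
For example, Docker's journald driver stamps each journal entry with CONTAINER_NAME and CONTAINER_ID fields, so you can filter on them directly. A minimal sketch — the name below is the pause container from your docker inspect output (leading slash dropped), which logs almost nothing, so substitute one of your application containers:

    # Follow journal entries for a single container; the journald log
    # driver adds CONTAINER_NAME / CONTAINER_ID fields to every entry.
    journalctl -f CONTAINER_NAME=k8s_POD_checkit-incubator-6bd48754c5-s64bk_checkit-incubator_2cb40353-c7b4-11e8-9574-005056b1f077_1

    # Or inspect everything the Docker daemon itself logged:
    journalctl -u docker.service --since "1 hour ago"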

First, tell us which Kubernetes distribution you are running: kubespray, GKE, or something else.

Second, you might want to check other directories on the node, such as /var/log/containers/, for container logs.
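
A quick check on a node (paths are the kubelet defaults; with the journald log driver the symlinks may be dangling, since their targets would be the missing json-file logs):

    # kubelet symlinks per-container log files here...
    ls -l /var/log/containers/
    # ...pointing into the per-pod directories:
    ls -l /var/log/pods/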

Third, try deploying Filebeat using Helm or whatever way your k8s distribution recommends.
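
A minimal sketch with Helm (Elastic's chart repository assumed, and Helm 2 syntax, which was current at the time; the chart's values still need your Elasticsearch endpoint):

    # Add Elastic's chart repository and install the Filebeat chart,
    # which runs Filebeat as a DaemonSet on every node.
    helm repo add elastic https://helm.elastic.co
    helm repo update
    helm install --name filebeat elastic/filebeat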

  • I'm running Kubernetes via kubeadm on a bare-metal CentOS VM. Is it possible to change Docker's log output from journald to standard log files? Is it wise to do that? From my research on journald logs, pushing them into Elasticsearch is not easy... – Djent Oct 21 '18 at 10:54
  • You can simply deploy Filebeat via Helm; it should handle all of that for you. – anx Oct 24 '18 at 13:17
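
Regarding the follow-up question: the Docker daemon's default log driver can be switched back to json-file in /etc/docker/daemon.json. A minimal sketch (the rotation options are illustrative; the daemon must be restarted, and already-running containers keep their old driver until they are recreated):

    # Point Docker back at the json-file driver, so logs land under
    # /var/lib/docker/containers/<id>/<id>-json.log where Filebeat
    # expects them by default.
    # (Merge with any existing daemon.json settings instead of overwriting.)
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m", "max-file": "3" }
    }
    EOF
    sudo systemctl restart docker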