
Inside the docker-compose.yml we configured the following volume:

image: confluentinc/cp-kafka:latest
volumes:
  - /grid/kafka-data:/var/lib/kafka/data
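
For context, a minimal single-broker compose file with such a bind mount might look roughly like the sketch below; the service name and the omitted broker settings (listeners, ZooKeeper connection, etc.) are illustrative assumptions, not taken from the original file.

version: "2"
services:
  kafka:
    image: confluentinc/cp-kafka:latest
    ports:
      - "9092:9092"
    # host path : container path (bind mount)
    volumes:
      - /grid/kafka-data:/var/lib/kafka/data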


docker-compose ps
               Name                           Command            State                     Ports
-------------------------------------------------------------------------------------------------------------------
kafka-node_kafka_1            /etc/confluent/docker/run   Up      0.0.0.0:9092->9092/tcp

From my understanding, the Kafka container path /var/lib/kafka/data is bind-mounted to /grid/kafka-data, where /grid/kafka-data is a path on the Linux host OS.

The host mount point /grid/kafka-data is backed by the OS disk /dev/sdb, which is 1.8 TB in size.

So, to summarize:

/var/lib/kafka/data is the Kafka data directory inside the Docker container, and the /var filesystem there is only about 100 GB

/grid/kafka-data is the host mount point backed by the sdb disk (on the Linux OS); a quick host-side check of this mapping is sketched right after this summary
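
A quick way to confirm on the host which device actually backs the bind-mount source is something like the following (assuming the findmnt and lsblk utilities are available; the paths and device names just mirror the df output further below):

# show the filesystem and source device that hold the bind-mount source
findmnt --target /grid/kafka-data

# show the disk that filesystem lives on
lsblk /dev/sdb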

I want to ask the following question just to be on the safe side.

Let's say the data written under /var/lib/kafka/data inside the Kafka container grows to more than 100 GB.

Does that mean the container path /var/lib/kafka/data is limited to 100 GB?


Or is it limited only by the external volume, which is 1.8 TB?

From inside the Kafka container we have:

# df -h /var
Filesystem      Size  Used Avail Use% Mounted on
overlay         101G  7.5G   94G   8% /


df -h  /var/lib/kafka/data/
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/os-rhel_root   101G  7.5G   94G   8% /var/lib/kafka/data
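
Note that df resolves a path to whatever mount it believes contains it, which can be misleading with bind mounts. To double-check from inside the container which mount really backs the data directory, one option (assuming findmnt is present in the cp-kafka image; otherwise /proc/mounts can be read directly) is:

# inside the container: show the mount entry backing the data path
findmnt --target /var/lib/kafka/data

# fallback without findmnt: list mount entries mentioning the kafka data path
grep kafka /proc/mounts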

While outside the container, on the real Linux OS, we have:

df -h

Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/os-rhel_root  50G  5.3G   45G   11% /
devtmpfs                  12G     0  12G    0% /dev
tmpfs                     12G   156K  12G   1% /dev/shm
/dev/sdb                  1.8T   77M  1.8T   1% /grid/kafka-data
/dev/sda1                 492M  158M  335M  32% /boot
/dev/mapper/os-rhel_var   106G   11G   96G  10% /var
tmpfs                      26G     0   26G   0% /run/user/1005
tmpfs                      26G   20K   26G   1% /run/user/0
overlay                   101G  7.5G   94G   8% /var/lib/docker/overlay2/8411835673dfedd5986093eb771582dac7317d99f431b832f3baea8ea1aa3e4d/merged
shm                        64M     0   64M   0% /var/lib/docker/containers/629aefd21b6042ebfbf1a0a08a882b2f1865137edfb4b2b02f5c9a1681d895e4/mounts/shm
overlay                   101G  7.5G   94G   8% /var/lib/docker/overlay2/b4677bed14050337580958bc903bbb733d9464ca8bfc46124c3c506dc064867d/merged
...
shalom

1 Answer


The steps to resolve this question were as follows (discussed in chat):

  1. Check the output of docker inspect for the container in question. It produces a lot of output; the key parts to consider are the Binds and Mounts sections. (A command sketch for the whole check is shown after this list.)
  2. Having seen the mounts in place, and to rule out mismatched mounts hiding the actual data destination, check that files created in the container are visible on the host. A simple command inside the container like date > /var/lib/kafka/data/test.txt suffices for this purpose; the file becomes visible under /grid/kafka-data on the host.
  3. As there is a small chance that Docker somehow mounted something over the existing volume mount on the host (I have never seen that, but when it's important one had better check twice), create a test file of 400 MiB in the container as follows: dd if=/dev/zero of=/var/lib/kafka/data/test.bin count=400 bs=1M
  4. Checking the df output on the host then shows an increase of ~400 MiB in used space for the volume the data went to. As this increase appears on the "correct" mount point (the one with the 1.8T size), we can be sure that the actual limit for writing files is 1.8 TB.
  5. Afterwards, delete the test files test.bin and test.txt to avoid cluttering the system with unnecessary files and space usage.
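
Put together as shell commands, the whole check might look roughly like this; the container name kafka-node_kafka_1 is taken from the docker-compose ps output in the question, adjust it to your environment:

# 1. show the bind mounts Docker set up for the container
docker inspect --format '{{ json .Mounts }}' kafka-node_kafka_1

# 2. create a marker file inside the container ...
docker exec kafka-node_kafka_1 sh -c 'date > /var/lib/kafka/data/test.txt'
# ... and confirm it is visible on the host
ls -l /grid/kafka-data/test.txt

# 3. write ~400 MiB of test data inside the container
docker exec kafka-node_kafka_1 dd if=/dev/zero of=/var/lib/kafka/data/test.bin count=400 bs=1M

# 4. on the host, used space of the 1.8T filesystem should grow by ~400 MiB
df -h /grid/kafka-data

# 5. clean up the test files
docker exec kafka-node_kafka_1 rm /var/lib/kafka/data/test.txt /var/lib/kafka/data/test.bin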
linux-fan