I can't figure out how AWS sets up the Docker 'thin pool' on Elastic Beanstalk, or what is filling it up. The thin pool somehow keeps filling, and that causes my apps to crash when they try to write to disk.
This is from inside the container:
> df -h
Filesystem      Size  Used Avail Use%
/dev/xvda1       25G  1.4G   24G   6%
The instance does, in fact, have a 25 GB EBS volume apportioned to it, and du -sh / returns 1.6 GB.
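For reference, here's roughly how I'm collecting those numbers from the host (the container name my-app is just a placeholder for my actual container):

# Open a shell in the running container
docker exec -it my-app /bin/sh

# Inside the container: free space as the filesystem reports it
df -h /

# Inside the container: actual bytes consumed by files
du -sh /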
Outside, on the EC2 instance, things start off innocuously enough (via lvs):
LV          VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
docker-pool docker twi-aot--- 11.86g             37.50  14.65
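To watch the pool fill over time, I've been polling lvs on the host (a rough sketch; the interval is arbitrary):

# Re-run lvs every 60 seconds and watch Data%/Meta% climb
watch -n 60 'sudo lvs docker/docker-pool'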
However, the filesystem soon remounts itself read-only. From dmesg (error -28 is ENOSPC, "no space left on device"):
[2077620.433382] Buffer I/O error on device dm-4, logical block 2501385
[2077620.437372] EXT4-fs warning (device dm-4): ext4_end_bio:329: I/O error -28 writing to inode 4988708 (offset 0 size 8388608 starting block 2501632)
[2077620.444394] EXT4-fs warning (device dm-4): ext4_end_bio:329: I/O error
[2077620.473581] EXT4-fs warning (device dm-4): ext4_end_bio:329: I/O error -28 writing to inode 4988708 (offset 8388608 size 5840896 starting block 2502912)
[2077623.814437] Aborting journal on device dm-4-8.
[2077649.052965] EXT4-fs error (device dm-4): ext4_journal_check_start:56: Detected aborted journal
[2077649.058116] EXT4-fs (dm-4): Remounting filesystem read-only
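To confirm which mount has gone read-only, I check /proc/mounts on the host (a sketch):

# Show mounts whose options begin with "ro" (i.e. remounted read-only)
awk '$4 ~ /^ro(,|$)/ {print $1, $2, $4}' /proc/mounts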
Back out in EC2 instance-land, Docker reports this (from docker info):
Pool Name: docker-docker--pool
Pool Blocksize: 524.3 kB
Base Device Size: 107.4 GB
Backing Filesystem: ext4
Data file:
Metadata file:
Data Space Used: 12.73 GB
Data Space Total: 12.73 GB
Data Space Available: 0 B
Metadata Space Used: 3.015 MB
Metadata Space Total: 16.78 MB
Metadata Space Available: 13.76 MB
Thin Pool Minimum Free Space: 1.273 GB
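For quick checks, I've just been grepping the relevant lines out of docker info (sketch):

# One-liner to track thin pool consumption from the Docker side
docker info 2>/dev/null | grep -E 'Pool Name|Data Space|Metadata Space'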
lvdisplay dumps this info:
--- Logical volume ---
LV Name                docker-pool
VG Name                docker
LV UUID                xxxxxxxxxxxxxxxxxxxxxxxxxxxx
LV Write Access        read/write
LV Creation host, time ip-10-0-0-65, 2017-03-25 22:37:38 +0000
LV Pool metadata       docker-pool_tmeta
LV Pool data           docker-pool_tdata
LV Status              available
# open                 2
LV Size                11.86 GiB
Allocated pool data    100.00%
Allocated metadata     17.77%
Current LE             3036
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:2
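If it helps to reproduce: I'd expect something like the following, run inside the container, to trigger the same -28/ENOSPC once the pool is exhausted, even while df still shows free space (untested sketch; the path and size are arbitrary):

# Force ~2 GB of writes; against a full thin pool this should fail
# with "No space left on device" despite df reporting 20+ GB free
dd if=/dev/zero of=/tmp/fill.bin bs=1M count=2048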
What is this thin pool, why does it fill up, and how do I stop it from doing so? Also, if I have 20+ GB free on / from inside the container, why do new writes fail? As far as I can tell, the pool isn't connected to the files my programs are writing.
Thank you!