I "move" this question from superuser as I think it's more appropiate here. I've actually found sort-of duplicate questions, but none has solved my problem.
General scope
I have a Nextcloud (PHP) instance running on an Olimex LIME2 home server, which has only 1GB of RAM. I'm quite happy with the hardware, but I'm hitting memory issues now.
I deployed it by hand (no docker) with mysql, nginx and php-fpm, all installed from Debian repos.
The problem
Usually it runs well, but I've found one case where the OOM killer terminates the php-fpm service because of excessive memory usage: uploading a hundred pictures to a shared folder via the web UI. All the php-fpm children get spawned and keep grabbing more and more memory, beyond what they're allowed. They should stay below 128MB each, but they end up eating as much as 200MB each, until the system runs out of available memory and kills the service.
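A rough way to watch this as it happens, sampling the workers' resident memory during the upload (RSS double-counts pages shared between children, like opcache, so treat it as an upper bound; php-fpm7.4 is the process name on Debian):
# per-worker RSS (KiB) and the pool total (MiB)
ps -o pid,rss,cmd -C php-fpm7.4
ps -o rss= -C php-fpm7.4 | awk '{sum+=$1} END {printf "%.0f MiB total\n", sum/1024}'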
Trial: tune php-fpm settings
I tried tuning the php.ini and pool configs, but php-fpm seems to ignore the memory limit in this case. I have this configuration, which should let the whole www pool consume at most 4 * 128MB = 512MB of RAM, leaving more than enough memory for the database and the rest of the system. Actually, I think anything under 750MB is safe.
# /etc/php/7.4/fpm/pool.d/www.conf
pm = dynamic
pm.max_children = 4
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 2
; The number of seconds after which an idle process will be killed.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 10s
;pm.process_idle_timeout = 10s;
; Default Value: 0
;pm.max_requests = 500
# /etc/php/7.4/fpm/php.ini
memory_limit = 128M
I have looked for secondary php config files, but there are none that apply.
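For the record, this is roughly how I looked (php-fpm7.4 -tt dumps the pool directives the daemon actually parsed, as NOTICE lines; php-fpm also honors per-directory .user.ini files, and Nextcloud ships one in its web root, whose path here is from my setup):
sudo php-fpm7.4 -tt 2>&1 | grep -Ei "max_children|memory_limit"
grep -Rn "memory_limit" /etc/php/7.4/
cat /var/www/nextcloud/.user.ini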
Trial: cgroup jailing for php-fpm
As suggested in Limit the total memory usage of PHP-FPM, I also tried to limit the memory through cgroups and systemd, like this:
sudo systemctl set-property php7.4-fpm.service MemoryLimit=700M
I tried different values, down to a ridiculously low 300M, and the result wasn't what I expected. Instead of adapting to the limit, the service keeps grabbing more memory and the cgroup ends up killing it. Isn't there a more peaceful way?
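A way to confirm the limit landed on the unit and to watch per-unit accounting (note that on cgroup v2, MemoryLimit= is the legacy spelling of MemoryMax=, a hard kill-on-breach limit, while MemoryHigh= throttles and reclaims instead; I haven't tried the latter):
systemctl show php7.4-fpm.service -p MemoryLimit -p MemoryMax -p MemoryHigh
sudo systemd-cgtop --order=memory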
Trial: less workers with more memory
With just 2 workers and more memory per worker, the problem shows up later, but the service still gets killed. Also, some actions that should run in parallel are held back by the lack of available workers.
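Roughly, the variant I tried looked like this (illustrative values, with the spare-server settings scaled down to match):
# /etc/php/7.4/fpm/pool.d/www.conf (variant)
pm.max_children = 2
pm.start_servers = 1
pm.min_spare_servers = 1
pm.max_spare_servers = 1
php_admin_value[memory_limit] = 256M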
Trial: more workers with less memory
128MB is already a low value, and not every action needs many workers, so I'd risk hitting other limits. Besides, this instance barely sees concurrent usage from more than one human user.
Trial: "admin" setting
As suggested in Limit PHP-FPM memory consumption, I set php_admin_value[memory_limit] = 128M, but the result is the same: top shows each child's usage rising from 15% to 19% of the 998 MiB total reported by free -h, which is well above the limit (189 MB >> 128 MB).
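Part of that gap may be accounting: top's %MEM is based on RSS, which counts shared pages (opcache and the like) once per child, and PHP's memory_limit only bounds the script allocator, not the whole process. A sketch to get a fairer per-child figure via PSS (assumes the pool is named www; needs root):
for pid in $(pgrep -f "php-fpm: pool www"); do
  sudo awk -v p="$pid" '/^Pss:/ {s+=$2} END {printf "pid %s: %.0f MiB PSS\n", p, s/1024}' "/proc/$pid/smaps"
done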
Workaround
I also added Restart=always to the service unit so that when it's killed, it restarts itself automatically:
[Service]
Restart=always
This lets me upload a few more pictures, but it creates a bunch of problems, like error notifications in the GUI and locked files.
Acknowledgements
- Nextcloud recommends setting the PHP memory_limit to 512MB, but that's too high for me: I need at least 3 workers for Nextcloud to work properly, and I don't have the 1.5 GB of memory that would require.
- related: https://stackoverflow.com/questions/15962634/nginx-or-php-fpm-ignores-memory-limit-in-php-ini#21959895
Question
Am I doing something wrong? Is it normal that the memory limit can't be enforced for some processes? How would you do it? Changing hardware is not an option for me; I've been running a self-hosted Nextcloud since 2016 (ownCloud back then), and I had half this memory at the time.