
I "move" this question from superuser as I think it's more appropiate here. I've actually found sort-of duplicate questions, but none has solved my problem.

General scope

I have a Nextcloud (PHP) instance running on an Olimex LIME2 home server, which has only 1 GB of RAM. I'm quite happy with the hardware, but I'm hitting memory issues now.

I deployed it by hand (no Docker), with MySQL, nginx and php-fpm all installed from the Debian repos.

The problem

Usually it runs well, but I've found one case where the OOM killer kills the php-fpm service because of excessive memory use: uploading a hundred pictures to a shared folder via the web UI. All php-fpm children get spawned and keep allocating more and more memory, beyond what is allowed. They should stay under 128 MB each, but end up eating as much as 200 MB each, until the system runs out of available memory and kills the service.
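For reference, this is how I watch each child's memory while uploading (the process name php-fpm7.4 matches my Debian setup; RSS is reported in KiB):

watch -n 2 'ps -C php-fpm7.4 -o pid,rss,cmd --sort=-rss'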

Trial: tune php-fpm settings

I tried tuning the php.ini and pool configs, but php-fpm seems to ignore the memory limit in this case. I have the configuration below, which should allow the whole www pool to consume at most 4 × 128 MB = 512 MB of RAM, leaving more than enough memory for the database and the rest of the system. Actually, I think I'm safe under 750 MB.

# /etc/php/7.4/fpm/pool.d/www.conf
pm = dynamic

pm.max_children = 4
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 2

; The number of seconds after which an idle process will be killed.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 10s
;pm.process_idle_timeout = 10s;

; Default Value: 0
;pm.max_requests = 500

# /etc/php/7.4/fpm/php.ini
memory_limit = 128M

I have looked for secondary php config files, but there are none that apply.
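One way to double-check which config the FPM SAPI actually loads, and the effective limit, is to ask the FPM binary itself (it accepts -i to print its PHP info, like the CLI):

php-fpm7.4 -i | grep -E 'Loaded Configuration File|memory_limit'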

Trial: cgroup jailing for php-fpm

As suggested in Limit the total memory usage of PHP-FPM, I also tried to limit the memory through cgroup and systemd like this:

sudo systemctl set-property php7.4-fpm.service MemoryLimit=700M

I tried different values, including a ridiculously low 300M, and the result wasn't what I expected: instead of adapting to the limit, the service keeps grabbing more memory and the cgroup ends up killing it. Isn't there a more peaceful way?
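For completeness, the persistent equivalent of set-property is a drop-in (sudo systemctl edit php7.4-fpm.service). On cgroup v2 the hard cap is spelled MemoryMax=, and there is also MemoryHigh=, which is documented to throttle the service instead of killing it; the values below are illustrative:

[Service]
# soft cap (cgroup v2 only): memory above this is reclaimed/throttled, not killed
MemoryHigh=600M
# hard cap: beyond this, processes in the cgroup get OOM-killed
MemoryMax=700M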

Trial: less workers with more memory

With just 2 workers and more memory per worker, the problem just appears later, but the service still gets killed. Also, some actions that should run in parallel are held back by the lack of available workers.

Trial: more workers with less memory

128 MB is already a low value, and not all actions need many workers, so I'd risk hitting other limits. Also, this instance barely sees concurrent usage from more than one human user.

Trial: "admin" setting

As suggested in Limit PHP-FPM memory consumption, I set php_admin_value[memory_limit] = 128M, but the result is the same: top shows each child's usage rising from 15 to 19%, which, out of the 998 MiB total reported by free -h, is well above the limit (≈189 MB ≫ 128 MB).
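For reference, that line sits in the pool config; php_admin_value is meant to override php.ini and to be immune to runtime ini_set() calls:

# /etc/php/7.4/fpm/pool.d/www.conf
php_admin_value[memory_limit] = 128M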

Workaround

I also added a line to the service unit so that when it's killed, it restarts automatically.

Restart=always

This lets me upload a few more pictures, but creates a bunch of problems, like error notifications in the GUI and locked files.
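This is the drop-in I use (RestartSec is my own addition here, value illustrative):

# sudo systemctl edit php7.4-fpm.service
[Service]
Restart=always
# give the system a few seconds to settle after an OOM kill
RestartSec=5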

Question

Am I doing something wrong? Is it normal that the memory limit can't be enforced for some processes? How would you do it? Changing hardware is not an option for me; I've been running a self-hosted Nextcloud since 2016 (ownCloud back then), and I had half the memory at that time.

raneq
  • After how many seconds is the service killed? – Blockchain Office Aug 27 '22 at 07:08
  • Did you check the upload_max_filesize, memory_limit and max_execution_time too? – Blockchain Office Aug 27 '22 at 07:12
  • @BlockchainOffice, it lasts about one minute, as that's how long it takes to eat enough memory to be killed. – raneq Sep 03 '22 at 18:45
  • @BlockchainOffice `upload_max_filesize` is big enough to upload each of the pictures, and when I once hit an error because of it, it was super clear what the problem was. `memory_limit` is the param I'm setting, yes, and the one that I feel is inexplicably ignored. As for `max_execution_time`, I modified it some time ago so the web-UI upgrades could complete, but I don't think it has anything to do with this memory problem. The php-fpm logs clearly state something like "killed by OOM". – raneq Sep 03 '22 at 18:49

1 Answer


Consider using

pm = static

to put a hard ceiling on the number of workers, and therefore on RAM usage. With a static pool, pending requests simply queue up, which may absorb the burst when your picture uploads push many requests at once.
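A minimal sketch, assuming you keep memory_limit = 128M (the pm.max_children value is illustrative; size it so children × memory_limit fits in your headroom):

# /etc/php/7.4/fpm/pool.d/www.conf
pm = static
; 3 children x 128M = 384M worst case, leaving room for MySQL and the OS
pm.max_children = 3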

How many cores are available on your server?


Wilson Hauck