When I run Celery under Upstart, after a while the child processes or the main process die without any trace.
The Upstart script I'm using (/etc/init/celery):
description "celery"
start on runlevel [2345]
stop on runlevel [!2345]
kill timeout 20
# console log
setuid ***
setgid ***
script
    chdir /opt/***/project/
    exec /opt/***/virtualenvs/***/bin/python manage.py celery worker --settings=settings.staging -B -c 4 -l DEBUG
end script
respawn
When running exactly the same command without Upstart (manually running the exec part), everything works fine.
With the respawn stanza, the master process dies and gets respawned while the orphaned child processes keep running, eventually exhausting memory. Without it, the processes just disappear until there are no workers left.
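For what it's worth, the respawn loop can at least be bounded with Upstart's respawn limit stanza; the numbers below are an assumption, not values I've tuned, and it does nothing about the orphaned children, it only stops Upstart from restarting the master forever:

respawn
# stop respawning if the job dies more than 5 times within 60 seconds (assumed values)
respawn limit 5 60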
Celery spawns a master process and worker processes (4 of them in this case).
I also tried running it with eventlet instead of multiprocessing (1 master, 1 child process), but the results are similar.
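For reference, the eventlet attempt was the same job with only the exec line changed, roughly like this (-P/--pool selects Celery's pool implementation; the exact concurrency value here is an assumption):

exec /opt/***/virtualenvs/***/bin/python manage.py celery worker --settings=settings.staging -B -P eventlet -c 1 -l DEBUG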
Has anyone encountered such behaviour before?
Update:
- Celery, when run with -c N, starts with N + 2 processes, N of which are workers (what are the other 2?).
- I'm beginning to think that this is related to the expect stanza, but I'm not sure what the value should be. With eventlet, expect fork makes sense. But what about multiprocessing? (See the sketch below.)
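For reference, the variant I mean is the job file above with just an expect stanza added, something like this (expect fork tells Upstart the daemon forks exactly once, expect daemon that it forks exactly twice; which one, if either, matches the prefork pool is precisely what I'm unsure about):

# added to /etc/init/celery, everything else unchanged
# expect fork   -> Upstart follows the PID after exactly one fork()
# expect daemon -> Upstart follows the PID after exactly two fork()s
expect fork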
Update 2:
Using expect fork seemed to stop the processes from dying, but when trying to stop or restart the job it just hangs.
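To illustrate the hang, a minimal way to check whether Upstart is tracking the right PID (assuming the job is named celery as above): initctl status prints the process Upstart is supervising, and comparing it with the actual Celery master shows whether expect fork made Upstart latch onto the wrong one.

# PID Upstart believes it is supervising
initctl status celery
# actual Celery master and worker processes
ps -ef | grep "[c]elery worker"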