Celery is a distributed task queue written in Python.
Questions tagged [celery]
35 questions
17 votes, 3 answers
Running multiple workers using Celery
I need to read from RabbitMQ and execute tasks in parallel using Celery on a single system.
[2014-12-30 15:54:22,374: INFO/Worker-1] ...
[2014-12-30 15:54:23,401: INFO/Worker-1] ...
[2014-12-30 15:54:30,878: INFO/Worker-1] ...
[2014-12-30…
SrC
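A minimal sketch of the usual setup, assuming a module named tasks.py and a RabbitMQ broker on localhost (both names are illustrative): the prefork pool provides the in-process parallelism, and celery multi adds more workers on the same box.

    from celery import Celery

    app = Celery("tasks", broker="amqp://guest@localhost//")

    @app.task
    def process(item):
        # work that runs in parallel across the pool's child processes
        return item * 2

    # One worker with four parallel child processes:
    #   celery -A tasks worker --concurrency=4 --loglevel=info
    # Or several named workers on the same machine:
    #   celery multi start w1 w2 -A tasks --loglevel=info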
9 votes, 1 answer
Celery Daemon receives unregistered tasks
I installed Celery for my Django project following the official tutorial/docs. It works fine when I launch Celery at the command line: I can see it receiving the tasks and executing them. But once everything was working I decided to…
Bastian
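The usual cause is that the daemonized worker never imports the modules that define the tasks, so the task names the client sends are unknown to it. A hedged sketch of the app module, assuming a Django project called myproject:

    import os
    from celery import Celery

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

    app = Celery("myproject")
    app.config_from_object("django.conf:settings", namespace="CELERY")
    # Import every installed app's tasks.py, so the daemonized worker
    # registers the same task names the command-line worker does.
    app.autodiscover_tasks()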
6 votes, 3 answers
Is it possible to run celery automatically at startup?
I have a Python server based on Django and Celery. Each time the computer restarts, apache2 starts, so my server is working, BUT I have to restart Celery manually (going to my project directory and executing "python manage.py celeryd"). What is the correct…
user35348
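On systemd-based distributions the standard answer is a unit file, enabled once with systemctl enable celery so it starts on every boot. A minimal sketch, with the paths, user, and app name as placeholder assumptions:

    [Unit]
    Description=Celery worker
    After=network.target

    [Service]
    User=www-data
    WorkingDirectory=/srv/myproject
    ExecStart=/srv/myproject/venv/bin/celery -A myproject worker --loglevel=info
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target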
5 votes, 1 answer
Is there a way to use the length of a RabbitMQ queue used by Celery to start instances in an autoscale group?
Is there any way for Celery to emit events when the length of a queue exceeds a threshold? I want to use that event to start an EC2 instance.
We have two queues for two different tasks in Celery. One of these queues has tasks which will require an…
web_ninja
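As far as I know Celery does not emit a queue-length event itself, but the broker can be polled. A hedged sketch using pika, assuming the default queue name celery and an illustrative threshold; the scale-out call itself is left as a placeholder:

    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    # passive=True only inspects the queue; it never creates or alters it
    depth = channel.queue_declare(queue="celery", passive=True).method.message_count
    if depth > 100:  # illustrative threshold
        print(f"{depth} messages waiting; start an EC2 instance here")
    conn.close()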
3 votes, 0 answers
How many celery and gunicorn workers on the same server?
I have one server that runs a Django application served by gunicorn and a Celery task queue.
Gunicorn docs suggest (2 x $num_cores) + 1 as the default number of workers.
Celery docs show that the number of Celery workers defaults to 1 x…
YPCrumble
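The two rules of thumb are easy to lay side by side; a small sketch of the arithmetic, with my assumption that both are starting points to load-test together rather than final values, since on one server they compete for the same cores:

    import os

    cores = os.cpu_count() or 1
    gunicorn_workers = 2 * cores + 1  # gunicorn's suggested default
    celery_concurrency = cores        # celery's default prefork pool size
    print(gunicorn_workers, celery_concurrency)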
3 votes, 0 answers
Preventing Ubuntu EC2 server locking up with high CPU
I have a server that runs Celery tasks. It runs a couple of workers started with celery multi start 2, configured using systemd. Sometimes it gets overworked and hits 100% CPU. When this happens, everything locks up entirely: I can't ssh into the…
user31415629
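One commonly suggested mitigation is to let systemd cap the workers so sshd keeps getting CPU time even under full load. A hedged sketch of a drop-in override; the values are illustrative, not recommendations:

    # created with: systemctl edit celery.service
    [Service]
    CPUQuota=150%
    Nice=10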
3 votes, 1 answer
rabbitmq: with hundreds of celery workers, beam.smp consumes > 200% CPU
I have one machine (test-server) with a rabbitmq server and 4 celery workers, and another machine (test-worker) with 240 celery workers, which connect to the rabbitmq server on test-server.
All queues are currently empty.
With this setup, beam.smp…
Amichai Schreiber
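With hundreds of mostly idle workers, the per-connection AMQP heartbeats and worker gossip alone generate steady broker traffic, which is one commonly cited explanation for beam.smp's CPU use. A hedged sketch of the mitigation, not a guaranteed fix:

    from celery import Celery

    app = Celery("tasks", broker="amqp://guest@test-server//")
    app.conf.broker_heartbeat = 0  # drop per-connection AMQP heartbeats

    # Worker-side equivalents:
    #   celery -A tasks worker --without-gossip --without-mingle --without-heartbeat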
3 votes, 1 answer
Docker, docker-compose application settings
I'm starting to migrate my application to Docker containers:
I use Nginx, supervisord, gunicorn, Python Flask, Celery, Flower, lighttpd, RabbitMQ and PostgreSQL.
In my original virtual machine, I keep all my configurations…
gogasca
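The usual Docker answer is to move per-environment settings out of baked-in config files and into environment variables that docker-compose injects per service. A hedged Python sketch; the variable names are assumptions:

    import os
    from celery import Celery

    app = Celery(
        "app",
        broker=os.environ.get("CELERY_BROKER_URL", "amqp://rabbitmq//"),
        backend=os.environ.get("CELERY_RESULT_BACKEND", "rpc://"),
    )
    # docker-compose supplies the values per service, via `environment:`
    # entries or an env_file.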
3 votes, 1 answer
Is there a more convenient way to stop (or restart) a detached celery beat process?
Just to clarify things - I am working on a systemd service file, celerybeat.service, which should take care of Celery beat. What I am doing at the moment is calling a script which reads /var/run/celery/celerybeat.pid, kills the process with that PID,…
DejanLekic
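If beat runs in the foreground under systemd instead of detaching, systemd tracks the PID itself and systemctl stop/restart celerybeat replaces the pidfile-and-kill script entirely. A minimal sketch, with paths and app name as assumptions:

    [Unit]
    Description=Celery beat scheduler
    After=network.target

    [Service]
    User=celery
    WorkingDirectory=/srv/myproject
    # no --detach: systemd supervises the process directly
    ExecStart=/srv/myproject/venv/bin/celery -A myproject beat --loglevel=info
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target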
2 votes, 0 answers
Celery: Finding the settings that we use
I have a Celery server that someone else set up, with a byzantine system to manage the configuration. I'd like to know the Celery settings that our server uses, without looking at the code. I have an SSH connection to the Celery server. Is there a…
Ram Rachum
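On reasonably recent Celery versions, the remote-control API can dump the live configuration from the running workers without reading any code. A hedged sketch, where proj.celery is an assumed module path:

    from proj.celery import app

    replies = app.control.inspect().conf()  # one mapping per worker node
    for node, conf in (replies or {}).items():
        print(node, conf)

    # CLI equivalent: celery -A proj inspect conf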
2 votes, 2 answers
Celery with Upstart - processes are dying unexpectedly
When I'm running Celery with Upstart, after a while, the child processes or the main process die without any trace.
The Upstart script I'm using (/etc/init/celery):
description "celery"
start on runlevel [2345]
stop on runlevel [!2345]
kill…
yprez
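A hedged sketch of the stanzas usually added to such a job so Upstart supervises the worker in the foreground, restarts it when it dies, and keeps its output in the logs; the exec line is an illustrative assumption:

    description "celery"
    start on runlevel [2345]
    stop on runlevel [!2345]
    respawn
    respawn limit 10 5
    console log
    exec /srv/myproject/venv/bin/celery -A myproject worker --loglevel=info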
2 votes, 1 answer
Approach to auto-scale Celery servers based on broker (Redis) queue size
I am working on a project which requires rolling out new Celery servers when the broker (Redis) queue stays above a predetermined threshold size, and killing the new boxes when the queue size comes down.
I have scripts to take care of…
APZ
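With the Redis broker, Celery's default queue is a plain Redis list named celery, so LLEN gives the backlog directly. A hedged sketch of the polling side; the thresholds and the scaling hooks are placeholders:

    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)
    backlog = r.llen("celery")
    if backlog > 500:        # illustrative scale-out threshold
        print("scale out")   # roll out a new Celery server here
    elif backlog == 0:
        print("scale in")    # retire the extra boxes here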
2 votes, 2 answers
kill -HUP is not working with celery daemon
So I have a shell script that daemonizes Celery and creates a bunch of workers running as daemons. I would like a way to restart the Celery workers when the underlying source changes, since the --autoreload option does not work.
According…
Kevin Meyer
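Since HUP-ing daemonized workers is unreliable, one alternative is to let celery multi cycle them from the same pidfiles it wrote at start. A hedged one-liner, assuming the workers were launched with celery multi start 2 and write pidfiles under /var/run/celery:

    celery multi restart 2 -A proj --pidfile=/var/run/celery/%n.pid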
2 votes, 1 answer
Memory limits in systemd
I am using systemd to run Celery, a Python distributed task queue, on an Ubuntu 18.04 LTS server.
I have to schedule mostly long running tasks (taking several minutes to execute), which, depending on data, may end up eating a large amount of…
sblandin
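A hedged sketch of a drop-in override that puts a hard systemd cap on the service; the 2G figure is an illustrative assumption. Celery's own worker_max_memory_per_child setting (a value in kilobytes) can additionally recycle pool children before they ever reach the cgroup cap.

    # created with: systemctl edit celery.service
    [Service]
    # hard cgroup limit; the kernel OOM-kills the worker tree above this
    MemoryMax=2G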
1 vote, 0 answers
Worker processes in Celery
I have a CPU-intensive Celery task, and within the task the work can be further parallelized using joblib. By default, starting a Celery worker creates a pool with max concurrency equal to the number of CPUs/cores (which is 36 in my…
Coderji
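The usual advice is to cap the prefork pool so the task-level joblib fan-out does not multiply against 36 Celery children. A hedged sketch; the 4 × 8 split is an illustrative assumption, not a tuned value:

    from celery import Celery
    from joblib import Parallel, delayed

    app = Celery("tasks", broker="amqp://guest@localhost//")
    app.conf.worker_concurrency = 4  # instead of one child per CPU

    @app.task
    def heavy(chunks):
        # bounded fan-out inside the task, independent of the pool size
        return Parallel(n_jobs=8)(delayed(sum)(c) for c in chunks)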