
Just to clarify things: I am working on a systemd service file, celerybeat.service, which should take care of Celery beat. What I am doing at the moment is calling a script which reads /var/run/celery/celerybeat.pid, kills the process with that PID, and then starts the Celery beat process again.
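For reference, a minimal sketch of what such a restart script might look like (the app module name and the --detach flag are assumptions for illustration, not taken from the question):

#!/bin/sh
# restart-celerybeat.sh - stop the old beat process and start a new one
PIDFILE=/var/run/celery/celerybeat.pid

# kill the currently running beat process, if there is one
if [ -f "$PIDFILE" ]; then
    kill "$(cat "$PIDFILE")" 2>/dev/null || true
fi

# start a fresh, detached beat process ("yourapp" is a placeholder for your Celery app module)
celery --app=yourapp beat --pidfile="$PIDFILE" --detach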

Is there a better way to accomplish this?

DejanLekic

1 Answer


What we do is start celery like this (our celery app is in server.py):

python -m celery --app=server multi start workername -Q queuename -c 30 --pidfile=celery.pid --beat

This starts a worker named workername with 30 worker processes and an embedded beat scheduler, and saves the PID in celery.pid.

Then we can call this to cleanly exit:

celery multi stop workername --pidfile=celery.pid
jaapz
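If the goal is to have systemd manage this directly, a minimal celery.service sketch built around the two commands above might look roughly like the following (the user, paths, and celery binary location are assumptions to adapt, not part of the original answer):

[Unit]
Description=Celery worker with embedded beat scheduler
After=network.target

[Service]
Type=forking
# adapt the user, working directory and celery binary path to your deployment
User=celery
WorkingDirectory=/opt/yourproject
# RuntimeDirectory creates /run/celery for the pid file on boot
RuntimeDirectory=celery
ExecStart=/usr/local/bin/celery multi start workername --app=server -Q queuename -c 30 --pidfile=/run/celery/celery.pid --beat
ExecStop=/usr/local/bin/celery multi stop workername --pidfile=/run/celery/celery.pid

[Install]
WantedBy=multi-user.target

Type=forking is used because celery multi daemonizes the worker and exits; the pid file lets the stop command find the right process.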
  • I did not know about the `--beat` option. I like it, as now I do not need the `celerybeat.service` - everything will be controlled by `celery.service`. – DejanLekic Jun 02 '15 at 09:54
  • Me neither. This post (http://www.metaltoad.com/blog/celery-periodic-tasks-installation-infinity) describes a possible gotcha: "the --beat flag needs to appear after worker, otherwise nothing will happen." – naoko Jan 16 '16 at 00:43
  • 5
    For future Googlers: the docs for `celery worker --help` states that the `-B` or `--beat` option should be used for development purposes only and that you'd need to start celery beat separately in a production environment. See [this](https://github.com/celery/celery/issues/2131) for a sample config file. – J.C. May 22 '17 at 16:16
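For completeness, running beat as its own process, as that last comment recommends for production, is just a separate invocation along these lines (the --app=server module is carried over from the answer above as an assumption):

celery --app=server beat --pidfile=/var/run/celery/celerybeat.pid --loglevel=info

Under systemd this would typically run in the foreground as its own small unit, separate from the worker service.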