
I have created an AWS Elastic Beanstalk environment and an RDS instance. I'm testing out a project that syncs FTP and S3 files.

How do I make the typical php artisan queue:work or php artisan queue:listen work on AWS Elastic Beanstalk?

I am aware of Redis, ElastiCache, etc., but I haven't tested them out; for now I'm trying to make it work with the database driver only.

I am also aware that I could just SSH in, but is there any way to process queued jobs without using SSH? Maybe by using an .ebextensions config?

Rick

3 Answers


Using Amazon Linux 2 (recommended)

Since this answer was first posted, the Elastic Beanstalk roadmap announced that the PHP platform had been updated, which makes things much easier. Below, the .ebextensions configuration files and the .platform files that need to be set up are explained (otherwise nginx will throw 404 errors on all routes).

This image uses systemd, which makes the process much easier since a supervisor is no longer needed. Unfortunately, the services keyword is not supported yet for the new images, so the service has to be started and restarted using the container_commands keyword.


Setting up the .ebextensions configuration

This file contains all the commands I execute on every production environment; remember to change them to whatever suits your needs:

.ebextensions/01-setup.config
container_commands:
    01-no_dev:
        command: "composer.phar install --optimize-autoloader --no-dev"
    02-config_clear:
        command: "php artisan config:clear"
    03-view_clear:
        command: "php artisan view:clear"
    04-route_cache:
        command: "php artisan route:cache"
    05-view_cache:
        command: "php artisan view:cache"
    06-migrate: 
        command: "php artisan migrate --force"
        leader_only: true
    07-queue_service_restart:
        command: "systemctl restart laravel_worker"
files: 
    /opt/elasticbeanstalk/tasks/taillogs.d/laravel-logs.conf: 
        content: /var/app/current/storage/logs/laravel.log
        group: root
        mode: "000755"
        owner: root
    /etc/systemd/system/laravel_worker.service:
        mode: "000755"
        owner: root
        group: root
        content: |
            # Laravel queue worker using systemd
            # ----------------------------------
            #
            # /etc/systemd/system/laravel_worker.service
            #
            # run this command to enable the service:
            # systemctl enable laravel_worker.service

            [Unit]
            Description=Laravel queue worker

            [Service]
            User=nginx
            Group=nginx
            Restart=always
            ExecStart=/usr/bin/nohup /usr/bin/php /var/app/current/artisan queue:work --daemon

            [Install]
            WantedBy=multi-user.target

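
If jobs don't seem to be processed, a quick way to debug (this does require an SSH session, so treat it as a troubleshooting aid rather than part of the deployment) is to ask systemd about the unit. A minimal sketch, assuming the laravel_worker unit name from the file above:

# Is the worker loaded and running?
sudo systemctl status laravel_worker

# Follow the worker's output; queue exceptions show up here
sudo journalctl -u laravel_worker -f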
This second file sets up the Laravel scheduler: a cron job that runs php artisan schedule:run every minute. It must be executed as root, and since the environment variables are not available there, we need to source them from /opt/elasticbeanstalk/deployment/env. Here's a great answer about the topic.

.ebextensions/cron-linux.config
files:
    "/etc/cron.d/schedule_run":
        mode: "000644"
        owner: root
        group: root
        content: |
            * * * * * root . /opt/elasticbeanstalk/deployment/env && /usr/bin/php /var/app/current/artisan schedule:run 1>> /dev/null 2>&1

commands:
    remove_old_cron:
        command: "rm -f /etc/cron.d/*.bak"
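
If you want to check the cron entry without waiting for it to fire, here is a sketch of running the same command by hand on the instance; the key detail is sourcing the env file first, exactly as the crontab line does:

sudo bash -c '. /opt/elasticbeanstalk/deployment/env && /usr/bin/php /var/app/current/artisan schedule:run'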

Setting up the .platform configuration

.platform/nginx/conf.d/elasticbeanstalk/laravel.conf
location / {
    try_files $uri $uri/ /index.php?$query_string;
    gzip_static on;
}
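
As a quick sanity check after deploying, you can request any application route and confirm that nginx no longer answers with its own 404. The /login path below is just an example placeholder; replace it with a route your app actually defines:

curl -I http://localhost/login    # should be answered by Laravel, not by nginx's 404 page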

Using the old Amazon Linux AMI (previous image)

The best way is to run supervisor to manage the queues under a service to ensure that it is kept running even after a reboot.


Setting up the .ebextensions configuration

1- Install supervisor with the packages keyword. The python keyword uses pip and easy_install under the hood:

packages:
    python:
        supervisor: []

2- Create the supervisor configuration file:

files:
    /usr/local/etc/supervisord.conf:
        mode: "000755"
        owner: root
        group: root
        content: |
            [unix_http_server]
            file=/tmp/supervisor.sock   ; (the path to the socket file)

            [supervisord]
            logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
            logfile_maxbytes=50MB        ; (max main logfile bytes b4 rotation;default 50MB)
            logfile_backups=10           ; (num of main logfile rotation backups;default 10)
            loglevel=info                ; (log level;default info; others: debug,warn,trace)
            pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
            nodaemon=false               ; (start in foreground if true;default false)
            minfds=1024                  ; (min. avail startup file descriptors;default 1024)
            minprocs=200                 ; (min. avail process descriptors;default 200)

            [rpcinterface:supervisor]
            supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

            [supervisorctl]
            serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL  for a unix socket

            [include]
            files = /etc/supervisor/conf.d/*.conf

            [inet_http_server]
            port = 127.0.0.1:9001

3- Create the supervisor process file, based on the Configuring Supervisor section of the Laravel docs:

files: 
    /etc/supervisor/conf.d/laravel-worker.conf: 
        content: |
            [program:laravel-worker]
            process_name=%(program_name)s_%(process_num)02d
            command=php /var/app/current/artisan queue:work database --sleep=3 --tries=3
            autostart=true
            autorestart=true
            ;user=root
            numprocs=1
            redirect_stderr=true
            ;stdout_logfile=/var/app/current/storage/logs/worker.log
            stopwaitsecs=3600
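
Once supervisord is running, you can manage the workers with supervisorctl by pointing it at the same configuration file; the group name matches the [program:laravel-worker] section above:

sudo supervisorctl -c /usr/local/etc/supervisord.conf status                       # list programs and their state
sudo supervisorctl -c /usr/local/etc/supervisord.conf restart "laravel-worker:*"   # e.g. after deploying new code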

4- Create a service that runs supervisor. It is the same as this answer, with the chkconfig and processname lines added; these allow us to run it as a service later.

files:
    /etc/init.d/supervisord:
        mode: "000755"
        owner: root
        group: root
        content: |
            #!/bin/bash

            #chkconfig: 345 99 76
            # processname: supervisord

            # Source function library
            . /etc/rc.d/init.d/functions

            # Source system settings
            if [ -f /etc/sysconfig/supervisord ]; then
                . /etc/sysconfig/supervisord
            fi

            # Path to the supervisorctl script, server binary,
            # and short-form for messages.
            supervisorctl=/usr/local/bin/supervisorctl
            supervisord=${SUPERVISORD-/usr/local/bin/supervisord}
            prog=supervisord
            pidfile=${PIDFILE-/tmp/supervisord.pid}
            lockfile=${LOCKFILE-/var/lock/subsys/supervisord}
            STOP_TIMEOUT=${STOP_TIMEOUT-60}
            OPTIONS="${OPTIONS--c /usr/local/etc/supervisord.conf}"
            RETVAL=0

            start() {
                echo -n $"Starting $prog: "
                daemon --pidfile=${pidfile} $supervisord $OPTIONS
                RETVAL=$?
                echo
                if [ $RETVAL -eq 0 ]; then
                    touch ${lockfile}
                    $supervisorctl $OPTIONS status
                fi
                return $RETVAL
            }

            stop() {
                echo -n $"Stopping $prog: "
                killproc -p ${pidfile} -d ${STOP_TIMEOUT} $supervisord
                RETVAL=$?
                echo
                [ $RETVAL -eq 0 ] && rm -rf ${lockfile} ${pidfile}
            }

            reload() {
                echo -n $"Reloading $prog: "
                LSB=1 killproc -p $pidfile $supervisord -HUP
                RETVAL=$?
                echo
                if [ $RETVAL -eq 7 ]; then
                    failure $"$prog reload"
                else
                    $supervisorctl $OPTIONS status
                fi
            }

            restart() {
                stop
                start
            }

            case "$1" in
                start)
                    start
                    ;;
                stop)
                    stop
                    ;;
                status)
                    status -p ${pidfile} $supervisord
                    RETVAL=$?
                    [ $RETVAL -eq 0 ] && $supervisorctl $OPTIONS status
                    ;;
                restart)
                    restart
                    ;;
                condrestart|try-restart)
                    if status -p ${pidfile} $supervisord >&/dev/null; then
                        stop
                        start
                    fi
                    ;;
                force-reload|reload)
                    reload
                    ;;
                *)
                    echo $"Usage: $prog {start|stop|restart|condrestart|try-restart|force-reload|reload}"
                    RETVAL=2
                    ;;
            esac

            exit $RETVAL

5- After all the files are created, run the following commands to start the service and register it so it can be managed:

commands:
  command-1: 
    command: "/etc/init.d/supervisord start"
  command-2:
    command: "chkconfig --add supervisord"

6- Now the services keyword should work, allowing us to set the enabled and ensureRunning flags to true.

services:
    sysvinit:
        supervisord:
            enabled: "true"
            ensureRunning: "true"
            files: 
                - "/usr/local/etc/supervisord.conf"
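
To confirm that the init script was registered and will come back after a reboot, you can check it directly on the instance (only needed for debugging):

sudo chkconfig --list supervisord    # should list the runlevels from the "#chkconfig: 345 99 76" header
sudo service supervisord status      # runs the status) branch of /etc/init.d/supervisord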

Place all of this in your .config file and deploy to get queues working. If you want to configure the scheduler for the old image as well, it is explained in this answer; I won't cover it here since I haven't tested it.


Full config file

Remember to change the chkconfig numbers, and note that I am running a migrate:fresh command:

packages:
    python:
        supervisor: []
container_commands:
    01-migrate: 
        command: "php artisan migrate:fresh --seed"
        cwd: /var/app/ondeck
        leader_only: true
files: 
    /opt/elasticbeanstalk/tasks/taillogs.d/laravel-logs.conf: 
        content: /var/app/current/storage/logs/laravel.log
        group: root
        mode: "000755"
        owner: root
    /etc/supervisor/conf.d/laravel-worker.conf: 
        content: |
            [program:laravel-worker]
            process_name=%(program_name)s_%(process_num)02d
            command=php /var/app/current/artisan queue:work database --sleep=3 --tries=3
            autostart=true
            autorestart=true
            ;user=root
            numprocs=1
            redirect_stderr=true
            ;stdout_logfile=/var/app/current/storage/logs/worker.log
            stopwaitsecs=3600
    /usr/local/etc/supervisord.conf:
        mode: "000755"
        owner: root
        group: root
        content: |
            [unix_http_server]
            file=/tmp/supervisor.sock   ; (the path to the socket file)

            [supervisord]
            logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
            logfile_maxbytes=50MB        ; (max main logfile bytes b4 rotation;default 50MB)
            logfile_backups=10           ; (num of main logfile rotation backups;default 10)
            loglevel=info                ; (log level;default info; others: debug,warn,trace)
            pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
            nodaemon=false               ; (start in foreground if true;default false)
            minfds=1024                  ; (min. avail startup file descriptors;default 1024)
            minprocs=200                 ; (min. avail process descriptors;default 200)

            [rpcinterface:supervisor]
            supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

            [supervisorctl]
            serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL  for a unix socket

            [include]
            files = /etc/supervisor/conf.d/*.conf

            [inet_http_server]
            port = 127.0.0.1:9001
    /etc/init.d/supervisord:
        mode: "000755"
        owner: root
        group: root
        content: |
            #!/bin/bash

            #chkconfig: <number>
            # processname: supervisord

            # Source function library
            . /etc/rc.d/init.d/functions

            # Source system settings
            if [ -f /etc/sysconfig/supervisord ]; then
                . /etc/sysconfig/supervisord
            fi

            # Path to the supervisorctl script, server binary,
            # and short-form for messages.
            supervisorctl=/usr/local/bin/supervisorctl
            supervisord=${SUPERVISORD-/usr/local/bin/supervisord}
            prog=supervisord
            pidfile=${PIDFILE-/tmp/supervisord.pid}
            lockfile=${LOCKFILE-/var/lock/subsys/supervisord}
            STOP_TIMEOUT=${STOP_TIMEOUT-60}
            OPTIONS="${OPTIONS--c /usr/local/etc/supervisord.conf}"
            RETVAL=0

            start() {
                echo -n $"Starting $prog: "
                daemon --pidfile=${pidfile} $supervisord $OPTIONS
                RETVAL=$?
                echo
                if [ $RETVAL -eq 0 ]; then
                    touch ${lockfile}
                    $supervisorctl $OPTIONS status
                fi
                return $RETVAL
            }

            stop() {
                echo -n $"Stopping $prog: "
                killproc -p ${pidfile} -d ${STOP_TIMEOUT} $supervisord
                RETVAL=$?
                echo
                [ $RETVAL -eq 0 ] && rm -rf ${lockfile} ${pidfile}
            }

            reload() {
                echo -n $"Reloading $prog: "
                LSB=1 killproc -p $pidfile $supervisord -HUP
                RETVAL=$?
                echo
                if [ $RETVAL -eq 7 ]; then
                    failure $"$prog reload"
                else
                    $supervisorctl $OPTIONS status
                fi
            }

            restart() {
                stop
                start
            }

            case "$1" in
                start)
                    start
                    ;;
                stop)
                    stop
                    ;;
                status)
                    status -p ${pidfile} $supervisord
                    RETVAL=$?
                    [ $RETVAL -eq 0 ] && $supervisorctl $OPTIONS status
                    ;;
                restart)
                    restart
                    ;;
                condrestart|try-restart)
                    if status -p ${pidfile} $supervisord >&/dev/null; then
                        stop
                        start
                    fi
                    ;;
                force-reload|reload)
                    reload
                    ;;
                *)
                    echo $"Usage: $prog {start|stop|restart|condrestart|try-restart|force-reload|reload}"
                    RETVAL=2
                    ;;
            esac

            exit $RETVAL
                
commands:
  command-1: 
    command: "/etc/init.d/supervisord start"
  command-2:
    command: "chkconfig --add supervisord"
services:
    sysvinit:
        supervisord:
            enabled: "true"
            ensureRunning: "true"
            files: 
                - "/usr/local/etc/supervisord.conf"
Aridez
  • I used this. The status shows it's running, but the jobs are not being processed. What am I missing? There is no log either. – romal tandel Apr 24 '20 at 06:14
  • To see where the error is, you could connect to the EB instance and check if the service is running; if it's not, something might be wrong with the configuration file. I edited my answer with the full file that's working for me. – Aridez May 04 '20 at 18:17
  • I modified the command and it worked. Thanks for this. – romal tandel May 07 '20 at 06:38
  • command=php /var/app/current/artisan queue:work database --queue emails --sleep=3 --tries=3 --timeout=60 – romal tandel May 07 '20 at 06:38
  • @Aridez chkconfig number, what should I put there? – BlackPearl May 13 '20 at 08:11
  • @BlackPearl I used these question as a guide: https://serverfault.com/questions/29788/what-is-needed-for-a-linux-service-to-be-supported-by-chkconfig https://stackoverflow.com/questions/38102283/chkconfig-35-99-05-explanation – Aridez May 13 '20 at 16:53
  • Will this approach (with Systemd) restart the queue process once a new version of the app is deployed? (in case there are changes in the code for the queues processing) – user345602 Jun 09 '20 at 09:14
  • @user345602 I'm afraid that this won't restart the queue, although I'm keeping an eye on it since I didn't have the need yet. There are three ways I might go about it: we could run "php artisan queue:listen" instead, or we could add a container command that should run after the services, so even during the first deploy it should exist. In case it throws an error we could use the "ignoreErrors: true" flag, but that doesn't seem great. I'll update the answer once I know for sure, but these are the three ways to go that I know of. – Aridez Jun 10 '20 at 12:19
  • Thank you @Aridez. I solved it by creating a new container command "php artisan queue:restart". I'm still testing if this is a reliable solution though. – user345602 Jun 10 '20 at 13:05
  • I got around to trying this too; in case it helps anyone, it seems that the "systemctl restart laravel_worker" command works to restart the queue as well. – Aridez Jun 17 '20 at 18:57
  • What will happen if I have 2 machines running on the same environment? Are you using any kind of `leader_only` flag to limit the queue to be only launched on the main instance? – Kirk Hammett Jul 18 '22 at 12:51
  • @KirkHammett Using the database driver for queues, if you look at the columns of the `jobs` table you will see a `reserved_at` field, which is set if a worker has reserved the job so it doesn't overlap with other workers: https://stackoverflow.com/questions/69063163/laravel-what-is-difference-between-reserved-at-and-available-at-on-jobs-table Horizontal scalability of the application layer should not be a problem using this approach. – Aridez Jul 19 '22 at 00:42

The answer by @Aridez is correct, but I had to make some more changes to get my queues working correctly. I hope this benefits someone else.

The following works for me using Laravel 8 and AWS SQS on AWS Elastic Beanstalk with Amazon Linux 2.

I was able to push jobs to the queue easily, but the jobs weren't being picked up from the queue by the workers. It took me a day to figure out that the queue workers were not picking up the environment variables and hence were not connecting to SQS. To solve this you need to add the EnvironmentFile option within the [Service] section.

The follow-up problem was how to get the environment variables into a file, since I had set them through the EB configuration in the AWS console. This article explains clearly how to make a copy of your environment through a .platform/hooks/postdeploy hook.

The next problem this created was that the config provided by Aridez starts the workers through container_commands, but at the stage when those run, the env file described in the article hasn't been generated yet. Moving the commands into a postdeploy hook (given below) fixed this.

I also changed the user and group of the service to webapp, since root didn't feel safe and everything works fine with webapp too.

I wanted multiple workers for my queue, and doing that is as simple as adding an @ to the end of your service name; see Start N processes with one systemd service file. The code below runs 3 workers.

.ebextensions/01_deploy.config

container_commands:

  01_run_migrations:
    command: "php artisan migrate --force"
    leader_only: true

files:
  /opt/elasticbeanstalk/tasks/taillogs.d/laravel-logs.conf:
      content: /var/app/current/storage/logs/laravel.log
      group: root
      mode: "000644"
      owner: root
  /etc/systemd/system/laravel_queue_worker@.service:
      mode: "000644"
      owner: root
      group: root
      content: |
          [Unit]
          Description=Laravel queue worker

          [Service]
          User=webapp
          Group=webapp
          Restart=always
          EnvironmentFile=/opt/elasticbeanstalk/deployment/laravel_env
          ExecStart=/usr/bin/nohup /usr/bin/php /var/app/current/artisan queue:work

          [Install]
          WantedBy=multi-user.target

commands:
  remove_service_bak_file:
    command: "rm -f /etc/systemd/system/laravel_queue_worker@.service.bak"

.platform/hooks/postdeploy (an executable script placed in this directory)

#!/bin/bash
# https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-env-variables-linux2/

#Create a copy of the environment variable file.
cp /opt/elasticbeanstalk/deployment/env /opt/elasticbeanstalk/deployment/laravel_env

#Set permissions to the custom_env_var file so this file can be accessed by any user on the instance. You can restrict permissions as per your requirements.
chmod 644 /opt/elasticbeanstalk/deployment/laravel_env

#Remove duplicate files upon deployment.
rm -f /opt/elasticbeanstalk/deployment/*.bak

# Enable the workers
systemctl enable laravel_queue_worker@{1..3}.service

# Restart the workers
systemctl restart laravel_queue_worker@{1..3}.service
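
Elastic Beanstalk only runs platform hooks that are executable, so remember to set the executable bit on the script before committing it. The file name 01_worker_env.sh below is just a placeholder for whatever you call your hook:

chmod +x .platform/hooks/postdeploy/01_worker_env.sh
git update-index --chmod=+x .platform/hooks/postdeploy/01_worker_env.sh    # keep the bit in git when deploying from Windows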

For integrating CloudWatch so you can stream your laravel.log file, check this link. You will need to set up the awslogs package and then add a .conf file within /etc/awslogs/config/. The config file below does this and streams the laravel.log file into a log group prefixed with the Elastic Beanstalk environment name.

.ebextensions/03_logs.config

###################################################################################################
#### The following file installs and configures the AWS CloudWatch Logs agent to push logs to a Log
#### Group in CloudWatch Logs. The configuration below sets the logs to be pushed, the Log Group
#### name to push the logs to and the Log Stream name as the instance id.
####
#### /var/app/current/storage/logs/laravel.log
####
#### http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.cloudwatchlogs.html
###################################################################################################

packages:
  yum:
    awslogs: []

files:
  "/etc/awslogs/awscli.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [plugins]
      cwlogs=cwlogs
      [default]
      region=`{"Ref":"AWS::Region"}`

  "/etc/awslogs/awslogs.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [general]
      state_file=/var/lib/awslogs/agent-state

  "/etc/awslogs/config/logs.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [var/app/current/storage/log/laravel]
      log_group_name=`{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/app/current/storage/logs/laravel.log"]]}`
      log_stream_name={instance_id}
      file=/var/app/current/storage/logs/laravel.log

commands:
  "01":
    command: systemctl enable awslogsd.service
  "02":
    command: systemctl restart awslogsd
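
If nothing shows up in CloudWatch, the agent itself can be inspected on the instance; its own log usually reveals permission or region problems. The log path below is the agent default on Amazon Linux 2 and is stated here as an assumption:

sudo systemctl status awslogsd
sudo tail -n 50 /var/log/awslogs.log
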
otaku
  • 1) What will happen if I have 2 machines running on the same environment? Are you using any kind of leader_only flag to limit the queue to be only launched on the main instance? 2) Is there any way how this can integrate with Cloudwatch logs? We are writing into laravel.log but since the environment can scale and there can be multiple instances with workers, it may be harder to set up monitoring. – Kirk Hammett Jul 18 '22 at 13:03
  • 1) Not using a leader-only flag for the queues; if our machines scale, our workers scale as well. However, we use the `onOneServer` method when scheduling commands in `Console/Kernel.php` so they initially run on one server, which in turn dispatches jobs that are picked up by all active workers across machines. 2) I do have CloudWatch logs integrated as well, with a log group set up that gets the laravel.log file streamed in from each instance separately. Check this [link](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.cloudwatchlogs.html). I'll try and update my answer too. – otaku Jul 18 '22 at 19:12
  • @KirkHammett updated my answer – otaku Jul 18 '22 at 19:20

The ideal way to do this would be to create an .ebextensions/01_queue_worker.config file with contents that kick off the php artisan queue:work command.

Something like:

container_commands:
  01_queue_worker:
    command: "php artisan queue:work"

Now, if you do not want to run the queue worker on the web server but only on a separate dedicated worker node, you can create an environment variable called "WORKER" and set it to true. Then, in your .ebextensions config file, you can test that variable and only run the command if "WORKER" is "true". That would look something like this:

container_commands:
  01_queue_worker:
    test: '[ "${WORKER}" == "true" ]'
    command: "php artisan queue:work"
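
With this approach the WORKER variable is set per environment, for example through the console or (as a sketch) the EB CLI, so the same application bundle can be deployed to both the web and the worker environments:

eb setenv WORKER=true     # on the dedicated worker environment
eb setenv WORKER=false    # or simply leave it unset on the web environment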

As a general rule, any time you need to modify what is running on Elastic Beanstalk, look into .ebextensions first. This is the mechanism AWS provides for making such modifications.

Shahzeb Khan