
I have a React/Node.js application running on a single server using docker-compose. I'm trying to achieve a zero-downtime deployment for my React app. The process right now does a webpack build (replacing the files in my dist folder) and then runs docker-compose down and up. This whole process takes about 2-3 minutes. I realized that with docker-compose I can scale my container up/down, but I'm not sure how to push my code to only one of them and rebuild the webpack bundle. I really don't want to use Kubernetes/Swarm or OpenShift since that's a bit of overkill. I'm wondering if anyone else has achieved something similar to this.

My docker-compose looks like this:

node:
    build:
        context: ./env/docker/node
        args:
            - PROJECT_ROOT=/var/www/app
    image: react_app:rapp_node
    command: "npm run prod"
    expose:
        - "3333"
    networks:
        - react-net
    volumes_from:
        - volumes_source
    tty: false

nginx:
    env_file:
        - ".env"
    build:
        context: ./env/docker/nginx
    volumes_from:
        - volumes_source
    volumes:
        - ./env/data/logs/nginx/:/var/log/nginx
        - ./env/docker/nginx/sites/node.template:/etc/nginx/node.template
    networks:
        - react-net
        - nginx-proxy
    environment:
        NGINX_HOST: ${NGINX_HOST}
        VIRTUAL_HOST: ${NGINX_VIRTUAL_HOST}
        LETSENCRYPT_HOST: ${NGINX_VIRTUAL_HOST}
        ESC: $$
    links:
        - node:node
    command: /bin/sh -c "envsubst < /etc/nginx/node.template > /etc/nginx/sites-available/node.conf && nginx -g 'daemon off;'"

volumes_source:
    image: tianon/true
    volumes:
        - ./app:/var/www/app

And my nginx is something like this:

server {
    server_name www.${NGINX_HOST};
    return 301 ${ESC}scheme://${NGINX_HOST}${ESC}request_uri;
}

server {
    listen 80;
    server_name ${NGINX_HOST};

    root /var/www/app;

    location / {
        proxy_pass http://node:3333;
        proxy_http_version 1.1;
        proxy_set_header Upgrade ${ESC}http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host ${ESC}host;
        proxy_cache_bypass ${ESC}http_upgrade;
    }
}

2 Answers

I highly recommend a simple single-node Swarm for this. It's the perfect solution for cases where you need zero downtime during updates but don't need (or can't have) multi-node high availability. It adds essentially no overhead or admin burden, and it uses the same compose files.

Yep, you should really be building a new version of your image containing your code for every commit you plan to ship to this server. The tools expect this type of workflow, so you'll have an easier time if you adopt it. Docker Hub can do this for you on every commit to a branch (for free if open source, and still free for a single private repo) of GitHub and Bitbucket. Here's how it would work in a single-node Swarm, assuming you were building new images each time on Docker Hub:

  1. Make sure you're on a recent stable Docker version (17.12 as of this post).
  2. Run docker swarm init and you now have a single-node Swarm. That's it. (If I only have one server to deploy Docker stuff on, I always use a single-node Swarm rather than docker-compose, for a list of good reasons.)
  3. With a few changes, your compose file can be used as a stack file for docker stack deploy -c compose-file.yml stackname
  4. To ensure zero downtime deployments, you'll want to add healthchecks to your node/nginx containers so they know when the app is truly "ready for connections". Swarm uses this during service updates as well, so it's key.
  5. If you want Swarm to start a new container first, before removing your old container, add order: start-first to the service's update_config (https://docs.docker.com/compose/compose-file/#update_config).
  6. Then just change your image tag in the compose file, say from myuser/myrepo:1.0 to myuser/myrepo:2.0, and run the same stack deploy command again. Swarm will detect the difference and update the service with the new image (by replacing the container).
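Putting steps 4-5 together, a stack-file sketch might look like this (the service and image names are placeholders, and the healthcheck endpoint is an assumption — your app must actually respond on it; `order` in `update_config` needs compose file format 3.4+):

```yaml
version: "3.4"
services:
  node:
    image: myuser/myrepo:2.0          # bump this tag on each release
    healthcheck:                       # tells Swarm when the app is truly ready
      test: ["CMD", "wget", "-qO-", "http://localhost:3333/"]
      interval: 10s
      timeout: 3s
      retries: 3
    deploy:
      replicas: 1
      update_config:
        order: start-first             # start the new task before stopping the old
```

Deploy with `docker stack deploy -c compose-file.yml stackname`; re-running the same command after changing the image tag triggers the rolling replacement.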

To test this out, use httping to validate that the site stays available remotely during the update process.

Bret Fisher

I think the better way is to use an orchestrator for this; all of them support rolling updates, so you can follow any standard update flow.

But if you want exactly what you wrote (though that is really not the right way to do it), you can, for example, run a script in the container that checks out the new version of your application, builds it, and switches a symlink from the old version to the new one. The switch can be done atomically like this: ln -s new current_tmp && mv -Tf current_tmp current.

So the directory structure will look like this:

    /var/www/app    - symlink to the current version
    /var/www/app_v1 - directory with the current version, pointed to by /var/www/app
    /var/www/app_v2 - directory with the new version

Now you can switch the version of the application that Nginx serves by running ln -s /var/www/app_v2 /var/www/app_v2_sym && mv -Tf /var/www/app_v2_sym /var/www/app.
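A runnable sketch of that atomic swap, using throwaway paths under /tmp instead of /var/www (`mv -T` is GNU coreutils):

```shell
# Set up two fake release directories and an initial "current" symlink.
rm -rf /tmp/app_demo && mkdir -p /tmp/app_demo/app_v1 /tmp/app_demo/app_v2
ln -s /tmp/app_demo/app_v1 /tmp/app_demo/app

# Deploy v2: create a temporary symlink, then rename it over the old one.
# rename(2) is atomic, so readers never observe a missing or half-updated path.
ln -s /tmp/app_demo/app_v2 /tmp/app_demo/app_v2_sym
mv -Tf /tmp/app_demo/app_v2_sym /tmp/app_demo/app

readlink /tmp/app_demo/app   # -> /tmp/app_demo/app_v2
```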

Anton Kostenko
  • Yeah, that's what I ended up doing. Can you tell me what the downside of doing this is, since you emphasized "not a true way at all"? – Hirad Roshandel Mar 14 '18 at 20:09
  • Changing a container at runtime is a bad idea because you will lose all your data (which is, in fact, your new version that is already running and serving client requests). More to the point, it is not a reproducible environment at all, while one of the goals of containers is the opposite: making an environment 100% reproducible. That's why I highly recommend building a new image per version and updating the container itself, not the contents of a running container. – Anton Kostenko Mar 14 '18 at 20:27