I am looking to scale a Docker application that performs periodic performance tests on equipment at several remote locations. The application runs the same performance tests, from the same underlying images, on each remote worker node, but with a configuration specific to each node.

Consider this example, where I have three servers...

  • worker.denver.example.com
  • worker.chicago.example.com
  • worker.newyork.example.com

...and I want to run my containers fizbuz-tester and foobar-performance on all three of the worker nodes. How would I do the following:

  • Run fizbuz-tester once every hour on worker.denver and worker.chicago but every six hours on worker.newyork.
  • Set an environment variable in foobar-performance as MY_JAM=ABBA on worker.chicago, MY_JAM=IRONMAIDEN on worker.denver, and MY_JAM=BEATLES on worker.newyork.

The underlying Docker image that runs the container is the same, but the runtime settings are different (and will be adjusted periodically). Currently my process is to consult my documentation and run the appropriate docker run command on each worker; if I want to change one of the configuration options for a container, I again have to connect to that worker. The scalability issues should be apparent.
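For concreteness, the manual process today is roughly the following on each worker (commands illustrative, not my exact invocations):

# on worker.denver.example.com
docker run -d --rm -e MY_JAM=IRONMAIDEN foobar-performance:latest

# on worker.chicago.example.com
docker run -d --rm -e MY_JAM=ABBA foobar-performance:latest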

How can I manage this application as it scales from 3 to 5, to 10, to 20 worker nodes?

The Docker orchestration tools I've found seem to be based around "do the same thing in the same way in lots of places", whereas for my application I need to "do the same thing in different ways in lots of places".

The go-to for remote container orchestration, Docker Swarm, does not serve my use case. Everything I've read about Swarm indicates that it abstracts away the management of the individual Docker worker hosts, which is exactly what I don't need.

Is there a tool that lets me remotely manage multiple Docker worker nodes from a central server, while still giving me control over each worker individually rather than just as part of a resource pool?

I'm hoping that two years on from this question there might be an answer available.

enpaul

2 Answers


OpenSVC can cover your needs.

fizbuz-tester:

[DEFAULT]
# every cluster node is a candidate
nodes = *
topology = flex
flex_target = 2

[task#fizbuz]
type = docker
image = fizbuz-tester:latest
netns = host
rm = true
# the node-scoped value overrides the default: newyork every 360 minutes, hourly elsewhere
schedule@worker.newyork.example.com = @360
schedule = @60

See the schedule definition documentation for advanced syntax: https://docs.opensvc.com/latest/agent.scheduler.html#schedule-definition
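To deploy the service from a central node, something like the usual create/provision workflow should apply (action names from the v2 agent; verify the flags against your version):

om fizbuz-tester create --config fizbuz-tester.conf
om fizbuz-tester provision
# shows the effective, per-node task schedules
om fizbuz-tester print schedule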

foobar-performance:

[DEFAULT]
nodes = *
topology = flex
flex_target = 3

[task#foobar]
type = docker
image = foobar-performance:latest
netns = host
# node-scoped environment values: each worker gets its own MY_JAM
environment@worker.chicago.example.com = MY_JAM=ABBA
environment@worker.denver.example.com = MY_JAM=IRONMAIDEN
environment@worker.newyork.example.com = MY_JAM=BEATLES
rm = true

When a task has no schedule, you can run it manually with: om foobar-performance run --rid task#foobar
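Later configuration changes can also be made from any cluster node instead of connecting to each worker; for example, changing the Chicago variable to a hypothetical new value would look something like this (quoting needed, since # starts a shell comment):

om foobar-performance set --kw "task#foobar.environment@worker.chicago.example.com=MY_JAM=QUEEN"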

When you scale up, you just have to join the new nodes to the cluster (2 commands on the joining node), and the services will automatically scale up with them.

Remote management is also available with OpenSVC 2.0, using the cluster TLS socket: https://docs.opensvc.com/latest/agent.configure.client.html

averon

When you run a docker command, the docker binary is only a client (the Docker CLI), which sends an HTTP request to the Docker engine, by default over a local Unix socket. Once you enable remote access on your servers, you can run docker commands against them from a remote machine.
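For example, with SSH access to a worker (supported by the Docker CLI since 18.09; user name illustrative), the same client can target a remote engine directly:

# run against a remote engine over SSH
docker -H ssh://admin@worker.denver.example.com ps

# or set it for the whole shell session
export DOCKER_HOST=ssh://admin@worker.chicago.example.com
docker run -d --rm -e MY_JAM=ABBA foobar-performance:latest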

Search for 'docker remote api' or see https://gist.github.com/kekru/4e6d49b4290a4eebc7b597c07eaf61f2

If you install the newest Docker CLI (19.03), you can use the new docker context command to switch between remote servers:
https://docs.docker.com/engine/context/working-with-contexts/
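A sketch of that workflow, assuming SSH-based contexts (user names illustrative):

# create one context per worker
docker context create denver --docker "host=ssh://admin@worker.denver.example.com"
docker context create chicago --docker "host=ssh://admin@worker.chicago.example.com"

# switch the default target, then run as usual
docker context use denver
docker run -d --rm -e MY_JAM=IRONMAIDEN foobar-performance:latest

# or select the context per command
docker --context chicago ps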

KeKru