
Main question

Imagine this scenario.

  • A network of 192.168.0.0/24.
  • A computer with the hostname 'Docker-Host' is running a Docker engine at 192.168.0.2.
  • 'Docker-Host' has an sshd server running.
  • On 'Docker-Host', I'm running an application in a container that uses ssh:22 and https:443 (GitLab).

How do I assign this container an IP of 192.168.0.3?

I need the services to run on their designated default ports.


Additional Information

I cannot use a reverse proxy as a solution, because that does not solve the problem of how to communicate with the GitLab instance over SSH.

Mapping port 22 to a different port on the host is unprofessional in this situation, and my clients' developers would not like the setup.

This would also be a struggle to maintain if I were spinning up many instances of this application and had to map each container's SSH to a new host port.
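To illustrate, the per-instance port remapping I want to avoid would look something like this in compose syntax (image name and host ports here are just placeholders):

```yaml
# Hypothetical sketch of the workaround I'm rejecting:
# every additional instance needs yet another arbitrary host port for SSH.
services:
  gitlab1:
    image: gitlab/gitlab-ce:latest
    ports:
      - "2201:22"    # clients would have to clone against port 2201
  gitlab2:
    image: gitlab/gitlab-ce:latest
    ports:
      - "2202:22"    # and remember a different port per instance
```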

My clients need to be able to resolve and run the following without additional configuration on their side:

https://GitLab.internal.net.work

git clone git@GitLab.internal.net.work:<group>/<project>.git

I have reviewed the Docker network documentation, and unless I'm mistaken, I don't see an easy, maintainable solution (although I'm still new to Docker).

How can this be done? What are other people doing in this situation as 'best practice'? (If possible, give answers in docker-compose syntax.)

RtmY
TrevorKS
  • As shown in https://docs.gitlab.com/omnibus/docker/#install-gitlab-using-docker-compose they are forwarding port 22 to GitLab ssh. This means the host ssh has to be run on a different port. – Michael Hampton Mar 15 '19 at 02:33
  • And what if I needed to set up a second gitlab server? Or anything that requires another conflicting port? There must be a way to assign these containers IPs. – TrevorKS Mar 15 '19 at 02:36

3 Answers


This tends to be an anti-pattern in the container space. Instead of accessing the container directly on an external IP, one common solution is to set up a load balancer per IP that you need to expose, with the load balancer mapping the well-known port to the container's unique port. In the cloud space, this is often cheaper than allocating multiple VMs with different IPs.


You can publish directly to a single IP address with Docker, e.g.:

docker run -p 192.168.0.3:22:22 sshd

This requires that the host have each of the IP addresses configured, which has been described in other SE Q&As.
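In compose syntax, the same per-IP publishing could be sketched as below. The image name is a placeholder, and the host must already have 192.168.0.3 configured (e.g. ip addr add 192.168.0.3/24 dev eth0, where the NIC name is an assumption):

```yaml
services:
  gitlab:
    image: gitlab/gitlab-ce:latest   # placeholder image
    ports:
      - "192.168.0.3:443:443"   # bind only on the secondary host IP
      - "192.168.0.3:22:22"     # container sshd on the default port
```

With this approach the host's own sshd keeps port 22 on 192.168.0.2, while the container answers on 22 at 192.168.0.3.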


If you still need the original request, directly exposing the container, you can use the macvlan or ipvlan network drivers to give the container an externally reachable IP. I tend to avoid this, since it's often a symptom of trying to manage a container as if it were a VM. Documentation on macvlan is at: https://docs.docker.com/network/macvlan/
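As a sketch, a macvlan network giving the container its own address on the 192.168.0.0/24 LAN could look like this in compose syntax (the parent NIC name, gateway, and image are assumptions; note that with macvlan the host itself typically cannot reach the container's IP without extra configuration):

```yaml
services:
  gitlab:
    image: gitlab/gitlab-ce:latest   # placeholder image
    networks:
      lan:
        ipv4_address: 192.168.0.3    # externally reachable container IP

networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0                   # host NIC, an assumption
    ipam:
      config:
        - subnet: 192.168.0.0/24
          gateway: 192.168.0.1       # assumed LAN gateway
```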

BMitch

For a case where you need IPs on the containers, the closest thing is bridge networking; there are a few subtypes of bridge. IBM has an example of one of them here, which explains it better than I could. What they do is:

  1. Create a Linux bridge on the host:

    brctl addbr br0
    brctl addif br0 enp0s1
    brctl setfd br0 0
    ifconfig br0 192.168.0.2 netmask 255.255.255.0

* Omitted the step to persist it across reboots.

  2. Check that the bridge is up:

    root@docker:~# brctl show br0
    bridge name     bridge id          STP enabled     interfaces
    br0             8000.42570a00bd6d  no              enp0s1

  3. Create the bridged network:

    docker network create --driver=bridge --ip-range=192.168.0.0/24 --subnet=192.168.0.0/24 -o "com.docker.network.bridge.name=br0" br0

** Here you can use --aux-address to exclude IPs from the range. You should also limit --ip-range to a smaller subset of the subnet, but that's up to your needs.

*** You might want to make this the default network. That is also explained in the link.

  4. Start your container(s) on that network:

    docker run -it --network=br0 my/container

    • Options like --ip can be used here; docker run --help | grep -i ip might be of some help.

  5. Once the container is running, use docker inspect <container_name> to check that it got an IP, then test connectivity to the container, etc.

I hope this helps. Please ask further if you have questions, or provide some context if this is not what you want/need.
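Once the br0 network exists, the steps above can also be driven from compose by declaring the network external (service and image names are placeholders):

```yaml
services:
  gitlab:
    image: gitlab/gitlab-ce:latest   # placeholder image
    networks:
      br0:
        ipv4_address: 192.168.0.3    # static IP from the bridge's subnet

networks:
  br0:
    external: true   # created beforehand with `docker network create ... br0`
```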

wti

All my production Docker containers run with such a setup.

I set up a bridge with my physical NIC attached (steps 1 and 2 in @wti's answer).

I install the OpenSVC agent (https://repo.opensvc.com) and create a service (svcmgr -s mygitlab create).

I fill in the service configuration (svcmgr -s mygitlab edit config) with a snippet like the one below:

[DEFAULT]
id = 0ce6aa9c-715f-113f-9c32-0fb32df00d49
orchestrate = start

[ip#0]
container_rid = container#0
gateway = 192.168.1.1
ipdev = br0
ipname = 192.168.1.3
netmask = 255.255.255.0
type = netns

[container#0]
type = docker
run_image = gitlab:latest
run_args = -i -t --net=none
    --hostname=gitlab.acme.com
    -v /etc/localtime:/etc/localtime:ro

Once done, just start the service (svcmgr -s mygitlab start) and check its status (svcmgr -s mygitlab print status).

When needed, I also deploy a high-availability setup, which fails the Docker service over to another node in case of downtime on the first node.

Chaoxiang N