Run multiple docker-compose (one per machine) - docker

I'm testing a lot of micro-services.
I group some of them in a docker-compose file like:
agent:
  image: php:fpm
  volumes:
    - ./GIT:/reposcm:ro
  expose:
    - 9000
  links:
    - elastic
elastic:
  image: elasticsearch
  expose:
    - 9200
    - 9300
Then I start the first one with $ docker-compose up.
In another directory I would start another "micro-service" with $ docker-compose up, but I get:
ERROR: Couldn't connect to Docker daemon - you might need to run `docker-machine start default`.
How can I specify the docker machine for a docker-compose.yml?
$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM
default   -        virtualbox   Running   tcp://192.168.99.101:2376
my_test   -        virtualbox   Running   tcp://192.168.99.102:2376
Only the default machine can run a "micro-service".
How can I specify a target machine for a docker-compose.yml?

Make sure that, in the shell where you want to run your second docker-compose up, you have first run docker-machine env:
docker-machine env <machine name>
eval "$(docker-machine env <machine name>)"
That will configure the right environment variables for docker commands to contact the right machine (the right docker daemon).
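For example, a rough sketch using the two machines from docker-machine ls above, one shell per machine (the stack directory names are placeholders):

# shell 1: target the "default" machine, start the first stack
eval "$(docker-machine env default)"
cd ~/stack-one && docker-compose up -d

# shell 2: target the "my_test" machine, start the second stack
eval "$(docker-machine env my_test)"
cd ~/stack-two && docker-compose up -d

# each shell keeps its own DOCKER_HOST / DOCKER_TLS_VERIFY / DOCKER_CERT_PATH,
# so each docker-compose talks to its own docker daemon
docker-machine env my_test   # prints the variables, if you want to inspect them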

Related

Docker containers refuse to communicate when running docker-compose in dind - Gitlab CI/CD

I am trying to set up some integration tests in Gitlab CI/CD - in order to run these tests, I want to reconstruct my system (several linked containers) using the Gitlab runner and docker-compose up. My system is composed of several containers that communicate with each other through mqtt, and an InfluxDB container which is queried by other containers.
I've managed to get to a point where the runner actually executes the docker-compose up and creates all the relevant containers. This is my .gitlab-ci.yml file:
image: docker:19.03
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"
services:
  - name: docker:19.03-dind
    alias: localhost
before_script:
  - docker info
integration-tests:
  stage: test
  script:
    - apk add --no-cache docker-compose
    - docker-compose -f "docker-compose.replay.yml" up -d --build
    - docker exec moderator-monitor_datareplay_1 bash -c 'cd src ; python integration_tests.py'
As you can see, I am installing docker-compose, running compose up on my config yml file and then executing my integration tests from within one of the containers. When I run that final line on my local system, the integration tests run as expected; in the CI/CD environment, however, all the tests throw some variation of ConnectionRefusedError: [Errno 111] Connection refused errors. Running docker-compose ps seems to show all the relevant containers Up and healthy.
I have found that the issues stem from every time one container tries to communicate with another, through lines like self.localClient = InfluxDBClient("influxdb", 8086, database = "replay") or client.connect("mosquitto", 1883, 60). This works fine on my local docker environment as the address names resolve to the other containers that are running, but seems to be creating problems in this Docker-in-Docker setup. Does anyone have any suggestions? Do containers in this dind environment have different names?
It is also worth mentioning that this could be a problem with my docker-compose.yml file not being configured correctly to start healthy containers. docker-compose ps suggests they are up, but is there a better way to check whether they are running correctly? Here's an excerpt of my docker-compose file:
services:
  datareplay:
    networks:
      - web
      - influxnet
      - brokernet
    image: data-replay
    build:
      context: data-replay
    volumes:
      - ./data-replay:/data-replay
  mosquitto:
    image: eclipse-mosquitto:latest
    hostname: mosquitto
    networks:
      - web
      - brokernet
networks:
  web:
  influxnet:
    internal: true
  brokernet:
    driver: bridge
    internal: true
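On the earlier question of whether there is a better way to check that containers are running correctly: Compose supports per-service healthchecks, which make docker-compose ps report healthy/unhealthy rather than just Up. A minimal sketch for an InfluxDB 1.x service, assuming curl is available in the image (the command, port, and timings are illustrative, not taken from the project):

  influxdb:
    image: influxdb:1.8   # assumed image tag
    healthcheck:
      # consider the container healthy only once the HTTP API answers on /ping
      test: ["CMD", "curl", "-f", "http://localhost:8086/ping"]
      interval: 10s
      timeout: 5s
      retries: 5

docker inspect --format '{{.State.Health.Status}}' <container> then shows the current health state, and the integration tests can be held back until it reports healthy.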
There are a few possible reasons why this error is occurring:
Docker 19.03-dind is known to be problematic and unable to create networks when used as a service without a proper TLS setup. Have you correctly set up your Gitlab Runner with TLS certificates? I've noticed you are using "/certs" in your gitlab-ci.yml; did you mount your runner to share the volume where the certificates are stored?
If your Gitlab Runner is not running with privileged permissions or is not correctly configured to use the remote machine's network socket, you won't be able to create networks. A simple solution to unify your networks in a CI/CD environment is to configure your machine using this docker-compose followed by this script. (Source) It'll set up a local network where you can communicate between containers using hostnames, with a bridged network driver.
There's an issue with your gitlab-ci.yml as well; when you execute this part of the script:
services:
  - name: docker:19.03-dind
    alias: localhost
integration-tests:
  stage: test
  script:
    - apk add --no-cache docker-compose
    - docker-compose -f "docker-compose.replay.yml" up -d --build
    - docker exec moderator-monitor_datareplay_1 bash -c 'cd src ; python integration_tests.py'
You're aliasing the docker:19.03-dind service as localhost, but you never use it; instead you call docker and docker-compose from your image directly, binding them to a different set of networks than the ones created automatically by Gitlab.
Let's try this solution (I couldn't test it right now, so I apologize if it doesn't work right away):
gitlab-ci.yml
image: docker/compose:debian-1.28.5 # You should be running as a privileged Gitlab Runner
services:
  - docker:dind
integration-tests:
  stage: test
  script:
    #- apk add --no-cache docker-compose
    - docker-compose -f "docker-compose.replay.yml" up -d --build
    - docker exec moderator-monitor_datareplay_1 bash -c 'cd src ; python integration_tests.py'
docker-compose.yml
services:
  datareplay:
    networks:
      - web
      - influxnet
      - brokernet
    image: data-replay
    build:
      context: data-replay
    # volumes: You're mounting your volume to an ephemeral folder, which is in the CI pipeline and will be wiped afterwards (if you're using Docker-DIND)
    #   - ./data-replay:/data-replay
  mosquitto:
    image: eclipse-mosquitto:latest
    hostname: mosquitto
    networks:
      - web
      - brokernet
networks:
  web: # hostnames are created automatically, you don't need to specify a local setup through localhost
  influxnet:
  brokernet:
    driver: bridge # If you're using a bridge driver, an overlay2 doesn't make sense
Both of these commands will install a Gitlab Runner as a Docker container without the hassle of having to configure it manually to allow for socket binding on your project.
(1):
docker run --detach --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest
And then (2):
docker run --rm -v /srv/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner register \
  --non-interactive \
  --description "monitoring cluster instance" \
  --url "https://gitlab.com" \
  --registration-token "replacethis" \
  --executor "docker" \
  --docker-image "docker:latest" \
  --locked=true \
  --docker-privileged=true \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock
Remember to change your token in command (2).

Connect to docker-compose network using docker run

Let say I have running orchestration with docker-compose with docker-compose.yml looking like this:
version: '2.2'
services:
  service1:
    # ...
    networks:
      - compose_network
  service2:
    # ...
    networks:
      - compose_network
networks:
  compose_network:
I aim to temporarily run one container and connect it to compose_network. I tried using
$ docker run --net=compose_network <image for the job>
but I could not connect. I am also aware that docker-compose names the networks as [projectname]_default, so I also tried that variant, but with same result.
Is there a way I can accomplish that?
I'm not sure if the --net option ever existed but it's now --network.
From docker run --help:
--network string Connect a container to a network (default "default")
As #maxm notes, the network name is prefixed with the compose project directory (DIR), so you can then simply run it as you were trying:
$ docker run --network=DIR_compose_network <image for the job>
I wanted to connect on run because my container is transient (it runs tests), so I can't run a second docker network connect command before it quits.
For example, my docker-compose project lives in a "dev" folder and specifies no network name, so it uses the docker-compose "default" name, and I therefore get the network name dev_default.
docker network ls
NETWORK ID     NAME          DRIVER    SCOPE
2c660d9ed0ba   bridge        bridge    local
b81db348e773   dev_default   bridge    local
ecb0eb6e93a5   host          host      local
docker run -it --network dev_default myimage
This connects the new docker container to the existing docker-compose network.
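If you would rather not depend on the directory prefix at all, you can give the compose network a fixed name. A minimal sketch, under the assumption that the compose file can be moved to file format 3.5 or later (which introduced the name key for networks):

version: '3.5'
services:
  service1:
    # ...
    networks:
      - compose_network
networks:
  compose_network:
    name: compose_network   # fixed name, no project/directory prefix

docker run --network=compose_network <image for the job> then works regardless of which directory the stack was started from.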
The network name is going to be something like name-of-directory_compose_network. Find the name with docker network ls
I had success with:
docker-compose up # within directory ./demo
docker run -itd -p "8000:8000" --hostname=hello "crccheck/hello-world"
# outputs: 1e502f65070c9e2da7615c5175d5fc00c49ebdcb18962ea83a0b24ee0440da2b
docker network connect --alias hello demo_compose_network 1e502f65070c
I could then curl hello:8000 from inside my docker compose containers. Should be the exact same functionality as your commands, just with an added alias.

Strange way to launch a background apache/mysql docker container

I downloaded a Debian image for Docker and created a container from it.
I have successfully installed Apache and MySQL in this container (from /bin/bash).
I want to make this Docker container run in the background.
I have tried a lot of tutorials (I have created images with a Dockerfile) but nothing really works. Apache and MySQL were run as root...
So I launched this command:
docker run -d -p 80:80 myimagefile /bin/bash -c "while true; do sleep 10; done"
Then I attached a /bin/bash with the exec command and manually started MySQL and Apache (the /etc/init.d/ scripts). When I type CTRL-D, the bash is killed but the container stays in the background, with MySQL and Apache alive!
I am wondering if this method is correct or if it is something ugly. Is there a better way to do this?
I do not want to write a Dockerfile that describes how to install Apache and MySQL. I have made my own image, with my application and all prerequisites.
I just want to start a container from my image and have it start Apache and MySQL automatically.
I have a second question: with my method, the container is not restarted if I reboot the physical computer. How can I start it automatically, with persistence of data?
Thanks
I would suggest running mysql and apache in separate containers. Additionally, the docker hub already has container images that you could re-use:
https://hub.docker.com/_/mysql/
The following is an example of a docker-compose file that describes how to launch Drupal:
version: '2'
services:
  db:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=letmein
      - MYSQL_DATABASE=drupal
      - MYSQL_USER=drupal
      - MYSQL_PASSWORD=drupal
    volumes:
      - /var/lib/mysql
  web:
    image: drupal
    depends_on:
      - db
    ports:
      - "8080:80"
    volumes:
      - /var/www/html/sites
      - /var/www/private
Run as follows
$ docker-compose up -d
Creating dockercompose_db_1
Creating dockercompose_web_1
Which exposes Drupal on port 8080
$ docker-compose ps
        Name                    Command              State        Ports
--------------------------------------------------------------------------------
dockercompose_db_1    docker-entrypoint.sh mysqld   Up      3306/tcp
dockercompose_web_1   apache2-foreground            Up      0.0.0.0:8080->80/tcp
Note:
When running the drupal installer, configure it to connect to a host called "db", which is the mysql container.
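As for the second question (restart after a reboot, with persistent data), a minimal sketch under the assumption that you keep a compose setup like the one above: add a restart policy and a named volume, so the Docker daemon brings the containers back at boot and the MySQL data outlives any individual container:

version: '2'
services:
  db:
    image: mysql
    restart: unless-stopped        # restarted automatically when the Docker daemon comes up at boot
    volumes:
      - db_data:/var/lib/mysql     # named volume: data survives container recreation
volumes:
  db_data:

This also assumes the Docker daemon itself is enabled at boot (e.g. systemctl enable docker on systemd hosts).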

How can I add hostnames to a container on the same docker network?

Suppose I have a docker compose file with two containers. Both reference each other in their /etc/hosts file. Container A has a reference for container B and vice versa. And all of this happens automatically. Now I want to add one or more hostnames to B in A's hosts file. How can I go about doing this? Is there a special way I can achieve this in Docker Compose?
Example:
172.0.10.166 service-b my-custom-hostname
Yes. In your compose file, you can specify network aliases.
services:
  db:
    networks:
      default:
        aliases:
          - database
          - postgres
In this example, the db service could be reached by other containers on the default network using db, database, or postgres.
You can also add aliases to running containers using the docker network connect command with the --alias= option.
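For example, a quick sketch (the network and container names here are placeholders for whatever docker network ls and docker ps show):

# attach an already-running container to a compose network under extra names
$ docker network connect --alias database --alias postgres myproject_default my_db_container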
Docker compose has an extra_hosts feature that allows additional entries to be added to the container's host file.
Example
docker-compose.yml
web1:
  image: tomcat:8.0
  ports:
    - 8081:8080
  extra_hosts:
    - "somehost:162.242.195.82"
    - "otherhost:50.31.209.229"
web2:
  image: tomcat:8.0
  ports:
    - 8082:8080
web3:
  image: tomcat:8.0
  ports:
    - 8083:8080
Demonstrate host file entries
Run docker compose with the new docker 1.9 networking feature:
$ docker-compose --x-networking up -d
Starting tmp_web1_1
Starting tmp_web2_1
Starting tmp_web3_1
and look at the hosts file in the first container. It shows the other containers, plus the additional custom entries:
$ docker exec tmp_web1_1 cat /etc/hosts
..
172.18.0.4 web1
172.18.0.2 tmp_web2_1
172.18.0.3 tmp_web3_1
50.31.209.229 otherhost
162.242.195.82 somehost
If I understand your question correctly, you can add a hostname entry to the container's /etc/hosts via the --add-host flag, which takes a host:ip pair:
$ docker run ... --add-host="droid:xx.xx.xx.xx"
This produces the following entry in the container's /etc/hosts:
xx.xx.xx.xx droid
Of course, xx.xx.xx.xx will need to be reachable from inside the container you just started with the 'docker run' command. You can pass --add-host more than once.
More details about --add-host here:
http://docs.docker.com/v1.8/reference/commandline/run/

How to tunnel a docker X windows to a remote host?

When I am at work, with Ubuntu 14.04 (IP: a.b.c.d), and I want to execute a program (e.g. firefox) in a docker container and get the graphical output, I start a shell in the docker container and in this shell I execute:
DISPLAY=a.b.c.d:0 firefox
On the other hand, when I am at home and I need to run a program on the work-pc and get the output on the home-pc, which has a private (NATed) IP address, I connect with:
$ ssh -X work-pc
then I run the program in that shell and get the output locally.
Is there a way of redirecting the output of the docker container to home through the "ssh -X" tunnel?
I know I could install an ssh server in the container, redirect a port on the work-pc to port 22 of the container, redirect a home-pc local port to that work-pc port (using ssh -L port:host:port work-pc) and connect from the home-pc to the container with "ssh -X" to get the output at home, but I wonder if there is another way.
Thanks.
I got something to work following the instructions at https://dzone.com/articles/docker-x11-client-via-ssh.
My docker-compose has:
version: "3.7"
services:
rhel:
privileged: true
build:
context: /home/mpawlowsky/docker
dockerfile: Dockerfile
volumes:
- /tmp/.x11-unix:/tmp/.x11-unix
- /home/mpawlowsky/.Xauthority:/root/.Xauthority:rw
cap_add:
- NET_ADMIN
- NET_RAW
environment:
- DISPLAY
network_mode: host
I start the container and run in it:
$ docker-compose up -d
$ docker exec -it rhel /bin/bash
$ firefox
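To tie this back to the original ssh -X question, a rough sketch of the flow with the compose file above (it assumes the work-pc's sshd allows X11 forwarding and that network_mode: host is kept, so the container can reach the forwarded display on localhost):

# on the home-pc: open an X-forwarding session to the work-pc
$ ssh -X work-pc

# in that session DISPLAY points at the forwarded display, e.g. localhost:10.0
$ echo $DISPLAY

# start the container there; "environment: - DISPLAY" passes that value through,
# and the mounted .Xauthority provides the matching X credentials
$ docker-compose up -d
$ docker exec -it rhel firefox   # window appears on the home-pc through the ssh tunnel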
