Is it possible to let ansible print each statement? - docker

For example, I have this Ansible task, which runs the DockerUI container:
- name: DockerUI is running
  docker:
    image: uifd/ui-for-docker
    name: dockerui
    ports: 9000:9000
    privileged: yes
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  tags: [docker]
Is it possible to see exactly which docker command this Ansible task invoked? Something like docker run ...

The Ansible Docker modules don't execute docker command lines. They use the Docker HTTP API to accomplish their work, via the docker-py python module.
The best way to figure out what the module is doing is probably by watching the docker daemon logs, or by inspecting the module source.
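For example, a quick way to watch from the outside (a sketch: site.yml stands in for your playbook, and journalctl assumes dockerd runs under systemd):

# Stream the Docker daemon's logs while the play runs
sudo journalctl -fu docker.service

# Or watch every API action (create, start, ...) as it happens
docker events

# Maximum verbosity at least prints the exact parameters Ansible
# passed to the docker module, even though no CLI command is run
ansible-playbook site.yml --tags docker -vvv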

Related

How to use a rabbitmq definitions file inside a docker container in circleci?

I'm using a definitions.json file to define user and queue configurations for rabbitmq. When I create a docker image, I add this file as a volume:
services:
  rabbitmq:
    image: rabbitmq:3.8.16-management-alpine
    container_name: 'rabbitmq'
    ports:
      - 5672:5672
      - 15672:15672
    networks:
      - XXX
    volumes:
      - ./conf/rabbitmq-definitions.json:/etc/rabbitmq/definitions.json:ro
However, looking at circleci documentation, it seems like you cannot mount a volume in circleci. Is there any other way I can add the definitions file to make it work with circleci as well?
What executor type are you using? You can definitely use volumes in CircleCI if you use the machine executor instead of the docker executor. I use machine executors for my integration tests when I need to do a docker run or a docker-compose. I don't have a docker-compose example handy, since my workflows that use docker-compose don't use volumes, but I do have multiple workflows that use the machine executor and run docker run, mounting specific folders from the workspace as volumes. Your job will look something like the one below:
integration_test:
  machine:
    image: ubuntu-2004:202010-01
  steps:
    - checkout
    - attach_workspace:
        at: ~/my-app
    - run: |
        make functional-test
You can find a sample machine executor job here, and here is a test that gets executed when make functional-test is called, performing a docker run with volumes.
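For the rabbitmq case above, the docker run step with a volume could look roughly like this (illustrative only; it reuses the image and paths from your compose file, with the workspace attached at ~/my-app):

- run: |
    docker run -d --name rabbitmq \
      -p 5672:5672 -p 15672:15672 \
      -v ~/my-app/conf/rabbitmq-definitions.json:/etc/rabbitmq/definitions.json:ro \
      rabbitmq:3.8.16-management-alpine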

Docker containers refuse to communicate when running docker-compose in dind - Gitlab CI/CD

I am trying to set up some integration tests in Gitlab CI/CD - in order to run these tests, I want to reconstruct my system (several linked containers) using the Gitlab runner and docker-compose up. My system is composed of several containers that communicate with each other through mqtt, and an InfluxDB container which is queried by other containers.
I've managed to get to a point where the runner actually executes the docker-compose up and creates all the relevant containers. This is my .gitlab-ci.yml file:
image: docker:19.03

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - name: docker:19.03-dind
    alias: localhost

before_script:
  - docker info

integration-tests:
  stage: test
  script:
    - apk add --no-cache docker-compose
    - docker-compose -f "docker-compose.replay.yml" up -d --build
    - docker exec moderator-monitor_datareplay_1 bash -c 'cd src ; python integration_tests.py'
As you can see, I am installing docker-compose, running compose up on my config yml file and then executing my integration tests from within one of the containers. When I run that final line on my local system, the integration tests run as expected; in the CI/CD environment, however, all the tests throw some variation of ConnectionRefusedError: [Errno 111] Connection refused errors. Running docker-compose ps seems to show all the relevant containers Up and healthy.
I have found that the issues stem from every time one container tries to communicate with another, through lines like self.localClient = InfluxDBClient("influxdb", 8086, database = "replay") or client.connect("mosquitto", 1883, 60). This works fine on my local docker environment as the address names resolve to the other containers that are running, but seems to be creating problems in this Docker-in-Docker setup. Does anyone have any suggestions? Do containers in this dind environment have different names?
It is also worth mentioning that this could be a problem with my docker-compose.yml file not being configured correctly to start healthy containers. docker-compose ps suggests they are up, but is there a better way to check whether they are running correctly? Here's an excerpt of my docker-compose file:
services:
  datareplay:
    networks:
      - web
      - influxnet
      - brokernet
    image: data-replay
    build:
      context: data-replay
    volumes:
      - ./data-replay:/data-replay
  mosquitto:
    image: eclipse-mosquitto:latest
    hostname: mosquitto
    networks:
      - web
      - brokernet

networks:
  web:
  influxnet:
    internal: true
  brokernet:
    driver: bridge
    internal: true
There are a few possible reasons why this error occurs:
Docker 19.03-dind has a known bug where it cannot create networks when used as a service without a proper TLS setup. Have you correctly set up your GitLab Runner with TLS certificates? I noticed you are using "/certs" in your gitlab-ci.yml; did you mount your runner so it shares the volume where the certificates are stored?
If your GitLab Runner is not running with privileged permissions, or is not correctly configured to use the remote machine's network socket, you won't be able to create networks. A simple way to unify your networks for a CI/CD environment is to configure your machine using this docker-compose followed by this script. (Source) It will set up a local network where containers can reach each other by hostname over a bridged network driver.
There's an issue with your gitlab-ci.yml as well. When you execute this part of the script:
services:
  - name: docker:19.03-dind
    alias: localhost

integration-tests:
  stage: test
  script:
    - apk add --no-cache docker-compose
    - docker-compose -f "docker-compose.replay.yml" up -d --build
    - docker exec moderator-monitor_datareplay_1 bash -c 'cd src ; python integration_tests.py'
You alias the dind service to localhost, but you never use that alias; instead you call docker and docker-compose directly from your image, binding them to a different set of networks than the ones GitLab creates automatically.
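For reference, if you did want to keep the dind service and have the CLI talk to it through its alias, the usual approach is to point DOCKER_HOST at the service; the values below reflect the standard GitLab dind pattern rather than anything taken from your runner config:

variables:
  DOCKER_HOST: tcp://docker:2376   # "docker" is the default service alias
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_TLS_VERIFY: "1"
  DOCKER_CERT_PATH: "/certs/client"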
Let's try this solution (I couldn't test it right now, so I apologize if it doesn't work right away):
gitlab-ci.yml
image: docker/compose:debian-1.28.5 # You should be running as a privileged Gitlab Runner

services:
  - docker:dind

integration-tests:
  stage: test
  script:
    # - apk add --no-cache docker-compose
    - docker-compose -f "docker-compose.replay.yml" up -d --build
    - docker exec moderator-monitor_datareplay_1 bash -c 'cd src ; python integration_tests.py'
docker-compose.yml
services:
  datareplay:
    networks:
      - web
      - influxnet
      - brokernet
    image: data-replay
    build:
      context: data-replay
    # volumes: You're mounting your volume to an ephemeral folder, which is in the CI pipeline and will be wiped afterwards (if you're using Docker-DIND)
    #   - ./data-replay:/data-replay
  mosquitto:
    image: eclipse-mosquitto:latest
    hostname: mosquitto
    networks:
      - web
      - brokernet

networks:
  web: # hostnames are created automatically, you don't need to specify a local setup through localhost
  influxnet:
  brokernet:
    driver: bridge # If you're using a bridge driver, an overlay2 doesn't make sense
The following two commands will install a GitLab Runner as a Docker container, without the hassle of configuring it manually to allow socket binding for your project.
(1):
docker run --detach \
  --name gitlab-runner \
  --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest
And then (2):
docker run --rm -v /srv/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner register \
  --non-interactive \
  --description "monitoring cluster instance" \
  --url "https://gitlab.com" \
  --registration-token "replacethis" \
  --executor "docker" \
  --docker-image "docker:latest" \
  --locked=true \
  --docker-privileged=true \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock
Remember to change your token on the (2) command.

Injecting host network into container in CircleCI

I have this CircleCI configuration.
version: 2
jobs:
  build:
    docker:
      - image: docker:18.09.2-git
      - image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
        name: elasticsearch
    working_directory: ~/project
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - run:
          name: test
          command: |
            docker run --rm \
              --network host \
              byrnedo/alpine-curl \
              elasticsearch:9200
I'm looking for a way to allow my new container to access the Elasticsearch port 9200. With this configuration, elasticsearch is not even a known host name.
Creating an extra network is not possible; I get this error message: container sharing network namespace with another container or host cannot be connected to any other network
The host network seems to work only in the primary image.
How could I do this?
That will not work. Containers started during a build via the docker run command run on a remote Docker engine. They cannot talk to the containers running as part of the executor via TCP, since the two environments are isolated; you can only reach them with docker exec.
The solution will ultimately depend on your end goal, but one option might be to remove the Elasticsearch image/container from the executor, and use Docker Compose to get both images to talk to each other within the build.
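A minimal sketch of that idea, assuming you move both images into a docker-compose.yml that runs on the remote engine (the service names and the single-node setting are illustrative):

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    environment:
      - discovery.type=single-node
  test:
    image: byrnedo/alpine-curl
    # The image's entrypoint is curl, so these are curl arguments
    command: ["-sS", "http://elasticsearch:9200"]
    depends_on:
      - elasticsearch

Both services share the default Compose network, so the hostname elasticsearch resolves from the test container.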

Docker - issue command from one linked container to another

I'm trying to set up a primitive CI/CD pipeline using 2 Docker containers -- I'll call them jenkins and node-app. My aim is for the jenkins container to run a job upon commit to a GitHub repo (that's done). That job should run a deploy.sh script on the node-app container. Therefore, when a developer commits to GitHub, jenkins picks up the commit, then kicks off a job including automated tests (in the future) followed by a deployment on node-app.
The jenkins container is using the latest image (Dockerfile).
The node-app container's Dockerfile is:
FROM node:latest
EXPOSE 80
WORKDIR /usr/src/final-exercise
ADD . /usr/src/final-exercise
RUN apt-get update -y
RUN apt-get install -y nodejs npm
RUN cd /usr/src/final-exercise && npm install
CMD ["node", "/usr/src/final-exercise/app.js"]
jenkins and node-app are linked using Docker Compose, and that docker-compose.yml file contains (updated, thanks to @alkis):
node-app:
  container_name: node-app
  build: .
  ports:
    - 80:80
  links:
    - jenkins

jenkins:
  container_name: jenkins
  image: jenkins
  ports:
    - 8080:8080
  volumes:
    - /home/ec2-user/final-exercise:/var/jenkins
The containers are built using docker-compose up -d and start as expected. docker ps yields (updated):
CONTAINER ID   IMAGE                    COMMAND                  CREATED       STATUS       PORTS                               NAMES
69e52b216d48   finalexercise_node-app   "node /usr/src/final-"   3 hours ago   Up 3 hours   0.0.0.0:80->80/tcp                  node-app
5f7e779e5fbd   jenkins                  "/bin/tini -- /usr/lo"   3 hours ago   Up 3 hours   0.0.0.0:8080->8080/tcp, 50000/tcp   jenkins
I can ping jenkins from node-app and vice versa.
Is this even possible? If not, am I making an architectural mistake here?
Thank you very much in advance, I appreciate it!
EDIT:
I've stumbled upon nsenter and easily entering a container's shell using this and this. However, these both assume that the origin (in their case the host machine, in my case the jenkins container) has Docker installed in order to find the PID of the destination container. I can nsenter into node-app from the host, but still no luck from jenkins.
node-app:
  build: .
  ports:
    - 80:80
  links:
    - finalexercise_jenkins_1

jenkins:
  image: jenkins
  ports:
    - 8080:8080
  volumes:
    - /home/ec2-user/final-exercise:/var/jenkins
Try the above. You are linking by image name, but you must use the container name.
In your case, since you don't explicitly specify a container name, it gets auto-generated like this:
finalexercise: the folder where your docker-compose.yml is located
node-app: the service's tag in the config
1: you only have one container with the prefix finalexercise_node-app; if you built a second one, its name would be finalexercise_node-app_2
The setup of the yml files:
node-app:
  build: .
  container_name: my-node-app
  ports:
    - 80:80
  links:
    - my-jenkins

jenkins:
  image: jenkins
  container_name: my-jenkins
  ports:
    - 8080:8080
  volumes:
    - /home/ec2-user/final-exercise:/var/jenkins
Of course you can specify a container name for the node-app as well, so you can use something constant for the communication.
Update
In order to test, log into a bash terminal of the jenkins container:
docker exec -it my-jenkins bash
Then try to ping my-node-app, or even telnet to the specific port.
ping my-node-app
Or you could
telnet my-node-app 80
Update
What you want to do is easily accomplished by the exec command.
From your host you can execute this (try it so you are sure it's working)
docker exec -i <container_name> ./deploy.sh
If the above works, then your problem reduces to executing the same command from a container. As it is, you can't do that, since the container issuing the command (jenkins) doesn't have access to your host's Docker installation (which not only recognises the command, but holds control of the container you need access to).
I haven't used either of them, but I know of two solutions
Use this official guide to gain access to your host's docker daemon and issue docker commands from your containers as if you were doing it from your host.
Mount the docker binary and socket into the container, so the container acts as if it is the host (every command will be executed by the docker daemon of your host, since it's shared).
This thread from SO gives some more insight about this issue.
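As a sketch of the second option, the socket (and optionally the client binary) can be shared through the compose file; the two extra volume lines below are illustrative additions, and the binary path may differ on your host:

jenkins:
  image: jenkins
  container_name: my-jenkins
  ports:
    - 8080:8080
  volumes:
    - /home/ec2-user/final-exercise:/var/jenkins
    # Share the host's Docker daemon socket, so docker commands run
    # inside this container are executed by the host's daemon
    - /var/run/docker.sock:/var/run/docker.sock
    # Optionally share the client binary as well (path may differ)
    - /usr/bin/docker:/usr/bin/docker

With that in place, a Jenkins job should be able to run docker exec -i my-node-app ./deploy.sh itself.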

How to pass docker run parameter via kubernetes pod

Hi, I am running a Kubernetes cluster where I run a Logstash container, but I need to run it with my own docker run parameters. If I were running it directly with Docker, I would use the command:
docker run --log-driver=gelf logstash -f /config-dir/logstash.conf
But I need to run it via Kubernetes pod. My pod looks like:
spec:
  containers:
    - name: logstash-logging
      image: "logstash:latest"
      command: ["logstash", "-f", "/config-dir/logstash.conf"]
      volumeMounts:
        - name: configs
          mountPath: /config-dir/logstash.conf
How can I run the Docker container with the --log-driver=gelf parameter via Kubernetes? Thanks.
Kubernetes does not expose docker-specific options such as --log-driver. A higher abstraction of logging behavior might be added in the future, but it is not in the current API yet. This issue was discussed in https://github.com/kubernetes/kubernetes/issues/15478, and the suggestion was to change the default logging driver for docker daemon in the per-node configuration/salt template.
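As a sketch of that per-node approach (assuming the nodes run dockerd under systemd, and with an illustrative GELF endpoint), the daemon-wide default can be set like this:

# /etc/docker/daemon.json applies to every container the daemon starts
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://graylog.example.com:12201"
  }
}
EOF
sudo systemctl restart docker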
