Can I use Ansible to start Docker Swarm services without Compose?

How can I SSH into a service created by Docker Swarm? I have created a service using docker stack, and the YAML file looks like this:
version: '3'
services:
  app:
    image: "myimage1"
    expose:
      - "8080"
and I validated that the service is running, but I'm not sure how to SSH into the service (container) that was created.

To SSH into a container, you would need an SSH server running inside the container. This is generally not good practice.
To get a container shell without running SSH, you can use:
docker exec -ti <container-id> bash (or sh, if bash is not installed in the image)
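For a service deployed with docker stack, you can locate the task container first. A minimal sketch, assuming a single replica and a stack named mystack (both are assumptions):
CID=$(docker ps -q --filter "name=mystack_app")
docker exec -ti "$CID" sh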
Ansible has Docker modules for managing containers and images (http://docs.ansible.com/ansible/docker_container_module.html#docker-container, http://docs.ansible.com/ansible/docker_image_module.html#docker-image).
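For the Ansible route, a minimal playbook sketch using the docker_container module linked above (the host group name is an assumption; the image comes from the question):
- hosts: swarm-manager
  tasks:
    - name: Start the app container
      docker_container:
        name: app
        image: myimage1
        state: started
        exposed_ports:
          - "8080"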

Related

Unable to connect to docker container in Jenkins pipeline when using "Docker outside of Docker" setup

I'm running a Jenkins job with the following agent:
pipeline {
  agent {
    docker {
      image 'cypress/base:10'
      args '-v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker -v /usr/local/bin/docker-compose:/usr/local/bin/docker-compose -u root'
    }
  }
…
Note: docker and docker-compose are mounted into my agent container so that I can run Docker containers inside the pipeline stages (a "Docker outside of Docker" setup).
Later in the pipeline, I start a docker-compose setup that consists of 2 containers: a server and a web app.
…
sh 'docker-compose up --build --detach'
…
After that, I want to send a GET request to localhost:8080, which is where the web app should be served from. But I get:
Error: connect ECONNREFUSED localhost:8080
The same docker-compose setup works on my dev machine. Port forwarding is set up correctly (8080:8080 is enabled in the docker-compose configuration file).
I think it's somehow related to the "Docker outside of Docker" setup in Jenkins 🤔 Maybe port 8080 actually ends up listening on the host of my pipeline's agent, I'm not sure…
I would be happy to get fresh ideas on this problem; I have completely run out of them.
And just to give more context: I want to run the web app + API server via docker-compose and then run Cypress (outside of the docker-compose setup) to do E2E testing via the UI.
In a Jenkins "Docker outside of Docker" setup, Jenkins is technically just another container that shares the same Docker host with the containers it creates, meaning it can communicate with them.
In my case, what I did was create custom bridge networks in my docker-compose.yml file:
version: "3.8"
services:
app:
build:
context: .
ports:
- 8081:8080
depends_on:
- redisdb
networks:
- frontend
- backend
redisdb:
image: redis
ports:
- 127.0.0.1:6380:6379
networks:
- backend
networks:
frontend: {}
backend: {}
Once this is up, docker-compose creates these networks with names in the following format:
{FOLDER_NAME}_frontend (example: pipeline_frontend)
{FOLDER_NAME}_backend
These networks are bridge networks by default.
My Jenkins container originally resides in the default "bridge" network. Since both my Jenkins container and these compose containers are on bridge-type networks on the same host, I can attach Jenkins to the compose network at runtime later in the pipeline:
docker network connect pipeline_frontend jenkins
Now, from Jenkins, I can communicate directly with the containers via their service names.
In my case, for example, from my Jenkins container I can curl http://app:8080
Note: this answer only applies if the Jenkins container resides on the same host as the containers it creates. I have not tested this on a setup where Jenkins uses external nodes.
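A sketch of the relevant pipeline steps, using the network and service names from the compose file above (the Jenkins container name jenkins is an assumption):
sh 'docker-compose up --build --detach'
sh 'docker network connect pipeline_frontend jenkins'   // attach this Jenkins container to the compose network
sh 'curl -sf http://app:8080'                           // the service name now resolves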

GITLAB-CI - Join network created by docker-compose

I am creating a GitLab CI job to run e2e tests over my application.
So, given I have this docker-compose.yml:
services:
  chrome:
    image: zenika/alpine-chrome:latest
    command: [
      chromium-browser,
      "--headless",
      "--no-sandbox",
      "--disable-gpu",
      "--ignore-certificate-errors",
      "--reduce-security-for-testing",
      "--remote-debugging-address=0.0.0.0",
      "--remote-debugging-port=9222",
      "https://google.com/",
    ]
    ports:
      - "9222:9222"
    networks:
      - test-e2e
networks:
  test-e2e:
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16
When I run docker-compose up, everything works fine,
and on my local machine I am able to visit localhost:9222 and access the Chrome debugger.
However, when I run the same job on gitlab-ci I get a ECONNREFUSED error:
F---F
Failures:
1) Scenario: List of Profiles # src/features/profile.feature:3
✖ Before # dist/node/development/webpack:/hooks/puppeteer.hooks.ts:17
Failed to fetch browser webSocket url from http://localhost:9222/json/version: connect ECONNREFUSED 127.0.0.1:9222
Error: connect ECONNREFUSED 127.0.0.1:9222
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1191:14)
So it is clear that I cannot join the docker-compose network and access localhost:9222 from the job.
My gitlab-ci.yml is pretty straightforward and looks like this:
E2E tests:
  stage: test end-to-end
  image:
    name: docker/compose:1.24.1
    entrypoint: ["/bin/sh", "-c"]
  services:
    - docker:dind
  before_script:
    - apk --update add nodejs yarn
    - docker-compose -f test-e2e.yaml up -d
  script:
    - yarn test:cucumber
  after_script:
    - docker-compose -f test-e2e.yaml down
yarn test:cucumber basically runs cucumber and puppeteer trying to access localhost:9222 to get chrome's metadata.
How can I join the network created by docker-compose from the gitlab-ci job?
I don't have access to edit runner configurations
TL;DR: On CI your chrome container is reachable at docker:9222 (or more generally <name-of-the-dind-service-on-ci>:<exposed-port>), not localhost:9222.
Explanation
As per your gitlab-ci.yml, you will start 2 containers:
a docker/compose:1.24.1 container, from which you will run docker-compose and yarn commands
a docker:dind container, in which a Docker daemon will run. This container is reachable from the docker/compose:1.24.1 container via the hostname docker (see the GitLab CI doc on accessing services)
When you run a container in Docker, it is actually started by the Docker daemon, and it runs and exposes ports on the host on which the daemon is running.
On your machine, the Docker daemon runs locally and exposes container ports on your local network, allowing you to reach your chrome container via localhost.
On the CI, you run your commands from the docker/compose:1.24.1 container, but the Docker daemon runs in another container (effectively a different host): the docker:dind container. The chrome container is created inside the docker:dind container, and its port is exposed from that same container. You simply need to reach the docker:dind container, which exposes chrome's ports.
Using localhost from your docker/compose:1.24.1 container will not reach chrome, because its port is exposed from the docker:dind container, not from the docker/compose:1.24.1 container. You need to specify its host (docker) and the exposed port (9222).
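Applied to the job above, a minimal sketch of the fix is to target docker instead of localhost; for example, a sanity check in before_script (adding curl to the apk install is an assumption about the image):
before_script:
  - apk --update add nodejs yarn curl
  - docker-compose -f test-e2e.yaml up -d
  - curl -sf http://docker:9222/json/version
The cucumber/puppeteer hook should likewise fetch the browser webSocket URL from http://docker:9222/json/version.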
If you are using GitLab shared runners, you are probably not allowed to create networks, for security reasons.
Try using your own private runners; they are really easy to set up: https://docs.gitlab.com/runner/install/
An alternative is services (see the sketch below).
It is also a good idea to start
docker-compose up
inside the CI script.
After that you will be able to access the containers, and if you run the CI again it will start the same containers.
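A sketch of the services alternative, running chrome as a GitLab CI service instead of via docker-compose (it assumes the runner honours service aliases; the command mirrors the compose file above):
E2E tests:
  stage: test end-to-end
  services:
    - name: zenika/alpine-chrome:latest
      alias: chrome
      command: ["chromium-browser", "--headless", "--no-sandbox", "--remote-debugging-address=0.0.0.0", "--remote-debugging-port=9222"]
  script:
    - yarn test:cucumber
The tests would then target http://chrome:9222 instead of localhost:9222.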

Is it possible to replicate Kubernetes container network environment with docker compose?

Kubernetes has a concept of pods where containers can share ports between them. For example within the same pod, a container can access another container (listening on port 80) via localhost:80.
However, with docker-compose, localhost refers to the container itself.
Is there any way to implement the Kubernetes network config in Docker?
Essentially I have a kubernetes config that I would like to reuse in a docker-compose config, without having to modify the images.
I seem to have gotten it to work by adding network_mode: host to each of the container configs within my docker-compose config.
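A minimal sketch of that workaround (the image name is illustrative); note that with host networking all services share the host's network stack, and any ports: mappings are ignored:
version: '3'
services:
  app:
    image: myapp
    network_mode: host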
Yes, you can. You run a service and then use network_mode: service:<nameofservice>
version: '3'
services:
  mainnetwork:
    image: alpine
    command: tail -f /dev/null
  mysql:
    image: mysql
    network_mode: service:mainnetwork
    environment:
      - "MYSQL_ROOT_PASSWORD=root"
  mysqltest:
    image: mysql
    command: bash -c "sleep 10 && mysql -uroot -proot -h 127.0.0.1 -e 'CREATE DATABASE tarun;'"
    network_mode: service:mainnetwork
Edit-1
So network_mode can have the following possible values:
host
service:(service name in the same compose file)
container:(name or id of an external container that is already running)
In this case I have used service:mainnetwork, so the mainnetwork service needs to be up.
This has been tested on Docker 17.06 CE, so I assume you are using that version or newer.
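A quick way to verify the shared network namespace (a sketch; it assumes the busybox netstat applet is available in the alpine image):
docker-compose up -d
docker-compose exec mainnetwork netstat -tln
The output should list port 3306, opened by the mysql container, even though the command runs in mainnetwork's namespace.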
Using the Docker links mechanism, you can wire containers together; the linked container's declared ports then become reachable via its link alias.

Set machine name for docker-compose

Is there a way to set the machine name for containers run using docker-compose?
I'd need to start N containers which should have a predictable machine name (Windows containers running under Windows Server 2016):
MyName1
MyName2
...
MyNameN
This is required to test a system with many clients. Would manually running the containers (docker run) through a script be an alternative solution? Either way, the question remains: how can I manually set the machine name (if it is possible)?
Your Windows container uses the container ID as its hostname by default.
You can define the machine hostname by adding the hostname parameter.
Inside your docker-compose.yml:
version: '2'
services:
  web:
    image: ubuntu
    hostname: MyName
This also works with the docker run command:
docker run -it --hostname MyName ubuntu /bin/bash
To give each machine a unique hostname, you have to define multiple docker-compose files or multiple services inside a single docker-compose file (see the sketch after the loop below). You can also loop with the run command:
#!/bin/bash
for i in {1..5}
do
  docker run -d --hostname test$i ubuntu
done
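And the multiple-services sketch mentioned above, giving each container a predictable name in a single compose file:
version: '2'
services:
  client1:
    image: ubuntu
    hostname: MyName1
  client2:
    image: ubuntu
    hostname: MyName2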

Access Docker socket within container

I am attempting to create a container that can access the host docker remote API via the docker socket file (host machine - /var/run/docker.sock).
The answer here suggests proxying requests to the socket. How would I go about doing this?
I figured it out. You can simply pass the socket file through the volume argument:
docker run -v /var/run/docker.sock:/container/path/docker.sock
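To sanity-check access from inside the container, a sketch (it assumes curl 7.40+ with Unix socket support is installed in the image):
curl --unix-socket /container/path/docker.sock http://localhost/version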
As @zarathustra points out, however, this may not be the greatest idea. See: https://www.lvh.io/posts/dont-expose-the-docker-socket-not-even-to-a-container/
If one intends to use Docker from within a container, they should clearly understand the security implications.
Accessing Docker from within a container is simple:
Use the official docker image, or install Docker inside the container. Or you may download an archive with the docker client binary, as described here.
Expose the Docker Unix socket from the host to the container.
That's why
docker run -v /var/run/docker.sock:/var/run/docker.sock \
  -ti docker
should do the trick.
Alternatively, you may expose the Docker daemon's TCP port to the container and use the Docker REST API.
UPD: A former version of this answer (based on a previous version of jpetazzo's post) advised bind-mounting the docker binary from the host into the container. This is no longer reliable, because the Docker Engine is no longer distributed as an (almost) static binary.
Considerations:
All host containers will be accessible to the container, so it can stop them, delete them, and run any command as any user inside the top-level Docker containers.
All created containers are created in the top-level Docker.
Of course, you should understand that if a container has access to the host's Docker daemon, it effectively has privileged access to the entire host system. Depending on the container and system (AppArmor) configuration, this may be more or less dangerous.
There are other warnings here: dont-expose-the-docker-socket
Other approaches, like exposing /var/lib/docker to the container, are likely to cause data corruption. See do-not-use-docker-in-docker-for-ci for more details.
Note for users of official Jenkins CI container
In this container (and probably in many others) the jenkins process runs as a non-root user and therefore has no permission to interact with the docker socket. A quick & dirty solution is to run
docker exec -u root ${NAME} /bin/chmod -v a+s $(which docker)
after starting the container. That allows all users in the container to run the docker binary with root permissions. A better approach would be to allow running the docker binary via passwordless sudo, but the official Jenkins CI image seems to lack the sudo subsystem.
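A less invasive sketch, assuming a Linux host (the image tag is illustrative): start the Jenkins container with the socket's owning group added, so the jenkins user can use the socket without any chmod:
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --group-add $(stat -c '%g' /var/run/docker.sock) \
  jenkins/jenkins:lts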
I stumbled across this page while trying to make docker socket calls work from a container that is running as the nobody user.
In my case I was getting access denied errors when my-service would try to make calls to the docker socket to list available containers.
I ended up using docker-socket-proxy to proxy the docker socket to my-service. This is a different approach to accessing the docker socket within a container, so I thought I would share it.
I made my-service able to receive the docker host it should talk to, docker-socket-proxy in this case, via the DOCKER_HOST environment variable.
Note that docker-socket-proxy will need to run as the root user to be able to proxy the docker socket to my-service.
Example docker-compose.yml:
version: "3.1"
services:
my-service:
image: my-service
environment:
- DOCKER_HOST=tcp://docker-socket-proxy:2375
networks:
- my-service_my-network
docker-socket-proxy:
image: tecnativa/docker-socket-proxy
environment:
- SERVICES=1
- TASKS=1
- NETWORKS=1
- NODES=1
volumes:
- /var/run/docker.sock:/var/run/docker.sock
networks:
- my-service_my-network
deploy:
placement:
constraints: [node.role == manager]
networks:
my-network:
driver: overlay
Note that the above compose file is swarm-ready (docker stack deploy -c docker-compose.yml my-service), but it should work in compose mode as well (docker-compose up -d). The nice thing about this approach is that my-service no longer needs to run on a swarm manager.
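Since NETWORKS=1 is enabled on the proxy, a quick check from inside my-service would be listing networks through it (a sketch; it assumes curl is available in the my-service image):
curl -s http://docker-socket-proxy:2375/networks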
