I am creating a gitlab-ci to run e2e tests over my application,
so, given I have this docker-compose.yml:
services:
  chrome:
    image: zenika/alpine-chrome:latest
    command: [
      "chromium-browser",
      "--headless",
      "--no-sandbox",
      "--disable-gpu",
      "--ignore-certificate-errors",
      "--reduce-security-for-testing",
      "--remote-debugging-address=0.0.0.0",
      "--remote-debugging-port=9222",
      "https://google.com/",
    ]
    ports:
      - "9222:9222"
    networks:
      - test-e2e
networks:
  test-e2e:
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16
when I run docker-compose up everything just works fine,
and on my local machine I am able to visit localhost:9222 and access the chrome debugger.
However, when I run the same job on gitlab-ci I get a ECONNREFUSED error:
F---F

Failures:

1) Scenario: List of Profiles # src/features/profile.feature:3
   ✖ Before # dist/node/development/webpack:/hooks/puppeteer.hooks.ts:17
       Failed to fetch browser webSocket url from http://localhost:9222/json/version: connect ECONNREFUSED 127.0.0.1:9222
       Error: connect ECONNREFUSED 127.0.0.1:9222
           at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1191:14)
So it is clear that I cannot join the docker-compose network and access localhost:9222 from the job
My gitlab-ci.yml is pretty straightforward and looks like this:
E2E tests:
  stage: test end-to-end
  image:
    name: docker/compose:1.24.1
    entrypoint: ["/bin/sh", "-c"]
  services:
    - docker:dind
  before_script:
    - apk --update add nodejs yarn
    - docker-compose -f test-e2e.yaml up -d
  script:
    - yarn test:cucumber
  after_script:
    - docker-compose -f test-e2e.yaml down
yarn test:cucumber basically runs cucumber and puppeteer trying to access localhost:9222 to get chrome's metadata.
How can I join the network created by docker-compose from the gitlab-ci job?
I don't have access to edit runner configurations
TL;DR: On CI your chrome container is reachable at docker:9222 (or, more generally, <name-of-the-dind-service-on-ci>:<exposed-port>), not localhost:9222.
Explanation
As per your gitlab-ci.yml, you will start 2 containers:
a docker/compose:1.24.1 container from which you will run docker-compose and yarn commands
a docker:dind on which a Docker Daemon will run. This container will be reachable from docker/compose:1.24.1 container via hostname docker (see GitlabCI doc on accessing services)
When running a container in Docker, the container is actually started by the Docker daemon and will run and expose ports on the host on which the Daemon is running.
On your machine, the Docker daemon runs locally and exposes the container ports on your local network, allowing you to reach your chrome container via localhost.
On the CI, you run your commands from a docker/compose:1.24.1 container, but the Docker daemon runs in another container (a different host): the docker:dind container. The chrome container is created inside the docker:dind container and its port is exposed from that same container. You simply need to access the docker:dind container, which exposes chrome's ports.
By using localhost from your docker/compose:1.24.1 container, you won't be able to reach chrome, because its port is not exposed on the docker/compose:1.24.1 container but on the docker:dind container. You need to specify its host (docker) and the exposed port (9222).
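Applied to the job above, a minimal sketch of the fix, assuming your puppeteer hook can read the debugger address from an environment variable (CHROME_URL is a hypothetical name here; adapt it to however your hook actually builds the URL):

```yaml
E2E tests:
  stage: test end-to-end
  image:
    name: docker/compose:1.24.1
    entrypoint: ["/bin/sh", "-c"]
  services:
    - docker:dind
  variables:
    # hypothetical variable: point the tests at the dind host, not localhost
    CHROME_URL: "http://docker:9222"
  before_script:
    - apk --update add nodejs yarn curl
    - docker-compose -f test-e2e.yaml up -d
    # wait until chrome's debugger answers on the dind host before testing
    - until curl -s http://docker:9222/json/version; do sleep 1; done
  script:
    - yarn test:cucumber
  after_script:
    - docker-compose -f test-e2e.yaml down
```

The curl loop doubles as a check that the port is reachable from the job container before the tests start.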
If you are using GitLab shared runners, you are probably not allowed to create networks, for security reasons.
Try using your own private runners. They are really easy to set up: https://docs.gitlab.com/runner/install/
An alternative is services.
Another idea is to run docker-compose up inside the CI script itself. After that you will be able to access the containers, and if you run the CI job again it will start the same containers.
Related
I am trying to run a test using docker in docker within a Gitlab CI job. My understanding is that enabling the FF_NETWORK_PER_BUILD flag will automatically create a user-defined bridge network that the job runner and all of the created dockers within that job will connect to... but looking at the Gitlab documentation I am slightly confused...
This page: https://docs.gitlab.com/ee/ci/services/
Gives an example of using the docker:dind service with FF_NETWORK_PER_BUILD: "true"
But then when using docker run they still include the --network=host flag.
Here is the given example:
stage: build
image: docker:19.03.1
services:
  - docker:dind # necessary for docker run
  - tutum/wordpress:latest
variables:
  FF_NETWORK_PER_BUILD: "true" # activate container-to-container networking
script: |
  docker run --rm --name curl \
    --volume "$(pwd)":"$(pwd)" \
    --workdir "$(pwd)" \
    --network=host \
    curlimages/curl:7.74.0 curl "http://tutum-wordpress"
I am trying to ensure that all of my dockers within this job are on their own separate network,
so does using the --network=host flag in this instance connect the new docker to the host server that the actual job runner is on? Or the per-job network that was just created? In what case would you want to create a per-job network and still connect a new docker to the host network?
Would appreciate any advice!
does using the --network=host flag in this instance connect the new docker to the host server that the actual job runner is on? Or the per-job network that was just created?
This is probably confusing because the "host" in --network=host does not mean host as in the underlying runner host / 'baremetal' system. To understand what is happening here, we must first understand how the docker:dind service works.
When you use the service docker:dind to power docker commands from your build job, you are running containers 'on' the docker:dind service; it is the docker daemon.
When you provide the --network=host option to docker run, it refers to the host network of the daemon, i.e. the docker:dind container, not the underlying system host.
Specifying FF_NETWORK_PER_BUILD tells the runner to create a per-job docker network that encapsulates the build job and all of its service containers.
So, in order, the relevant activities happen as follows:
1. The GitLab runner creates a new docker network for the build.
2. The runner creates the docker:dind and tutum/wordpress:latest services, connected to the network created in step (1).
3. Your job container starts, also connected to the docker network from step (1).
4. Your job contacts the docker:dind container and asks it to start a new curl container, connected to the host network of the docker:dind container, which is the same network created in step (1), allowing it to reach the service containers.
Without the --network=host flag, the created container would be on a different bridge network and unable to reach the network created in step (1).
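As a sketch of that contrast (job name and image versions are illustrative, following the documentation's example): the same curl succeeds from the dind daemon's host network but fails from its default bridge, where the per-build service alias does not resolve:

```yaml
network-demo:
  image: docker:19.03.1
  services:
    - docker:dind
    - tutum/wordpress:latest
  variables:
    FF_NETWORK_PER_BUILD: "true"
  script:
    # the dind daemon's host network is the per-build network,
    # so the service alias resolves with --network=host
    - docker run --rm --network=host curlimages/curl:7.74.0 curl -sf "http://tutum-wordpress"
    # from the dind daemon's default bridge, the alias does not resolve
    - docker run --rm curlimages/curl:7.74.0 curl -sf "http://tutum-wordpress" || echo "unreachable from default bridge"
```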
I'm running a Jenkins job based on such agent:
pipeline {
    agent {
        docker {
            image 'cypress/base:10'
            args '-v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker -v /usr/local/bin/docker-compose:/usr/local/bin/docker-compose -u root'
        }
    }
    …
note: docker and docker-compose are mounted into my agent container to be able to run docker containers inside the pipeline stages ("Docker outside of Docker" setup)
Down the pipeline, I start a docker-compose setup that consists of 2 containers: server and webapp
…
sh 'docker-compose up --build --detach'
…
After that, I want to send a GET request to localhost:8080, which is where the web-app should be served from. But I get
Error: connect ECONNREFUSED localhost:8080
The same docker-compose setup works on my dev. machine. Port forwarding is set up correctly (8080:8080 port forwarding is enabled in docker-compose configuration file)
I think it's somewhat related to the "Docker outside of Docker" setup that I use in Jenkins. Maybe port 8080 actually ends up listening on the host of my pipeline's agent, not sure…
I will be happy to get fresh ideas on the problem; I've completely run out of my own.
And just to give more context: I want to run web-app + API server via docker-compose and then run Cypress (outside of docker-compose setup) to do E2E testing via UI
In a Jenkins "Docker outside of Docker" setup, technically, Jenkins is just another container that "shares" the same space with the other containers. Meaning, it can communicate with the containers it "created".
In my case, what I did was create a custom bridge network on my docker-compose.yml file
version: "3.8"
services:
  app:
    build:
      context: .
    ports:
      - 8081:8080
    depends_on:
      - redisdb
    networks:
      - frontend
      - backend
  redisdb:
    image: redis
    ports:
      - 127.0.0.1:6380:6379
    networks:
      - backend
networks:
  frontend: {}
  backend: {}
Then once this is created, docker-compose creates these networks with the following format:
{FOLDER_NAME}_frontend (example: pipeline_frontend)
{FOLDER_NAME}_backend
These networks are usually bridge networks.
My Jenkins container originally resides in the default network "bridge". Since my Jenkins container is on a bridge network and these containers are on bridge-type networks as well, I can connect my Jenkins container to them at runtime later in the pipeline.
docker network connect pipeline_frontend jenkins
Now from jenkins, I can communicate directly to the container via its service name.
In my case for example, from my Jenkins, I can curl to http://app:8080
Note: This answer is only applicable if Jenkins container exclusively resides in the host as with the containers it creates. I have not tested this on a setup wherein Jenkins has external nodes.
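In pipeline form, a sketch of that runtime connection (the network name pipeline_frontend, the container name jenkins, and the service name app follow the example above and may differ in your setup):

```groovy
stage('E2E') {
    steps {
        sh 'docker-compose up --build --detach'
        // attach the Jenkins container to the compose bridge network
        sh 'docker network connect pipeline_frontend jenkins'
        // the compose service name now resolves directly;
        // note the container port (8080), not the 8081 host mapping
        sh 'curl -sf http://app:8080'
    }
}
```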
I am using VSTS build step Docker Compose v 0.* on Hosted Linux Agent.
Here is my docker-compose:
version: '3.0'
services:
  storage:
    image: blobstorageemulator:1.1
    ports:
      - "10000:10000"
  server:
    build: .
    environment:
      - ENV=--tests
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
    depends_on:
      - storage
I use run services command.
So basically I am running 2 Linux containers inside another Linux container (Build Agent).
I was able to connect these containers to each other (server uses storage through connection string, which contains storage as a host - http://storage:10000/devstoreaccount1).
Question: how to get access to the server from the build agent container? When I do curl http://localhost:8080 on the next step it returns Failed to connect to localhost port 8080: Connection refused.
PS
Locally I run docker compose and can easily access my exposed port from host OS (I have VirtualBox with Ubuntu 17.10)
UPDATE:
I tried using docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' server-container-name to get the IP address of the container running my server app and curl this IP, but I am getting connection timed out now.
There is no way to access it from the host container; you have to run the check inside the compose network with an exec command, e.g.:
docker-compose -p <project-name> exec <service-name> curl http://localhost:8080
how can I ssh into a service created by docker swarm? I have created a service using docker stack and the yaml file looks like this:
version: '3'
services:
  app:
    image: "myimage1"
    expose:
      - "8080"
and I validated that the service is running, but I'm not sure how to ssh into the service (container) that was created.
To ssh into a container, you would need an SSH service running inside the container. This is generally not a good practice.
To get access to a container's shell without running ssh, you can use:
docker exec -ti <container-name> bash (or sh, if bash is not available in the image)
Ansible has docker modules for managing containers and images (http://docs.ansible.com/ansible/docker_container_module.html#docker-container, http://docs.ansible.com/ansible/docker_image_module.html#docker-image)
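As a concrete sketch (the stack name mystack and the generated task suffix are illustrative): docker stack deploy generates container names from the stack, service, and task ID, so first look the name up, then exec into it:

```shell
# list the running task containers for the service
docker ps --filter "name=mystack_app" --format "{{.Names}}"
# exec into the container using the name printed above
docker exec -ti mystack_app.1.abcdef123456 sh
```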
In the Docker docs here they set up a custom bridge network with the containers connected, like so
$ docker network create -d bridge my-bridge-network
$ docker run -d --network=my-bridge-network --name db training/postgres
$ docker run -d --network=my-bridge-network --name web training/webapp python app.py
These two docker containers spin up and connect to the same network.
But I cannot find a way to save this configuration like you would commit a docker image, so that I could pull the network configuration and it would bring up the containers ready to go.
The bridge is created when setting up the docker network by calling docker network create ..., and a container's network mode is configured when calling docker run --network=....
You cannot store in a docker image the information that, when started, its container should be connected to bridge X; this is runtime configuration and outside the image's scope.
You could either script the docker network and docker run commands in bash, or use a docker-compose .yml file to declare the configuration each container needs to be run with, e.g.:
version: "2"
services:
  my-app:
    build: .
    image: my-app-image
    container_name: my_app_container
    ports:
      - 3000:3000
    networks:
      - my-network
networks:
  my-network:
    external: true
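Note that because my-network is declared external: true, compose will not create it for you; a sketch of the one-time setup (names match the example above):

```shell
# create the external bridge network once, before the first compose up
docker network create my-network
# compose then attaches my_app_container to the pre-existing network
docker-compose up -d
```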