In which path do Docker and docker-in-docker run their containers? - docker

I have a dockerized nginx server which I can build and run on my local machine without any problems. So now I want to deploy this with the help of a gitlab runner.
This is my simple dockerfile:
FROM nginx
COPY web /usr/share/nginx/html
EXPOSE 80
So if I build and run this on my local machine, it works. But I have a first question at this point: Where does docker run the nginx server? Because if I look at /usr/share there is nothing there.
Now if I push my project to gitlab, register a runner and let it execute the following gitlab-ci file:
image: docker:stable
variables:
  DOCKER_DRIVER: overlay2
services:
  - docker:dind
before_script:
  - docker info
build:
  script:
    - docker build -t bd24_nginx .
    - docker run -d -p 80:80 bd24_nginx
... the job gets done just fine. There are no errors in the console output of the gitlab page. This is the output:
Successfully built 9903dc370422
Successfully tagged bd24_nginx:latest
$ docker run -d -p 80:80 bd24_nginx
b1e24c7cf9af8a43b3c2418d1ca1b90a58e445eb6b0b0ac9cde61f99be8cff7b
Job succeeded
But if I now visit the IP address of my server, the static HTML test page doesn't show up. So I suspect there is something wrong with the paths? Or am I missing something completely?
Thanks in advance.

Using Docker means the data is copied into the corresponding container. Once that container is gone, none of the data it held is kept.
You might try mounting a host directory into the container in order to have persistent storage.
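For example, something along these lines, where /srv/web is a hypothetical directory on the host containing the static files:

docker run -d -p 80:80 -v /srv/web:/usr/share/nginx/html bd24_nginx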
See this answer for instance.
Hope this helps!

Related

'Cypress could not verify that this server is running' error when using Docker

I am running Cypress version 10.9 from inside Docker on macOS. I set my base URL as localhost:80. As a simple example, I am running an Apache server on localhost:80; if I go to it in a web browser I get the 'It works!' page, so it is indeed up. I can also ping localhost:80 from the same terminal in which I am executing my Docker Cypress container.
But I get this error every time when attempting to run my Cypress container:
Cypress could not verify that this server is running:
> http://localhost
We are verifying this server because it has been configured as your baseUrl.
I do see there are some Stack Overflow posts (e.g. https://stackoverflow.com/questions/53959995/cypress-could-not-verify-that-the-server-set-as-your-baseurl-is-running) that talk about this error. However, the application under test in those posts is inside another Docker container. My Apache page is not in a container.
This is my docker-compose.yml:
version: '3'
services:
  # Docker entry point for the whole repo
  e2e:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      CYPRESS_BASE_URL: $CYPRESS_BASE_URL
      CYPRESS_USERNAME: $CYPRESS_USERNAME
      CYPRESS_PASSWORD: $CYPRESS_PASSWORD
    volumes:
      - ./:/e2e
I pass 'http://localhost' from my environment CYPRESS_BASE_URL setting.
This is the docker command I use to build my image:
docker compose up --build
And then to run the Cypress container:
docker compose run --rm e2e cypress run
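For completeness, the variables referenced in the compose file are exported in my shell beforehand, roughly like this (the values shown are placeholders):

export CYPRESS_BASE_URL=http://localhost
export CYPRESS_USERNAME=placeholder-user
export CYPRESS_PASSWORD=placeholder-password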
Some other posts suggest running the docker run command with --network to make sure my Cypress container runs on the same network as the compose network (ref: Why Cypress is unable to determine if server is running?), but I am executing docker compose run, which does not have a --network argument.
I also verified that my /etc/hosts has an entry of 127.0.0.1 localhost as other posts have suggested. Any suggestions? Thanks.

Docker Update Code in Volume with Gitlab CI / CD

I am learning Docker and I just encountered a problem I cannot solve.
I want to update the source code on my Docker Swarm nodes when I make changes and push them. I just have an index.php which echoes "Hello World" and shows phpinfo. I am using data volumes since they are recommended for production (bind mounts for dev).
My problem is: how do I update source code while using volumes? What's the best practice for this scenario?
Currently, when I push changes to my index.php to GitLab, my gitlab-runner recreates the Docker image and updates my swarm service.
This works when I change the PHP version in my Dockerfile, but changes to index.php won't take effect.
My example Dockerfile looks like this. I just copy index.php to /var/www/html in the container and that's it.
When I deploy my swarm stack the first time, everything works.
FROM php:7.4.5-apache
# copy files
COPY src/index.php /var/www/html/
# Apache settings
RUN echo 'ServerName localhost' >> /etc/apache2/apache2.conf
My gitlab-ci.yml looks like this
build docker image:
  stage: build
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest
  tags:
    - build-image

deploy docker image:
  stage: deploy
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker service update --with-registry-auth --image $CI_REGISTRY_IMAGE:latest $SWARM_SERVICE_NAME -d
  tags:
    - deploy-stack
Docker images generally contain an application's source code and the dependencies required to run it. Volumes are used for persistent data that needs to be preserved across changes to the underlying application. Imagine a database: if you upgraded from somedb:1.2.3 to somedb:1.2.4, you'd need to replace the database application binary (in the image) but would need to preserve the actual database contents (in a volume).
Especially in a clustered environment, don't try storing your application code in volumes. If you delete the part of your deployment setup that attempts this, then when containers redeploy with an updated image, they'll see the updated code.
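As a sketch of what that looks like in practice, assuming a hypothetical stack file used with docker stack deploy: the service references only the registry image and mounts nothing over the code path, so rolling out a new image tag is what delivers new code.

version: '3.7'
services:
  web:
    image: registry.example.com/group/project:latest   # hypothetical image; the code is baked in at build time
    ports:
      - "80:80"
    # intentionally no volume mounted over /var/www/html

The docker service update --image ... step in the .gitlab-ci.yml above should then roll the swarm service onto the freshly built image, which carries the updated index.php.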

How to deploy a project using docker with gitlab-ci

I'm fairly new to docker and gitlab-ci with the docker runner.
The docker runner works and I'm fine with it, except for one thing. It seems as if the docker runner cannot see locally available images, which means I may have to create a custom registry unless there's a way to make the docker command check the host docker.
What I am trying to achieve is this:
- Build a Dockerfile and fetch a few other git repositories, in order to
- create a new docker image based on that Dockerfile, and
- start a new docker container on the host docker which will remain alive even after the job is done.
In other words, I'm trying to generate a docker image and start/replace an existing service in the host's dockerd service.
Right now this is what I came up with, but it doesn't work because data isn't passed from one job to the other (see the artifacts sketch after the snippet below). And even if the build job worked, I doubt the docker service I created would be accessible from the outside world.
stages:
  - test
  - prepare
  - build

# Build the Dockerfile
prepare_script:
  stage: prepare
  image: debian:stretch
  script:
    - apt-get update
    - apt-get install -y git python3
    - python3 prepare_project.py

# Build and deploy the docker image
build:
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
  image: docker:stable
  services:
    - docker:dind
  stage: build
  script:
    - docker build -t my-project .
    - docker run --add-host db:172.17.42.1 -d --name my-project-inst --restart always -p 8069:8069 myproject
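For reference, I understand GitLab CI can pass files from one job to the next via artifacts, so I assume the prepare job would need something along these lines (the listed paths are my guess at what prepare_project.py produces):

prepare_script:
  # ... same as above, plus:
  artifacts:
    paths:
      - Dockerfile   # hypothetical: the generated Dockerfile
      - src/         # hypothetical: the fetched repositories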
How can I use gitlab-ci to automatically deploy docker images in the host docker service?
The problem I'm trying to solve is generating the Dockerfile so that fetching git repositories and submodules can be done dynamically, without having to hand-modify Dockerfiles.

How do I deploy from GitLab CI to a Google Container Engine instance using Docker?

I am trying to set up automated deployment using a GitLab CI runner to deploy our 4-container app via docker-compose. I can pull the container images down using docker pull commands, but I'm stuck on how to connect to the Google Compute Engine instance in order to run the full docker-compose script.
Typically, from my local machine, I run something like:
eval $(docker-machine env <machine-instance>)
docker-compose up -d
But my .gitlab-ci.yml script doesn't have docker-machine available.
- Do I have to install docker-machine via the script section in my .gitlab-ci.yml file?
- How do I provision the instance without creating a new one every time? Normally, from my local host, I would run docker-machine create ... once and then just use the eval command above to reconnect to the instance. But how would this work with CI?
Here's a sample of my .gitlab-ci.yml:
deploy staging:
  image: docker:latest
  services:
    - docker:dind
  environment: staging
  stage: deploy
  before_script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN my-registry.githost.io
  script:
    - docker pull my-registry.githost.io/group/project1:develop
    - docker pull my-registry.githost.io/group/project2:develop
    - docker pull my-registry.githost.io/group/project3:develop
    - docker pull my-registry.githost.io/group/project4:develop
    - docker-machine ls
Not sure what you need docker-machine for in this case. You might want to get rid of it.
But to go back to your question, the docker image you're using comes with neither docker-machine nor docker-compose:
https://github.com/docker-library/docker/blob/36e2107fb879d5d5c3dbb5d8d93aeef0a2d45ac8/1.12/Dockerfile
So you will need to create a new image (or find an existing one) that comes with those two installed.
So in the .gitlab-ci.yml, instead of image: docker:latest, it's going to be something like image: mydocker
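A minimal sketch of what such a mydocker image could look like, assuming an Alpine-based docker image as the starting point; the versions and download URLs are examples, pin whichever releases you actually need:

FROM docker:stable

RUN apk add --no-cache curl

# docker-compose, installed as a standalone binary from its GitHub releases
RUN curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \
      -o /usr/local/bin/docker-compose \
 && chmod +x /usr/local/bin/docker-compose

# docker-machine, installed the same way (see the install docs linked below)
RUN curl -L "https://github.com/docker/machine/releases/download/v0.16.2/docker-machine-$(uname -s)-$(uname -m)" \
      -o /usr/local/bin/docker-machine \
 && chmod +x /usr/local/bin/docker-machine

Build and push that image somewhere your runner can pull it from, then reference it as the job's image.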
You may have to install docker-machine in the GitLab CI runner to use it with GCE:
https://docs.docker.com/machine/install-machine/
https://docs.docker.com/machine/drivers/gce/
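From there, a rough sketch of the deploy commands those pages describe; the project, zone, and machine names below are placeholders:

docker-machine create --driver google --google-project my-gcp-project --google-zone us-central1-a ci-target   # first run only
eval $(docker-machine env ci-target)
docker-compose up -d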

Docker - issue command from one linked container to another

I'm trying to set up a primitive CI/CD pipeline using 2 Docker containers -- I'll call them jenkins and node-app. My aim is for the jenkins container to run a job upon commit to a GitHub repo (that's done). That job should run a deploy.sh script on the node-app container. Therefore, when a developer commits to GitHub, jenkins picks up the commit, then kicks off a job including automated tests (in the future) followed by a deployment on node-app.
The jenkins container is using the latest image (Dockerfile).
The node-app container's Dockerfile is:
FROM node:latest
EXPOSE 80
WORKDIR /usr/src/final-exercise
ADD . /usr/src/final-exercise
RUN apt-get update -y
RUN apt-get install -y nodejs npm
RUN cd /usr/src/final-exercise; npm install
CMD ["node", "/usr/src/final-exercise/app.js"]
jenkins and node-app are linked using Docker Compose, and that docker-compose.yml file contains (updated, thanks to #alkis):
node-app:
container_name: node-app
build: .
ports:
- 80:80
links:
- jenkins
jenkins:
container_name: jenkins
image: jenkins
ports:
- 8080:8080
volumes:
- /home/ec2-user/final-exercise:/var/jenkins
The containers are built using docker-compose up -d and start as expected. docker ps yields (updated):
CONTAINER ID   IMAGE                    COMMAND                  CREATED       STATUS       PORTS                               NAMES
69e52b216d48   finalexercise_node-app   "node /usr/src/final-"   3 hours ago   Up 3 hours   0.0.0.0:80->80/tcp                  node-app
5f7e779e5fbd   jenkins                  "/bin/tini -- /usr/lo"   3 hours ago   Up 3 hours   0.0.0.0:8080->8080/tcp, 50000/tcp   jenkins
I can ping jenkins from node-app and vice versa.
Is this even possible? If not, am I making an architectural mistake here?
Thank you very much in advance, I appreciate it!
EDIT:
I've stumbled upon nsenter and easily entering a container's shell using this and this. However, these both assume that the origin (in their case the host machine, in my case the jenkins container) has Docker installed in order to find the PID of the destination container. I can nsenter into node-app from the host, but still no luck from jenkins.
node-app:
  build: .
  ports:
    - 80:80
  links:
    - finalexercise_jenkins_1
jenkins:
  image: jenkins
  ports:
    - 8080:8080
  volumes:
    - /home/ec2-user/final-exercise:/var/jenkins
Try the above. You are linking by image name, but you must use container name.
In your case, since you don't explicitly specify the container name, it gets auto-generated like this:
- finalexercise: the folder where your docker-compose.yml is located
- node-app: the container's config tag
- 1: you only have one container with the prefix finalexercise_node-app; if you built a second one, its name would be finalexercise_node-app_2
The setup of the yml files:
node-app:
  build: .
  container_name: my-node-app
  ports:
    - 80:80
  links:
    - my-jenkins
jenkins:
  image: jenkins
  container_name: my-jenkins
  ports:
    - 8080:8080
  volumes:
    - /home/ec2-user/final-exercise:/var/jenkins
Of course you can specify a container name for the node-app as well, so you can use something constant for the communication.
Update
In order to test, log to a bash terminal of the jenkins container
docker exec -it my-jenkins bash
Then try to ping my-node-app, or even telnet to the specific port.
ping my-node-app
Or you could
telnet my-node-app 80
Update
What you want to do is easily accomplished by the exec command.
From your host you can execute this (try it so you are sure it's working)
docker exec -i <container_name> ./deploy.sh
If the above works, then your problem comes down to executing the same command from inside a container. As it stands, you can't do that, since the container issuing the command (jenkins) doesn't have access to your host's docker installation (which not only recognises the command, but controls the container you need access to).
I haven't used either of them, but I know of two solutions:
- Use this official guide to gain access to your host's docker daemon and issue docker commands from your containers as if you were doing it from your host.
- Mount the docker binary and socket into the container, so the container acts as if it is the host (every command will be executed by the docker daemon of your host, since it's shared).
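As a sketch of the second option applied to the compose file above; /var/run/docker.sock and /usr/bin/docker are the usual locations on a Linux host, adjust if yours differ:

jenkins:
  image: jenkins
  container_name: my-jenkins
  ports:
    - 8080:8080
  volumes:
    - /home/ec2-user/final-exercise:/var/jenkins
    - /var/run/docker.sock:/var/run/docker.sock   # share the host's docker daemon
    - /usr/bin/docker:/usr/bin/docker             # and the docker CLI binary

With that in place, a docker exec -i my-node-app ./deploy.sh issued from inside the jenkins container is carried out by the host's daemon; the jenkins user will typically need permission to use the socket.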
This thread from SO gives some more insight about this issue.
