I've got a gitlab-ci.yml job that brings up two Docker containers. The first is a standard Redis container; the second runs a custom process that populates the Redis database. The goal of this job is to grab the Redis container's dump.rdb file and pass it along to other jobs. My job therefore looks like this:
script:
  - cd subdir
  - docker run -d --name redis -v $PWD/redis-data:/data redis:latest
  - docker run my_custom_image
  - docker stop redis
artifacts:
  paths:
    - subdir/redis-data/dump.rdb
  expire_in: 5 minutes
So far so good. my_custom_image connects to the redis container and reports that it loaded everything correctly, and both containers shut down cleanly. However, subdir/redis-data is empty. No dump.rdb, nothing. Adding an ls -la subdir/redis-data after the docker stop call confirms this. I can run this on my machine and everything works correctly; it's only on GitLab that it breaks.
It looks to me like the gitlab-runner isn't running every step in the same directory, but that doesn't make much sense. Can anyone tell me why the mounted volume isn't getting the dump.rdb?
I found an old bug report here from someone who was having the same problem. It didn't look like anyone was able to fix the issue, but they suggested just doing a docker cp to pull the data out of the running container. So I did:
script:
  - cd subdir
  - docker run -d --name redis -v $PWD/redis-data:/data redis:latest
  - docker run my_custom_image
  - echo SAVE | docker exec -i redis redis-cli
  - docker cp redis:/data/dump.rdb redis-data/dump.rdb
  - docker stop redis
artifacts:
  paths:
    - subdir/redis-data/dump.rdb
  expire_in: 5 minutes
The SAVE command ensures Redis has flushed everything to disk before docker cp pulls out the dump file. This is working fine for me; I wish it were more elegant, but there you go.
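Incidentally, redis-cli accepts the command directly as arguments, so the echo pipe can be dropped (same effect, assuming the container is still named redis):

docker exec redis redis-cli SAVE                     # synchronous dump to /data/dump.rdb inside the container
docker cp redis:/data/dump.rdb redis-data/dump.rdb   # copy the dump out before stopping the container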
I am learning Docker and I just encountered a problem I cannot solve.
I want to update the source code on my Docker swarm nodes when I make changes and push them. I just have an index.php which echoes "Hello World" and shows phpinfo. I am using data volumes, since they're recommended for production (bind mounts for dev).
My problem is: how do I update source code while using volumes? What's the best practice for this scenario?
Currently, when I push changes to my index.php to GitLab, my gitlab-runner rebuilds the Docker image and updates my swarm service.
This works when I change the PHP version in my Dockerfile, but changes to index.php don't take effect.
My example Dockerfile looks like this. I just copy index.php to /var/www/html in the container and that's it.
When I deploy my swarm stack the first time, everything works.
FROM php:7.4.5-apache
# copy files
COPY src/index.php /var/www/html/
# apache settings
RUN echo 'ServerName localhost' >> /etc/apache2/apache2.conf
My gitlab-ci.yml looks like this:
build docker image:
  stage: build
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest
  tags:
    - build-image

deploy docker image:
  stage: deploy
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker service update --with-registry-auth --image $CI_REGISTRY_IMAGE:latest $SWARM_SERVICE_NAME -d
  tags:
    - deploy-stack
Docker images generally contain an application's source code and the dependencies required to run it. Volumes are used for persistent data that needs to be preserved across changes to the underlying application. Imagine a database: if you upgraded from somedb:1.2.3 to somedb:1.2.4, you'd need to replace the database application binary (in the image) but would need to preserve the actual database contents (in a volume).
Especially in a clustered environment, don't try storing your application code in volumes. If you delete the part of your deployment setup that attempts this, then when containers redeploy with an updated image, they'll see the updated code.
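As a sketch of what that leaves you with (the image and service names here are illustrative, not from your setup): the code is baked into the image and nothing is mounted over /var/www/html, so each docker service update rolls the containers onto the new code.

# illustrative docker-stack.yml: no volume shadows /var/www/html,
# so updating the image updates the code the containers serve
version: '3.7'
services:
  web:
    image: registry.example.com/myproject/web:latest
    ports:
      - "80:80"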
I would like a standard way of running my Docker containers. I have been keeping a docker_run.sh file, but docker-compose.yml looks like a better choice. This seems to work great until I try to access my website running in the container. The ports don't seem to be set up correctly.
Using the following docker_run.sh, I can access the website at localhost. I expected the following docker-compose.yml file to produce the same result when I use the docker-compose run web command.
docker_run.sh
docker build -t web .
docker run -it -v /home/<user>/git/www:/var/www -p 80:80/tcp -p 443:443/tcp -p 3316:3306/tcp web
docker-compose.yml
version: '3'
services:
  web:
    image: web
    build: .
    ports:
      - "80:80"
      - "443:443"
      - "3316:3306"
    volumes:
      - "../www:/var/www"
Further analysis
The ports are reported as the same in docker ps and docker-compose ps. Note: these were not up at the same time.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
<id> web "/usr/local/scripts/…" About an hour ago Up About an hour 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:3307->3306/tcp <name>
$ docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------------------------------------------
web /usr/local/scripts/start_s ... Up 0.0.0.0:3316->3306/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
What am I missing?
As @richyen suggests in a comment, you want docker-compose up instead of docker-compose run.
docker-compose run...
Runs a one-time command against a service.
That is, it's intended to run something like a debugging shell or a migration script, in the overall environment specified by the docker-compose.yml file, but not the standard command specified in the Dockerfile (or the override in the YAML file).
Critically to your question,
...docker-compose run [...] does not create any of the ports specified in the service configuration. This prevents port collisions with already-open ports. If you do want the service’s ports to be created and mapped to the host, specify the --service-ports flag.
Beyond that, the docker run command you show and the docker-compose.yml file should be essentially equivalent.
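Concretely, with the compose file above, either of these should get you the published ports:

docker-compose up -d                      # start the service normally; ports are published
docker-compose run --service-ports web    # one-off run, but with the service's ports mapped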
You don't run docker-compose.yml files the same way that you would run a local Docker image that you have either pulled or built on your machine. docker-compose files are typically launched with docker-compose up -d to run in detached mode. Then when you run docker ps you should see the container running. You can also run docker-compose ps as you did above.
I have a dockerized nginx server which I can build and run on my local machine without any problems. So now I want to deploy this with the help of a gitlab runner.
This is my simple Dockerfile:
FROM nginx
COPY web /usr/share/nginx/html
EXPOSE 80
So if I build and run this on my local machine, it works. But I have a first question at this point: where does Docker put the nginx server's files? Because if I look at /usr/share on my own machine, there is nothing there.
Now if I push my project to gitlab, register a runner and let it execute the following gitlab-ci file:
image: docker:stable

variables:
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

before_script:
  - docker info

build:
  script:
    - docker build -t bd24_nginx .
    - docker run -d -p 80:80 bd24_nginx
... the job finishes just fine. There are no errors in the console output on the GitLab page. This is the output:
Successfully built 9903dc370422
Successfully tagged bd24_nginx:latest
$ docker run -d -p 80:80 bd24_nginx
b1e24c7cf9af8a43b3c2418d1ca1b90a58e445eb6b0b0ac9cde61f99be8cff7b
Job succeeded
But if I now visit the IP address of my server, the static HTML test page doesn't show up. So I suspect there is something wrong with the paths? Or am I missing something completely?
Thanks in advance.
Using Docker means the data are copied into the corresponding container. When Docker has finished running, the container does not keep any data.
You might try to mount a host directory into the container in order to have persistent storage.
See this answer for instance.
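For example, something along these lines (the host path is illustrative) would serve content from a directory that outlives the container:

docker run -d -p 80:80 -v $PWD/web:/usr/share/nginx/html nginx   # bind-mount the host's web/ dir over nginx's html dir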
Hope this helps!
I followed this guide https://hub.docker.com/r/iliyan/jenkins-ci-php/ to download the Docker image with Jenkins.
When I start my container using the docker start CONTAINERNAME command, I can access Jenkins at localhost:8080.
The problem comes up when I change the Jenkins configuration and restart Jenkins using docker stop CONTAINERNAME and docker start CONTAINERNAME: my Jenkins doesn't contain any of my previous configuration changes.
How can I persist the Jenkins configuration?
You need to mount the Jenkins configuration as a volume; the -v flag will do just that for you. (You can ignore the --privileged flag in my example unless you plan on building Docker images inside your Jenkins Docker image.)
docker run --privileged --name='jenkins' -d -p 6999:8080 -p 50000:50000 -v /home/jan/jenkins:/var/jenkins_home jenkins:latest
The -v flag will mount your /var/jenkins_home outside your container in /home/jan/jenkins, maintaining it between rebuilds.
--name so that you have a fixed name for the container to start / stop it by.
Then next time you want to run it, simply call
docker start jenkins
My understanding is that the init script
/sbin/tini -- /usr/local/bin/jenkins.sh
is resetting the Jenkins configuration on startup within the folder provided through the JENKINS_HOME env var, whether mounted outside the Docker VM or not.
It is, however, possible to store the configuration on GitHub, using the "Configure System" / "SCM Sync configuration" / Git section.
See a possible detailed configuration here.
You can use this docker-compose file:
version: '3.1'
services:
  jenkins:
    image: jenkins:latest
    container_name: jenkins
    restart: always
    environment:
      TZ: GMT
    volumes:
      - ./jenkins_host:/var/jenkins_home
    ports:
      - 8080:8080
    tty: true
You only need to share the Jenkins volume ./jenkins_host:/var/jenkins_home with a folder on the host.
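Then bring it up with:

docker-compose up -d   # the configuration survives restarts, since it lives in ./jenkins_host on the host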
Besides the obvious, like run parameters that clear the image's data (which you should disable), you can do a few things:
- use docker commit and reuse the committed container
- mount the paths you write to on the local file system with docker volumes
- my favorite: use the command
docker container restart containername
Depending on your needs you can pick one.
I use the latter, for example, when testing Jenkins plugins, and it retains the data inside.
Source of the latter that is also useful for updates:
https://jimkang.medium.com/how-to-start-a-new-jenkins-container-and-update-jenkins-with-docker-cf628aa495e9
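For the docker commit option, a minimal sketch (the container and image names are illustrative):

docker commit jenkins jenkins-snapshot                        # freeze the container's current state into an image
docker run -d -p 8080:8080 --name jenkins2 jenkins-snapshot   # start a new container from that snapshot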
I am trying to get my head around the command option in docker-compose. In my current docker-compose.yml I start the Prosody Docker image (https://github.com/prosody/prosody-docker) and I want to create a list of users when the container is actually started.
The documentation of the container states that a user can be made using environment options LOCAL, DOMAIN, and PASSWORD, but this is a single user. I need a list of users.
Reading around the internet, it seemed that using the command option I should be able to execute commands in a starting or running container.
xmpp:
  image: prosody/prosody
  command: prosodyctl register testuser localhost testpassword
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
But this does not seem to work; I checked the running container using docker exec -it <imageid> bash, but the user is not created.
Is it possible to execute a command on a started container using docker-compose or are there other options?
The COMMAND instruction is exactly the same as what is passed at the end of a docker run command, for example echo "hello world" in:
docker run debian echo "hello world"
If the image defines an ENTRYPOINT, the command is interpreted as arguments to it; plain debian has no entrypoint, so the command is executed directly. In the case of your image, it gets passed to this script. Looking at that script, your command will just get passed to the shell. I would have expected any command you pass to run successfully, but the container will exit once your command completes. Note that the default command is set in the Dockerfile to CMD ["prosodyctl", "start"], which is presumably a long-running process that starts the server.
I'm not sure how Prosody works (or even what it is), but I think you probably want to either map in a config file which holds your users, or set up a data container to persist your configuration. The first solution would mean adding something like:
volumes:
  - my_prosody_config:/etc/prosody
To the docker-compose file, where my_prosody_config is a directory holding the config files.
The second solution could involve first creating a data container like:
docker run -v /etc/prosody -v /var/log/prosody --name prosody-data prosody-docker echo "Prosody Data Container"
(The echo should complete, leaving you with a stopped container which has volumes set up for the config and logs. Just make sure you don't docker rm this container by accident!)
Then in the docker-compose file add:
volumes_from:
  - prosody-data
Hopefully you can then add users by running docker exec as you did before, then running prosodyctl register at the command line. But this is dependent on how prosody and the image behave.
CMD is directly related to ENTRYPOINT in Docker (see this question for an explanation). So when changing one of them, you also have to check how this affects the other. If you look at the Dockerfile, you will see that the default command is to start Prosody through CMD ["prosodyctl", "start"]. entrypoint.sh just passes this command through, as Adrian mentioned. However, your command overrides the default command, so your Prosody daemon is never started. Maybe you want to try something like
xmpp:
  image: prosody/prosody
  command: sh -c "prosodyctl register testuser localhost testpassword && prosodyctl start"
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
instead. More elegant, and likely what the creator intended (judging from the entrypoint.sh script), would be something like
xmpp:
  image: prosody/prosody
  environment:
    - LOCAL=testuser
    - DOMAIN=localhost
    - PASSWORD=testpassword
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
To answer your final question: no, it is not possible (as of now) to execute commands on a running container via docker-compose. However, you can easily do this with docker:
docker exec -i prosody_container_name prosodyctl register testuser localhost testpassword
where prosody_container_name is the name of your running container (use docker ps to list running containers).