Docker container keeps coming back

I have installed gitlab-ce on my server with the following docker-compose.yml:
gitlab:
  image: 'gitlab/gitlab-ce:latest'
  hostname: 'gitlab.devhunt.eu'
  restart: 'always'
  environment:
    # ...
  volumes:
    - '/var/lib/gitlab/config:/etc/gitlab'
    - '/var/lib/gitlab/logs:/var/log/gitlab'
    - '/var/lib/gitlab/data:/var/opt/gitlab'
I used it for a while and now I want to remove it. I noticed that when I ran docker stop gitlab (gitlab being the container name), the container kept coming back, so I figured it was because of restart: always. Thus the struggle began:
I tried docker update --restart=no gitlab before docker stop gitlab. It still came back.
I ran docker stop gitlab && docker rm gitlab. The container got deleted but came back soon after.
I changed the docker-compose.yml to restart: "no" and ran docker-compose down. The container was stopped and deleted but came back soon after.
I ran docker-compose up to apply the change in the compose file and checked that it was taken into account with docker inspect -f "{{ .HostConfig.RestartPolicy }}" gitlab; the response was {no 0}. Then I ran docker-compose down. Again, the container was stopped and deleted but came back soon after.
I ran docker stop gitlab && docker rm gitlab && docker image rm fcc1e4187c43 (the image hash I had for gitlab-ce). The container was stopped and deleted and the image was removed. It seemed I had finally managed to kill the beast... but an hour later, the gitlab container was reinstalled with another image hash (3cc8e8a0764d) and was starting again.
I would stop the Docker daemon, but I have production websites and databases running and would like to avoid downtime if possible. Any idea what I can do?

You've set the restart policy to always; set it to unless-stopped instead.
Check the docs: https://docs.docker.com/config/containers/start-containers-automatically/
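As a sketch, assuming the container name gitlab from the question, the change can be applied both to the running container and in the compose file:

```shell
# change the policy on the live container, then stop it;
# unless-stopped will not bring it back after a manual stop
docker update --restart=unless-stopped gitlab
docker stop gitlab
```

```yaml
gitlab:
  image: 'gitlab/gitlab-ce:latest'
  restart: 'unless-stopped'
```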

Related

Updating a docker container from image; leaves old images on server

My process for updating a docker image to production (a docker swarm) is as follows:
On dev environment:
docker-compose build
docker push myrepo/name
Then on the prod server, which is a docker swarm:
docker pull myrepo/name
docker service update --image myrepo/name --with-registry-auth containername
This works perfectly; the swarm is updated with the latest image.
However, it always leaves the old image on the live servers and I'm left with something like this:
docker image ls
REPOSITORY    TAG      IMAGE ID   CREATED          SIZE
myrepo/name   latest   abcdef     14 minutes ago   1.15GB
myrepo/name   <none>   bcdefg     4 days ago       1.22GB
myrepo/name   <none>   cdefgh     6 days ago       1.22GB
Which, over time, results in a heap of disk space being used unnecessarily.
I've read that docker system prune is not safe to run in production, especially in a swarm.
So I am having to regularly and manually remove old images, e.g.
docker image rm bcdefg cdefgh
Am I missing a step in my update process, or is it 'normal' that old images are left over to be manually removed?
Thanks in advance
Since you are using Docker Swarm, and probably a multi-node setup, you could deploy a global service which does the cleanup for you. We use Bret Fisher's approach for this:
version: '3.9'
services:
  image-prune:
    image: internal-image-registry.org/proxy-cache/library/docker:20.10
    command: sh -c "while true; do docker image prune -af --filter \"until=4h\"; sleep 14400; done"
    networks:
      - bridge
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      mode: global
      labels:
        - "env=devops"
        - "application=cleanup-image-prune"
networks:
  bridge:
    external: true
    name: bridge
When a new host is added, the service gets deployed on it automatically with our own base Docker image and then does the cleanup job for us.
We have not yet had time to look into the newer Docker service types that are scheduled on their own. It would probably be wise to move the cleanup to the replicated jobs provided by Docker instead of an infinite loop in a script; the current approach just works for us, so we did not make swapping over a high priority. More info on replicated jobs is in the Docker documentation.
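The job mode mentioned above (available since Docker Engine 20.10) might look roughly like this; a sketch, not a tested stack file, keeping the same prune command and assuming the job is re-triggered externally (e.g. by cron or CI) since swarm jobs run once and do not loop:

```yaml
version: '3.9'
services:
  image-prune-job:
    image: docker:20.10
    command: docker image prune -af --filter "until=4h"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      mode: global-job   # runs once on every node, then exits
```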

How to implement changes made to docker-compose.yml to detached running containers

The project is currently running in the background from this command:
docker-compose up -d
I need to make two changes to their docker-compose.yml:
Add a new container
Update a previous container to have a link to the new container
After changes are made:
NOTE the "<--" arrows for my changes
web:
  build: .
  restart: always
  command: ['tini', '--', 'rails', 's']
  environment:
    RAILS_ENV: production
    HOST: example.com
    EMAIL: admin@example.com
  links:
    - db:mongo
    - exim4:exim4.docker # <-- Add link
  ports:
    - 3000:3000
  volumes:
    - .:/usr/src/app
db:
  image: mongo
  restart: always
exim4: # <-------------------------------- Add new container
  image: exim4
  restart: always
  ports:
    - 25:25
  environment:
    EMAIL_USER: user@example.com
    EMAIL_PASSWORD: abcdabcdabcdabcd
After making the changes, how do I apply them? (without destroying anything)
I tried docker-compose down && docker-compose up -d but this destroyed the Mongo DB container... I cannot do that... again... :sob:
docker-compose restart says it won't recognize any changes made to docker-compose.yml
(Source: https://docs.docker.com/compose/reference/restart/)
docker-compose stop && docker-compose start sounds like it'll just startup the old containers without my changes?
Test server:
Docker version: 1.11.2, build b9f10c9/1.11.2
docker-compose version: 1.8.0, build f3628c7
Production server is likely using older versions, unsure if that will be an issue?
If you just run docker-compose up -d again, it will notice the new container and the changed configuration and apply them.
But:
(without destroying anything)
There are a number of settings that can only be set at container startup time. If you change these, Docker Compose will delete and recreate the affected container. For example, links are a startup-only option, so re-running docker-compose up -d will delete and recreate the web container.
this destroyed the Mongo DB container... I cannot do that... again...
db:
  image: mongo
  restart: always
Add a volumes: option to this so that data is stored outside the container. You can keep it in a named volume, possibly managed by Docker Compose, which has some advantages, but a host-system directory is probably harder to accidentally destroy. You will have to delete and restart the container to change this option. But note that you will also have to delete and restart the container if, for example, there is a security update in MongoDB and you need a new image.
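A sketch of that change using a Compose-managed named volume (mongodb_data is an arbitrary name chosen for illustration; /data/db is where the official mongo image stores its data):

```yaml
db:
  image: mongo
  restart: always
  volumes:
    - mongodb_data:/data/db

volumes:
  mongodb_data:
```

With this in place the db container can be deleted and recreated without losing the database contents, as long as the volume itself is not removed.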
Your ideal state here is:
Actual databases (like your MongoDB container) store data in named volumes or host directories
Applications (like your Rails container) store nothing locally, and can be freely destroyed and recreated
All code is in Docker images, which can always be rebuilt from source control
Use volumes as necessary to inject config files and extract logs
If you lose your entire /var/lib/docker directory (which happens!) you shouldn't actually lose any state, though you will probably wind up with some application downtime.
Just docker-compose up -d will do the job.
Output should be like
> docker-compose up -d
Starting container1 ... done
> docker-compose up -d
container1 is up-to-date
Creating container2 ... done
As a side note, docker-compose is not really for production. You may want to consider docker swarm.
The key here is that up is idempotent.
If you update the configuration in docker-compose.yaml, run:
docker compose up -d
If Compose builds images before running them and you want to rebuild:
docker compose up -d --build

Don't create containers twice with docker-compose up when containers are already running via docker run

I'd like docker-compose to use an already running container for imageA and not create it a second time when calling docker-compose up -d. The original container was run using docker run.
Steps:
I started a container with docker run, eg.
docker run --name imageA -d -p 5000:5000 imageA
I then call docker-compose up -d with a docker-compose.yml file that includes a service with the same name and image as the first container.
version: "3"
services:
  imageA:
    image: imageA
    ports:
      - "5000:5000"
  imageB:
    image: imageB
    ports:
      - "5001:5001"
What happens:
docker-compose tries to create imageA and fails when it tries to bind port 5000 since container imageA has it bound already.
Question: How can docker-compose "adopt" or "include" the first container without trying to create it again?
I don't believe this is currently possible. If you compare the output of docker ps and docker-compose ps, you should notice that docker-compose ps does not show imageA as running if it was started with docker run.
docker-compose is only interested in the services defined in the docker-compose files, and it does not seem to identify them by container name alone but by labels too, and you currently cannot add labels to a running container.
Other than that, the container started with docker run will also not (at least by default) be in the same internal network as those started with docker-compose.
So your best option would be either:
a) Removing the already-running container from the compose file,
b) Calling docker-compose up -d imageB to run only that individual service, so that Compose updates only it, or
c) just stopping the already-running container and starting it again with Compose.
Docker containers should in any case be built so that it is easy and acceptable to simply restart them when needed.
Adding the --no-recreate flag will prevent recreation of a container if it already exists.
Example:
docker-compose -f docker-compose-example.yaml up -d --no-recreate

docker-compose restart interval

I have a docker-compose.yml file with a following:
services:
  kafka_listener:
    build: .
    command: bundle exec ./kafka foreground
    restart: always
  # other services
Then I start containers with: docker-compose up -d
On my Amazon instance the Kafka server (for example) sometimes fails to start, so the ./kafka foreground script fails. When I type docker ps I see the message: Restarting (1) 11 minutes ago. I thought Docker would restart a failed container instantly, but it seems it doesn't. In the end, the container was restarted about 30 minutes after the first failed attempt.
Is there any way to tell Docker-Compose to restart container instantly after failure?
You can use these policies:
on-failure
The on-failure policy is a bit interesting, as it tells Docker to restart a container if the exit code indicates an error but not if it indicates success. You can also specify a maximum number of times Docker will automatically restart the container, e.g. on-failure:3 will retry 3 times.
unless-stopped
The unless-stopped restart policy behaves the same as always, with one exception: if the container was stopped and the server is rebooted or the Docker service is restarted, the container will not be restarted.
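Applied to the compose file from the question, this might look like the sketch below (a guess at the intended setup; the new policy only takes effect once the container is recreated, e.g. by docker-compose up -d):

```yaml
services:
  kafka_listener:
    build: .
    command: bundle exec ./kafka foreground
    restart: on-failure:3   # retry up to 3 times on non-zero exit
```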
Hope this will help you in this problem.
Thank you!

Jenkins inside docker loses configuration when container is restarted

I have followed this guide https://hub.docker.com/r/iliyan/jenkins-ci-php/ to download the Docker image with Jenkins.
When I start my container using docker start CONTAINERNAME command, I can access to Jenkins from localhost:8080.
The problem comes up when I change the Jenkins configuration and restart Jenkins using docker stop CONTAINERNAME and docker start CONTAINERNAME: my Jenkins doesn't contain any of my previous configuration changes.
How can I persist the Jenkins configuration?
You need to mount the Jenkins configuration as a volume; the -v flag will do just that for you. (You can ignore the --privileged flag in my example unless you plan on building Docker images inside your Jenkins container.)
docker run --privileged --name='jenkins' -d -p 6999:8080 -p 50000:50000 -v /home/jan/jenkins:/var/jenkins_home jenkins:latest
The -v flag mounts the container's /var/jenkins_home to /home/jan/jenkins on the host, maintaining it between rebuilds.
--name gives the container a fixed name to start / stop it by.
Then next time you want to run it, simply call
docker start jenkins
My understanding is that the init script
/sbin/tini -- /usr/local/bin/jenkins.sh
is resetting the Jenkins configuration on startup within the folder provided through the
JENKINS_HOME env var,
whether it is mounted outside the Docker VM or not.
It is, however, possible to store the configuration on GitHub using the
"Configure System" / "SCM Sync configuration" / Git
section.
See a possible detailed configuration here
You can use this docker-compose file:
version: '3.1'
services:
  jenkins:
    image: jenkins:latest
    container_name: jenkins
    restart: always
    environment:
      TZ: GMT
    volumes:
      - ./jenkins_host:/var/jenkins_home
    ports:
      - 8080:8080
    tty: true
You only need to share the Jenkins volume ./jenkins_host:/var/jenkins_home with the host folder.
Besides the obvious (such as disabling any run parameters that wipe the image), you can do a few things:
use docker commit and reuse the committed container
mount the part you write to on the local file system with Docker volumes
my favorite: use the command
docker container restart containername
Depending on your needs you can pick one.
I use the latter, for example, when testing Jenkins plugins, and it retains the data inside.
Source of the latter, which is also useful for updates:
https://jimkang.medium.com/how-to-start-a-new-jenkins-container-and-update-jenkins-with-docker-cf628aa495e9
