Ran docker container `--restart always` rebooted server, container/image did not restart - docker

docker run \
-d \
-e "SOME_ENV_VAR=someValue" \
-h some.host.com \
--link db-thing:db \
--name someName \
-p 5555:5555 \
--restart always \
-v /someFile:/otherFile:ro \
-v /someDir/:/otherDir/ \
web-thing
I'm using Docker 1.7.1 on CentOS. I started some containers with --restart always, then rebooted the server. Docker came back up, but none of the containers restarted. I thought they might depend on each other, so I restarted the db-thing container, but even then the others still didn't restart. What could keep the containers from restarting?
Does this have anything to do with this: How to setup linkage between docker containers so that restarting won't break it?

I tried again and it worked. Doh! My best guess is that I was developing my docker commands in a file (to check into source control) and I must have forgotten to run the version of the command that had --restart always. Embarrassing!
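For anyone hitting the same confusion: the policy a container was actually created with can be checked, and on reasonably modern Docker (the update subcommand did not exist back in 1.7) fixed in place without recreating the container. A sketch using the container name from the command above:

```shell
# Which restart policy was this container actually created with?
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' someName

# If it prints "no" (the default), apply the intended policy in place
docker update --restart always someName
```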

Related

How to completely erase a Docker container of GitLab Server from machine?

While writing an automated deployment script for a self-hosted GitLab server I noticed that my uninstallation script does not (completely) delete the GitLab server settings, nor repositories. I would like the uninstaller to completely remove all traces of the previous GitLab server installation.
MWE
#!/bin/bash
uninstall_gitlab_server() {
    gitlab_container_id=$1
    # The Docker daemon must be running for the commands below to work,
    # so do not stop the docker service first.
    sudo docker stop "$gitlab_container_id"
    sudo docker rm -f "$gitlab_container_id"
    sudo docker rmi gitlab/gitlab-ce:latest
}
uninstall_gitlab_server <some_gitlab_container_id>
Observed behaviour
When re-running the installation script after the uninstaller, the GitLab repositories are preserved, and the GitLab root user account password is preserved from the previous installation.
Expected behaviour
I would expect the docker container and hence GitLab server data to be erased from the device. Hence, I would expect the GitLab server to ask for a new root password, and I would expect it to not display previously existing repositories.
Question
How can I completely remove the GitLab server that is installed with:
sudo docker run --detach \
--hostname $GITLAB_SERVER \
--publish $GITLAB_PORT_1 --publish $GITLAB_PORT_2 --publish $GITLAB_PORT_3 \
--name $GITLAB_NAME \
--restart always \
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
-e GITLAB_ROOT_EMAIL=$GITLAB_ROOT_EMAIL -e GITLAB_ROOT_PASSWORD=$gitlab_server_password \
gitlab/gitlab-ce:latest
Stopping and removing the containers doesn't remove any host/Docker volumes you may have mounted/created.
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
You need to rm -rf $GITLAB_HOME
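Putting it together, a complete uninstall has to remove the container and the three host directories backing the volume mounts (and optionally the image). A minimal sketch using the variable names from the run command above; double-check what $GITLAB_HOME expands to before deleting anything:

```shell
# Remove the container (stop + delete in one step)
sudo docker rm -f "$GITLAB_NAME"

# Optionally remove the image as well
sudo docker rmi gitlab/gitlab-ce:latest

# Remove the host-side state: config, logs, and repository data
sudo rm -rf "$GITLAB_HOME/config" "$GITLAB_HOME/logs" "$GITLAB_HOME/data"
```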

Multiple Teamcity agents with Docker

OK, I can somewhat sense my question has nothing to do with TeamCity but rather the subtle issues surrounding Docker-in-Docker. I am trying to fire off one TeamCity agent with
docker run -it -d -e SERVER_URL="192.168.100.15:8111" \
--restart always \
--name="teamcity-agent_1" \
--mount src=docker_volumes_1,dst=/var/lib/docker,type=volume \
--mount src=$(pwd)/config,dst=/etc/docker,type=bind \
--privileged -e DOCKER_IN_DOCKER=start \
jetbrains/teamcity-agent
Works like a charm. Then I try to fire off a second agent (up to three agents are free). This used to work perfectly fine but has recently stopped...
docker run -it -d -e SERVER_URL="192.168.100.15:8111" \
--restart always \
--name="teamcity-agent_2" \
--mount src=docker_volumes_2,dst=/var/lib/docker,type=volume \
--mount src=$(pwd)/config,dst=/etc/docker,type=bind \
--privileged -e DOCKER_IN_DOCKER=start \
jetbrains/teamcity-agent
In this second container docker wouldn't start, e.g. docker images results in
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
service docker start
service docker status
confirm that Docker started successfully, but going back to docker images gives the same problem as above, and service docker status now tells me that Docker is not running!
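No answer here, but one thing worth checking: both agents bind-mount the same $(pwd)/config directory into /etc/docker, so the two inner daemons share a single daemon configuration, which may conflict. A sketch of how one might separate them; the config_2 name is made up for illustration:

```shell
# Inspect the second agent's own output for inner-daemon errors:
#   docker logs teamcity-agent_2

# Give each agent its own daemon config directory instead of sharing one
mkdir -p "$(pwd)/config_2"

# Then start the second agent with:
#   --mount src=$(pwd)/config_2,dst=/etc/docker,type=bind
```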

customizing docker-compose.yml for images from docker store

I'm new to Docker and I'm currently experimenting with https://github.com/diginc/docker-pi-hole
It's pretty straightforward if I just imagine it as a lightweight VM. I've pulled the image using docker pull diginc/pi-hole and manually started it by doing
docker run -d \
--name pi-hole \
-p 53:53/tcp \
-p 53:53/udp \
-p 8053:80 \
-e TZ=SG \
-v "/Users/me/pihole/:/etc/pihole/" \
-v "/Users/me/dnsmasq.d/:/etc/dnsmasq.d/" \
-e ServerIP="192.168.0.25" \
--restart=always \
diginc/pi-hole:alpine
everything works well, but their documentation mentions using docker_run.sh
I have no idea where/how to execute this, and the authors also suggest using docker-compose, but after pulling the project I can't find the actual directory.
Where is the directory?
What's the typical way of customizing the compose.yml?
How do I run it after I've done my customization?
The docker_run.sh is in the repository:
https://github.com/diginc/docker-pi-hole/blob/master/docker_run.sh
Just download and run it.
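docker_run.sh is just a shell script: download it (or clone the repo), make it executable with chmod +x, and run it from any directory. For Compose, you don't need any directory from the image at all; you write the compose file yourself, wherever you like. A minimal sketch mirroring the docker run flags from the question (paths and ServerIP are the question's values, adjust to taste):

```shell
# Write a compose file equivalent to the docker run command above
mkdir -p pihole-compose
cat > pihole-compose/docker-compose.yml <<'EOF'
version: "2"
services:
  pi-hole:
    image: diginc/pi-hole:alpine
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8053:80"
    environment:
      TZ: "SG"
      ServerIP: "192.168.0.25"
    volumes:
      - /Users/me/pihole/:/etc/pihole/
      - /Users/me/dnsmasq.d/:/etc/dnsmasq.d/
    restart: always
EOF

# Then, from the pihole-compose directory:
#   docker-compose up -d    # start in the background
#   docker-compose down     # stop and remove
```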

Force a problematic docker container to restart itself?

I often run into an issue with some of our docker container applications where the simple fix is to restart the container. Unfortunately this is a manual process, and we have broken functionality until we discover which container is problematic and needs to be restarted. This has me wondering: is there a good technique for automatically restarting docker containers in certain situations?
Right now I'm thinking of a combination of the --restart flag, along with forcing the application to exit when it encounters a known issue. However, I'm not sure if this is the best approach.
If your application is able to detect issues, you can easily have the container restart itself. The two important things are the --restart flag and that the application exits when it detects an issue.
Start the container in the background (-d) and set a restart policy:
docker run --restart unless-stopped -d [IMAGE] [COMMAND]
With the restart policy, you control what Docker does when the command exits. Using --restart unless-stopped tells Docker to always restart the command, no matter what its exit code was. This way, you can have your application check its own health and, if necessary, use exit(1) or something similar to shut down. When that happens, Docker follows its restart policy and starts a new container.
Although Docker doesn't really care about the return code, I would make sure that the application exits with a status code other than 0 to indicate an issue. This might be useful later if you want to analyze logs or use your container from scripts.
Edit:
I initially used --restart always in the answer, but after some consideration I think it might be better to use --restart unless-stopped here. Its behavior is more predictable, because docker stop does actually stop a service. With --restart always, docker stop will stop the container, but then start a new one again, which isn't necessarily what you want or expect to happen.
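To see the policy in action, here is a throwaway experiment (image and container name are illustrative): a container whose command keeps exiting non-zero is restarted by the daemon, which shows up in its RestartCount:

```shell
# A deliberately flaky container: its command exits with status 1 after 2s
docker run -d --restart unless-stopped --name flaky \
  alpine sh -c 'sleep 2; exit 1'

sleep 10

# RestartCount grows as Docker keeps restarting the failing command
docker inspect -f '{{.RestartCount}}' flaky

# Clean up; "docker stop" would also end the restart loop for unless-stopped
docker rm -f flaky
```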
Besides --restart=always and --restart=unless-stopped there is also --restart=on-failure[:max-retries] (Docker attempts to restart the container if it exits with a non-zero code; you can optionally specify the maximum number of restart attempts).
And we can also use Docker HEALTHCHECK instruction:
It tells Docker how to test a container to check that it is still working. This can detect cases such as a web server that is stuck in an infinite loop and unable to handle new connections, even though the server process is still running. Combined with a watcher such as autoheal, unhealthy containers can then be restarted automatically.
https://docs.docker.com/engine/reference/builder/#healthcheck
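For reference, the HEALTHCHECK instruction lives in the Dockerfile, and note that plain Docker only marks the container unhealthy; a watcher like the autoheal container shown below is what actually restarts it. A minimal sketch (nginx is just an example image; its alpine base ships wget but not curl):

```shell
# Write an example Dockerfile with a HEALTHCHECK instruction
cat > Dockerfile <<'EOF'
FROM nginx:alpine
# Probe the web server every 5s; after 3 failures the container is "unhealthy"
HEALTHCHECK --interval=5s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost/ || exit 1
EOF

# Then:
#   docker build -t web-with-healthcheck .
#   docker inspect -f '{{.State.Health.Status}}' <container>   # healthy / unhealthy
```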
Here are some examples I have used for restarting and self-healing containers:
docker run -d \
--name autoheal \
--health-cmd='stat /etc/nginx/nginx.conf' \
--health-interval=2s \
--restart=always \
-e AUTOHEAL_CONTAINER_LABEL=all \
-e DOCKER_SOCK=/var/run/docker.sock \
-e CURL_TIMEOUT=15 \
-v /var/run/docker.sock:/var/run/docker.sock \
willfarrell/autoheal
docker run -d \
--name autoheal \
--restart=always \
-e autoheal=true \
-e AUTOHEAL_DEFAULT_STOP_TIMEOUT=10 \
-e AUTOHEAL_INTERVAL=5 \
-e AUTOHEAL_START_PERIOD=0 \
-p 8080:8080 testing:latest
docker run -d --health-cmd='curl -sS http://localhost:80 || exit 1' \
--health-timeout=10s \
--health-retries=3 \
--health-interval=5s \
-p 8080:9080 testing:latest
You can use the following command to restart a container manually:
docker restart $container

How to pass docker options --mac-address, -v etc in kubernetes?

I have installed a 50 node Kubernetes cluster in our lab and am beginning to test it. The problem I am facing is that I cannot find a way to pass the docker options needed to run the docker container in Kubernetes. I have looked at kubectl as well as the GUI. An example docker run command line is below:
sudo docker run -it --mac-address=$MAC_ADDRESS \
-e DISPLAY=$DISPLAY -e UID=$UID -e XAUTHORITY=$XAUTHORITY \
-e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
-v /tmp/.X11-unix:/tmp/.X11-unix:ro \
-v /mnt/lab:/mnt/lab -v /mnt/stor01:/mnt/stor01 \
-v /mnt/stor02:/mnt/stor02 -v /mnt/stor03:/mnt/stor03 \
-v /mnt/scratch01:/mnt/scratch01 \
-v /mnt/scratch02:/mnt/scratch02 \
-v /mnt/scratch03:/mnt/scratch03 \
matlabpipeline $ARGS
My first question is whether we can pass these docker options at all. If there is a way to pass them, how do I do it?
Thanks...
I looked into this as well and from the sounds of it this is an unsupported use case for Kubernetes. Applying a specific MAC address to a docker container seems to conflict with the overall design goal of easily bringing up replica instances. There are a few workarounds suggested on this Reddit thread. In particular the OP finally decides the following...
I ended up adding the NET_ADMIN capability and changing the MAC to an environment variable with "ip link" in my entrypoint.sh.
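For what it's worth, most of the other docker run flags do have Kubernetes equivalents: -e maps to env, -v to a hostPath volume plus volumeMount, and the NET_ADMIN approach from the Reddit thread maps to a securityContext capability. A sketch of such a pod spec (only two of the mounts shown; the names, MAC value, and entrypoint convention are illustrative):

```shell
# Write an example pod spec mirroring parts of the docker run command
cat > matlab-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: matlabpipeline
spec:
  containers:
  - name: matlabpipeline
    image: matlabpipeline
    env:
    - name: DISPLAY
      value: ":0"
    - name: MAC_ADDRESS          # consumed by an "ip link" call in the entrypoint
      value: "02:42:ac:11:00:02"
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]       # required for "ip link set ... address ..."
    volumeMounts:
    - name: x11
      mountPath: /tmp/.X11-unix
      readOnly: true
    - name: lab
      mountPath: /mnt/lab
  volumes:
  - name: x11
    hostPath:
      path: /tmp/.X11-unix
  - name: lab
    hostPath:
      path: /mnt/lab
EOF

# kubectl apply -f matlab-pod.yaml
```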