If a Docker container is a process, how can it be restarted? - docker

The official docs define a Docker container as a process.
How accurate is that, given that a process by definition is always running and can't be stopped/restarted?

A container is a process with some configuration and namespaces attached to it for isolation. That configuration includes which image to use and any settings you passed on the docker run command line or in your compose yml file. You can view this configuration with docker container inspect.
Among the namespaces attached to the container is a filesystem namespace that includes a read/write layer holding any changes made inside the container that weren't written to a volume. You can list these changes with docker diff on your container.
When you stop the container, the running process is killed, but the configuration and the container filesystem remain. If you restart the container, the process is started again with the same configuration. When you delete a container, the configuration and the read/write filesystem layer are removed.
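As a sketch, this lifecycle can be observed from the CLI. `web` is a hypothetical container name, and the guard makes the block a no-op if Docker isn't installed or no such container exists:

```shell
CONTAINER=web   # hypothetical name; substitute your own

# Only proceed if Docker is installed and the container exists
if docker container inspect "$CONTAINER" >/dev/null 2>&1; then
    # Full configuration: image, env vars, command, restart policy, ...
    docker container inspect "$CONTAINER"

    # Files added (A), changed (C) or deleted (D) in the read/write layer
    docker diff "$CONTAINER"

    # stop kills the process; start runs it again with the same config
    docker stop "$CONTAINER"
    docker start "$CONTAINER"

    # rm deletes the configuration and the read/write layer for good
    docker rm -f "$CONTAINER"
fi
```

After the stop/start pair, docker diff still reports the same filesystem changes; after docker rm they are gone.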

Related

How to restart rabbitmq inside docker

I am using the rabbitmq Docker image https://hub.docker.com/_/rabbitmq
I wanted to change the rabbitmq.conf file inside the container to test a config, so I tried
rabbitmqctl stop
after making the changes. But this stops the whole Docker container.
I even tried
rabbitmq-server restart
but that too doesn't work, complaining that the ports are in use.
How do I restart the service without restarting the whole container?
Normally Docker containers are built so that they live as long as their main process does. When the application exits, normally or otherwise, so does the container.
It is easier to accept this behavior than to fight it: create the config on your host machine and mount it inside the container. After that you can edit the local file and restart the container to make the application read the updated configuration. To do so:
# Copy config from the container to your machine
docker cp <insert container name>:/etc/rabbitmq/rabbitmq.config .
# (optional) make changes to the copied rabbitmq.config
...
# Start a new container with the config mounted inside (substitute /host/path
# with a full path to your local config file)
docker run -v /host/path/rabbitmq.config:/etc/rabbitmq/rabbitmq.config <insert image name here>
# Now local config appears inside the container and so all changes
# are available immediately. You can restart the container to restart the application.
If you prefer the hard way, you can customize the container so that it starts a script (which you have to write) instead of rabbitmq. The script has to start the server in the background and check whether the process is still alive. You can find hints on how to do that in the "Run multiple services in a container" article.
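A minimal sketch of such a wrapper script. `SERVER_CMD` is a placeholder (here a short `sleep` so the sketch runs anywhere); you would replace it with the real rabbitmq-server invocation:

```shell
#!/bin/sh
# Hypothetical placeholder; replace with the real server command,
# e.g. rabbitmq-server
SERVER_CMD=${SERVER_CMD:-"sleep 2"}

# Start the server in the background
$SERVER_CMD &
PID=$!

# kill -0 checks that the process is still alive without signalling it
while kill -0 "$PID" 2>/dev/null; do
    sleep 1
done

# Propagate the server's exit status so Docker sees it
wait "$PID"
```

Because the script polls instead of exec'ing the server, you could add restart logic inside the loop without the container itself exiting.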

When do I need Docker Volumes?

Trying to make sure I understand the proper usage of Docker volumes. If I have a container running MongoDB that I plan to start and stop, do I need a volume configured when I "docker run" the first time? My understanding is that if I use docker run once, then docker stop/start, my data is saved inside the container. The volume is more useful if multiple containers want access to the data. Is that accurate, or am I misunderstanding something?
Starting and stopping a container will not delete the container-specific data. However, you upgrade containers by replacing them with new ones. Any changes to the container-specific read/write layer are lost when that happens, and the new container goes back to its initial state. If there are files inside your container that you want to preserve when the container is replaced, you need to store those files in a volume and then mount that same volume in the new container.
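For example, a compose file along these lines (a sketch; the service and volume names are illustrative) keeps MongoDB's data in a named volume, so replacing the container with a newer one preserves the database:

```yml
version: '2'
services:
  mongo:
    image: mongo
    volumes:
      # named volume: outlives any individual container
      - mongo-data:/data/db
volumes:
  mongo-data:
```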

using docker-compose on a kubernetes instance with jenkins - mounting empty volumes

I have a Jenkins instance set up using Google's Jenkins on Kubernetes solution. I have not changed any of the settings of the Kubernetes Pod.
When I trigger a new job I am successfully able to get everything up and running until the point of my tests.
My tests use docker-compose. First I make sure to install docker (1.5-1+b1) and docker-compose (1.8.0-2) on the instance (I know I can optimize this by using an image that already includes them, but I am still at the proof-of-concept stage).
When I run the docker-compose up command, everything works and the services start their initialization scripts. However, the mounts are empty. I have verified that the files exist on the Jenkins slave, and the mount is created inside the Docker service when I run docker-compose, yet it is empty.
Some information:
In order to get around file permissions I am using /tmp as the Jenkins Workspace. I am using SCM to pull my files (successfully) and in the docker-compose file I specify version: '2' and the mount paths with absolute paths. The volume section of the service that fails looks like this:
volumes:
- /tmp/automation:/opt/automation
I changed the command that is run in the service to ls /opt/automation and the result is an empty directory.
What am I missing? I just want to mount a directory into my docker-compose service. This works perfectly from Windows, Ubuntu, and CentOS devices. Why won't it work on the Kubernetes instance?
I found the reason it fails here:
A Docker container inside a Docker container uses the parent HOST's Docker daemon, so any volumes mounted in the "docker-in-docker" case are still resolved against the HOST, not the container.
Therefore, the actual path mounted from the Jenkins container "does not exist" on the HOST. Because of this, a new, empty directory is created in the "docker-in-docker" container. The same thing applies when a directory is mounted into a new Docker container started from inside a container.
So it seems it is impossible to mount something from the outer Docker into the inner Docker, and another solution must be found.
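One workaround is to drop the bind mount and copy the files in through the client instead, since docker cp streams data from wherever the client runs rather than resolving a path on the daemon's host. A sketch (container and image names are hypothetical; the guard makes it a no-op when Docker or the image isn't available):

```shell
CONTAINER=automation-svc   # hypothetical name
IMAGE=my-test-image        # hypothetical image

# Only run if Docker is installed and the image exists locally
if command -v docker >/dev/null 2>&1 \
   && docker image inspect "$IMAGE" >/dev/null 2>&1; then
    # Create the container without the bind mount
    docker create --name "$CONTAINER" "$IMAGE"

    # Copy the workspace in through the client (no host path resolution)
    docker cp /tmp/automation/. "$CONTAINER":/opt/automation

    # Start it with the files already in place
    docker start "$CONTAINER"
fi
```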

Using Persistent Volumes in Docker

I have a Docker container running on my Mac. This Docker container has a home folder like:
/home/my_user/my_project/
It is based on an Ubuntu OS image and runs on my Mac. While the container is running, it constantly updates a folder under the my_project folder. When I stop and remove this container, the data is erased, and when I start a new instance of the container, the process has to begin all over again: the container starts writing into the my_project folder, but the old files it already wrote are completely lost.
How can I make the data written by the container be persistent even after a container delete / restart?
Docker persistent volumes are what I understand I need, but how can I mount a local folder on my Mac so that the data is written there and persisted? This container could also run on a Windows machine, so how can I make a persistent volume work across different OSes?
You need to start your container with the -v flag. So if you wanted to back the container's /home/my_user/my_project directory with the host directory /srv/my_app/data, for example, you'd use it as follows:
docker run -v /srv/my_app/data:/home/my_user/my_project IMAGE_NAME
There's also a difference between volumes and bind mounts, which I explained here
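As a sketch of the two forms (IMAGE_NAME is a placeholder as above; the guard makes the block a no-op unless Docker is installed and that image exists):

```shell
IMAGE=IMAGE_NAME   # placeholder, as above

# Only run if Docker is installed and the image exists locally
if docker image inspect "$IMAGE" >/dev/null 2>&1; then
    # Bind mount: you pick and manage a host path yourself
    docker run -v /srv/my_app/data:/home/my_user/my_project "$IMAGE"

    # Named volume: Docker manages where the data lives, which is why
    # the same command works on Mac, Windows and Linux hosts
    docker volume create my_project_data
    docker run -v my_project_data:/home/my_user/my_project "$IMAGE"
fi
```

The named-volume form is usually the answer to the cross-OS question, since it avoids hard-coding a host-specific path.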

Run executable from host within docker container

I have a docker container and I would like to start a process in the host OS, and then have it execute in the context of the docker container. That is, my executable is a file in the host filesystem, and I want to start a process in the host OS, but I want to contain that process to the container, so that e.g. the process can only access the container's filesystem, etc.
For various reasons I do not want to copy the executable into the container and execute it there.
I do realize that this is a somewhat strange thing to be trying to do with docker containers!
Mount the executable into the container with a volume like this:
$ docker run -v /path/to/executable:/my_exe debian /my_exe
The only problem is that you will also need to make sure any libraries the executable requires are also available in the container.
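You can list those required libraries with ldd before mounting the executable (a sketch; /bin/sh stands in for your executable here):

```shell
# Each library listed here must also exist, at a compatible version,
# inside the container's filesystem
ldd /bin/sh
```

If the output and the container's libraries don't match, a statically linked build of the executable sidesteps the problem entirely.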
