How to restart rabbitmq inside docker

I am using the rabbitmq Docker image: https://hub.docker.com/_/rabbitmq
I wanted to make changes to the rabbitmq.conf file inside the container to test a config, so I tried
rabbitmqctl stop
after making the changes. But this stops the whole docker container.
I even tried
rabbitmq-server restart
but that doesn't work either; it complains that the ports are already in use.
How do I restart the service without restarting the whole container?

Normally, Docker containers are built so that they live only as long as their main process does. If the application exits, normally or otherwise, so does the container.
It is easier to accept this behavior than to fight it. Create the config on your host machine and mount it inside the container; after that you can edit the local file and restart the container to make the application read the updated configuration. To do so:
# Copy config from the container to your machine
docker cp <insert container name>:/etc/rabbitmq/rabbitmq.config .
# (optional) make changes to the copied rabbitmq.config
...
# Start a new container with the config mounted inside (substitute /host/path
# with a full path to your local config file)
docker run -v /host/path/rabbitmq.config:/etc/rabbitmq/rabbitmq.config <insert image name here>
# Now the local config appears inside the container, so any changes you make
# are visible immediately. Restart the container to restart the application.
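For example, assuming you added --name rabbit when starting the container (the name is just a placeholder), the edit-and-restart cycle looks like this:
# Edit the config on the host...
vi /host/path/rabbitmq.config
# ...then restart the container so RabbitMQ picks it up on startup
docker restart rabbit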
If you prefer the hard way, you can customize the container so that it starts a script (which you have to write) instead of rabbitmq itself. The script has to start the server in the background and keep checking whether the process is still alive. You can find hints on how to do that in the "Run multiple services in a container" article.
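A minimal sketch of such a wrapper, assuming it is copied into the image and set as its entrypoint (the 30-second check interval and the restart-on-failure behavior are choices of this sketch, not requirements):
#!/bin/sh
# Hypothetical wrapper: start the broker in the background and keep the
# container alive; if the broker goes down (e.g. after rabbitmqctl stop),
# start it again instead of letting the container exit.
rabbitmq-server &
while true; do
    sleep 30
    if ! rabbitmqctl status > /dev/null 2>&1; then
        rabbitmq-server &
    fi
done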

Related

How to modify the configuration of Payara running in Docker when a restart is needed

I'm using the Payara image from Docker Hub. If I want to change a configuration parameter in Payara that requires a restart (via asadmin restart-domain), the container stops.
How can you make configuration changes like the above without the container stopping?
I've raised an issue for this:
https://github.com/payara/docker-payaraserver-full/issues/45
In Docker, containers should be preconfigured in the Dockerfile; when you change the configuration, you should rebuild the image and recreate the container. You shouldn't expect to change the config dynamically without a restart; that's not how most Docker containers work.
You still can do what you want with the current Payara docker image if you overwrite the ENTRYPOINT using bin/asadmin start-domain instead of the startInForeground.sh script. This will execute a launcher Java process, which will watch over the server process and restart it when needed. The startInForeground.sh script is used by default to optimize running the server in the container.
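A rough sketch of that override with docker run (the path to asadmin and the published port are assumptions; check where asadmin lives in the image tag you use):
# Hypothetical: bypass startInForeground.sh and let the asadmin launcher
# supervise the domain itself; --verbose keeps asadmin in the foreground so
# the container does not exit once the domain is up.
docker run -p 8080:8080 \
    --entrypoint /opt/payara/appserver/bin/asadmin \
    payara/server-full start-domain --verbose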

If a Docker container is a process, how can it be restarted?

A Docker container is defined in the official docs as a process.
How accurate is that, given that a process, by definition, is always running and can't be stopped/restarted?
A container is a process with some configuration and namespaces attached to it for isolation. That configuration includes which image to use and any settings you passed on the docker run command line or from inside your compose yml file. You can view this configuration with docker container inspect.
Part of the namespaces attached to the container is a filesystem namespace that includes a read/write layer for any changes you have made inside the container that weren't written to a volume. You can view a list of these changes with docker diff on your container.
When you stop the container, the running process is killed; however, the configuration and the container filesystem remain. If you restart the container, the process is restarted with the same configuration. When you delete the container, that configuration and the read/write filesystem layer are removed.
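For instance, with a container named web (just a placeholder), you can see all three pieces:
# Saved configuration (image, command, env, network settings, ...)
docker container inspect web
# Changes in the container's read/write filesystem layer
docker diff web
# Kill the process, then start it again with the same configuration
docker stop web
docker start web
# Remove the configuration and the read/write layer for good
docker rm web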

Using SMB shares as docker volumes

I'm new to docker and docker-compose.
I'm trying to run a service using docker-compose on my Raspberry Pi. The data this service uses is stored on my NAS and is accessible via Samba.
I'm currently using this bash script to launch the container:
sudo mount -t cifs -o user=test,password=test //192.168.0.60/test /mnt/test
docker-compose up --force-recreate -d
Where the docker-compose.yml file simply creates a container from an image and binds its own /home/test folder to the /mnt/test folder on the host.
This works perfectly fine when launched from the script. However, I'd like the container to restart automatically when the host reboots, so I specified 'always' as the restart policy. After a reboot, though, the container starts before anything has mounted the remote folder, and as a result the service does not work correctly.
What would be the best approach to solve this issue? Should I use a volume driver to mount the remote share (I'm on an ARM architecture, so my choices are limited)? Is there a way to run a shell script on the host when starting the docker-compose process? Should I mount the remote folder from inside the container?
Thanks
What would be the best approach to solve this issue?
As #Frap suggested, use systemd units to manage the mount and the service and the dependencies between them.
This document discusses how you could set up a Samba mount as a systemd unit. Under Raspbian, it should look something like:
[Unit]
Description=Mount Share at boot
After=network-online.target
Before=docker.service
[Mount]
What=//192.168.0.60/test
Where=/mnt/test
Options=credentials=/etc/samba/creds/myshare,rw
Type=cifs
TimeoutSec=30
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
Place this in /etc/systemd/system/mnt-test.mount, and then:
systemctl enable mnt-test.mount
systemctl start mnt-test.mount
The After=network-online.target line should cause systemd to wait until the network is available before trying to access this share. The Before=docker.service line will cause systemd to only launch docker after this share has been mounted. The RequiredBy=docker.service means that if you start docker.service, this share will be mounted first (if it wasn't already), and that if the mount fails, docker will not start.
This is using a credentials file rather than specifying the username/password in the unit itself; a credentials file would look like:
username=test
password=test
You could just replace the credentials option with username= and password=.
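In that case the Options line would read something like this (with the credentials now stored in plain text in the unit itself):
Options=username=test,password=test,rw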
Should I mount the remote folder from inside the container?
A standard Docker container can't mount filesystems. You can create a privileged container (by adding --privileged to the docker run command line), but that's generally a bad idea (because that container now has unrestricted root access to your host).
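Purely for illustration, such a privileged container could be started along these lines (the debian image and the mount point are placeholders; again, not recommended):
# The mount only works because --privileged gives the container nearly
# unrestricted access to the host.
docker run --privileged -it debian sh -c \
    'apt-get update && apt-get install -y cifs-utils && \
     mount -t cifs -o user=test,password=test //192.168.0.60/test /mnt && sh'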
I finally "solved" my own issue by defining a script to run in the /etc/rc.local file. It will launch the mount and docker-compose up commands on every reboot.
Being just 2 lines of code and not dependent on any particular Unix flavor, it felt to me like the most portable solution, barring a docker-only solution that I was unable to find.
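For reference, the two lines in question look roughly like this (the compose project path is a placeholder):
# /etc/rc.local additions: mount the share, then bring the service up
mount -t cifs -o user=test,password=test //192.168.0.60/test /mnt/test
cd /home/pi/service && docker-compose up --force-recreate -d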
Thanks all for the answers

Auto-restart Docker container when contents of host folder change

I am running a Docker container on CoreOS (the host) and have mounted a host folder into a folder in the container.
docker run -v /home/core/folder_name:/folder_name <container_name>
Now, each time I change (insert/delete) some file in that host folder (folder_name), I have to restart the container (container_name) to see the effects.
docker restart <container_name>
Is there any way from the host side or docker side to restart it automatically when there is a change (insert/delete) in the folder?
Restarting the Docker container on a folder change is rather antithetical to the whole notion of the -v option in the first place. If you really, really need to restart the container in the manner you are suggesting, then the only way to do it is from the Docker host. There are a couple of tools I can name off the top of my head (there are definitely more) that you could use to monitor the host folder and trigger docker restart <container_name> whenever a file is inserted or deleted: incron and inotify-tools. Here is another question someone asked, similar to yours, where the answer recommended using one of the tools I suggested.
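For example, a small watcher script on the host using inotify-tools might look like this (the folder and container name are taken from the question; the event list is up to you):
#!/bin/sh
# Restart the container whenever a file is created or deleted in the folder
while inotifywait -e create -e delete /home/core/folder_name; do
    docker restart <container_name>
done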
Now, there is no way that the files in the host folder are not being changed in the Docker container as well. It must be that the program you are using in the Docker container isn't updating its view of the /folder_name folder after it starts up. Is it possible for you to force the program you are running in the container to refresh or update? The -v option works via bind mounting and has been a stable feature in Docker for quite a while. With bind mounting, the /home/core/folder_name folder IS (for all practical purposes) the same folder as /folder_name in the container.
Run the command:
docker run -t -i -v /home/core/folder_name:/folder_name <container_name> /bin/sh
This command gives you an interactive shell within the container. In this shell issue the command:
cd /folder_name; touch a_file
Now go to /home/core/folder_name on the docker host in a shell or some file browser. The file a_file will be there. You can delete that file on the host and go back to the shell running in the docker container and run ls /folder_name. The file a_file will not be there.
So, you either need to use inotify or incron to go about restarting your container anytime a file changes on the host, or figure out how to work with the program you are running in the docker container to have it update its view of the /folder_name folder.

How to save config file inside a running container?

I am new to Docker. I want to run tinyproxy within Docker. Here is the image I used to create the container: https://hub.docker.com/r/dtgilles/tinyproxy/
For some unknown reason, when I mount the log directory to the host machine, I can see the .conf file, but I can't see the log file, and the proxy server doesn't seem to work.
Here is the command I tried:
docker run -v $(pwd):/logs -p 8888:8888 -d --name tiny dtgilles/tinyproxy
If I don't mount the file, then every time I run a new container I need to change its config file inside the container.
Does anyone have any ideas about saving the changes made in a container?
Question
How to save a change committed by/into a container?
Answer
The command docker commit creates a new image from a container's changes (from the man page).
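For example, with the container from the question (the new image name is arbitrary):
# Save the container's current state, including any edited config, as an image
docker commit tiny tinyproxy-configured
# Containers started from that image will carry the change
docker run -p 8888:8888 -d --name tiny2 tinyproxy-configured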
Best Practice
You actually should not do this to save a configuration file. A Docker image is supposed to be immutable; that makes it easier to share and lets you customize a container through mounted volumes instead.
What you should do is create the configuration file on the host and pass it to the container as a parameter of docker run. This is done with the option -v|--volume. Check the man page; you'll then be able to share files (or directories) between the host and containers, allowing the data to persist across different runs.
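A sketch of that approach for this image (the in-container path /etc/tinyproxy.conf is an assumption; check where this particular image expects its config file):
# Keep the config on the host and mount it into the container, alongside the
# log directory, so your edits survive container re-creation.
docker run -v $(pwd)/tinyproxy.conf:/etc/tinyproxy.conf \
           -v $(pwd):/logs -p 8888:8888 -d --name tiny dtgilles/tinyproxy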
