background:
We are using the registry-mirrors and insecure-registries options in Docker's daemon.json file. We would like to stop hard-coding the location of the mirrors.
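For reference, a minimal sketch of what such a daemon.json looks like (the IP and port are placeholders):

{
  "registry-mirrors": ["http://10.0.0.2:5000"],
  "insecure-registries": ["10.0.0.2:5000"]
}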
Question
Is it possible to use an environment variable inside daemon.json? So instead of writing the IP X.Y.Z.W:PORT one would write ${REPO1}. Hopefully it would then be possible to change the REPO1 variable without restarting the daemon.
Remarks
The solution must allow changing the repo location without restarting the daemon.
EDIT
It is not possible to use the following inside daemon.json:
1. ${VAR_NAME}
Possible workaround
Use a custom hostname and redefine it in /etc/hosts. This allows changing the repo IP without restarting the daemon, but it does not allow changing the port or the protocol. A sketch of this setup is shown below.
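A minimal sketch, assuming a made-up hostname my-registry and port 5000. daemon.json refers to the hostname instead of a hard-coded IP:

{
  "registry-mirrors": ["http://my-registry:5000"],
  "insecure-registries": ["my-registry:5000"]
}

and a line in /etc/hosts maps the hostname to the current repo IP (edit this line to repoint the mirror without touching the daemon):

10.0.0.2   my-registry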
Possible workaround 2
Some options can be reconfigured when the daemon is running without requiring to restart the process. We use the SIGHUP signal in Linux to reload, and a global event in Windows with the key Global\docker-daemon-config-$PID. The options can be modified in the configuration file but still will check for conflicts with the provided flags. The daemon fails to reconfigure itself if there are conflicts, but it won't stop execution. (source)
So one can edit the registry settings in daemon.json and run sudo systemctl reload docker or sudo kill -SIGHUP $(pidof dockerd). This restarts neither the existing containers nor the daemon itself.
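A sketch of the reload flow, assuming the config lives at the default /etc/docker/daemon.json:

# edit the mirror location
sudo vi /etc/docker/daemon.json
# ask the running daemon to reload its configuration
sudo systemctl reload docker        # or: sudo kill -SIGHUP $(pidof dockerd)
# verify the new mirror was picked up
docker info | grep -A1 'Registry Mirrors'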
Related
I have set up my Keycloak identification server by running a .yml file that uses the Docker image jboss/keycloak:9.0.0.
Now I want to get inside the container and modify some files in order to do some testing.
Unfortunately, after I got inside the running container, I realized that some very basic UNIX commands like sudo or vi (and many, many more) aren't found; the same goes for apt-get and yum, which I tried to use to install packages, without success.
According to this question, it seems that the underlying OS of the container (Red Hat Universal Base Image) uses the command microdnf to manage software, but unfortunately when I tried to use this command for any action I got the following message:
error: Failed to create: /var/cache/yum/metadata
Could you please propose any workaround for my case? I just need a text editor command like vi, and root privileges for my user (so commands like sudo, su, or chmod). Thanks in advance.
If you still, for some reason, want to exec into the container, try adding --user root to your docker exec command.
Just exec'ing into the container without --user will do so as user jboss, and that user seems to have fewer privileges.
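A minimal sketch, assuming the container is named keycloak (substitute your own container name or ID):

docker exec -it --user root keycloak /bin/bash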
It looks like you are trying to apply an approach from the non-Docker (old school) world to the Docker world. That's not right. Usually you don't need to go into the container and edit any config file there; such a change will very likely be lost (it depends on the container configuration). Containers are usually configured via environment variables or volumes.
Example how to use TLS certificates: Keycloak Docker HTTPS required
https://hub.docker.com/r/jboss/keycloak/ is also a good starting point to check the available environment variables, which may help you achieve what you need. For example, PROXY_ADDRESS_FORWARDING=true lets you run the Keycloak container behind a load balancer without touching any config file.
I would also say that adding your own config files at build time is not the best option, since you would have to maintain your own image. Just use volumes and "override" the default config file(s) in the container with your own config file(s) from the host OS file system, e.g.:
-v /host-os-path/my-custom-standalone-ha.xml:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml
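Putting it together, a hypothetical docker run invocation (the host path and container name are made up):

docker run -d --name keycloak \
  -e PROXY_ADDRESS_FORWARDING=true \
  -v /host-os-path/my-custom-standalone-ha.xml:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml \
  jboss/keycloak:9.0.0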
I am using the rabbitmq Docker image https://hub.docker.com/_/rabbitmq
I wanted to make changes to the rabbitmq.conf file inside the container to test a config, so I tried
rabbitmqctl stop
after making the changes. But this stops the whole docker container.
I even tried
rabbitmq-server restart
but that doesn't work either, saying the ports are in use.
How do I restart the service without restarting the whole container?
Normally Docker containers are built so that they live as long as their main process does. If the application exits, normally or otherwise, so does the container.
It is easier to accept this behavior than to fight it: you just need to create a config on your host machine and mount it inside the container. After that you can make changes to the local file and restart the container to make the application read the updated configuration. To do so:
# Copy config from the container to your machine
docker cp <insert container name>:/etc/rabbitmq/rabbitmq.config .
# (optional) make changes to the copied rabbitmq.config
...
# Start a new container with the config mounted inside (substitute /host/path
# with a full path to your local config file)
docker run -v /host/path/rabbitmq.config:/etc/rabbitmq/rabbitmq.config <insert image name here>
# Now local config appears inside the container and so all changes
# are available immediately. You can restart the container to restart the application.
If you prefer the hard way, you can customize the container so that it starts a script (which you have to write) instead of rabbitmq. The script has to start the server in the background and check whether the process is alive. You can find hints on how to do that in the "Run multiple services in a container" article; a rough sketch follows.
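A minimal sketch of such a wrapper, assuming pgrep is available in the image and that the script replaces the default entrypoint (the poll interval and restart strategy are up to you):

#!/bin/bash
# start the server in the background
rabbitmq-server &

# poll every few seconds; restart the server if its process disappears
while true; do
  sleep 5
  if ! pgrep -f rabbitmq-server > /dev/null; then
    echo "rabbitmq-server died, restarting..."
    rabbitmq-server &
  fi
done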
I have a process in an Ubuntu docker container. If it crashes, I want to restart it automatically.
What is the best way to go about it?
I checked systemd (which is the normal Linux method), but Docker doesn't support it inside containers. inittab is also deprecated.
Docker offers such functionality; all you have to do is define a restart policy for the container.
You should choose one of the available policies (no, always, on-failure, unless-stopped) and adjust your docker run command accordingly.
From docs:
To configure the restart policy for a container, use the --restart
flag when using the docker run command
For your case, choose one of always or on-failure.
Note: The above is valid only if the process you have mentioned is the container's entrypoint.
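For example, a hypothetical run command that restarts the container on non-zero exit, up to 5 times (the image and command are placeholders):

docker run -d --restart on-failure:5 ubuntu /path/to/your-process

An existing container's policy can also be changed in place with docker update --restart unless-stopped <container>.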
How to configure a proxy for Docker containers?
First of all,
I tried the approach of setting '/etc/systemd/system/docker.service.d/http-proxy.conf' (https://docs.docker.com/config/daemon/systemd/#httphttps-proxy), and it really works for the Docker daemon, but it doesn't work for the containers; it seems to take effect only for commands like 'docker pull'.
Second,
I have a lot of Docker containers, and I don't want to use 'docker run -e http_proxy=xxx...' every time I start a container.
So I guessed there might be a way to automatically load a global configuration file when a container starts. I googled it and found the file '~/.docker/config.json' (How to configure docker container proxy?), but this way still does not work for me.
(My host machine's system is CentOS 7; here is my docker -v: Docker version 1.13.1, build 6e3bb8e/1.13.1)
I suspect this may be related to my Docker version, or to the daemon being started by the systemd service, so that ~/.docker/config.json does not take effect.
Finally,
I just hope that modifying configuration files will let all my containers configure these environment variables automatically when they start (that is, automatically set 'http_proxy=http://HostIP:8118 https_proxy=http://HostIP:8118' when a container starts, like the Dockerfile ENV parameter). I want to know if there is such a way; if so, I could make the containers use the host's proxy, since the host's proxy is working properly.
But I was wrong. I tried running a container and then set http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118, but when I ran 'wget facebook.com' I got 'Connecting to HostIP:8118... failed: No route to host.' The host machine (CentOS 7) can execute the wget successfully, and I can ping the host from inside the container. I don't know why; it might be related to the firewall and the 8118 port.
It is over,
OMG.. I have no other ideas; can anyone help me?
==============================
PS:
You can see from the screenshots below that I actually want to install goa and goagen but get an error, probably for network reasons; I wanted to try going through the proxy, which is the only reason I ran into the problem above.
1. My Go Docker container: [screenshot: wget failing inside the Go container]
2. My host: [screenshot: the same wget succeeding on the host]
You need version 17.07 or more recent to automatically pass the proxy to containers you start using the config.json file. The 1.13 releases are long out of support.
This is well documented by Docker:
https://docs.docker.com/network/proxy/
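For reference, a sketch of the documented ~/.docker/config.json format; the proxy address below is taken from the question and is an assumption about your setup:

{
  "proxies": {
    "default": {
      "httpProxy": "http://HostIP:8118",
      "httpsProxy": "http://HostIP:8118",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}

With this in place, a 17.07+ client sets the http_proxy, https_proxy, and no_proxy environment variables in every container it creates.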
I run a specific docker image for the first time:
docker run [OPTIONS] image [CMD]
Some of the options I supply include --link (link with other containers) and -p (expose ports)
I noticed that if I kill that container and simply do docker start <container-id>, Docker honors all the options that I specified during the run command including the links and ports.
Is this behavior explicitly documented and can I always count on the start command to reincarnate the container with all the options I supplied in the run command?
Also, I noticed that killing/starting a container which is linked to another container updates the upstream container's /etc/hosts file automatically:
A--(link)-->B (A has an entry in /etc/hosts for B)
If I kill B, B will normally get a new IP address. I notice that when I start B, the entry for B in A's /etc/hosts file is automatically updated... This is very nice.
I read here that --link does not handle container restarts... Has this been updated recently? If not, why am I seeing this behavior?
(I'm using Docker version 1.7.1, build 786b29d)
Yes, things work as you describe :)
You can rely on the behaviour of docker start as it doesn't really "reincarnate" your container; it was always there on disk, just in a stopped state. It will also retain any changes to files, but changes in RAM, such as process state, will be lost. (Note that kill doesn't remove a container, it just stops it with a SIGKILL rather than a SIGTERM, use docker rm to truly remove a container).
Links are now updated when a container changes IP address due to a restart. This didn't use to be the case. However, that's not what the linked question is about - they are discussing whether you can replace a container with a new container of the same name and have links still work. This isn't possible, but that scenario will be covered by the new networking functionality and "service" objects which is currently in the Docker experimental channel.
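A quick illustration of the kill-then-start behavior (the image and names here are made up for the example):

# run a container with a published port
docker run -d --name web -p 8080:80 nginx
docker kill web
# the container still exists on disk, with its original options
docker start web
docker port web    # still reports 80/tcp -> 0.0.0.0:8080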