Run docker container on a specific node in the swarm cluster

I'm trying to implement a backup system. This system requires running a docker container on a specific node at the time of the backup. Unfortunately I have not been able to get it to execute on the required node.
This is the command I'm using and executing on the docker swarm manager node. It is creating the container on the swarm manager node and not the one specified in the constraint. What am I missing?
docker run --rm -it --network cluster_02 -v "/var/lib:/srv/toBackup" \
-e BACKUPS_TO_KEEP="5" \
-e S3_BUCKET="backup" \
-e S3_ACCESS_KEY="" \
-e S3_SECRET_KEY="" \
-e constraint:node.hostname==storageBeta \
comp/backup create $BACKUP_NAME

You are using the older classic Swarm syntax to try to run your container, but you are almost certainly running Swarm Mode. If you initialized your swarm with docker swarm init and can see nodes with docker node ls on the manager, then this is the case.
Classic Swarm ran as a container that was effectively a reverse proxy to multiple docker engines, intercepting calls to commands like docker run and sending them to the appropriate node. It is generally recommended to avoid this older swarm implementation unless you have a specific use case and take the time to configure mTLS on each of your docker hosts.
Swarm Mode provides an HA manager using Raft for quorum (same as etcd), handles encryption of all management requests, configures overlay networking, and works with a declarative model where you give the target state rather than imperative commands to run. It's a very different model from classic Swarm. Notably, Swarm Mode only works on services and stacks, and completely ignores docker run and other local commands, e.g.:
docker service create \
--name backup \
--constraint node.hostname==storageBeta \
--network cluster_02 \
-v "/var/lib:/srv/toBackup" \
-e BACKUPS_TO_KEEP="5" \
-e S3_BUCKET="backup" \
-e S3_ACCESS_KEY="" \
-e S3_SECRET_KEY="" \
comp/backup create $BACKUP_NAME
Note that jobs are not well supported in Swarm Mode yet (there is an open issue seeking community feedback on including this functionality). Swarm Mode is currently focused on running persistent services that do not normally exit unless there is an error. If your command is expected to exit, you can include an option like --restart-max-attempts 0 to prevent swarm mode from restarting it. You may also have additional work to do if the network is not swarm scoped.
I'd also recommend converting this to a docker-compose.yml file and deploying with docker stack deploy to better document the service as a config file.
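As a rough sketch of what that stack file could look like (the contents below are inferred from the flags above; the command value and the restart policy in particular are my assumptions, not something from your original command):
version: "3.7"
services:
  backup:
    image: comp/backup
    command: ["create", "myBackupName"]   # placeholder; substitute your actual $BACKUP_NAME
    environment:
      BACKUPS_TO_KEEP: "5"
      S3_BUCKET: "backup"
      S3_ACCESS_KEY: ""
      S3_SECRET_KEY: ""
    volumes:
      - /var/lib:/srv/toBackup
    networks:
      - cluster_02
    deploy:
      placement:
        constraints:
          - node.hostname == storageBeta
      restart_policy:
        condition: none   # assumption: a one-shot backup that exits should not be restarted
networks:
  cluster_02:
    external: true
You would then deploy it with something like docker stack deploy -c docker-compose.yml backup.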

Docker container communication restrictions

My setup is based on running two Docker containers, one with an API and the other with a DB.
With this setup both containers expose a port to outside web services.
But what I want is for the DB container (toolname-db) to be reachable only by the API container (toolname-api), so that the DB is not exposed to web services directly.
How do I have to alter my setup in order to make sure what I want is possible?
Currently I use the following commands:
sudo docker build -t toolname .
sudo docker run -d -p 3333:3333 --name=toolname-db mdillon/postgis
sudo docker run -it -p 4444:4444 --name=toolname-api --network=host -d toolname
A container will only be reachable from outside Docker space if it has published ports. So you need to remove the -p option from your database container.
For the two containers to be able to talk to each other they need to be on the same network. Docker's default here is for compatibility with what's now a very old networking setup, so you need to manually create a network, though it doesn't need any special setting.
Finally, you don't need --net host. That disables all of Docker's networking setup; port mappings with -p are disabled, and you can't communicate with containers that don't themselves have ports published. (I usually see it recommended as a hack to work around hard-coded localhost connection strings.)
That leaves your final setup as:
sudo docker build -t toolname .
sudo docker network create tool
sudo docker run -d --net=tool --name=toolname-db mdillon/postgis
sudo docker run -d --net=tool -p 4444:4444 --name=toolname-api toolname
As @BentCoder suggests in a comment, it's very common to use Docker Compose to run multiple containers together. If you do, it creates a network for you, which can save you a step.
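For reference, a minimal docker-compose.yml sketch of that same setup (the service names and the build context here are my assumptions) could look like:
version: "3"
services:
  toolname-db:
    image: mdillon/postgis
    # no ports: the database is reachable only by other services on the compose-created network
  toolname-api:
    build: .
    ports:
      - "4444:4444"
    # connect to the database using the hostname "toolname-db"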

unhealthy docker container not restarted by docker native health check

I have implemented Docker's native health check by adding a HEALTHCHECK instruction to the Dockerfile, as shown below:
HEALTHCHECK --interval=60s --timeout=15s --retries=3 CMD ["/svc/app/healthcheck/healthCheck.sh"]
I set the entry point for the container with:
CMD [".././run.sh"]
and I am executing the docker run command as shown below:
docker run -d --net=host --pid=host --publish-all=true -p 7000:7000/udp applicationname:temp
healthCheck.sh exits with 1 when my application is not up, and I can see the container status as unhealthy, but the container is not getting restarted.
STATUS
Up 45 minutes (unhealthy)
Below are the docker and OS details:
[root@localhost log]# docker -v
Docker version 18.09.7, build 2d0083d
OS version
NAME="CentOS Linux"
VERSION="7 (Core)"
How to restart my container automatically when it becomes unhealthy?
Docker only reports the status of the healthcheck. Acting on the healthcheck result requires an extra layer running on top of docker. Swarm mode provides this functionality and is shipped with the docker engine. To enable:
docker swarm init
Then instead of managing individual containers with docker run, you would declare your target state with docker service or docker stack commands and swarm mode will manage the containers to achieve the target state.
docker service create -d --net=host applicationname:temp
Note that host networking and publishing ports are incompatible (they make no logical sense together), net requires two dashes to be a valid flag, and changing the pid namespace is not supported in swarm mode. Many other features should work similarly to docker run.
https://docs.docker.com/engine/reference/commandline/service_create/
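As a quick way to see swarm mode acting on the health status, you can look at the task history of the service; the service name below is hypothetical:
docker service create -d --name myapp --network host applicationname:temp
docker service ps myapp   # unhealthy tasks show up here as shut down and replaced by new ones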
There is currently no auto-restart mechanism for unhealthy containers, see this, but you can use a workaround as mentioned here:
docker run -d \
--name autoheal \
--restart=always \
-e AUTOHEAL_CONTAINER_LABEL=all \
-v /var/run/docker.sock:/var/run/docker.sock \
willfarrell/autoheal
It mounts the docker unix domain socket into the monitoring container, which can then watch all containers and restart any that become unhealthy.
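If you don't want autoheal watching every container, its documentation (as far as I recall) also supports opting in per container with a label instead of AUTOHEAL_CONTAINER_LABEL=all, something like:
docker run -d --label autoheal=true applicationname:temp   # only labelled containers are monitored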

Unable to access Dockerized NiFi UI from remote host

I'm trying to stand up a temporary NiFi server to support a proof of concept demo for a customer. For these types of short-lived servers I like to use Docker when possible. I'm able to get the NiFi container up and running without any issues, but I can't figure out how to access its UI from a browser on a remote host. I've tried the following docker run variations:
docker run --name nifi \
-p 8080:8080 \
-d \
apache/nifi:latest
docker run --name nifi \
-p 8080:8080 \
-e NIFI_WEB_HTTP_PORT='8080' \
-d \
apache/nifi:latest
docker run --name nifi \
-p 8080:8080 \
-e NIFI_WEB_HTTP_HOST=${hostname-here} \
-e NIFI_WEB_HTTP_PORT='8080' \
-d \
apache/nifi:latest
My NiFi version is 1.8.0. I'm fairly certain that my problems are related to the host-headers blocker feature added to version 1.5.0. I've seen a few questions similar to mine but no solutions.
Is it possible to access the NiFi UI from a remote host after version 1.5.0?
Can host-headers blocker be disabled for a non-prod demo?
Would a non-Docker install on my server present the same host-headers blocker issues?
Should I use 1.4 for my demo and save myself a headache?
While there was a bug around 1.5.0 surrounding the host headers in Docker, that issue was resolved; additionally, the host header check is now only enforced for secured environments (you will see a note about this in the logs on container startup).
The commands you provide in your question are all workable for accessing NiFi on the associated mapped port in each example, and I have verified this in 1.6.0, 1.7.0, and 1.8.0. You may want to evaluate the network security settings of the remote machine in question (cloud-provided instances, for example, will typically require explicit security groups exposing ports).
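A quick way to narrow this down is to test the mapped port directly from the remote machine (the hostname below is a placeholder):
curl -v http://<nifi-host>:8080/nifi   # any HTTP response means the port is reachable; a timeout points at a firewall or security group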
I had the same issue: I was not able to access the web UI remotely. It turned out to be a firewall issue. Disabling firewalld, or adding a custom firewall rule to allow the docker network and port, should solve the issue.
The docker-compose.yml is shared here
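If you would rather keep firewalld running than disable it, opening just the mapped port is usually enough; a sketch assuming the 8080 mapping used above:
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload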

Is it a bad practice to start/stop docker containers dynamically?

I have some micro services that accept arguments to run.
At some point I might need them like below:
docker run -d -e 'MODE=a' --name x_service_a x_service
docker run -d -e 'MODE=b' --name x_service_b x_service
docker run -d -e 'X_SOURCE=a' -e 'MODE'='foo' --name y_service_afoo y_service
docker run -d -e 'X_SOURCE=b' -e 'MODE'='foo' --name y_service_bfoo y_service
docker run -d -e 'X_SOURCE=b' -e 'MODE'='bar' --name y_service_bbar y_service
I do this with another service I wrote called 'coordinator', which uses the Docker Engine API to monitor, start, and stop these microservices.
The reason I can't use docker-compose (as in my example above) is that I can't have two running x_service containers with an identical config.
So is it fine to manage them with docker engine API?
Services are generally scaled up and down based on an organization's needs, which translates to starting and stopping containers dynamically.
Quite often, the same docker image is started with different configurations. Think of a company managing various Wordpress websites for different customers.
So to answer your question: no, it is not a bad practice to start/stop docker containers dynamically.
There are multiple ways to manage docker containers: some people manage them with plain docker commands, some with docker-compose, and some with more advanced management platforms.
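For what it's worth, if the set of variants were fixed, Compose can also run the same image several times under different service names; a minimal sketch using the names from your question:
version: "3"
services:
  x_service_a:
    image: x_service
    environment:
      MODE: a
  x_service_b:
    image: x_service
    environment:
      MODE: b
  y_service_afoo:
    image: y_service
    environment:
      X_SOURCE: a
      MODE: foo
That only helps for a known, static set of modes, though; for truly dynamic starts the Engine API approach you describe is reasonable.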

How can I replicate containers inside docker service when I restart docker daemon?

I created a docker service for the Percona XtraDB Cluster image with 3 replicas using the following command:
docker service create \
--name mysql-galera \
--replicas 3 \
-p 3306:3306 \
--network mynet \
--env MYSQL_ROOT_PASSWORD=mypassword \
--env DISCOVERY_SERVICE=10.0.0.2:2379 \
--env XTRABACKUP_PASSWORD=mypassword \
--env CLUSTER_NAME=galera \
perconalab/percona-xtradb-cluster:5.6
I had already initialized a docker swarm with three machines (named mach1, mach2, mach3), all joined as managers, and the replicas were distributed equally across the three machines.
When I stopped the docker daemon on mach2, docker created one more replica container on mach3. When I restarted the daemon, mach3 was still running two replicas and nothing was running on mach2. Only after I manually removed the container on mach3 did mach2 come up with the 3rd replica.
What should I do to automatically replicate containers on a restarted docker machine?
I don't think a docker service rebalance option is available in docker swarm, but you can redistribute containers by updating the service with the --force option:
docker service update mysql-galera --force
What you are looking for is a way to rebalance your replica set automatically based on which nodes are available and have capacity. However, doing this would involve killing healthy containers and rescheduling them.
There is an open issue in docker swarm for this:
https://github.com/moby/moby/issues/24103
Right now, if you want to do this, the best approach seems to be to scale up/down or trigger a rolling update, which will attempt to reschedule the containers in the swarm.
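A rough sketch of the scale-based workaround (the replica counts are just examples, and the scheduler may not remove the task you expect on scale-down, so the --force update above is often more predictable):
docker service scale mysql-galera=4   # add a task; the scheduler prefers the least loaded node
docker service scale mysql-galera=3   # scale back down once the new task is running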
