container fails to start in swarm mode - docker-swarm

I tried to start a docker service in swarm mode, but I am not able to connect to port 8080.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
3tdzofpn6qo5 vigilant_wescoff replicated 0/1 shantanu/abc:latest *:8080->8080/tcp
~ $ docker service ps 3tdzofpn6qo5
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
iki0t3x1oqmz vigilant_wescoff.1 shantanuo/abc:latest ip-172-31-4-142.ap-south-1.compute.internal Ready Ready 1 second ago
z88nyixy7u10 \_ vigilant_wescoff.1 shantanu/abc:latest ip-172-31-4-142.ap-south-1.compute.internal Shutdown Complete 5 minutes ago
zf4fac2a4dlh \_ vigilant_wescoff.1 shantanu/abc:latest ip-172-31-4-142.ap-south-1.compute.internal Shutdown Complete 11 minutes ago
zzqj4lldmxox \_ vigilant_wescoff.1 shantanu/abc:latest ip-172-31-6-134.ap-south-1.compute.internal Shutdown Complete 14 minutes ago
z8eknet7oirq \_ vigilant_wescoff.1 shantanu/abc:latest ip-172-31-20-50.ap-south-1.compute.internal Shutdown Complete 17 minutes ago
I used docker for aws (community version)
https://docs.docker.com/docker-for-aws/#docker-community-edition-ce-for-aws
But I guess that should not make any difference and the container should work. I have tested it with the docker run command and it works as expected.
In case of swarm mode, how do I know what exactly is going wrong?

You can use docker events on the managers to see what the orchestrator is doing (but you cannot see past events).
You can use docker events on the workers to see what the containers/networks/volumes etc. are doing (but you cannot see past events).
You can use docker service logs to see current and past container logs.
You can use docker container inspect to see the exit (error) code of the stopped containers in that service's task list.
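Put together, a minimal troubleshooting pass might look like the sketch below. The service name and task ID are taken from the question's output; substitute your own, and note that swarm task containers are named `<service>.<slot>.<task-id>`:

```shell
# Service name from `docker service ls` (an assumption; use your own)
SERVICE=vigilant_wescoff

# On a manager: watch what the orchestrator is doing (streams; Ctrl-C to stop)
docker events --filter 'type=service'

# Current and past container logs for every task of the service
docker service logs "$SERVICE"

# On the worker where the task ran: find the stopped container and
# read its exit code plus any error message
docker container ls -a --filter "name=$SERVICE"
docker container inspect --format '{{.State.ExitCode}}: {{.State.Error}}' \
    "$SERVICE.1.iki0t3x1oqmz"
```

A non-zero exit code from the last command usually narrows the problem down to the application itself rather than the orchestrator.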

Related

All docker images exit with status 126

I have just installed Ubuntu 20.04 and installed Docker using snap. I'm trying to run some different Docker images for HBase and RabbitMQ, but each time I start an image, it immediately exits with status 126.
$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4d58720fce3a dajobe/hbase "/opt/hbase-server" 5 seconds ago Exited (126) 4 seconds ago hbase-docker
b7a84731a05b harisekhon/hbase "/entrypoint.sh" About a minute ago Exited (126) 59 seconds ago optimistic_goldwasser
294b95ef081a harisekhon/hbase "/entrypoint.sh" About a minute ago Exited (126) About a minute ago goofy_tu
I have tried everything and tried to use docker inspect on the separate images, but nothing gives away why the containers exit immediately. Any suggestions?
EDIT
When I run the command, I run the following:
$ sudo bash start-hbase.sh
It gives the output exactly as it should:
Starting HBase container
Container has ID 3c3e36e1e0fbc59aa0783a4c7f3cb8690781b2d04e8f842749d629a9c25e0604
Updating /etc/hosts to make hbase-docker point to (hbase-docker)
Now connect to hbase at localhost on the standard ports
ZK 2181, Thrift 9090, Master 16000, Region 16020
Or connect to host hbase-docker (in the container) on the same ports
For docker status:
$ id=3c3e36e1e0fbc59aa0783a4c7f3cb8690781b2d04e8f842749d629a9c25e0604
$ docker inspect $id
I think the issue might be due to permissions, because I tried to check the logs as suggested in the comments and got this error:
/bin/bash: /opt/hbase-server: Permission denied
Check whether the filesystem is mounted with the noexec option, using the mount command or by looking in /etc/fstab. If so, remove the option and remount the filesystem (or reboot).
A quick workaround is to restart the docker and network-manager services.
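A sketch of that check and fix (the /var path is only an example; check whichever filesystem actually holds your Docker data, which for snap installs may differ from a deb install):

```shell
# List any mounts that carry the noexec flag (no output means none)
mount | grep noexec || true

# If Docker's data lives on such a filesystem, remount it with exec.
# Example assuming the offending mount is /var:
sudo mount -o remount,exec /var

# The quick workaround mentioned above
sudo service docker restart
sudo service network-manager restart
```

For a permanent fix, also remove `noexec` from the corresponding line in /etc/fstab so the option does not return on reboot.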

Ensuring Docker container will start automatically when host starts

Is there a way to start a Docker container automatically when the host starts? Previously, I used the --restart always parameter with docker run, but it only works if the Docker Engine is not killed.
Based on your comment, I think you have misunderstood --restart always.
Once a container is run with docker run --restart always, it is restarted every time the host restarts, even if you stopped the container explicitly.
For example.
$ docker run --restart always --detach --name auto-start-redis redis
d04dfbd73eb9d2ba5beac41363aa5c45c0e034e08173daa6146c3c704e0cd1da
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d04dfbd73eb9 redis "docker-entrypoint..." 4 seconds ago Up 4 seconds 6379/tcp auto-start-redis
$ reboot
# After reboot-------------------------------
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d04dfbd73eb9 redis "docker-entrypoint..." About a minute ago Up 21 seconds 6379/tcp auto-start-redis
$ docker stop auto-start-redis
auto-start-redis
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d04dfbd73eb9 redis "docker-entrypoint..." 2 minutes ago Exited (0) 30 seconds ago auto-start-redis
$ reboot
# After reboot-------------------------------
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d04dfbd73eb9 redis "docker-entrypoint..." 3 minutes ago Up 12 seconds 6379/tcp auto-start-redis
However, of course, this is based on the premise that the docker host is auto-started. The docker host here means the Docker daemon process itself. Usually the Docker daemon auto-starts by default, but if it does not, you need to enable it yourself.
I am not sure which OS you are using, but on Ubuntu 16.04 you can do it with the systemctl command.
$ sudo systemctl enable docker
# To tell systemd to start services automatically at boot, you must enable them.
If you use docker swarm, you can create a global service with the --mode global flag, which ensures it runs on every node in the swarm.
docker service create --mode global ...
If you don't use docker swarm, the best solution I think is to use your system's init system, such as systemd, as @I.R.R said. You can make your own service file for systemd and specify the condition when the service starts, like below:
[Unit]
Description=Your App
Requires=docker.service
After=docker.service
Refer to this article by DigitalOcean.
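Assuming a unit with a fragment like the one above is saved as /etc/systemd/system/your-app.service (the name is a placeholder, and a real unit also needs [Service] and [Install] sections), wiring it in would look like:

```shell
sudo systemctl daemon-reload            # make systemd pick up the new unit file
sudo systemctl enable your-app.service  # start automatically at boot
sudo systemctl start your-app.service   # start it now
sudo systemctl status your-app.service  # verify it is running
```

Ordering the unit After= and Requires= docker.service matters: it ensures the Docker daemon is up before systemd tries to start your container.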

How do I remove old service images after an update?

I'm toying around with Docker Swarm. I've deployed a service and performed a couple of updates to see how it works. I'm observing that Docker is keeping the old images around for the service.
How do I clean those up?
root#picday-manager:~# docker service ps picday
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
bk6pw0t8vw4r picday.1 ischyrus/picday:latest picday-manager Running Running 6 minutes ago
ocbcjpnc71e6 \_ picday.1 ischyrus/picday:latest picday-manager Shutdown Shutdown 6 minutes ago
lcqqhbp8d99q \_ picday.1 ischyrus/picday:latest picday-manager Shutdown Shutdown 11 minutes ago
db7mco0d4uk0 picday.2 ischyrus/picday:latest picday-manager Running Running 6 minutes ago
z43p0lcdicx4 \_ picday.2 ischyrus/picday:latest picday-manager Shutdown Shutdown 6 minutes ago
These are containers, not images. In Docker, there is a rather significant difference between the two (an image is the definition used to create a container). Inside a swarm service, they are referred to as tasks. To adjust how many stopped tasks Docker keeps by default, you can change the global threshold with:
docker swarm update --task-history-limit 1
The default value for this is 5.
To remove individual containers, you can remove the container from the host where it's running with:
docker container ls -a | grep picday
docker container rm <container id>
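If it is actually disk space from old images you are after (once the stopped task containers are gone), a hedged sketch of the cleanup sequence:

```shell
# Keep only 1 stopped task per service slot (the default is 5)
docker swarm update --task-history-limit 1

# Remove all stopped containers, then unused images
docker container prune --force
docker image prune --force          # removes dangling (untagged) images only
# docker image prune --all --force  # removes any image not used by a container
```

Note that prune commands are destructive across the whole host, not just this service, so run them only when nothing else depends on the stopped containers or unused images.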

How to restart multiple containers in a docker swarm

I currently have 8 containers across 4 different host machines in my docker setup.
ubuntu#ip-192-168-1-8:~$ docker service ls
ID NAME MODE REPLICAS IMAGE
yyyytxxxxx8 mycontainers global 8/8 myapplications:latest
Running a ps -a on the service yields the following.
ubuntu#ip-192-168-1-8:~$ docker service ps -a yy
NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
mycontainers1 myapplications:latest: ip-192-168-1-5 Running Running 3 hours ago
\_ mycontainers2 myapplications:latest: ip-192-168-1-6 Running Running 3 hours ago
\_ mycontainers3 myapplications:latest: ip-192-168-1-7 Running Running 3 hours ago
\_ mycontainers4 myapplications:latest: ip-192-168-1-8 Running Running 3 hours ago
\_ mycontainers1 myapplications:latest: ip-192-168-1-5 Running Running 3 hours ago
\_ mycontainers2 myapplications:latest: ip-192-168-1-6 Running Running 3 hours ago
\_ mycontainers3 myapplications:latest: ip-192-168-1-7 Running Running 3 hours ago
\_ mycontainers4 myapplications:latest: ip-192-168-1-8 Running Running 3 hours ago
My question is: how can I execute a restart of all the containers using the service ID? I don't want to manually log into every node and execute a restart.
In the latest stable version of Docker, 1.12.x, it is possible to restart the containers only by changing the service configuration. In Docker 1.13.0, which will be released soon, the --force flag restarts the containers even if the service settings are unchanged. If you do not mind using the 1.13.0 RC4, you can do it now.
$ docker service update --force mycontainers
Update: Docker 1.13.0 has been released.
https://github.com/docker/docker/releases/tag/v1.13.0
Pre-Docker 1.13, I found that scaling the service down to 0, waiting for shutdown, and then scaling it back up to the previous level works.
docker service scale mycontainers=0
# wait
docker service scale mycontainers=8
If you update the existing service, swarm will recreate all of its containers. For example, you can simply update a property of the service to achieve a restart.
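One way to do that on pre-1.13 versions without --force (a sketch; the variable name FORCE_RESTART is arbitrary) is to add or change a harmless environment variable, which counts as a configuration change and makes swarm roll every task:

```shell
# A timestamped env var is an easy no-op property change that
# forces swarm to recreate all tasks of the service
docker service update --env-add "FORCE_RESTART=$(date +%s)" mycontainers
```

Because the value changes on every invocation, the same command can be reused whenever another restart is needed.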

docker service create image command like `docker run`

Does anyone use docker service create with a command, like docker run -it ubuntu bash?
e.g.: docker service create --name test redis bash.
I want to run a temporary container for debugging on a production environment in swarm mode, attached to the same network.
This is my result:
user#ubuntu ~/$ docker service ps test
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
bmig9qd9tihw7q1kff2bn42ab test.1 redis ubuntu Ready Ready 3 seconds ago
9t4za9r4gb03az3af13akpklv \_ test.1 redis ubuntu Shutdown Complete 4 seconds ago
1php2be7ilp7psulwp31b3ib4 \_ test.1 redis ubuntu Shutdown Complete 10 seconds ago
drwyjdggd13n1emb66oqchmuv \_ test.1 redis ubuntu Shutdown Complete 15 seconds ago
b1zb5ja058ni0b4c0etcnsltk \_ test.1 redis ubuntu Shutdown Complete 21 seconds ago
When you create a service that starts Bash, it will immediately stop, because the service runs detached and Bash exits when no TTY is attached.
You would see the same behavior if you ran docker run -d ubuntu bash.
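If the goal is a long-lived debug container on the same overlay network, one common workaround (the network name my-overlay is an assumption) is to give the service a command that never exits, then exec into the task's container:

```shell
# Keep the task alive with a command that does not exit
docker service create --name debug --network my-overlay redis sleep infinity

# On the node running the task, open an interactive shell inside it
docker exec -it "$(docker ps --filter name=debug --quiet | head -n1)" bash
```

Remember to docker service rm debug when finished, since swarm would otherwise keep restarting the task forever.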
