Cannot connect to docker swarm service task

I'm following a series of blog posts on Docker Swarm and trying to make the example in the last section of https://lostechies.com/gabrielschenker/2016/09/11/docker-and-swarm-mode-part-2/ (Service Discovery and Load Balancing) work. The idea is to start 3 instances of a "whoami" service called bar, which simply reports its host's hostname, plus 1 instance of an Nginx service called foo, from which to exec /bin/bash and fire requests at bar via curl. However, my services seem to exit immediately after starting and won't let me execute any commands in them.
Given an existing Docker Swarm setup with 1 manager and 2 workers, on the manager node:
# docker service create --name foo --replicas 1 --network test nginx
194bw6mbgwyhmyl82zcxbyzat
# docker service create --name bar --replicas 3 --network test --publish 8000:8000 jwilder/whoami
alhz41p6usu7pbyesiiqh2hrd
# docker service ls
ID NAME REPLICAS IMAGE COMMAND
194bw6mbgwyh foo 0/1 nginx
alhz41p6usu7 bar 0/3 jwilder/whoami
# docker service ps foo
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
5vlgohetx4l95hm2mcggd4r6a foo.1 nginx docker-swarm-1 Running Running 5 seconds ago
# docker service ps bar
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
f1w9dxlaqgjlscwkf6ocdrui9 bar.1 jwilder/whoami docker-swarm-2 Running Running 23 seconds ago
7xg7p0rc8oerp0p6nvnm3l73i bar.2 jwilder/whoami docker-swarm-2 Running Running 24 seconds ago
8m2ct4pcc8t263z1n4zmitn5y bar.3 jwilder/whoami docker-swarm-3 Running Running 25 seconds ago
And, as a result:
# docker exec -it 5vlgohetx4l95hm2mcggd4r6a /bin/bash
Error response from daemon: No such container: 5vlgohetx4l95hm2mcggd4r6a
What am I doing wrong?

The ID that docker service ps <service> prints is not actually a container ID, but a task ID. To find the container ID, run docker inspect --format="{{.Status.ContainerStatus.ContainerID}}" <task id>. Alternatively, you can run plain docker ps on the node where the service task is running and identify the correct container by its name.
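A minimal sketch of that two-step lookup, assuming a service named foo whose task runs on the node where you type this (the desired-state filter narrows the list to running tasks):

```shell
# Find the container behind the first running task of service "foo",
# then exec into it. Assumes this shell runs on the node hosting the task.
TASK_ID=$(docker service ps -q -f "desired-state=running" foo | head -n 1)
CONTAINER_ID=$(docker inspect --format '{{.Status.ContainerStatus.ContainerID}}' "$TASK_ID")
docker exec -it "$CONTAINER_ID" /bin/bash
```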

Related

Docker container not created after stack deploy. Where can I find error logs?

I have a single-node swarm. My stack has two services. I deployed like so:
$ docker stack deploy -c /tmp/docker-compose.yml -c /tmp/docker-compose-prod.yml ide-controller
Creating network ide-controller_default
Creating service ide-controller_app
Creating service ide-controller_traefik
No errors. However, according to docker ps, only one container is created. The ide-controller_traefik container was not created.
When I check docker stack services, it says 0/1 for the traefik container:
ID NAME MODE REPLICAS IMAGE PORTS
az4n6brex4zi ide-controller_app replicated 1/1 boldidea.azurecr.io/ide/controller:latest
1qp623hi431e ide-controller_traefik replicated 0/1 traefik:2.3.6 *:80->80/tcp, *:443->443/tcp
docker service logs shows nothing:
$ docker service logs ide-controller_traefik -n 1000
$
There are no traefik containers in docker ps -a, so I can't check logs:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
922fdff58c25 boldidea.azurecr.io/ide/controller:latest "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 3000/tcp ide-controller_app.1.py8jrtmufgsf3inhqxfgkzpep
How can I find out what went wrong or what is preventing the container from being created?
docker service ps <service-name/id> has an ERROR column that surfaces errors Swarm encountered while trying to create containers, such as bad image names.
Or, for a more detailed look, docker service inspect <service-name/id> shows the current and previous service spec, as well as some top-level fields that record the state of the last operation and its message.
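As an illustration (the service name ide-controller_traefik is taken from the question), the untruncated error column and per-task status can be pulled out like this:

```shell
# Show tasks with the full, untruncated ERROR column.
docker service ps --no-trunc ide-controller_traefik

# Inspect each task's status directly, including the error message if any.
docker service ps -q ide-controller_traefik \
  | xargs docker inspect --format '{{.Status.State}}: {{.Status.Err}}'
```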

Docker: Only one docker container running when I start two containers from the same container ID

In scenarios 1 and 3, two Docker containers are running. But in scenario 2, when I start the container with the same container ID (twice), I see only one container running. What is the logic/reason behind this? (I was expecting two instances to be running.)
SCENARIO 1:
$ docker create busybox ping www.google.com
163a5907dcfd7f37be0debb1153f0307a962a7709aa6c418ddab1f833a3bc4b8
$ docker create busybox ping www.google.com
178c343d16fe7930b78532d234e735f203cad6a7fa3d932d12c71a433922c2b2
$ docker start 163a5907dcfd7f37be0debb1153f0307a962a7709aa6c418ddab1f833a3bc4b8
163a5907dcfd7f37be0debb1153f0307a962a7709aa6c418ddab1f833a3bc4b8
$ docker start 178c343d16fe7930b78532d234e735f203cad6a7fa3d932d12c71a433922c2b2
178c343d16fe7930b78532d234e735f203cad6a7fa3d932d12c71a433922c2b2
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
178c343d16fe busybox "ping www.google.com" About a minute ago Up 11 seconds jovial_maxwell
163a5907dcfd busybox "ping www.google.com" About a minute ago Up 3 seconds relaxed_hofstadter
SCENARIO 2:
$ docker start 163a5907dcfd7f37be0debb1153f0307a962a7709aa6c418ddab1f833a3bc4b8
$ docker start 163a5907dcfd7f37be0debb1153f0307a962a7709aa6c418ddab1f833a3bc4b8
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
163a5907dcfd busybox "ping www.google.com" 3 minutes ago Up 4 seconds relaxed_hofstadter
SCENARIO 3:
$ docker run busybox ping www.google.com
$ docker run busybox ping www.google.com
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a0880fa44941 busybox "ping www.google.com" 6 seconds ago Up 6 seconds xenodochial_bohr
df85aab07d43 busybox "ping www.google.com" 13 seconds ago Up 13 seconds trusting_keldysh
When you run docker create, a container is created from the given image and assigned a unique ID; docker run additionally starts it. Thus, if you run the same command twice, you get two containers with distinct IDs for the same image, and you can run them separately.
When you start a container by its ID, the command applies to that particular container. Starting it a second time does nothing, because that container is already running.
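A sketch of the lifecycle difference (assuming the busybox image is available locally):

```shell
# docker create only creates the container; it is not started yet.
ID=$(docker create busybox ping www.google.com)

docker start "$ID"   # starts that specific container
docker start "$ID"   # no-op: this exact container is already running

# docker run = create + start, so each invocation yields a NEW container.
docker run -d busybox ping www.google.com
docker run -d busybox ping www.google.com   # a second, independent container
```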

Ensuring Docker container will start automatically when host starts

Is there a way to start a Docker container automatically when the host starts? Previously I used the --restart always parameter with docker run, but it only works if the Docker engine is not killed.
Judging from your comment, I think you have misunderstood --restart always.
Once a container is run with docker run --restart always, it is restarted every time the host restarts, even if you had explicitly stopped the container beforehand.
For example:
$ docker run --restart always --detach --name auto-start-redis redis
d04dfbd73eb9d2ba5beac41363aa5c45c0e034e08173daa6146c3c704e0cd1da
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d04dfbd73eb9 redis "docker-entrypoint..." 4 seconds ago Up 4 seconds 6379/tcp auto-start-redis
$ reboot
# After reboot-------------------------------
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d04dfbd73eb9 redis "docker-entrypoint..." About a minute ago Up 21 seconds 6379/tcp auto-start-redis
$ docker stop auto-start-redis
auto-start-redis
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d04dfbd73eb9 redis "docker-entrypoint..." 2 minutes ago Exited (0) 30 seconds ago auto-start-redis
$ reboot
# After reboot-------------------------------
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d04dfbd73eb9 redis "docker-entrypoint..." 3 minutes ago Up 12 seconds 6379/tcp auto-start-redis
However, this of course rests on the premise that the Docker host itself auto-starts. "Docker host" here means the Docker daemon process. It usually auto-starts by default, but if it does not, you need to set that up yourself.
I am not sure which OS you are using, but on Ubuntu 16.04 you can do it with systemctl:
$ sudo systemctl enable docker
# Tells systemd to start the service automatically at boot.
If you use Docker Swarm, you can create a global service with the --mode global flag, which ensures one task runs on every node in the swarm:
docker service create --mode global ...
If you don't use Docker Swarm, the best solution I think is to use your system's init system, such as systemd, as #I.R.R said. You can write your own service file for systemd and specify the condition under which the service starts, like below.
[Unit]
Description=Your App
After=docker.service
Refer to this article by digital ocean.
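A fuller sketch of such a unit file (the file path, container name, and unit name are hypothetical; adjust them to your setup):

```ini
# /etc/systemd/system/your-app.service -- hypothetical example
[Unit]
Description=Your App
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a your-app-container
ExecStop=/usr/bin/docker stop your-app-container

[Install]
WantedBy=multi-user.target
```

After creating the file, enable it with sudo systemctl enable your-app so systemd starts it at boot.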

docker service replicas remain 0/1

I am trying out docker swarm with 1.12 on my Mac. I started 3 VirtualBox VMs, created a swarm cluster of 3 all fine.
docker#redis1:~$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
2h1m8equ5w5beetbq3go56ebl redis3 Ready Active
8xubu8g7pzjvo34qdtqxeqjlj redis2 Ready Active Reachable
cbi0lyekxmp0o09j5hx48u7vm * redis1 Ready Active Leader
However, when I create a service, I see no errors, yet the REPLICAS column always shows 0/1:
docker#redis1:~$ docker service create --replicas 1 --name hello ubuntu:latest /bin/bash
76kvrcvnz6kdhsmzmug6jgnjv
docker#redis1:~$ docker service ls
ID NAME REPLICAS IMAGE COMMAND
76kvrcvnz6kd hello 0/1 ubuntu:latest /bin/bash
docker#redis1:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
What could be the problem? Where do I look for logs?
Thanks!
The problem is that your task (running /bin/bash) exits immediately, since a non-interactive bash with nothing to do simply terminates.
If you look at the tasks for your service, you'll see that one is started and then shut down within seconds. Another is then started, shut down, and so on, because you've requested that 1 task be running at all times:
docker service ps hello
If you instead give the service a long-lived command, e.g. ubuntu:latest top, the task will stay running.
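For instance, recreating the service from the question with top as the command (the service name hello is taken from the question) should let the replica count settle at 1/1:

```shell
# Remove the crash-looping service, then recreate it with a long-lived process.
docker service rm hello
docker service create --replicas 1 --name hello ubuntu:latest top
docker service ls   # REPLICAS should now settle at 1/1
```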
This also can happen if you specify a volume in your compose file that is bound to a local directory that does not exist.
If you look at the log (on some Linux systems, this is journalctl -xe), you'll see which volume can't be bound.
In my case, the replicas were not working and 0/0 was shown, because I had not built the images beforehand.
As I saw, when you deploy to Swarm with a docker-compose.yml, you need to build the images first (docker stack deploy does not build them).
So I decided to do a full system prune, and after it, a build and a deploy (here my stack was called demo, and I had no previous services or containers running):
docker stack rm demo
docker system prune --all
docker-compose build
docker stack deploy -c ./docker-compose.yml demo
After this, everything was up and running, and the service replicas are now up on the swarm:
PS C:\Users\Alejandro\demo> docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
oi0ngcmv0v29 demo_appweb replicated 2/2 webapp:1.0 *:80->4200/tcp
ahuyj0idz5tv demo_express replicated 2/2 backend:1.0 *:3000->3000/tcp
fll3m9p6qyof demo_fileinspector replicated 1/1 fileinspector:1.0 *:8080->8080/tcp
The way I keep the replicas running in dev mode, at the moment:
Angular/CLi app:
command: >
bash -c "npm install && ng serve --host 0.0.0.0 --port 4200"
NodeJS Backend (Express)
command: >
bash -c "npm install && set DEBUG=myapp:* & npm start --host 0.0.0.0 --port 3000"
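For context, a hedged sketch of where those command: entries sit in a compose file (service, image, and directory names are made up; the build: key is what the docker-compose build step above relies on):

```yaml
# docker-compose.yml -- hypothetical fragment
version: "3.8"
services:
  appweb:
    build: ./frontend          # docker-compose build uses this
    image: webapp:1.0          # the tag docker stack deploy will run
    command: >
      bash -c "npm install && ng serve --host 0.0.0.0 --port 4200"
    ports:
      - "80:4200"
```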

Docker container not starting (docker start)

I created the container with the following command:
docker run -d -p 52022:22 basickarl/docker-git-test
Here are the commands:
root#basickarl:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
root#basickarl:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e4ac54468455 basickarl/docker-git-test:latest "/bin/bash" 7 minutes ago Exited (0) 26 seconds ago adoring_lumiere
22d7c5d83871 basickarl/docker-git-test:latest "/bin/bash" 2 hours ago Exited (127) About an hour ago thirsty_wright
root#basickarl:~# docker attach --sig-proxy=false e4
FATA[0000] You cannot attach to a stopped container, start it first
root#basickarl:~# docker start e4
e4
root#basickarl:~# docker attach --sig-proxy=false e4
FATA[0000] You cannot attach to a stopped container, start it first
root#basickarl:~#
Not much to say really; I'm expecting the container to start and stay up. Here are the logs:
root#basickarl:~# docker logs e4
root#basickarl:~#
You are trying to run bash, an interactive shell that requires a tty in order to operate. It doesn't really make sense to run this in "detached" mode with -d, but you can do this by adding -it to the command line, which ensures that the container has a valid tty associated with it and that stdin remains connected:
docker run -it -d -p 52022:22 basickarl/docker-git-test
You would more commonly run some sort of long-lived non-interactive process (like sshd, or a web server, or a database server, or a process manager like systemd or supervisor) when starting detached containers.
If you are trying to run a service like sshd, you cannot simply run service ssh start. This will -- depending on the distribution you're running inside your container -- do one of two things:
It will try to contact a process manager like systemd or upstart to start the service. Because there is no service manager running, this will fail.
It will actually start sshd, but it will be started in the background. This means that (a) the service sshd start command exits, which means that (b) Docker considers your container to have failed, so it cleans everything up.
If you want to run just ssh in a container, consider an example like this.
If you want to run sshd and other processes inside the container, you will need to investigate some sort of process supervisor.
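As an illustration of keeping sshd in the foreground (the image name is the one from the question; this assumes openssh-server is installed in the image at the usual path):

```shell
# Run sshd in the foreground (-D) as the container's main process,
# so the container stays up for as long as sshd does.
docker run -d -p 52022:22 basickarl/docker-git-test /usr/sbin/sshd -D
```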
What I needed was to use Docker with MariaDB on a different port (3301) on my Ubuntu machine, because I already had MySQL installed and running on 3306.
After half a day of searching, I did it using:
docker run -it -d -p 3301:3306 -v ~/mdbdata/mariaDb:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root --name mariaDb mariadb
This pulls the latest MariaDB image, creates a container called mariaDb, and runs it with the database on host port 3301. All of its data is located in ~/mdbdata/mariaDb in the home directory.
To log in to the database after that, you can use:
mysql -u root -proot -h 127.0.0.1 -P3301
Used sources:
The answer by Iarks in this article (using -it -d was the key :) )
how-to-install-and-use-docker-on-ubuntu-16-04
installing-and-using-mariadb-via-docker
mariadb-and-docker-use-cases-part-1
Good luck all!