What would prevent the docker service create ... command from completing? It never returns to the command prompt or reports any kind of error. Instead, it repeatedly switches between "new", "assigned", "ready", and "starting".
Here's the command:
docker service create --name proxy -p 80:80 -p 443:443 -p 8080:8080 --network proxy -e MODE=swarm vfarcic/docker-flow-proxy
This is from "The DevOps 2.1 Toolkit - Docker Swarm", a book by Viktor Farcic, and I've come to a problem I can't get past...
I'm trying to run a reverse proxy service on a swarm of three nodes, all running locally on a Linux Mint system, and docker service create never returns to the command prompt. It sits there, loops between statuses, and always says overall progress: 0 out of 1 tasks...
This is on a cluster of three nodes all running locally, created with docker-machine with one manager and two workers.
While testing, I've successfully created other overlay networks and other services using the same cluster of nodes.
As far as I can tell, the only significant differences between this and the other services I created for these nodes are:
There are three published ports. I double-checked the host and there's nothing else listening on any of those ports.
I don't have a clue what the vfarcic/docker-flow-proxy image does.
I'm running this on a (real, not VirtualBox/VMWare/Hyper-V) Linux Mint 19 system (based on Ubuntu 18.04) with Docker 18.06. My host has 16 GB of RAM and plenty of spare disk space. It was set up for the sole purpose of learning Docker after having too many problems on a Windows host.
Update:
If I Ctrl-C in the terminal where I tried to create the service, I get this message:
Operation continuing in background.
Use docker service ps p7jfgfz8tqemrgbg1bn0d06yp to check progress.
If I use docker service ps --no-trunc p473obsbo9tgjom1fd61or5ap I get:
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
dbzvr67uzg5fq9w64uh324dtn proxy.1 dockerflow/docker-flow-proxy:latest@sha256:e4e4a684d703bec18385caed3b3bd0482cefdff73739ae21bd324f5422e17d93 node-1 Running Starting 11 seconds ago
vzlttk2xdd4ynmr4lir4t38d5 \_ proxy.1 dockerflow/docker-flow-proxy:latest@sha256:e4e4a684d703bec18385caed3b3bd0482cefdff73739ae21bd324f5422e17d93 node-1 Shutdown Failed 16 seconds ago "task: non-zero exit (137): dockerexec: unhealthy container"
q5dlvb7xz04cwxv9hpr9w5f7l \_ proxy.1 dockerflow/docker-flow-proxy:latest@sha256:e4e4a684d703bec18385caed3b3bd0482cefdff73739ae21bd324f5422e17d93 node-1 Shutdown Failed 49 seconds ago "task: non-zero exit (137): dockerexec: unhealthy container"
z86h0pcj21joryb3sj59mx2pq \_ proxy.1 dockerflow/docker-flow-proxy:latest@sha256:e4e4a684d703bec18385caed3b3bd0482cefdff73739ae21bd324f5422e17d93 node-1 Shutdown Failed about a minute ago "task: non-zero exit (137): dockerexec: unhealthy container"
uj2axgfm1xxpdxrkp308m28ys \_ proxy.1 dockerflow/docker-flow-proxy:latest@sha256:e4e4a684d703bec18385caed3b3bd0482cefdff73739ae21bd324f5422e17d93 node-1 Shutdown Failed about a minute ago "task: non-zero exit (137): dockerexec: unhealthy container"
So, no mention of node-2 or node-3 and I don't know what the error message means.
If I then use docker service logs -f proxy I get the following messages repeating every minute or so:
proxy.1.l86x3a6md766@node-1 | 2018/09/11 12:48:46 Starting HAProxy
proxy.1.l86x3a6md766@node-1 | 2018/09/11 12:48:46 Getting certs from http://202.71.99.195:8080/v1/docker-flow-proxy/certs
I can only guess that it fails to get whatever certs it's looking for. There doesn't seem to be anything at that address, but an IP address lookup shows that it leads to my ISP. Where's that coming from?
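If it helps, the health check the image ships with (and therefore what Swarm uses to decide the container is unhealthy) can be inspected like this; I haven't dug further into the output myself:
docker image inspect --format '{{json .Config.Healthcheck}}' vfarcic/docker-flow-proxy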
There may be more than one reason why Docker can't create the service. The simplest way to see why is to check the last status of the service by running:
docker service ps --no-trunc proxy
which has an ERROR column describing the last error.
Alternatively you can run
docker service logs -f proxy
to see what's going on inside the proxy service.
This is an old question, but I was experimenting with Docker and hit this same issue.
For me, the issue was resolved by publishing the ports before specifying the name.
I believe that after the name, docker treats the remaining arguments as: --name "name" "image to use" "commands to pass to the image on launch".
So it keeps trying to pass those options to the container as commands, and fails, instead of using them as options when running the service.
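Applied to the command from the question, that reordering would look something like this (just a sketch of the reordered invocation, not a verified fix):
docker service create -p 80:80 -p 443:443 -p 8080:8080 --network proxy -e MODE=swarm --name proxy vfarcic/docker-flow-proxy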
Add the -t flag to the command to prevent the container(s) from exiting.
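For example (a sketch only; -t / --tty asks Swarm to allocate a pseudo-TTY for the service's containers):
docker service create --name proxy -t -p 80:80 -p 443:443 -p 8080:8080 --network proxy -e MODE=swarm vfarcic/docker-flow-proxy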
The same problem happened to me as well.
I think in my case, the image had some kind of error. I did the following experiments:
I first pulled the image and created the service:
docker service create --name apache --replicas 5 -p 5000:80 [image repo]
The same error from the question happened.
Like the first answer above, I tried a different order:
docker service create --replicas 5 -p 5000:80 --name apache [image repo]
The same error occurred.
I tried to create the service without the image downloaded locally, but the same error occurred.
I tried a different image, one which had worked before, and the service was created successfully.
The first image I used was swedemo/firstweb. (This is NOT my repo, it's my prof's)
I didn't examine what part of this image was the problem, but I hope this gives a clue to others.
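One quick way to narrow this down is to run the suspect image directly, outside of Swarm, and see how it exits (a sketch using the image named above; the container name is arbitrary):
docker run --name firstweb-test swedemo/firstweb
docker ps -a --filter name=firstweb-test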
For me, it was a startup script issue causing the container to exit with code 0.
Since my docker service has to have at least one replica (given as a constraint), Docker Swarm keeps trying to recreate it. The container fails right after being relaunched by the Docker daemon, hence the loop.
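If you suspect the same thing, the exit codes of the failed attempts are visible on the node where the tasks ran (a sketch; the service name here is just an example):
docker service ps --no-trunc my-service
docker ps -a --filter name=my-service --format '{{.Names}}\t{{.Status}}'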
Related
I am using Docker version 17.09.0-ce, and I see that containers are marked as unhealthy. Is there an option to get the container to restart instead of leaving it marked as unhealthy?
Restarting unhealthy containers was part of the original PR (https://github.com/moby/moby/pull/22719), but it was removed after a discussion and deferred as a later enhancement of RestartPolicy.
For the moment you can use this workaround to automatically restart unhealthy containers: https://hub.docker.com/r/willfarrell/autoheal/
Here is a sample compose file:
version: '2'
services:
  autoheal:
    restart: always
    image: willfarrell/autoheal
    environment:
      - AUTOHEAL_CONTAINER_LABEL=all
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Simply execute docker-compose up -d on this
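If you'd rather not have autoheal watch every container, then (as far as I remember from the image's README) you can drop AUTOHEAL_CONTAINER_LABEL=all and instead label only the services you want monitored; a sketch of such a service in the same compose file:
services:
  web:
    image: nginx
    restart: always
    labels:
      - autoheal=true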
You can restart automatically an unhealthy container by setting a smart HEALTHCHECK and a proper restart policy.
The Docker restart policy should be one of always or unless-stopped.
The HEALTHCHECK, in turn, should implement logic that kills the container when it's unhealthy.
In the following example I used curl with its internal retry mechanism and chained it (in case of failure, i.e. the service is unhealthy) with the kill command.
HEALTHCHECK --interval=5m --timeout=2m --start-period=45s \
CMD curl -f --retry 6 --max-time 5 --retry-delay 10 --retry-max-time 60 "http://localhost:8080/health" || bash -c 'kill -s 15 -1 && (sleep 10; kill -s 9 -1)'
The important thing to understand here is that the retry logic is self-contained in the curl command; the Docker-level retries setting is still mandatory but effectively unused. Once the curl HTTP request has failed (after its own retries), kill is executed: first it sends a SIGTERM to all the processes in the container, allowing them to stop gracefully, then after 10 seconds it sends a SIGKILL to kill any remaining processes. Note that when PID 1 of a container dies, the container itself dies and the restart policy is invoked.
kill docs: https://linux.die.net/man/1/kill
curl docs: https://curl.haxx.se/docs/manpage.html
docker restart docs: https://docs.docker.com/compose/compose-file/compose-file-v2/#restart
Gotchas: kill behaves differently in bash than in sh. In bash you can use -1 to signal all the processes with PID greater than 1 to die.
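For containers started with docker run rather than built from a custom Dockerfile, the same idea can be expressed with the health-check flags plus a restart policy (a sketch; the image name and health endpoint are placeholders, and bash must exist inside the image):
docker run -d --restart unless-stopped \
  --health-cmd 'curl -f --retry 6 --max-time 5 --retry-delay 10 --retry-max-time 60 http://localhost:8080/health || bash -c "kill -s 15 -1 && (sleep 10; kill -s 9 -1)"' \
  --health-interval 5m --health-timeout 2m --health-start-period 45s \
  myorg/myapp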
For standalone containers, Docker does not have native integration to restart the container on health check failure though we can achieve the same using Docker events and a script. Health check is better integrated with Swarm. With health check integrated to Swarm, when a container in a service is unhealthy, Swarm automatically shuts down the unhealthy container and starts a new container to maintain the container count as specified in the replica count of a service.
You can try putting something like this in your Dockerfile:
HEALTHCHECK --interval=5s --timeout=2s CMD curl --fail http://localhost || kill 1
Don't forget the --restart always option.
kill 1 will kill the process with PID 1 in the container and force the container to exit. Usually the process started by CMD or ENTRYPOINT has PID 1.
Unfortunately, this method likely doesn't change the container's state to unhealthy, so be careful with it.
Unhealthy Docker containers can be restarted with a simple crontab rule:
* * * * * docker ps -f health=unhealthy --format "docker restart {{.ID}}" | sh
Docker has a couple of ways to get details on container health. You can configure health checks and how often they run. Health checks can also be run against an application inside the container, e.g. over HTTP (this would use curl's --fail option). You can watch the health_status event to get details.
For detailed information on an unhealthy container, the inspect command comes in handy: docker inspect --format='{{json .State.Health}}' container-name (see https://blog.newrelic.com/2016/08/24/docker-health-check-instruction/ for more details).
You should first resolve the error condition causing the "unhealthy" tag (any time the health check command runs and returns an exit code of 1). This may or may not require that Docker restart the container, depending on the error. If you are starting/restarting your containers automatically, then either trapping the start errors or logging them together with the health check status can help you address errors quickly. Check the link if you are interested in auto start.
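To watch those health_status events as they happen, something like this works (a sketch; you can filter further if needed):
docker events --filter 'type=container' | grep health_status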
According to https://codeblog.dotsandbrackets.com/docker-health-check/:
Create the container and add "restart: always".
When using healthcheck, pay attention to the following point:
For standalone containers, Docker does not have native integration to restart the container on health check failure though we can achieve the same using Docker events and a script. Health check is better integrated with Swarm. With health check integrated to Swarm, when a container in a service is unhealthy, Swarm automatically shuts down the unhealthy container and starts a new container to maintain the container count as specified in the replica count of a service.
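As a sketch of that Swarm behaviour, a service created with a health check will have unhealthy tasks shut down and replaced automatically (the image name and endpoint here are only examples):
docker service create --name web --replicas 3 -p 8080:8080 \
  --health-cmd 'curl -f http://localhost:8080/health || exit 1' \
  --health-interval 30s --health-retries 3 \
  myorg/myapp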
I am trying to get a Docker Container running. I am following this guide: http://opendata.cern.ch/docs/cms-guide-docker.
The container refuses to start and does not give me access to the shell I expect.
Running the following command (as mentioned in the guide) does nothing; the process exits with a non-zero exit code. The first time I ran it, it downloaded the container image but did not drop me into the shell as the guide says it would.
$ docker run --name opendata-2010 -it cmsopendata/cmssw_4_2_8 /bin/bash
I can see the container; it exits as soon as it starts.
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
be670158d200 cmsopendata/cmssw_5_3_32 "/opt/cms/entrypoint…" 34 minutes ago Exited (139) 3 seconds ago opendata
These are other things I have tried to no avail.
$ docker exec -it be670158d200 /bin/bash
Error response from daemon: Container be670158d200ae85871fbda810fa6074dcb7bc8fc606f000710f630add1b80b6 is not running
$ docker start --attach be670158d200
failed to resize tty, using default size
My question is similar to this: Docker - Container is not running, but I know that unlike in that question, here I should be getting the shell.
I am running this in Windows Subsystem for Linux 2 - Ubuntu 20.04, docker version 19.03.8 - build afacb8b7f0. Any help is greatly appreciated, thanks.
I had the same error, with the logs below:
dockerd[15309]: time="2022-01-11T11:13:35.133154132+05:30" level=error msg="Handler for POST /v1.41/exec/94553dc2f9aaa3c1245df7384138786a8a576af99105a285258fce8b980b4660/resize returned error: timeout waiting for exec session ready"
This is a bug in Docker 20.10 and can be solved by downgrading the containerd package:
Removed:
containerd.io.x86_64 0:1.4.4-3.1.el7
Installed:
containerd.io.x86_64 0:1.4.3-3.1.el7
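On an RPM-based system, the downgrade itself can be done roughly like this (assuming the older build shown above is still available in the configured repository):
sudo yum downgrade containerd.io-1.4.3-3.1.el7
sudo systemctl restart docker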
I am using Docker for the first time. I created a Docker image for DB2, and when I tried to log in to the instance using the command
sudo docker exec -i -t db2 /bin/bash
I got the following error:
Error response from daemon: Container [id] is not running
I also tried to start the instance with:
sudo docker start [id]
It returned error message as:
Error response from daemon: driver failed programming external connectivity on endpoint db2 ([id]): Bind for 0.0.0.0:50000 failed: port is already allocated
Error: failed to start containers: [id]
Can someone help with this?
If you look at your error message, it shows that you're trying to run an entrypoint in a container [id] that uses port 50000, which is already in use.
That's why docker start [id] doesn't work.
This can be caused by several things (let me list some of them rather than pinpoint the exact one, since you haven't given many details).
docker exec should be used with the ID of a container that is already running, not with images or entrypoints. So maybe you skipped docker run before docker exec. Try docker run -it db2 /bin/bash if db2 is your Docker image.
Another possibility is that your container started and its entrypoint exited for some reason without releasing port 50000. If a container exited but wasn't removed, no other container can be started on that port. I recommend running docker container prune to clean up previously exited containers (see also the port check sketch after this list).
Maybe you're starting two or more containers from the same image (maybe db2) without doing any port mapping. If you want to run several instances of the same Docker image you can do two things:
Use Docker Swarm, Kubernetes or similar to scale the container (pod). That lets you keep using the same port 50000.
Use a port mapping in docker run command: For example,
for first container, do docker run -d -p 50001:50000 [docker-image] [entrypoint]
for second container, do docker run -d -p 50002:50000 [docker-image] [entrypoint]
This way you'll have several mappings from different host ports to the same container port 50000, avoiding the port-reuse error, but I'm not sure if this is what you want to do. I'm only trying to help with the little information you provided.
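Before retrying, you can also check what is already holding port 50000 (a sketch; the publish filter lists containers publishing that port, and ss shows any other process bound to it):
docker ps --filter "publish=50000"
sudo ss -ltnp | grep ':50000'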
I hope this is helpful anyway.
I'd love a consistent way to drop a docker container into an error state to do some testing around container errors.
I was hopeful when I saw bantl23/error on the docker hub, but it happily starts with no error.
I like the idea of a container that you can cause to fail on demand from outside. R0MANARMY's point is valid though: Docker monitors the process it starts, and if the process exits, the container goes to the Exited status; there isn't really a concept of an error state.
Having said that, if you want to test an Exited container then the image you mentioned does work, but it's limited: it runs, waits for 10 seconds and then exits:
docker run -d bantl23/error
If you want something you can control from the outside, I've put a very simple image together for that - sixeyed/bad-server. It's an HTTP server that you can force into an error state by hitting http://ip:8080/err:
> docker run -d -p 80:8080 sixeyed/bad-server
8b4bd7ffd96d543c9b51c7709267894d2bc75daa99ea80250d5e7846f98a6526
> docker logs -f 8b4
+ exec app
Listening on port 8080
Responding to path:
test
err!
> docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8b4bd7ffd96d sixeyed/bad-server "go-wrapper run" 37 seconds ago Exited (1) 10 seconds ago fervent_hawking
While the logs were running, I hit http://localhost/test and then http://localhost/err - which caused the container to exit.