Docker iptables chain getting removed

The DOCKER chain in iptables is getting flushed without a reboot. I have to restart the Docker service to re-create the chain every time it is removed. I even saved the iptables rules containing the DOCKER chain to /etc/iptables/rules.v4 and installed iptables-persistent, but the rules still get flushed somehow and the restored set does not contain the DOCKER chain. Any idea what could be causing this? This is happening on an Ubuntu box.
Thanks in advance.

FWIW, this may not be on the Docker side. I had a similar situation and finally found that a previous sysadmin had set up a cron job to reload the iptables config every minute.
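If you suspect something similar, a quick check (the paths are the usual Ubuntu locations; adjust as needed) is to look for scheduled iptables reloads:
grep -ri iptables /etc/cron* /var/spool/cron 2>/dev/null   # cron jobs that touch iptables
systemctl list-timers --all                                # systemd timers can do the same thing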

Related

Apply docker's iptables rules after injecting my own rules (easywall)

I'm running Matrix via ansible-playbook (https://github.com/spantaleev/matrix-docker-ansible-deploy) on a VPS and I recently installed https://github.com/jpylypiw/easywall to set up iptables rules on it.
When I apply my easywall rules, Docker's are deleted, so I found out that I can stop the playbook, restart docker.service and finally start the playbook again, and everything works just fine.
Now, to avoid this process, I thought I could iptables-save Docker's rules before applying easywall and then iptables-restore -n them, but for some reason this isn't working.
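Concretely, the sequence I have in mind looks roughly like this (the file path is arbitrary):
iptables-save > /tmp/pre-easywall.rules        # dump everything, including the DOCKER chains
# ... apply the easywall rules here ...
iptables-restore -n < /tmp/pre-easywall.rules  # -n / --noflush keeps easywall's rules in place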
Is there something else I could try? Thanks

Docker error: Cannot start service ...: network 7808732465bd529e6f20e4071115218b2826f198f8cb10c3899de527c3b637e6 not found

When starting a docker container (not developed by me), docker says a network has not been found.
Does this mean the problem is within the container itself (so only the developer can fix it), or is it possible to change some network configuration to fix this?
I'm assuming you're using docker-compose and seeing this error. I'd recommend
docker-compose up --force-recreate <name>
That should recreate the containers as well as supporting services such as the network in question (it will likely create a new network).
Shut down properly first, then restart:
docker-compose down
docker-compose up
I was facing a similar issue and this worked for me:
Run docker container ls -a and remove the stale container with docker container rm ca877071ac10 (that is the container ID).
The problem was that there were some old container instances which had not been removed. Once all the old terminated instances are removed, you can start the containers with the docker-compose file.
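A minimal version of those steps (the ID is just the example from above; docker container prune is an alternative if you want to remove every stopped container at once):
docker container ls -a            # list all containers, including stopped ones
docker container rm ca877071ac10  # remove the stale container by ID
docker container prune            # or: remove all stopped containers in one go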
This can be caused by some old service that has not been killed. First add the
--remove-orphans flag when bringing your containers down, to remove any undead services still running, then bring the containers back up:
docker-compose down --remove-orphans
docker-compose up
This is based on this answer.
In my case the steps that produced the error were:
Server restart; containers from a docker-compose stack remained stopped.
A network prune ran, so the networks associated with the stack's containers were deleted.
Running docker-compose --project-name "my-project" up -d failed with the error described in this topic.
Solved by simply adding --force-recreate, like this:
docker-compose --project-name "my-project" up -d --force-recreate
This presumably works because the containers are recreated and linked to a freshly created network (the previous one having been pruned, as described in the preconditions above).
Apparently a VPN was causing this. Turning off the VPN and resetting Docker to factory settings solved the problem on two computers in our company. A third, personal computer that did not have the VPN never showed the problem.
Amongst other things, docker system prune will remove "all networks not used by at least one container", allowing them to be recreated on the next docker-compose up.
More narrowly, docker network prune can be used to remove only the unused networks.
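A minimal sequence, assuming the stack can simply be brought up again afterwards:
docker network prune   # removes only unused networks (asks for confirmation)
docker-compose up -d   # recreates the stack's network along with the containers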

docker swarm services stuck in preparing

I have a swarm stack deployed. I removed a couple of services from the stack and tried to deploy them again. These services are now showing with desired state "Remove" and current state "Preparing", and their names changed from the custom service names to random Docker names. Swarm also keeps trying to start these services, which are likewise stuck in preparing. I ran docker system prune on all nodes and then removed the stack. All the services in the stack are gone now except for the random ones; I can't delete them and they are still in the preparing state. The services are not running anywhere in the swarm, but I want to know if there is a way to remove them.
I had the same problem. Later I found that the current state 'Preparing' indicates that Docker is trying to pull images from Docker Hub, but there is no clear indicator of this in docker service logs <serviceName> with compose file versions above '3.1'.
The pull can sometimes take a long time due to network bandwidth or other Docker-internal reasons.
Hope it helps! I will update the answer if I find more relevant information.
P.S. I confirmed that docker stack deploy -c <your-compose-file> <appGroupName> was not actually stuck by switching the command to docker-compose up; for me, it took 20+ minutes to download my image for some reason.
So it turned out there was no real issue with docker stack deploy.
Adding the reference from Christian to round out and complete this answer.
Use docker-machine ssh to connect to a particular machine:
docker-machine ssh <nameOfNode/Machine>
Your prompt will change. You are now inside another machine. Inside this other machine do this:
tail -f /var/log/docker.log
You'll see the "daemon" log for that machine. There you'll see whether that particular daemon is doing the "pull", or what it's doing as part of the service preparation. In my case, I found something like this:
time="2016-09-05T19:04:07.881790998Z" level=debug msg="pull progress map[progress:[===========================================> ] 112.4 MB/130.2 MB status:Downloading
Which made me realise that it was just downloading some images from my docker account.
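On a host without docker-machine (assuming a plain systemd-based node), the daemon log and the task state can be checked directly:
journalctl -u docker.service -f               # follow the Docker daemon log
docker service ps --no-trunc <serviceName>    # show task state and any error messages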

How to remove a docker container even if the root filesystem does not exist?

I have one container that is dead, but I can't remove it, as you can see below.
How can I remove it? Or how can I clean my system manually to remove it?
:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
78b0dcaffa89 ubuntu:latest "bash -c 'while tr..." 30 hours ago Dead leo.1.bkbjt6w08vgeo39rt1nmi7ock
:~$ docker rm --force 78b0dcaffa89
Error response from daemon: driver "aufs" failed to remove root filesystem for 78b0dcaffa89ac1e532748d44c9b2f57b940def0e34f1f0d26bf7ea1a10c222b: no such file or directory
It's possible Docker needs to be restarted.
I just ran into the same error message when trying to remove a container, and restarting Docker helped.
I'm running Version 17.12.0-ce-mac49 (21995)
To restart Docker, go to "Preferences" and click on the little bomb in the upper right hand corner.
In my situation I have Docker running off of an expansion drive on my MacBook. After coming out of sleep mode, the expansion drive was automatically ejected (undesirable). But after mounting the drive again, I realized Docker needed to be restarted in order to initialize everything again. At this point I was able to remove containers (docker rm -f).
Maybe it's not the same situation, but restarting Docker is a useful thing to try.
While browsing related issues, I found something similar: "Driver aufs failed to remove root filesystem", "device or resource busy". About 80% of the way down that thread there was a solution which said to run docker stop cadvisor and then docker rm [dead container].
Edit 1: docker stop cadvisor instead of docker stop deadContainerId
As the error message states, Docker was configured to use AUFS as its storage driver, but overlay2 is recommended instead, as you can read at this link:
https://github.com/moby/moby/issues/21704#issuecomment-312934372
So I changed my configuration to use overlay2 as the Docker storage driver. Doing that removes EVERYTHING from the old storage driver, which means that my "Dead" container was gone as well.
It is not exactly a solution to my original question, but the result was accomplished.
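For reference, switching the storage driver is a daemon configuration change (a sketch assuming a Linux host with systemd; as noted above, everything created under the old driver becomes unavailable):
# /etc/docker/daemon.json
{
  "storage-driver": "overlay2"
}
# then restart the daemon:
systemctl restart docker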
Let me share how I got here. My disk on the host was getting full while working with Docker containers, and I ended up getting "failed to remove root filesystem" myself as well. I burned some time before realizing that my disk was full, and then some more after freeing up space, trying to restart Docker. Nothing worked until I closed everything and rebooted the machine. I hope this saves you some time.

Docker: kill/stop/restart a container, parameters maintained?

I run a specific docker image for the first time:
docker run [OPTIONS] image [CMD]
Some of the options I supply include --link (link with other containers) and -p (expose ports)
I noticed that if I kill that container and simply do docker start <container-id>, Docker honors all the options that I specified during the run command including the links and ports.
Is this behavior explicitly documented and can I always count on the start command to reincarnate the container with all the options I supplied in the run command?
Also, I noticed that killing/starting a container which is linked to another container updates the upstream container's /etc/hosts file automatically:
A--(link)-->B (A has an entry in /etc/hosts for B)
If I kill B, B will normally get a new IP address. I notice that when I start B, the entry for B in A's /etc/hosts file is automatically updated... This is very nice.
I read here that --link does not handle container restarts... Has this been updated recently? If not, why am I seeing this behavior?
(I'm using Docker version 1.7.1, build 786b29d)
Yes, things work as you describe :)
You can rely on the behaviour of docker start as it doesn't really "reincarnate" your container; it was always there on disk, just in a stopped state. It will also retain any changes to files, but changes in RAM, such as process state, will be lost. (Note that kill doesn't remove a container; it just stops it with a SIGKILL rather than a SIGTERM. Use docker rm to truly remove a container.)
Links are now updated when a container changes IP address due to a restart. This didn't use to be the case. However, that's not what the linked question is about - they are discussing whether you can replace a container with a new container of the same name and have links still work. This isn't possible, but that scenario will be covered by the new networking functionality and "service" objects which are currently in the Docker experimental channel.
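A quick way to convince yourself of the port behaviour (the image and container name are just illustrative):
docker run -d --name web -p 8080:80 nginx             # run with a published port
docker kill web                                        # stop it with SIGKILL
docker start web                                       # start the same container again
docker port web                                        # the 8080->80 mapping is still in place
docker inspect -f '{{.HostConfig.PortBindings}}' web   # run options live in the container's HostConfig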
