Docker containers lost for no reason - docker

I have been using 3-4 Docker containers for 1-2 months. However, I hibernate my PC instead of shutting it down, and for the last few weeks I have stopped the Docker engine every day before hibernating. Today, however, I cannot see my containers; there is only a "No containers running" message on the Docker dashboard. I restarted many times, finally updated to the latest version and restarted the PC, but there are still no containers. I also tried a Docker factory reset, but nothing changed. So, how can I access my containers?
I tried to list containers via docker container ls, but no containers are listed. So, have my containers really gone for no reason?

Normally you can list stopped containers with
docker container ls -a
Then check the logs on those containers if they will not start (a sketch of that is at the end of this answer). However...
I also tried Docker factory reset
At this point those containers, images, and volumes are most likely gone. I don't believe there is any way to recover them after that step.
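For completeness, checking the logs of a stopped container would normally look something like this (the container name below is just a placeholder):

docker container ls -a            # find the stopped container's name or ID
docker logs my-stopped-container  # read its last output for error messages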

Related

Cron job to kill all hanging docker containers

I am new to Docker containers, but we have containers being deployed, and due to some internal application network bugs the process running in the container hangs and the Docker container is not terminated. While we debug this issue I would like a way to find all of those containers and set up a cron job to periodically check for and kill the relevant containers.
So how would I determine from "docker ps -a" which containers should be dropped, and how would I go about it? Any ideas? We are eventually moving to Kubernetes, which will help with these issues.
Docker already has a command to clean up the Docker environment. You can run it manually, or perhaps set up a job to run the following command:
$ docker system prune
Remove all unused containers, networks, images (both dangling and
unreferenced), and optionally, volumes.
Refer to the documentation for more details on advanced usage.
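As a rough sketch of such a job (the hourly schedule and the -f flag are my assumptions, not part of the original answer), a crontab entry that runs the cleanup without prompting could look like:

0 * * * * docker system prune -f    # -f skips the confirmation prompt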

Docker is creating random containers

I recently installed Docker on my Raspberry Pi 4 and connected it to a Portainer instance on my other server. On my Raspberry Pi I created two Docker containers, but somehow Docker automatically creates random Ubuntu containers with names like:
I have no idea why it is doing this :/
But when I delete those containers, a few hours later there are other containers again.
I hope someone can help me with this kind of problem.
OK, I think I solved this...
I run this web interface (Portainer) on my publicly hosted server, and I had exposed my Raspberry Pi's IP and Docker port to Portainer as the "Endpoint". I have now blocked that port on my Raspberry Pi for all IPs except my own, and that solved the problem: no containers are created anymore. I came to this solution because I saw in the container's details that it was created to wget some ".sh" file from some IP and execute shell commands, and I thought, "this is not mine, this is someone who wants to mine some bitcoins on my Raspberry Pi" (because the script downloaded some mining scripts). A sketch of such a firewall restriction is shown below.
PS: My English is not very good, but I hope this helps someone else.
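To illustrate the restriction described above, here is a hedged sketch, assuming the Docker API is exposed on TCP port 2375 and using 203.0.113.10 as a stand-in for the Portainer server's IP (both values are assumptions, not taken from the post):

iptables -A INPUT -p tcp --dport 2375 -s 203.0.113.10 -j ACCEPT   # allow only the Portainer host
iptables -A INPUT -p tcp --dport 2375 -j DROP                     # drop everyone else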
Those random names are created automatically when a container is started without a name attribute. If you did not start an unnamed container yourself (by issuing docker run without the --name option), those are most likely being created by a docker build.
You can delete those stopped containers manually one at a time, or use commands like docker system prune (see docker system prune --help for documentation) to clean up unused objects on your daemon (including those stopped containers).
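If you want to confirm where an unexpected container came from before deleting it, a quick check could look like this (the container ID abc123 is just a placeholder):

docker inspect --format '{{.Config.Image}} {{.Config.Cmd}}' abc123   # image and command it was created with
docker logs abc123                                                    # whatever it printed while running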

docker-compose restart connection pool full

My team and I are converting some of our infrastructure to Docker using docker-compose. Everything appears to be working great; the only issue I have is that doing a restart gives me a "connection pool is full" error. I am trying to figure out what is causing this. If I remove 2 containers (1 complete setup), it works fine.
A little background on what I am trying to do. This is a Ruby on Rails application that is being run with multiple different configurations for different teams within an organization. In total the server is running 14 different containers. The host server OS is CentOS, and the compose command is being run from a MacBook Pro on the same network. I have also tried this with a boot2docker VM, with the same result.
Here is the verbose output from the command (using the boot2docker VM):
https://gist.github.com/rebelweb/5e6dfe34ec3e8dbb8f02c0755991ef11
Any help or pointers are appreciated.
I have been struggling with this error message as well, in my development environment, which uses more than ten containers started through docker-compose.
WARNING: Connection pool is full, discarding connection: localhost
I think I've discovered the root cause of this issue. The Python library requests maintains a pool of HTTP connections that the docker library uses to talk to the Docker API and, presumably, the containers themselves. My hypothesis is that only those of us who use docker-compose with more than 10 containers will ever see this. The problem is twofold:
1. requests defaults its connection pool size to 10, and
2. there doesn't appear to be any way to inject a bigger pool size from the docker-compose or docker libraries.
I hacked together a solution. My libraries for requests were located in ~/.local/lib/python2.7/site-packages. I found requests/adapters.py and changed DEFAULT_POOLSIZE from 10 to 1000.
This is not a production solution, is pretty obscure, and will not survive a package upgrade.
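For reference, the same hack can be applied from the shell; this is only a sketch of the edit described above (patching an installed library is still not a production fix, and the path is the one mentioned in the answer):

sed -i 's/DEFAULT_POOLSIZE = 10/DEFAULT_POOLSIZE = 1000/' \
    ~/.local/lib/python2.7/site-packages/requests/adapters.py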
You can try resetting the network pool before deploying:
$ docker network prune
Docs here: https://docs.docker.com/engine/reference/commandline/network_prune/
I got the same issue with my Django application, running about 70 containers in docker-compose. This post helped me, since it seems that a prune is needed after setting COMPOSE_PARALLEL_LIMIT.
I did:
docker-compose down
export COMPOSE_PARALLEL_LIMIT=1000
docker network prune
docker-compose up -d
For future readers: a small addition to the answer by @andriy-baran.
You need to stop all containers, delete them, and then run network prune (because the prune command removes unused networks only).
So something like this:
docker kill $(docker ps -q)
docker rm $(docker ps -a -q)
docker network prune

Docker hangs and gets corrupted on reboot

We are running a scheduling engine with Docker, Chronos, and Mesos, with 2 Mesos slaves on each node. Sometimes too many jobs get executed on a node, Docker becomes unresponsive, and Docker gets corrupted when the server is rebooted. Is there anything wrong with the setup? I am not sure why Docker hangs and gets corrupted on reboot.
Thanks
With two agents on the same node, running Docker containers won't work properly, because restarting one agent will cause the Docker containers managed by the other agent to be deleted.
Check out the --cgroups_root flag in https://github.com/apache/mesos/blob/master/docs/configuration/agent.md
This flag only applies to the MesosContainerizer (which can also be used to launch Docker containers).
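As a hedged sketch of how that flag might be used (the cgroup roots and work directories below are made-up values, and all other agent flags are omitted), each agent on the node gets its own cgroup root so that restarting one does not tear down the other's containers:

mesos-agent --cgroups_root=mesos_agent_1 --work_dir=/var/lib/mesos/agent1 ...
mesos-agent --cgroups_root=mesos_agent_2 --work_dir=/var/lib/mesos/agent2 ...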

Deploying changes to Docker and its containers on the fly

Brand spanking new to Docker here. I have Docker running on a remote VM and am running a single dummy container on it (I can verify the container is running by issuing a docker ps command).
I'd like to secure my Docker installation by letting a non-root user run Docker commands:
sudo usermod -aG docker myuser
But I'm afraid to muck around with Docker while any containers are running in case "hot deploys" create problems. So this has me wondering, in general: if I want to do any sort of operational work on Docker (daemon, I presume) while there are live containers running on it, what do I have to do? Do all containers need to be stopped/halted first? Or will Docker keep on ticking and apply the updates when appropriate?
Same goes for the containers themselves. Say I have a myapp-1.0.4 container deployed to a Docker daemon. Now I want to deploy myapp-1.0.5, how does this work? Do I stop 1.0.4, remove it from Docker, and then deploy/run 1.0.5? Or does Docker handle this for me under the hood?
if I want to do any sort of operational work on Docker (daemon, I presume) while there are live containers running on it, what do I have to do? Do all containers need to be stopped/halted first? Or will Docker keep on ticking and apply the updates when appropriate?
Usually, all containers are stopped first.
That typically happens when I upgrade Docker itself: I find all my containers stopped (except the data containers, which are only ever created, and remain so).
Say I have a myapp-1.0.4 container deployed to a Docker daemon. Now I want to deploy myapp-1.0.5, how does this work? Do I stop 1.0.4, remove it from Docker, and then deploy/run 1.0.5? Or does Docker handle this for me under the hood?
That depends on the nature and requirements of your app: for a completely stateless app, you could even run 1.0.5 alongside it (with different host ports mapped to your app's exposed port), test it a bit, and stop 1.0.4 when you think 1.0.5 is ready.
But for an app with any kind of shared state or resource (mounted volumes, a shared data container, ...), you would need to stop and rm 1.0.4 before starting the new container from the 1.0.5 image.
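As a rough sketch of the stateless case (the image name myapp and the port numbers are placeholders, assuming the app listens on 8080 inside the container):

docker run -d --name myapp-1.0.4 -p 8104:8080 myapp:1.0.4
docker run -d --name myapp-1.0.5 -p 8105:8080 myapp:1.0.5    # try the new version on host port 8105
docker stop myapp-1.0.4 && docker rm myapp-1.0.4             # retire 1.0.4 once 1.0.5 looks good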
(1) why don't you stop them [the data containers] when upgrading Docker?
Because... they were never started in the first place.
In the lifecycle of a container, you can create, then start, then run a container. But a data container, by definition, has no process to run: it just exposes VOLUME(s) for other containers to mount (--volumes-from).
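A minimal sketch of that pattern (the names mydata, app, and the image myapp are placeholders):

docker create -v /data --name mydata busybox                 # created, never started
docker run -d --name app --volumes-from mydata myapp         # mounts /data from the data container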
(2) What's the difference between a data/volume container and a Docker container running, say, a full-blown MySQL server?
The difference is, again, that a data container doesn't run any process, so it doesn't exit when said process stops; that never happens, since there is no process to run.
The MySQL server container would keep running as long as the server process doesn't stop.
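By contrast, a server container like MySQL stays up only while its process runs; a minimal sketch (the password is obviously just a placeholder):

docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql:5.7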
