I'm facing a notable issue with Docker containers on an EC2 Linux instance. I deployed them five months ago and they ran perfectly, but now they have stopped working.
I had deployed three Docker containers using Docker Compose: CockroachDB and Redis alongside The Things Stack (The Things Industries). When I tried restarting the containers with Docker Compose, it gave me a "no space remaining" error. I suspected, and later confirmed, that the EBS storage of my EC2 instance was full.
So I increased the EBS volume size and extended the Linux file system, following the official AWS guide: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
But the containers still wouldn't restart and gave the same "no space" error. Finally I tried restarting a single one of the deployed containers with Docker Compose, and it now shows Restarting(255).
I'm attaching several screenshots; maybe they will help someone answer this.
Nothing else was the issue; sometimes you just need to restart the EC2 machine. I did that, the error was gone, and everything is now working well.
Although I had increased the EBS storage and the larger volume was visible, the Linux file system had not grown. The only option I had left was to restart the EC2 machine, and after the reboot the error was gone.
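For future readers: a reboot works because many AMIs grow the partition and file system automatically at boot, but it can usually be done online as well. A minimal sketch, assuming the root volume is /dev/xvda with partition 1 and an ext4 file system (device names and file system type vary by instance, so check with lsblk first):

```shell
# Confirm the block device shows the new size while the file system does not.
lsblk
df -h /

# Grow partition 1 of /dev/xvda to fill the enlarged EBS volume.
sudo growpart /dev/xvda 1

# Grow the file system itself to fill the partition.
sudo resize2fs /dev/xvda1    # for ext4
# sudo xfs_growfs -d /       # for XFS (the Amazon Linux 2 default)

df -h /                      # the new space should now be visible
```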
I'm attempting to increase the memory allocation of a specific container I'm running on an EC2 instance. I was able to do this locally by adding mem_limit: 4GB to my docker-compose file (using version 2, not 3), and it did not work until I changed the memory slider in my Docker Desktop settings to be greater than the limit I was specifying.
My question is as follows: is it possible to change this memory slider setting from the command line, and would it therefore be possible to do this on an EC2 instance without Docker Desktop? I've been through the docs but was unable to find anything specific to this.
That's a Docker Desktop setting, which only exists because Docker containers run inside a VM on Windows and Mac computers. On an EC2 Linux server there is no such global limit; Docker processes can use as many resources as the server has available.
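So on Linux the per-container mem_limit from the compose file is enforced directly by the kernel, and there is no slider to move. A minimal Compose v2 sketch (the service name and image are placeholders):

```yaml
version: "2.4"
services:
  app:                 # hypothetical service name
    image: myorg/app   # placeholder image
    mem_limit: 4g      # enforced via cgroups; no Docker Desktop setting needed
```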
I have been using 3-4 Docker containers for 1-2 months. However, I hibernate my PC instead of shutting it down, and for the last few weeks I have stopped the Docker engine every day before hibernating. Today I cannot see my containers; there is only a "No containers running" message on the Docker dashboard. I restarted many times and finally updated to the latest version and restarted the PC, but there are still no containers. I also tried a Docker factory reset, but nothing changed. So, how can I access my containers?
I tried to list containers via docker container ls, but no containers are listed. Have my containers really gone for no reason?
Normally you can list stopped containers with
docker container ls -a
Then check the logs on those containers if they will not start. However...
I also tried Docker factory reset
At this point those containers, images, and volumes are most likely gone. I don't believe there's a recovery after that step.
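For reference, the triage before resorting to a factory reset would look something like this (the container name mydb is a placeholder):

```shell
# List every container, including stopped ones.
docker container ls -a

# For a container that will not start, read its recent logs
# and inspect its exit state.
docker logs --tail 50 mydb
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' mydb
```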
I have a server that's been running in Docker on CoreOS. For some reason containerd has stopped running and the Docker daemon has stopped working correctly. My efforts to debug haven't gotten far. I'd like to just boot a new instance and migrate, but I'm not sure I can back up my volume without a working Docker service. Is it possible to back up my volume without using Docker?
Most search results assume a running docker system, and don't work in this case.
By default, Docker volumes are stored in /var/lib/docker/volumes. Since you don't have a working Docker setup, you may have to dive into the subfolders to figure out which volume you're concerned with, but that should at least give you a start. For reference, in a working Docker environment you can use docker volume inspect to get all the information you would need to carry this out.
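As a sketch of that approach, assuming the volume you care about is named mydata (the name and output path are examples):

```shell
# Find the volume directories on the host; no docker daemon is needed.
sudo ls /var/lib/docker/volumes

# Archive the volume's contents; the actual files live under _data.
sudo tar -czf mydata-backup.tar.gz \
    -C /var/lib/docker/volumes/mydata/_data .
```

Copy the resulting archive to the new instance and extract it into the corresponding volume directory there.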
My team and I are converting some of our infrastructure to Docker using docker-compose. Everything appears to be working great; the only issue I have is that doing a restart gives me a "connection pool is full" error. I am trying to figure out what is causing this. If I remove 2 containers (1 complete setup), it works fine.
A little background on what I am trying to do: this is a Ruby on Rails application being run with multiple different configurations for different teams within an organization. In total the server is running 14 containers. The host server OS is CentOS, and the compose command is being run from a MacBook Pro on the same network. I have also tried this with a boot2docker VM, with the same result.
Here is the verbose output from the command (using the boot2docker VM):
https://gist.github.com/rebelweb/5e6dfe34ec3e8dbb8f02c0755991ef11
Any help or pointers is appreciated.
I have been struggling with this error message as well, in a development environment that runs more than ten containers through docker-compose:
WARNING: Connection pool is full, discarding connection: localhost
I think I've discovered the root cause of this issue. The Python library requests maintains a pool of HTTP connections that the docker library uses to talk to the Docker API and, presumably, the containers themselves. My hypothesis is that only those of us using docker-compose with more than 10 containers will ever see this. The problem is twofold:
1. requests defaults its connection pool size to 10, and
2. there doesn't appear to be any way to inject a bigger pool size from the docker-compose or docker libraries.
I hacked together a solution. My copy of requests was located in ~/.local/lib/python2.7/site-packages; I found requests/adapters.py and changed DEFAULT_POOLSIZE from 10 to 1000.
This is not a production solution; it is pretty obscure and will not survive a package upgrade.
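In shell terms the hack amounts to the following; the site-packages path is where my copy lived, so verify it matches your installation before editing anything:

```shell
# Locate and patch the installed requests library (local hack only;
# any package upgrade will silently undo it).
ADAPTERS=~/.local/lib/python2.7/site-packages/requests/adapters.py
grep -n "DEFAULT_POOLSIZE" "$ADAPTERS"
sed -i 's/DEFAULT_POOLSIZE = 10/DEFAULT_POOLSIZE = 1000/' "$ADAPTERS"
```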
You can try resetting the network pool before deploying:
$ docker network prune
Docs here: https://docs.docker.com/engine/reference/commandline/network_prune/
I got the same issue with my Django application, running about 70 containers in docker-compose. This post helped me, since it seems that a prune is needed after setting COMPOSE_PARALLEL_LIMIT.
I did:
docker-compose down
export COMPOSE_PARALLEL_LIMIT=1000
docker network prune
docker-compose up -d
For future readers, a small addition to the answer by #andriy-baran:
You need to stop all containers, delete them, and then run network prune (because the prune command removes only unused networks).
So something like this:
docker kill $(docker ps -q)
docker rm $(docker ps -a -q)
docker network prune
I am trying to deploy an application (https://github.com/DivanteLtd/open-loyalty/) to Amazon Web Services (AWS). The app has a docker-compose file, so I am running 'ecs-cli compose up' with ecs-cli directly from my local machine.
It runs successfully and starts all the containers, but after some time it shows an error:
ExitCode: 137 Reason: OutOfMemoryError: Container killed due to memory usage
I don't understand what causes this. Can you please help?
Thank You.
Docker has an OOM killer that lurks in the dark and is killing your containers.
This happens either because a container needs more memory than allowed by its mem_limit setting (defined in your AWS compose YAML file), or because your Docker host is running out of memory.
You'd typically address this by tweaking the mem_limit settings for each of your containers and/or by switching to a larger EC2 instance.
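For illustration, that tweak is a per-service setting in the compose file that ecs-cli reads (service names, images, and sizes below are placeholders; pick limits that match your app's real footprint):

```yaml
version: "2"
services:
  web:                  # hypothetical service
    image: myorg/web    # placeholder image
    mem_limit: 1g       # hard cap; exceeding it yields exit code 137 (OOM)
  worker:
    image: myorg/worker
    mem_limit: 512m
```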