OutOfMemoryError after deploying container on AWS - docker

I am trying to deploy an application (https://github.com/DivanteLtd/open-loyalty/) to Amazon Web Services (AWS). The app ships with a docker-compose file, so I am running 'ecs-cli compose up' with the ecs-cli from my local machine.
It completes successfully and starts all the containers, but after some time it shows an error:
ExitCode: 137 Reason: OutOfMemoryError: Container killed due to memory usage
I don't understand what this means. Can you please help?
Thank you.

Docker has an OOM-killer that lurks in the dark and kills your containers.
This happens either because your container needs more memory than allowed by its mem_limit setting (defined in your ECS compose yml file), or because your Docker host is running out of memory.
You'd typically address this by raising the mem_limit settings for the affected containers and/or by switching to a larger EC2 instance.
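For example, a compose v2 service definition with an explicit limit might look like the sketch below; the image name is a placeholder and 512m is only an illustration, so tune it to what your app actually needs:

    version: '2'
    services:
      app:
        image: your-app-image   # placeholder; replace with your image
        mem_limit: 512m         # hard cap; exceeding it triggers the OOM kill (exit 137)

You can run docker stats on the container instance to see real usage before picking a value.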

Related

Docker container keeps showing Restarting(255)

I'm facing a notable issue with Docker containers on an EC2 Linux instance. I deployed them 5 months ago and they were running perfectly, but now they have stopped working.
I have three Docker containers deployed with Docker Compose: CockroachDB and Redis associated with TheThingStack (TheThingsIndustries). I tried restarting the containers using Docker Compose, but it gave me an error about no space remaining. I suspected, and later confirmed, that the EBS storage of my EC2 instance was full.
So I extended the Linux file system after increasing the EBS volume size, following the official AWS guideline: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
But it still doesn't restart and gives me a "No space" error. Finally I tried restarting a single one of the deployed containers using Docker Compose, and now it's showing Restarting(255).
I'm attaching multiple pictures of it; maybe they will help someone answer this.
Nothing was really broken. Sometimes you just need to restart the EC2 machine; I did that and the error was gone. Everything is now working well.
Although I had increased the EBS volume, the Linux file system hadn't grown to match it. The only option I had left was to restart the EC2 machine, and after the restart the error was gone.
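If you'd rather not reboot, growing the partition and file system by hand usually works too. A typical sequence, assuming the root volume is /dev/xvda with partition 1 and an ext4 file system (use xfs_growfs for XFS):

    # confirm the volume grew but the partition/file system did not
    lsblk
    df -h

    # grow partition 1 of /dev/xvda to fill the volume
    sudo growpart /dev/xvda 1

    # grow the ext4 file system to fill the partition
    sudo resize2fs /dev/xvda1

    # for an XFS root file system, use this instead:
    # sudo xfs_growfs -d /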

Is there a way to update Docker "Resources" settings from the command line on an EC2 instance?

I'm attempting to increase the memory allocation of a specific container I'm running on an EC2 instance. I was able to do this locally by adding mem_limit: 4GB to my docker-compose file (using version 2, not 3), but it did not work until I changed my Docker Desktop settings to allow more memory than the limit I was specifying.
My question is: is it possible to change this memory slider setting from the command line, and would it therefore be possible to do it on an EC2 instance without Docker Desktop? I've been through the docs but was unable to find anything specific to this!
That's a Docker Desktop setting, which is only necessary because of the way Docker containers run inside a VM on Windows and Mac computers. On an EC2 Linux server there is no such global limit; Docker processes can use as much of the resources as the server has available.
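So on Linux only the per-container limits matter. If you want to change the limit of an already-running container from the command line without editing the compose file, docker update can do it (a sketch; "mycontainer" is a placeholder name):

    # raise the memory limit of a running container to 4 GB
    # (--memory-swap must be >= --memory, so set both)
    docker update --memory 4g --memory-swap 4g mycontainer

    # verify the new limit (in bytes)
    docker inspect --format '{{.HostConfig.Memory}}' mycontainer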

Need to upgrade docker memory in AWS EC2 instance

I am trying to deploy an application on EC2 through Docker.
For local testing I had to increase the Docker RAM to 4 GB, which I did through the Docker UI (Preferences -> Advanced).
Now, for the EC2 instance, can somebody please suggest how to increase the Docker memory from the command line?
Got the answer, so posting it here.
On EC2, the memory available to Docker depends on the size of the instance itself.
If the EC2 instance has 16 GB of RAM, Docker containers can use that same 16 GB; there is no separate Docker memory setting to raise.
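You can confirm this on the instance itself; on Linux, Docker reports the host's total memory as its own (a quick check, assuming nothing beyond a standard Docker install):

    # total memory the host has
    free -h

    # total memory Docker sees, in bytes
    docker info --format '{{.MemTotal}}'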

ECS docker container cpu and memory size

I'm using AWS ECS to deploy a docker-compose file.
In my docker containers, one nginx and one Flask server are running.
I will also be using a c4.large instance.
In my case, how much cpu_shares and mem_limit should I allocate to each image?
I know that there is no exact answer,
but I want to know what a sensible split would be in my case.
Any suggestion will be useful for me.
Thanks!
First, run both of your servers on the local machine using Docker.
Check their actual CPU and memory usage with this command:
    docker stats
This will provide you with all the details. Then set matching cpu_shares and mem_limit values for your ECS task.
Here is an example; a minimal sketch of a compose v2 file, with illustrative numbers for a c4.large (2 vCPU, 3.75 GB RAM):
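    version: '2'
    services:
      nginx:
        image: nginx:latest
        ports:
          - "80:80"
        cpu_shares: 256           # small relative share; nginx mostly proxies
        mem_limit: 256m
      flask:
        image: your-flask-image   # placeholder; replace with your image
        cpu_shares: 768           # the app server does most of the work
        mem_limit: 2g             # leaves headroom on a 3.75 GB instance

cpu_shares are relative weights (ECS counts 1024 CPU units per vCPU), so what matters is the ratio between the services, not the absolute numbers.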
After bringing this up, we can check the stats again using the docker stats command provided earlier.

docker service logs empty (?) when deployed in stack on AWS

I've got a docker-compose.yml which, when deployed locally using either stack or compose, yields 3 services (parse-server, mongodb, web-app in nginx). I can get logs from those services using docker service logs <id>.
Using the same docker-compose.yml to deploy the stack to Amazon EC2, docker service logs <id> returns nothing for the running services, as if I were cat'ing an empty file.
Does anybody know what could cause this and / or how I can fix it?
When you deploy a swarm to AWS using the Docker Docs buttons or via cloud, I believe it usually pipes all output to CloudWatch, organized by individual container. This is only helpful if that is how you created your swarm.
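If that's your setup, the logs should be retrievable with the AWS CLI instead of docker service logs (a sketch; the log group name here is a made-up example, so list yours first):

    # find the log group the swarm template created
    aws logs describe-log-groups

    # list the per-container log streams in it
    aws logs describe-log-streams --log-group-name docker-swarm-lg

    # fetch events from one stream
    aws logs get-log-events \
        --log-group-name docker-swarm-lg \
        --log-stream-name <stream-name>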
