Launch new Docker container when memory limit is reached - docker

Sorry if this is a dumb question but i'm quite new to Docker.
I understand that, if the --memory parameter is set and the container uses all of that memory, Docker will kill the container.
I wonder if it's possible to create a new container (without killing the previous one) when the container reaches a certain memory limit defined by me.

Docker does not have built-in automatic service scaling.
Most implementations I've seen for Docker that do this use:
Prometheus, a monitoring server that can scrape Docker container metrics.
Alertmanager, a server that, given metrics to monitor on a Prometheus server, can raise alerts when thresholds are reached.
A custom piece of code using the Docker Go SDK that increases or decreases the number of service replicas in response to alert thresholds (a rough shell sketch of that last step follows below).
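A very rough, purely illustrative sketch of that last piece (the service name, threshold and polling interval are made-up placeholders; a real setup would have Alertmanager trigger the scaling instead of polling in a loop):

#!/bin/sh
# Poll one of the service's containers and add a replica when its memory usage crosses a threshold.
SERVICE=mystack_app   # placeholder swarm service name
LIMIT=80              # memory usage percentage that triggers scaling
while sleep 30; do
  CONTAINER=$(docker ps -q --filter name=$SERVICE | head -n1)
  USED=$(docker stats --no-stream --format '{{.MemPerc}}' "$CONTAINER" | tr -d '%')
  REPLICAS=$(docker service ls --filter name=$SERVICE --format '{{.Replicas}}' | cut -d/ -f1)
  if [ "${USED%.*}" -ge "$LIMIT" ]; then
    docker service scale "$SERVICE=$((REPLICAS + 1))"
  fi
done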

Related

Host of docker container gets unresponsive - how to make host independent from container?

How can I prevent a docker host from becoming unresponsive when a docker container is under high load?
My docker host server becomes unresponsive at certain times, and only a restart helps. We assume this happens when a docker container performs CPU-intensive tasks. Whenever this happens, I cannot log in to the docker host.
If I am already logged in, I usually cannot use the shell; sometimes I can use it, but with a delay of about 10 minutes for characters to be typed.
There is indeed no limit on a container by default, but there is a large number of flags allowing you to control a container's behaviour at run time.
By default, a container has no resource constraints and can use as much of a given resource as the host’s kernel scheduler allows. Docker provides ways to control how much memory, or CPU a container can use, setting runtime configuration flags of the docker run command. This section provides details on when you should set such limits and the possible implications of setting them.
Here is a far-from-exhaustive example using some of those flags:
docker run -it --cpus="1.5" --memory="1g" ubuntu /bin/bash
Just make sure your limits are set to something sensible allowing your host machine to still do what it is supposed to do (run the daemon or other tasks).
A comprehensive list of all those flags allowing you to control resources is accessible via https://docs.docker.com/config/containers/resource_constraints/
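If you want to double-check that the limits were actually applied, one quick way (the container name here is a placeholder) is:
docker inspect --format 'Memory: {{.HostConfig.Memory}} bytes, NanoCpus: {{.HostConfig.NanoCpus}}' my-container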

ECS docker container cpu and memory size

I'm using AWS ECS to deploy with docker-compose.
In my docker container, one nginx and one flask server are running.
I will also be using a c4.large instance.
In my case, how much cpu_shares and mem_limit should I allocate to each image?
I know that there is no exact answer.
But I want to know what a typical percentage split would be in my case.
Or any suggestion will be useful for me.
Thanks!
First, run both servers on your local machine using Docker.
Check the resource usage (which maps to cpu_shares and mem_limit) with this command:
docker stats
This will provide you with all the details. Then set the same limits for your ECS task.
Here is a rough example of running both containers locally with explicit limits (the image names and numbers are placeholders, not recommendations):
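docker run -d --name nginx --cpu-shares 512 --memory 256m nginx:alpine
docker run -d --name flask --cpu-shares 512 --memory 512m my-flask-image   # my-flask-image and all limits are placeholders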
After running these, we can check the stats using the docker stats command shown earlier.

Docker service container auto restart after specific time interval

We have a docker swarm and we normally run service containers using the Docker create-service API. Now we are seeing that after a certain time interval the services stop responding (meaning the application running inside the container). For now, the solution looks like restarting the service after a specific time interval, and it worked when we tried it manually.
This is the top command output of the host worker node:
And the output of docker stats:
I wanted to know what the best approach to fix this is. Also, can we automate the solution?
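If a periodic restart really is the stopgap you settle on, one rough way to automate it (purely illustrative; the service name and interval are placeholders) is a cron entry on a swarm manager node that forces a rolling restart of the service:
# crontab: force a rolling restart of the service every 6 hours
0 */6 * * * docker service update --force mystack_myservice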

Docker swarm get deployment status

After running docker stack deploy to deploy some services to swarm is there a way to programmatically test if all containers started correctly?
The purpose would be to verify in a staging CI/CD pipeline that the containers are actually running and didn't fail on startup. Restart is disabled via restart_policy.
I was looking at docker stack services; is the replicas column useful for this purpose?
$ docker stack services --format "{{.ID}} {{.Replicas}}" my-stack-name
lxoksqmag0qb 0/1
ovqqnya8ato4 0/1
Yes, there are ways to do it, but it's manual and you'd have to be pretty comfortable with docker cli. Docker does not provide an easy built-in way to verify that docker stack deploy succeeded. There is an open issue about it.
Fortunately for us, the community has created a few tools that address docker's shortcomings in this regard. Some of the most notable ones:
https://github.com/issuu/sure-deploy
https://github.com/sudo-bmitch/docker-stack-wait
https://github.com/ubirak/docker-php
Issuu, authors of sure-deploy, have a very good article describing this issue.
Typically in CI/CD I see everyone using docker or docker-compose. A container runs the same in docker as it does in docker swarm with respect to "does this container work by itself as intended".
That being said, if you still wanted to do integration testing in a multi-tier solution with swarm, you could do various things in automation. Note this would all be done on a single-node swarm to make testing easier (docker events doesn't pull node events from all nodes, so tracking a single node is much easier for CI/CD):
Have something monitoring docker events, e.g. docker events -f service=<service-name> to ensure containers aren't dying.
Always have healthchecks in your containers. They are the #1 way to ensure your app is healthy (at the container level) and you'll see them succeed or fail in docker events. You can put them in Dockerfiles, service create commands, and stack/compose files (see the sketch after this list). Here are some great examples.
You could attach another container to the same network to test your services remotely 1-by-1 using the tasks.<service-name> reverse DNS entries. This will avoid the VIP and let you talk to a specific replica.
You might get some stuff out of docker inspect <service-id or task-id>
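As a minimal illustration of the service-create variant mentioned above (the image, health command and intervals are arbitrary placeholders):
docker service create --name web \
  --health-cmd "wget -qO- http://localhost/ >/dev/null || exit 1" \
  --health-interval 10s \
  --health-retries 3 \
  nginx:alpine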
Another solution might be to use docker service scale - it will not return until the service has converged to the specified number of replicas, or it will time out.
export STACK=devstack # swarm stack name
export SERVICE_APP=yourservice # service name
export SCALE_APP=2 # desired amount of replicas
docker stack deploy -c docker-compose.yml $STACK --with-registry-auth   # docker-compose.yml is a placeholder for your compose spec file
docker service scale ${STACK}_${SERVICE_APP}=${SCALE_APP}
One drawback of that method is that you need to provide service names and their replica counts (but these can be extracted from the compose spec file using jq, as sketched below).
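For example, a sketch of pulling those out, assuming the compose spec has first been rendered to JSON (e.g. with docker compose config --format json on recent Compose releases, or any YAML-to-JSON converter):
jq -r '.services | to_entries[] | "\(.key) \(.value.deploy.replicas // 1)"' stack.json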
Also, in my use case I had to specify a timeout by prepending the timeout command, i.e. timeout 60 docker service scale, because docker service scale would wait out its own timeout even if some containers failed, which could potentially slow down continuous delivery pipelines.
References
Docker CLI: docker service scale
jq - command-line JSON processor
GNU Coreutils: timeout command
You can call this for every service; it returns when the service has converged (all OK):
docker service update STACK_SERVICENAME

Is it wrong to run a single process in docker without providing basic system services?

After reading the introduction of the phusion/baseimage I feel like creating containers from the Ubuntu image or any other official distro image and running a single application process inside the container is wrong.
The main reasons in short:
No proper init process (that handles zombie and orphaned processes)
No syslog service
Based on these facts, most of the official docker images available on Docker Hub seem to do things wrong. As an example, the MySQL image runs mysqld as the only process and does not provide any logging facility other than the messages written by mysqld to STDOUT and STDERR, accessible via docker logs.
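For example (the image tag and root password below are arbitrary placeholders):
docker run -d --name some-mysql -e MYSQL_ROOT_PASSWORD=secret mysql:8
docker logs -f some-mysql   # mysqld's STDOUT/STDERR is the only log output you get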
Now the question arises: what is the appropriate way to run a service inside a docker container?
Is it wrong to run only a single application process inside a docker container and not provide basic Linux system services like syslog?
Does it depend on the type of service running inside the container?
Check this discussion for a good read on this issue. Basically, the official party line from Solomon Hykes and docker is that docker containers should be as close to single-process micro servers as possible. There may be many such servers on a single 'real' server. If a process fails, you should just launch a new docker container rather than trying to set up initialization etc. inside the container. So if you are looking for the canonical best practice, the answer is: yeah, no basic Linux services. It also makes sense when you think in terms of many docker containers running on a single node: do you really want them all to run their own copies of these services?
That being said, the state of logging in the docker service is famously broken. Even Solomon Hykes, the creator of docker, admits it's a work in progress. In addition, you normally need a little more flexibility for a real-world deployment. I normally mount my logs onto the host system using volumes and have a logrotate daemon etc. running in the host VM. Similarly, I either install sshd or leave an interactive shell open in the container so I can issue minor commands without relaunching, at least until I am really sure my containers are air-tight and no more debugging will be needed.
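Concretely, something along these lines (the image name and paths are just an illustration of the volume approach described above):
# the app writes its logs under /var/log/myapp; a logrotate config on the host handles rotation
docker run -d --name myapp -v /var/log/myapp:/var/log/myapp myapp-image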
Edit:
With docker 1.3 and the exec command, it's no longer necessary to "leave an interactive shell open."
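For example (the container name is a placeholder):
docker exec -it myapp /bin/bash   # or /bin/sh on minimal images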
It depends on the type of service you are running.
Docker allows you to "build, ship, and run any app, anywhere" (from the website). That tells me that if an "app" consists of or requires multiple services/processes, then those should be run in a single Docker container. It would be a pain for a user to have to download and then run multiple Docker images just to run one application.
As a side note, breaking up your application into multiple images is subject to configuration drift.
I can see why you would want to limit a docker container to one process. One reason is start-up time: when creating a Docker provisioning system, it's essential to keep a container's start-up time to a minimum so that scaling sideways is fast. This means that if I can get away with running a single process per Docker container, then I should go for it. But that's not always possible.
To answer your question directly: no, it's not wrong to run a single process in docker.
HTH
