Setting CPU and Memory limits globally for all docker containers - docker

There are many examples that talk about setting memory, CPU, etc. with docker run. Is it possible to set these limits globally, so that every container is created with those values?

There may be other ways, such as using AppArmor (I'll check), but the first thing that comes to mind is this project from a friend:
Docker enforcer
https://github.com/piontec/docker-enforcer
This project is a Docker plugin that kills containers if they don't meet certain pre-defined policies, such as having strict memory and CPU limits.
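The idea can be sketched with the stock Docker CLI alone. A hedged illustration of the policy-check concept (not docker-enforcer's own configuration format):
# List running containers whose HostConfig carries no memory limit
# (docker inspect reports 0 when --memory was never set).
for id in $(docker ps -q); do
  mem=$(docker inspect --format '{{.HostConfig.Memory}}' "$id")
  if [ "$mem" = "0" ]; then
    echo "container $id is running without a memory limit"
  fi
done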

Related

Is it possible to run a large number of docker containers?

A bit of background first. I am building a small service (a website) where the user is given various tools that run according to parameters the user specifies. In my implementation the tools are one big script that runs inside Docker, so my service ends up launching a new Docker container for each user.
I was thinking about using AWS Fargate or Google Cloud Run, or any other service that makes it possible to run a Docker container.
But I'm curious: what if there are 1,000 or 10,000 users, each with their own Docker container? Is that reasonable? Do the services (AWS, Google Cloud) have any restrictions, or is this simply a bad design?
If I understand correctly, you are suggesting that you instantiate a Docker container for each of your users. I think there are a couple of issues with this:
Depending on how many users you have, you get into the realm of too many containers (each container consumes resources, not just memory and CPU but also the TCP/IP port pool, which can be exhausted).
Isolation: read up on why containers are not VMs.

Unsure on how to Orchestrate docker containers

I'm new to Docker and want to accomplish something, but I am unsure how to orchestrate my Docker containers to do it.
What I want to do:
I have an API that, in simple terms, does a calculation on a requested file. It loads the file (around 80 MB) from disk into memory and then keeps it there for 2 hours (caching).
I want an architecture where, for example, when a container gets overwhelmed with requests a new one fires up, and when the original container frees its memory and the requests slow down, the extra container shuts down.
Is Memory and CPU Container Orchestration possible?
Thank You,
/Jeremy
Docker itself is not designed to orchestrate multiple containers. You need a container orchestration environment; the most popular are Kubernetes, Docker Swarm, and Apache Mesos, or, if you want to run in the cloud, something vendor-specific like AWS ECS.
Here's a good list of container clustering toolkits.
In all of these environments it's possible to configure what you described. If you're completely new to the topic, I recommend installing Docker-for-Desktop, which comes with built-in Kubernetes, and playing with it locally.
A container orchestration system is indeed what you need to manage your Docker containers efficiently.
You can find a fairly complete list of production-grade solutions in this spreadsheet.
Tools like Kubernetes give you a rich set of benefits, e.g.:
Provisioning and deployment of containers
Redundancy and availability of containers
Scaling up or removing containers to spread application load evenly across the host infrastructure
Allocation of resources between containers (see the sketch after this list)
Load balancing of service discovery between containers
Health monitoring of containers and hosts
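For instance, resource allocation can be declared per container in Kubernetes; a hedged sketch (the deployment name my-api and the numbers are made up):
# Set CPU/memory requests and limits on an existing deployment
kubectl set resources deployment my-api \
  --requests=cpu=250m,memory=256Mi \
  --limits=cpu=500m,memory=512Mi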
In Kubernetes there is a Horizontal Pod Autoscaler, which automatically scales the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics). Note that Horizontal Pod Autoscaling does not apply to objects that can't be scaled, for example, DaemonSets.
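As a hedged example of the above (the deployment name my-api and the thresholds are hypothetical), such an autoscaler can be created straight from the CLI:
# Keep average CPU around 70%, scaling between 1 and 5 replicas
kubectl autoscale deployment my-api --cpu-percent=70 --min=1 --max=5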
To begin with, I would recommend starting with minikube.
More advanced options are to set up a cluster manually using kubeadm, or to look into the cloud providers' managed offerings.
Please be aware that with a managed offering you will not have the option to modify the cloud-based control plane. More info in my related answer.

Are docker containers safe enough to run third-party untrusted containers side-by-side with production system?

We plan to allow execution of third-party micro-service code on our infrastructure, interacting with our API.
Is dockerizing it safe enough? Are there solutions for tracking the resources (network, RAM, CPU) a container consumes?
You can install portainer.io (see its demo, password tryportainer)
But to truly isolate those third-party micro-services, you could run them in their own VM defined on your infrastructure. That VM would run a Docker daemon and the services. As long as the VM has access to the API, those micro-service containers will do fine and won't have direct access to anything on the underlying infrastructure.
You need to define/size the VM correctly to allocate enough resources for the containers to run, with each one assuring its own resource isolation.
Docker (17.03) is a great tool for securely isolating processes. It uses kernel namespaces, control groups and some kernel capabilities in order to isolate processes that run in different containers.
But those processes are not 100% isolated from each other, because they share the same kernel. Every dockerized process that makes an I/O call leaves its isolated environment for that moment and enters a shared environment, the kernel. Although you can set limits per container, such as how much CPU or how much RAM it may use, you cannot set limits on all kernel resources.
You can read this article for more information.
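On the resource-tracking part of the question, the stock Docker CLI already reports per-container consumption; a minimal hedged sketch (the container name web is made up):
# Live CPU, memory, network and block I/O for every running container
docker stats
# One-shot, scripting-friendly sample for a single container
docker stats --no-stream --format '{{.Name}}: {{.CPUPerc}} CPU, {{.MemUsage}}, {{.NetIO}} net' web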

Cgroups and docker - misunderstanding

I am trying to understand Docker in connection with cgroups, or cgroups in connection with Docker.
I know that cgroups make it possible to manage resources for a particular process; for example, we can assign some amount of RAM to Firefox and set some policies on its CPU usage.
However, why are cgroups so strongly associated with containers (Docker)?
After all, I can also use cgroups (in the same way) without Docker. For example, if I launch an Apache server within a container or without a container, in both cases I can control the consumed resources.
Moreover, I can also use cgroups in connection with VirtualBox, so I can't see the magic of Docker.
Can you tell me where I am wrong? I think I don't understand something.
Docker uses cgroups, and you can use cgroups manually, but Docker is not limited to the cgroup feature set:
Resource limiting
Prioritization
Accounting
Control
Docker Provides:
Rapid application deployment
Portability across machines
Version control and component reuse
Lightweight footprint and minimal overhead
Simplified maintenance
Less disk footprint using AUFS
A large community with lots of pre-built images
Security
...
And the list goes on. Not to mention that manually configuring cgroups, network namespaces, AUFS, etc. can be time-consuming and prone to mistakes.
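To make that concrete, here is a hedged sketch of the same 512 MB memory cap set by hand through the cgroup filesystem and then with a single docker run flag (cgroup v1 paths; v2 uses memory.max instead, and the group name mygroup is made up):
# By hand, via the cgroup v1 memory controller (as root):
mkdir /sys/fs/cgroup/memory/mygroup
echo $((512 * 1024 * 1024)) > /sys/fs/cgroup/memory/mygroup/memory.limit_in_bytes
echo $$ > /sys/fs/cgroup/memory/mygroup/cgroup.procs   # move the current shell into the group
# With Docker, the same limit is one flag:
docker run --memory=512m nginx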

Limiting a Docker Container to a single cpu core

I'm trying to build a system which runs pieces of code in consistent conditions, and one way I imagine this being possible is to run the various programs in docker containers with the same layout, reserving the same amount of memory, etc. However, I can't seem to figure out how to keep CPU usage consistent.
The closest thing I can find is "CPU shares", which, if I understand the documentation, limit CPU usage relative to the other containers and processes running on the system, and to what's available on the system. They do not seem capable of limiting the container to an absolute amount of CPU usage.
Ideally, I'd like to set up docker containers that would be limited to using a single cpu core. Is this at all possible?
If you use a newer version of Docker, you can use the --cpuset-cpus option of docker run to specify which CPU cores to allocate:
docker run --cpuset-cpus="0" [...]
If you use an older version of Docker (< 0.9), which uses LXC as the default execution environment, you can use --lxc-conf to configure the allocated CPU cores:
docker run --lxc-conf="lxc.cgroup.cpuset.cpus = 0" [...]
In both of those cases, only the first CPU core will be available to the docker container. Both of these options are documented in the docker help.
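Newer Docker releases (1.13 and later) also have a --cpus flag, which caps total CPU time rather than pinning the container to particular cores; a hedged sketch:
docker run --cpus=1 [...]
With --cpuset-cpus the work stays on a fixed core, while --cpus only caps the total share, so for reproducible benchmarks the two can also be combined.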
I've tried to provide a tutorial on container resource allocation:
https://gist.github.com/afolarin/15d12a476e40c173bf5f
