Release some memory of a running Docker container on the fly

Suppose there is a Docker container created with a fixed memory configuration. Can I release some memory from it on the fly? In LXC we can do this with a simple command: lxc-cgroup -n name memory.limit_in_bytes 256M.

You can't release container memory without either:
Restarting the container
Clearing the host's memory
This is because Docker containers are represented as ordinary processes on the host system. Therefore you would need to free the memory associated with the container's process. As far as I understand this is difficult, especially since processes may rely on shared memory structures, etc.
If you really need memory released, you can use the following commands to help clear the host's memory, assuming you are on Linux:
Flush data in memory buffers to disk:
$ sync
Clear the OS page cache:
$ echo 1 > /proc/sys/vm/drop_caches
(Writing 3 instead of 1 also drops dentries and inodes.)

A container on its own doesn't use much memory. Containers are very lightweight by design. Think of a container as a fenced-off area on your host, just a bit of tape dividing it from the rest of the host.
The kernel has to house the cgroups, namespaces, interfaces, devices, etc., but this is negligible.
Docker itself also introduces a small amount of overhead for its management of the container, but that is normally negligible too (unless you, say, map a thousand ports into a container).
The main use of memory will be the processes that run in the container. A container won't use 256MB just because you set that as the upper limit; that is simply the ceiling before allocations start failing.
If you want to release used memory, either the process you are running in the container needs to have that ability, or you can restart the process using the memory.
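That said, if the goal is only to change the limit itself on the fly (the Docker analogue of the lxc-cgroup command in the question), docker update can rewrite a running container's cgroup limits without a restart; it moves the ceiling but does not free memory the process already holds. A sketch, where the container name is a placeholder and the verification path assumes cgroup v1:

```shell
$ docker update --memory 256m --memory-swap 256m mycontainer
$ docker exec mycontainer cat /sys/fs/cgroup/memory/memory.limit_in_bytes
```

If a swap limit was previously set, --memory-swap usually has to be adjusted together with --memory.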

Related

Why does a docker container need more RAM at startup than during operation?

I have a docker compose file with multiple Docker containers. If I chain these containers with depends_on so they start one after the other, all containers start without problems and I end up with 300MB of RAM left.
But when I try to start all containers at the same time I get an OOM error.
Is the increased RAM consumption normal?
If so, could someone tell me whether it comes from the Docker side (extra processes being started) or whether it is possibly due to my containers?
The increased RAM usage is completely normal in your case. By using depends_on you are spinning up each container one at a time; without it you are spinning up multiple containers concurrently.
docker-compose uses extra RAM behind the scenes while spinning up your containers. Because each container was started alone under depends_on, that extra RAM was available and each container was able to launch. With the concurrent launch, I'm assuming the total RAM needed surpassed the 300MB you had available.
I would recommend configuring memory limits and reservations for each container - especially with a scenario like yours where your RAM is very limited.
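As a sketch, in a version 2 compose file those limits and reservations look like this (service and image names are placeholders):

```yaml
version: "2.4"
services:
  web:
    image: my-image          # placeholder
    mem_limit: 256m          # hard ceiling; the container is OOM-killed above this
    mem_reservation: 128m    # soft reservation applied under memory pressure
```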

container has its own disk but shared memory?

I am new to Docker; just a basic question about containers. Below is a picture from a book:
It says that containers share the host computer's CPU, OS and memory, but each container has its own computer name, IP address and disk.
I am a little confused about the disk. Isn't disk just another resource, like memory? If a container has 1GB of data inside, the host must allocate 1GB of disk space for it from its own disk, just like memory, so isn't the container's disk also shared?
You can make that diagram more precise by saying that each container has its own filesystem. /usr in a container is separate from /usr on other containers or on the host, even if they share the same underlying storage.
By way of analogy to ordinary processes, each process has its own address space and processes can't write to each other's memory, even though they share the same underlying memory hardware. The kernel assigns specific blocks (pages) of physical memory to specific process address spaces. If you go out of your way, there are actually a couple of ways to cause blocks of memory to be shared between processes. The same basic properties apply to container filesystems.
On older Docker installations (docker info will say devicemapper) Docker uses a reserved fixed-size disk area. On newer Docker installations (docker info will say overlay2) Docker can use the entire host disk. The Linux kernel is heavily involved in mapping parts of the host disk (or possibly host filesystem) into the per-container filesystem spaces.
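One way to see this separation in practice, assuming the alpine image is available (container and file names are placeholders):

```shell
$ docker run --name first alpine touch /usr/marker
$ docker run --rm alpine ls /usr   # no "marker" here: this container has its own /usr
```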

Docker container not utilizing cpu properly

A single Docker container works well for a small number of parallel processes, but when we increase the number of parallel processes to 20-30, execution slows down. The processes get slow, yet the container is utilizing only 30-40% of the CPU.
I have tried the following things to make the container utilize the CPU properly and not slow down the processes:
I have explicitly allocated CPU and RAM to the container.
I have also increased the number of file descriptors, number of processes and stack size using ulimit.
Even after doing these two things the container is still not utilizing the CPU properly. I am using docker exec to start multiple processes in a single running container. Is there an efficient way to use a single container for executing multiple processes, or to make the container use 100% of the CPU?
The configuration I am using is:
Server - AWS EC2 t2.2xlarge (8 cores, 32GB RAM)
Docker version - 18.09.7
OS - Ubuntu 18.04
When you run something on a machine it consumes the following resources: CPU, RAM, disk I/O and network bandwidth. If your container is exhausting any one of these, it is possible that the others still have headroom. So monitor your system metrics to find the root cause.
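A sketch of the kind of monitoring meant here (vmstat and iostat come from the procps/sysstat packages and may need installing):

```shell
$ docker stats --no-stream    # per-container CPU %, memory, network and block I/O
$ vmstat 1                    # host-wide CPU, memory and swap, once per second
$ iostat -x 1                 # per-device disk utilization
```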

Create docker container with non-shareable memory and non-shareable cpus

I am using Docker 17.03-ee.
I have to create Docker containers dynamically with a variable amount of memory and a variable number of CPUs, and this hardware should not be shared between the containers.
As an example, let us consider i have
Host with 8GB memory and 4 cores.
Create a docker container (d1) with 3GB memory and 1 CPU
Create a docker container (d2) with 5GB memory and 3 CPUs
Create a docker container (d3) with 2GB memory and 2 CPUs
I have noticed that docker run takes an -m flag with which I can set the memory limit, and it also has --cpuset-cpus with which I can assign specific CPU cores to my container.
I was able to create d1 and d2 using the above flags. While creating them I observed that I have to take care of core management myself, i.e. I have to maintain the assignment of cores to containers.
Can I just give a number of cores and have the core assignment taken care of automatically?
After creating d1 and d2, as I have used up all the available memory and CPU sets, I should not be allowed to create further containers.
But if I try to create a container d4 with 3GB memory, I am able to create it successfully.
How can I allocate memory to a specific container without sharing that memory with other containers?
Is there any existing solution which takes care of managing the memory and CPU cores assigned to containers? Or do I have to maintain this information myself and ensure that I don't create new containers when all the memory or cores are used up?
Since Docker 1.13 you can use --cpus, which is probably what you're looking for. Pre-1.13 you had to use --cpu-period and --cpu-quota, for which --cpus is actually a helper.
To limit a container to 1 core use --cpus=1.0, for 2 cores --cpus=2.0, etc. Fractions are allowed as well. But note that neither --cpus nor --cpuset-cpus reserves these CPUs. They only limit how much CPU (or which cores) a container can use; they cannot prevent other containers or processes from using the same resources.
The situation is similar with memory (--memory). You can declare a maximum limit, and the container simply cannot allocate more than that, but this does not reserve or "pre-allocate" memory in any way.
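Putting both flags together for the d1 container from the question, a sketch (the image name is a placeholder):

```shell
$ docker run -d --name d1 --cpus=1.0 --memory=3g my-image
# or pin to one specific core instead of limiting total CPU time:
$ docker run -d --name d1 --cpuset-cpus=0 --memory=3g my-image
```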
If you're looking for a way to enforce that kind of behavior, you should look into container orchestration solutions like Kubernetes, OpenShift, Docker Swarm and such. These were created specifically to distribute containers across a cluster while respecting total cluster resources. You can then simply specify, for example, that container X should use at most 2 cores and 500MB of memory and have 2 replicas; if your cluster does not have enough resources, the container will simply be left pending.
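In Kubernetes, for example, this is expressed per container as resource requests and limits (all names below are placeholders); the scheduler only places the pod on a node with enough unreserved capacity, otherwise it stays pending:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: d1
spec:
  containers:
  - name: app
    image: my-image          # placeholder
    resources:
      requests:              # reserved on the node at scheduling time
        cpu: "1"
        memory: 3Gi
      limits:                # hard runtime ceiling
        cpu: "1"
        memory: 3Gi
```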

How does docker use CPU cores from its host operating system?

My understanding, based on the fact that Docker is based on LXC, is that Docker containers share various resources of their host operating system. My concern is with CPU cores. Here is a scenario:
a host linux OS has 8 cores
I have to deploy a set of docker containers on the host OS above.
Some of the docker containers that I need to deploy would be better suited to use 2 cores
a) So if I run all of the docker containers on that host, will they consume CPU/cores as needed like if they were being run as normal installed applications on that host OS ?
b) Will the docker container run as its own process, with all of the processing contained in it stuck to that parent process's CPU core?
c) How can I specify a docker container to use a number of cores (4, for example)? I saw there is a -C flag that can point to a core id, but it appears there is no option to have the container pick N cores at random.
Currently, I don't think Docker provides this level of granularity. It doesn't specify how many cores it allocates in its lxc.conf files, so you will potentially get all cores for each container (or possibly 1, I'm not 100% sure on that).
However, you could tweak the conf file generated for a given container and set something like
cpuset {
cpuset.cpus="0-3";
}
It might be that things changed in the latest (few) versions. Nowadays you can constrain your docker container with parameters for docker run:
The equivalent for the current answer in the new docker version is
docker run --cpuset-cpus="0-3" ubuntu /bin/echo 'Hello world'
However, this will limit the docker process to these CPUs, but (please correct me if I am wrong) other containers could also request the same set.
A possibly better way would be to use CPU shares.
For more information see https://docs.docker.com/engine/reference/run/
From Oracle's documentation:
To control a container's CPU usage, you can use the --cpu-period and --cpu-quota options with the docker create and docker run commands from version 1.7.0 of Docker onward.
The --cpu-quota option specifies the number of microseconds that a container has access to CPU resources during a period specified by --cpu-period.
As the default value of --cpu-period is 100000, setting the value of --cpu-quota to 25000 limits a container to 25% of the CPU resources. By default, a container can use all available CPU resources, which corresponds to a --cpu-quota value of -1.
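The 25% figure in the quote above follows directly from the quota/period ratio; a runnable sketch of the arithmetic (no Docker needed):

```shell
# fraction of one CPU = cpu-quota / cpu-period
period=100000   # default --cpu-period, in microseconds
quota=25000     # example --cpu-quota
echo "$((100 * quota / period))%"   # prints 25%
```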
So if I run all of the docker containers on that host, will they consume CPU/cores as needed like if they were being run as normal installed applications on that host OS?
Yes.
CPU
By default, each container’s access to the host machine’s CPU cycles is unlimited. You can set various constraints to limit a given container’s access to the host machine’s CPU cycles.
Will the docker container run as its own process, with all of the processing contained in it stuck to that parent process's CPU core?
Nope.
Docker uses the Completely Fair Scheduler (CFS) for sharing CPU resources among containers, so containers have configurable access to CPU.
How can I specify a docker container to use a number of cores (4, for example)? I saw there is a -C flag that can point to a core id, but it appears there is no option to have the container pick N cores at random.
It is highly configurable. There are more CPU options in Docker which you can combine.
--cpus= Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set --cpus="1.5", the container is guaranteed at most one and a half of the CPUs.
--cpuset-cpus Limit the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs a container can use, if you have more than one CPU. The first CPU is numbered 0. A valid value might be 0-3 (to use the first, second, third, and fourth CPU) or 1,3 (to use the second and fourth CPU).
And more...
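For example, the two flags above can also be combined (the image name is a placeholder):

```shell
$ docker run -d --cpus=4 my-image                    # up to 4 CPUs' worth of time, any cores
$ docker run -d --cpuset-cpus=0-3 my-image           # restricted to cores 0-3
$ docker run -d --cpus=2 --cpuset-cpus=0-3 my-image  # 2 CPUs' worth of time, only on cores 0-3
```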
