I have a VM on which I can flexibly change the CPU and memory. I have 5 containers running on it; the 3 slave containers execute test cases. But when I run the containers, I see that the entire CPU is consumed, and even when I increase the number of CPUs, they fill up as well. How do I overcome this?
Is limiting the resources of each container the only option?
You can limit a container's resources (memory and CPU) at run-time. For example:
docker run --cpus=".5" my-container
to allow only 50% of a CPU for my-container.
Those settings are run-time settings, so you cannot set them in your Dockerfile.
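Applied to the scenario in the question, a minimal sketch (the container and image names are placeholders, not taken from the question) could cap each of the three slave containers at half a CPU, so together they never occupy more than 1.5 CPUs:
# cap each test-runner container at half a CPU
docker run -d --cpus="0.5" --name slave1 my-test-image
docker run -d --cpus="0.5" --name slave2 my-test-image
docker run -d --cpus="0.5" --name slave3 my-test-image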
I assigned 2GB of RAM as the maximum memory limit for a Docker container when creating it. Now that more memory is required, is there a way to increase the maximum RAM limit for the existing container?
I used the following command to create the container.
docker run --name=mysql_50000 -m 2g -p 50000:3306 mysql_docker_image_v4
It worked fine as expected. My container has a maximum memory limit of 2GB. I need it to be 3GB now. Any help is appreciated. Thanks
Use the docker update command with the appropriate parameter (-m 3g). See the documentation:
https://docs.docker.com/engine/reference/commandline/update/
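A sketch applying this to the container from the question (if the update fails because the existing memory+swap ceiling is lower than the new memory limit, raise --memory-swap in the same command):
docker update -m 3g mysql_50000
# or, raising the memory+swap ceiling at the same time:
docker update -m 3g --memory-swap 6g mysql_50000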
I need a Docker container with 6GB of RAM.
I tried this command:
docker run -p 5311:5311 --memory=6g my-linux
But it doesn't seem to work: I logged in to the container and checked the amount of memory available. This output shows there are only 2GB available:
>> cat /proc/meminfo
MemTotal: 2046768 kB
MemFree: 1747120 kB
MemAvailable: 1694424 kB
I also tried Preferences -> Advanced in the Docker application.
If I set 6GB there, it works... I mean, I get a container with 6GB MemTotal.
But that way all my containers will have 6GB...
I was wondering how to allocate 6GB of memory for only one container, using some command or a setting in the Dockerfile. Any help?
Don't rely on /proc/meminfo for tracking memory usage from inside a docker container. /proc/meminfo is not containerized, which means that the file is displaying the meminfo of your host system.
Your /proc/meminfo indicates that your host system has 2G of memory available. The only way you'll be able to make 6G available in your container without adding more physical memory is to create a swap partition.
Once you have a swap partition larger than or equal to ~4G, your container will be able to use that memory (by default, Docker imposes no limitation on running containers).
If you want to limit the amount of memory available to your container explicitly to 6G, you could do docker run -p 5311:5311 --memory=2g --memory-swap=6g my-linux, which means that out of a total memory limit of 6G (--memory-swap), up to 2G may be physical memory (--memory). More information about this here.
There is no way to set memory limits in the Dockerfile that I know of (and I think there shouldn't be: Dockerfiles are there for building images, not for running containers), but docker-compose supports the above options through the mem_limit and memswap_limit keys, as sketched below.
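A minimal compose sketch of those keys, assuming the image name and port from the question (mem_limit and memswap_limit are service-level keys in the version 2 compose file format):
version: "2"
services:
  my-linux:
    image: my-linux
    ports:
      - "5311:5311"
    mem_limit: 2g       # physical memory cap
    memswap_limit: 6g   # memory + swap cap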
I am a newbie to the Docker world. I could successfully build and run a container with Tomcat, but performance is very poor. I logged into the running system and found that only 2 CPU cores and 4GB of RAM are allocated. Is that one reason for the bad performance, and if so, how can I allocate more resources?
I tried the following command, but no luck:
docker run --rm -c 3 -p 32772:8080 --memory=8Gb -d helloworld
Any pointers will be helpful. Thanks in advance.
Do you use Docker for Windows/Mac? Then you can change it in the settings (Docker icon in the taskbar).
On Windows, Docker runs in Hyper-V without dynamic memory, so the memory will not be available to your system even if it isn't used.
With docker info you can find out how many resources are available (see the sketch after this answer).
The bad performance may also be caused by very slow file access on Docker for Mac.
On Linux, Docker has no upper limit by default.
The CPU and memory arguments of docker run limit the resources for a single container; if they are not set, there is no upper limit.
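As a quick check of what the daemon (or its VM) can use, a sketch using docker info's Go-template output (NCPU and MemTotal are fields of the info structure):
docker info --format 'CPUs: {{.NCPU}}, Memory: {{.MemTotal}} bytes'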
I have a Docker image with a virtual size of 6.5 GB:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
Image_Name latest d8dcd701981e About an hour ago 6.565 GB
but the RAM in my system is only 4GB. The container works at a good speed though, so I am really confused about how RAM allocation is done for Docker containers. Is there any limit to the RAM allocated to a container, given that in the end a Docker container is just another isolated process running in the operating system?
The virtual size of an image has nothing to do with memory allocation.
If your huge image, once launched as a container, does very little, it won't reserve much memory (nor consume much CPU).
For more on memory allocation, see this answer: you can limit at runtime the maximum memory allocation. And that, independently of the image size.
For example:
$ docker run -ti -m 300M --memory-swap -1 ubuntu:14.04 /bin/bash
This sets a memory limit and disables the swap memory limit, which means the processes in the container can use 300M of memory and as much swap memory as they need (if the host has swap enabled).
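To verify the effective limit from inside the container (where /proc/meminfo is misleading, as noted in an earlier answer), a sketch reading the cgroup files; the first path assumes cgroup v1, the second cgroup v2:
cat /sys/fs/cgroup/memory/memory.limit_in_bytes
# on hosts using cgroup v2:
cat /sys/fs/cgroup/memory.max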
I have created a container:
docker run -c=20 -i -t ubuntu:latest /bin/bash
I tried to use the -c flag to control CPU usage and cap it at 50%. But when I run md5sum /dev/urandom inside the container, it uses 100% of the CPU on the host machine.
The -c flag of the docker run command modifies the container's CPU share weighting relative to the weighting of all other running containers.
It does not restrict the container's use of CPU from the host machine.
You can use the --cpu-quota flag to limit CPU usage, for example:
$ docker run -ti --cpu-quota=50000 ubuntu:latest /bin/bash
--cpu-quota is usually used in conjunction with --cpu-period. See more details in the Docker run reference document:
https://docs.docker.com/reference/run/#runtime-constraints-on-resources
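Since --cpu-quota is interpreted against --cpu-period (default 100000 microseconds), a quota of 50000 caps the container at half of one CPU. The --cpus flag mentioned in an earlier answer expresses the same ratio directly; a sketch:
docker run -ti --cpus="0.5" ubuntu:latest /bin/bash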
It seems that you are running a single container, so this is the expected result.
You might find this blog post helpful.
Every new container will have 1024 shares of CPU by default. This value does not mean anything when speaking of it alone. But if we start two containers and both will use 100% CPU, the CPU time will be divided equally between the two containers because they both have the same CPU shares (for the sake of simplicity I assume that there are no other processes running).
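A minimal sketch to observe shares under contention (the container names and the busybox image are illustrative, not from the question; pinning both containers to one core with --cpuset-cpus forces them to actually compete):
docker run -d --cpuset-cpus=0 --cpu-shares=1024 --name heavy busybox md5sum /dev/urandom
docker run -d --cpuset-cpus=0 --cpu-shares=512 --name light busybox md5sum /dev/urandom
docker stats heavy light
# heavy should get roughly twice the CPU time of light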
Take a look here; this is apparently what you were looking for:
https://docs.docker.com/engine/reference/run/#cpu-period-constraint
The default CPU CFS (Completely Fair Scheduler) period is 100ms. We can use --cpu-period to set the period of CPUs to limit the container’s CPU usage. And usually --cpu-period should work with --cpu-quota.
Examples:
$ docker run -it --cpu-period=50000 --cpu-quota=25000 ubuntu:14.04 /bin/bash
If there is 1 CPU, this means the container can get 50% CPU worth of run-time every 50ms.
Period and quota definition:
Within each given "period" (microseconds), a group is allowed to consume only up to "quota" microseconds of CPU time. When the CPU bandwidth consumption of a group exceeds this limit (for that period), the tasks belonging to its hierarchy will be throttled and are not allowed to run again until the next period.
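On a multi-core host the quota may exceed the period; a sketch granting two CPUs' worth of run-time per period:
docker run -it --cpu-period=100000 --cpu-quota=200000 ubuntu:14.04 /bin/bash
# 200000 us of quota per 100000 us period = up to 2 full CPUs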