Is Memory consumption also dynamic just like CPU for Docker containers - docker

I want to run multiple containers on a single host by providing limits on CPU & memory. If my host has 1024 CPU shares & I assign 512 & 512 to two containers, it means the first container can take as much as 1024 if the second container is not using any CPU. But if both of them are using CPU, then both get limited to 512.
Is the same true for memory usage? Or can I somehow set it up that way?
Here is the scenario:
I have 1024 MB of RAM available for containers and two containers; I want each one to take 512 MB of RAM but be able to extend beyond 512 MB if the other container is not using it. How is that possible?

In the case of memory, you give Docker a fixed amount of memory (and swap) in bytes, kilobytes, megabytes, etc., and that amount limits the memory the container can allocate, regardless of whether the host has memory free or whether it is being used by another process.
When limiting memory it's important to pay attention to how Docker (and cgroups) limit the container's memory and swap. Since Docker v1.5 (with fixes in v1.6), Docker lets you limit memory and swap independently. Check the Docker documentation for more details about this.
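As a minimal sketch of both behaviours (container names, image names and the exact values here are only illustrative):

# Relative CPU weight: with 512 shares each, both containers split the CPU
# roughly 50/50 under contention, but either can use the whole CPU when the
# other is idle.
docker run -d --name app1 --cpu-shares 512 nginx
docker run -d --name app2 --cpu-shares 512 nginx

# Hard memory cap: 512 MB of RAM, with --memory-swap giving the total of
# memory + swap (here 512 MB of swap on top). This limit applies even if the
# host has plenty of free memory.
docker run -d --name app3 --memory 512m --memory-swap 1g nginx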

Related

Docker, set Memory Limit for group of container

I'm running a Docker environment with around 25 containers for my private use. My system has 32 GB of RAM.
RStudio Server and JupyterLab in particular often need a lot of memory.
So I limited the memory for both containers to 26 GB each.
This works well as long as both applications aren't holding dataframes in memory at the same time. But if RStudio Server stores a few GB and Jupyter also fills its memory up to the limit, my system crashes.
Is there any way to configure these two containers so that, together, they are allowed to use at most 26 GB of RAM?
Or a relative limit, e.g. Jupyter is allowed to use 90% of the free memory.
Since I'm now working with large datasets, it can happen at any time (because I forget to close a kernel or something else) that memory usage grows up to the limit, and I want just the container to crash, not the whole system.
And I don't want to lower the limit for Jupyter further, as the biggest dataset on its own needs 15 GB of memory.
Any ideas?
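For reference, the per-container caps described above look roughly like this in a Compose file (service and image names are placeholders, and depending on your Compose version the key may instead be deploy.resources.limits.memory):

services:
  rstudio:
    image: rocker/rstudio          # placeholder image
    mem_limit: 26g                 # hard per-container cap
  jupyter:
    image: jupyter/base-notebook   # placeholder image
    mem_limit: 26g

A single combined cap for both containers isn't expressible this way; one possibility (untested here) is to put both containers under a common cgroup parent via cgroup_parent and set the memory limit on that parent cgroup.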

Does Google Cloud Run memory limit apply to the container size?

For Cloud Run's memory usage, from the docs (https://cloud.google.com/run/docs/configuring/memory-limits):
Cloud Run applications that exceed their allowed memory limit are terminated.
When you configure memory limit settings, the memory allocation you are specifying is used for:
Operating your service
Writing files to disk
Running binaries or other processes in your container, such as the nginx web server.
Does the size of the container count towards "operating your service" and thus count towards the memory limit?
We're intending to use images that could already approach the memory limit, so we would like to know whether the service itself will only have access to what is left after subtracting the container size from the limit.
Cloud Run PM here.
Only what you load into memory counts toward your memory usage. So, for example, if you have a 2 GB container but only execute a very small binary inside it, then only that binary will count as used memory.
This means that if your image contains a lot of OS packages that will never be loaded (because, for example, you inherited from a big base image), this is fine.
Size of the container image you deploy to Cloud Run does not count towards the memory limit. For example, if your container image is 3 GiB, you can still run on a 256 MiB memory environment.
Writing new files to local filesystem, or (obviously) allocating more memory within your app will count towards the memory usage of your container. (Perhaps also obvious, but worth mentioning) the operating system will "load" your container's entrypoint executable to memory (well, to execute it). That will count towards the available memory as well.
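For illustration, the memory limit itself is set per service (the service, image, and region names below are placeholders):

# Deploy with a 512 MiB memory limit
gcloud run deploy my-service --image gcr.io/my-project/my-image --memory 512Mi --region us-central1

# Raise the limit on an existing service
gcloud run services update my-service --memory 1Gi --region us-central1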

elasticsearch - max_map_count vs Heap size

I am using the official Elasticsearch Docker image. Since ES requires a minimum number of memory-mapped areas (as documented), I increased it using
docker-machine ssh <NAME> sudo sysctl -w vm.max_map_count=262144
I also read here that the memory allocated to Elasticsearch should be around 50% of the total system memory.
I am confused about how these two settings play together. How does allowing more memory-mapped regions affect the RAM allocated? Is it part of that RAM, or is it taken on top of the RAM allocated to Elasticsearch?
To sum it up very shortly, the heap is used by Elasticsearch only and Lucene will use the rest of the memory to map index files directly into memory for blazing fast access.
That's the main reason why the best practice is to allocate half the memory to the ES heap and leave the remaining half to Lucene. However, there's also another best practice of not allocating more than 32-ish GB of RAM to the ES heap (and sometimes even less than 30 GB).
So, if you have a machine with 128 GB of RAM, you won't allocate 64 GB to ES but still a maximum of 32-ish GB, and Lucene will happily gobble up all the remaining 96 GB of memory to map its index files.
Tuning the memory settings is a savant mix of giving enough memory (but not too much) to ES and making sure Lucene can have a blast by using as much of the remaining memory as possible.
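As a rough sketch of the two knobs side by side (the heap size and image tag are illustrative; pick a heap of roughly half the RAM you dedicate to the node, staying below ~32 GB):

# Host-level: raise the number of memory-mapped areas the kernel allows
sudo sysctl -w vm.max_map_count=262144

# Container-level: pin the ES heap, leaving the rest of the RAM to Lucene's
# file-system cache (example: 16 GB heap on a 32 GB host)
docker run -d --name es -e "ES_JAVA_OPTS=-Xms16g -Xmx16g" docker.elastic.co/elasticsearch/elasticsearch:7.17.0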

Change CPU capacity of Docker containers

I'm doing an internship focused on Docker and I have to load-balance an application which has a client, a server and a database. My goal is to dynamically scale the number of server containers according to their CPU usage. For instance, if the CPU usage goes over 60% I add a new container on the fly to spread the load. My problem is that my simulation never gets the CPU usage above 20%; it is a very simple simulation where random users register and visit random pages.
Question: How can I lower the CPU capacity of my server containers using my docker-compose file in order to artificially push the CPU usage higher? I tried to use the cpu_quota and cpu_shares instructions, but they are not well documented and I don't know how they work or how they affect my containers.
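A minimal sketch of those two keys in a Compose (version 2 format) service definition; the service and image names are placeholders, and the exact keys available depend on your Compose file version:

services:
  server:
    image: my-app-server   # placeholder image
    # cpu_quota is microseconds of CPU time per 100 ms scheduling period:
    # 20000 caps the container at ~20% of one core, so the same load pushes
    # its reported CPU usage much higher.
    cpu_quota: 20000
    # cpu_shares is only a relative weight under contention, not a hard cap.
    cpu_shares: 512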

How to boot CoreOS with different ramdisk size

I am trying to boot CoreOS from a PXE server using ramdisk.
However, no matter what size of ramdisk I specify (with ramdisk_size), CoreOS always takes half of the memory as a ramdisk.
Can anyone tell me how to specify the ramdisk size at boot?
This has to do with the default behaviour of temporary filesystems (tmpfs): the limit defaults to 50% of RAM, but that memory isn't reserved up front; actual usage grows over time and will not exceed the 50% limit.
Also you'll find this in the official CoreOS docs regarding PXE:
The filesystem will consume more RAM as it grows, up to a max of 50%.
The limit isn't currently configurable.
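To illustrate the general tmpfs behaviour described above (this is not a workaround for the CoreOS PXE case, where the limit is not configurable; the mount point and size below are only examples):

# Show the size limits of current tmpfs mounts (typically ~50% of RAM by default)
df -h -t tmpfs

# On a general Linux system the limit is chosen at mount time
sudo mount -t tmpfs -o size=2g tmpfs /mnt/scratch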
