How to boot CoreOS with different ramdisk size - ramdisk

I am trying to boot CoreOS from a PXE server using ramdisk.
However, no matter what ramdisk size I specify (with ramdisk_size), CoreOS always takes half of the memory as a ramdisk.
Can anyone tell me how to specify the ramdisk size at boot?

This has to do with how temporary filesystems (tmpfs) behave by default: the limit defaults to 50% of RAM, but that memory is not reserved up front; actual usage grows over time and will never exceed the 50% cap.
You'll also find this in the official CoreOS docs regarding PXE:
The filesystem will consume more RAM as it grows, up to a max of 50%.
The limit isn't currently configurable.
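You can see the cap from inside the booted system; for example, the root tmpfs will show up sized at roughly half of physical RAM (a quick check, assuming a standard shell on the PXE-booted host):

    # The root filesystem is a tmpfs capped at ~50% of RAM
    df -h /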

Related

Docker, set memory limit for a group of containers

I'm running a Docker environment with around 25 containers for my private use. My system has 32 GB of RAM.
RStudio Server and JupyterLab in particular often need a lot of memory, so I limited the memory for both containers to 26 GB each.
This works fine as long as the two applications are not both storing dataframes in memory. If RStudio Server holds a few GB and Jupyter also fills its memory up to the limit, my system crashes.
Is there any way to configure these two containers so that together they may use at most 26 GB of RAM?
Or a relative limit, e.g. Jupyter is allowed to use 90% of free memory?
Since I'm now working with large datasets, it can happen at any time (because I forget to close a kernel or something similar) that memory grows up to the limit, and in that case I want only the container to crash, not the whole system.
And I don't want to lower Jupyter's limit any further, because the biggest dataset on its own needs 15 GB of memory.
Any ideas?
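For reference, the per-container limits described above would look roughly like this in a compose file (image names and values are illustrative, not a recommendation). As far as I know, Compose has no built-in way to cap two services as a group; the usual workaround is to give both services the same cgroup_parent and limit that cgroup on the host.

    version: "2.4"
    services:
      rstudio:
        image: rocker/rstudio                  # illustrative image
        mem_limit: 26g                         # per-container hard cap
        # cgroup_parent: shared.slice          # hypothetical shared cgroup for a group-wide cap
      jupyter:
        image: jupyter/datascience-notebook    # illustrative image
        mem_limit: 26g
        # cgroup_parent: shared.slice          # same parent on both, then cap that cgroup on the host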

Does Google Cloud Run memory limit apply to the container size?

From the Cloud Run docs on memory usage (https://cloud.google.com/run/docs/configuring/memory-limits):
Cloud Run applications that exceed their allowed memory limit are terminated.
When you configure memory limit settings, the memory allocation you are specifying is used for:
Operating your service
Writing files to disk
Running binaries or other processes in your container, such as the nginx web server.
Does the size of the container count towards "operating your service", and therefore towards the memory limit?
We're intending to use images that could already approach the memory limit, so we would like to know whether the service itself will only have access to what is left after subtracting the container size from the limit.
Cloud Run PM here.
Only what you load in memory counts toward your memory usage. So for example, if you have a 2GB container but only execute a very small binary inside it, then only that binary will count as used memory.
This means that if your image contains a lot of OS packages that will never be loaded (because, for example, you inherited from a big base image), this is fine.
Size of the container image you deploy to Cloud Run does not count towards the memory limit. For example, if your container image is 3 GiB, you can still run on a 256 MiB memory environment.
Writing new files to the local filesystem, or (obviously) allocating more memory within your app, will count towards the memory usage of your container. Perhaps also obvious, but worth mentioning: the operating system will load your container's entrypoint executable into memory in order to execute it, and that will count towards the available memory as well.
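For completeness, the memory limit itself is configured per revision; a minimal sketch with the gcloud CLI (service, project and image names are placeholders):

    # Deploy a revision with a 512 MiB memory limit
    gcloud run deploy my-service --image gcr.io/my-project/my-image --memory 512Mi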

How to calculate the docker limits

I created my Docker container (Python Flask).
How can I work out what limits to set for memory and CPU?
Are there tools that run performance tests against Docker with different limits and then advise the best values to use?
With an application already running inside of a container, you can use docker stats to see its current CPU and memory utilization. While there is little harm in setting CPU limits too low (it will just slow down the app, but it will still run), be careful to keep memory limits above the worst-case scenario. When apps attempt to exceed their memory limit, they are killed and usually restarted by a restart policy/orchestration tool. If the limit is set too low, you may find your app in a restart loop.
This is more about the consumption of your specific Flask application; you can probably use the resource module in Python to measure it.
More information here and here.
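Putting the first answer into concrete commands (container and image names are made up, and the limits are only illustrative):

    # Watch live CPU and memory usage of the running container
    docker stats my-flask-app

    # Once you know the worst-case usage, run it with limits comfortably above that
    docker run -d --name my-flask-app --memory=512m --cpus=1.0 my-flask-image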

Change CPU capacity of Docker containers

I'm doing an internship focused on Docker and I have to load-balance an application which has a client, a server and a database. My goal is to dynamically scale the number of server containers according to their CPU usage. For instance, if the CPU usage goes over 60% I add a new container on the fly to spread the load. My problem is that my simulation never pushes the CPU usage above 20%; it is a very simple simulation in which random users register and visit random pages.
Question: How can I lower the CPU capacity of my server containers in my docker-compose file, in order to artificially drive the CPU usage higher? I tried the cpu_quota and cpu_shares options, but they are not well documented and I don't know how they work or how they affect my containers.
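For reference, this is roughly what those options look like in a compose file (service and image names are illustrative; cpus needs compose file format 2.2+, while cpu_quota and cpu_shares are the older cgroup-level knobs):

    version: "2.2"
    services:
      server:
        image: my-server-image    # illustrative image
        cpus: 0.5                 # hard cap: half a CPU core
        # cpu_quota: 50000        # older equivalent: 50% of the default 100000us period
        # cpu_shares: 256         # relative weight, only matters when CPUs are contended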

Is memory consumption also dynamic just like CPU for Docker containers?

I want to run multiple containers on a single host by setting limits on CPU & memory. If my host has 1024 CPU shares and I assign 512 & 512 to two containers, the first container can take up to the full 1024 if the second container is not using any CPU. But if both of them are using CPU, both are limited to 512.
Is the same true for memory usage? Or can I somehow set it up that way?
Here is the scenario:
I have 1024 MB of RAM available for containers and I have two containers. I want each one to get 512 MB of RAM, but each should be able to use more than 512 MB if the other container is not using it. How is this possible?
In the case of memory, you give Docker a fixed amount of memory (and swap) in bytes, kilobytes, megabytes, ..., and that amount limits the memory the container can allocate, regardless of whether the host has memory free or whether it is being used by another process.
When limiting memory, it's important to understand how Docker (or cgroups) limits the memory and swap of the container. Since Docker v1.5 (and fixed in v1.6), Docker lets you limit memory and swap independently. Check the Docker documentation for more details about this.
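As an illustration of setting the two limits independently (image name and values are made up):

    # Hard-cap RAM at 512 MB and RAM+swap combined at 1 GB for this container
    docker run -d -m 512m --memory-swap=1g my-image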
