VirtualBox & Vagrant keep consuming memory constantly

I am running a Windows 2012 R2 Vagrant box, using VirtualBox as the provider, on an Ubuntu 14.04.1 host (kernel 3.16.0-40-generic) with 32 GB of physical memory. For some reason, VirtualBox and/or Vagrant are completely eating up all of my machine's memory, and I want to understand why and fix the problem.
When I first start the Vagrant box, I have 92% free memory, with 0% cached.
By the time it has booted and has its hostname set, it's more like 10% free, with 32% cached (if I shut the VM down and snapshot it after it first boots, the memory only goes down to about 35% free).
After it's booted, it continues to consume/cache more and more memory until there is 0% free.
My Vagrant boxes are configured as follows:
vb.customize ["modifyvm", :id, "--memory", "4096"]
vb.customize ["modifyvm", :id, "--vram", "64"]
vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
vb.customize ["modifyvm", :id, "--autostart-enabled", "off"]
vb.customize ["modifyvm", :id, "--cpuexecutioncap", "40"]
It seems to me that there should still be plenty of unused RAM on my host. In fact, I should easily be able to run three of these at the same time without all of the host memory getting consumed, right?
(Incidentally, if I shut all the VMs down, I don't get much memory back, whereas if I "vagrant destroy" them all, all of my host machine's memory is instantly restored. I don't know if that helps identify the problem, but I figure it can't hurt.)
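For reference, this is roughly how I'm measuring "free" vs "cached" on the host (standard procps tools; the VBoxHeadless processes backing the VMs should show up near the top of the RSS-sorted list):
# Overall free vs cached memory on the host
free -m

# Top memory consumers by resident set size; each running VirtualBox VM appears as a VBoxHeadless process
ps -eo pid,rss,comm --sort=-rss | head -n 10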
Anyone have any ideas? Thanks in advance for any help on this one!

Related

Docker RAM Utilization

I am currently experiencing some sort of memory leak with Docker Desktop. I am running one container at the moment, but the 'Virtual Machine Service' is currently sitting at around 8 GB, and it doesn't change after restarting my PC.
Rebooting the machine did not fix the issue.
I doubt that it is a memory leak. In the Settings > Resources > Memory section of Docker Desktop, you will probably see that 8 GB is allocated, and you can change this value. Mine was also 8 GB; I am guessing Docker allocates half of the available memory (16 GB total in my case), but I could not find this written in the configuration docs.
Hope that helps ;)
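If you want to confirm that the 8 GB belongs to the Docker Desktop VM rather than to your single container, a quick check (output values will of course differ):
# Total memory visible to the Docker daemon, i.e. the VM's allocation (in bytes)
docker info --format '{{.MemTotal}}'

# Actual memory usage of the running container(s)
docker stats --no-stream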

Containers: high CPU usage in %soft (soft IRQ) for network-intensive workloads

I'm trying to debug some performance issues on a RHEL8.3 server.
The server is actually a Kubernetes worker node and hosts several Redis containers (pods).
These containers are doing a lot of network I/O (iptraf-ng reports about 500 kPPS and 1.5Gbps).
The server is a high-end Dell server with 104 CPUs and 10 Gbps NICs.
The issue I'm trying to debug is related to soft IRQs. In short: despite my attempts to set the IRQ affinity of the NIC to a specific range of dedicated CPUs, the "mpstat" utility still reports a lot of CPU time spent in "%soft" on all the CPUs where the "redis-server" process is running (even though redis-server has been moved, using taskset, to a non-overlapping range of dedicated CPU cores).
For more details consider the attached screenshot redis_server_and_mpstat:
the "redis-server" with PID 3592506 can run only on CPU 80 (taskset -pc 3592506 returns 80 only)
as can be seen from the "mpstat" output, it's running close to 100%, with 25-28% of the time spent in "%soft"
In an attempt to address this problem, I've been using the Mellanox IRQ affinity script (https://github.com/Mellanox/mlnx-tools/blob/master/ofed_scripts/set_irq_affinity.sh) to "move" all IRQs related to the NICs onto a separate set of CPUs (namely CPUs 1,3,5,7,9,11,13,15,17, which belong to NUMA node 1) for both NICs (eno1np0, eno2np1) that compose the "bond0" bonded interface used by the server; see the screenshot set_irq_affinity. Moreover, the "irqbalance" daemon has been stopped and disabled.
The result is that mpstat now reports significant CPU usage in "%soft" on CPUs 1,3,5,7,9,11,13,15,17, but at the same time redis-server is still spending 25-28% of its time in the "%soft" column (i.e. nothing has changed for redis-server).
This pattern repeats for all instances of "redis-server" running on that server (there is more than one), while the CPUs that have no redis-server scheduled on them are 100% idle.
Finally, in a different environment based on RHEL 7.9 (kernel 3.10.0) with a non-containerized deployment of Redis, I see that when I run the "set_irq_affinity.sh" script to move IRQs away from the Redis CPUs, the Redis "%soft" column drops to zero.
Can you help me understand why, when running Redis in a Kubernetes container (with kernel 4.18.0), the redis-server process continues to spend a significant amount of time in %soft handling, despite the NIC IRQs being pinned to different CPUs?
Is it possible that the time the redis-server process spends in "soft IRQ" handling is due to the veth virtual ethernet device created by the containerization layer (in this case the Kubernetes CNI is Flannel, using all default settings)?
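For completeness, these are the kinds of checks I can run to see where the NET_RX work is actually landing (the veth name below is a placeholder; I would first have to map the Redis pod's interface to its host-side veth):
# Per-CPU softirq counters; watch whether NET_RX keeps growing on CPU 80
grep -E 'CPU|NET_RX' /proc/softirqs

# Per-CPU utilization for the Redis CPU only (1-second samples, 5 reports)
mpstat -P 80 1 5

# RPS CPU mask on the host-side veth of the Redis pod (placeholder device name)
cat /sys/class/net/vethXXXX/queues/rx-0/rps_cpus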
Thanks

Docker containers' memory consumption never decreases (or does it?)

We're running several containers on a single Docker host, mainly to run R and Python apps for data analysis. So when I load a big table into one of the containers, its memory footprint on the Docker host increases.
However, when I close the Jupyter Notebook or R session, the container's memory footprint appears to remain unchanged on the host. It seems that the memory consumption of a docker container can only go up, and not down.
So I know that Linux in general occupies memory that is not needed by other applications (stuff gets cached). However, how is this dealt with in the case of Docker containers? From the individual container's perspective there is a lot of memory (we don't want to limit the memory available to containers), and even if it is not needed inside this particular container, it would remain "occupied" in the container and therefore inaccessible to other containers. And the host doesn't know whether this memory is really needed or simply used for caching.
So how is this dealt with? I can imagine a situation where several people have started containers in which they have loaded or generated big data sets, but this was only temporary, and now the host's memory is all occupied because the memory is not freed.
I'm pretty sure that this is not how it works, so can someone explain this to me, please?
Many thanks,
Enno
In the Docker documentation, under resource constraints, there is an explanation of how to limit the memory available to containers. When a container is running, its memory is not freed by the host based on what the processes inside the container are doing. The docs explain how the host system manages memory:
It is important not to allow a running container to consume too much of the host machine’s memory. On Linux hosts, if the kernel detects that there is not enough memory to perform important system functions, it throws an OOME, or Out Of Memory Exception, and starts killing processes to free up memory. Any process is subject to killing, including Docker and other important applications. This can effectively bring the entire system down if the wrong process is killed.
Docker attempts to mitigate these risks by adjusting the OOM priority on the Docker daemon so that it is less likely to be killed than other processes on the system...
Docker containers can use memory, but the Docker daemon takes steps to prevent them from crashing the host system. The memory allotted to Docker containers can also be limited:
Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory, or soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as when the kernel detects low memory or contention on the host machine.
You mentioned that you do not want to limit the containers' memory, but there are options to do so, such as --memory=<value>, --memory-swap, and --memory-reservation. So no, the host cannot free up the memory of a running container, but it does mitigate the risk of all memory being occupied and the kernel potentially killing a crucial system process.
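If you ever do decide to cap a container, a minimal example of those flags (the image name and values here are arbitrary):
# Hard limit of 2 GB, no additional swap, and a 1 GB soft reservation under memory pressure
docker run -d --memory=2g --memory-swap=2g --memory-reservation=1g my-analysis-image

# Watch per-container usage against the configured limit
docker stats --no-stream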
Please excuse the formatting. Hope this helps; I also linked the related documentation. Also, not completely related, but maybe you can check this out about using a Java application in a container:
Why the docker container memory usage doesn't decrease?

Can docker share memory and CPU between containers as needed?

If I am running multiple Docker containers with bursty memory and CPU utilization, will they be able to use the full capacity of the host machine? Or will they be limited to the CPU and memory limits set in the individual container definitions?
For example:
Suppose I were running 3 containers that each burst to 1 GB of memory once per day, at disjoint times.
And similarly, suppose those same containers were instead CPU heavy and burst to 1 CPU unit once per day, at disjoint times.
Could I run those 3 containers on a box with only 1.1 GB of memory, or 1.1 CPU units, respectively?
Docker containers are not VMs.
They run in a cage on top of the host OS kernel, so there is no hypervisor magic behind them.
Processes running inside a container are not much different from host processes from a kernel point of view. They are just highly isolated.
Memory and CPU scheduling are handled by the "host". What you set in the Docker settings are CPU shares, to give priority and bounds to certain containers.
So yes, containers with sleeping processes won't consume much CPU/memory, provided the memory they used is correctly freed after the processing spike; otherwise, that memory will eventually be swapped out, without much performance impact.
Instantiating a Docker container only consumes memory resources; as long as no process is running in it, you will see zero CPU usage from it.
I would recommend reviewing the cgroups documentation, specifically the cgroups v2 docs, since they are better structured than the v1 docs. See chapter 5 for the CPU and memory controllers: https://www.kernel.org/doc/Documentation/cgroup-v2.txt
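To see this for yourself, you can look at a container's main process from the host side; the container name here is just an example:
# Host PID of the container's init process (container name is an example)
PID=$(docker inspect --format '{{.State.Pid}}' my-container)

# It shows up as an ordinary process on the host...
ps -o pid,ppid,rss,comm -p "$PID"

# ...just placed in its own cgroup
cat /proc/"$PID"/cgroup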
If you don't explicitly specify the --memory and --cpu-shares options at container startup, the container will have all of the host's CPU share and memory available to it. If no other process is consuming the resources, then the container can use all the CPU and memory available.
In theory you should be able to run the 3 containers on the instance.
Make sure none of the containers tie up the memory or CPU resources.
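You can confirm that an unconstrained container really has no limits set; for example (the image and container name are just illustrative):
# Start a container with no --memory / --cpu-shares flags
docker run -d --name burst-test alpine sleep 300

# 0 means "no limit set" for both fields
docker inspect --format 'Memory={{.HostConfig.Memory}} CpuShares={{.HostConfig.CpuShares}}' burst-test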

Docker reserve a certain amount of memory for container

I'm running npm inside a Docker container and every so often it aborts because it cannot allocate enough memory. I see some flags like --memory (How do I set resources allocated to a container using docker?) for the docker run command that seem to limit the maximum amount of memory that a container can consume, but I haven't seen anything yet that would allow me to reserve an amount of memory for the container and abort immediately if it cannot be allocated.
This is not how memory management works under Linux.
If you run full virtualization, like QEMU, then all of the memory can be allocated and passed down into the VM. That VM then boots its own kernel, and the memory is managed by the kernel inside the VM.
In Docker, or any other container/namespace system, the memory is managed by the same kernel that runs Docker and the "containers". A process run in a container still runs like a normal process, just in a different cgroup. Each cgroup has limits, like how much memory the kernel will hand out to userland or which network interfaces it sees, but the process still runs on the same kernel.
One analogy is that Docker is a "glorified ulimit". Processes under this limit still behave as normal Linux processes:
they allocate memory as needed
they will cause OOM issues if they exceed some limit, or if the host runs out of memory
And just like you can't pre-allocate memory for Firefox, you can't pre-allocate memory for a Docker container.
You can't reserve memory in docker, only limit it with --memory.
See: https://docs.docker.com/engine/reference/run/ for more detail.
Specifically look at the user memory constraints section.
memory=inf, memory-swap=inf (default): There is no memory limit for the container. The container can use as much memory as needed.
Note that this is the default, so like any other process on the system, npm will use all the memory it can get / needs.
So either free up some memory or add more.
As others have said, you cannot reserve memory for processes, and therefore not for containers either. However, you could call the node app from a script that checks the available memory and exits if it is below a certain threshold.
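A minimal sketch of such a wrapper, assuming a Linux host where /proc/meminfo exposes MemAvailable, and an arbitrary 512 MB threshold:
#!/bin/sh
# Abort early if the host has less than THRESHOLD_KB of available memory.
THRESHOLD_KB=$((512 * 1024))

available_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)

if [ "$available_kb" -lt "$THRESHOLD_KB" ]; then
    echo "Not enough available memory (${available_kb} kB), aborting." >&2
    exit 1
fi

# Otherwise hand off to the real workload
exec npm start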
