How to increase Docker memory limits with Rancher Desktop on macOS?

I'm using Rancher Desktop for Docker on macOS and trying to run multiple containers with docker-compose. When I run a couple of them, everything is OK, but when I run all of them, at least one container always ends up with an OOM error (usually it's Cassandra, but I think it just depends on startup order and memory consumption).
docker stats shows me that the memory limit is 2.9 GB. If I understand correctly, that 2.9 GB is shared by all containers, and it's not enough for all of mine.
I've tried increasing the virtual machine memory in Rancher Desktop's settings and gave it 17 GB, but I still have only a 2.9 GB Docker limit.
How can I increase the memory available to Docker with Rancher Desktop?

I think your solution is given in this issue:
https://github.com/rancher-sandbox/rancher-desktop/issues/2855
On arm64 machines you will need to upgrade to macOS Monterey 12.4 or later to use more than 3 GiB of memory for the virtual machine.
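Once you're on 12.4 or later and have raised the VM memory in Rancher Desktop's preferences, you can verify the new ceiling from the CLI, for example:
docker info --format '{{.MemTotal}}'   # total memory visible to the engine, in bytes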

Related

What's the fastest/best way to run a Windows docker container on Windows?

I have a fairly large Windows docker image + container (it has MSVS and lots of tools, based on Windows Server 2022). It runs quite slowly even on my fast 16-core Threadripper Windows 11 desktop; it seems hugely disk-bound, as well as taking over 50 GB of disk space (and it eats more the longer I use it). The host machine has WSL2 and Docker Desktop (with the WSL2 back-end enabled), and Hyper-V is enabled. The container is self-contained; it doesn't bind-mount any volumes from the host.
Looking at Task Manager, the C disk is pinned at 100% active time with very slow response rates; that's never good. Using procmon I see most of the disk accesses are from "vmmem" and "docker-index", and my c:\ProgramData\Docker\windowsfilter dir fills up pretty often. And I never get more than 1 or 2 CPUs worth of compute, even though I've allocated 8 CPUs to the container (probably just because it's so disk-bound).
I've read various things about how to speed up docker containers on Windows, but since I'm not 100% clear on the underlying architecture (is dockerd running in a VM? What about docker-index? The container itself? What's the filesystem driver in the container?) I'm not sure how to go about speeding it up.
Should I remove Docker Desktop and move to "plain" Windows docker as in https://lippertmarkus.com/2021/09/04/containers-without-docker-desktop/? I don't care about the desktop GUI; I only use the CLI anyway (docker build, docker-compose up, etc.).
Should I run Docker from within WSL? Would that even work with a Windows image/container?
Should I get a clean machine and only run the docker image on it?
Any other ideas?
The fastest way is (a concrete sketch of these steps follows below):
Install a Linux distro;
Boot into the Linux OS;
Install Docker (https://docs.docker.com/engine/install/ubuntu/);
Bring your container up with docker build or docker-compose up.
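As a sketch of those steps on Ubuntu (this uses Docker's convenience script rather than the apt repository steps in the linked docs, and assumes your image can be rebuilt for Linux, since a Windows Server base image won't run on a Linux host):
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh                       # Docker's convenience install script
sudo usermod -aG docker $USER               # optional: run docker without sudo (re-login required)
docker compose up -d                        # from your project directory; or: docker-compose up -d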

Why is my docker container using high cpu, but my docker host is barely utilized?

I'm trying to understand the relationship between docker containers and their host machines. My setup is as follows:
Hypervisor: Proxmox (4x 10 core Xeon, 80 threads total)
Docker Host: LXC on Proxmox, 40 cores allocated
Docker Host OS: Ubuntu 22.10
What I'm seeing:
I have ~16 containers running within Docker. Most are utilizing a fraction of a percent of a CPU as reported by docker stats. One in particular is hovering around 100% utilization, sometimes spiking well above 100%.
When I look at the CPU utilization on the host LXC container, it's around 96% idle. I'm confused as to why the Docker container is running so 'hot' and not using more of the available hardware. I've found a lot of documentation around setting limits, but not the opposite, which I'd expect to be the default behavior.
Seeing as the CPU is allowed to burst past 100%, I'm not seeing any performance issues, but that 100% hovering on my monitoring charts is bothering me :)
Any ideas for how to remediate this, or should I just leave it as-is?
Note that docker stats reports CPU as a percentage of a single core, so a container hovering around 100% is using roughly one of your 40 cores, which is why the host still looks nearly idle. If you want to cap it anyway, you can limit the CPU use of the Docker container. Use the following flag with the docker command:
--cpus="1.0"
Example:
docker run --cpus="1.0" --name my_container <docker image name>
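If the container is already running, docker update can change the cap in place; a small sketch (the container name and value are just examples):
docker update --cpus="2.0" my_container     # raise the limit to two cores without recreating the container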

What exactly does the Docker Desktop memory limit apply to?

The Docker Desktop for Mac docs state
By default, Docker Desktop is set to use 2 GB runtime memory, allocated from the total available memory on your Mac. To increase the RAM, set this to a higher number. To decrease it, lower the number.
https://docs.docker.com/desktop/mac/
I am unclear what this refers to. Is it the memory available to the desktop software, to the docker virtual machine, to a running container, or to something else?
Docker Desktop for Mac runs a hidden Linux VM, and this setting controls the memory allocation for that VM. So this is the total memory available to all containers combined. Options like docker run -m can still set a per-container memory limit.
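As an illustration of the two levels, here is a hedged sketch (image, container name, and values are placeholders):
docker run -d -m 512m --name capped nginx                  # per-container cap, enforced inside the VM
docker inspect capped --format '{{.HostConfig.Memory}}'    # prints 536870912 (bytes)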

Is there a way to update Docker "Resources" settings from the command line on an EC2 instance?

I'm attempting to increase the memory allocation of a specific container I'm running on an EC2 instance. I was able to do this locally by adding mem_limit: 4GB to my docker-compose file (using version 2, not 3), and it did not work until I raised the memory setting in Docker Desktop above the limit I was specifying.
My question is as follows: is it possible to change this memory slider setting from the command line, and would it therefore be possible to do it on an EC2 instance without Docker Desktop? I've been through the docs but was unable to find anything specific to this!
That's a Docker Desktop setting, which is only necessary because of the way Docker containers run in a VM on Windows and Mac computers. On an EC2 Linux server there is no limit like that; Docker processes can use as many resources as the server has available.
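In other words, on the EC2 instance the mem_limit from your compose file is enforced directly by the kernel's cgroups, with no slider to raise first. A minimal sketch of a version 2 compose file (service and image names are placeholders):
version: "2.4"
services:
  app:
    image: my-image
    mem_limit: 4g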

Docker using less memory than the allowed amount specified in the GUI

I am using a Docker Linux container on a Windows 10 machine. I have the following options specified in the Advanced tab of the Docker GUI.
Yet my Docker container is only using 14 GiB of memory.
I want Docker to be able to use all the memory it can after leaving a safe amount for Windows processes. I won't be using the RAM for anything other than Docker and whatever Windows needs to run.