I've seen that on Windows and Mac it's very easy to change the RAM containers are given - you just go into the GUI. But how do you do this on Linux, where it's a CLI instead of a GUI?
The Docker docs mention an -m flag, but that flag gives no confirmation (it just prints the entire help output again), so I don't know whether it worked. It also seems to be per-container, whereas I'd like to change the global default.
Lastly, is there a way to check the current default RAM, so I can make sure whatever I do in the end actually worked?
On native Linux, Docker can use all available host memory. Containers are ordinary processes isolated with lightweight kernel mechanisms (namespaces and cgroups), so by default they share host resources like CPU cores, memory, and (on modern installations) disk space. There is no global control or setting to limit or increase this.
On other platforms Docker runs a hidden Linux VM to be able to run a Linux kernel to use these isolation mechanisms, and the Docker Desktop memory control affects the memory allocation for that VM.
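If what you actually need is a cap on a single container, you can pass one at run time and verify it from inside; a minimal sketch, assuming a cgroup v1 host (the image name and limit are arbitrary examples):
# Start a container capped at 512 MiB of RAM
docker run -it --memory=512m ubuntu /bin/bash
# Inside the container, cgroup v1 exposes the effective limit here:
cat /sys/fs/cgroup/memory/memory.limit_in_bytes
# With no --memory flag this reads as an enormous number, i.e. effectively all host RAM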
This is how I check a Docker container's memory:
Open a Linux command shell and:
Step 1: Check what containers are running.
docker ps
Step 2: Note down the 'CONTAINER ID' of the container you want to check and issue the following command:
docker container stats <containerID>
eg:
docker container stats c981
This will give an output like:
CONTAINER ID   NAME       CPU %   MEM USAGE / LIMIT     MEM %   NET I/O       BLOCK I/O   PIDS
c981c9482284   registry   0.00%   4.219MiB / 1.944GiB   0.21%   9.66kB / 0B   0B / 0B     14
The 'MEM USAGE / LIMIT' column shows the container's actual memory usage and its memory limit (with no explicit limit set, this defaults to the total memory available to Docker).
Note: press Ctrl+C to leave the view and return to the command prompt.
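Alternatively, docker inspect can print the configured limit directly; a quick sketch using the container ID from the example above:
docker inspect -f '{{.HostConfig.Memory}}' c981
# Prints the configured memory limit in bytes; 0 means no limit was set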
Hey guys, I am running WSL 2 with a Docker container on it, but the container itself (it's a KVM container running a QEMU VM) is limited to 4 GB. I need a lot more than that: at least 8 GB to run the things I want to run in the QEMU VM (there's a reason why I'm running QEMU, and no, I cannot go without it).
I am running Docker Desktop, and if I inspect the Docker container it says the following.
I have edited the .wslconfig file and set the limit to 20GB, as well as setting the swap file to 1. I have also tried a command along the lines of docker run -it --memory 8000m insert_docker_name_here,
but that then says it can't find the Docker container for some reason, even though it's there under Docker Desktop.
I have tried looking on the internet for an answer, but everything seems to point to the .wslconfig file, --memory, or some vague answer that doesn't help at all. Is there a way I can edit my Docker container file and set it to use 8GB or more?
Please help - I am new to Docker and would appreciate the assistance.
There is no memory limit on Docker containers by default. Using --memory only specifies the upper limit of memory the container can use. You need to examine how the container is started and remove any limit there.
WSL 2 also has no memory limit by default and will simply grab as much memory as it needs. The value in .wslconfig is likewise an upper limit. If you just remove all the limits, it should use all available memory.
That leaves QEMU itself. Have you checked what the guest RAM size is (the -m parameter on the QEMU command line)?
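For reference, a minimal sketch of the two places a limit can hide (the values are arbitrary examples; .wslconfig lives in your Windows user profile directory):
# %UserProfile%\.wslconfig
[wsl2]
memory=16GB   # upper limit for the whole WSL 2 VM
swap=8GB
# The QEMU guest's RAM is set with -m, e.g. an 8 GB guest:
qemu-system-x86_64 -enable-kvm -m 8G ...   # remaining arguments omitted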
OK, so my title may not actually be linked to a possible solution; however, this is my problem.
I am running a Python 3 Jupyter notebook inside a Docker container on my Windows 10 Kaby Lake laptop (2 physical cores, 4 logical cores).
I noticed that while doing heavy computing there, the CPU usage shown in Task Manager is very low (~15%).
Looking at the details for each process, VBoxHeadless.exe actually uses 24% of the processor, which matches the docker stats command's reading of 97-100% CPU usage, and therefore makes sense from a single-core point of view.
My actual issue is that even though one thread is saturated in terms of CPU time, Windows (I guess) does not decide that it would be useful to raise the clock speed, so the CPU runs at 1.7GHz (with other apps in high-performance mode, I usually hit the maximum 3.5GHz the machine is capable of).
So how can I induce the higher clock speeds (nominal 2.7GHz, or the 3.5GHz maximum), considering they would probably double my single-threaded speed, either from Docker itself or inside Windows 10?
You need to configure the docker-machine VM that runs Docker. If you haven't created a custom one, the default machine, named 'default', only has access to one CPU.
You can check all the configuration for this docker-machine by running:
docker-machine inspect default
You need to purge this default machine and recreate it:
docker-machine rm default
docker-machine create -d virtualbox --virtualbox-disk-size "400000" --virtualbox-cpu-count "2" --virtualbox-memory "2048" default
You can check all the available configuration options for the machine by running:
docker-machine create --help
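After recreating the machine, you can confirm it actually sees both CPUs; a quick check (counting processor entries works in any shell the VM provides):
docker-machine ssh default grep -c processor /proc/cpuinfo
# Should print 2 for the machine created above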
Defining CPU shares can help you, but not exactly in the way you might expect.
CPU limits are based on shares: a share is a relative weight deciding how much processing time one process gets compared to another. If the CPU is idle, a process can use all the available resources; only when a second process requires the CPU is the available CPU time divided according to the weights.
e.g. The --cpu-shares parameter sets a relative weight (the default is 1024). If one container defines a share of 768 while another defines a share of 256, then when both contend for the same CPU the first gets 768/(768+256) = 75% of the CPU time and the second gets 25%.
In the example below, the first container will be allowed 75% of the share; the second container will be limited to 25%.
# Pin both containers to CPU 0 so they contend for the same core
docker run -d --name p1 --cpuset-cpus 0 --cpu-shares 768 image_name
docker run -d --name p2 --cpuset-cpus 0 --cpu-shares 256 image_name
sleep 5
# Take a single snapshot of CPU usage for both containers
docker stats --no-stream
docker rm -f p1 p2
It's important to note that a process can have 100% of the CPU, regardless of its defined weight, if no other processes are running.
I need a Docker container with 6GB of RAM.
I tried this command:
docker run -p 5311:5311 --memory=6g my-linux
But it doesn't seem to work: I logged into the container and checked the amount of memory available. The output shows only about 2GB:
>> cat /proc/meminfo
MemTotal: 2046768 kB
MemFree: 1747120 kB
MemAvailable: 1694424 kB
I tried setting Preferences -> Advanced in the Docker application.
If I set 6GB there, it works... I mean, I get a container with 6GB of MemTotal.
But that way all my containers get 6GB...
I was wondering how to allocate 6GB of memory for just one container, using some command or a setting in the Dockerfile. Any help?
Don't rely on /proc/meminfo for tracking memory usage from inside a Docker container. /proc/meminfo is not containerized, which means the file displays the meminfo of your host system.
Your /proc/meminfo indicates that your Host system has 2G of memory available. The only way you'll be able to make 6G available in your container without getting more physical memory is to create a swap partition.
Once you have a swap partition larger or equal to ~4G, your container will be able to use that memory (by default, docker imposes no limitation to running containers).
If you want to explicitly limit the amount of memory available to your container to 6G, you could do docker run -p 5311:5311 --memory=2g --memory-swap=6g my-linux, which means that out of a total memory limit of 6G (--memory-swap), up to 2G may be physical memory (--memory). More information about this here.
There is no way to set memory limits in the Dockerfile that I know of (and I think there shouldn't be: Dockerfiles are there for building containers, not running them), but docker-compose supports the above options through the mem_limit and memswap_limit keys.
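For example, a minimal docker-compose.yml sketch using those keys (Compose file format 2; the service and image names are taken from the question, and the values mirror the docker run command above):
version: "2"
services:
  my-linux:
    image: my-linux
    ports:
      - "5311:5311"
    mem_limit: 2g       # cap on physical memory
    memswap_limit: 6g   # cap on memory + swap combined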
I am a newbie to the Docker world. I could successfully build and run a container with Tomcat, but the performance is very poor. I logged into the running system and found that only 2 CPU cores and 4 GB of RAM are allocated. Is that a reason for the bad performance, and if so, how can I allocate more resources?
I tried the following command, but no luck:
docker run --rm -c 3 -p 32772:8080 --memory=8Gb -d helloworld
Any pointer will be helpful. Thanks in advance.
Do you use Docker for Windows/Mac? Then you can change it in the settings (via the Docker icon in the taskbar).
On Windows, Docker runs in Hyper-V without dynamic memory, so the memory will not be available to your host system even when it isn't used.
With docker info you can find out how many resources are available.
The bad performance may also be caused by very slow file access on Docker for Mac.
On Linux, Docker has no upper limit by default.
The cpu and memory arguments of docker run limit the resources for a single container; if they are not set, there is no upper limit.
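So on Linux you raise limits per container rather than globally; a minimal sketch, assuming a reasonably recent Docker (the image name is the one from the question, the values are examples):
# Give this container up to 4 CPUs and 8 GB of RAM
docker run --rm --cpus=4 --memory=8g -p 32772:8080 -d helloworld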
I have a physical host machine with Ubuntu 14.04 running on it. It has 100G disk and 100M network bandwidth. I installed Docker and launched 10 containers. I would like to limit each container to a maximum of 10G disk and 10M network bandwidth.
After going though the official documents and searching on the Internet, I still can't find a way to allocate specified size disk and network bandwidth to a container.
I think this may not be possible in Docker directly; maybe we need to bypass Docker. Does this mean we should use something "underlying", such as LXC or cgroups? Can anyone give some suggestions?
Edit:
@Mbarthelemy, your suggestion seems to work, but I still have some questions about disk:
1) Is it possible to allocate other size (such as 20G, 30G etc) to each container? You said it is hardcoded in Docker so it seems impossible.
2) I use the command below to start the Docker daemon and container:
docker -d -s devicemapper
docker run -i -t training/webapp /bin/bash
then I use df -h to view the disk usage, and it gives the following output:
Filesystem                  Size  Used  Avail  Use%  Mounted on
/dev/mapper/docker-longid   9.8G  276M  9.0G   3%    /
/dev/mapper/Chris--vg-root  27G   5.5G  20G    22%   /etc/hosts
From the above I think the maximum disk a container can use is still larger than 10G. What do you think?
I don't think this is possible right now using Docker default settings. Here's what I would try.
About disk usage: You could tell Docker to use the DeviceMapper storage backend instead of AuFS. This way each container would run on a block device (Devicemapper dm-thin target) limited to 10GB (this is a Docker default, luckily enough it matches your requirement!).
According to this link, it looks like the latest versions of Docker now accept advanced storage backend options. Using the devicemapper backend, you can now change the default container rootfs size with the --storage-opt dm.basesize=20G option (which would be applied to any newly created container).
To change the storage backend: use the --storage-driver=devicemapper Docker option. Note that your previous containers won't be seen by Docker anymore after the change.
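Put together, the daemon invocation would look something like this (a sketch using the old-style docker -d daemon syntax from the question; newer releases use dockerd instead):
# Start the daemon with the devicemapper backend and a 20G base size per container
docker -d --storage-driver=devicemapper --storage-opt dm.basesize=20G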
About network bandwidth: you could tell Docker to use LXC under the hood: use the -e lxc option.
Then, create your containers with a custom LXC directive to put them into a traffic class :
docker run --lxc-conf="lxc.cgroup.net_cls.classid = 0x00100001" your/image /bin/stuff
Check the official documentation about how to apply bandwidth limits to this class.
I've never tried this myself (my setup uses a custom Open vSwitch bridge and VLANs for networking, so bandwidth limitation is different and somewhat easier), but I think you'll have to create and configure a different class.
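For completeness, a rough sketch of what the class setup on the host interface might look like (untested; classid 0x00100001 corresponds to HTB class 10:1, and the device name and rate are arbitrary examples):
# Root HTB qdisc with handle 10:, matching the 0x0010 major of the classid
tc qdisc add dev eth0 root handle 10: htb
# Class 10:1 (the 0x0001 minor) capped at 10 Mbit/s
tc class add dev eth0 parent 10: classid 10:1 htb rate 10mbit
# Classify packets into classes based on their net_cls cgroup classid
tc filter add dev eth0 parent 10: protocol ip handle 1: cgroup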
Note: the --storage-driver=devicemapper and -e lxc options are for the Docker daemon, not for the Docker client you're using when running docker run ...
Newer releases have --device-read-bps and --device-write-bps.
You can use:
docker run --device-read-bps=/dev/sda:10mb <image>
More info here:
https://blog.docker.com/2016/02/docker-1-10/
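For instance, a sketch combining both flags with a quick verification from inside the container (the device path, rates, and image are arbitrary examples):
# Cap reads and writes on /dev/sda to 1 MB/s for this container
docker run -it --device-read-bps=/dev/sda:1mb --device-write-bps=/dev/sda:1mb ubuntu /bin/bash
# Inside the container, direct I/O should now be throttled to ~1 MB/s:
dd if=/dev/zero of=/tmp/test bs=1M count=10 oflag=direct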
If you have access to the containers you can use tc for bandwidth control within them.
e.g. in your entrypoint script you can add:
tc qdisc add dev eth0 root tbf rate 240kbit burst 300kbit latency 50ms
to get a rate of 240kbit/s with a 300kbit burst and 50ms of latency.
You also need to pass --cap-add=NET_ADMIN to the docker run command, since containers don't get that capability by default.
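If you later need to adjust or remove the limit from inside the container, a sketch using the same interface as above:
# Replace the existing shaping with a new rate
tc qdisc change dev eth0 root tbf rate 480kbit burst 300kbit latency 50ms
# Or remove the shaping entirely
tc qdisc del dev eth0 root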
1) Is it possible to allocate other size (such as 20G, 30G etc) to each container? You said it is hardcoded in Docker so it seems impossible.
To answer this question, please refer to Resizing Docker containers with the Device Mapper plugin.