I am developing an application which runs in three containers on my development box, a quad-core machine with hyperthreading, meaning there are 8 logical cores available to the system and Docker.
The CPU allocation for the containers is done by docker-compose as follows:
redis:
  cpu_shares: 1024
mysql:
  cpu_shares: 1024
app:
  cpu_shares: 4096
I am troubled by requests to redis timing out. The load is minimal, but redis is used in short bursts with longer breaks in between, at least in the development environment.
Hence, I assume Docker is not assigning enough CPU shares to the redis container. I have already considered putting a constant artificial load on redis so that Docker assigns more CPU shares to it.
Is there another way of ensuring a certain CPU share for a container?
With Docker for Mac your containers are all running in a HyperKit VM. The VM has an allocation of CPU and memory which is a subset of the total on your Mac.
You can change the allocation in Preferences - by default the Docker VM has 2 CPUs and 2GB RAM.
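Beyond resizing the VM, you can express an explicit allocation in the compose file instead of relying on relative cpu_shares weights: cpuset pins a service to specific cores and cpus caps how much CPU time it may use. A minimal sketch, assuming compose file format 2.2, that the VM/host exposes 8 CPUs, and an illustrative app image name:

version: "2.2"
services:
  redis:
    image: redis
    cpuset: "0"        # redis gets core 0 to itself
  mysql:
    image: mysql
    cpuset: "1-3"
  app:
    image: myapp       # hypothetical image name
    cpuset: "4-7"
    cpus: 4.0          # optional hard cap of four CPUs' worth of time

Because the services are pinned to disjoint cores, redis no longer competes with the app for CPU time, regardless of how the shares weights are set.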
I have a VPS running Ubuntu 20.04 with 8 CPU cores. I'm planning to use Docker in Swarm mode to serve the frontend (Vue), backend (Django) and database (PostgreSQL) through a docker-compose.yml file.
When I execute docker swarm init, Docker starts 3 containers (frontend, backend and database).
Will a single replica of each container utilize all 8 CPU cores? Or should I initiate more replicas to utilize all 8 CPU cores of my VPS?
By default, if you do not set any limits inside the compose file (see resource constraints - https://docs.docker.com/config/containers/resource_constraints/ - or the compose reference - https://docs.docker.com/compose/compose-file/compose-file-v3/#resources), a container has no CPU limit and no memory limit. So a single replica of each service can use all 8 cores and all of the memory, provided the application is able to make use of them.
You would probably still want to look into running more replicas for rolling updates (software updates with little or close to zero downtime).
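If you do decide to constrain the services or run several copies, both the limits and the replica count go under the deploy key when the stack runs in swarm mode. A minimal sketch, assuming compose file format 3.8 and illustrative image names and values:

version: "3.8"
services:
  backend:
    image: my-django-backend   # hypothetical image name
    deploy:
      replicas: 2              # two copies allow rolling updates
      resources:
        limits:
          cpus: "2.0"          # cap each replica at two cores
          memory: 1G
        reservations:
          cpus: "0.5"
          memory: 256M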
Limiting container memory seems to be available under Hyper-V isolation mode.
Is this possible in process isolation mode?
In particular, I want to limit memory on a Windows node of Kubernetes.
However, the physical memory visible inside the container appears to be the host's full memory capacity, even when resources: limits: memory: is set.
For Hyper-V isolation mode the default memory limit is 1GB.
For process isolation mode it is unlimited by default, so essentially the same memory as the host.
You can find more details documented here.
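For illustration, a memory limit can also be passed explicitly when starting the container, whichever isolation mode is used; the image and the 2GB value below are placeholders:

# Hyper-V isolation with an explicit 2GB cap
docker run --isolation=hyperv --memory=2g mcr.microsoft.com/windows/servercore:ltsc2019

# Process isolation with the same cap; tools inside the container
# may still report the host's total memory
docker run --isolation=process --memory=2g mcr.microsoft.com/windows/servercore:ltsc2019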
You can set the limits while using docker-compose, for example:
services:
  mssql:
    image: microsoft/mssql-server-windows-express
    mem_limit: 2048m
would result in 2GB of available memory.
Please let me know if that helped.
A single Docker container works well for a small number of parallel processes, but when we increase the number of parallel processes to 20-30, execution slows down. The processes get slower, yet Docker is utilizing only 30-40% of the CPU.
I have tried the following things to make Docker utilize the CPU properly and keep the processes from slowing down:
I have explicitly allocated CPU and RAM to the Docker container.
I have also increased the number of file descriptors, the number of processes and the stack size using ulimit.
Even after doing these two things, the container is still not utilizing the CPU properly. I am using docker exec to start multiple processes in a single running container. Is there an efficient way to use a single Docker container for executing multiple processes, or to make the container use 100% of the CPU?
The configuration I am using is:
Server - AWS EC2 t2.2xlarge (8 cores, 32 GB RAM)
Docker version - 18.09.7
OS - Ubuntu 18.04
When you run something on a machine it consumes the following resources: 1. CPU, 2. RAM, 3. disk I/O, 4. network bandwidth. If your container is exhausting any one of the resources listed above, the others may still appear to have capacity to spare. So monitor your system metrics to find the root cause.
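A couple of stock commands are enough for that kind of monitoring; the container name below is a placeholder:

# Live CPU, memory, network and block I/O usage per container
docker stats

# Processes running inside one container (replace the placeholder name)
docker top mycontainer

# Host-wide view; high wa (I/O wait) with idle CPU points at disk or network
top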
CPU usage on our metrics box is intermittently at 100%, causing 'Internal server error' when rendering Grafana dashboards.
The only application running on our machine is Docker with 3 containers:
cadvisor
graphite
grafana
Machine spec
OS Version: Ubuntu 16.04 LTS
Release: 16.04 (xenial)
Kernel Version: 4.4.0-103-generic
Docker Version: 17.09.0-ce
CPU: 4 cores
Memory: 4096 MB
Memory reservation: unlimited
Network adapter: mgnt
Storage
  Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
Memory swap limit: 2.00GB
Here is what a snippet from cAdvisor shows: the kworker and ksoftirqd processes change status constantly from 'D' to 'R' to 'S'.
Are the machine specs correct for this setup?
How can I get the CPU usage to 'normal' levels?
By default, a Docker container (just like any process on the host) has access to all memory and CPU resources of the machine.
Docker provides options to limit a container's resource consumption. Check the following doc dedicated to Limiting a container's resources.
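For illustration, the most common of those options can be set when the containers are started; the values below are placeholders, not a recommendation for this particular box:

# Cap the Grafana container at one CPU's worth of time and 1GB of RAM
docker run --cpus=1 --memory=1g --memory-swap=1g grafana/grafana

The same flags (or mem_limit / cpus in a compose file) apply to the cadvisor and graphite containers.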
We have a CentOS machine running Docker with a couple of containers. When running top, I see the dockerd process, which sometimes uses a lot of CPU. Does this CPU utilization include the CPU usage inside the containers?