Is there a way to know which container is using which GPU device? - docker

Let's say I have Docker containers A, B, and C running on GPUs 1, 2, and 3.
I can check the GPU process IDs with
nvidia-smi
Sometimes a container keeps holding GPU memory after it is done using it,
so I want to find out which container is running on which GPU and restart it.
I cannot kill the GPU PID because I do not have sudo permission.
Is there a way to know which container is using which GPU PID?
container A -> using GPU 1
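One approach that often works: ask nvidia-smi for the compute PIDs, then map each PID back to its container through the cgroup file. A rough sketch, assuming the container PIDs are visible from the host (<PID> and <container-id> are placeholders for values from the previous step):

nvidia-smi --query-compute-apps=pid,gpu_uuid,used_memory --format=csv
cat /proc/<PID>/cgroup                  # the docker/<container-id> entry names the owning container
docker ps --no-trunc | grep <container-id>
docker restart <container-id>           # no sudo needed if your user is in the docker group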

Related

Using KinD to create a local cluster and the CPU maintains high usage

I'm using KinD to create a local cluster and noticed that the CPU usage stays relatively high, between 40% and 60%, for docker.hyperkit on macOS Catalina 10.15.6. Within Docker for Mac I limited the resources to CPUs: 4 and Memory: 6.00 GB.
My KinD cluster consists of a control-plane node and three worker nodes. Is this CPU usage normal for Docker for Mac? Can I check the utilization per container?
Each kind "node" is a Docker container, so you can inspect those in "normal" ways.
Try running kind create cluster to create a single-node cluster. If you run docker stats you will get CPU, memory, and network utilization information; you can also get the same data through the Docker Desktop application, selecting (whale) > Dashboard. This brings up some high-level statistics on the container. Sitting idle on a freshly created cluster, this seems to be consistently using about 30% CPU for me. (So 40-60% CPU for a control-plane node and three workers sounds believable.)
Similarly, since each "node" is a container, you can docker exec -it kind-control-plane bash to get an interactive debugging shell in a node container. Once you're there, you can run top and similar diagnostic commands. On my single node I see the top processes as kube-apiserver (10%), kube-controller (5%), etcd (5%), and kubelet (5%). Again, that seems reasonably normal, though it might be nice if it used less CPU sitting idle.
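Putting those steps together, an inspection session might look like this (kind-control-plane is the default node-container name for a single-node cluster):

kind create cluster
docker stats --no-stream                # one-shot CPU/memory/network numbers per container
docker exec -it kind-control-plane bash
top                                     # inside the node: kube-apiserver, etcd, kubelet, ...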

How long does it take you to create a container from a 4GB docker image?

I just need some kind of reference for how long it should take to create a container based on a 4 GB Docker image. On my computer it is currently taking more than 60 seconds, which causes docker-compose to time out. Is this normal for a modern workstation with SSD disks and a decent CPU? Why does it take so long?
The Docker context is ~6 MB, so that should not be the issue here, though I know it could be, had the context been larger.
It's running on a Linux host, so it's also not the I/O overhead you pay when running Docker in a VM, as Docker for Mac does.
I just don't understand why it's so slow, whether it's expected for images this large, or whether I should try some other technology instead of Docker (like a virtual machine, Ansible scripts, or whatever).
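One way to narrow it down is to time the create and start steps separately and check which storage driver is in use; a rough sketch (my-4gb-image is a placeholder for your image name):

time docker create my-4gb-image true     # just the container-filesystem setup
time docker run --rm my-4gb-image true   # create + start + run a no-op command
docker info | grep -i 'storage driver'   # overlay2 is usually the fastest option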

Any way to limit a running docker container's net io

I know we can limit CPU and memory using the command
docker container update, but this command has no option for network I/O. Is there any way to achieve this?
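Not with docker container update itself, but a common workaround is traffic shaping with tc inside the container's network namespace. A hedged sketch, assuming root on the host and mycontainer as a placeholder container name:

PID=$(docker inspect -f '{{.State.Pid}}' mycontainer)
nsenter -t "$PID" -n tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms
nsenter -t "$PID" -n tc qdisc del dev eth0 root   # remove the limit again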

Docker container not utilizing cpu properly

A single Docker container works well for a small number of parallel processes, but when we increase the number of parallel processes to 20-30, execution slows down. The processes slow down even though Docker is utilizing only 30-40% of the CPU.
I have tried the following to make Docker utilize the CPU properly and keep the processes from slowing down:
I have explicitly allocated CPU and RAM to the Docker container.
I have also increased the number of file descriptors, the number of processes, and the stack size using ulimit (a sketch of these flags follows the configuration below).
Even after doing these two things the container is still not utilizing the CPU properly. I am using docker exec to start multiple processes in a single running container. Is there an efficient way to use a single Docker container to execute multiple processes, or to make the container use 100% of the CPU?
The configuration I am using is:
Server - AWS EC2 t2.2xlarge (8 cores, 32 GB RAM)
Docker version - 18.09.7
OS - Ubuntu 18.04
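For reference, explicit CPU/RAM allocation and the ulimit changes described above typically look like this (all values are illustrative, and myimage is a placeholder):

docker run -d --cpus=8 --memory=28g \
  --ulimit nofile=65535:65535 --ulimit nproc=65535 --ulimit stack=67108864 \
  myimage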
When you run something on a machine it consumes the following resources: CPU, RAM, disk I/O, and network bandwidth. If your container is exhausting any one of these, the others can still look underutilized, so monitor your system metrics to find the root cause.
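A few commands that cover those four resources (pidstat and iostat come from the sysstat package; iftop is a separate tool):

docker stats --no-stream   # per-container CPU, memory, and network I/O
pidstat 1                  # per-process CPU usage on the host
iostat -x 1                # disk utilization and wait times
iftop                      # live network bandwidth per connection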

How does Docker allocate memory to the process in a container?

Docker first initializes a container and then executes the program you want. I wonder how Docker manages the memory addresses of the container and the program in it.
Docker does not allocate memory; the OS manages the resources used by programs. Docker internally uses cgroups, a kernel feature, to limit what a container may consume, and namespaces to isolate it: the reason ps inside a container won't show the host's processes is that each container runs in its own PID namespace.
Rather than worrying about Docker's memory, you should look at the underlying host (VM/instance) where you are running the container. The number of containers you can run is determined by a number of factors, including what app runs in each container.
See "Is there a maximum number of containers running on a Docker host?" for the limits you can run into.
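To see the cgroup mechanism in action, you can set a memory limit and read it back. A small sketch (nginx is just an example image; the cgroup path shown is for a cgroup v1 host with the default cgroupfs driver):

docker run -d --name memtest -m 512m nginx
docker inspect -f '{{.HostConfig.Memory}}' memtest            # 536870912 bytes
CID=$(docker inspect -f '{{.Id}}' memtest)
cat /sys/fs/cgroup/memory/docker/$CID/memory.limit_in_bytes   # the same value, enforced by the kernel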
