Using multiple processors in a Docker container

I have a Docker container on a CentOS 7 server. This container needs as much processor power as possible to run its calculations faster. The host server has four processors.
But when we look at the host server's statistics while the code is running, the Docker container uses only 25% of the host's resources.
And when we check the container's CPU usage with the docker stats command, it shows 100% in use.
I made changes with --cpuset-cpus so the container would use three processors (0, 1 and 2), but nothing changed.
Where is the problem?
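For reference, a minimal sketch of the relevant flags (the image name here is hypothetical):
# Pin the container to cores 0-2 and cap it at 3 CPUs' worth of time.
docker run --cpuset-cpus="0-2" --cpus="3" my-calc-image
# Note: docker stats reports 100% when one core is saturated, so a
# single-threaded calculation will show ~25% host usage on a 4-CPU box
# no matter how many cores the container is allowed to use.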

Related

What are the problems with running Docker containers where the Linux kernel version of the base image differs from the OS kernel version? [duplicate]

How can Docker on a Debian host run, say, an OpenSUSE container? They use different kernels, with separate modules. Also, older Debian versions used older kernels, so how can they run on a kernel version 3.10+? Older kernels have only older built-in functions, so how can an old distro make use of new features?
What is "the trick" here?
Docker never uses a different kernel: the kernel is always your host kernel.
If your host kernel is "compatible enough" with the software in the container you want to run, it will work; otherwise, it won't.
"Containers" Are Just Process Configuration
The key thing to understand is that a Docker container is not a virtual machine: it doesn't create a new virtual computer on which to run the software. Instead, Docker starts processes in your existing OS, just like you start new processes from the command line.
The difference between a "containerized" process and an ordinary process is the restrictions put on the containerized process and the changes to how it sees the environment around it. (These are passed on to any child processes started by the containerized process.) Typical restrictions and changes include:
Instead of using the host's root filesystem, mount a different filesystem on / (usually one supplied with the container's image). Parts of the host filesystem may be mounted underneath the new process' root filesystem, e.g. by using docker run -v /u/myprogram-data:/var/data/myprogram so that when the containerized process reads or writes /var/data/myprogram/file this reads/writes /u/myprogram-data/file in the host filesystem.
Create a separate process space for the containerized process so that it can see only itself and its children (with ps or similar commands), but cannot see other processes running on the host (see the sketch after this list).
Create a separate user namespace so that the users in the container are different from those in the host: e.g., UID 1234 in the containerized process will not be the same as UID 1234 for non-containerized processes.
Create a separate set of network interfaces with their own IP addresses, often using a "virtual router" and address translation between those and the host network interfaces. (E.g., the host, when it receives a packet on port 8080, forwards it to port 80 on the container process's virtual network interface.)
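A quick way to see the process-space isolation (a minimal sketch, assuming Docker is installed):
docker run --rm busybox ps aux            # sees only itself: ps runs as PID 1
docker run --rm --pid=host busybox ps aux # with the host's PID namespace, all host processes appear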
All of this is done by facilities built into the kernel; you can do any of it yourself without Docker if you write a program to do the appropriate setup and set the appropriate parameters when it starts a new process.
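You can even try a piece of it by hand with util-linux's unshare tool (a sketch; requires root):
# Start a shell in fresh PID and mount namespaces; --mount-proc remounts /proc
# so that ps reflects the new namespace.
sudo unshare --pid --fork --mount-proc /bin/sh -c 'ps aux'
# ps reports the new shell as PID 1, just as inside a container.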
Compatibility
So what does "compatible enough" mean? It depends on what requests the program makes of the kernel (system calls) and what features it expects the kernel to support. Some programs make requests that will break things; others don't. For example, on an Ubuntu 18.04 (kernel 4.19) or similar host:
docker run centos:7 bash works fine.
docker run centos:6 bash fails with exit code 139, meaning it terminated with a segmentation violation signal (139 = 128 + 11, where 11 is SIGSEGV); this is because the 4.19 kernel doesn't support something that that build of bash tried to do.
docker run centos:6 ls works fine because it's not making a request the kernel can't handle, as bash was.
If you try docker run centos:6 bash on an older kernel, say 4.9 or earlier, you'll find it will work fine. (At least as far as I tested it.)
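Reproducing those checks is straightforward (a sketch; results depend on your host kernel):
docker run --rm centos:7 bash -c 'echo ok'     # works
docker run --rm centos:6 bash; echo "exit: $?" # exit: 139 on a 4.19-era kernel
docker run --rm centos:6 ls /                  # works: ls avoids the failing call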
How can Docker on a Debian host run, say, an OpenSUSE container?
Because the kernel is the same and supports the Docker engine running all those container images: the host kernel should be 3.10 or newer, but its list of system calls is fairly stable.
See "Architecting Containers: Why Understanding User Space vs. Kernel Space Matters":
Applications contain business logic, but rely on system calls.
Once an application is compiled, the set of system calls that an application uses (i.e. relies upon) is embedded in the binary (in higher level languages, this is the interpreter or JVM).
Containers don’t abstract the need for the user space and kernel space to share a common set of system calls.
In a containerized world, this user space is bundled up and shipped around to different hosts, ranging from laptops to production servers.
Over the coming years, this will create challenges.
From time to time new system calls are added, and old system calls are deprecated; this should be considered when thinking about the lifecycle of your container infrastructure and the applications that will run within it.
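You can see that system-call dependency directly with strace (a minimal sketch; assumes strace is installed on the host):
strace -c ls / > /dev/null
# The summary table lists the syscalls ls relies on (openat, mmap, write, ...);
# whatever host kernel the container lands on must support them.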
See also "Why kernel version doesn't match Ubuntu version in a Docker container?":
There's no kernel inside a container. Even if you install a kernel, it won't be loaded when the container starts. The very purpose of a container is to isolate processes without the need to run a new kernel.
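This is easy to verify (a minimal sketch; the version string is just an example):
uname -r                          # host kernel, e.g. 4.19.0-21-generic
docker run --rm busybox uname -r  # prints the same version: the container brings no kernel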

Docker using less memory than the allowed amount specified in the GUI

I am using a Docker Linux container on a Windows 10 machine. I have memory options specified in the Advanced tab of the Docker GUI, and yet my Docker container is using just 14 GiB of memory.
I want Docker to be able to use all the memory it can after leaving a safe amount for Windows processes. I won't be using the RAM for anything other than Docker and whatever Windows needs to run.

How does Docker allocate memory to the process in a container?

Docker first initializes a container and then executes the program you want. I wonder how Docker manages the memory addresses of the container and the program in it.
Docker does not allocate memory; it's the OS that manages the resources used by programs. Docker (internally) uses cgroups, a kernel service, to constrain those resources. The reason the ps command inside a container won't show processes running on the host is that containers run in separate PID namespaces, which isolate them from each other.
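You can watch the kernel enforce a limit from inside a container (a sketch; the path below is the cgroup v1 one):
docker run --rm -m 256m busybox cat /sys/fs/cgroup/memory/memory.limit_in_bytes
# prints 268435456 -- the kernel, not Docker, enforces this cap
# (on cgroup v2 hosts the file is /sys/fs/cgroup/memory.max)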
Rather than worrying about Docker's memory, you would need to look at the underlying host (VM/instance) where you are running the Docker container. The number of containers is determined by a number of factors, including what app runs in the container.
See Is there a maximum number of containers running on a Docker host? for the limits that you can run into.

Docker container with HBA card

How can I attach an HBA card (which is in my physical server running CentOS 7) to a Docker container? As I'm doing a POC for migration to Docker from the existing environment, this is much needed. It's similar to Direct I/O in VMware ESXi (attaching a physical HBA to a VM can be done via Direct I/O).
Docker isn't a hypervisor, containers aren't VMs, and "attaching devices" to a container doesn't necessarily make sense -- a container is just a process running on your host.
You can expose a device node in /dev to a container using the --device flag to docker run, although exposing a block device inside a container usually leads to other complications (e.g., a normal container can't mount filesystems, so you would need to run it with --privileged, which may or may not be acceptable from a security perspective depending on your environment).
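For example (a sketch; /dev/sdb is a hypothetical block device backed by the HBA):
docker run --rm --device /dev/sdb:/dev/sdb busybox ls -l /dev/sdb
# Mounting a filesystem from it inside the container would additionally
# require --privileged, with the security trade-offs noted above.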
For storage, it is more common to mount devices on the host and then expose those filesystems to the container as Docker volumes (-v /host/path:/container/path).

Is there a maximum number of containers running on a Docker host?

Basically, the title says it all: is there any limit on the number of containers running at the same time on a single Docker host?
There are a number of system limits you can run into (and work around), but there's a significant amount of grey area depending on:
How you are configuring your docker containers.
What you are running in your containers.
What kernel, distribution and docker version you are on.
The figures below are from the boot2docker 1.11.1 VM image, which is based on Tiny Core Linux 7 with kernel 4.4.8.
Docker
Docker creates or uses a number of resources to run a container, on top of what you run inside the container.
Attaches a virtual ethernet adaptor to the docker0 bridge (1023 max per bridge)
Mounts an AUFS and an shm file system (1048576 mounts max per fs type)
Creates an AUFS layer on top of the image (127 layers max)
Forks 1 extra docker-containerd-shim management process (~3 MB per container on average; see sysctl kernel.pid_max)
Keeps Docker API/daemon internal data to manage the container (~400 k per container)
Creates kernel cgroups and namespaces
Opens file descriptors (~15 + 1 per running container at startup; see ulimit -n and sysctl fs.file-max)
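You can inspect the host-side ceilings mentioned above directly (a minimal sketch):
sysctl kernel.pid_max fs.file-max        # system-wide PID and open-file limits
ulimit -n                                # per-process file-descriptor limit
ls /sys/class/net/docker0/brif | wc -l   # veth adaptors currently on the bridge (1023 max)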
Docker options
Port mapping with -p will run an extra process per port number on the host (~4.5 MB per port on average before 1.12, ~300 k per port from 1.12 on; also bounded by sysctl kernel.pid_max)
--net=none and --net=host would remove the networking overheads.
Container services
The overall limits will normally be decided by what you run inside the containers rather than Docker's overhead (unless you are doing something esoteric, like testing how many containers you can run :)
If you are running apps in a language VM (Node, Ruby, Python, Java), memory usage is likely to become your main issue.
IO across 1000 processes would cause a lot of IO contention.
1000 processes trying to run at the same time would cause a lot of context switching (see the VM apps above for garbage collection).
If you create network connections from 1000 containers, the host's network layer will get a workout.
It's not much different from tuning a Linux host to run 1000 processes, just with some additional Docker overheads to include.
Example
1023 Docker busybox images running nc -l -p 80 -e echo host use up about 1 GB of kernel memory and 3.5 GB of system memory.
1023 plain nc -l -p 80 -e echo host processes running on a host use about 75 MB of kernel memory and 125 MB of system memory.
Starting 1023 containers serially took ~8 minutes.
Killing 1023 containers serially took ~6 minutes.
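A hypothetical reproduction of that test (sketch only; figures will vary by host):
for i in $(seq 1 1023); do
  docker run -d busybox nc -l -p 80 -e echo host
done
docker ps -q | wc -l   # how many actually came up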
From a post on the mailing list, at about 1000 containers you start running into Linux networking issues.
The reason is:
This is the kernel; specifically, net/bridge/br_private.h's BR_PORT_BITS cannot be extended because of spanning tree requirements.
With Docker Compose, I am able to run over 6k containers on a single host (with 190 GB of memory); the container image is under 10 MB. But due to the bridge limitation, I have divided the containers into batches across multiple services, each service having 1k containers and a separate subnet:
docker-compose -f docker-compose.yml up --scale servicename=1000 -d
But after reaching 6k, even though around 60 GB of memory is still available, it stops scaling and memory usage suddenly spikes. There should be benchmarking figures published by the Docker team to help, but unfortunately they are not available. Kubernetes, on the other hand, clearly publishes benchmarking stats about the recommended number of pods per node.
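The batching workaround can be sketched with user-defined bridges (hypothetical names, subnets, and image):
docker network create --subnet 10.1.0.0/22 batch1   # one bridge per ~1k containers
docker network create --subnet 10.2.0.0/22 batch2
for i in $(seq 1 1000); do docker run -d --network batch1 myimage; done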
