How does Docker allocate memory to a process in a container?

Docker first initializes a container and then executes the program you want. I wonder how Docker manages the memory addresses of the container and of the program running in it.

Docker does not allocate memory; it's the OS that manages the resources used by programs. Docker (internally) uses cgroups, a kernel service, to account for and limit those resources. The reason that the ps command inside a container won't show processes running on the host is that containers run in separate PID namespaces, which are isolated from each other.
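As a quick illustration (the container name memdemo and the nginx image are just examples for this sketch), you can start a container with a memory limit and confirm the limit Docker records for the cgroup:
# Start a container with a 256 MB memory limit
$ docker run -d --name memdemo -m 256m nginx
# The limit is recorded in the container's config (value in bytes)
$ docker inspect -f '{{.HostConfig.Memory}}' memdemo
268435456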

Rather than worrying about how Docker itself handles memory, look at the underlying host (VM/instance) where you are running the container. The number of containers you can run is determined by a number of factors, including what the app running in the container does.
See here for the limits you can run into: Is there a maximum number of containers running on a Docker host?

Related

What does the host's top command show about CPU/memory utilization by Kubernetes pods?

If I run the top command on a node of a Kubernetes cluster, will it show the CPU and memory utilization of processes that are running inside Docker as part of the Kubernetes cluster? If so, how will they be named?
As we can see from Isolate containers with a user namespace:
Linux namespaces provide isolation for running processes, limiting their access to system resources without the running process being aware of the limitations
Docker uses Linux namespaces to isolate processes, but this never changes the fact that the processes running in Docker are simply host processes with limitations.
So you can always see the CPU and memory utilization of all processes, inside or outside Docker.
As for how they are named: they appear under the name of the process that runs inside the container.
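For example (assuming a PostgreSQL container is running on the node; the process name here is illustrative), host-level tools list the container's processes under their ordinary names:
# On the node: container processes show up like any other process
$ ps -eo pid,comm,cgroup | grep postgres
# top works the same way; look for the process name (e.g. postgres)
$ top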

Do Docker Containers allow for changing of resources [CPU, Memory, Storage] while running?

I have several database processes running in docker containers. Under certain conditions, I'd like to throttle the Memory and Storage used by the container. I understand that docker-compose lets you restrict resource use, but I want to keep the container running if possible.
Yes: look into the docker update command. You can update memory and CPU, but storage is not listed among the docker update options; you can look further here for a storage option.
Update a container with cpu-shares and memory
For example, to update a container's cpu-shares and memory at the same time:
$ docker update --cpu-shares 512 -m 300M abebf7571666
Extended description
The docker update command dynamically updates container configuration. You can use this command to prevent containers from consuming too many resources from their Docker host. With a single command, you can place limits on a single container or on many. To specify more than one container, provide a space-separated list of container names or IDs.
Warning:
The docker update and docker container update commands are not supported for Windows containers.
See the docker update command reference for details.
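A small sketch of this in practice (the container name dbbox is made up for illustration): throttle a running container's memory, then verify the new limit:
# Lower the memory limit of a running container to 512 MB
# (--memory-swap must be updated together with -m if it was set)
$ docker update -m 512m --memory-swap 512m dbbox
# Confirm the new limit; docker inspect reports it in bytes
$ docker inspect -f '{{.HostConfig.Memory}}' dbbox
536870912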

I'm still confused by Docker containers and images

I know that containers are a form of isolation between the app and the host (the managed running process). I also know that container images are basically the package for the runtime environment (hopefully I got that correct). What's confusing to me is when they say that a Docker image doesn't retain state. So if I create a Docker image with a database (like PostgreSQL), wouldn't all the data get wiped out when I stop the container and restart? Why would I use a database in a Docker container?
It's also difficult for me to grasp LXC. On another question page I see:
LinuX Containers (LXC) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host (LXC host).
What does that exactly mean? Does it mean I can have multiple versions of Linux running on the same host as long as the host supports LXC? What else is there to it?
LXC and Docker are quite different, but both are described as container technologies.
There are two types of containers:
1. Application containers: their main purpose is to package an application and its dependencies. These are Docker containers (lightweight containers). They run as processes on your host and get done whatever you need, without any OS image or boot-up step; they come and go in a matter of seconds. Running multiple processes/services inside one Docker container is discouraged: it is possible, but laborious. Resources (CPU, disk, memory) are shared with the host.
2. System containers: these are fat containers, meaning they are heavy and need OS images to launch themselves. At the same time, they are not as heavy as virtual machines; they are very similar to VMs but differ a bit in architecture.
For instance, with Ubuntu as the host machine and LXC installed and configured on it, you can run a CentOS container, an Ubuntu container (of a different version), RHEL, Fedora, or any other Linux flavour on top of the Ubuntu host; you can also run multiple processes inside an LXC container (see the sketch below). Resource sharing happens here too: if a huge application in one LXC container requires more resources while an application in another LXC container requires fewer, the container with less demand shares resources with the one that needs more.
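As a rough sketch of the CentOS-on-Ubuntu case above (the container name centos-ct is arbitrary; assumes LXC with the download template is installed):
# Create a CentOS 7 container from the public LXC image server
$ lxc-create -n centos-ct -t download -- --dist centos --release 7 --arch amd64
# Start it and attach a shell inside
$ lxc-start -n centos-ct
$ lxc-attach -n centos-ct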
Answering Your Question:
So if I create a Docker image with a database (like PostgreSQL), wouldn't all the data get wiped out when I stop the container and restart?
You wouldn't create a database Docker image with data baked into it (this is not recommended).
You run/create a container from an image, and you attach/mount the data to it.
So when you stop or restart a container, the data is never lost as long as you attach it to a volume, because the volume resides somewhere outside the container (perhaps an NFS server, or the host itself).
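A minimal sketch (the volume name pgdata, container name mydb, and password are placeholders): run PostgreSQL with its data directory on a named volume so the data outlives the container:
# Data lives on the named volume, not in the container's writable layer
$ docker run -d --name mydb -e POSTGRES_PASSWORD=secret -v pgdata:/var/lib/postgresql/data postgres
# Removing the container does not remove the volume; a new container
# started with the same volume sees the same data
$ docker rm -f mydb
$ docker run -d --name mydb -e POSTGRES_PASSWORD=secret -v pgdata:/var/lib/postgresql/data postgres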
Does it mean I can have multiple versions of Linux running on the same host as long as the host support LXC? What else is there to it?
Yes, you can do this. We run LXC containers in our production environment.

Finding how much CPU a Docker container has access to from within the container

Is it possible, from within the Docker container, to inspect how many CPUs it has access to? Let us assume that the docker run command that started the container may or may not have been given the relevant flag(s).
My OS/distro of interest for the container is Ubuntu, but I'm also curious to know whether different OSes have different means to address this.
docker stats
The docker stats command (run on the host) gives resource-usage details for each running container.
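From inside the container itself, one hedged approach is to read the cgroup interface files directly (the exact paths depend on whether the host uses cgroup v1 or v2):
# cgroup v2: "max 100000" means unlimited; "50000 100000" means half a CPU
$ cat /sys/fs/cgroup/cpu.max
# cgroup v1: quota and period live in separate files (-1 = unlimited)
$ cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us /sys/fs/cgroup/cpu/cpu.cfs_period_us
# Number of CPUs visible to the container (affected by --cpuset-cpus,
# but not by quota-based limits like --cpus)
$ nproc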

Docker container with HBA card

How can I attach an HBA card (which is in my physical server running CentOS 7) to a Docker container? As I'm doing a PoC for migration to Docker from our existing environment, this is much needed. It's similar to direct I/O in VMware ESXi (attaching a physical HBA to a VM can be done via Direct I/O).
Docker isn't a hypervisor, containers aren't VMs, and "attaching devices" to a container doesn't necessarily make sense -- a container is just a process running on your host.
You can expose a device node in /dev to a container using the --device flag to docker run, although exposing a block device inside a container usually leads to other complications (e.g., a normal container can't mount filesystems, so you would need to run it with --privileged, which may or may not be acceptable from a security perspective depending on your environment).
For storage, it is more common to mount devices on the host, and then expose those filesystems to the container as Docker volumes (-v /host/path:/container/path).
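For illustration (the device path /dev/sdb and the mount point /mnt/hba-storage are assumptions for this sketch):
# Expose a single device node to the container
$ docker run --device /dev/sdb:/dev/sdb -it centos:7 bash
# More common for storage: mount on the host, then bind-mount into the container
$ mount /dev/sdb1 /mnt/hba-storage
$ docker run -v /mnt/hba-storage:/data -it centos:7 bash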
