Jenkins in a container is much slower than on the server itself - docker

We recently had our Jenkins setup redone. We decided to run the new version in a Docker container on the server.
While migrating, I noticed that Jenkins is MUCH slower when it runs in a container than when it ran on the server itself.
This is a major issue and could mess up our migration.
I tried looking for ways to give more resources to the container, without much success.
How can I speed up the Jenkins container / give it all the resources it needs on the server (the server is dedicated only to Jenkins)?
Also, how do I divide these resources when I want to start up slave containers as well?

Disk operations
One thing that can be slow with Docker is when the process running in a container makes a lot of I/O calls to the container file system. The container file system is a union file system, which is not optimized for speed.
This is where Docker volumes are useful. In addition to providing a location on the file system that survives container deletion, disk performance on a Docker volume is good.
The Jenkins Docker image defines the JENKINS_HOME location as a Docker volume, so as long as your Jenkins jobs perform their disk operations within that location, you should be fine.
If you determine that disk access on that volume is still too slow, you can customize the mount location of that volume on your Docker host so that it ends up on a fast drive such as an SSD.
Another trick is to make a Docker volume backed by RAM with tmpfs. Note that such a volume does not offer persistence and that data at that location will be lost when the container is stopped or deleted.
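As a rough sketch of both options (the host paths and container name below are assumptions, adjust them to your own layout):
# Bind-mount JENKINS_HOME to a directory on a fast drive (e.g. an SSD):
docker run -d --name jenkins -p 8080:8080 -p 50000:50000 \
  -v /mnt/ssd/jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts

# Or add a RAM-backed scratch directory with tmpfs
# (anything written there is lost when the container stops):
docker run -d --name jenkins -p 8080:8080 -p 50000:50000 \
  -v /mnt/ssd/jenkins_home:/var/jenkins_home \
  --tmpfs /tmp:rw,size=1g \
  jenkins/jenkins:lts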
JVM memory exhaustion / Garbage collector
As Jenkins is a Java application, another potential issue comes to mind: memory exhaustion. If the JVM the Jenkins process runs on has too little memory, the Java garbage collector will run too frequently. You can spot this when your Java app uses a surprising amount of CPU (the garbage collector uses CPU). If that is the case, give more memory to the JVM:
docker run -p 8080:8080 -p 50000:50000 --env JAVA_OPTS="-Xmx2048m -Djava.awt.headless=true" jenkins/jenkins:lts
Network
Docker containers have a virtual network stack and custom network settings. You also want to make sure that all network-related operations are fast.
The DNS server might be an issue; check it by executing ping <some domain name> from within the Jenkins container.
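As a rough sketch (the container name and DNS server below are only examples), you can test resolution from inside the container and, if lookups are slow, start the container with an explicit DNS server:
# Test name resolution from inside the running Jenkins container
# (getent exercises DNS even when ping is not installed in the image):
docker exec -it jenkins getent hosts example.com

# Point the container at a specific DNS server if the default one is slow:
docker run -d --dns 1.1.1.1 -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts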

Related

Purpose of Writable but Stateless Partitions under Container OS

Recently I was running a container under Compute Engine's Container-Optimized OS, and my data (my TLS certificate specifically) wasn't getting persisted outside of the container across reboots because I was writing to /etc. After a bit of time, I stumbled upon Disks and file system overview - File system, which explains that there are two types of writable partitions: stateful and stateless. /etc is stateless, and I needed to move my persisted files to /var for stateful storage.
But I'm left wondering about the purpose of writable, stateless partitions. Deploying Containers - Limitations explains that a container OS (on a VM instance) can only run one container. What does a writable but stateless partition enable compared to just writing data within the Docker container, since both of those writable locations would be lost on host OS reboot anyway? The only benefit I can see would be sharing data across containers on the same host OS, but the limitation above rules that out.
The main purpose of COS images is security: a minimal OS, without unnecessary system libraries and binaries, that is able to run containers.
That is why /etc is stateless: it does not persist changes and updates (or backdoors) to the most important executables and libraries of the COS.
On the container side, the writable layer lives in memory. You can write whatever you want to it, and it is written in memory (unless you have mounted a volume into your container, but that is not the point here). You are limited by the amount of memory available to the container, and when you stop the container it is unloaded from memory, so of course you lose all the data written inside it.
So keep in mind that the /etc of your container isn't the same as the /etc of your VM. Same for /var: the /var of your container is always stateless (if not mounted from a VM volume), while the /var of your VM is stateful.
In addition, the lifecycles aren't the same: you can start and stop several containers on your COS VM without stopping and restarting the VM itself. So the VM's /etc lives for the whole life of the VM and may "see" the lives of several containers.
Finally, the COS image is used on Compute Engine to run a container, and only one at a time. However, the same COS image is also used for Kubernetes node pools (GKE on GCP), and with Kubernetes you typically run several Pods (each with one or more containers) on the same Node (Compute Engine instance).
All these use cases should show you the meaning and the usefulness (or not) of these restrictions and features (and I hope I was clear in my explanations!)
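To make the original problem concrete, here is a minimal sketch on a COS VM (the paths and image name are assumptions): keep files that must survive reboots, such as TLS certificates, under the VM's stateful /var and bind-mount that directory into the container.
# On the COS host: /var is stateful, so files stored here survive reboots.
sudo mkdir -p /var/lib/my-certs

# Mount that host directory into the container where the app expects its certificates:
docker run -d -v /var/lib/my-certs:/etc/my-app/certs my-registry/my-app:latest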

Is it safe to run docker in docker on Openshift?

I built a Docker image on a server that runs CI/CD for Jenkins. Because some builds use Docker, I installed Docker inside my image, and in order to allow the inner Docker to run, I had to give it --privileged.
Everything works well, but I would like to run this Docker-in-Docker setup on Openshift (or Kubernetes). The problem is getting the --privileged permission.
Is running a privileged container on Openshift dangerous, and if so, why and how much?
A privileged container can reboot the host, replace the host's kernel, access arbitrary host devices (like the raw disk device), and reconfigure the host's network stack, among other things. I'd consider it extremely dangerous, and not really any safer than running a process as root on the host.
I'd suggest that using --privileged at all is probably a mistake. If you really need a process to administer the host, you should run it directly (as root) on the host and not inside an isolation layer that blocks the things it's trying to do. There are some limited escalated-privilege things that are useful, but if e.g. your container needs to mlock(2) you should --cap-add IPC_LOCK for the specific privilege you need, instead of opening up the whole world.
(My understanding is still that trying to run Docker inside Docker is generally considered a mistake and using the host's Docker daemon is preferable. Of course, this also gives unlimited control over the host...)
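As an illustration of that last suggestion (the image name is hypothetical), granting a single capability is far narrower than --privileged:
# Grant only the capability the process actually needs (here, mlock(2)):
docker run --cap-add IPC_LOCK my-app:latest

# ...instead of handing over essentially full control of the host:
docker run --privileged my-app:latest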
In short, the answer is no, it's not safe. Docker-in-Docker in particular is far from safe due to potential memory and file system corruption, and even mounting the host's docker socket is unsafe in virtually any environment, as it effectively gives the build pipeline root privileges on the host. This is why tools like Buildah and Kaniko were made, as well as build images like S2I.
Buildah in particular is Red Hat's own tool for building inside containers, but as of now I believe it still can't run completely privilege-less.
Additionally, on Openshift 4, you cannot run Docker-in-Docker at all since the runtime was changed to CRI-O.

I'm still confused by Docker containers and images

I know that containers are a form of isolation between the app and the host (the managed running process). I also know that container images are basically the package for the runtime environment (hopefully I got that correct). What's confusing to me is when they say that a Docker image doesn't retain state. So if I create a Docker image with a database (like PostgreSQL), wouldn't all the data get wiped out when I stop the container and restart? Why would I use a database in a Docker container?
It's also difficult for me to grasp LXC. On another question page I see:
LinuX Containers (LXC) is an operating system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host (LXC host)
What does that exactly mean? Does it mean I can have multiple versions of Linux running on the same host as long as the host supports LXC? What else is there to it?
LXC and Docker are quite different, but both are container technologies.
There are two types of containers:
1. Application containers: their main purpose is to package an application and its dependencies. These are Docker containers (lightweight containers). They run as processes on your host and do the work you want done; they don't need a full OS image or boot-up sequence. They come and go in a matter of seconds. You can run multiple processes/services inside a single Docker container if you want to, but it is laborious and not the usual approach. Here, resources (CPU, disk, memory) are shared with the host.
2. System containers: these are fat containers, meaning they are heavy and need OS images to launch themselves. At the same time, they are not as heavy as virtual machines; they are very similar to VMs but differ a bit in architecture.
For example, with Ubuntu as the host machine and LXC installed and configured on it, you can run a CentOS container, an Ubuntu container (of a different version), a RHEL, a Fedora, or any other Linux flavour on top of the Ubuntu host (see the sketch after this list). You can also run multiple processes inside an LXC container. Here, too, resources are shared.
So, if you have a huge application running in one LXC container that requires more resources, and another application in a different LXC container that requires fewer, the container with the smaller requirement will share its resources with the container that needs more.
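As a rough sketch of that idea (the container name, distribution, and release are only examples, and the exact template options depend on your LXC version):
# Create a CentOS system container on an Ubuntu host using the download template:
lxc-create -n mycentos -t download -- --dist centos --release 7 --arch amd64

# Start it and attach a shell inside it:
lxc-start -n mycentos
lxc-attach -n mycentos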
Answering Your Question:
So if I create a Docker image with a database (like PostgreSQL), wouldn't all the data get wiped out when I stop the container and restart?
You don't bake data into a database Docker image (this is not recommended).
You run/create a container from an image and you attach/mount data to it.
So, when you stop/restart a container, the data is not lost, as long as you attach that data to a volume: the volume resides somewhere outside the Docker container (it may be an NFS server or the host itself).
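A minimal sketch of that pattern with PostgreSQL (the volume name and password are examples):
# Create a named volume and mount it at PostgreSQL's data directory:
docker volume create pgdata
docker run -d --name pg \
  -e POSTGRES_PASSWORD=example \
  -v pgdata:/var/lib/postgresql/data \
  postgres:15

# Stopping and restarting (or even recreating) the container keeps the data,
# because it lives in the "pgdata" volume, not in the container's writable layer:
docker stop pg && docker start pg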
Does it mean I can have multiple versions of Linux running on the same host as long as the host supports LXC? What else is there to it?
Yes, you can do this. We are running LXC containers in production.

Docker build extremely slow in a machine running a swarm cluster

I'm using my laptop as a single-noded Docker Swarm cluster.
After deploying my cluster, it becomes extremely slow to run a docker build command. Even if a command is cached (e.g. RUN chmod ...), it sometimes takes minutes to complete.
How can I debug this and understand what's the cause of the slowdown?
Context
Number of services in my swarm cluster: 22
Docker version: 18.04-ce
Host OS: Linux 4.15.15
Host Arch: x86_64
Host specs: i7, 16GB of RAM, SSD/HDD hybrid disk (docker images are stored in the HDD part)
Using VMs or docker-machine: No
In this case, it turned out to be too much disk I/O.
As I've mentioned above, my laptop's storage is split between an SSD and an HDD. The Docker images are stored on the HDD, but so are the Docker volumes that get created (which I initially overlooked).
The cluster that I am running locally contains a PostgreSQL database that receives a lot of writes. Those writes were clogging my HDD, so the solution to this specific problem was to mount PostgreSQL's storage in a directory stored on the SSD. The debug procedure follows below.
I found this out by using iostat like instructed in this blog post:
iostat -x 2 5
Looking at the output of this command, it became clear that my HDD's %util value was up to 99%, so it was probably the culprit. Next, I ran iotop, and dockerd + postgres was at the top of the list.
In conclusion, if your containers are very I/O intensive, they could slow down the whole docker infrastructure to a crawl.
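For reference, a hedged sketch of the fix described above (the service name and SSD path are assumptions): repoint the database service's data directory at a bind mount on the SSD.
# Replace the service's data mount with a bind mount on the SSD
# (an existing mount would need to be removed first with --mount-rm):
docker service update \
  --mount-add type=bind,source=/mnt/ssd/pgdata,target=/var/lib/postgresql/data \
  mystack_postgres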

Backup docker windows RUNNING container

I run some Docker Windows containers. I'm searching for some way to back up these containers while they're running, but when I try the standard ways to back up containers, I get errors like these:
PS C:\Users\roza> docker commit 908d6334d554
Error response from daemon: windows does not support commit of a running container
PS C:\Users\roza> docker export 908d6334d554 -o tar.tar
Error response from daemon: the daemon on this platform does not support export of a container
Why can't I commit/export running Windows containers?
Is there some way (maybe non-standard and very tricky, maybe using external tools) to create a backup of such containers?
This may not be what you want to hear but...
In the container world, backing up running containers should not be required. If you lose something when the container exits, then the image should be better segmented. Anything that must survive after the container is killed (logs, assets, or even temp folders) should be mapped as volumes. That gives you greater control over backups.
Committing a Windows container also involves stopping it first, then committing. Another limitation is that VSS-based apps won't interoperate with containers. As the earlier answer suggested, the standard approach for containers is to simply spin up a new container from an image.
Windows images from Microsoft (which is all Windows images) are licensed, and I believe part of that licensing means you cannot export the image. The lack of pause/unpause is due to the underlying implementation: Linux implements pause with cgroups, which don't exist on Windows. Only Windows Hyper-V containers support pause, because they use a Hyper-V command to implement it.
That said, backing up anything in docker involves backing up 3 things:
the image registry server
the configuration for the container, preferably a docker-compose.yml file
the volume data
You don't back up the containers themselves; they are ephemeral, treated like cattle. The volume data will be a filesystem directory, and you'll use your standard backup tools on that directory. If you cannot back up while your container is running, stop the container first and restart it after the backup is complete.
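For the volume data, a common pattern on Linux hosts (the volume name and backup path are examples; a Windows container host would need an equivalent Windows-based image) is to archive the volume from a throwaway container:
# Archive the contents of the "mydata" volume into the current directory:
docker run --rm \
  -v mydata:/data:ro \
  -v "$PWD":/backup \
  alpine tar czf /backup/mydata-backup.tgz -C /data .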

Resources