OMD User
# omd create docker-user
# su - docker-user
How do I monitor a Docker container?
How do I see microservices' memory usage inside a Docker container?
How do I configure a Docker container as a check_mk agent?
I am using Check_MK for monitoring my servers and now want to monitor Docker as well.
Here are two options:
When you deploy your container, add the check_mk_agent at/during provisioning, then use the Check_MK Web-API to add your host, run discovery, etc.
You can use the following plugin to monitor Docker containers.
Alternatively, if you are using the Enterprise edition, you can use the current innovation release (1.5.x), which has native Docker support.
This is a late answer, but since this came up at the top of my Google search results, I will take some time to add to Marius Pana's answer. As of now, the raw version of Check_MK also natively supports Docker. However, if you want dedicated checks inside your container, you will need to actually install a Check_MK agent inside it. To do that, you need to start some sort of shell (generally sh or bash) inside the container with docker exec -it <id> sh. You can get your container ID with docker ps.
Now that's the easy part. The hard part is to figure out which package manager you are dealing with inside the container (if any) and how to install inetd/xinetd or your preferred means of communication for your agent (unless it's already installed). If it's an Ubuntu-based image, you will generally need to start with an apt-get update and apt-get install xinetd, and then you can install your packaged Check_MK agent or install it manually if you prefer. If it's a CentOS-based image, you will instead use yum. If the image is based on Arch Linux, you will probably want to use pacman.
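For an Ubuntu-based image, a minimal sketch of that process might look like the following (the agent package file name is illustrative; in practice you would download it from your monitoring site or copy it in with docker cp):
docker ps                                        # on the host: find the container ID
docker cp check-mk-agent.deb <id>:/tmp/          # copy the agent package into the container
docker exec -it <id> sh                          # open a shell inside the container
apt-get update && apt-get install -y xinetd      # inside the container: install the listener
dpkg -i /tmp/check-mk-agent.deb                  # inside the container: install the agent package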
Once you have managed to install everything in your container, you can test by adding the container's IP to Check_MK as a host. Please note that if your container is using the host's IP, you will need to forward port 6556 from the container to another port on the host, since I assume you're already monitoring the host itself through port 6556.
After you've checked that everything is working, two more things. If you stop there, a simple restart of your container will cancel every change you've made, so you need to do a docker commit to save your changes to your container image. And lastly, you will want to plan container updates ahead: you can reinstall the agent every time a new version of the container is pulled (you could even script this), or you could add instructions to your cont-init.d which would be executed every time you launch your container.
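As a rough sketch of those last steps (the image tag and host port are placeholders):
docker commit <id> myapp:with-cmk-agent                          # persist the changes into a new image
docker run -d -p 6557:6556 --name myapp myapp:with-cmk-agent     # re-create the container, publishing the agent port on a free host port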
Motivation
Running DDEV for a diverse team of developers (front-end / back-end) on various operating systems (Windows, macOS and Linux) can become time-consuming, even frustrating at times.
Hoping to simplify the initial setup, I started working on an automated VS Code Remote Container setup.
I want to run DDEV in a VS Code Remote Container.
To complicate things, the container should reside on a remote host.
This is the current state of the setup: caillou/vs-code-ddev-remote-container#9ea3066
Steps Taken
I took the following steps:
Set up VS Code to talk to a remote Docker installation over SSH. You just need to add the following to VS Code's settings.json: "docker.host": "ssh://username@host".
Install Docker and create a user with UID 1000 on said host.
Add docker-cli, docker-compose, and ddev to the Dockerfile, c.f. Dockerfile#L18-L20.
Mount the Docker socket in the container and use the remote user with UID 1000. In the example, this user is called node: devcontainer.json
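For reference, a minimal devcontainer.json sketch of that last step (the user name "node" comes from the example; the fields shown are standard Remote Containers options, but see the linked repository for the actual file):
{
  // build the dev container from the repository's Dockerfile
  "build": { "dockerfile": "Dockerfile" },
  // bind-mount the host's Docker socket into the container
  "mounts": ["source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind"],
  // run as the user with UID 1000
  "remoteUser": "node"
}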
What Works
Once I launch the VS Code Remote Container extension, an image is built using the Dockerfile, and a container is run using the parameters defined in devcontainer.json.
I can open a terminal window and run sudo docker ps. This lists the container I am in, and its siblings.
My Problem
DDEV needs to create docker containers.
DDEV cannot be run as root.
On the host, the user with UID 1000 has the privilege to run Docker.
Within the container, the user with UID 1000 does not have the privilege to run Docker.
The Question
Is there a way to give an unprivileged user access to Docker within Docker?
Here's my scenario.
I have 2 Docker containers:
C1: is a container with Ruby (but it could be anything else) that prepares data files on which it must perform a calculation in Julia language
C2: is a container with Julia (or R, or Octave...), used to perform the calculation, so as to avoid installing Julia on the same system or container that run Ruby code
From the host, obviously, I have no problem doing the processing.
Usually when two containers are linked (or belong to the same network) they communicate with each other over the network by exposing some port. In this case Julia does not expose any port.
Can I run a command on C2 from C1 similar to what is done between host and C2?
If so, how?
Thanks!
Technically yes, but that's probably not what you want to do.
The Docker CLI is just an interface to the Docker service, which listens at /var/run/docker.sock on the host. Anything that can be done via the CLI can be done by communicating with this socket directly. You can mount this socket into a running container (C1) as a volume to allow that container to speak to its host's Docker service.

Docker has a few permissions that need to be set to allow this; older versions allow containers to run in "privileged" mode, in which case they're allowed to (amongst other things) speak to /var/run/docker.sock with the authority of the host. I believe newer versions of Docker split this permission system up a bit more, but you'd have to look into this. Making this work in swarm mode might be a little different as well.

Using this API at a code level, without installing the full Docker CLI within the container, is certainly possible (using a library or coding up your own interaction). A working example of doing this is JupyterHub+DockerSpawner, which has one privileged Hub server that instantiates new Notebook containers for each logged-in user.
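For example, with the socket bind-mounted into C1, you can query the host's Docker daemon over plain HTTP without installing the Docker CLI at all. This is the standard Docker Engine API; the unversioned path defaults to the newest API version the daemon supports:
curl --unix-socket /var/run/docker.sock http://localhost/containers/json   # the equivalent of `docker ps` on the host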
I just saw that you explicitly state that the Julia container has no port/interface. Could you wrap that code in a larger container that gives it a server interface while managing the server-less Julia program as a "local" process within the same container?
I needed to solve the same problem. In my case, it all started when I needed to run some scripts located in another container via cron. I tried the following scenarios, with no luck:
Forgetting about the two-container scenario and placing all the logic in one container, so that inter-container execution is no longer needed: this turned out to be a bad idea, since the whole Docker concept is to execute a single task in each container. In any case, creating a Dockerfile to build an image with both my main service (PHP in my case) and a cron daemon proved to be quite messy.
Communicating between containers via SSH: I then decided to try building an image that would take care of running the cron daemon, which would be the "Docker" approach to solving my problem, but the bad idea was to execute the commands from each cronjob by opening an SSH connection to the other container (in your case, C1 connecting via SSH to C2). It turns out it's quite clumsy to implement an inter-container SSH login, and I kept running into problems with permissions, passwordless logins and port routing. It worked in the end, but I'm sure it would add some potential security issues, and I didn't feel it was a clean solution.
Implementing some sort of API that I could call via HTTP requests from one container to another, using something like curl or wget: this felt like a great solution, but it ultimately meant adding a secondary service to my container (an Nginx server to handle HTTP connections), and dealing with HTTP requests and timeouts just to execute a shell script felt like too much of a hassle.
Finally, my solution was to run "docker exec" from within the container. The idea, as described by scnerd, is to make sure the Docker client inside your container talks to the Docker service on your host:
To do so, you must install docker into the container you want to execute your commands from (in your case, C1), by adding a line like this to your Dockerfile (for Debian):
RUN apt-get update && apt-get -y install docker.io
To let the docker client inside your container interact with the docker service on your host, you need to add /var/run/docker.sock as a volume to your container (C1). With Docker compose this is done by adding this to your docker service "volumes" section:
- /var/run/docker.sock:/var/run/docker.sock
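Putting both pieces together, a minimal docker-compose.yml sketch might look like this (the service and image names are just placeholders for your C1 and C2):
version: "3"
services:
  c1:
    image: my-ruby-image          # the container that issues the docker exec commands
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  c2:
    image: my-julia-image         # the container the commands are executed in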
Now when you build and run your Docker image, you'll be able to execute "docker exec" from within the container with a command like this, and you'll be talking to the Docker service on the host:
docker exec -u root C2 /path/your_shell_script
This worked well for me. Since, in my case, I wanted the Cron container to launch scripts in other containers, it was as simple as adding "docker exec" commands to the crontab.
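For illustration, a crontab entry along those lines could look like this (reusing the container name and script path from the command above; the schedule is arbitrary):
# Run the script inside the sibling container every 5 minutes
*/5 * * * * docker exec -u root C2 /path/your_shell_script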
This solution, as scnerd also points out, might not be optimal, and I agree with his comments about your structure: considering your specific needs, this might not be what you need, but it should work.
I would love to hear any comments from someone with more experience with Docker than me!
I tried to install Ruxit inside Docker, but I got this extremely strange error.
My Dockerfile:
RUN wget -O ruxit-Agent-Linux-1.91.271.sh https://yjm50779.live.ruxit.com/installer/agent/unix/latest/hnaT75uwgZzoBEf7
RUN /bin/sh ruxit-Agent-Linux-1.91.271.sh
Error:
Docker container detected! Ruxit Agent cannot be installed inside docker container. Setup won't continue.
Great question! As the error message indicates, you indeed cannot install Ruxit Agent inside a Docker container. Ruxit does support Docker, though, so how come you cannot install it inside a container?
Ruxit Agent needs to be installed directly on the host operating system and it will detect and monitor any docker containers that you start there - no need to modify any of your existing Docker images. We like to think this is a pretty cool approach.
But what if you just can't install anything on the host operating system?
In that case, we are currently working on two options for you:
We will soon publish a Docker image with Ruxit Agent pre-installed on Docker Hub. If you start this image as a privileged Docker container, Ruxit Agent will automatically monitor all other containers running on the same host, again without modifying any other container image. This option is useful if you want to roll out Ruxit Agent with, e.g., Mesos, Docker Swarm or Kubernetes.
We are working on Ruxit Agent for Platform-as-a-Service deployments, where you do not have root access to the host where your application is running. In this scenario, you need to copy the Ruxit Agent files into your Docker container and modify the startup parameters of, e.g., your JVM to load Ruxit Agent into the process you want to monitor.
Both of these options will be released within the next couple of weeks; check our blog to be the first to know. If you want to try them a bit earlier, let us know at success@ruxit.com and we will set you up with an early access preview as soon as we have something ready.
Can't seem to find the recommended way to do this.
I have a VM host running a number of containers.
It is using an older version of docker and I want to update.
What are the steps to do this?
Stop the containers.
Update the Docker version.
Restart the containers.
I'm trying to minimise downtime and don't have the option to bring up a new VM. Also, the VM contains volume data I don't want to lose.
Depending on your distribution, run the appropriate command to update Docker.
e.g. on CentOS run yum update docker-engine
This will stop your containers, update docker and start containers which were configured to restart automatically (e.g. docker run --restart=always ...).
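A rough sketch of the whole flow on CentOS (adjust the package name to your distribution and Docker release):
docker inspect -f '{{.Name}} {{.HostConfig.RestartPolicy.Name}}' $(docker ps -q)   # check which containers will come back automatically
sudo yum update docker-engine                                                       # update the engine; the daemon restart stops and restarts the containers
docker ps                                                                           # verify the containers are back up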
Please note that if your container is configured to be removed automatically (docker run --rm ...), you will lose all data associated with the container unless you manage the data via volumes.
If you use an external service discovery tool, the port mappings of the exposed ports usually change, so you should trigger an update of the ports in that tool.
I'm running Jenkins inside a Docker container. I wonder if it's ok for the Jenkins container to also be a Docker host? What I'm thinking about is to start a new docker container for each integration test build from inside Jenkins (to start databases, message brokers etc). The containers should thus be shutdown after the integration tests are completed. Is there a reason to avoid running docker containers from inside another docker container in this way?
Running Docker inside Docker (a.k.a. dind), while possible, should be avoided, if at all possible. (Source provided below.) Instead, you want to set up a way for your main container to produce and communicate with sibling containers.
Jérôme Petazzoni — the author of the feature that made it possible for Docker to run inside a Docker container — actually wrote a blog post saying not to do it. The use case he describes matches the OP's exact use case of a CI Docker container that needs to run jobs inside other Docker containers.
Petazzoni lists two reasons why dind is troublesome:
It does not cooperate well with Linux Security Modules (LSM).
It creates a mismatch in file systems that causes problems for the containers created inside parent containers.
From that blog post, he describes the following alternative,
[The] simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.
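For example, assuming the Docker CLI is installed in the Jenkins image, a build step could spin up and tear down a test dependency like this (the container name and image are illustrative):
docker run -d --name it-database postgres     # started from inside the Jenkins container, but runs as a sibling on the host
# ...run the integration tests against it...
docker rm -f it-database                      # clean up the sibling container when the tests finish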
I answered a similar question before on how to run a Docker container inside Docker.
Running Docker inside Docker is definitely possible. The main thing is that you run the outer container with extra privileges (starting with --privileged=true) and then install Docker in that container.
Check this blog post for more info: Docker-in-Docker.
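As a minimal sketch of this privileged approach, using the official docker:dind image rather than installing Docker by hand (newer versions of the image may also ask you to configure TLS via DOCKER_TLS_CERTDIR):
docker run -d --privileged --name dind docker:dind   # outer container runs its own Docker daemon
docker exec -it dind docker ps                        # inner Docker client talks to that inner daemon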
One potential use case for this is described in this entry. The blog describes how to build Docker containers within a Jenkins Docker container.
However, Docker inside Docker is not the recommended approach for this type of problem. Instead, the recommended approach is to create "sibling" containers, as described in this post.
So, running Docker inside Docker was long considered by many to be a good solution for this type of problem. Now, the trend is to use "sibling" containers instead. See the answer by @predmijat on this page for more info.
It's OK to run Docker-in-Docker (DinD) and in fact Docker (the company) has an official DinD image for this.
The caveat however is that it requires a privileged container, which depending on your security needs may not be a viable alternative.
The alternative solution of running Docker using sibling containers (aka Docker-out-of-Docker or DooD) does not require a privileged container, but has a few drawbacks that stem from the fact that you are launching the container from a context different from the one in which it's running (i.e., you launch the container from within a container, yet it runs at the host's level, not inside the parent container).
I wrote a blog describing the pros/cons of DinD vs DooD here.
Having said this, Nestybox (a startup I just founded) is working on a solution that runs true Docker-in-Docker securely (without using privileged containers). You can check it out at www.nestybox.com.
Yes, we can run Docker in Docker; we'll need to mount the Unix socket /var/run/docker.sock (on which the Docker daemon listens by default) as a volume into the parent container using -v /var/run/docker.sock:/var/run/docker.sock.
Sometimes, permission issues may arise for the Docker daemon socket, for which you can run sudo chmod 757 /var/run/docker.sock.
It also requires running the container in privileged mode, so the commands would be:
sudo chmod 757 /var/run/docker.sock
docker run --privileged=true -v /var/run/docker.sock:/var/run/docker.sock -it ...
I was trying my best to run containers within containers, just like you, for the past few days and wasted many hours. So far, most people have advised me to do things like use Docker's DinD image, which is not applicable in my case since I need the main container to run Ubuntu, or to run some privileged command and map the daemon socket into the container (which never worked for me).
The solution I found was to use Nestybox on my Ubuntu 20.04 system, and it works best. It's also extremely simple to set up, provided your local system is Ubuntu (which they support best), as the container runtime is specifically designed for such applications. It also has the most flexible options. The free edition of Nestybox is perhaps the best method as of Nov 2022. I highly recommend you try it without bothering with all the tedious setup other people suggest. They have many pre-built solutions to address such specific needs with a simple command line.
Nestybox provides a special runtime environment for newly created Docker containers; they also provide some Ubuntu/common OS images with Docker and systemd built in.
Their goal is to make the main container function exactly like a virtual machine, securely. You can literally SSH into your Ubuntu main container without being able to access anything on the host machine. From your main container you can create all kinds of containers, just like a normal local system does. That systemd support is very important for conveniently setting up Docker inside the container.
One simple, common command to run a container with the Sysbox runtime:
docker run --runtime=sysbox-runc -it any_image
If you think that's what you are looking for, you can find out more at their GitHub:
https://github.com/nestybox/sysbox
Quick link to instructions on how to deploy a simple Sysbox runtime environment container: https://github.com/nestybox/sysbox/blob/master/docs/quickstart/README.md