I am trying to install Ruxit inside Docker, but I get this strange error.
My Dockerfile:
RUN wget -O ruxit-Agent-Linux-1.91.271.sh https://yjm50779.live.ruxit.com/installer/agent/unix/latest/hnaT75uwgZzoBEf7
RUN /bin/sh ruxit-Agent-Linux-1.91.271.sh
Error:
Docker container detected! Ruxit Agent cannot be installed inside docker container. Setup won't continue.
Great question! As the error message indicates, you can indeed not install Ruxit Agent inside a Docker container. But Ruxit does support Docker, so why can't you install it inside a container?
Ruxit Agent needs to be installed directly on the host operating system and it will detect and monitor any docker containers that you start there - no need to modify any of your existing Docker images. We like to think this is a pretty cool approach.
But what if you just can't install anything on the host operating system?
Then we are currently working on two options for you:
We will soon publish a Docker image with Ruxit Agent pre-installed on Docker Hub. If you start this image as a privileged Docker container, Ruxit Agent will automatically monitor all other containers running on the same host - again, without modifying any other container image. This option is useful if you want to roll out Ruxit Agent with, e.g., Mesos, Docker Swarm, or Kubernetes.
We are working on Ruxit Agent for Platform as a Service deployments, where you do not have root access to the host where your application is running. In this scenario, you copy the Ruxit Agent files into your Docker container and modify the startup parameters of, e.g., your JVM to load Ruxit Agent into the process you want to monitor.
Both of these options will be released within the next couple of weeks; check our blog to be the first to know. If you want to try them a bit earlier, let us know at success@ruxit.com and we will set you up with an early access preview as soon as we have something ready.
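For the second option, the eventual Dockerfile change might look roughly like this. This is only a sketch of the general "copy the agent in and point the JVM at it" idea: the directory layout, library name, and -agentpath flag value are hypothetical, not Ruxit's published procedure.

```dockerfile
# Hypothetical sketch: copy the agent files into the image and tell the
# JVM to load the agent library (paths and names are illustrative only).
COPY ruxit-agent/ /opt/ruxit/
ENV JAVA_OPTS="$JAVA_OPTS -agentpath:/opt/ruxit/lib64/libruxitagent.so"
```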
Related
I would like to run integration tests while building a Docker image. Those tests need to instantiate Docker containers.
Is there a way to access Docker inside such a multi-stage Docker build?
No, you can't do this.
You need access to your host's Docker socket somehow. In a standalone docker run command you'd do something like docker run -v /var/run/docker.sock:/var/run/docker.sock, but there's no way to pass that option (or any other volume mount) into docker build.
For running unit-type tests (that don't have external dependencies) I'd just run them in your development or core CI build environment, outside of Docker, and re-run docker build until they pass. For integration-type tests (that do) you need to set up those dependencies, maybe with a Docker Compose file, which again will be easier to do outside of Docker. This also avoids needing to build your test code and its additional dependencies into your image.
(Technically there are two ways around this. The easier of the two is the massive security disaster that is opening up a TCP-based Docker socket; then your Dockerfile could connect to that ["remote"] Docker daemon and launch containers, stop them, kill itself off, impersonate the host for inbound SSH connections, launch a bitcoin miner that lives beyond the container build, etc...actually it allows any process on the host to do any of these things. The much harder, as @RaynalGobel suggests in a comment, is to try to launch a separate Docker daemon inside the container; the DinD image link there points out that it requires a --privileged container, which again you can't have at build time.)
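To make the mechanics concrete, the usual pattern is to build first, then run the tests in a container that talks to the host's daemon through the mounted socket. A sketch (the image tag and test script name are made up):

```shell
# Build the image first -- no Docker access is needed during the build.
docker build -t myapp:test .

# Then run the integration tests in a container that can reach the
# host's Docker daemon via the bind-mounted socket.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  myapp:test ./run-integration-tests.sh
```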
Motivation
Running DDEV for a diverse team of developers (front-end / back-end) on various operating systems (Windows, MacOS and Linux) can become time-consuming, even frustrating at times.
Hoping to simplify the initial setup, I started working on an automated VS Code Remote Container setup.
I want to run DDEV in a VS Code Remote Container.
To complicate things, the container should reside on a remote host.
This is the current state of the setup: caillou/vs-code-ddev-remote-container#9ea3066
Steps Taken
I took the following steps:
Set up VS Code to talk to a remote Docker installation over ssh. You just need to add the following to VS Code's settings.json: "docker.host": "ssh://username@host".
Install Docker and create a user with UID 1000 on said host.
Add docker-cli, docker-compose, and ddev to the Dockerfile, c.f. Dockerfile#L18-L20.
Mount the Docker socket in the container and use the remote user with UID 1000. In the example, this user is called node: devcontainer.json
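The relevant parts of the devcontainer.json from step 4 look roughly like this (a sketch of the setup described above; the values are illustrative):

```json
{
  "mounts": [
    "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind"
  ],
  "remoteUser": "node"
}
```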
What Works
Once I launch the VS Code Remote Container extension, an image is built using the Dockerfile, and a container is run using the parameters defined in the devcontainer.json.
I can open a terminal window and run sudo docker ps. This lists the container I am in, and its siblings.
My Problem
DDEV needs to create docker containers.
DDEV cannot be run as root.
On the host, the user with UID 1000 has the privilege to run Docker.
Within the container, the user with UID 1000 does not have the privilege to run Docker.
The Question
Is there a way to give an unprivileged user access to Docker within Docker?
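One commonly used workaround is to give the unprivileged user the same group as the mounted socket. A sketch, assuming the socket is bind-mounted at /var/run/docker.sock and the container user is the node user from the devcontainer.json (run once as root inside the container, e.g. from an entrypoint script):

```shell
# Find the group ID that owns the mounted Docker socket on the host...
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)
# ...create a matching group inside the container if none exists yet...
getent group "$DOCKER_GID" >/dev/null || groupadd -g "$DOCKER_GID" docker-host
# ...and add the unprivileged user to that group.
usermod -aG "$(getent group "$DOCKER_GID" | cut -d: -f1)" node
```

After re-logging in (so the new group membership takes effect), the node user can reach the socket without sudo.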
OMD User
# omd create docker-user
# su - docker-user
How to monitor a docker container?
Microservices' memory usage inside a docker container?
How to configure a docker container as a check_mk agent?
I am using Check_MK to monitor my servers and now want to monitor Docker as well.
Here are two options:
When you deploy your container, add the check_mk_agent at/during provisioning and, using the Check_MK Web-API, add your host, do discovery, etc.
You can use the following plugin to monitor docker containers.
Alternatively, if you are using the Enterprise version, you can use the current innovation release (1.5.x), which has native Docker support.
This is a late answer, but since this came up on top of my Google search results, I will take some time to add to Marius Pana's answer. As of now, the raw version of Check_MK also natively supports Docker. However, if you want dedicated checks inside your container, you will need to actually install a Check_MK agent inside it. To do that, you need to start some sort of shell (generally sh or bash) inside the container with docker exec -it <id> sh. You can get your container ID with docker ps.
Now that's the easy part. The hard part is to figure out which package manager you are dealing with inside the container (if any) and how to install inetd/xinetd, or your preferred means of communication for the agent (unless it's already installed). If it's an Ubuntu-based image, you will generally need to start with apt-get update and apt-get install xinetd, and then you can install your packaged Check_MK agent, or install it manually if you prefer. If it's a CentOS-based image, you will instead use yum. If the image is based on Arch Linux, you will probably want to use pacman.
Once you've managed to install everything in your container, you can test by adding your container's IP to Check_MK as a host. Please note that if your container is using the host's IP, you will need to forward port 6556 from your container to another port on your host, since I assume you're already monitoring the host itself through port 6556.
After you've checked that everything is working, two more things. If you stop there, a simple restart of your container will cancel every change you've made, so you need to do a docker commit to save your changes to your container image. And lastly, you will want to plan container updates ahead: you can reinstall the agent every time a new version of the container is pulled (you could even script this), or you could add instructions to your cont-init.d, which would be executed every time you launch your container.
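Put together, the flow described above looks roughly like this for an Ubuntu-based container (the image tag and the alternative host port are illustrative):

```shell
docker ps                      # find the container ID
docker exec -it <id> sh        # open a shell inside the container

# inside the container:
apt-get update && apt-get install -y xinetd
# ...then install the Check_MK agent package and exit...

# back on the host: persist the changes into the image...
docker commit <id> my-image:with-cmk-agent
# ...and, if the container shares the host's IP, publish the agent
# port on an alternative host port such as 6557:
docker run -d -p 6557:6556 my-image:with-cmk-agent
```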
We are running jenkins in a docker container and we want to use the docker build step plugin. The documentation tells us:
You have to make sure that Docker service is running on slaves where you run the build. In Jenkins global configuration, you need to specify Docker REST API URL (typically something like http://127.0.0.1:2375)
But I see very often that people are using 0.0.0.0:2375
What is the difference, and which should we use when we just want to use the Docker daemon inside one Docker container on one server (the Docker daemon runs on the same server)?
Regarding the difference between 0.0.0.0:2375 and 127.0.0.1:2375: according to this answer, it's basically whether you want to open the host up to the outside or not.
If it's all on one server, I'm assuming both should work, since it's all on the same host.
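For reference, the difference comes down to which interface the daemon's REST API is bound to. A sketch (in practice this is usually configured in the daemon's systemd unit or daemon.json rather than typed on the command line):

```shell
# Loopback only: reachable from processes on this host, including
# containers that use the host network.
dockerd -H unix:///var/run/docker.sock -H tcp://127.0.0.1:2375

# All interfaces: reachable from other machines as well. Do not do
# this without TLS; it effectively gives remote callers root on the host.
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
```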