Connecting to the Docker daemon inside the CDK to build RHEL-based Docker images - docker

I want to use the docker command line tool as in "docker ps", "docker build" and "docker run". How can I connect "docker" to the Docker Daemon inside the CDK, so I can create RHEL-based Docker images?

Use the vagrant-service-manager plugin to set up your host environment for connecting your client Docker binary (docker) to the Docker service running inside CDK. In the directory with the Vagrantfile you used to launch CDK, run:
eval "$(vagrant service-manager env docker)"
This will export environment variables that instruct the docker binary to connect to CDK.
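The exported variables are the standard Docker client settings; they typically look like the following (the IP address and cert path here are illustrative, not values from your setup):
export DOCKER_HOST=tcp://172.28.128.4:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/home/user/.vagrant.d/data/docker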
To display info about the services running inside CDK and about the settings necessary to connect to them from your host (i.e. to see what the first command does), run:
vagrant service-manager env
See documentation for details: Using the vagrant-service-manager Plugin.
If you don't already have the docker client binary installed on your host system, vagrant-service-manager can do it for you:
vagrant service-manager install-cli docker
More details in documentation: Preparing Host System for Using Docker from the Command Line.
Just like using the docker binary to connect to the Docker daemon inside CDK, you can use the oc binary to connect to the OpenShift service running in CDK. Installation and setup are analogous to the docker client.
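For example, assuming your version of the plugin exposes the OpenShift service the same way (check the output of vagrant service-manager env first):
eval "$(vagrant service-manager env openshift)"
vagrant service-manager install-cli openshift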

Related

How to forbid the docker run command in the docker daemon, and how to restrict individual user access to the docker daemon

Objective:
I have 200+ projects using Docker builds; each runs Docker against its own Docker daemon. To reduce cost I set up a central Docker build server, where I have to allow all projects to build Docker images securely.
Description
I created the setup with a Jenkins Docker pipeline by installing the Docker plugin in Jenkins and connecting to my Docker host via the Docker API. When I run a build, it launches a Jenkins slave container on the Docker host and allows it to run docker build.
Issue
The setup works fine for building Docker images, but my concern is security:
How can I securely allow 200+ projects to connect to the Docker daemon?
How can I restrict each user's access based on roles?
How can I forbid the docker run command in the Docker daemon? Users should be restricted from running docker run.
Platform I use:
Jenkins running in Red Hat OpenShift
Docker host on a Linux box
Can anyone suggest the steps to fix this security hole?
Regards
Ashif

Is there any configuration to run docker inside a jenkins container?

I am trying to build an image with Docker and then upload it to Docker Hub. After the quality tests pass, I receive the following error: docker: not found. How can I make my Docker service (localhost) reachable from the Jenkins container?
Important: I have Docker Desktop installed locally, and I have installed Jenkins in a local container, also on Windows 10 Pro.
Error: https://imgur.com/q1SrKGe
Pipeline: https://imgur.com/nQWL1HR
You have 2 options to do this:
Install Docker inside your Jenkins container and also add a bind mount for the Docker socket from your host; without the socket, the Docker client inside the container has no daemon to talk to. On Linux this socket is /var/run/docker.sock, so the bind mount would look like -v /var/run/docker.sock:/var/run/docker.sock (a minimal run command is sketched after this list).
Use a different slave agent for the Building Image stage where you have Docker installed. For example, you could use Docker-in-Docker (https://hub.docker.com/_/docker) as a slave agent for Jenkins (connected via ssh) and run your docker build inside this slave agent.
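A minimal sketch of option 1, assuming the official jenkins/jenkins:lts image and default paths (adjust image, ports, and volumes to your setup; the Docker CLI still has to be installed inside the container afterwards):
docker run -d --name jenkins \
  -p 8080:8080 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts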

Running docker command in a Java application executing in a docker container

I am creating a Spring Boot monitoring agent that collects docker metrics. The agent can be attached through POM dependency to any client Spring Boot application that runs inside a docker container.
In the agent, I am trying to programmatically run docker stats.
But, it fails to execute because the docker container doesn't have docker client installed in it.
So how can I run docker commands in docker container? Please note, I can't make changes to the Dockerfile of client.
You can execute docker commands within the container by mounting the Docker socket into the container.
Run the container and mount docker.sock in the following manner:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
So mainly you have to mount docker.sock in order to run docker commands within the container.
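Note that, as the question says, the image has no Docker client installed; the socket mount only provides the daemon endpoint. One common workaround, sketched here under the assumption that the host's docker client binary is statically linked (the image name client-app and the client path are placeholders), is to bind-mount the host's client as well:
docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /usr/bin/docker:/usr/bin/docker \
  client-app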

Cannot connect to the Docker daemon on TeamCity build agent in AWS

I've got a build agent machine on an Amazon Linux AMI. It has the docker container jetbrains/teamcity-agent:latest. I can see the build agent in the TeamCity panel.
When I try to run a build with docker build/push commands, I get this error:
Cannot login to registry docker.io (new); cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?; exit code 1 (Step: docker build (Docker))
What's wrong with teamcity-agent?
I guess that jetbrains/teamcity-agent:latest is running as a user that does not have Docker permissions. Either the user that runs the commands in this image needs to be added to the docker group, or it needs to be given permission to the Docker socket /var/run/docker.sock via ACLs. Note that this is root-equivalent.
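A hedged sketch of what that can look like (the buildagent user name is an assumption about the image's default user, and the SERVER_URL value is a placeholder; verify both for your setup):
# on the Amazon Linux host: make sure the daemon socket is mounted into the agent container
docker run -d -e SERVER_URL="<your-teamcity-server>" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jetbrains/teamcity-agent:latest
# inside the container, as root: give the agent user access to the socket,
# either via a group that matches the socket's gid ...
groupadd -g "$(stat -c '%g' /var/run/docker.sock)" docker
usermod -aG docker buildagent
# ... or via an ACL (if setfacl is available in the image)
setfacl -m u:buildagent:rw /var/run/docker.sock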

Access host docker-machine from within container

I have an image that I'm using to run my CI/CD builds (using GitLab CE). I'd like to deploy my app doing something like this from within the container:
eval "$(docker-machine env manager)"
sudo docker stack deploy --compose-file docker-stack.yml web
However, I'd like the docker-machine to access machines defined on the host system since the container will be destroyed and I don't want to include access details in the image.
I've tried a few things
Accessing the Remote Host via docker-machine
Create the docker-machine on the host and mount the MACHINE_STORAGE_PATH so that it is available to the container
Connect to the remote docker-machine manually from within the container and setting the MACHINE_STORAGE_PATH equal to a mounted volume
Mounting the docker socket
In both cases, I can see the machine storage is persisted, but whenever I create a new container and run docker-machine ls none of the machines are listed.
Accessing the Remote Host via DOCKER_HOST
Forward the remote machine's docker port to the host docker port: docker-machine ssh manager-1 -N -L 2376:localhost:2376
export DOCKER_HOST=:2376
Tell docker to use the same certs that are used by docker-machine: export DOCKER_TLS_VERIFY=1 and export DOCKER_CERT_PATH=/Users/me/.docker/machine/machines/manager-1
Test with docker info
This gives me error during connect: Get https://localhost:2376/v1.26/info: x509: certificate signed by unknown authority
Any ideas on how I can perform a remote deployment from within a container?
Thanks
EDIT
Here is a diagram to try and help better communicate the scenario.
Don't use docker-machine for this.
Docker-machine stores files in $HOME/.docker/machine, so when you restart with a fresh copy of this folder, all previously defined machines will be removed. You could store this folder as a volume, but there's a much easier way for your purposes.
The solution is to mount the docker socket, and either as root or from a user with the same gid as the docker socket (note that group names themselves inside and outside the container may not match, so gid is important), run your docker ... commands as normal. You can skip the docker-machine eval completely since you are running the commands against the local docker socket.
If you need to run commands remotely, I find it easier to define the DOCKER_HOST and DOCKER_TLS_VERIFY variables manually rather than using docker-machine.
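For example, a sketch of setting those variables manually inside the CI container (the hostname, port, and cert path are placeholders; the certs would be supplied to the container, e.g. as a mounted volume or CI secret):
export DOCKER_HOST=tcp://manager-1.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/certs/manager-1
docker info   # verify the connection before deploying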
In case you want to communicate from your CI container to the Docker host you can simply mount the Docker socket when starting the CI container:
docker run -v /var/run/docker.sock:/var/run/docker.sock <gitlab-image>
Now you can run docker commands on the host from within the CI container.
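With the socket mounted, the deploy from the original question can then run directly against the host's daemon (assuming that host is a swarm manager):
docker stack deploy --compose-file docker-stack.yml web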
