Build/push image from Jenkins running in Docker

I have two docker containers - one running jenkins and one running docker registry. I want to build/push images from jenkins to docker registry. How do I achieve this in an easy and secure way (meaning no hacks)?

The easiest approach is to make sure the Jenkins container and the registry container are on the same host. Then you can mount the Docker socket into the Jenkins container and use the dockerd from the host machine to build and push the image to the registry. /var/run/docker.sock is the Unix socket that dockerd listens on.
By mounting the Docker socket, any docker command you run from that container executes as if it were run on the host.
$ docker run -dti --name jenkins -v /var/run/docker.sock:/var/run/docker.sock jenkins:latest
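With the socket mounted you can verify the setup from inside the container. This assumes the docker CLI is also installed in the image (the official Jenkins image does not ship it); if it is, the command below lists the host's containers, proving the mount works:
$ docker exec jenkins docker ps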

If you use pipelines, you can install the Docker Pipeline plugin (https://plugins.jenkins.io/docker-workflow),
create a credentials resource in Jenkins to access the Docker registry, and do this in your pipeline:
stage("Build Docker image") {
steps {
script {
docker_image = docker.build("myregistry/mynode:latest")
}
}
}
stage("Push images") {
steps {
script {
withDockerRegistry(credentialsId: 'registrycredentials', url: "https://myregistry") {
docker_image.push("latest")
}
}
}
}
Full example at: https://pillsfromtheweb.blogspot.com/2020/06/build-and-push-docker-images-with.html
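If you prefer plain shell steps over the Docker Pipeline DSL, the same build and push boils down to these docker CLI calls (a sketch reusing the placeholder registry, image, and credential names from above; REGISTRY_USER and REGISTRY_PASS are hypothetical environment variables):
docker build -t myregistry/mynode:latest .
echo "$REGISTRY_PASS" | docker login -u "$REGISTRY_USER" --password-stdin myregistry
docker push myregistry/mynode:latest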

I use this type of workflow in a Jenkins docker container, and the good news is that it doesn't require any hackery to accomplish. Some people use "docker in docker" to accomplish this, but I can't help you if that is the route you want to go as I don't have experience doing that. What I will outline here is how to use the existing docker service (the one that is running the jenkins container) to do the builds.
I will make some assumptions since you didn't specify what your setup looks like:
you are running both containers on the same host
you are not using docker-compose
you are not running docker swarm (or swarm mode)
you are using docker on Linux
This can easily be modified if any of the above conditions are not true, but I needed a baseline to start with.
You will need the following:
access from the Jenkins container to docker running on the host
access from the Jenkins container to the registry container
Prerequisites/Setup
Setting that up is pretty straightforward. You can give Jenkins access to the Docker service running on the host in one of two ways: 1) over TCP, or 2) via the Docker Unix socket. If you already have Docker listening on TCP, you would simply take note of the host's IP address and the Docker TCP port number (2375 or 2376, depending on whether or not you use TLS), along with any TLS configuration you may have.
If you prefer not to enable the Docker TCP service, it's slightly more involved, but you can use the Unix socket at /var/run/docker.sock. This requires you to bind-mount the socket into the Jenkins container, which you do by adding the following to the run command when you start Jenkins:
-v /var/run/docker.sock:/var/run/docker.sock
You will also need to create a jenkins user on the host system with the same UID as the jenkins user in the container and then add that user to the docker group.
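For example (a sketch, assuming the container's jenkins user has UID 1000, which is the default in the official image; verify with docker exec jenkins id -u jenkins):
$ sudo useradd --uid 1000 --no-create-home jenkins
$ sudo usermod -aG docker jenkins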
Jenkins
You'll now need a Docker build/publish plugin like the CloudBees Docker Build and Publish plugin or some other plugin depending on your needs. You'll want to note the following configuration items:
Docker URI/URL will be something like tcp://<HOST_IP>:2375 or unix:///var/run/docker.sock depending on how we did the above setup. If you use TCP and TLS for the docker service you will need to upload the TLS client certificates for your Jenkins instance as "Docker Host Certificate Authentication" to your usual credentials section in Jenkins.
Docker Registry URL will be the URL to the registry container, NOT localhost. It might be something like http://<HOST_IP>:32768 or similar depending on your configuration. You could also link the containers, but that doesn't easily scale if you move the containers to separate hosts later. You'll also want to add the credentials for logging in to your registry as a username/password pair in the appropriate credentials section.
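A quick way to find that registry URL: if the registry runs in a container named registry (an assumed name) publishing its default port 5000, docker port shows which host port it was mapped to:
$ docker port registry 5000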

I've done this exact setup, so I'll give you a "tl;dr" version of it, as getting into depth here is way outside the scope of something for Stack Overflow:
Install a PID 1 handler in the container (e.g., tini). You need this to handle signaling and process reaping. This will be your entrypoint.
Install a process control service (e.g., supervisord). Generally, running multiple services in one container is not recommended, but in this particular case your options are very limited.
Install Java/Jenkins package or base your image from their DockerHub image.
Add a dind (Docker-in-Docker) wrapper script. This is the one I based my config on.
Create the configuration for the process control service to start Jenkins (as jenkins user) and the dind wrapper (as root).
Add jenkins user to docker group in Dockerfile
Run the docker container with the --privileged flag (DinD requires it); a minimal run command is sketched after this list.
You're done!
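For that last step, a minimal sketch (myjenkins-dind is a hypothetical name for an image built from the steps above):
$ docker run -d --privileged -p 8080:8080 -p 50000:50000 --name jenkins myjenkins-dind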

Thanks for your input! I came up with this after some experimentation.
docker run -d \
-p 8080:8080 \
-p 50000:50000 \
--name jenkins \
-v "$(pwd)"/data/jenkins:/var/jenkins_home \
-v /Users/.../.docker/machine/machines/docker:/Users/.../.docker/machine/machines/docker \
-e DOCKER_TLS_VERIFY="1" \
-e DOCKER_HOST="tcp://192.168.99.100:2376" \
-e DOCKER_CERT_PATH="/Users/.../.docker/machine/machines/docker" \
-e DOCKER_MACHINE_NAME="docker" \
johannesw/jenkins-docker-cli

Related

GitLab Docker-in-Docker: how does Docker client in job container discover Docker daemon in `dind` service container?

I have a GitLab CI/CD pipeline that is being run on GKE.
One of the jobs in the pipeline uses a Docker-in-Docker service container so that Docker commands can be run inside the job container:
my_job:
  image: docker:20.10.7
  services:
    - docker:dind
  script:
    - docker login -u $USER -p $PASSWORD $REGISTRY
    - docker pull ${REGISTRY}:$TAG
    # ...more Docker commands
It all works fine, but I would like to know why. How does the Docker client in the my_job container know that it needs to communicate with the Docker daemon running inside the Docker-in-Docker service container, and how does it know the host and port of this daemon?
There is no 'discovery' process, really. The docker client must be told about the daemon host through configuration (e.g., DOCKER_HOST). Otherwise, the client will assume a default configuration:
if the DOCKER_HOST configuration is present, it is used. Otherwise:
if the default socket (unix:///var/run/docker.sock) is present, the default socket is used;
if the default socket is NOT present AND a TLS configuration is NOT detected, tcp://docker:2375 is used;
if the default socket is NOT present AND a TLS configuration IS detected, tcp://docker:2376 is used.
You can see this logic explicitly in the entrypoint script of the official docker image.
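In shell form, the fallback amounts to something like this (a paraphrase of the rules above, not the literal entrypoint script; the real script's TLS detection differs and is simplified here to checking DOCKER_TLS_VERIFY):
# paraphrase of the client's fallback rules, simplified
if [ -z "${DOCKER_HOST:-}" ]; then
    if [ -S /var/run/docker.sock ]; then
        : # the default unix socket exists; the client uses it as-is
    elif [ -n "${DOCKER_TLS_VERIFY:-}" ]; then
        export DOCKER_HOST='tcp://docker:2376'
    else
        export DOCKER_HOST='tcp://docker:2375'
    fi
fi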
The docker client can be configured a couple ways, but the most common way in GitLab CI and with the official docker image is through the DOCKER_HOST environment variable. If you don't see this variable in your YAML, it may be set as a project or group setting or may be set on the runner configuration, or is relying on default behavior described above.
It's also possible, depending on your runner configuration (config.toml), that your job is not using the docker:dind service daemon at all. For example, if your runner has a volumes specification mounting the Docker socket (/var/run/docker.sock) into the job container and there is no DOCKER_HOST (or equivalent) configuration, then your job is probably not even using the service, because it would use the mounted socket instead (per the configuration logic above). You can run docker info in your job to be sure of this one way or the other.
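For example, the daemon's reported hostname and version tell you which side answered (Name and ServerVersion are standard docker info format fields):
docker info --format '{{.Name}} (server {{.ServerVersion}})'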
Additional references:
official docker image entrypoint logic
Securing the daemon socket
GitLab docker in docker docs
Docker "context"

Why would it be necessary to give a docker container access to the docker socket?

I am reading a docker run command that maps /var/run/docker.sock, like this:
docker run -it --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock theimage /bin/bash
Why would the container would need access to the socket? (this article says it is a very bad idea.)
What would be one case where the container need access to the socket?
It is not necessary unless the container itself needs to invoke the Docker daemon, for example, in order to create and run an inner container.
For example, in my CI chain, Jenkins builds a Docker image in which the build and test process runs. Inside it, we need to create an image to test and then submit it to K8s. In such a situation, when Jenkins builds the pipeline container, it passes it the Docker socket so that the container can create other containers using the host server's Docker daemon.
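As a minimal demonstration of the pattern, this runs the CLI-only variant of the official docker image with the host socket mounted; the inner docker ps lists the host's containers:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps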

Pycharm use Docker Container Python as Remote Interpreter

I am trying to use the python in a docker container on a remote machine as the interpreter in Pycharm. Since that is a mouthful, here is a diagram:
There is a Jupyter Notebook running in the container, which I am able to connect to through my local browser (although this is just for testing the connection). The command I am using to launch the Docker container is
docker run --runtime=nvidia -it --rm --shm-size=2g -v /home/timo/storage:/storage -v /etc/passwd:/etc/passwd -v /etc/group:/etc/group --ulimit memlock=-1 -p 8888:8888 -p 7722:22 --ipc=host latest:latest
I can forward the port 8888 which the Jupyter notebook is running on with ssh -L 8888:0.0.0.0:8888 BBB.BBB.BBB.BBB and thus use it on the local machine. But I don't much like using Jupyter for developing and would like to use the Python interpreter in the Docker Container in Pycharm.
When I select "Add Python Interpreter" in PyCharm, I get several options. The documentation for PyCharm suggests using the "Add Python Interpreter/Docker" tool. However, the documentation doesn't say how to set up the Docker container and the connections if the Docker daemon is on a remote machine.
So my questions are: should I use a Unix or a TCP socket to connect to my remote docker? Or should I somehow forward all the relevant ports from the container and use the "SSH Interpreter" option? And if so, how do I set this all up? Am I setting up my Docker Container properly in the first place?
I think I have trawled through every forum and online resource, over the last two days, but have not come any closer to getting this to work. I have also tried to get this to work in Spyder, but to no avail either. So any advice is very appreciated!
Many thanks!
Thank you for depicting the dilemma so poignantly and clearly in your cartoon :-). My colleague and I were trying to do something similar and what ultimately worked beautifully was creating an SSH config directly to the Docker container jumping from the remote machine, and then setting it as a remote SSH interpreter so that pycharm doesn't even realize it's a Docker container. It also works well for vscode.
set up the ssh service in the docker container (a subset of the steps in https://dev.to/s1ntaxe770r/how-to-setup-ssh-within-a-docker-container-i5i; the port 22 stuff wasn't needed)
docker exec -it <container> bash: opens an interactive root prompt in the container
apt-get install openssh-server
service ssh start
confirm with service ssh status -> * sshd is running
determine IP and test SSHing from remote machine into container (adapted from https://phoenixnap.com/kb/how-to-ssh-into-docker-container, steps 2 and 3)
from normal command prompt on remote machine (not within container): docker inspect -f "{{ .NetworkSettings.IPAddress }}" <container> to get container IP
test: ping -c 3 <container_ip>
ssh: ssh <container_ip>; this should drop you into the container as your user, provided the container is configured properly (the docker run cmd has -v /etc/passwd:/etc/passwd:ro etc.). It may ask for a password. Note: if you later do this for a different container that is assigned the same IP, you will get a warning and may need to delete the previous key from known_hosts; just follow the instructions in the warning.
test SSH from local machine
if you don't have it set up already, set up passwordless ssh key-based authentication to the remote machine with the docker container
make SSH command that uses your remote machine as a jump server to the container: ssh -J <remote_machine> <container_ip>, as described in https://wiki.gentoo.org/wiki/SSH_jump_host; if successful you should drop into the container just as you did from the remote machine
save this setup in your ~/.ssh/config; follow the ProxyJump Example from https://wiki.gentoo.org/wiki/SSH_jump_host (a sketch of the result is shown after this list)
test config with ssh <container_host_name_defined_in_ssh_config>; should also drop you into interactive container
configure pycharm (or vscode or any IDE that accepts remote SSH interpreter)
Preferences -> Project -> Python Interpreter -> Add -> SSH Interpreter -> New server configuration
host: <container_host_name_defined_in_ssh_config>
port: 22
username: <username_on_remote_server>
select the interpreter; you can navigate using the folder icon, which will walk you through paths within the docker container, or you can enter the result of which python run inside the container
follow pycharm prompts
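For reference, the resulting ~/.ssh/config might look roughly like this (the host aliases are hypothetical placeholders; BBB.BBB.BBB.BBB is the remote machine from the question):
Host remote-machine
    HostName BBB.BBB.BBB.BBB
    User <username_on_remote_server>

Host my-container
    HostName <container_ip>
    User <username_on_remote_server>
    ProxyJump remote-machine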

Ansible through docker, docker host in the inventory

Using this docker image from Docker Hub, I'm trying to run an ansible playbook that would configure the machine on which the container is running.
As an example, I run this:
docker run --net="host" -v <path_inventory>:/inventory -v <path_playbook>:/playbook.yml williamyeh/ansible:ubuntu16.04 ansible-playbook -vvvv -i /inventory /playbook.yml
With this options, I can ping localhost and the inventory and playbook are both accessible.
The inventory is configured to use a local connection:
[executors]
127.0.0.1
[executors:vars]
ansible_connection=local
ansible_user=<my_user_in_docker_host>
ansible_become=True
The group executors is the one referenced from the playbook.
I see that the playbook is trying to connect as root (which is what I get by default when I attach to the container). Specifying -u when running the container doesn't seem to get along with Ansible.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
... followed by errors complaining that commands are not available, after a successful local connection. That makes no sense to me, given that both root and non-root users can execute them.
Any idea?
This image is designed to serve as a base for other images, and to take advantage of Ansible as a way of provisioning the requirements of the image rather than using the Dockerfile only.
This is stated in the documentation of the docker image:
Used mostly as a base image for configuring other software stack on
some specified Linux distribution(s).
Think of it as a base image to perform CI tasks in a lighter way than using other options (VMs, Vagrant...).
Take into account that the good thing about Docker is that it isolates the host from the containers, so you cannot reach the host's files from the containers (except through whatever volumes you bind). Otherwise, it would be a security problem. See here.
I was able to use ansible to configure the host from within a docker container. However, I didn't use a docker host network, but a docker bridge network.
When you start an ansible playbook in a container, then localhost will be the localhost of the container itself. This is just fine, because local_action(s) in ansible run in the container itself and remote actions on the host.
This is the modified version of your docker run example:
docker run -v <path_inventory>:/inventory -v <path_playbook>:/playbook.yml williamyeh/ansible:ubuntu16.04 ansible-playbook -vvvv -i /inventory /playbook.yml
You shouldn't configure the inventory to use localhost or a local connection, but rather the host machine, connecting via ssh. This is an example:
[executors]
<my_host_ip>
[executors:vars]
ansible_connection=ssh
ansible_user=<my_host_user>
ansible_become=True
Assuming your docker container is running in the default bridge, you can find my_host_ip with the following command:
ip addr show docker0
The container will connect with ssh to the docker interface on the host.
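If you want just the address by itself, here is a one-liner sketch (assuming the iproute2 ip tool used above, plus awk):
ip -4 addr show docker0 | awk '/inet / {sub(/\/.*/, "", $2); print $2}'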
Some additional hints:
ssh needs to listen on the docker0 interface
iptables/nftables needs to provide ssh access from the ansible container to the docker0 interface
Ansible uses keys to connect via ssh by default. By using the -k and/or the -K parameters of the ansible-playbook command, you can provide a password instead.

Use certificates in Docker container of Jenkins

I've started Jenkins in a Docker container by mounting the Docker socket. So now I'm able to perform docker commands in my Jenkins container. But the specific folders of Docker aren't in my container (I just mounted the socket).
Now I need to use certs to access my Docker registry. The path of the certs needs to be: /etc/docker/certs.d/myregistry.com:5000/ca.crt
But this does not exist in my Jenkins which just contains the bin and run folders of Docker.
What's the best way to connect the certificates for my Jenkins?
The way I'm doing it (for my SSL web server, but I think the principle is the same) is simply mounting the cert directory with -v.
E.g.:
docker run -v /etc/pki:/etc/pki:ro -p 443:443 mycontainer
It seems to work quite nicely (although it helps loads if you can wildcard the hostname, so your container doesn't need to "know" which host it's running on).
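One wrinkle for your setup: when docker commands go through the mounted host socket, the TLS handshake with the registry is performed by the host daemon, so the ca.crt must exist under /etc/docker/certs.d/ on the host rather than in the container. If tools inside the Jenkins container also need the certificate, the same -v approach works; mounting the parent certs.d directory (a sketch) sidesteps the ':' in the registry path, which would confuse -v's colon-separated syntax:
docker run -d --name jenkins -v /var/run/docker.sock:/var/run/docker.sock -v /etc/docker/certs.d:/etc/docker/certs.d:ro jenkins:latest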
