Docker on Windows 10 Home - connect to the Docker engine from inside a Docker container

When creating a Jenkins Docker container, it is very useful to be able to connect to the Docker daemon. That way, I can run docker commands inside the Jenkins container.
For example, after starting the Jenkins Docker container, I would like to run 'docker exec -it container-id bash' and then 'docker ps'.
On Linux you can bind-mount /var/run/docker.sock. On Windows this does not seem possible; the alternative is to use 'named pipes'. So, in my docker-compose.yml file I tried to mount the named pipe.
version: '2'
services:
  jenkins:
    image: jenkins-docker
    build:
      context: ./
      dockerfile: Dockerfile_docker
    ports:
      - "8080:8080"
      - "50000:50000"
    networks:
      - jenkins
    volumes:
      - jenkins_home:/var/jenkins_home
      - \\.\pipe\docker_engine:\\.\pipe\docker_engine
#     - /var/run/docker.sock:/var/run/docker.sock
#     - /path/to/postgresql/data:/var/run/postgresql/data
#     - etc.
Starting docker-compose with this file, I get the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is
the docker daemon running?
How can I set up the docker-compose file so that I can use docker.sock (or Docker) inside the started container?
On Linux you can use something like volumes: /var/run/docker.sock:/var/run/docker.sock. This does not work in a Windows environment. When you add this folder (/var) to Oracle VM VirtualBox, it never gets an IP address. Many posts describe similar problems.

You can expose the daemon on tcp://localhost:2375 without TLS in the settings. This way you can configure Jenkins to use the Docker API instead of the socket. I encourage you to read this article by Nick Janetakis about "Understanding how the Docker Daemon and the Docker CLI work together".
And then there are several Docker plugins for Jenkins that allow this connection:
Also, you can find additional information in the Docker plugin documentation on wiki.jenkins.io:
def dockerCloudParameters = [
    connectTimeout:  3,
    containerCapStr: '4',
    credentialsId:   '',
    dockerHostname:  '',
    name:            'docker.local',
    readTimeout:     60,
    serverUrl:       'unix:///var/run/docker.sock', // <-- replace this with the tcp address
    version:         ''
]
EDIT 1:
I don't know if it is useful, but the Docker daemon on Windows is located at C:\ProgramData\docker according to the Docker daemon configuration doc.
EDIT 2:
You need to explicitly tell the container to use the host network, because you want to expose both Jenkins and the Docker API.
Following this documentation, you only have to add --network=host (or network_mode: 'host' in docker-compose) to your container/service. For further information, you can read this article to understand the purpose of this network mode.
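As a rough sketch of that combination (untested; it assumes the "Expose daemon on tcp://localhost:2375 without TLS" option is enabled in the Docker settings, as mentioned above), the Jenkins service could point its Docker client at the exposed TCP endpoint instead of the socket:
version: '2'
services:
  jenkins:
    image: jenkins-docker
    network_mode: 'host'    # share the host network, as described above
    environment:
      # assumption: the daemon is exposed on tcp://localhost:2375 without TLS
      - DOCKER_HOST=tcp://localhost:2375
    volumes:
      - jenkins_home:/var/jenkins_home
volumes:
  jenkins_home: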

My first try was to start a Docker environment using the "Docker Quickstart Terminal". This is a good solution when running Docker commands within that environment.
But installing a complete CI/CD Jenkins environment via Docker means that WITHIN the Jenkins Docker container you need access to the Docker daemon. After trying many solutions and reading many posts, this did not work. @Paul Rey, thank you very much for trying all kinds of routes.
A good solution is to set up an Ubuntu virtual machine via Oracle VM VirtualBox. It is then VERY IMPORTANT to install Docker following this official description.
Before installing Docker, of course, you need to install curl, Git, etc.

Related

The TeamCity agent doesn't see Docker when using Qodana

I'm running the TeamCity server with 2 agents via Docker (version 2022.10.2-linux) on Windows 11. In the agent parameters tab, I can see the Docker version: docker.version = 20.10.12. However, when I create a pipeline for my Kotlin (1.8) application with Gradle (7.6), the build cannot run because of the following unmet requirement: docker.server.version exists. I've searched some places on the internet but I don't know how to solve it. Could someone help?
I assume some steps of your build configuration need Docker, so that build configuration should be executed on an agent with Docker installed. Adding Docker-dependent steps adds the implicit requirement that the agent configuration parameter docker.server.version exists. This parameter is set automatically during the agent's startup if Docker is available. You run your TeamCity agent in a Docker container, but there is no Docker inside that container. Basically, you have two options:
You could use DIND (Docker in Docker), i.e. enable Docker inside the TeamCity agent image. This is the easiest approach (see the sketch after the compose example below).
Mount the Docker socket, the docker binary and (if needed) the docker-compose binary from the host into the TeamCity agent container; however, I'm not sure whether this works on Windows. In addition, it requires some tricky mounts of the agent's work and temp folders, because Docker steps will be executed on the host outside the TeamCity agent container, so the paths to the work and temp folders on the host and inside the container should be equal. Example docker-compose.yml for Linux:
version: "2"
services:
teamcity-agent:
image: jetbrains/teamcity-agent
restart: always
privileged: true
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "/usr/bin/docker:/usr/bin/docker"
- "/bin/docker-compose:/bin/docker-compose"
- "/opt/buildagent/work:/opt/buildagent/work"
- "/opt/buildagent/temp:/opt/buildagent/temp"
- "/opt/buildagent/conf:/opt/buildagent/conf"

Docker compose within Gitlab pipeline / docker in docker and volume sharing

The project I am working on consists of multiple components, written in C#/.NET 6, and deployed as Docker containers on a Linux host. Each component has its own Git repository on GitLab, and its GitLab pipeline builds the Docker image and pushes it to the GitLab container registry. For instance, one component is called "runtime", another one "services", etc.
All Docker containers are defined in a docker-compose.yml file: the suite is started with a "docker compose up" command.
I created an 'integration test' project to check the data exchange between the running containers. I have a lot of complex Linux shell scripts to prepare the mock data and so on for the tests. I have a bunch of tests written in Python running in a Python environment on the Linux host, and also some other tests written in C# running in a dedicated Docker container.
I actually have different test scenarios or test groups: each group has its own docker-compose-integration_$group.yml file to set up e.g. the mocked services.
All of this is run with
docker compose -f docker-compose.yml -f docker-compose-integration_$group.yml up -d
In multiple services defined in the docker compose file, I set up Docker volumes to be able to check the data generated by the containers within my tests. For instance, the following is an extract of my docker-compose-integration_4.yml file for the 4th group of tests, which uses the tests written in C# running in a dedicated 'integration-tests-dotnet' container:
runtime:
  extends:
    file: ./docker-compose.yml
    service: runtime
  volumes:
    - ./runtime/config_4.ini:/etc/runtime/config.ini
    - ${OUTPUT:-./output}/runtime/:/runtime/output/
integration-tests-dotnet:
  volumes:
    # share config for current group, same as for runtime.
    - ./runtime/config_4.ini:/etc/runtime/config.ini
    # share output folders from runtime and services.
    - ./testData/:/opt/testData/
    - ./output/runtime/:/opt/runtime/output/
    - ./output/services/:/opt/services/output/
    # share report file generated from the tests.
    - ./output/integration/:/app/output/
Everything runs nicely on a Linux machine, on WSL2 on my Windows PC, or on the Mac of a colleague.
The integration-test project has its own GitLab pipeline.
Now, we would like to be able to run the integration tests within the GitLab pipeline, i.e. run "docker compose" from a GitLab runner.
I already have a 'docker in docker' capable runner, and added such a job to my .gitlab-ci.yml:
run-integration-tests:
  stage: integration-tests
  variables:
    DOCKER_TLS_CERTDIR: ''
    DOCKER_HOST: tcp://localhost:2375/
  services:
    - name: docker:20.10.22-dind
      command: ["--tls=false"]
  tags:
    - dind
  image: $CI_REGISTRY_IMAGE:latest
This job starts properly, BUT fails at the volume sharing.
The question "Docker in Docker cannot mount volume" already raised this issue with volume sharing when using the shared Docker socket: the Docker volume is shared from the HOST (i.e. my runner). But the data are unknown to my host: they are only meant to be shared between the integration-test container and the other containers.
As Olivier wrote in that question, for a
host: H
docker container running on H: D
docker container running in D: D2
the docker compose with volume sharing is equivalent to
docker run ... -v <path-on-D>:<path-on-D2> ...
while only something equivalent to the following can actually run:
docker run ... -v <path-on-H>:<path-on-D2> ...
But I have no data on H to share; I just want to share data between D and D2!
Does the same "volumes are shared from the HOST" limitation apply to my docker-in-docker runner as it does to the shared socket?
If so, it seems I need to rework the infrastructure and the concept of volume sharing used here.
Some suggest Docker data volume containers.
Maybe I should make more use of named volumes (see the sketch below).
Maybe tmpfs volumes? I need to check the data AFTER some containers have exited, but I don't know whether a container that is still present but in "exited" status still has its tmpfs volume available.
Is my analysis correct?
Any other suggestions?
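As an illustration of the named-volume idea mentioned above (a sketch only; the volume name is hypothetical), the bind mounts could be replaced by named volumes that live inside the dind daemon, so nothing has to exist on the host:
# hypothetical fragment of docker-compose-integration_4.yml using a named volume
services:
  runtime:
    volumes:
      - runtime-output:/runtime/output/
  integration-tests-dotnet:
    volumes:
      - runtime-output:/opt/runtime/output/
volumes:
  runtime-output:    # created by whichever Docker daemon the compose stack runs against (here: the dind service)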

Injecting host network into container in CircleCI

I have this CircleCI configuration.
version: 2
jobs:
  build:
    docker:
      - image: docker:18.09.2-git
      - image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
        name: elasticsearch
    working_directory: ~/project
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - run:
          name: test
          command: |
            docker run --rm \
              --network host \
              byrnedo/alpine-curl \
              elasticsearch:9200
I'm looking for a way to allow my new container to access the Elasticsearch port 9200. With this configuration, elasticsearch is not even a known host name.
Creating an extra network is not possible; I get the error message "container sharing network namespace with another container or host cannot be connected to any other network".
The host network seems to work only in the primary image.
How could I do this?
That will not work. Containers started during a build via the docker run command run on a remote Docker engine. They cannot talk to the containers running as part of the executor via TCP, since they are isolated; only docker exec works.
The solution will ultimately depend on your end goal, but one option might be to remove the Elasticsearch image/container from the executor and use Docker Compose to get both images to talk to each other within the build.
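A minimal sketch of that Compose approach (the compose file itself is not from the answer; the image tags are reused from the question and the single-node setting is an assumption):
# hypothetical docker-compose.yml run against the remote Docker engine
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    environment:
      - discovery.type=single-node    # assumption: a single-node test instance
  curl-test:
    image: byrnedo/alpine-curl
    depends_on:
      - elasticsearch
    # services on the same compose network resolve each other by service name
    command: ["elasticsearch:9200"]
Both services end up on the same Compose network, so the elasticsearch host name resolves without needing the host network.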

Calling docker stack deploy on a docker host from within a Jenkins container

On my OS X host, I'm using Docker CE (18.06.1-ce-mac73 (26764)) with Kubernetes enabled and using Kubernetes orchestration. From this host, I can run a stack deploy to deploy a container to Kubernetes using this simple docker-compose file (kube-compose.yml):
version: '3.3'
services:
  web:
    image: dockerdemos/lab-web
    volumes:
      - "./web/static:/static"
    ports:
      - "9999:80"
and this command-line run from the directory containing the compose file:
docker stack deploy --compose-file ./kube-compose.yml simple_test
However, when I attempt to run the same command from my Jenkins container, Jenkins returns:
this node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again
I do not want the docker client in the Jenkins container to be initialized for a swarm since I'm not using Docker swarm on the host.
The Jenkins container is defined in a docker-compose file that includes a volume mount of the Docker host's socket endpoint:
version: '3.3'
services:
  jenkins:
    # contains embedded docker client & blueocean plugin
    image: jenkinsci/blueocean:latest
    user: root
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - ./jenkins_home:/var/jenkins_home
      # run Docker from the host system when the container calls it.
      - /var/run/docker.sock:/var/run/docker.sock
      # root of simple project
      - .:/home/project
    container_name: jenkins
I have also followed this guide to proxy requests to the docker host with socat: https://github.com/docker/for-mac/issues/770 and here: Docker-compose: deploying service in multiple hosts.
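For reference, the socat proxy from that guide typically looks something like the following service (a sketch, not the exact snippet from the linked issue; the listening port matches the 1234 used in the Jenkinsfile below):
  # hypothetical addition to the docker-compose file above
  socat:
    image: alpine/socat
    command: tcp-listen:1234,fork,reuseaddr unix-connect:/var/run/docker.sock
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "1234:1234"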
Finally, I'm using the following Jenkins definition (Jenkinsfile) to call docker stack deploy on my host. Jenkins has the Jenkins Docker plug-in installed:
node {
  checkout scm
  stage ('Deploy To Kube') {
    docker.withServer('tcp://docker.for.mac.localhost:1234') {
      sh 'docker stack deploy app --compose-file /home/project/kube-compose.yml'
    }
  }
}
I've also tried changing the withServer signature to:
docker.withServer('unix:///var/run/docker.sock')
and I get the same error response. I am, however, able to telnet to the docker host from the Jenkins container so I know it's reachable. Also, as I mentioned earlier, I know the message is saying to run swarm init, but I am not deploying to swarm.
I checked the version of the docker client in the Jenkins container and it is the same version (Linux variant, however) as I'm using on my host:
Docker version 18.06.1-ce, build d72f525745
Here's the code I've described: https://github.com/ewilansky/localstackdeploy.git
Please let me know if it's possible to do what I'm hoping to do from the Jenkins container. The purpose for all of this is to provide a simple, portable demonstration of a pipeline and deploying to Kubernetes is the last step. I understand that this is not the approach that would be taken anywhere outside of a local development environment.
Here is an approach that's working well for me until the Jenkins Docker plug-in or the Kubernetes docker stack deploy command can support the remote deployment scenario I described.
I'm now using the Kubernetes client, kubectl, from the Jenkins container. To minimize the size increase of the Jenkins container, I added just the Kubernetes client to the jenkinsci/blueocean image, which is built on Alpine Linux. This Dockerfile shows the addition:
FROM jenkinsci/blueocean
USER root
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
RUN mkdir /root/.kube
COPY kube-config /root/.kube/config
I took this approach, which added ~100 MB to the image size, rather than installing the Alpine Linux Kubernetes package, which almost doubled the size of the image in my testing. Granted, the Kubernetes package has all Kubernetes components, but all I needed was the Kubernetes client. This is similar to the requirement that the Docker client be resident in the Jenkins container in order to run Docker commands on the host.
Notice in the Dockerfile that there is a reference to the Kubernetes config file:
kube-config /root/.kube/config
I started with the Kubernetes configuration file on my host machine (the computer running Docker for Mac). I believe that if you enable Kubernetes in Docker for Mac, the Kubernetes client configuration will be present at ~/.kube/config. If not, install the Kubernetes client tools separately. In the Kubernetes configuration file that you will copy over to the Jenkins container via the Dockerfile, just change the server value so that the Jenkins container points at the Docker for Mac host:
server: https://docker.for.mac.localhost:6443
If you're using a Windows machine, I think you can use docker.for.win.localhost. There's a discussion about this here: https://github.com/docker/for-mac/issues/2705 and other approaches described here: https://github.com/docker/for-linux/issues/264.
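For orientation, the relevant part of such a kube-config typically looks like this (a sketch; the cluster, context and user names are placeholders, and only the server line matters here):
apiVersion: v1
kind: Config
clusters:
  - name: docker-for-desktop-cluster            # placeholder name
    cluster:
      server: https://docker.for.mac.localhost:6443   # point at the Docker for Mac host
contexts:
  - name: docker-for-desktop                    # placeholder context
    context:
      cluster: docker-for-desktop-cluster
      user: docker-for-desktop
users:
  - name: docker-for-desktop
    user: {}                                    # credentials omitted in this sketch
current-context: docker-for-desktop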
After recomposing the Jenkins container, I was then able to use kubectl to create a deployment and service for my app, which is now running in the Docker for Mac Kubernetes host. In my case, here are the two commands I added to my Jenkinsfile:
stage ('Deploy To Kube') {
  sh 'kubectl create -f /kube/deploy/app_set/sb-demo-deployment.yaml'
}
stage('Configure Kube Load Balancer') {
  sh 'kubectl create -f /kube/deploy/app_set/sb-demo-service.yaml'
}
There are loads of options for Kubernetes container deployments. In my case, I simply needed to deploy my web app (with replicas) behind a load balancer. All of that is defined in the two YAML files called by kubectl. This is a bit more involved than docker stack deploy, but achieves the same end result.
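The two YAML files themselves are not shown above; a generic sketch of what such a deployment and load-balancer service typically contain (all names, labels and ports here are hypothetical) would be:
# sb-demo-deployment.yaml (illustrative only)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sb-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sb-demo
  template:
    metadata:
      labels:
        app: sb-demo
    spec:
      containers:
        - name: web
          image: dockerdemos/lab-web    # assumption: reusing the image from the compose file above
          ports:
            - containerPort: 80
# sb-demo-service.yaml (illustrative only)
apiVersion: v1
kind: Service
metadata:
  name: sb-demo
spec:
  type: LoadBalancer
  selector:
    app: sb-demo
  ports:
    - port: 9999
      targetPort: 80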

Jenkins inside docker loses configuration when container is restarted

I have followed this guide https://hub.docker.com/r/iliyan/jenkins-ci-php/ to download the Docker image with Jenkins.
When I start my container using the docker start CONTAINERNAME command, I can access Jenkins at localhost:8080.
The problem comes up when I change the Jenkins configuration and restart Jenkins using docker stop CONTAINERNAME and docker start CONTAINERNAME: my Jenkins doesn't contain any of my previous configuration changes.
How can I persist the Jenkins configuration?
You need to mount the Jenkins configuration as a volume; the -v flag will do just that for you (you can ignore the --privileged flag in my example unless you plan on building Docker images inside your Jenkins container).
docker run --privileged --name='jenkins' -d -p 6999:8080 -p 50000:50000 -v /home/jan/jenkins:/var/jenkins_home jenkins:latest
The -v flag will mount your /var/jenkins_home outside your container in /home/jan/jenkins, maintaining it between rebuilds.
--name gives you a fixed name for the container to start / stop it with.
Then next time you want to run it, simply call
docker start jenkins
My understanding is that the init script
/sbin/tini -- /usr/local/bin/jenkins.sh
is resetting the Jenkins configuration on startup within the folder provided through the
JENKINS_HOME env var,
whether it is mounted outside the Docker VM or not.
It is, however, possible to store the configuration on GitHub using the
configure / "Configure System" / "SCM Sync configuration" / Git
section.
See a possible detailed configuration here.
You can use this docker-compose file:
version: '3.1'
services:
jenkins:
image: jenkins:latest
container_name: jenkins
restart: always
environment:
TZ: GMT
volumes:
- ./jenkins_host:/var/jenkins_home
ports:
- 8080:8080
tty: true
You only need to share the Jenkins volume ./jenkins_host:/var/jenkins_home with the host folder.
Besides the obvious, like run parameters that wipe the image (which you should disable), you can do a few things:
use docker commit and reuse the committed container
mount the parts you write to on the local file system with Docker volumes
my favorite: use the command
docker container restart containername
Depending on your needs you can pick one.
I use the latter, for example, when testing Jenkins plugins, and it retains the data inside.
A source for the latter, which is also useful for updates:
https://jimkang.medium.com/how-to-start-a-new-jenkins-container-and-update-jenkins-with-docker-cf628aa495e9
