I am running a docker registry locally on my machine, and I can pull my image from it successfully:
docker pull 192.168.174.205:5001/myimg:latest
I am also running a Jenkins container on my machine, but Jenkins cannot pull any image from the local registry. I use a Blue Ocean container (on the same machine) to start a pipeline, and it outputs:
+ docker pull 192.168.174.205:5001/insureio:latest
Error response from daemon: Get https://192.168.174.205:5001/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
script returned exit code 1
TMI
Specs
Docker version 1.13.1, build 4ef4b30/1.13.1
Jenkins ver. 2.204.2
host CentOS Linux 7 (Core)
Reference
I have been working from the instructions at:
https://docs.docker.com/registry/deploying/
https://jenkins.io/doc/book/pipeline/docker/#custom-registry
Settings
My /etc/docker/daemon.json file reads {"insecure-registries" : ["192.168.174.205:5001"]}.
The local registry gives a 200 response:
curl http://192.168.174.205:5001/v2/_catalog
{"repositories":["mying"]}
My pipeline script is:
node {
    stage('Build') {
        docker.withRegistry('http://192.168.174.205:5001') {
            docker.image('insureio:latest').inside('') {
                sh 'make test'
            }
        }
    }
}
Since both Jenkins and your registry are containers, Jenkins resolves the 192.168.174.205 address from inside its own network namespace.
If you're just trying things out, I would suggest running docker inspect <your registry container> | grep -i ipaddress to find the registry container's IP address (by default it will be in the 172.17.x.x range) and configuring your pipeline to use that address.
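The lookup above can be scripted. A minimal sketch, assuming the registry container is named "registry" (the name and the fallback address are assumptions; adjust them to your setup):

```shell
#!/bin/sh
# Sketch: discover the registry container's bridge IP so other containers
# (e.g. Jenkins) can reach it. Container name "registry" is an assumption.
REGISTRY_CONTAINER=registry
REGISTRY_PORT=5001

# Ask Docker for the container's IP; fall back to a typical
# default-bridge address when docker (or the container) is unavailable.
REGISTRY_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$REGISTRY_CONTAINER" 2>/dev/null || true)
[ -n "$REGISTRY_IP" ] || REGISTRY_IP=172.17.0.2

REGISTRY="${REGISTRY_IP}:${REGISTRY_PORT}"
echo "registry address as seen from other containers: $REGISTRY"
```

You would then point the pipeline's docker.withRegistry(...) at that address. A cleaner long-term fix is to put the Jenkins and registry containers on the same user-defined Docker network and address the registry by container name.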
Related
Introduction: I'm trying to build an Angular app on the slave. In order not to install Node.js on the slave, I use Docker on the slave to build and clean up.
The problem
Running docker commands directly on the slave as the Jenkins user works well:
jenkins@ubuntu_server:~$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete
...
When the Jenkinsfile tries to run docker commands on the slave, it fails with a permission denied error.
+ docker inspect -f . node:current-alpine3.10
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/node:current-alpine3.10/json: dial unix /var/run/docker.sock: connect: permission denied
Here is the relevant part of the Jenkinsfile ("ubuntu_server" is a label of the slave):
pipeline {
    agent {
        docker {
            image 'node:current-alpine3.10'
            label 'ubuntu_server'
            args '-v /tmp:/tmp'
        }
    }
    // ...
}
I had the problem as well after adding a new slave node on a remote server. I did what most others suggested, adding the user to the docker group. I was able to execute docker commands as that user with no problem when I SSHed in, but I was still getting the error when building with Jenkins.
It turns out Jenkins had never closed its connection to the server for the node, which I had configured before adding that user to the docker group.
In the end, the resolution for me was to go to Jenkins > Manage Jenkins > Manage Nodes and Clouds > {My Node} and disconnect the node. Upon reconnecting, it worked fine.
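A quick way to check the group-membership half of this fix is a sketch like the following (the group name "docker" is the usual default; the helper function is hypothetical):

```shell
#!/bin/sh
# Sketch: report whether a user belongs to the "docker" group, which
# owns /var/run/docker.sock on a default install.
check_docker_group() {
    u=$1
    if id -nG "$u" 2>/dev/null | tr ' ' '\n' | grep -qx docker; then
        echo "$u is in the docker group"
    else
        echo "$u is NOT in the docker group (fix: sudo usermod -aG docker $u, then reconnect the agent)"
    fi
}

check_docker_group jenkins
```

Group membership is only evaluated at login, which is exactly why disconnecting and reconnecting the node (restarting the agent's session) was needed.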
Jenkins
In Jenkins I decided to use the remote Docker feature, so I installed docker.io on the Linux server and use this pipeline:
node {
    stage('Example') {
        docker.withServer('tcp://docker.example.org:2375') {
            docker.image('stefanscherer/node-windows:10').inside {
                sh 'node --version'
            }
        }
    }
}
But this fails with an error message about the volume configuration:
java.io.IOException: Failed to run image 'stefanscherer/node-windows:10'. Error: docker: Error response from daemon: invalid volume specification: '/var/lib/jenkins/workspace/Docker Test:/var/lib/jenkins/workspace/Docker Test:rw,z'.
See 'docker run --help'.
at org.jenkinsci.plugins.docker.workflow.client.DockerClient.run(DockerClient.java:133)
Maybe the problem is that I am trying to combine a Linux Jenkins with Docker on Windows?
But I read that for this there is the experimental option, which should allow running Linux containers.
GitLab
GitLab-Runner installed via this guide. https://docs.gitlab.com/runner/install/windows.html
Then I connected it and selected docker as executor.
When I remove the hosts entry from daemon.json, I get this error message:
ERROR: Preparation failed: Error response from daemon: client version 1.18 is too old. Minimum supported API version is 1.24, please upgrade your client to a newer version (executor_docker.go:1161:0s)
How do I get a version that supports the newer API?
I read an article saying that GitLab is waiting for an LTS EOL (End-of-Life), I think of CentOS or something else.
When I have the hosts entry set up in daemon.json, I get this error message:
ERROR: Preparation failed: error during connect: Get http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.18/info: open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running. (executor_docker.go:1161:0s)
That's because the runner can't find Docker: Docker is only listening on the TCP port.
I tried adding //./pipe/docker_engine to the hosts entry of daemon.json, but it didn't work; the Docker service crashes immediately.
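Aside: as far as I can tell, the daemon's hosts entry expects the npipe:// scheme rather than the raw pipe path, so a daemon.json along these lines should let it listen on both the named pipe and TCP (untested on this exact setup):

```json
{
    "hosts": ["tcp://0.0.0.0:2375", "npipe://"],
    "experimental": true
}
```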
Docker
Windows Server 2016
daemon.json
{
    "hosts": ["tcp://0.0.0.0:2375"],
    "experimental": true
}
Goal
My goal is to build my jobs from the (Linux) Jenkins and the (Linux) GitLab on the (Windows) Docker host.
Problem
Jenkins is not working in general, maybe because of some setting, or because it tries to mount Windows paths?
GitLab expects an old API version which this Docker daemon doesn't offer.
Goal
You are on the right track: distributing your CI pipeline allows easy scaling, and containers are the ideal solution for this.
Jenkins
In the Docker Pipeline Jenkins documentation it is explained:
For inside() to work, the Docker server and the Jenkins agent must use
the same filesystem, so that the workspace can be mounted.
So give another command a try (for instance withRun) and see; in any case, I am missing some credentials here. You can also configure a new Jenkins node for Docker, where you can specify the path in which the jobs are executed.
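A minimal sketch of the withRun variant (server URL and image taken from the question; untested on a Windows daemon):

```groovy
node {
    docker.withServer('tcp://docker.example.org:2375') {
        // withRun starts the container detached instead of executing the
        // pipeline steps inside it, so no workspace volume mount is needed.
        docker.image('stefanscherer/node-windows:10').withRun { c ->
            // c.id is the running container's ID
            sh "docker logs ${c.id}"
        }
    }
}
```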
GitLab
The GitLab runner issue on Windows is planned to be included in 11.8 release (Feb 2019) as it is described here.
Conclusion
I would go for a Linux installation if you cannot wait until the new GitLab release, and I would add a new Jenkins node for the Docker configuration as described here.
I am attempting to build a Docker image from a Dockerfile using a declarative pipeline in Jenkins. I've successfully added the jenkins user to the docker group, and I can run docker run hello-world as the jenkins user manually. However, when I attempt to build through the pipeline, I can't even run docker run hello-world:
From the pipeline:
[workspace] Running shell script
+ whoami
jenkins
[workspace] Running shell script
+ groups jenkins
jenkins : jenkins docker
[workspace] Running shell script
+ docker run hello-world
docker: Got permission denied while trying to connect to the Docker
daemon socket at unix:///var/run/docker.sock: Post
http://%2Fvar%2Frun%2Fdocker.sock/v1.30/containers/create: dial unix
/var/run/docker.sock: connect: permission denied.
Manually SSHing into the Jenkins machine and switching to the jenkins user:
*********@auto-jenkins-01:~$ sudo su - jenkins -s /bin/bash
jenkins@auto-jenkins-01:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://cloud.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
Some other useful information: Jenkins is running in a VM.
I needed to give the jenkins user group privileges on the Docker unix socket by editing /etc/default/docker and adding:
DOCKER_OPTS=' -G jenkins'
I'm using Docker Pipeline Plugin version 1.10.
I have my Jenkins installed in a container. I have a remote server that runs a Docker daemon. The daemon is reachable from the Jenkins machine via TCP (tested). I disabled TLS security on the Docker daemon.
I'm not able to make the docker.withServer(...) step work.
As a basic test I simply put the following content in a Jenkinsfile (if I'm correct, this is valid pipeline content):
docker.withServer('tcp://my.docker.host:2345') {
    def myImage = docker.build('myImage')
}
When the pipeline executes I get this error: script.sh: line 2: docker: command not found, as if the docker command were still trying to execute locally (there is no docker command installed locally) rather than against my remote Docker daemon.
Am I missing anything? Is it required to have the docker command installed locally when executing Docker commands against a remote server?
Have you tried
withDockerServer('tcp://my.docker.host:2345') {
    ...
}
Documentation here.
docker needs to be installed on the Jenkins master in order for Jenkins to be able to launch the container on my.docker.host:
the first docker command runs on the Jenkins master, but with a parameter that passes the command on to my.docker.host;
the container itself will then run on my.docker.host.
Note that you only need to install the docker CLI on the Jenkins master; the daemon does not need to be running there.
Check whether you have set up the port correctly. The default port for the daemon is 2375. It has to be configured on both the Docker daemon (option -H 0.0.0.0:2375) and on the Jenkins client.
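A quick reachability check for that port could look like this (sketch; the hostname is the placeholder used in this thread, substitute yours):

```shell
#!/bin/sh
# Sketch: probe a remote Docker daemon's TCP endpoint via the engine
# API's /_ping route. Hostname below is a placeholder from the thread.
DOCKER_HOST_ADDR=my.docker.host
DOCKER_PORT=2375
url="http://${DOCKER_HOST_ADDR}:${DOCKER_PORT}/_ping"

if curl -fsS --max-time 5 "$url" >/dev/null 2>&1; then
    echo "daemon reachable at $url"
else
    echo "daemon NOT reachable at $url (check dockerd -H 0.0.0.0:${DOCKER_PORT} and firewalls)"
fi
```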
I am currently experimenting with Docker in combination with Jenkins to streamline the CI/CD workflow for a new project. I do so on a Mac with Docker 1.12 installed.
This is what I do:
Use docker-machine to create a new Docker server
Use the official Jenkins Docker image to spin up a Jenkins instance on that server
Install the "Yet Another Docker Plugin" and "CloudBees Docker Pipeline" plugins.
Add a "Docker Cloud" using the IP of the Docker server above and the third party Docker DinD image tehranian/dind-jenkins-slave
With this setup, I run a very simple pipeline job like this:
node('docker') {
    docker.image('hseeberger/scala-sbt').inside {
        stage 'Checkout'
        echo 'We got here!'
    }
}
Jenkins spins up a Docker instance as expected and executes the job. So the basic Docker setup is working as expected.
But the Docker command within the job fails. Log output looks something like this:
[Pipeline] node
Still waiting to schedule task
Docker-23ebf3d8dd4f is offline
Running on Docker-23ebf3d8dd4f in /home/jenkins/workspace/docker-test
[Pipeline] {
[Pipeline] sh
[docker-test] Running shell script
+ docker inspect -f . hseeberger/scala-sbt
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
[Pipeline] sh
[docker-test] Running shell script
+ docker pull hseeberger/scala-sbt
Using default tag: latest
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon. Is the docker daemon running on this host?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Now when I browse around for solutions, it is usually mentioned that the Docker socket needs to be provided to the container as a volume, but that doesn't seem to work either.
Since the general setup seems to be working, wouldn't the slave simply have to do the same thing the Jenkins plugin does to spin up the Docker slave in the first place, that is, use the URL of the Docker server to control it? Since I assume this is an extremely common use case, there must be a Docker image for Jenkins Docker slaves that can do this out of the box, right? What am I missing?
You might need to set the Docker environment variables, i.e. apply the output of docker-machine env node in your running shell script.
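Concretely, that would look something like this (sketch; "node" is the docker-machine name mentioned above, substitute your machine's name):

```shell
#!/bin/sh
# Sketch: export the environment that points the docker CLI at the
# docker-machine-created server. Machine name "node" is an assumption.
if command -v docker-machine >/dev/null 2>&1; then
    # docker-machine env prints export lines (DOCKER_HOST, DOCKER_CERT_PATH,
    # DOCKER_TLS_VERIFY); eval applies them to this shell.
    eval "$(docker-machine env node 2>/dev/null || true)"
fi
echo "DOCKER_HOST=${DOCKER_HOST:-unset}"
```

Without these variables the docker CLI inside the slave talks to the (non-existent) local daemon, which matches the "Cannot connect to the Docker daemon" errors in the log above.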