Build Docker image in Jenkins pipeline (Jenkins running in a Docker container)

I run Jenkins in a Docker container, and I want to build a Docker image in a Jenkins pipeline, but Docker does not exist in that container (the one where Jenkins runs).
The Jenkins container is deployed with Docker Compose; the yml file:
version: "3.3"
services:
  jenkins:
    image: jenkins:alpine
    ports:
      - 8085:8080
    volumes:
      - ./FOR_JENKINS:/var/jenkins_home
What can we do to build a Docker image in a Jenkins pipeline?
Can we deploy a separate container that has Docker and use it just for building images, or something else? How do you handle this?
Edit:
Thanks @VonC, I checked your information, but... "permission denied".
Docker Compose file:
version: "3.3"
services:
  jenkins:
    image: jenkins:alpine
    ports:
      - 8085:8080
    volumes:
      - ./FOR_JENKINS:/var/jenkins_home
      # - /var/run/docker.sock:/var/run/docker.sock:rw
      - /var/run:/var/run:rw
Jenkinsfile:
pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        echo "Compiling..."
        sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt compile"
      }
    }
    /*stage('Unit Test') {
      steps {
        echo "Testing..."
        sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt coverage 'test-only * -- -F 4'"
        sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt coverageReport"
        sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt scalastyle || true"
      }
    }*/
    stage('DockerPublish') {
      steps {
        echo "Docker Stage ..."
        // Generate Jenkinsfile and prepare the artifact files.
        sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt docker:stage"
        echo "Docker Build-2 ..."
        // Run the Docker tool to build the image
        script {
          docker.withTool('docker') {
            echo "D1- ..."
            //withDockerServer([credentialsId: "AWS-Jenkins-Build-Slave", uri: "tcp://192.168.0.29:2376"]) {
            echo "D2- ..."
            sh "printenv"
            echo "D3- ..."
            //sh "docker images"
            echo "D4- ..."
            docker.build('my-app:latest', 'target/docker/stage').inside("--volume=/var/run/docker.sock:/var/run/docker.sock")
            echo "D5- ..."
            //base.push("tmp-fromjenkins")
            //}
          }
        }
      }
    }
  }
}
Result:
[job1] Running shell script
+ docker build -t my-app:latest target/docker/stage
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.29/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&t=my-app%3Alatest&target=&ulimits=null: dial unix /var/run/docker.sock: connect: permission denied
script returned exit code 1
Edit:
The last "permission denied" problem was fixed with:
sudo chmod 0777 /var/run/docker.sock
Working state:
Run on the host:
sudo chmod 0777 /var/run/docker.sock
Docker Compose file:
version: "3.3"
services:
  jenkins:
    image: jenkins:alpine
    ports:
      - 8085:8080
    volumes:
      - ./FOR_JENKINS:/var/jenkins_home
      # - /var/run/docker.sock:/var/run/docker.sock:rw
      - /var/run:/var/run:rw
Jenkinsfile:
pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        echo "Compiling..."
        sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt compile"
      }
    }
    /*stage('Unit Test') {
      steps {
        echo "Testing..."
        sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt coverage 'test-only * -- -F 4'"
        sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt coverageReport"
        sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt scalastyle || true"
      }
    }*/
    stage('DockerPublish') {
      steps {
        echo "Docker Stage ..."
        // Generate Jenkinsfile and prepare the artifact files.
        sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt docker:stage"
        echo "Docker Build-2 ..."
        // Run the Docker tool to build the image
        script {
          docker.withTool('docker') {
            echo "D1- ..."
            //withDockerServer([credentialsId: "AWS-Jenkins-Build-Slave", uri: "tcp://192.168.0.29:2376"]) {
            echo "D2- ..."
            sh "printenv"
            echo "D3- ..."
            //sh "docker images"
            echo "D4- ..."
            docker.build('my-app:latest', 'target/docker/stage')
            echo "D5- ..."
            //base.push("tmp-fromjenkins")
            //}
          }
        }
      }
    }
  }
}
My solution:
I added some steps to the Jenkinsfile and ended up with:
pipeline {
  agent any
  //def app
  stages {
    stage('Build') {
      steps {
        echo "Compiling..."
        sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt compile"
      }
    }
    stage('DockerPublish') {
      steps {
        echo "Docker Stage ..."
        // Generate Jenkinsfile and prepare the artifact files.
        sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt docker:stage"
        echo "Docker Build ..."
        // Run the Docker tool to build the image
        script {
          docker.withTool('docker') {
            echo "Environment:"
            sh "printenv"
            app = docker.build('ivanbuh/myservice:latest', 'target/docker/stage')
            echo "Push to Docker repository ..."
            docker.withRegistry('https://registry.hub.docker.com', 'docker-hub-credentials') {
              app.push("${env.BUILD_NUMBER}")
              app.push("latest")
            }
            echo "Completed ..."
          }
        }
      }
    }
    //https://boxboat.com/2017/05/30/jenkins-blue-ocean-pipeline/
    //https://gist.github.com/bvis/68f3ab6946134f7379c80f1a9132057a
    stage ('Deploy') {
      steps {
        sh "docker stack deploy myservice --compose-file docker-compose.yml"
      }
    }
  }
}

You can look at "Docker in Docker in Jenkins pipeline". It includes the step:
inside the Jenkinsfile, I need to connect my build container to the outer Docker instance. This is done by mounting the Docker socket itself:
docker.build('my-build-image').inside("--volume=/var/run/docker.sock:/var/run/docker.sock") {
// The build here
}
You can see a similar approach in "Building containers with Docker in Docker and Jenkins".
In order to make the host system's Docker available, I need to make its API available to the Jenkins Docker container. You can do this by mapping the Docker socket that is available on the parent system.
I have created a small docker-compose file where I map both my volumes and the Docker socket as follows:
jenkins:
  container_name: jenkins
  image: myjenkins:latest
  ports:
    - "8080:8080"
  volumes:
    - /Users/devuser/dev/docker/volumes/jenkins:/var/jenkins_home
    - /var/run:/var/run:rw
Please note the special mapping of '/var/run' with rw privileges; this is needed to make sure the Jenkins container has access to the host system's docker.sock.
And, as I mentioned before, you might need to run Docker in privileged mode.
Or, as the OP reported:
sudo chmod 0777 /var/run/docker.sock
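Putting those pieces together, here is a minimal sketch of the whole flow for the OP's setup (the tool name 'docker', the sbt staging directory, the image name, and the credentials id are all taken from the pipelines above, so treat it as an outline rather than a drop-in Jenkinsfile):
// Sketch only: assumes the host's /var/run (or /var/run/docker.sock) is mounted
// into the Jenkins container and that the Jenkins user is allowed to access it.
pipeline {
  agent any
  stages {
    stage('DockerPublish') {
      steps {
        script {
          docker.withTool('docker') {    // Docker client comes from the Jenkins tool installation
            // Build from the directory prepared by `sbt docker:stage`
            def app = docker.build('my-app:latest', 'target/docker/stage')
            docker.withRegistry('https://registry.hub.docker.com', 'docker-hub-credentials') {
              app.push("${env.BUILD_NUMBER}")
              app.push('latest')
            }
          }
        }
      }
    }
  }
}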

Related

Jenkins - Mark build as success

I use Jenkins to build my Maven Java app, then create a Docker image and push it. After all of that I have a try-catch where I try to stop and remove the container if it is already running; if not, it should just skip that and run the new image. It works, but it always marks the build as failed. I tried to change the build status, but apparently that is not possible.
Here is my pipeline:
node {
  stage('Clone repository') {
    git branch: 'main', credentialsId: 'realsnack-git', url: 'https://github.com/Realsnack/Java-rest-api.git'
  }
  stage('Build maven project') {
    sh './mvnw clean package'
  }
  stage('Build docker image') {
    sh 'docker build -t 192.168.1.27:49153/java-restapi:latest .'
  }
  stage('Push image') {
    sh 'docker push 192.168.1.27:49153/java-restapi:latest'
  }
  try {
    stage('Remove old container') {
      sh 'docker stop java-rest_api && docker rm java-rest_api'
    }
  } catch(all) {
    sh 'No container to remove - running it anyway'
  } finally {
    stage('Run image') {
      sh 'docker run -d --name java-rest_api -p 8081:8081 192.168.1.27:49153/java-restapi:latest'
    }
  }
}
docker stop will fail if it fails to stop the container.
You can solve the issue in one of the two following ways:
Check that there is a running container before attempting to stop it:
sh "if [[ docker ps -a | grep java-rest_api ]]; docker stop java-rest_api; fi"
Ignore the docker error:
sh "docker stop java-rest_api || true"

Jenkins Pipeline Docker Network not found

So I have this setup
stage('Build') {
  steps {
    sh """ docker-compose -f docker-compose.yml up -d """
    sh """ docker-compose -f docker-compose.yml exec -T app buildApp """
  }
}
stage('Start UI server') {
  steps {
    script { env.NETWORK_ID = get network id with some script }
    sh """ docker-compose -f docker-compose.yml exec -d -T app startUiServer """
  }
}
stage('UI Smoke Testing') {
  agent {
    docker {
      alwaysPull true
      image 'some custom image'
      registryUrl 'some custom registry'
      registryCredentialsId 'some credentials'
      args "-u root --network ${env.NETWORK_ID}"
    }
  }
  steps { sh """ run the tests """ }
}
And for some reason the pipeline fails with this error, most of the time (not all the time):
java.io.IOException: Failed to run image 'my image'. Error: docker: Error response from daemon: network 3c5b5b45ca0e not found.
So the Network ID is the right one. I've checked.
Any ideas why this is failing? I really appreciate any help.

Docker not running in Jenkins Pipeline

I am running a Jenkins Docker image like this:
docker run \
  --rm \
  -u root \
  -p 8080:8080 \
  -v /home/ec2-user/jenkins-data:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$HOME":/home \
  jenkins/jenkins:lts
I have my Jenkins server up, but when I try to run a Docker image build as below:
pipeline {
  environment {
    registry = "leexha/node_demo"
    registyCredential = 'dockerhub'
    dockerImage = ''
  }
  agent any
  tools {
    nodejs "node"
  }
  stages {
    stage('Git clone') {
      steps {
        git 'https://github.com/leeadh/node-jenkins-app-example.git'
      }
    }
    stage('Installing Node') {
      steps {
        sh 'npm install'
      }
    }
    stage('Conducting Unit test') {
      steps {
        sh 'npm test'
      }
    }
    stage('Building image') {
      steps {
        script {
          dockerImage = docker.build registry + ":$BUILD_NUMBER"
        }
      }
    }
    stage('Pushing to Docker Hub') {
      steps {
        script {
          docker.withRegistry('', registyCredential) {
            dockerImage.push()
          }
        }
      }
    }
  }
}
it keeps telling me that docker is not found.
I already enabled the Docker process to communicate via -v /var/run/docker.sock:/var/run/docker.sock,
so I'm pretty confused about what's going on.
Any help?
You need to install the Docker client inside the Jenkins container: mounting /var/run/docker.sock only exposes the daemon's API, it does not provide the docker binary itself. You also need to install and configure the Docker plugin on your Jenkins server.
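Alternatively, as in the question at the top of this page, the Docker client can be supplied as a Jenkins tool instead of being baked into the image, as long as the socket stays mounted. A hedged sketch (it assumes a Docker tool installation named 'docker' is configured under Global Tool Configuration; registry and dockerImage are the variables from the pipeline above):
stage('Building image') {
  steps {
    script {
      docker.withTool('docker') {   // puts the configured docker client on the PATH for this block
        dockerImage = docker.build registry + ":$BUILD_NUMBER"
      }
    }
  }
}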

Declarative jenkins pipeline

I am using a declarative Jenkins pipeline.
I am a newbie in Jenkins, so I don't understand something. I don't know how to handle this error, and can someone tell me what the best option is for doing builds?
I have a bash script which builds, tags, and pushes a Docker image to a repository.
This is the relevant part of my Jenkinsfile:
pipeline {
  agent {
    kubernetes {
      label 'bmf-worker'
      defaultContainer 'jnlp'
      yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: ci
spec:
  # Use service account that can deploy to all namespaces
  serviceAccountName: service-reader
  containers:
  - name: docker
    image: docker
    command:
    - cat
    tty: true
  - name: kubectl
    image: gcr.io/cloud-builders/kubectl
    command:
    - cat
    tty: true
  - name: gcloud
    image: google/cloud-sdk
    command:
    - cat
    tty: true
"""
    }
  }
  stages {
    stage('Code Checkout and Setup') {
      steps {
        echo 'Code Checkout and Setup'
      }
    }
    stage('Build') {
      parallel {
        stage('Build') {
          steps {
            echo 'Start building Frontend and Backend Docker images'
          }
        }
        stage('Build BMF Frontend') {
          steps {
            container('gcloud') {
              echo 'Building Bmf Frontend Image'
              sh 'chmod +x build.sh'
              sh './build.sh --build_bmf_frontend'
            }
          }
        }
        stage('Tag BMF Frontend') {
          steps {
            container('gcloud') {
              echo 'Building Bmf Frontend Image'
              sh 'chmod +x build.sh'
              sh './build.sh --tag_frontend'
            }
          }
        }
        stage('Build BMF Backend') {
          steps {
            container('gcloud') {
              echo 'Building Bmf Backend Images'
              sh 'chmod +x build.sh'
              sh './build.sh --build_bmf_backend'
            }
          }
        }
        stage('Tag BMF Backend') {
          steps {
            container('gcloud') {
              echo 'Building Bmf Frontend Image'
              sh 'chmod +x build.sh'
              sh './build.sh --tag_frontend'
            }
          }
        }
      }
    }
How should I use podTemplate to execute my steps? When I use the docker container for the 'Build BMF Backend' stage, I get these errors:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
/home/jenkins/workspace/BMF/bmf-web#tmp/durable-c146e810/script.sh: line 1: ./build.sh: not found
With the gcloud container defined in the podTemplate:
time="2019-03-12T13:40:56Z" level=error msg="failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial unix /var/run/docker.sock: connect: no such file or directory"
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
And how do I tag the Docker images? I need docker to tag, and git because the tag is the short commit hash; when I use the docker container, there is no git.
My Jenkins master is on Google Cloud Kubernetes.
Can someone explain a better way to execute these jobs?
The first issue you are facing is related to Docker, not Jenkins.
Docker commands can only be run by root, or by users in the docker group.
If you want the Jenkins user to be able to execute Docker commands then you can run the following command as root to add Jenkins to the Docker group:
usermod -aG docker jenkins
This is documented in the Docker docs.
Be aware that giving a user access to Docker effectively grants them root access, so be cautious of which users you add to this group.

Using the Jenkins Docker plugin for persistent storage containers in a build pipeline

This is the Groovy script for a simple build pipeline that uses the SQL Server on Linux Docker image:
def PowerShell(psCmd) {
  bat "powershell.exe -NonInteractive -ExecutionPolicy Bypass -Command \"\$ErrorActionPreference='Stop';$psCmd;EXIT \$global:LastExitCode\""
}
node {
  stage('git checkout') {
    git 'file:///C:/Projects/SsdtDevOpsDemo'
  }
  stage('build dacpac') {
    bat "\"${tool name: 'Default', type: 'msbuild'}\" /p:Configuration=Release"
    stash includes: 'SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac', name: 'theDacpac'
  }
  stage('start container') {
    sh 'docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=P#ssword1" --name SQLLinuxLocal2 -d -i -p 15566:1433 microsoft/mssql-server-linux'
  }
  stage('deploy dacpac') {
    unstash 'theDacpac'
    bat "\"C:\\Program Files\\Microsoft SQL Server\\140\\DAC\\bin\\sqlpackage.exe\" /Action:Publish /SourceFile:\"SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac\" /TargetConnectionString:\"server=localhost,15566;database=SsdtDevOpsDemo;user id=sa;password=P#ssword1\""
  }
  stage('run tests') {
    PowerShell('Start-Sleep -s 5')
  }
  stage('cleanup') {
    sh 'docker stop SQLLinuxLocal2'
    sh 'docker rm SQLLinuxLocal2'
  }
}
I got to this point with some help from a question I posted a day or so ago. This was my attempt (again with some help) at doing the same thing, but with the Docker plugin:
def PowerShell(psCmd) {
  bat "powershell.exe -NonInteractive -ExecutionPolicy Bypass -Command \"\$ErrorActionPreference='Stop';$psCmd;EXIT \$global:LastExitCode\""
}
node {
  stage('git checkout') {
    git 'file:///C:/Projects/SsdtDevOpsDemo'
  }
  stage('Build Dacpac from SQLProj') {
    bat "\"${tool name: 'Default', type: 'msbuild'}\" /p:Configuration=Release"
    stash includes: 'SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac', name: 'theDacpac'
  }
  stage('start container') {
    docker.image('-e "ACCEPT_EULA=Y" -e "SA_PASSWORD=P#ssword1" --name SQLLinuxLocal2 -d -i -p 15566:1433 microsoft/mssql-server-linux').withRun() {
      unstash 'theDacpac'
      bat "\"C:\\Program Files\\Microsoft SQL Server\\140\\DAC\\bin\\sqlpackage.exe\" /Action:Publish /SourceFile:\"SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac\" /TargetConnectionString:\"server=localhost,15566;database=SsdtDevOpsDemo;user id=sa;password=P#ssword1\""
    }
    sh 'docker run -d --name SQLLinuxLocal2 microsoft/mssql-server-linux'
  }
  stage('sleep') {
    PowerShell('Start-Sleep -s 30')
  }
  stage('cleanup') {
    sh 'docker stop SQLLinuxLocal2'
    sh 'docker rm SQLLinuxLocal2'
  }
}
The problem with this is that although it works, the docker run -d line spins up a different incarnation of the container. Could someone please point me in the right direction for getting the same result as the first pipeline, but using the Docker plugin?
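For reference, the Docker Pipeline plugin's withRun step takes the run arguments separately from the image name and hands the started container to the closure, stopping and removing it when the closure ends, so the extra docker run/stop/rm steps should not be needed. A hedged sketch of a stage that could replace the 'start container' and 'deploy dacpac' steps inside the existing node block (same image, port, and SA password as the first pipeline; the stage layout is an assumption):
stage('start container, deploy dacpac and test') {
  // withRun(args) starts the container, runs the body, then stops and removes the container
  docker.image('microsoft/mssql-server-linux').withRun('-e "ACCEPT_EULA=Y" -e "SA_PASSWORD=P#ssword1" -p 15566:1433') { c ->
    unstash 'theDacpac'
    // c.id is the running container's id while this block executes
    bat "\"C:\\Program Files\\Microsoft SQL Server\\140\\DAC\\bin\\sqlpackage.exe\" /Action:Publish /SourceFile:\"SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac\" /TargetConnectionString:\"server=localhost,15566;database=SsdtDevOpsDemo;user id=sa;password=P#ssword1\""
  }
}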
