Docker Container Jenkins Pipeline Script: Permission denied while executing the script - docker

I am running Jenkins inside a Docker container. I have created a simple pipeline to check out, build, and run a Docker image, but I am getting an error.
Below is my pipeline script:
node {
    def mvnHome = tool name: 'Maven Path', type: 'maven'
    stage('Git CheckOut') {
        git branch: '2019_DOCKER_SERVICES', credentialsId: 'git-creds', url: 'http://10.10.10.84:8111/scm/git/JEE_M_SERVICES'
    }
    stage('Maven Build') {
        // Run the maven build
        withEnv(["MVN_HOME=$mvnHome"]) {
            if (isUnix()) {
                sh '"$MVN_HOME/bin/mvn" -f Services/user-service/pom.xml clean install'
            } else {
                // bat(/"%MVN_HOME%\bin\mvn" -f Services\\user-service\\pom.xml clean install/)
            }
        }
    }
    stage('Docker Image Build') {
        sh '"Services/user-service/" docker build -t user-service'
    }
}
But I am getting the following error in the last stage; the first two stages ran successfully.
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Docker Image Build)
[Pipeline] sh
+ Services/user-service/ docker build -t user-service
/var/jenkins_home/jobs/docker-demo/workspace#tmp/durable-a5c035cf/script.sh: 1: /var/jenkins_home/jobs/docker-demo/workspace#tmp/durable-a5c035cf/script.sh: Services/user-service/: Permission denied
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline

You have to set up new Jenkins agents (slaves) that have access to Docker.
It's unusual to run Docker inside a Docker container.
To access low-level Docker operations from inside the container, you have to run that container in privileged mode.
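Separately from the Docker-in-Docker setup, the shell error itself comes from the last stage: the string "Services/user-service/" is executed as a command, which is why the shell reports Permission denied on the directory. A minimal sketch of that stage with the directory passed as the build context instead, assuming the Dockerfile lives in Services/user-service/ and the Jenkins container can reach a Docker daemon:
stage('Docker Image Build') {
    // docker build takes the build context directory as its final argument
    sh 'docker build -t user-service Services/user-service/'
}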

Related

How to use docker agent in Jenkinsfile pipeline?

I have a Django project uploaded to GitHub and I need to link it with Jenkins.
I installed the Jenkins and Docker services on an Ubuntu 20.04 machine.
I configured the Jenkins server with my repo and installed all the suggested plugins plus the Docker Pipeline plugin.
After that, I created a Jenkinsfile that uses a docker agent to run the stages inside a Python Docker container, but I'm getting "‘Jenkins’ doesn’t have label ‘docker’" in the console output. I tried to add the label docker in the project settings but the same error still appears!
This is my Jenkinsfile:
pipeline {
    agent any
    stages {
        stage("install pip dependencies") {
            agent {
                docker {
                    label "docker"
                    image "python:3.7"
                }
            }
            steps {
                withEnv(["HOME=${env.WORKSPACE}"]) {
                    sh "pip install virtualenv"
                    sh "virtualenv venv"
                    sh "pip install -r requirements.txt"
                }
            }
        }
    }
}
What am I missing?
Thank you!
That message means your only available node, which happens to be your Jenkins controller, does not have the label docker that you've required on your agent in this block:
agent {
    docker {
        label 'docker'
        image 'python:3.7'
    }
}
Adding the label docker to the controller and then restarting Jenkins resolves the issue. (The restart is required for the label change to be recognized, which surprised me; it may be a peculiarity of labeling the controller itself, since you should avoid scheduling jobs to run there if possible.)
Pre-label:
Started by user admin
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/test
[Pipeline] {
[Pipeline] stage
[Pipeline] { (install pip dependencies)
[Pipeline] node
Still waiting to schedule task
‘Jenkins’ doesn’t have label ‘docker’
Aborted by admin
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: ABORTED
Post-label, pre-restart:
Started by user admin
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/test
[Pipeline] {
[Pipeline] stage
[Pipeline] { (install pip dependencies)
[Pipeline] node
Still waiting to schedule task
‘Jenkins’ doesn’t have label ‘docker’
Aborted by admin
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: ABORTED
Post-restart, highlighting that my controller doesn't have docker installed:
Started by user admin
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/test
[Pipeline] {
[Pipeline] stage
[Pipeline] { (install pip dependencies)
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/test#2
[Pipeline] {
[Pipeline] isUnix
[Pipeline] sh
+ docker inspect -f . python:3.7
/var/jenkins_home/workspace/test#2#tmp/durable-4a9f38a7/script.sh: 1: /var/jenkins_home/workspace/test#2#tmp/durable-4a9f38a7/script.sh: docker: not found
[Pipeline] isUnix
[Pipeline] sh
+ docker pull python:3.7
/var/jenkins_home/workspace/test#2#tmp/durable-58d19d02/script.sh: 1: /var/jenkins_home/workspace/test#2#tmp/durable-58d19d02/script.sh: docker: not found
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
Your pipeline will look like the following:
pipeline {
    agent {
        label 'docker'
    }
    stages {
        stage('install pip dependencies') {
            steps {
                withEnv(["HOME=${env.WORKSPACE}"]) {
                    sh '''
                        pip install virtualenv
                        virtualenv venv
                        pip install -r requirements.txt
                    '''
                }
            }
        }
    }
}
But first you need to follow these steps to make Jenkins run Docker containers as agents:
install Docker on your host;
add the jenkins user to the docker group: sudo usermod -aG docker jenkins
modify the ExecStart=/usr/bin/dockerd line in the file /lib/systemd/system/docker.service to the following: ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock -H fd:// -s overlay2 --containerd=/run/containerd/containerd.sock
run sudo systemctl daemon-reload and sudo systemctl restart docker
configure the Docker section in Jenkins: Manage Jenkins -> Manage Nodes and Clouds -> Configure Clouds -> Add a new cloud -> Docker, and put tcp://127.0.0.1:2375 (or 4243) or unix:///var/run/docker.sock in the Docker URL field. Configure the agent template, set any label, and use that label in the pipeline.
You will probably also need to turn off SELinux.

Proper way to run docker container using Jenkinsfile

In my Jenkinsfile, I have a step that runs a Docker image pulled from my Docker Hub.
stage('pull image and run') {
    steps {
        sh '''
            docker login -u <username> -p <password>
            docker run -d -p 9090:3000 <tag>
        '''
    }
}
This step is fine the first time I run the script. However, if I run it a second time, I get this error.
Login Succeeded
+ docker run -d -p 9090:3000 <tag>
669955464d74f9b5186b437b7127ca0a24f6ea366f3a903c673489bec741cf78
docker: Error response from daemon: driver failed programming external connectivity on endpoint distracted_driscoll (db16abd899cf0cbd4f26cf712b1eee4ace5b491e061e2e31795c2669296068eb): Bind for 0.0.0.0:9090 failed: port is already allocated.
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 125
Finished: FAILURE
Obviously, port 9090 is not available, so the execution failed.
Question:
What is the correct way to upgrade an app inside a Docker container?
I can stop the container before running docker run, but I can't find a proper way to do that in Jenkinsfile steps.
Any suggestion?
Thanks
Jenkins has really good Docker support for running your build inside a Docker container; a good example can be found here.
A declarative example that does a Maven build would be:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /tmp:/tmp'
            registryUrl 'https://myregistry.com/'
            registryCredentialsId 'myPredefinedCredentialsInJenkins'
        }
    }
    stages {
        stage("01") {
            steps {
                sh "mvn -v"
            }
        }
        stage("02") {
            steps {
                sh "mvn --help"
            }
        }
    }
}
In a scripted pipeline, it would be
node {
    docker.withRegistry('https://registry.example.com', 'credentials-id') {
        docker.image('node:14-alpine').inside("-v /tmp:/tmp") {
            stage('Test') {
                sh 'node --version'
            }
        }
    }
}
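The docker agent approach above avoids the port clash because nothing keeps running between builds. If you want to keep the plain docker run deployment from the question and simply replace the running container, a sketch of one way to do it in a step; the fixed container name my-app is an assumption, not something from the question:
stage('pull image and run') {
    steps {
        sh '''
            docker login -u <username> -p <password>
            # remove the previous container if it exists; ignore the error if it does not
            docker rm -f my-app || true
            # start the new container under the same fixed name and port mapping
            docker run -d --name my-app -p 9090:3000 <tag>
        '''
    }
}
Giving the container a fixed name is what makes the old instance easy to find and remove on the next run.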

Jenkins is adding private registry URL tag while pulling image

I am getting this issue in a Jenkins pipeline where I want to pull the 'node' image, but Jenkins is adding the private Docker registry URL prefix to it, so the image is not found (artifactory.x.com/node:7-alpine).
Here is the pipeline
pipeline {
    agent {
        docker {
            image 'node:7-alpine'
            registryUrl 'https://artifactory.x.com/'
            registryCredentialsId 'jenkins-artifactory'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}
This is the error I am getting
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/jenkins_home/jobs/enterprise-master/workspace
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withDockerRegistry
$ docker login -u jenkins -p ******** https://artifactory.x.com/
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /var/jenkins_home/jobs/enterprise-master/workspace#tmp/f54c8b21-837b-4652-b12c-d489fb7e4c4c/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Pipeline] {
[Pipeline] isUnix
[Pipeline] sh
+ docker inspect -f . node:7-alpine
Error: No such object: node:7-alpine
[Pipeline] isUnix
[Pipeline] sh
+ docker inspect -f . artifactory.x.com/node:7-alpine
Error: No such object: artifactory.x.com/node:7-alpine
[Pipeline] isUnix
[Pipeline] sh
+ docker pull artifactory.x.com/node:7-alpine
Error response from daemon: unknown: Not Found
[Pipeline] }
[Pipeline] // withDockerRegistry
Now the problem is that there is no image artifactory.x.com/node:7-alpine, so it can't be found.
How do I tell Jenkins not to add the private registry URL while pulling?
Fixed this by removing the Docker Registry URL and setting the Registry credentials to none under
Jenkins -> Manage Jenkins -> Configure System -> Pipeline Model Definition
The pipeline definition stays the same:
pipeline {
    agent {
        docker {
            image 'node:7-alpine'
            registryUrl 'https://artifactory.X.com/'
            registryCredentialsId 'jenkins-artifactory'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}

pass variables between stages jenkins pipeline

I'm creating a Jenkins pipeline for a simple deployment into a Kubernetes cluster, using my own private Docker registry. In it I clone my repo, build my Docker image, update the Kubernetes deployment manifest with the built image id, and deploy the pod. But I'm having trouble passing the built image id to the next stage. After some research I managed to pass the id to the next stage, but when I try to add the new id to the deployment manifest, it's empty.
Here is my pipeline:
pipeline {
    environment {
        BUILD_IMAGE_ID = ''
    }
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git( url: 'https://xxxxxx.git',
                     credentialsId: 'id',
                     branch: 'master')
            }
        }
        stage('Login Docker Registry') {
            steps {
                script {
                    sh 'docker login --username=xxxx --password=xxxx registry.xxxx.com'
                }
            }
        }
        stage('Building Image') {
            steps {
                script {
                    def IMAGE_ID = sh script:'docker run -e REPO_APP_BRANCH=xxxx -e REPO_APP_NAME=xxx --volume /var/run/docker.sock:/var/run/docker.sock registry.xxxx/image-build', returnStdout: true
                    println "Build image id: ${IMAGE_ID} "
                    BUILD_IMAGE_ID = IMAGE_ID.replace("/n","")
                    env.BUILD_IMAGE_ID = BUILD_IMAGE_ID
                }
            }
        }
        stage('Integration') {
            steps {
                script {
                    echo "passed: ${BUILD_IMAGE_ID} "
                    //update deployment manifests with latest docker tag
                    sh 'sed -i s,BUILD_ID,${BUILD_IMAGE_ID},g deployment-manifests/development/Service-deployments.yaml'
                }
            }
        }
    }
}
I don't want to save that value into a file and then read it back to do the operation.
Output:
[Pipeline] echo
Build image id:
registry.xxxx.com/service:3426d51-baeffc2
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Integration)
[Pipeline] script
[Pipeline] {
[Pipeline] echo
passed:
registry.xxxx.com/service:3426d51-baeffc2
[Pipeline] sh
[orderservice] Running shell script
+ sed -i s,BUILD_ID,,g deployment-manifests/development/service-deployments.yaml
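A sketch of one common way to pass the id between stages, assuming the docker run prints only the image id: keep the id in a plain Groovy variable declared before the pipeline block instead of the environment directive, trim the trailing newline from returnStdout (note that the question's replace("/n", "") uses a forward slash instead of "\n", so the newline is never removed), and use Groovy double quotes so the value is interpolated into the sed command before the shell runs. The docker run command and manifest path are copied from the question.
def buildImageId = ''

pipeline {
    agent any
    stages {
        stage('Building Image') {
            steps {
                script {
                    // returnStdout includes a trailing newline, so trim() it
                    buildImageId = sh(
                        script: 'docker run -e REPO_APP_BRANCH=xxxx -e REPO_APP_NAME=xxx --volume /var/run/docker.sock:/var/run/docker.sock registry.xxxx/image-build',
                        returnStdout: true
                    ).trim()
                    echo "Build image id: ${buildImageId}"
                }
            }
        }
        stage('Integration') {
            steps {
                script {
                    // double quotes: Groovy interpolates the id before the shell sees the command
                    sh "sed -i 's,BUILD_ID,${buildImageId},g' deployment-manifests/development/Service-deployments.yaml"
                }
            }
        }
    }
}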

Bind Volumes to Docker container in a pipeline job

So, I have this pipeline job that builds completely inside a Docker container. The Docker image used is pulled from a local repository before the build and has almost all the dependencies required to run my project.
The problem is that I need a way to define volumes to bind mount from the host into the container, so that I can perform some analysis using a tool that is available on my host system but not in the container.
Is there a way to do this from inside a Jenkinsfile (pipeline script)?
I'm not entirely sure this is what you mean; if it isn't, let me know and I'll try to figure it out.
What I understand by mounting from host to container is mounting the content of the Jenkins workspace inside the container.
For example in this pipeline:
pipeline {
    agent { node { label 'xxx' } }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3', artifactNumToKeepStr: '1'))
    }
    stages {
        stage('add file') {
            steps {
                sh 'touch myfile.txt'
                sh 'ls'
            }
        }
        stage('Deploy') {
            agent {
                docker {
                    image 'lvthillo/aws-cli'
                    args '-v $WORKSPACE:/project'
                    reuseNode true
                }
            }
            steps {
                sh 'ls'
                sh 'aws --version'
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}
In the first stage I just add a file to the workspace, just in Jenkins, nothing with Docker.
In the second stage I start a Docker container which contains the AWS CLI (which is not installed on our Jenkins slaves). We start the container and mount the workspace inside the /project folder of the container. Now I can execute AWS CLI commands and I have access to the text file. In a later stage (not in this pipeline) you can use the file again in a different container or on the Jenkins slave itself.
Output:
[Pipeline] {
[Pipeline] stage
[Pipeline] { (add file)
[Pipeline] sh
[test] Running shell script
+ touch myfile.txt
[Pipeline] sh
[test] Running shell script
+ ls
myfile.txt
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy)
[Pipeline] getContext
[Pipeline] sh
[test] Running shell script
+ docker inspect -f . lvthillo/aws-cli
.
[Pipeline] withDockerContainer
FJ Arch Slave 7 does not seem to be running inside a container
$ docker run -t -d -u 201:201 -v $WORKSPACE:/project -w ... lvthillo/aws-cli cat
$ docker top xx -eo pid,comm
[Pipeline] {
[Pipeline] sh
[test] Running shell script
+ ls
myfile.txt
[Pipeline] sh
[test] Running shell script
+ aws --version
aws-cli/1.14.57 Python/2.7.14 Linux/4.9.78-1-lts botocore/1.9.10
[Pipeline] }
$ docker stop --time=1 3652bf94e933cbc888def1eeaf89e1cf24554408f9e4421fabfd660012a53365
$ docker rm -f 3652bf94e933cbc888def1eeaf89e1cf24554408f9e4421fabfd660012a53365
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] cleanWs
[WS-CLEANUP] Deleting project workspace...[WS-CLEANUP] done
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
In your case you can mount your data into the container, do the work there, and in a later stage run your analysis on your code on the Jenkins slave itself (without Docker).
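If the tool really has to come from the host rather than the workspace, the same args option accepts any bind mount. A sketch reusing the lvthillo/aws-cli image from the example above; /opt/host-tools is just an assumed example path on the agent, not something from the question:
stage('Analysis') {
    agent {
        docker {
            image 'lvthillo/aws-cli'
            // bind mount an arbitrary host directory alongside the workspace mount
            args '-v /opt/host-tools:/opt/host-tools -v $WORKSPACE:/project'
            reuseNode true
        }
    }
    steps {
        // the host tool directory is now visible inside the container
        sh 'ls /opt/host-tools'
    }
}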
Assuming you are on Linux, run the following command:
docker run -it --rm -v /local_dir:/image_root_dir/mount_dir image_name
Here is some detail:
-it: interactive terminal
--rm: remove the container after you exit it
-v: volume, i.e. bind mount your local directory into the container
Since the mount will 'cover' the directory in your image, you should always make a new directory under your image's root directory.
Visit Use bind mounts to get more information.
PS:
Run
sudo -s
and type the password before you run docker; that saves you a lot of time, since you don't have to type sudo in front of docker every time.
PS2:
Suppose you have an image with a long name and the image ID is 5ed6274db6ce; you can refer to it by as little as the first three characters of the ID, or more:
docker run [options] 5ed
If more than one image shares the same first three characters, use four or more.
For example, you have following two images
REPOSITORY IMAGE ID
My_Image_with_very_long_name 5ed6274db6ce
My_Image_with_very_long_name2 5edc819f315e
you can simply run
docker run [options] 5ed6
to run the image My_Image_with_very_long_name.
