I am building a Docker image with the Docker Jenkins plugin, and after pushing it I would like to delete it.
Is it possible to use the plugin instead of "sh" commands?
I am aware that I can do sh "docker rmi", but I would like to use the plugin for this, just as I use it for the build.
Here are the steps so far:
stage("Docker Build&Push") {
dir("workingdir/dcd") {
def image
docker.withRegistry("https://${registry}", "${credentials}") {
image = docker.build('myImage')
image.inside {
sh 'echo "Hello workld! This is my Docker image"'
}
image.push("${version}")
//image.delete() ????
}
}
}
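For reference, the sh workaround I mentioned would look like this in the same withRegistry block (a sketch; image.id is the image name/tag property exposed by the Docker Pipeline plugin's Image object, and the plugin does not appear to offer a delete method):
image.push("${version}")
// fallback: no plugin method for removal, so use the CLI with the plugin's image id
sh "docker rmi ${image.id}"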
Thank you!
Related
How can I use an image built by kaniko in a Jenkins pipeline? I want to build the image, use that image to run tests for my app, and then push the image. With Docker that would look something like this:
steps {
    container('docker') {
        script {
            myimage = docker.build('image:tag')
        }
    }
    container('docker') {
        script {
            myimage.inside {
                sh 'pipenv run test'
            }
        }
    }
    # and somewhere below I would use `docker.withRegistry('some registry') { myimage.push() }`
}
I am not sure how to translate the myimage.inside part from Docker. With kaniko I have this:
steps {
    container('kaniko') {
        script {
            sh '/kaniko/executor --tarPath=myimage.tar --context . --no-push --destination myregistry:tag'
        }
    }
    container(???) {
        # how can I use that image from above to run my tests??
    }
    # and somewhere below I use `crane` to push the image.
}
Not sure if this is relevant, but the whole pipeline runs in a Kubernetes environment, so I want to avoid Docker-in-Docker (DinD).
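For reference, the crane push at the end could look like this (a sketch on my part; crane push takes a local tarball and a target reference, and the registry/tag names here are placeholders):
container('crane') {
    sh 'crane push myimage.tar myregistry:tag'
}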
I have a Jenkins pipeline running in a Docker container. My pipeline consists of three stages: Build, Test, and Deliver. Each stage uses an agent; the Build and Test stages work perfectly. However, for some reason the Deliver stage fails because the cdrx/pyinstaller-linux:python2 agent that runs the pyinstaller command can't find the source code in the mounted volume. I verified that the file exists and is in the correct location, yet when the job gets to the Deliver stage it fails to find add2vals.py. Any idea why this is happening? I'm baffled, miffed, and jaded.
Jenkinsfile Pipeline Script
pipeline {
    agent none
    options {
        skipStagesAfterUnstable()
    }
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'python:2-alpine'
                }
            }
            steps {
                sh 'python -m py_compile sources/add2vals.py sources/calc.py'
                stash(name: 'compiled-results', includes: 'sources/*.py*')
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'qnib/pytest'
                }
            }
            steps {
                sh 'py.test --junit-xml test-reports/results.xml sources/test_calc.py'
            }
            post {
                always {
                    junit 'test-reports/results.xml'
                }
            }
        }
        stage('Deliver') {
            agent any
            environment {
                VOLUME = '$(pwd)/sources:/src'
                IMAGE = 'cdrx/pyinstaller-linux:python2'
            }
            steps {
                dir(path: env.BUILD_ID) {
                    unstash(name: 'compiled-results')
                    sh "docker run --rm -v ${VOLUME} ${IMAGE} 'pyinstaller -F add2vals.py'"
                }
            }
            post {
                success {
                    archiveArtifacts "${env.BUILD_ID}/sources/dist/add2vals"
                    sh "docker run --rm -v ${VOLUME} ${IMAGE} 'rm -rf build dist'"
                }
            }
        }
    }
}
EDIT
After about two days of almost full-time research and attempts to resolve this issue, I've been unable to. As of now I think there is a high likelihood of this being a bug in Docker: the files in the mounted volume simply are not visible at the path on the container they are mounted to, plain and simple. So be advised; I will keep at it and update when I have something useful. If you encounter this, I highly suggest using DinD as opposed to the Docker CLI installed on a Jenkins container. Note this applies to a Windows 10 host with Docker Desktop installed, using Linux containers. Hope this is helpful for the time being.
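One diagnostic worth running (a sketch, not a confirmed fix): when Jenkins itself runs in a container, the $(pwd) handed to docker run -v is resolved against the Docker daemon's host filesystem, not the Jenkins container's, so the bind mount can silently point at an empty or nonexistent host path. A throwaway stage like this shows whether the two views of the path agree:
stage('Debug volume') {
    agent any
    steps {
        dir(path: env.BUILD_ID) {
            unstash(name: 'compiled-results')
            // what the Jenkins workspace sees
            sh 'pwd && ls sources'
            // what a container mounting the "same" path sees (alpine is only a probe image)
            sh 'docker run --rm -v "$(pwd)/sources:/src" alpine ls /src'
        }
    }
}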
So I'm trying to set up a pipeline in Jenkins to build images and push them to Docker Hub.
My credentials in Manage Jenkins have the ID "docker-hub-credentials" and seem to be picked up.
It can build, but it just doesn't get through the push. Help? I've been at this for hours and I'm not sure what I'm missing.
I've already tried using docker login, but Jenkins doesn't allow it.
stage('Build image') {
    /* This builds the actual image; synonymous to
     * docker build on the command line */
    bat 'docker build -t username/foldername:build .'
}
stage('Push image') {
    /* Finally, we'll push the image */
    docker.withRegistry('https://registry.hub.docker.com', 'docker-hub-credentials') {
        bat 'docker push username/foldername:build'
    }
}
I expect the image to be pushed, but I get this instead:
The push refers to repository [docker.io/username/foldername]
a73d7d9f4346: Preparing
964bdfb24a54: Preparing
1af124d739c9: Preparing
6cffeea81e5d: Preparing
614a79865f6d: Preparing
612d27bb923f: Preparing
ef68f6734aa4: Preparing
612d27bb923f: Waiting
ef68f6734aa4: Waiting
denied: requested access to the resource is denied
I found the answer!!!
stage('Push image') {
    withDockerRegistry([ credentialsId: "docker-hub-credentials", url: "" ]) {
        bat "docker push devopsglobalmedia/teamcitydocker:build"
    }
}
In the image push stage, you can do a docker login first and then push the image. Try the following for the docker login:
stage('Push image') {
    withCredentials([usernamePassword(credentialsId: 'docker-hub-credentials', usernameVariable: 'USER', passwordVariable: 'PASSWORD')]) {
        def registry_url = "registry.hub.docker.com/"
        bat "docker login -u $USER -p $PASSWORD ${registry_url}"
        docker.withRegistry("http://${registry_url}", "docker-hub-credentials") {
            // Push your image now
            bat "docker push username/foldername:build"
        }
    }
}
Make sure the registry URL is correct.
The withCredentials([usernamePassword(...)]) step above sets two environment variables, USER and PASSWORD, which hold the Docker registry credentials from the credentials ID docker-hub-credentials.
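Note: interpolating credential variables into a Groovy double-quoted string, as above, can expose them in the build log, and recent Jenkins versions warn about it. Letting the shell expand the variables avoids this; on Windows that means batch-style syntax (same placeholders as above):
bat 'docker login -u %USER% -p %PASSWORD% registry.hub.docker.com/'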
A better option is to use the Docker Pipeline plugin (it is among the recommended plugins).
node {
    checkout scm
    def dockerImage
    stage('Build image') {
        dockerImage = docker.build("username/repository:tag")
    }
    stage('Push image') {
        dockerImage.push()
    }
}
Doing it this way, you must specify the credentials of the Docker registry in the Pipeline Model Definition.
The Docker Pipeline plugin has problems applying the credentials assigned in the Pipeline Model Definition to projects with a multi-branch pipeline. That is, if using the above code you continue to receive the error:
denied: requested access to the resource is denied
Then you must specify the credentials in the Jenkinsfile as follows:
node {
    checkout scm
    def dockerImage
    stage('Build image') {
        dockerImage = docker.build("username/repository:tag")
    }
    stage('Push image') {
        docker.withRegistry('https://registry-1.docker.io/v2/', 'docker-hub-credentials') {
            dockerImage.push()
        }
    }
}
You can modify the URL to point to a custom registry if you need to.
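For example, a custom registry variant might look like this (the host name and credentials ID are placeholders):
docker.withRegistry('https://registry.example.com', 'my-registry-credentials') {
    dockerImage.push()
}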
I had the same issue recently; it was a wrong Docker registry URL.
The issue - https://index.docker.io/v1
The fix - https://index.docker.io/v1/
Yep, it was an issue; I was not able to push to the Docker registry.
I spent about half a day solving it. Please make sure the Docker Pipeline plugin is installed.
script {
    withDockerRegistry([ credentialsId: "Docker-Hub-Cred", url: "https://index.docker.io/v1/" ]) {
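        // your push steps go here, e.g. bat "docker push your-user/your-image:tag" (placeholder names)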
    }
}
If you want to push to a private repository, then follow the steps below:
1st: make sure your Docker Hub credentials are set up in Jenkins.
2nd: create a job as a Pipeline project.
3rd: push your project with a Jenkinsfile.
4th: now you can run the build in Jenkins.
I am giving the full Jenkinsfile below; it will help solve this permission-denied problem, and you can successfully create a Docker image and push it to Docker Hub.
pipeline {
    agent any
    stages {
        stage('Maven Install') {
            agent any
            steps {
                bat 'mvn -f pom.xml clean install'
            }
        }
        stage('Docker Build') {
            agent any
            steps {
                bat 'docker build -t dockerhub_username/repository_name/docker-jenkins-integration .'
            }
        }
        stage('Push image') {
            agent any
            steps {
                withDockerRegistry([ credentialsId: "id", url: "" ]) {
                    bat "docker tag dockerhub_username/repository_name/docker-jenkins-integration dockerhub_username/repository_name:docker-jenkins-integration"
                    bat "docker push dockerhub_username/repository_name:docker-jenkins-integration"
                }
            }
        }
    }
}
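Note the tag step before the push: Docker Hub repository names have only two levels (username/repository), so the three-segment name used at build time is retagged with the last segment as the tag before pushing.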
As of today, the Docker Pipeline plugin provides Groovy methods to log in to Docker Hub, build an image, and finally push it.
Below is a sample declarative pipeline:
pipeline {
    agent any
    stages {
        stage('Push container') {
            steps {
                echo "Workspace is $WORKSPACE"
                dir("$WORKSPACE/dir-to-Dockerfile") {
                    script {
                        def REGISTRY_URL = "https://index.docker.io/v1/"
                        def USER_NAME = "your-registry-username"
                        docker.withRegistry("$REGISTRY_URL", 'DockerHub-Cred-ID-Defined-In-Jenkins') {
                            def image = docker.build("$USER_NAME/some-image-name:${env.BUILD_ID}")
                            image.push()
                        }
                    }
                }
            }
        }
    }
}
I solved it by adding:
stage('Push Docker Image') {
    steps {
        withDockerRegistry([credentialsId: "docker_auth", url: "https://index.docker.io/v1/"]) {
            bat "docker push IMAGE_NAME:latest"
        }
    }
}
Why is docker not found when I use Docker as an agent in a Jenkins pipeline?
+ docker inspect -f . node:7-alpine
/var/jenkins_home/workspace/poobao-aws-services#tmp/durable-13f890b0/script.sh: 2: /var/jenkins_home/workspace/project-name#tmp/durable-13f890b0/script.sh: docker: not found
In Global Tool Configuration, I have docker set to install automatically.
I have docker set to install automatically, and use a declarative pipeline.
My Jenkinsfile then has this initialization stage (amended from here):
stage('Install dependencies') {
    steps {
        script {
            def dockerTool = tool name: 'docker', type: 'org.jenkinsci.plugins.docker.commons.tools.DockerTool'
            withEnv(["DOCKER=${dockerTool}/bin"]) {
                //stages
                //here we can trigger: sh "sudo ${DOCKER}/docker ..."
            }
        }
    }
}
When built, it then installs automatically...
I have a BookStore Spring Boot project that needs to be deployed through Jenkins. Docker is installed on my local machine (macOS) and the Jenkinsfile is created as follows:
pipeline
{
    agent
    {
        docker
        {
            image 'maven:3-alpine'
            //This exposes application through port 8081 to outside world
            args '-u root -p 8081:8081 -v /var/run/docker.sock:/var/run/docker.sock '
        }
    }
    stages
    {
        stage('Build')
        {
            steps
            {
                sh 'mvn -B -DskipTests clean package'
            }
        }
        stage('Test')
        {
            steps {
                //sh 'mvn test'
                sh 'echo "test"'
            }
            post {
                always {
                    //junit 'target/surefire-reports/*.xml'
                    sh 'echo "test"'
                }
            }
        }
        stage('Deliver for development')
        {
            when {
                branch 'development'
            }
            steps {
                sh './jenkins/scripts/deliver-for-development.sh'
                input message: 'Finished using the web site? (Click "Proceed" to continue)'
            }
        }
        stage('Deploy for production')
        {
            when {
                branch 'production'
            }
            steps {
                sh './jenkins/scripts/deploy-for-production.sh'
                input message: 'Finished using the web site? (Click "Proceed" to continue)'
            }
        }
        stage('Deliver') {
            when {
                branch 'production'
            }
            steps {
                sh 'bash ./jenkins/deliver.sh'
            }
        }
    }
}
I created a multi-branch pipeline in Jenkins, and when I try to run it, I get the following error:
/Users/Shared/Jenkins/Home/workspace/BookStore_master-VPWQ32ZZPV7CVOXNI4XOB3VSGH56MTF3W34KXKZFJKOBMSGLRZQQ#tmp/durable-70dd5a81/script.sh: line 2: docker: command not found
script returned exit code 127
This looks strange to me as Docker is available on the local machine, and the Global Tool Configuration section is also configured with the appropriate details as shown below. I looked into several posts and none of the solutions worked so far.
I faced the same issue on the Mac and the following answer helped me:
docker: command not found (mac mini) only happens in the Jenkins shell step but works from the command prompt.
The solution is to add the following lines to the /usr/local/Cellar/jenkins-lts/2.176.3/homebrew.mxcl.jenkins-lts.plist file so that Jenkins is able to find the docker command from the host machine:
<key>EnvironmentVariables</key>
<dict>
    <key>PATH</key>
    <string>/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Applications/Docker.app/Contents/Resources/bin/:/Users/Kh0a/Library/Group\ Containers/group.com.docker/Applications/Docker.app/Contents/Resources/bin</string>
</dict>
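After editing the plist, restart the service so the new PATH takes effect (assuming a Homebrew-managed install):
brew services restart jenkins-lts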
I had the same issue and was able to resolve it thanks to this thread: https://stackoverflow.com/a/50029962/6943587.
You need to specify the docker label, i.e. which agent(s) have Docker. There are two ways to do this, that I know of.
(Option 1 - preferred) Set docker label in Jenkinsfile
Set the agent as a docker image with the docker agent label.
// Jenkinsfile
pipeline {
    // Assign to docker agent(s) label, could also be 'any'
    agent {
        label 'docker'
    }
    stages {
        stage('Docker node test') {
            agent {
                docker {
                    // Set both label and image
                    label 'docker'
                    image 'node:7-alpine'
                    args '--name docker-node' // list any args
                }
            }
            steps {
                // Steps run in node:7-alpine docker container on docker agent
                sh 'node --version'
            }
        }
        stage('Docker maven test') {
            agent {
                docker {
                    // Set both label and image
                    label 'docker'
                    image 'maven:3-alpine'
                }
            }
            steps {
                // Steps run in maven:3-alpine docker container on docker agent
                sh 'mvn --version'
            }
        }
    }
}
(Option 2) Set docker label in configuration
Set the "docker label" in the Jenkins configuration under "Pipeline Model Definition", per the Jenkins docs here. This will only run the pipeline builds on agents with this label. Then you can create your pipeline like so...
// Jenkinsfile
pipeline {
    // "Top-level" agent is assigned to docker agents via Jenkins pipeline configuration
    agent none
    stages {
        stage('Docker node test') {
            agent {
                docker {
                    image 'node:7-alpine'
                    args '--name docker-node' // list any args
                }
            }
            steps {
                // Steps run in node:7-alpine docker container on docker agent
                sh 'node --version'
            }
        }
        stage('Docker maven test') {
            agent {
                docker {
                    image 'maven:3-alpine'
                }
            }
            steps {
                // Steps run in maven:3-alpine docker container on docker agent
                sh 'mvn --version'
            }
        }
    }
}
Hope this helps.
Option 1 is preferred over option 2 because the Jenkinsfile configures what machine(s) to run the docker agents on without relying on the Jenkins pipeline configuration, which could be deleted or edited in the future.
Since you have chosen the install automatically option in the Global Tool Configuration section, Jenkins will not look for docker on your system.
You can resolve this issue by unchecking the install automatically option for docker in the Global Tool Configuration section, then:
download the docker installer,
install it, and
give the path of the installer to Jenkins.
Example screenshot: setting up the docker installer path in Jenkins under Global Tool Configuration.
I was able to solve this by retrieving the Docker and Maven values from the Global Tool Configuration section and adding them to the environment PATH, as shown below.
Updated Jenkinsfile:
node {
    stage('Initialize')
    {
        def dockerHome = tool 'MyDocker'
        def mavenHome = tool 'MyMaven'
        env.PATH = "${dockerHome}/bin:${mavenHome}/bin:${env.PATH}"
    }
    stage('Checkout')
    {
        checkout scm
    }
    stage('Build')
    {
        sh 'uname -a'
        sh 'mvn -B -DskipTests clean package'
    }
    stage('Test')
    {
        //sh 'mvn test'
        sh 'ifconfig'
    }
    stage('Deliver')
    {
        sh 'bash ./jenkins/deliver.sh'
    }
}
There seems to be an issue with the automated docker installer; I encountered the same problem with Docker on CentOS 7.
I downloaded the Docker CLI executables from https://download.docker.com/linux/static/stable/x86_64/ and extracted them into the Jenkins docker volume on the host (/var/lib/docker/volumes/jenkins_home/_data/docker), then copied them from /var/jenkins_home/docker to /usr/bin using a shell on the Docker container.
After copying the executables, the build worked as expected.
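For reference, those steps might look roughly like this (the Docker version and the container name jenkins are examples, not exact values from the setup above):
# on the host: download the static CLI bundle and extract it into the Jenkins volume
curl -fsSLO https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz
tar -xzf docker-20.10.9.tgz -C /var/lib/docker/volumes/jenkins_home/_data
# the files now appear in the container under /var/jenkins_home/docker; copy them to /usr/bin
docker exec -u root jenkins cp -a /var/jenkins_home/docker/. /usr/bin/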
In my case I had docker command issues because I was using jenkins-lts, which itself runs as a Docker container. After trying to debug for quite a while, I realized that referencing the docker command from within a Docker container might be the issue. I stopped the jenkins-lts service, downloaded the jenkins.war file, and ran the same pipeline script; the docker command started working. My pipeline script uses agent any, and it still works in the jenkins.war version of Jenkins.
If you are on Windows, follow this guide:
https://www.katacoda.com/courses/jenkins/build-docker-images
Just apply the Unix/macOS line separator ("\n") to your ".sh" files in your code editor. It worked for me.
Add -v $(which docker):/usr/bin/docker when running the Jenkins container.
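That is, when starting the Jenkins container, mount the host's docker binary alongside the Docker socket (the socket mount is an extra assumption here; the CLI still needs a daemon to talk to). A sketch with the official jenkins/jenkins image:
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(which docker):/usr/bin/docker \
  jenkins/jenkins:lts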