How can I use an image built by kaniko in a Jenkins pipeline? I want to build the image, use it to run my app's tests, and then push it. With Docker that would look something like this:
steps {
    container('docker') {
        script {
            myimage = docker.build('image:tag')
        }
    }
    container('docker') {
        script {
            myimage.inside {
                sh 'pipenv run test'
            }
        }
    }
    // and somewhere below I would use `docker.withRegistry('some registry') { myimage.push() }`
}
I am not sure how to translate the myimage.inside part from the Docker approach. With kaniko I have this:
steps {
    container('kaniko') {
        script {
            sh '/kaniko/executor --tarPath=myimage.tar --context . --no-push --destination myregistry:tag'
        }
    }
    container(???) {
        // how can I use the image built above to run my tests??
    }
    // and somewhere below I use `crane` to push the image.
}
Not sure if this is relevant, but the whole pipeline runs in a Kubernetes environment, so I want to avoid Docker-in-Docker (DinD).
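One pattern that avoids DinD entirely is to push the tarball under a throwaway tag with crane, run the tests in a fresh pod that pulls that tag, and only promote the tag once the tests pass. A rough sketch, not a verified pipeline: the 'crane' and 'candidate' container names, the myregistry/myimage repository, and the candidate/tested tags are all placeholders of mine:
stages {
    stage('Build and push candidate') {
        steps {
            container('kaniko') {
                sh '/kaniko/executor --context . --no-push --tarPath=myimage.tar --destination myregistry/myimage:candidate'
            }
            container('crane') {
                // push the tarball under a temporary tag so that a later pod can pull it
                sh 'crane push myimage.tar myregistry/myimage:candidate'
            }
        }
    }
    stage('Test') {
        // this stage needs its own pod template that declares a container
        // named 'candidate' running the image myregistry/myimage:candidate;
        // the pod must start after the push, hence the separate stage
        steps {
            container('candidate') {
                sh 'pipenv run test'
            }
        }
    }
    stage('Promote') {
        steps {
            container('crane') {
                // re-tag in the registry only after the tests pass
                sh 'crane tag myregistry/myimage:candidate tested'
            }
        }
    }
}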
Related
I am building a Docker image with the Docker Jenkins plugin, and after I push it I would like to delete it.
Is there a way to use the plugin instead of sh commands?
I am aware that I can do sh "docker rmi", but I would like to use the plugin, as I already use it for the build.
Here are the steps so far:
stage("Docker Build&Push") {
dir("workingdir/dcd") {
def image
docker.withRegistry("https://${registry}", "${credentials}") {
image = docker.build('myImage')
image.inside {
sh 'echo "Hello workld! This is my Docker image"'
}
image.push("${version}")
//image.delete() ????
}
}
}
Thank you!
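For what it's worth, I don't believe the plugin's Image object exposes a delete method, so a common fallback is to shell out for the removal only. A sketch, assuming the id property of the image returned by docker.build:
docker.withRegistry("https://${registry}", "${credentials}") {
    def image = docker.build('myImage')
    image.push("${version}")
    // the Docker Pipeline plugin has no delete()/rmi step, so remove the
    // local copy via the CLI using the name:tag stored in image.id
    sh "docker rmi ${image.id}"
}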
Suppose I have a dockerized pipeline with multiple steps. The Docker container is defined at the beginning of the Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'gradle:latest'
        }
    }
    stages {
        // multiple stages, all executed in the 'gradle' container
    }
    post {
        always {
            sh 'git whatever-command' // will not work in the 'gradle' container
        }
    }
}
I would like to execute some git commands in a post-build action. The problem is that the gradle image does not have a git executable:
script.sh: line 1: git: command not found
How can I execute it on the Docker host while still using the gradle container for all the other build steps? Of course, I do not want to explicitly specify a container for each step, only for that specific post-build action.
OK, below is my working solution. It groups multiple stages (Build and Test) inside a single dockerized stage (Dockerized gradle), with a single workspace reused between the Docker host and the Docker container (see the reuseNode docs):
pipeline {
    agent {
        // the code will be checked out on one of the available docker hosts
        label 'docker'
    }
    stages {
        stage('Dockerized gradle') {
            agent {
                docker {
                    reuseNode true // <-- the most important part
                    image 'gradle:6.5.1-jdk11'
                }
            }
            stages {
                // stages in this block will be executed inside the gradle container
                stage('Build') {
                    steps {
                        script {
                            sh "gradle build -x test"
                        }
                    }
                }
                stage('Test') {
                    steps {
                        script {
                            sh "gradle test"
                        }
                    }
                }
            }
        }
        stage('Cucumber Report') {
            // this stage will be executed on the docker host labeled 'docker'
            steps {
                cucumber 'build/cucumber.json'
            }
        }
    }
    post {
        always {
            sh 'git whatever-command' // this also works outside the 'gradle' container and reuses the original workspace
        }
    }
}
I have a Jenkins pipeline running in a Docker container. My pipeline consists of three stages: Build, Test, and Deliver. Each stage makes use of an agent, and the Build and Test stages work perfectly. However, for some reason the Deliver stage fails: the cdrx/pyinstaller-linux:python2 agent that runs the pyinstaller command can't find the source code in the mounted volume. I verified that the file exists and is in the correct location, yet when the job gets to the Deliver stage it fails to find add2vals.py. Any idea why this is happening? I'm baffled, miffed, jaded.
Jenkinsfile Pipeline Script
pipeline {
    agent none
    options {
        skipStagesAfterUnstable()
    }
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'python:2-alpine'
                }
            }
            steps {
                sh 'python -m py_compile sources/add2vals.py sources/calc.py'
                stash(name: 'compiled-results', includes: 'sources/*.py*')
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'qnib/pytest'
                }
            }
            steps {
                sh 'py.test --junit-xml test-reports/results.xml sources/test_calc.py'
            }
            post {
                always {
                    junit 'test-reports/results.xml'
                }
            }
        }
        stage('Deliver') {
            agent any
            environment {
                VOLUME = '$(pwd)/sources:/src'
                IMAGE = 'cdrx/pyinstaller-linux:python2'
            }
            steps {
                dir(path: env.BUILD_ID) {
                    unstash(name: 'compiled-results')
                    sh "docker run --rm -v ${VOLUME} ${IMAGE} 'pyinstaller -F add2vals.py'"
                }
            }
            post {
                success {
                    archiveArtifacts "${env.BUILD_ID}/sources/dist/add2vals"
                    sh "docker run --rm -v ${VOLUME} ${IMAGE} 'rm -rf build dist'"
                }
            }
        }
    }
}
EDIT
After about two days of almost full-time research and attempts to resolve this issue, I've been unable to. As of now I think there is a high likelihood of this being a bug in Docker. The files in the mounted volume simply are not visible at the path on the container they are mounted to, plain and simple. So be advised; I will keep at it and update when I have something useful. If you encounter this, I highly suggest using DinD as opposed to the Docker CLI installed on a Jenkins container. Note that this applies to a Windows 10 host with Docker Desktop installed, using Linux containers. Hope this is helpful for the time being.
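For reference, a commonly cited cause of exactly this symptom (an assumption on my part, not a confirmed diagnosis of this build): when Jenkins itself runs in a container and reaches the daemon through the mounted Docker socket, the -v source path is resolved against the host filesystem, not against the Jenkins container's filesystem. If the workspace path only exists inside the Jenkins container, Docker silently mounts a fresh empty directory. A quick diagnostic sketch:
steps {
    // $(pwd) expands inside the Jenkins container, e.g. /var/jenkins_home/workspace/job,
    // but the daemon resolves the -v source path on the HOST; if it does not
    // exist there, the sibling container sees an empty /src
    sh 'docker run --rm -v "$(pwd)/sources:/src" alpine ls -la /src'
}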
Is it possible to specify a Dockerfile as the basis for a container job, instead of a reference to an already-built container?
This is possible with Jenkins, and it is a much-appreciated feature:
pipeline {
    stages {
        stage("My Stage") {
            agent {
                dockerfile {
                    filename 'Dockerfile'
                }
            }
            steps {
                //
            }
        }
    }
}
I would suggest something like:
container:
  dockerfile: Dockerfile
I am using the declarative format for my pipeline files, running inside a Docker container that is defined by a Dockerfile in my project's root directory.
My Jenkinsfile looks like this:
pipeline {
    agent {
        dockerfile {
            additionalBuildArgs '--network host'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'pytest --version'
            }
        }
    }
}
I would like to pass additional arguments to the docker run command, similar to this question: How to pass docker container arguments when running the image in a Jenkinsfile.
Is it possible to do that in the declarative pipeline format, or should I switch?
Edit:
This is essentially the equivalent of what I am trying to do in a non-declarative (scripted) pipeline:
node {
    def pytestImage = docker.build('pytest-image:latest', '--network host .')
    pytestImage.inside('--network=host') {
        sh 'pytest --version'
        // other commands ...
    }
}
You can add the args option to your dockerfile agent. It passes the arguments directly to the docker run invocation:
pipeline {
    agent {
        dockerfile {
            additionalBuildArgs '--network host'
            args '--network=host'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'pytest --version'
            }
        }
    }
}
More information is in the agent section of the Jenkins Pipeline Syntax documentation.
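As a usage note, args accepts any flags that docker run understands, so networking can be combined with, for example, a mount (the /tmp/cache path here is just an illustration of mine):
agent {
    dockerfile {
        // everything in args is appended to the docker run command line
        args '--network=host -v /tmp/cache:/tmp/cache'
    }
}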