Can't use specific images in Jenkinsfile - docker

I've got a problem with a pipeline in which I want to build ARM images.
So I wanted to use an arm32v7 image as the build agent:
pipeline {
    agent {
        docker {
            image 'arm32v7/docker:dind'
        }
    }
    stages {
        stage('Build atom images') {
            steps {
                // Building my images.
            }
        }
        stage('Push to registry') {
            agent any
            steps {
                script {
                    withDockerRegistry(credentialsId: 'cred', url: 'https://registry.custom') {
                        // Pushing images to registry.
                    }
                }
            }
        }
    }
}
But when the pipeline runs, here's what I get:
+ docker inspect -f . arm32v7/docker:dind
.
Failed to run top '81cc646256b727780420048da5ff10e5a3256510fc8a787137651941ee54d8a0'. Error: Error response from daemon: Container 81cc646256b727780420048da5ff10e5a3256510fc8a787137651941ee54d8a0 is not running
This happens with every image I choose. Can you help me with this? Am I missing something?

Related

Using container and Docker image for Windows and Linux task

I have a requirement to run specific stages on Linux and Windows agents. I can achieve this by putting an agent and label in every single stage, but then the pipeline looks for a new executor in every stage, which consumes a lot of time.
So I am trying to find a way to shorten the pipeline runtime by using the same Linux label for the Linux tasks and a Docker Windows image for the Windows tasks.
With only Linux tasks the pipeline works fine and no new executor is assigned, which indeed saves runtime. However, when there is a parameter to build the project, I am unable to invoke the Docker agent. I am appending a sample Jenkins pipeline below for quick reference.
Error: docker: not found.
I think this is because Docker is not available on the linux label. If I put an agent in every single stage it works fine, but starting the agent every time consumes a lot of time. So I want to achieve this with the same resources.
pipeline {
    agent {
        label 'Linux'
    }
    stages {
        stage('Linux') {
            steps {
                container('linux') {
                    script {
                        // some task
                    }
                }
            }
        }
        stage('env setup') {
            steps {
                container('linux') {
                    script {
                        // some task
                    }
                }
            }
        }
        stage('windows') {
            agent {
                docker {
                    label 'docker label'
                    image "docker image"
                    reuseNode true
                }
            }
            stages {
                stage('build') {
                    steps {
                        script {
                            // build with windows
                        }
                    }
                }
                stage('Push') {
                    steps {
                        script {
                            // push with windows
                        }
                    }
                }
            }
        }
        stage('Deploy') {
            steps {
                container('linux') {
                    script {
                        // deploy using linux
                    }
                }
            }
        }
    }
}

Jenkins Pipeline with Dockerfile configuration

I am struggling to get the right configuration for my Jenkins Pipeline.
It works, but I could not figure out how to separate the test & build stages.
Requirements:
Jenkins Pipeline with separated test & build stages
Test stage requires Chromium (I currently use a Node Alpine image + adding Chromium)
Build stage builds a Docker image, which is published later (publish stage)
Current Setup:
Jenkinsfile:
pipeline {
    environment {
        ...
    }
    options {
        ...
    }
    stages {
        stage('Restore') {
            ...
        }
        stage('Lint') {
            ...
        }
        stage('Build & Test DEV') {
            steps {
                script {
                    dockerImage = docker.build(...)
                }
            }
        }
        stage('Publish DEV') {
            steps {
                script {
                    docker.withRegistry(...) {
                        dockerImage.push()
                    }
                }
            }
        }
    }
}
Dockerfile:
FROM node:12.16.1-alpine AS build
#add chromium for unit tests
RUN apk add chromium
...
ENV CHROME_BIN=/usr/bin/chromium-browser
...
# works but runs both tests & build in the same jenkins stage
RUN npm run test-ci
RUN npm run build
...
This works, but as you can see, "Build & Test DEV" is a single stage,
and I would like to have 2 separate Jenkins stages (Test, Build).
I already tried using a Jenkins Docker agent and defining the image for the test stage inside the Jenkinsfile, but I don't know how to add the missing Chromium package there.
Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'node:12.16.1-alpine'
            // add chromium package here?
            // set Chrome_bin env?
        }
    }
I also thought about using a Docker image that already includes Chromium, but I couldn't find any official images.
Would really appreciate your help / insights on how to make this work.
You can either build your customized image (which includes the installation of Chromium), push it to a registry, and then pull it from that registry:
node {
    docker.withRegistry('https://my-registry') {
        docker.image('my-custom-image').inside {
            sh 'make test'
        }
    }
}
Or build the image directly with Jenkins from your Dockerfile:
node {
    def testImage = docker.build("test-image", "./dockerfiles/test")
    testImage.inside {
        sh 'make test'
    }
}
Builds test-image from the Dockerfile found at ./dockerfiles/test/Dockerfile.
Reference: Using Docker with Pipeline
In general I would execute the npm run commands inside the Groovy syntax and not inside the Dockerfile, so your code would look something like this:
pipeline {
    agent {
        docker {
            image 'node:12.16.1-alpine'
            args '-u root:root' // better would be to use sudo, but this should work
        }
    }
    stages {
        stage('Preparation') {
            steps {
                sh 'apk add chromium'
            }
        }
        stage('build') {
            steps {
                sh 'npm run build'
            }
        }
        stage('test') {
            steps {
                sh 'npm run test'
            }
        }
    }
}
I would also suggest that you collect the results within Jenkins with the Warnings NG Jenkins plugin.
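A minimal sketch of what that collection could look like, added as a post section at the end of the pipeline block above. It assumes the test script writes JUnit-style XML under reports/ and ESLint output under eslint-report.xml; both paths are illustrative, not from the question:
post {
    always {
        // collect JUnit-style test results (report path is an assumption)
        junit allowEmptyResults: true, testResults: 'reports/**/*.xml'
        // collect ESLint findings via the Warnings NG plugin's recordIssues step
        recordIssues tools: [esLint(pattern: '**/eslint-report.xml')]
    }
}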

Jenkinsfile - Agents questions

I have some questions about the following example:
pipeline {
    agent { label "docker" }
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'maven:3.5.0-jdk-8'
                }
            }
            steps {
                ...
            }
        }
    }
}
Question 1:
When I declare an agent at the top level of the Jenkinsfile, it will be used for all stages below. So what is the difference between:
agent { label "docker" }
and
agent {
    docker {
        image 'maven:3.5.0-jdk-8'
    }
}
Will the first one use a docker-labelled agent and the second use a Docker container with the Maven image as the execution environment? Where is the agent with label "docker" configured/installed?
Question 2:
How does the label tag work? I know that an agent has already been created somewhere, and with a label I just point to it, like in the example above: by default I use the "docker" agent? Does it also mean that during steps {...} this agent will be overridden by the Maven agent?
Question 3:
Last question, for the following example:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v ... -e ...'
        }
    }
    stages {
        stage('Maven Build') {
            steps {
                sh '...'
            }
        }
        stage('Docker push') {
            agent {
                docker 'openjdk:8-jdk-alpine'
            }
            steps {
                script {
                    docker.build("my-image")
                }
            }
        }
    }
    post {
        ...
    }
}
I want to build the first stage using a Docker container with the maven:3-alpine image. During the build the following error is printed:
...tmp/durable-54b54bdc/script.sh: line 1: docker: not found
So I modified this example; here is the working result:
pipeline {
    agent any
    stages {
        stage('Docker push') {
            steps {
                script {
                    docker.build("my-image")
                }
            }
        }
    }
}
How does agent any work in this case? Which agent can execute docker.build?
Answer 1:
agent { label "docker" }
This will try to find an agent with the label docker and execute the steps on that agent.
agent {
    docker {
        image 'maven:3.5.0-jdk-8'
    }
}
This will try to pull the Docker image maven:3.5.0-jdk-8, start a container from it, and execute the pipeline steps inside that container. If a Docker Label is set in Jenkins' global configuration, the container is started on an agent carrying that label.
Answer 2:
Defining the agent at the top level of the Pipeline ensures that an executor will be assigned on the agent labelled docker. To my knowledge, the Docker container will then be created on that agent and the steps will be executed inside the container.
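If you want to make that relationship explicit, a small sketch (the docker label and the Maven command are only illustrative) is to combine a top-level label with a stage-level Docker agent that reuses the node:
pipeline {
    agent { label 'docker' }              // allocate an executor on a node labelled "docker"
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'maven:3.5.0-jdk-8'
                    reuseNode true        // start the container on the node acquired above
                }
            }
            steps {
                sh 'mvn -B clean package'
            }
        }
    }
}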
Answer 3:
The reason could be that you have not configured a Docker Label in Jenkins' global configuration, so the task was executed on the master, where Docker is not installed. The reason the other one works is probably that the job landed on an agent where Docker is installed.
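If you want to be sure docker.build runs on a node that actually has the Docker CLI, one option (a sketch, assuming such a node carries the label docker) is to pin that stage explicitly:
stage('Docker push') {
    agent { label 'docker' }   // a node where the Docker CLI is installed
    steps {
        script {
            docker.build("my-image")
        }
    }
}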

How to save Docker volume from within Cloudbees Pipeline in case of fail

I run a set of API tests in a Docker container started by a Jenkins pipeline stage (CloudBees plugin).
I would like to save the logs of the tests in case the stage (see below) fails.
I tried to do it with a post-action in a later stage, but then I no longer have access to the image.
How would you approach this problem? How can I save the image in case of a failure?
stage('build Dockerimage and run API-tests') {
    steps {
        script {
            def apitestimage = docker.build('apitestimage', '--no-cache=true dockerbuild')
            apitestimage.inside('-p 5800:5800') {
                dir('testing') {
                    sh 'ctest -V'
                }
            }
            sh 'docker rmi --force apitestimage'
        }
    }
}
Use a post { failure { ... } } block to archive the data of the failing stage directly within that stage, not later.
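A minimal sketch of what that could look like for the stage above, assuming the ctest output ends up somewhere under the workspace-mounted testing/ directory (the archive pattern is an assumption, not from the question):
stage('build Dockerimage and run API-tests') {
    steps {
        script {
            def apitestimage = docker.build('apitestimage', '--no-cache=true dockerbuild')
            apitestimage.inside('-p 5800:5800') {
                dir('testing') {
                    sh 'ctest -V'
                }
            }
            sh 'docker rmi --force apitestimage'
        }
    }
    post {
        failure {
            // keep whatever the tests left behind in the workspace-mounted testing/ directory
            archiveArtifacts artifacts: 'testing/**', allowEmptyArchive: true
        }
    }
}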

How to use an environment variable in the agent section of a Jenkins Declarative Pipeline?

I'm building a Docker image for an application based on Node.js where some of the dependencies require an NPM token for a private NPM registry, but when building the image the variable containing the token is null, e.g.
docker build -t 3273e0bfe8dd329a96070382c1c554454ca91f96 --build-args NPM_TOKEN=null -f Dockerfile
a simplified pipeline is:
pipeline {
    environment {
        NPM_TOKEN = credentials('npm-token')
    }
    agent {
        dockerfile {
            additionalBuildArgs "--build-args NPM_TOKEN=${env.NPM_TOKEN}"
        }
    }
    stages {
        stage('Lint') {
            steps {
                sh 'npm run lint'
            }
        }
    }
}
Is there a way to use the environment variable in that section, or is it not currently supported?
BTW, I've followed the suggestions in Docker and private modules on how to use an NPM token to build a Docker image.
This is definitely a bug with the declarative pipeline. You can track the issue related to this here: https://issues.jenkins-ci.org/browse/JENKINS-42369
If you move away from the declarative pipeline and use a scripted pipeline instead, this won't occur, although your Jenkinsfile will be "wordier".
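A rough scripted-pipeline equivalent could look like the sketch below. It assumes 'npm-token' is a Secret text credential and that the image name my-app and the lint step match your project; neither comes from the question:
node {
    checkout scm
    withCredentials([string(credentialsId: 'npm-token', variable: 'NPM_TOKEN')]) {
        // the agent's shell expands $NPM_TOKEN when the docker build command runs,
        // so the secret is not interpolated into the Groovy string
        def image = docker.build('my-app', '--build-arg NPM_TOKEN=$NPM_TOKEN .')
        image.inside {
            sh 'npm run lint'
        }
    }
}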
I found a solution for this. Use the credentials manager to add NPM_TOKEN. Then you can do:
pipeline {
    agent {
        docker {
            image 'node:latest'
            args '-e NPM_TOKEN=$NPM_TOKEN'
        }
    }
    stages {
        stage('npm install') {
            steps {
                sh 'npm install'
            }
        }
        stage('static code analysis') {
            steps {
                sh 'npx eslint .'
            }
        }
    }
}
I came up with a workaround for this, and it still uses the declarative pipeline.
I'm using this technique to download private GitHub repos with pip.
// Workarounds for https://issues.jenkins-ci.org/browse/JENKINS-42369
// Warning: The secret will show up in your build log, and possibly be in your docker image history as well.
// Don't use this if you have a super-confidential codebase
def get_credential(name) {
    def v;
    withCredentials([[$class: 'StringBinding', credentialsId: name, variable: 'foo']]) {
        v = env.foo;
    }
    return v
}
def get_additional_build_args() {
    return "--build-arg GITHUB_ACCESS_TOKEN=" + get_credential("mysecretid")
}
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile.test'
            additionalBuildArgs get_additional_build_args()
        }
    }
    // ... stages as usual ...
}
