Why is Jenkins not using the correct image for docker builds?

No matter what I try, I seem to be unable to get a declarative pipeline to build my project inside a Docker container with the correct image.
I have verified the following:
Jenkins does build the correct image (based on messages in the log)
When I build the image manually, it is built correctly
When building the project inside a container with the correct image, the build succeeds
The Jenkins steps do run in a container with some image.
As far as I can tell, Jenkins simply uses the base image and not the correct one built from the Dockerfile I specify.
Things I've tried:
Let Jenkins figure it out
pipeline {
    agent { dockerfile true }
Using docker at the top level:
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile'
            reuseNode true
        }
    }
    stages {
        stage('configure') {
            steps {
Use docker in each step
pipeline {
    agent none
    stages {
        stage('configure') {
            agent {
                dockerfile {
                    filename 'Dockerfile'
                    reuseNode true
                }
            }
            steps {
The examples are abbreviated because of their number. Docker is not mentioned anywhere outside of the areas shown, and simply removing the docker parts and using a regular agent works fine.
Logs
The logs are useless. They simply state that the image is built, verify that it exists, and then fail to execute commands that have just been installed (meson in this case).

First of all, I suggest you to read:
The Pipeline syntax for the agent declaration and in particular the dockerfile section
This basic example of how to use dockerfile. Start with the minimal pipeline with agent { dockerfile true } and please show us the logs for that, for example.
Without any logs or a more detailed explanation on your setup, it is difficult to help you.
I can certainly tell you that the second try is wrong because
reuseNode is valid for docker and dockerfile, and only has an effect when used on an agent for an individual stage.
Instead, I am not sure how the third try could ever work: with agent none you are forcing each stage to have an agent section, but in the stage's agent section you have the reuseNode option set to true. Isn't that a contradiction? How could you reuse a top-level node if it does not exist?
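For reference, here is a sketch of how reuseNode is normally combined with a top-level agent (the label 'my-node' is a placeholder):

```groovy
pipeline {
    // Top-level agent: this is the node that reuseNode refers back to
    agent { label 'my-node' }
    stages {
        stage('configure') {
            agent {
                dockerfile {
                    filename 'Dockerfile'
                    // Reuse the workspace of the top-level node instead of
                    // allocating a fresh one for the container
                    reuseNode true
                }
            }
            steps {
                // Runs inside a container built from the Dockerfile
                sh 'meson --version'
            }
        }
    }
}
```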
I know it is not an answer, but it is also too long to stay in the comments in my opinion.

I always use it like this, with a pre-built image:
pipeline {
    agent {
        docker { image 'node:16-alpine' }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}
But I can only guess what you want to do inside the docker environment.
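If the goal is to build from the repository's Dockerfile rather than a pre-built image, the dockerfile-based equivalent would look like this (a minimal sketch, assuming the Dockerfile sits at the workspace root):

```groovy
pipeline {
    agent {
        dockerfile {
            // Path is relative to the workspace root
            filename 'Dockerfile'
        }
    }
    stages {
        stage('Test') {
            steps {
                // Runs inside a container built from that Dockerfile
                sh 'meson --version'
            }
        }
    }
}
```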

Related

Conditionally use docker agent or different agentLabel based on user choice in jenkins pipeline

I want to be able to use an AWS node to run our Jenkins build stages, or one of our PCs connected to hardware that can also test the systems. We have some PCs installed that have a recognised agent, and for the AWS node I want to run the steps in a docker container.
Based on this, I would like to decide whether to use the docker agent or the custom agent based on parameters passed in by the user at build time. My jenkinsfile looks like this:
#!/usr/bin/env groovy
@Library('customLib')
import groovy.transform.Field
pipeline {
    agent none
    parameters {
        choice(name: 'NODE', choices: ['PC1', 'PC2', 'dockerNode', 'auto'], description: 'Select which node to run on? Select auto for auto assignment.')
    }
    stages {
        stage('Initialise build environment') {
            agent { // agent
                docker {
                    image 'docker-image'
                    label 'docker-agent'
                    registryUrl 'url'
                    registryCredentialsId 'dockerRegistry'
                    args '--entrypoint=\'\''
                    alwaysPull true
                }
            }
            steps {
                script {
                    if (params.NODE != 'auto' && params.NODE != 'dockerNode') {
                        agentLabel = params.NODE + ' && ' + 'custom_string' // this will lead to running the stages on our PCs
                    } else {
                        agentLabel = 'docker-agent'
                    }
                    env.NODE = params.NODE
                }
            }
        } // initialise build environment
        stage('Waiting for available node') {
            agent { label agentLabel } // if auto or dockerNode is chosen, I want to run in the docker container defined above; otherwise, use the PC.
            post { cleanup { cleanWs() } }
            stages {
                stage('Import and Build project') {
                    steps {
                        script {
                            ...
                        }
                    }
                } // Build project
                ...
            }
        } // Waiting for available node
    }
}
Assuming the PC option works fine if I don't add this docker option, and the docker option also works fine on its own, how do I add a label to my custom docker agent and use it conditionally like above? Running this Jenkinsfile, I get the message 'There are no nodes with the label ‘docker-agent’', which seems to say that it can't find the label I defined.
Edit: I worked around this problem by adding two stages that run a when block that decides whether the stage runs or not, with a different agent.
Pros: I'm able to choose where the steps are run.
Cons:
The inner build and archive stages are identical, so they are simply repeated in each outer stage. (This may become a pro if I want to run a separate HW test stage on only one of the agents.)
Both outer stages always run, which means the docker container is always pulled from the registry even if it is not used (i.e. if we choose to run the build process on a PC)
Seems like stages do not lend themselves easily to being encapsulated into functions.
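The workaround described above, with two stages guarded by when blocks, looks roughly like this (stage bodies and most docker settings are abbreviated; the condition mirrors the one in the Initialise stage):

```groovy
stage('Build on PC') {
    when {
        // Only run when a physical PC was selected
        expression { params.NODE != 'auto' && params.NODE != 'dockerNode' }
    }
    agent { label params.NODE + ' && custom_string' }
    steps {
        // ... identical build and archive steps ...
    }
}
stage('Build in Docker') {
    when {
        expression { params.NODE == 'auto' || params.NODE == 'dockerNode' }
    }
    agent {
        docker {
            image 'docker-image'
            label 'docker-agent'
            // ... registry settings as above ...
        }
    }
    steps {
        // ... identical build and archive steps ...
    }
}
```

Adding beforeAgent true inside each when block should also address the second con: the condition is then evaluated before the agent is allocated, so the docker image is not pulled when a stage is skipped.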

Jenkinsfile: Create a stage with an agent that only conditionally runs

I have a Jenkinsfile where I am trying to:
Conditionally build a docker image to allow for testing
Conditionally run the tests using that docker image as my agent
The conditional build is working without a problem. It builds an image that I have assigned to the variable MYIMAGE shown below.
But the conditional running of the tests does not work because it tries to find an agent before it tests for the expression, but the agent is not there because the image has not been pushed.
In this case, I am triggering it with a PR comment of "test". But I cannot even get that far, because the initial build does not pass: it reaches the stage shown below -- which should be skipped, since the trigger text is not present -- and pauses there, trying to find a non-existent Docker image with which to create an agent. So, even though this step should be skipped entirely, it hangs here.
Here is that snippet of code:
stage('Test image') {
    when {
        expression {
            return getTriggerText() == 'test'
        }
    }
    agent {
        docker {
            image MYIMAGE
            label MYLABEL
        }
    }
    steps {
        script {
            sh 'python -m pytest tests/test_toyexample.py'
            pullRequest.comment("Ran tests on image: $MYIMAGE")
        }
    }
}
And I am quite confident it is the agent portion that is messing things up: I have a very similar structure for skipping the image build except that it uses the standard agent, and that runs fine... it gets skipped.
Question
Is there a way to use a when expression like what I have above, even with an agent that does not exist? Keep in mind, this agent does exist if the step is not being skipped... it is only missing when the step is supposed to be skipped.
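One thing worth trying here (a sketch, not tested against this exact setup): declarative pipelines support a beforeAgent option inside when, which evaluates the condition before the stage agent is allocated, so the Docker image is never looked up when the stage is skipped:

```groovy
stage('Test image') {
    when {
        // Evaluate the condition first; only allocate the docker agent
        // (and thus resolve the image) when it returns true
        beforeAgent true
        expression { return getTriggerText() == 'test' }
    }
    agent {
        docker {
            image MYIMAGE
            label MYLABEL
        }
    }
    steps {
        script {
            sh 'python -m pytest tests/test_toyexample.py'
            pullRequest.comment("Ran tests on image: $MYIMAGE")
        }
    }
}
```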

Creating a Python Pipeline on Jenkins but get access denied from docker

I've created a Jenkinsfile in my Git repository that is defined as this:
pipeline {
    //None parameter in the agent section means that no global agent will be allocated for the entire Pipeline’s
    //execution and that each stage directive must specify its own agent section.
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    //This image parameter (of the agent section’s docker parameter) downloads the python:3.8
                    //Docker image and runs this image as a separate container. The Python container becomes
                    //the agent that Jenkins uses to run the Build stage of the Pipeline project.
                    image 'python:3.8.3'
                }
            }
            steps {
                //This sh step runs pip to install the application's dependencies
                sh 'pip install -r requirements.txt'
            }
        }
    }
}
When I tried to run the job with this Pipeline, I got the following error:
I also tried to use image python:latest but this option didn't work either.
Can someone explain this to me? :)
Go to Computer Management -> Local Users and Groups and make sure the user used by Jenkins is added to the docker-users group.
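The step above is for Windows hosts; on a Linux host the equivalent is adding the user that runs Jenkins to the docker group (assuming the service user is called jenkins):

```shell
# Add the jenkins user to the docker group so it can talk to the Docker daemon
sudo usermod -aG docker jenkins
# Restart Jenkins so the new group membership takes effect
sudo systemctl restart jenkins
```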

Jenkins Docker pipeline stuck on "Waiting for next available executor"

In my project I have a Jenkins pipeline, which should execute two stages on a provided Docker image, and a third stage on the same machine but outside the container. Running this third stage on the same machine is crucial, because the previous stages produce some output that is needed later. These files are stored on the machine through mounted volumes.
In order to be sure these files are accessible in the third stage, I manually select a specific node. Here is my pipeline (modified a little, because it's from work):
pipeline {
    agent {
        docker {
            label 'jenkins-worker-1'
            image 'custom-image:1.0'
            registryUrl 'https://example.com/registry'
            args '-v $HOME/.m2:/root/.m2'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'mvn test'
            }
        }
        stage('Package') {
            steps {
                sh 'mvn package'
                sh 'mv target workdir/'
            }
        }
        stage('Upload') {
            agent {
                node {
                    label 'jenkins-worker-1'
                }
            }
            steps {
                sh 'uploader.sh workdir'
            }
        }
    }
}
The node is preconfigured for uploading, so I can't simply upload built target from Docker container, it has to be done from the physical machine.
And here is my problem: while the first two stages work perfectly fine, the third stage cannot start; "Waiting for next available executor" suddenly appears in the logs. It's obvious the node is waiting for itself, and I cannot use another machine. It looks like Docker is blocking something and Jenkins thinks the node is busy, so it waits eternally.
I look for a solution, that will allow me to run stages both in and outside the container, on the same machine.
Apparently the nested stages feature would solve this problem, but unfortunately it's only available since version 1.3 of the pipeline plugin, and my node has 1.2.9.
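If upgrading is an option, another approach worth sketching (assuming the installed plugin supports the reuseNode option on docker agents) is to pin the whole pipeline to the worker node and run only the container-based stages in Docker, reusing that node so its executor is never blocked waiting for itself:

```groovy
pipeline {
    // Pin everything to the machine that holds the mounted volumes
    agent { label 'jenkins-worker-1' }
    stages {
        stage('Test') {
            agent {
                docker {
                    image 'custom-image:1.0'
                    registryUrl 'https://example.com/registry'
                    args '-v $HOME/.m2:/root/.m2'
                    // Run the container on the already-allocated node instead
                    // of requesting a second executor on the same machine
                    reuseNode true
                }
            }
            steps {
                sh 'mvn test'
            }
        }
        stage('Upload') {
            // No agent section: runs directly on jenkins-worker-1,
            // outside the container, with access to the workspace files
            steps {
                sh 'uploader.sh workdir'
            }
        }
    }
}
```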

How to run unit tests in Jenkins in separate Docker containers?

In our codebase, we have multiple neural networks (classification, object detection, etc.) for which we have written some unit tests which we want to run in Jenkins at some specified point (the specific point is not relevant, e.g. whenever we merge some feature branch in the master branch).
The issue is that due to external constraints, each neural net needs a different version of keras/tensorflow and a few other packages, so we cannot run them all in the same Jenkins environment. The obvious solution to this is Docker containers (we have specialized Docker images for each one) and ideally we would want to tell Jenkins to execute each unit test in a Docker container that we specify beforehand.
Does anyone know how to do that with Jenkins? I searched online, but the solutions I have found seem a bit hacky to me.
It looks like a candidate for Jenkins pipelines, especially docker agents
pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Front-end') {
            agent {
                docker { image 'node:7-alpine' }
            }
            steps {
                sh 'node --version'
            }
        }
    }
}
This allows the actual Jenkins agent to spin up a docker container to do the work. You say you have images already, so you're most of the way there.
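Applied to the setup described in the question, with a separate pre-built image per network, this pattern could look like the following sketch (the image names and test paths are placeholders):

```groovy
pipeline {
    agent none
    stages {
        stage('Classification tests') {
            agent {
                // Hypothetical image carrying the keras/tensorflow pin for this net
                docker { image 'myregistry/classification-env:latest' }
            }
            steps {
                sh 'python -m pytest tests/classification'
            }
        }
        stage('Object detection tests') {
            agent {
                // A different image with this net's package versions
                docker { image 'myregistry/detection-env:latest' }
            }
            steps {
                sh 'python -m pytest tests/detection'
            }
        }
    }
}
```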
