process apparently never started in /home/jenkins/workspace/developer-console#tmp/durable-28a71889 - docker

I couldn't find a solution anywhere on the internet.
I'm following this tutorial on building a Node.js and React app with Jenkins: https://jenkins.io/doc/tutorials/build-a-node-js-and-react-app-with-npm/#fork-sample-repository
During the build, I got this error:
process apparently never started in
/home/jenkins/workspace/developer-console#tmp/durable-28a71889
(running Jenkins temporarily with
-Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true
might make the problem clearer)
script returned exit code -2
My Jenkinsfile looks like this:
pipeline {
    agent {
        docker {
            image 'node:6-alpine'
            args '-p 3000:3000'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
    }
}
Do you know how I could fix it? While the build is running, I can see that the node:6-alpine container is created and running without errors.
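For reference, the diagnostics switch mentioned in the error message can apparently also be toggled at runtime from the Jenkins script console instead of restarting Jenkins with the -D flag. A minimal sketch, assuming the Durable Task plugin in use exposes this field:

// Manage Jenkins > Script Console; turns on the launch diagnostics
// referenced in the error message until the next restart
org.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS = true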

Related

Jenkins job getting stuck on execution of docker image as the agent

I have installed Jenkins and Docker inside a VM. I am using a Jenkins pipeline project, and my declarative pipeline looks like this:
pipeline {
    agent {
        docker { image 'node:7-alpine' }
    }
    stages {
        stage('Test') {
            steps {
                echo 'Hello Nodejs'
                sh 'node --version'
            }
        }
    }
}
It is a very basic pipeline, following this guide: https://jenkins.io/doc/book/pipeline/docker/
When I build the Jenkins job, it prints Hello Nodejs but gets stuck on the next instruction, i.e. the execution of the shell command. After 5 minutes, the job fails with this error:
process apparently never started in /var/lib/jenkins/workspace/MyProject#tmp/durable-c118923c
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
ERROR: script returned exit code -2
I don't understand why it is not executing the sh command.
If I change it to agent any, it executes the sh command.
I am not sure that it will help, but I remember that the node image is launched under the root account by default, while Jenkins uses its own user ID when launching a container. So it's probably a permissions issue. Try adding the -u 0 argument:
agent {
    docker {
        image 'node:7-alpine'
        args '-u 0'
    }
}
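Applied to the node:6-alpine pipeline from the question above, the agent block would look roughly like this (a sketch, assuming the same permissions issue is what breaks the durable task there too):

agent {
    docker {
        image 'node:6-alpine'
        // run the container as root so the durable-task wrapper can write to the workspace
        args '-p 3000:3000 -u 0'
    }
}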

Jenkins Docker pipeline stuck on "Waiting for next available executor"

In my project I have a Jenkins pipeline which should execute two stages on a provided Docker image, and a third stage on the same machine but outside the container. Running this third stage on the same machine is crucial, because the previous stages produce some output that is needed later. These files are stored on the machine through mounted volumes.
In order to be sure these files are accessible in the third stage, I manually select a specific node. Here is my pipeline (modified a little, because it's from work):
pipeline {
    agent {
        docker {
            label 'jenkins-worker-1'
            image 'custom-image:1.0'
            registryUrl 'https://example.com/registry'
            args '-v $HOME/.m2:/root/.m2'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'mvn test'
            }
        }
        stage('Package') {
            steps {
                sh 'mvn package'
                sh 'mv target workdir/'
            }
        }
        stage('Upload') {
            agent {
                node {
                    label 'jenkins-worker-1'
                }
            }
            steps {
                sh 'uploader.sh workdir'
            }
        }
    }
}
The node is preconfigured for uploading, so I can't simply upload the built target from the Docker container; it has to be done from the physical machine.
And here is my problem: while the first two stages work perfectly fine, the third stage cannot start, because "Waiting for next available executor" suddenly appears in the logs. The node is obviously waiting for itself, and I cannot use another machine. It looks like Docker is blocking something, so Jenkins thinks the node is busy and waits forever.
I'm looking for a solution that will allow me to run stages both inside and outside the container, on the same machine.
Apparently the nested stages feature would solve this problem, but unfortunately it's only available since version 1.3 of the Declarative Pipeline plugin, and my node has 1.2.9.
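One pattern that can work without nested stages is to declare agent none at the top level and attach an agent to each stage, so the executor on jenkins-worker-1 is acquired and released per stage instead of being held by a top-level docker agent while the Upload stage waits for it. A sketch built from the pipeline above (whether workdir/ survives between stages still depends on the mounted volumes and on both stages landing in the same workspace):

pipeline {
    agent none
    stages {
        stage('Build and package') {
            agent {
                docker {
                    label 'jenkins-worker-1'
                    image 'custom-image:1.0'
                    registryUrl 'https://example.com/registry'
                    args '-v $HOME/.m2:/root/.m2'
                }
            }
            steps {
                sh 'mvn test'
                sh 'mvn package'
                sh 'mv target workdir/'
            }
        }
        stage('Upload') {
            // plain node agent: runs on the machine itself, outside the container
            agent {
                node {
                    label 'jenkins-worker-1'
                }
            }
            steps {
                sh 'uploader.sh workdir'
            }
        }
    }
}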

How to execute script before the job starts?

Is it possible to execute my shell script before the job starts? We are using a Jenkins pipeline, but by the time Jenkins processes the pipeline script it is already too late: we are dealing with an unknown problem with the keychain and git, and we also use global libraries that need to be downloaded from git before the pipeline script is executed.
Therefore we need to delete the problematic items from the keychain BEFORE Jenkins downloads the global library for the job. Is there anything like this available in Jenkins?
I recommend using a pipeline; you can control which stage is executed when. See the example below:
pipeline {
    agent any
    stages {
        stage('before job starts') {
            steps {
                sh 'your_scripts.sh'
            }
        }
        stage('the job') {
            steps {
                sh 'run_job.sh'
            }
        }
    }
    post {
        always {
            echo 'I will always run!'
        }
    }
}

How to run a docker-compose instance in jenkins pipeline

I've set up a home-based CI server for working on a personal project. Below you can see what happens for the "staging" branch. It works fine; however, the problems with such a pipeline config are:
1) The only way to stop the instance seems to be to abort the build in Jenkins, which leads to exit code 143 and the build being marked red instead of green.
2) If the machine reboots, I have to trigger the build manually.
3) I suppose there should be a better way of handling this?
Thanks
stage('Staging') {
    when {
        branch 'staging'
    }
    environment {
        NODE_ENV = 'production'
    }
    steps {
        sh 'docker-compose -f docker-compose/staging.yml build'
        sh 'docker-compose -f docker-compose/staging.yml up --abort-on-container-exit'
    }
    post {
        always {
            sh 'docker-compose -f docker-compose/staging.yml rm -f -s'
            sh 'docker-compose -f docker-compose/staging.yml down --rmi local --remove-orphans'
        }
    }
}
So, what's the goal here? Are you trying to deploy to staging? If so, what do you mean by that? If Jenkins is to launch a long-running process (say, a Docker container running a webserver), then the shell command line must be able to start it and have its exit status tell the Jenkins pipeline whether the start was successful.
One option is to wrap the docker-compose call in a script that starts it, checks that it came up, and exits with the appropriate exit code; see the sketch below. Another is to use yet another automation tool to help (e.g. Ansible).
The first question remains: what are you trying to get Jenkins to do, and how will that work on the command line? If you can model the command line, then you can encapsulate it in a script file and have Jenkins start it.
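A minimal sketch of that wrapper idea, assuming the staging stack can run detached and that "every container is still running" is a good enough definition of a successful start:

stage('Staging') {
    steps {
        sh '''
            docker-compose -f docker-compose/staging.yml up -d --build
            # fail the stage if any service container is not running
            for id in $(docker-compose -f docker-compose/staging.yml ps -q); do
                docker inspect -f '{{.State.Running}}' "$id" | grep -q true
            done
        '''
    }
}

Because up -d returns as soon as the containers are started, the build can go green without being aborted, and the stack keeps running after the job finishes.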
Jenkins pipeline code looks like Groovy and is much like Groovy. This can make us believe that adding complex logic to the pipeline is a good idea, but this turns Jenkins into our IDE, which is hard to debug and a trap into which I've fallen several times.
A somewhat easier approach is to have some other tool that lets you test easily on the command line, and then have Jenkins build the environment in which to run that command-line process. Jenkins then handles what it is good at:
scheduling jobs
determining on which nodes jobs run
running steps in parallel
making the output pretty or easily understood by us carbon-based life forms.
I am using parallel stages.
Here is a minimal example:
pipeline {
    agent any
    options {
        parallelsAlwaysFailFast() // https://stackoverflow.com/q/54698697/4480139
    }
    stages {
        stage('Parallel') {
            parallel {
                stage('docker-compose up') {
                    steps {
                        sh 'docker-compose up'
                    }
                }
                stage('test') {
                    steps {
                        sh 'sleep 10'
                        sh 'docker-compose down --remove-orphans'
                    }
                }
            }
        }
    }
    post {
        always {
            sh 'docker-compose down --remove-orphans'
        }
    }
}

Unable to change a directory inside a Docker container through a Jenkins declarative pipeline

I'm trying to change the current directory using the dir command outlined here: https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#code-dir-code-change-current-directory
I've edited my pipeline to look something like this:
pipeline {
    agent { dockerfile true }
    stages {
        stage('Change working directory...') {
            steps {
                dir('/var/www/html/community-edition') {
                    sh 'pwd'
                }
            }
        }
    }
}
It doesn't change the directory at all; instead it tries to create the directory on the host and fails with java.io.IOException: Failed to mkdirs: /var/www/html/community-edition.
Using sh 'cd /var/www/html/community-edition' doesn't seem to work either. How do I change the directory in the container? Someone else seems to have had the same issue, but he had to restructure his pipeline to change the directory, which doesn't sound like a reasonable fix. Isn't the step already being invoked inside the container? https://issues.jenkins-ci.org/browse/JENKINS-46636
I had the same problem yesterday. It seems to be a bug that causes dir() not to change the directory when used inside a container. I got it to work by executing the cd and pwd commands in a single step, like this:
sh '(cd //var/www/html/community-edition && pwd)'
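Applied to the pipeline from the question, the workaround replaces the dir() block with a single sh step; a sketch, with pwd standing in for whatever actually needs to run in that directory:

pipeline {
    agent { dockerfile true }
    stages {
        stage('Change working directory...') {
            steps {
                // cd and the real command share one shell invocation, so the
                // directory change takes effect inside the container
                sh 'cd /var/www/html/community-edition && pwd'
            }
        }
    }
}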
I had the same issue, and using ws in the Jenkinsfile pipeline worked for me:
stage('prepare') {
    steps {
        ws('/var/jenkins_home/workspace/pipeline#script/desiredDir') {
            sh ''
        }
    }
}
