Jenkins pipeline/docker: "Jenkins does not seem to be running inside a container" - docker

I'm trying to execute the sample code from the Jenkins Pipeline documentation here: https://jenkins.io/doc/book/pipeline/docker/
node {
    /* Requires the Docker Pipeline plugin to be installed */
    docker.image('maven:3-alpine').inside('-v $HOME/.m2:/root/.m2') {
        stage('Build') {
            sh 'mvn -B'
        }
    }
}
It gives me this error:
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
[Pipeline] // withDockerContainer
I don't know why it stops like that without doing anything.
I have already installed Docker, and the Docker plugin / Docker Pipeline plugin at the latest version.
In the Global Tool Configuration, I added the installation root path.
Did I miss something?
Thanks in advance

This message is a normal debug message, maybe a little confusing, but it is not an error. As the Jenkins Pipeline code is written, it checks during initialization whether the step is already running inside a container. I think the message could be worded better.
If you have more problems than this message, please provide the entire log. It sounds like a node cannot be assigned, the Docker client is not installed, or the Docker image cannot be pulled.
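To narrow down which of those it is, a minimal diagnostic pipeline along these lines can help (just a sketch; it assumes the agent has a shell and that the docker CLI is expected on its PATH):
node {
    stage('Diagnose') {
        // Fails here if no agent can be assigned or the docker CLI is missing
        sh 'docker version'
        // Fails here if the image cannot be pulled (registry or network issue)
        sh 'docker pull maven:3-alpine'
    }
}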

The issue is a bit old, but I faced a similar situation and want to share.
I noticed that Jenkins mentions the cause of the issue at the end of the pipeline logs.
For example in my case, the issue states:
java.io.IOException: Failed to run top '0458e2cc8b4e09c53bb89f680026fc8d035d7e608ed0b60912d9a61ebb4fea4d'. Error: Error response from daemon: Container 0458e2cc8b4e09c53bb89f680026fc8d035d7e608ed0b60912d9a61ebb4fea4d is not running
When checking the stage where this happened, it was similar to the case you mentioned above when using dockerImage.inside(). The reason in my case was that my Dockerfile already defines an entrypoint, and when using the inside feature Jenkins gets confused. To avoid this, try overriding the entrypoint by passing it as a parameter to the inside function as follows:
dockerImage.inside("--entrypoint=''") {
    echo "Tests passed"
}
Another good way to find the issue is to access your Jenkins server and list the Docker containers with docker ps -a. You may find that the build container failed; check its logs and you will get a hint. In my case the logs said cat: 1: ./entrypoint.sh: not found.
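For example, a hypothetical inspection session on the Jenkins host might look like this (the container ID is a placeholder for whatever docker ps -a reports):
docker ps -a                # find the exited build container
docker logs <container-id>  # in my case this printed: cat: 1: ./entrypoint.sh: not found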

Related

Terraform not found by a Jenkins pipeline

So I'm running Jenkins inside a Docker container with Terraform installed on it.
I have a pipeline which automates the "Terraform init, plan, ..." procedure. However, every time I launch a build, I get this error:
"/var/jenkins_home/workspace/Terra_pipeline_main#tmp/durable-1fd048ee/script.sh:
1:
/var/jenkins_home/workspace/Terra_pipeline_main#tmp/durable-1fd048ee/script.sh:
Terraform: not found".
It seems that Terraform isn't found even though it's installed (I checked in the Docker container's CLI whether terraform is really installed, with terraform --help, and it worked).
I can't figure out what the problem is.
Thank you for your help. Indeed, I was getting that error due to a typo in my code. I apologize for not adding any images (it's my first post on Stack).
To solve this problem, just write "terraform" instead of "Terraform", with no capital letter. Credit: @Matt Schuchard.
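For illustration, a hypothetical stage showing the fix; shell command lookup is case-sensitive, so the step must call the lowercase binary name:
stage('Terraform init') {
    steps {
        // sh 'Terraform init'  // fails: "Terraform: not found"
        sh 'terraform init'     // works: matches the installed binary's name
    }
}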

Jenkins Script Console vs Build Agent

I'm experiencing some odd behavior with a Jenkins build (Jenkins project is a multi-branch pipeline with the Jenkinsfile provided by the source repository). The last step is to deploy the application which involves replacing an artifact on a remote host and then restarting the process that runs it.
Everything works perfectly except for one problem - the service is no longer running after the build completes. I even added some debugging messages after the restart script to prove with the build output that it really was working. But for some reason, after the build exits the service is no longer running. I've done extensive testing to ensure Jenkins connects to the remote host as the correct user, has the right env vars set, etc. Plus, the restart script output is very detailed in the first place - there would be no way to get the successful output if it didn't actually work. So I am assuming the process that runs the deploy steps on the remote host is doing something else after the build completes execution.
Here is where it gets weird: if I run the same exact deploy commands using the Script Console for the same exact remote host, it works. And the service isn't stopped after successfully starting up.
By "same exact" I mean the script is the same, but the DSL is different between the Script Console and the pipeline. For example, in the Script Console, I use
println "deployscript.sh <args>".execute().text
Whereas in the pipeline I use
pipeline {
    agent {
        node { label 'mynode' }
    }
    stages {
        /* other stages commented out for testing */
        stage('Deploy') {
            steps {
                script {
                    sh 'deployscript.sh <args>'
                }
            }
        }
    }
}
I also don't have any issues running the commands manually via SSH.
Does anyone know what is going on here? Is there a difference in how the Script Console vs the Build Agent connects to the remote host? Do either of these processes run other commands? I understand that the SSH session is controlled by a Java process, but I don't know much else about the Jenkins implementation.
If anyone is curious about the application itself, it is a Progress Application Server for OpenEdge (PASOE) instance. The deploy process involves un-deploying the old WAR file, deploying the new one, and then stopping/starting the instance.
UPDATE:
I added a 60-second sleep to the end of the deploy script to give me time to test the service before the Jenkins process ended. This was successful, so I am certain that the service goes down when the Jenkins build process exits. I am not sure if this is an issue with Jenkins owning a process, but again, the Script Console handles this fine...
Found the issue. It's buried away in some low-level Jenkins documentation, but Jenkins builds have a default behavior of killing any processes spawned by the build. This confirms that Jenkins was the culprit and the build indeed was running correctly. It was just being killed after the build completed.
The fix is to set the value of the BUILD_ID environment variable (JENKINS_NODE_COOKIE for pipeline, like in my situation) to "dontKillMe".
For example:
pipeline {
    agent { /* set agent */ }
    environment {
        JENKINS_NODE_COOKIE = "dontKillMe"
    }
    stages { /* set build stages */ }
}
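The scripted-pipeline equivalent can be sketched with withEnv (the node label and script arguments are the placeholders from above):
node('mynode') {
    withEnv(['JENKINS_NODE_COOKIE=dontKillMe']) {
        // Processes spawned by this step survive the end of the build
        sh 'deployscript.sh <args>'
    }
}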
See here for more details: https://wiki.jenkins.io/display/JENKINS/ProcessTreeKiller

"Docker: command not found" from Jenkins on MacOS

When running jobs from a Jenkinsfile with Pipeline syntax and a Docker agent, the pipeline fails with "Docker: command not found." I understand this to mean that either (1) Docker is not installed, or (2) Jenkins is not pointing to the correct Docker installation path. My situation is very similar to this issue: Docker command not found in local Jenkins multi branch pipeline. Jenkins is installed on macOS and running off of localhost:8080. Docker is also installed (v18.06.0-ce-mac70).
That user's solution involved switching from declarative pipeline syntax to scripted node syntax. However, I want to resolve the issue while retaining the declarative syntax.
Jenkinsfile
#!groovy
pipeline {
    agent {
        docker {
            image 'node:7-alpine'
        }
    }
    stages {
        stage('Unit') {
            steps {
                sh 'node -v'
                sh 'npm -v'
            }
        }
    }
}
Error message
docker inspect -f . node:7-alpine
docker: command not found
docker pull node:7-alpine
docker: command not found
In Jenkins' Global Tool Configuration, for Docker installations I tried both (1) install automatically (from docker.com) and (2) a local installation with installation root /usr/local/.
All of the relevant plugins appear to be installed as well.
I solved this problem here: https://stackoverflow.com/a/58688536/8160903
(Add Docker's path to Homebrew Jenkins plist /usr/local/Cellar/jenkins-lts/2.176.3/homebrew.mxcl.jenkins-lts.plist)
I would check the user who is running the Jenkins process and make sure they are part of the docker group.
You can try adding the full path of the docker executable on your machine to Jenkins at Manage Jenkins > Global Tool Configuration.
I've seen it happen that the user who started Jenkins doesn't have the executable's location on $PATH.
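Building on that, one hedged workaround that keeps the declarative syntax is to prepend Docker's assumed install location (/usr/local/bin for Docker Desktop on macOS) to PATH inside the pipeline and run the container through the Docker Pipeline steps from a normal agent:
pipeline {
    agent any
    environment {
        // Assumed Docker CLI location on this Mac; adjust if 'which docker' says otherwise
        PATH = "/usr/local/bin:${env.PATH}"
    }
    stages {
        stage('Unit') {
            steps {
                script {
                    // The docker CLI is now resolvable, so inside() can start the container
                    docker.image('node:7-alpine').inside {
                        sh 'node -v'
                        sh 'npm -v'
                    }
                }
            }
        }
    }
}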

Jenkins pipeline DOCKER_HOST

I need to run a Docker container inside my pipeline.
My problem is that there is no docker.sock available inside the Jenkins container, and no real way to get one.
But I found some jobs using Docker with this option:
"Inject environment variables to the build process" -> "Properties Content"
and the following configured:
DOCKER_HOST=tcp://<ip>:<port>
DOCKER_CERT_PATH=/var/jenkins_home/certs
In my understanding, this is equivalent to the docker.sock and usable by the plugin, isn't it?
But how can I use it inside a (multibranch) pipeline project?
I've tried using this block inside my node:
environment {
    DOCKER_HOST = 'tcp://<ip>:<port>'
    DOCKER_CERT_PATH = '/var/jenkins_home/certs'
}
But I got the same issue: "docker: not found"
I might have a logical misunderstanding here. I hope someone can help.
Otherwise, is it possible to create a Jenkins slave that includes a docker.sock?
But I got the same issue: "docker: not found"
This indicates that your Jenkins slave, the one running the pipeline script, does not have the Docker command-line tools. How to fix this depends on your distribution, but in my case I fixed it by changing my build-slave/pipeline-runner creation steps to include:
yum install -y docker-client
Note that you'll still need that for the CloudBees Docker Pipeline plugin (the thing which provides steps like docker.build() and docker.image()), because it translates those nice pipeline directives down into shell commands.
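With the client installed, the remote daemon from the question can then be targeted directly in the pipeline. A sketch using the plugin's docker.withServer step (host and port are the placeholders from the question; an optional second argument would name Docker server credentials carrying the TLS certs):
node {
    // Point the Docker CLI at the remote daemon instead of a local docker.sock
    docker.withServer('tcp://<ip>:<port>') {
        docker.image('alpine').inside {
            sh 'echo running on the remote Docker host'
        }
    }
}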

Change Jenkins build status on specific error

I've got a Jenkins job that should simply start a Docker container using the Docker plugin.
If the container is stopped, the job runs correctly, but if the container is already running, the build step returns a failure due to a
com.github.dockerjava.api.exception.NotModifiedException
error.
This is basically the expected behavior of Jenkins, but in my case I want to set the build to unstable to give the user a more meaningful response.
I tried to add a conditional build step afterwards using TextFinder that scans the console output for the error, but it seems that it isn't executed after the Docker build step fails.
Is there a way to change the build status just for this error?
In Jenkins, you can add a Groovy Postbuild script for that job:
exceptionTextRegex = '.*com.github.dockerjava.api.exception.NotModifiedException.*'
if (manager.logContains(exceptionTextRegex)) {
    manager.buildUnstable()
}
Thank you for pointing me in the right direction. Groovy Postbuild was indeed the answer, but the script ended up a little bigger:
errpattern = ~/com.github.dockerjava.api.exception.NotModifiedException.*/
manager.build.logFile.eachLine { line ->
    errmatcher = errpattern.matcher(line)
    if (errmatcher.find()) {
        // Set the field directly to downgrade the build result
        manager.build.@result = hudson.model.Result.UNSTABLE
    }
}
