Permission denied when executing the Jenkins sh pipeline step

I have some trouble with this situation:
Every time I create a new pipeline job (entitled "pipeline"), the sh step won't work, even with a simple command like ls or pwd, and it returns this log:
sh: 1: /var/jenkins_home/workspace/pipeline@tmp/durable-34c21b81/script.sh: Permission denied
Any suggestions?

I was getting a similar Permission denied error after following the Jenkins Pipeline tutorial for a Node project.
./jenkins/test.sh: Permission denied
The original pipeline Test stage looked like the following and returned that error.
stage('Test') {
    steps {
        sh './jenkins/test.sh'
    }
}
I found the following post: https://stackoverflow.com/a/61956744/9109504 and modified the Test stage to the following:
stage('Test') {
    steps {
        sh "chmod +x -R ${env.WORKSPACE}"
        sh './jenkins/test.sh'
    }
}
That change fixed the permission denied error.
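A more permanent alternative, assuming the project is in Git, is to commit the executable bit on the script itself so every fresh checkout already has it, instead of re-running chmod over the whole workspace on each build:
# run locally once and commit; Git stores the executable bit with the file
chmod +x jenkins/test.sh
git update-index --chmod=+x jenkins/test.sh
git commit -m "Make jenkins/test.sh executable"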

I guess you use
stage(name) {
    sh './runSomething'
}
Jenkins always uses the user jenkins for running scripts. There are some possibilities:
Jenkins is running as a different user; maybe you started it as some other user.
Something went wrong when installing Jenkins; check that you have a jenkins user.
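A quick way to verify both points, as a sketch assuming a standard install that runs jenkins.war:
# show which user the running Jenkins process belongs to
ps -o user= -p "$(pgrep -f jenkins.war | head -n1)"
# confirm a jenkins account exists on this machine
id jenkins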

From the terminal, give this file execute permission:
sudo chmod -R 777 ./test.sh
(chmod +x would be sufficient; Git only records the execute bit.) When you push this file, that permission travels with it under the hood, and this way Jenkins will be able to execute the file.

We just need to add sudo before the path. I have tested it; it works perfectly.
stages {
    stage('Hello') {
        steps {
            sh 'sudo /root/test.sh'
        }
    }
}
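Note that this only works if the jenkins user may run the script through sudo without a password prompt; otherwise the non-interactive sh step will fail. A minimal sudoers entry for that (reusing this answer's example path) would be:
# /etc/sudoers.d/jenkins -- always edit with visudo
jenkins ALL=(ALL) NOPASSWD: /root/test.sh
Granting sudo for just the one script is safer than a blanket NOPASSWD: ALL.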

Related

Jenkins pipeline: kubectl command not found

I'm running Jenkins locally and this is the last stage of my Jenkinsfile (after following this tutorial):
stage('Deploy to K8s') {
    steps {
        sshagent(credentials: ['k8s_key']) {
            sh 'scp -r -o StrictHostKeyChecking=no localPath/deployment-remes-be.yaml <user>@<ip_address>:/opt/kubernetes-system/backend'
            script {
                try {
                    sh 'ssh <user>@<ip_address> kubectl apply -f /opt/kubernetes-system/backend/deployment-remes-be.yaml --kubeconfig=~/.kube/config'
                }
                catch(error) {
                }
            }
        }
    }
}
When I run the pipeline it completes without any blocking errors, but when I check the logs I can see that the kubectl command was not found.
The copy before the apply command is working. I have microk8s installed on the Debian server I'm trying to deploy to, and if I run the apply command manually it works fine. I've created the .kube/config file as shown here, but using the --kubeconfig flag doesn't make any difference. It also doesn't matter if I use microk8s.kubectl; I always get this message.
I have the relevant plugins installed.
What can I do here to make the apply work from the pipeline?
In a situation like this, where the error is that the executable is not found on the PATH, trying the absolute path should be the first step. The shell step can be updated accordingly:
sh 'ssh <user>@<ip_address> /path/to/kubectl apply -f /opt/kubernetes-system/backend/deployment-remes-be.yaml --kubeconfig=~/.kube/config'
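Since the question mentions microk8s, which is normally installed as a snap, the binary usually lives under /snap/bin; the exact location can be confirmed on the server first (a sketch, assuming a snap-based install):
# on the Debian server: find where the binary actually lives
which kubectl || command -v microk8s.kubectl
# a snap install of microk8s commonly resolves to /snap/bin/microk8s.kubectl
The underlying cause is that a command run over ssh executes in a non-interactive shell whose PATH is much shorter than in a login session, so directories like /snap/bin are often missing even though the command works when logged in manually.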

Jenkins Groovy pipeline @tmp dir() AccessDenied exception for createDirectory

I'm trying to set up a simple (single) Groovy pipeline script that would deploy my PHP project using webhooks. It sounds simple, and I had it working in an older Jenkins version (which used a bash script), but when I use the dir(String) {} method I get the following exception:
java.nio.file.AccessDeniedException: /srv/www/staging@tmp
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
at java.nio.file.Files.createDirectory(Files.java:674)
at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
at java.nio.file.Files.createDirectories(Files.java:767)
at hudson.FilePath.mkdirs(FilePath.java:3563)
The server itself runs CentOS 7 and Jenkins is at version 2.303.3. I've added the jenkins user to the apache group, which is the owner and group of the relevant webroot, located in /srv/www/staging. The permissions for the /srv/www and /srv/www/staging directories are set to 0773 apache:apache.
When I simulate a bash session as user jenkins using su -s /bin/bash jenkins, I am able to create directories and files within /srv/www/staging without any issue. So I'm wondering why the pipeline itself doesn't have permission to do so.
The pipeline script is as follows:
pipeline {
    agent any
    stages {
        stage('Git Pull') {
            steps {
                sh "whoami"
                dir ('/srv/www/staging') {
                    sh "pwd"
                }
            }
        }
    }
}
The sh "whoami" returns jenkins, so I'm sure the proper user is executing the commands.
I can't really find a similar issue described anywhere, which is why I'm posting this.
Hopefully someone is able to point me in the right direction.
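A hedged observation on what usually causes this: dir() makes Jenkins create a sibling /srv/www/staging@tmp directory, so the jenkins user needs write access to /srv/www itself, not only to /srv/www/staging. Also, a su test session picks up new group memberships immediately, while the already-running Jenkins daemon keeps the groups it started with, so a restart may be needed after adding jenkins to the apache group. A quick check, assuming a systemd-managed Jenkins:
# groups the jenkins account has on disk
id jenkins
# groups the running Jenkins process actually holds (numeric GIDs)
grep Groups /proc/$(pgrep -f jenkins.war | head -n1)/status
# if apache's GID is missing from the second list, restart the daemon
sudo systemctl restart jenkins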

AWS CLI and SAM CLI are not found on Jenkins Pipeline script (Jenkins run as Docker instance)

I have a script as follows:
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git branch: "master", credentialsId: "ritefit.io", url: "git@bitbucket.org:ritefitio/my-project.git"
            }
        }
        stage('Build') {
            steps {
                sh 'echo $PATH'
                sh 'aws s3 ls'
            }
        }
    }
}
I got the output below:
/var/jenkins_home/workspace/question-app-pipeline@tmp/durable-af9dc286/script.sh: 1: /var/jenkins_home/workspace/question-app-pipeline@tmp/durable-af9dc286/script.sh: aws: not found
If I SSH into the instance and run aws s3, it works as normal, but not from the Jenkins pipeline.
I also tried echo $PATH and I do see that the aws path is included.
The same is happening for the SAM CLI.
Please help me out. It has been a couple of days of searching and trying many approaches, with no luck so far.
Note: Jenkins runs inside a Docker container.
I think that is the issue. I have logged in inside the Docker container and I get the same failure there.
Did you try to run the aws command as the jenkins user on the agent or instance? Or try "sudo aws s3 ls"?
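One more thing worth checking: since Jenkins here runs inside its own Docker container, an aws binary installed on the host (or visible in an SSH session to the instance) does not exist inside that container at all, and echo $PATH on the host proves nothing about the container. Assuming the CLI has been installed into the Jenkins image (the /usr/local/bin location below is an assumption), the build stage can prepend that directory to PATH with withEnv:
stage('Build') {
    steps {
        // withEnv prepends PATH+<name> entries to PATH;
        // /usr/local/bin is an assumed install location for the aws binary
        withEnv(['PATH+AWS=/usr/local/bin']) {
            sh 'which aws && aws s3 ls'
        }
    }
}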

Jenkins job getting stuck on execution of docker image as the agent

I have installed Jenkins and Docker inside a VM. I am using a Jenkins pipeline project, and my declarative pipeline looks like this:
pipeline {
    agent {
        docker { image 'node:7-alpine' }
    }
    stages {
        stage('Test') {
            steps {
                echo 'Hello Nodejs'
                sh 'node --version'
            }
        }
    }
}
It is a very basic pipeline following this link https://jenkins.io/doc/book/pipeline/docker/
When I try to build my jenkins job, it prints Hello Nodejs, but gets stuck at the next instruction i.e. execution of shell command. After 5 minutes, the job fails with this error
process apparently never started in /var/lib/jenkins/workspace/MyProject@tmp/durable-c118923c
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
ERROR: script returned exit code -2
I don't understand why it is not executing the sh command.
If I use agent any instead, the sh command executes.
I am not sure that it will help, but I remember that the node image runs under the root account by default, while Jenkins uses its own user ID when launching a container. So it's probably a permissions issue. Try adding the -u 0 argument:
agent {
    docker {
        image 'node:7-alpine'
        args '-u 0'
    }
}
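If that doesn't resolve it, the hint in the error message itself is worth following: restarting Jenkins with the durable-task diagnostics property enabled makes the launcher log why the wrapper script never started. A sketch, assuming Jenkins is started directly from jenkins.war:
# enable verbose durable-task launch diagnostics (the flag quoted in the error)
java -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true -jar jenkins.war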

Unable to change a directory inside a Docker container through a Jenkins declarative pipeline

I'm trying to change the current directory using the dir command outlined here: https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#code-dir-code-change-current-directory
I've edited my pipeline to resemble something like this:
pipeline {
    agent { dockerfile true }
    stages {
        stage('Change working directory...') {
            steps {
                dir('/var/www/html/community-edition') {
                    sh 'pwd'
                }
            }
        }
    }
}
It doesn't change the directory at all; instead it tries to create a directory on the host and fails with java.io.IOException: Failed to mkdirs: /var/www/html/community-edition
Using sh 'cd /var/www/html/community-edition' doesn't seem to work either. How do I change the directory in the container? Someone else seems to have had the same issue but had to change their pipeline structure to work around it, which doesn't sound like a reasonable fix. Isn't the step already being invoked in the container? https://issues.jenkins-ci.org/browse/JENKINS-46636
I had the same problem yesterday. It seems to be a bug that causes dir() not to change the directory when used inside a container. I got it to work by executing the cd and pwd commands at once, like this:
sh '(cd //var/www/html/community-edition && pwd)'
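The reason a bare sh 'cd ...' also appears to do nothing is that each sh step spawns a fresh shell, so the directory change dies with that shell. Chaining commands inside a single step keeps them in the same shell (a sketch reusing this answer's path):
steps {
    // cd only persists within one sh invocation, so chain everything into it
    sh 'cd /var/www/html/community-edition && pwd && ls -la'
}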
I had the same issue, and using ws in the Jenkinsfile pipeline worked for me:
stage('prepare') {
    steps {
        ws('/var/jenkins_home/workspace/pipeline@script/desiredDir') {
            sh ''
        }
    }
}