I want to know if there is a function or pipeline plugin that allows creating a directory under the workspace, instead of using sh "mkdir directory"?
I've tried the Groovy call new File("directory").mkdirs(), but it always throws an exception:
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use new java.lang.RuntimeException java.lang.String
What you can do is use the dir step: if the directory doesn't exist, dir will create the folders needed as soon as you write a file or do something similar inside it:
node {
    sh 'ls -l'
    dir('foo') {
        writeFile file: 'dummy', text: ''
    }
    sh 'ls -l'
}
The sh steps are just there to show that the folder is created. The downside is that you end up with a dummy file in the folder (the dummy write is not necessary if you're going to write other files anyway). If I run this I get the following output:
Started by user jon
[Pipeline] node
Running on master in /var/lib/jenkins/workspace/pl
[Pipeline] {
[Pipeline] sh
[pl] Running shell script
+ ls -l
total 0
[Pipeline] dir
Running in /var/lib/jenkins/workspace/pl/foo
[Pipeline] {
[Pipeline] writeFile
[Pipeline] }
[Pipeline] // dir
[Pipeline] sh
[pl] Running shell script
+ ls -l
total 4
drwxr-xr-x 2 jenkins jenkins 4096 Mar 7 22:06 foo
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
Just use the File Operations plugin:
fileOperations([folderCreateOperation('directoryname')])
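For context, here is a minimal sketch of how that step might sit in a declarative pipeline (the stage name and directory name are placeholders; the step requires the File Operations plugin):
pipeline {
    agent any
    stages {
        stage('Prepare') {
            steps {
                // Creates the folder relative to the current workspace
                fileOperations([folderCreateOperation('directoryname')])
            }
        }
    }
}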
To avoid the dummy file from the answer by @jon-s, one could simply run pwd -P. That also accesses the directory and therefore creates it implicitly.
dir('foo') {
sh 'pwd -P'
}
Notes:
It seems -P is needed; presumably, plain pwd just reports the PWD environment variable otherwise, without touching the file system.
One could argue that sh 'mkdir foo' is equally suitable (if using sh at all); see the sketch below.
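If you do go the sh route, a minimal sketch (the folder name is a placeholder) would be:
node {
    // -p avoids failing when the directory already exists
    sh 'mkdir -p foo'
}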
Related
I have a Jenkins instance that runs in a container.
I was trying to debug a Groovy script running in a Jenkins pipeline and found out that it is not executing from the workspace for some reason.
Below is the Jenkins pipeline
pipeline {
    agent any
    stages {
        stage('testing') {
            steps {
                script {
                    sh '''
                        ls
                    '''
                    def proc = ["ls"].execute()
                    def output = proc.text
                    println(output)
                }
            }
        }
    }
}
The shell command returns a listing of the checked-out repository, as expected.
However, the same command executed from the Groovy code shows the container's root filesystem.
That's not what I expected. What is going on here?
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (testing)
[Pipeline] script
[Pipeline] {
[Pipeline] sh
+ ls
README.md
docs
jenkins
modules
scripts
[Pipeline] echo
aws
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
vault
The sh call is a proper Jenkins built-in step that runs the command in the job's workspace on the agent.
The .execute() is a bit of a hack :)
You're running the Groovy code through the Jenkins Pipeline interpreter, which is not a standard Groovy runtime and does its own thing: the process spawned by .execute() runs on the Jenkins controller's JVM, with the controller's working directory, not in the job's workspace.
I'd avoid it and stick with the Jenkins sh step, which has well-defined behaviour.
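If the reason for using .execute() was to capture the command's output in Groovy, a minimal sketch using sh's documented returnStdout option (inside the same script block) would be:
script {
    // Runs on the agent, in the job's workspace, and returns stdout as a string
    def output = sh(script: 'ls', returnStdout: true).trim()
    echo output
}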
I am using the very simple pipeline below, taken from the official docs (https://www.jenkins.io/doc/book/pipeline/docker/):
pipeline {
    agent {
        docker { image 'node:14-alpine' }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}
Simple as it is, it outputs the following:
22:58:45 [Pipeline] }
22:58:45 [Pipeline] // stage
22:58:45 [Pipeline] withEnv
22:58:45 [Pipeline] {
22:58:45 [Pipeline] isUnix
22:58:45 [Pipeline] sh
22:58:45 + docker inspect -f . node:14-alpine
22:58:46 Sorry, home directories outside of /home are not currently supported.
22:58:46 See https://forum.snapcraft.io/t/11209 for details.
22:58:46 [Pipeline] isUnix
22:58:46 [Pipeline] sh
22:58:46 + docker pull node:14-alpine
22:58:46 Sorry, home directories outside of /home are not currently supported.
22:58:46 See https://forum.snapcraft.io/t/11209 for details.
22:58:46 [Pipeline] }
22:58:46 [Pipeline] // withEnv
22:58:46 [Pipeline] }
22:58:46 [Pipeline] // node
22:58:46 [Pipeline] End of Pipeline
22:58:46 ERROR: script returned exit code 1
22:58:46 Finished: FAILURE
Not sure what I am doing wrong.
The hyperlink inside the message leads to a page that says:
Snapd does currently not support running snaps if the home directory of the user is outside of /home.
It says that about the docker command. I suspect you're running docker as the jenkins user, whose default home directory is /var/lib/jenkins, i.e. outside /home.
If that's the case, there are several alternatives available:
Create a user on that computer with a home directory in /home and run a Jenkins agent as that user
Install docker on that computer using apt instead of using snapd (following the Docker directions rather than the Ubuntu directions)
Create a user on another computer with a home directory in /home and install docker there with snapd, then configure an agent to use that computer (see the pipeline sketch below for pinning the job to such an agent)
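For the agent-based alternatives, a minimal sketch, assuming you give that agent a label such as 'docker-home' (the label is a placeholder):
pipeline {
    // Run the job on the agent whose docker installation works with a home under /home
    agent { label 'docker-home' }
    stages {
        stage('Test') {
            steps {
                sh 'docker --version'
            }
        }
    }
}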
It's likely you are inheriting the HOME environment variable from Jenkins in some way. You can use the pipeline's environment configuration to override that. If you want the HOME of the worker node executing the docker build to be available inside the container, you can mount it as /home/jenkins (or something like that).
Something like:
pipeline {
    agent {
        docker {
            image 'node:14-alpine'
            args '-v $HOME:/home/jenkins'
        }
    }
    ...
}
Jenkins (2.162), up-to-date plugins. I need to add private GitHub dependencies for cargo build, so I need to store an SSH key inside the Jenkins container before running cargo build.
I did:
stage('Build') {
    steps {
        script {
            dir('api') {
                withCredentials([string(credentialsId: 'GitKeyText', variable: 'ID_RSA')]) {
                    sh '''
                        set +x
                        eval `ssh-agent -s`
                        mkdir ~/.ssh
                        echo ${ID_RSA} >~/.ssh/id_rsa
                        chmod go-r ~/.ssh/id_rsa
                        ssh-add
                        cargo build
                    '''
                }
            }
            input message: "wait"
        }
    }
}
Everything looks good, and this sequence of commands works fine when run manually inside the Docker container. But the Jenkins job keeps failing at ssh-add without any error messages, just ERROR: script returned exit code 1 at the end of the Jenkins console log.
Update 1:
I added an echo statement to the code and changed set +x to set -x.
There is no output from ssh-add (console output):
.....
+ echo before ssh-add
before ssh-add
+ ssh-add
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // script
Post stage
.....
I used the Jenkins SSH Agent plugin.
Everything works as intended.
script {
    dir('contract_api') {
        sshagent(['GitSSHcred']) {
            sh 'cargo build'
        }
    }
}
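For reference, if you would rather stay with withCredentials, the Credentials Binding plugin also provides an SSH private key binding; a minimal sketch reusing the same credential ID (the variable name is a placeholder):
withCredentials([sshUserPrivateKey(credentialsId: 'GitSSHcred', keyFileVariable: 'SSH_KEY_FILE')]) {
    sh '''
        eval `ssh-agent -s`
        # The binding writes the key to a temporary file and exposes its path
        ssh-add "$SSH_KEY_FILE"
        cargo build
    '''
}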
So, I have this pipeline job that builds completely inside a Docker container. The Docker image used is pulled from a local repository before the build and has almost all the dependencies required to run my project.
The problem is that I need a way to define volumes to bind mount from the host to the container, so that I can perform some analysis using a tool that is available on my host system but not in the container.
Is there a way to do this from inside a Jenkinsfile (Pipeline script)?
I'm not fully sure this is what you mean; if it isn't, let me know and I'll try to figure it out.
What I understand by mounting from host to container is mounting the content of the Jenkins workspace inside the container.
For example in this pipeline:
pipeline {
    agent { node { label 'xxx' } }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3', artifactNumToKeepStr: '1'))
    }
    stages {
        stage('add file') {
            steps {
                sh 'touch myfile.txt'
                sh 'ls'
            }
        }
        stage('Deploy') {
            agent {
                docker {
                    image 'lvthillo/aws-cli'
                    args '-v $WORKSPACE:/project'
                    reuseNode true
                }
            }
            steps {
                sh 'ls'
                sh 'aws --version'
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}
In the first stage I just add a file to the workspace, purely in Jenkins, nothing to do with Docker.
In the second stage I start a Docker container which contains the AWS CLI (this is not installed on our Jenkins slaves). We start the container and mount the workspace into the /project folder of the container. Now I can execute AWS CLI commands and I have access to the text file. In a later stage (not shown in this pipeline) you could use the file again in a different container or on the Jenkins slave itself.
Output:
[Pipeline] {
[Pipeline] stage
[Pipeline] { (add file)
[Pipeline] sh
[test] Running shell script
+ touch myfile.txt
[Pipeline] sh
[test] Running shell script
+ ls
myfile.txt
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy)
[Pipeline] getContext
[Pipeline] sh
[test] Running shell script
+ docker inspect -f . lvthillo/aws-cli
.
[Pipeline] withDockerContainer
FJ Arch Slave 7 does not seem to be running inside a container
$ docker run -t -d -u 201:201 -v $WORKSPACE:/project -w ... lvthillo/aws-cli cat
$ docker top xx -eo pid,comm
[Pipeline] {
[Pipeline] sh
[test] Running shell script
+ ls
myfile.txt
[Pipeline] sh
[test] Running shell script
+ aws --version
aws-cli/1.14.57 Python/2.7.14 Linux/4.9.78-1-lts botocore/1.9.10
[Pipeline] }
$ docker stop --time=1 3652bf94e933cbc888def1eeaf89e1cf24554408f9e4421fabfd660012a53365
$ docker rm -f 3652bf94e933cbc888def1eeaf89e1cf24554408f9e4421fabfd660012a53365
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] cleanWs
[WS-CLEANUP] Deleting project workspace...[WS-CLEANUP] done
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
In your case you can mount your data into the container, do the work there, and in a later stage run your analysis on the code on the Jenkins slave itself (without Docker).
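If you also need a host path that is not part of the workspace (for example a tool or data directory), a minimal sketch along the same lines, with placeholder paths, would be:
agent {
    docker {
        image 'lvthillo/aws-cli'
        // Workspace mount plus an extra read-only host directory (placeholder path)
        args '-v $WORKSPACE:/project -v /opt/host-tools:/opt/host-tools:ro'
        reuseNode true
    }
}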
Supposing you are on Linux, run the following command:
docker run -it --rm -v /local_dir:/image_root_dir/mount_dir image_name
Here are some details:
-it: interactive terminal
--rm: remove the container after you exit it
-v: volume, i.e. mount your local directory into the container.
Since the mount will 'cover' (shadow) the existing contents of the target directory in your image, you should always mount onto a new directory under your image's root directory.
Visit "Use bind mounts" in the Docker documentation for more information.
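The same bind mount can also be written with the explicit --mount syntax described on that page, for example:
docker run -it --rm --mount type=bind,source=/local_dir,target=/image_root_dir/mount_dir image_name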
P.S.: Run
sudo -s
and type your password before you run docker; that saves you a lot of time, since you don't have to put sudo in front of docker every time you run it.
P.S. 2: Suppose you have an image with a long name and its image ID is 5ed6274db6ce; you can refer to it by just the first three characters of the ID, or more:
docker run [options] 5ed
If more images share the same first three characters, use four or more.
For example, if you have the following two images
REPOSITORY                      IMAGE ID
My_Image_with_very_long_name    5ed6274db6ce
My_Image_with_very_long_name2   5edc819f315e
you can simply run
docker run [options] 5ed6
to run the image My_Image_with_very_long_name.
I want to configure a Jenkins pipeline job so that it can run multiple shell scripts. Even if one shell script fails, the job should run the other two before being marked as failed.
You need to tweak your shell scripts, not the Jenkins pipeline, to achieve what you want!
Try this in your shell script
shell script command > /dev/null 2>&1 || true
That way, pass or fail, the command will execute and the job moves on to the next shell script.
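If you prefer to keep the scripts themselves untouched, the same idea can be expressed in the Jenkinsfile; a minimal sketch with placeholder script names:
node {
    // Each step continues to the next one even if the underlying script fails
    sh './script1.sh || true'
    sh './script2.sh || true'
    sh './script3.sh || true'
}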
You can always wrap the potentially failing sh execution in a try/catch:
node {
    sh "echo test"
    try {
        sh "/dev/null 2>&1"
    } catch (error) {
        echo "$error"
    }
    sh "echo test1"
}
The above runs successfully and produces:
Started by user Blazej Checinski
[Pipeline] node
Running on agent2 in /home/build/workspace/test
[Pipeline] {
[Pipeline] sh
[test] Running shell script
+ echo test
test
[Pipeline] sh
[test] Running shell script
+ /dev/null
/home/build/workspace/test#tmp/durable-b4fc2854/script.sh: line 2: /dev/null: Permission denied
[Pipeline] echo
hudson.AbortException: script returned exit code 1
[Pipeline] sh
[test] Running shell script
+ echo test1
test1
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
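Note that the try/catch version ends with SUCCESS. If all scripts should run but the build should still fail at the end when any of them failed, a sketch using sh's returnStatus option (script names are placeholders) could look like this:
node {
    // returnStatus: true makes sh return the exit code instead of aborting the build
    def rc1 = sh(script: './script1.sh', returnStatus: true)
    def rc2 = sh(script: './script2.sh', returnStatus: true)
    def rc3 = sh(script: './script3.sh', returnStatus: true)
    if (rc1 != 0 || rc2 != 0 || rc3 != 0) {
        error 'At least one script failed'
    }
}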