Hi guys, I have a Jenkins Pipeline and at some point I have to run a docker run command:
sh 'ls $(pwd)'
sh 'docker run --rm -v $(pwd):/src cdrx/pyinstaller-windows ls /src'
The problem is that the first line's ls correctly lists the files of the current commit, but for some reason I'm not able to mount that $(pwd) folder inside another container: the ls /src command prints nothing when run from the Jenkins agent, while the same command on the host machine mounts the volume correctly. How can I fix this?
Use the $WORKSPACE environment variable:
sh "docker run --rm -v '$WORKSPACE:/src' cdrx/pyinstaller-windows ls /src"
Quote it to make sure any spaces in the pwd are kept together and not treated as separate parameters:
sh 'docker run --rm -v "$(pwd):/src" cdrx/pyinstaller-windows ls /src'
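For reference, a minimal sketch of how either suggestion could sit in a declarative pipeline stage (the stage name is a placeholder; the image is the one from the question, and both variants assume the Docker daemon the agent talks to can actually see that path on its own filesystem):
stage('Package') {
    steps {
        // WORKSPACE is exported to the shell by Jenkins, so a plain sh step can expand it
        sh 'docker run --rm -v "$WORKSPACE:/src" cdrx/pyinstaller-windows ls /src'
        // or keep $(pwd), double-quoted so paths with spaces survive
        sh 'docker run --rm -v "$(pwd):/src" cdrx/pyinstaller-windows ls /src'
    }
}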
Related
I want to run echo "tools path is: $TOOLSPATH" in my docker image, but make sure the variable doesn't get expanded on my machine before being sent to docker. I am not sure how to avoid variable expansion.
docker run -v `pwd`:/root -it --rm foobar echo 'tools path is: $TOOLSPATH'
> tools path is: $TOOLSPATH
docker run -v `pwd`:/root -it --rm foobar echo "tools path is: $TOOLSPATH"
> tools path is:
There is a way to run echo $VAR inside the container, have it expanded there and print the result in your terminal: you just need to pass an interpreter as well.
docker run alpine sh -c 'echo my HOME: $HOME'
my HOME: /root
PS: I used alpine as a test. If sh doesn't work, you can try bash instead.
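Applied to the command from the question, that would look something like this (assuming the foobar image contains sh and defines TOOLSPATH in its own environment):
docker run -v `pwd`:/root -it --rm foobar sh -c 'echo "tools path is: $TOOLSPATH"'
The outer single quotes stop your local shell from touching $TOOLSPATH, and the sh -c inside the container expands it there.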
This might be helpful to you: escape the dollar sign so your local shell passes it through unexpanded.
docker run -v `pwd`:/root -it --rm foobar echo "tools path is: \$TOOLSPATH"
I am following the Jenkins tutorial with some modification.
I run the Jenkins docker container by:
docker run --rm --privileged -u root -p 8080:8080 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$PWD"/vol:/var/jenkins_home \
jenkinsci/blueocean
With my Jenkinsfile:
stage('Test') {
agent {
docker {
image 'qnib/pytest'
}
}
steps {
sh 'ls' ##### 1
sh 'py.test --junit-xml test-reports/results.xml sources/test_calc.py' ##### 2
}
}
stage('Deliver') {
agent any
environment {
VOLUME = '$(pwd)/sources:/src'
ABS_WS = '/home/myname/vol/workspace'
JOB_WS = "\${PWD##*/}"
IMAGE = 'cdrx/pyinstaller-linux:python2'
}
steps {
dir(path: env.BUILD_ID) {
unstash(name: 'compiled-results')
sh "pwd" ##### 3
sh "ls" ##### 4
sh "docker run -v '${ABS_WS}/${JOB_WS}/sources:/src' ${IMAGE} 'ls'" ##### 5
sh "docker run -v ${ABS_WS}/${JOB_WS}/sources:/src ${IMAGE} 'ls'" ##### 6
sh "docker run -v ${VOLUME} ${IMAGE} 'ls'" ##### 7
}
}
}
The output and my questions for ####1~7:
####1: ls here includes the /sources/*.py files that the docker container (qnib/pytest) can process.
####3: output: /var/jenkins_home/workspace/simple-python-pyinstaller-app/32
####4: ls here also includes the /sources/*.py files we need
####5: ls here didn't include /sources/*.py, because the docker volume mount failed.
I already tried different solutions from here, but it is still not working.
docker run -v '/home/myname/vol/workspace/${PWD##*/}/sources:/src' cdrx/pyinstaller-linux:python2 ls
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
ls
add2vals.spec
build
dist
BUT for ####6, which is similar to ####5 just without the single quotes, ls outputs nothing (WHY?):
docker run -v /home/myname/vol/workspace/32/sources:/src cdrx/pyinstaller-linux:python2 ls
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
ls
####7: the output is identical to ####5
docker run
-v /var/jenkins_home/workspace/simple-python-pyinstaller-app/32/sources:/src cdrx/pyinstaller-linux:python2 ls
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
ls
add2vals.spec
build
dist
My questions are:
In the Deliver stage, how can I map the docker container volume to the host or to the Jenkins container?
In ####3,4 the path in the Jenkins container is /var/jenkins_home/workspace/simple-python-pyinstaller-app/32, and this path includes the /sources/*.py; in ####7 we can see /var/jenkins_home/workspace/simple-python-pyinstaller-app/32/sources:/src, so I thought it was mounted on the correct path to /src in the pyinstaller-linux container.
I am not very clear on why, in the Test stage, we don't need to mount any volume when running the pytest docker container.
And why doesn't the Deliver stage work the same way as the Test stage? (like ####2)
What is the difference between ####6 and ####5?
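One note on the volume paths (the concrete host path below is an assumption pieced together from ABS_WS and the docker run command at the top): since the Jenkins container only shares the host's /var/run/docker.sock, every docker run issued from the pipeline is executed by the host's Docker daemon, so the -v source path is resolved on the host filesystem, not inside the Jenkins container. The container path /var/jenkins_home corresponds to the host directory "$PWD"/vol, so a working mount has to use the host-side spelling, roughly:
# path as seen inside the Jenkins container (####3), which the host daemon cannot find:
#   /var/jenkins_home/workspace/simple-python-pyinstaller-app/32/sources
# the same directory as the host daemon sees it (assuming Jenkins was started from /home/myname):
docker run -v /home/myname/vol/workspace/simple-python-pyinstaller-app/32/sources:/src cdrx/pyinstaller-linux:python2 ls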
I have a list of commands which I need to issue one by one to a running docker container. However, when I "cd" in the container, it's not working as expected. For example:
docker run -di --name example alpine:latest
for CMD in 'mkdir -p example && touch example/file' 'cd example' 'ls'
do
docker exec -w='/root' example sh -c "$CMD"
done
This will print out example instead of file. How should I properly execute a series of statements while preserving the working directory between them? Preferably, is it possible to do this without concatenating all the commands?
I think you should use this format:
dingrui@gdcni:~/onie$ docker exec -w /root example sh -c 'mkdir -p example; touch example/file; cd example; ls'
file
or write these commands to a script, then mount it into the container and run it there:
dingrui@gdcni:~/onie$ docker run -itd -w /root -v $(pwd):/app --name example busybox /app/test.sh
12f7f2b55182bd18c45ce31e03390544adaedc1a2dd923d3bc4293b214301650
dingrui@gdcni:~/onie$ docker logs example
file
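The contents of test.sh are not shown above; presumably it is just the same commands collected into one file (it would also need to be executable, e.g. chmod +x test.sh), something like:
#!/bin/sh
# hypothetical test.sh matching the commands from the question;
# the cd persists because everything runs in a single shell process
mkdir -p example
touch example/file
cd example
ls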
I'm trying to run docker containers in the Jenkins Pipeline.
I have the following in my Jenkinsfile:
stage('test') {
steps {
script {
parallel (
"gatling" : {
sh 'bash ./test-gatling.sh'
},
"python" : {
sh 'bash ./test-python.sh'
})
}
}
}
In the test-gatling.sh I have this:
#!/bin/bash
docker cp RecordedSimulation.scala denvazh/gatling:/RecordedSimulation.scala
docker run -it -m denvazh/gatling /bin/bash
ls
./gatling.sh
The ls command is there just for testing, but when it's executed it lists the files and folders of my GitHub repository, rather than the files inside the denvazh/gatling container. Why is that? I thought the docker run -it [...] command would open the container so that commands could be run inside it?
================
Also, how do I run a container and just have it running, without executing any commands inside it? (In the Jenkins Pipeline, of course)
I'd like to run: docker run -d -p 8080:8080 -t [my_container] and access it on port 8080. How do I do that...?
If anyone has the same or similar problem, here are the answers:
Use the docker exec [name of container] command, and to run any terminal command inside a container, add /bin/bash -c "[command]"
To be able to access a container/app that is running on any port from a second container, start the second container with the --net=host parameter
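A sketch of what those two points could look like in practice (the container and image names are placeholders, and the example assumes the image lets you override its command):
# keep a container running in the background without giving it real work to do
docker run -d --name my_running_container [my_container] tail -f /dev/null
# run arbitrary shell commands inside it
docker exec my_running_container /bin/bash -c "[command]"
# publish the app on 8080, then reach it from a second container sharing the host network
docker run -d -p 8080:8080 -t [my_container]
docker run --rm --net=host [client_image] /bin/bash -c "curl http://localhost:8080"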
I'm trying to create a Dockerfile that copies all the files in the current directory to a specific folder.
Currently I have
COPY . /this/folder
I'm unable to check the results of this command, as my container closes nearly immediately after I run it. Is there a better way to test if the command is working?
You can start a container and check:
$ docker run -ti --rm <DOCKER_IMAGE> sh
$ ls -l /this/folder
If your docker image has an ENTRYPOINT set, then run the command below:
$ docker run -ti --rm --entrypoint sh <DOCKER_IMAGE>
$ ls -l /this/folder
If it is only for testing, include the command below in your Dockerfile:
RUN cd /this/folder && ls
This will list the directory contents during docker build.
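Putting that together, a minimal Dockerfile sketch (the base image here is an arbitrary choice) would be:
FROM alpine
COPY . /this/folder
# temporary check: the build output shows what actually landed in the folder
RUN cd /this/folder && ls -la
Then docker build . prints the listing during the RUN step (with BuildKit you may need docker build --progress=plain . to see the full output), and the RUN line can be removed once the COPY is confirmed.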