Jenkins Docker - running a container, executing a shell script, etc.

I'm trying to run docker containers in the Jenkins Pipeline.
I have the following in my Jenkinsfile:
stage('test') {
    steps {
        script {
            parallel (
                "gatling" : {
                    sh 'bash ./test-gatling.sh'
                },
                "python" : {
                    sh 'bash ./test-python.sh'
                }
            )
        }
    }
}
In the test-gatling.sh I have this:
#!/bin/bash
docker cp RecordedSimulation.scala denvazh/gatling:/RecordedSimulation.scala
docker run -it -m denvazh/gatling /bin/bash
ls
./gatling.sh
The ls command is there just as a test, but when it's executed it lists the files and folders of my GitHub repository rather than the files inside the denvazh/gatling container. Why is that? I thought the docker run -it [...] command would open the container so that subsequent commands would run inside it?
================
Also, how do I start a container and just leave it running, without executing any commands inside it (in the Jenkins Pipeline, of course)?
I'd like to run docker run -d -p 8080:8080 -t [my_container] and access it on port 8080. How do I do that?

If anyone has the same or a similar problem, here are the answers:
To run a command inside a container, use docker exec [name of container], and to run any terminal command in it, append /bin/bash -c "[command]".
To access a container/app that is listening on some port from a second container, start the second container with the --net=host option.
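Putting those two answers together, a minimal sketch might look like this; the image names, container name, port and paths below are assumptions for illustration only:

# start a container and leave it running in the background
docker run -d -p 8080:8080 --name myapp my_image
# run a command inside the running container
docker exec myapp /bin/bash -c "ls /opt/app && ./run-tests.sh"
# reach the app from a second container by sharing the host network
# (assumes curl is available in some_client_image)
docker run --net=host some_client_image /bin/bash -c "curl -s http://localhost:8080"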

Related

Docker mount volume in Jenkins Docker container

I am following the Jenkins tutorial with some modification.
I run the Jenkins docker container by:
docker run --rm --privileged -u root -p 8080:8080 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$PWD"/vol:/var/jenkins_home \
jenkinsci/blueocean
With my Jenkinsfile:
stage('Test') {
    agent {
        docker {
            image 'qnib/pytest'
        }
    }
    steps {
        sh 'ls'                                                                  // #1
        sh 'py.test --junit-xml test-reports/results.xml sources/test_calc.py'  // #2
    }
}
stage('Deliver') {
    agent any
    environment {
        VOLUME = '$(pwd)/sources:/src'
        ABS_WS = '/home/myname/vol/workspace'
        JOB_WS = "\${PWD##*/}"
        IMAGE = 'cdrx/pyinstaller-linux:python2'
    }
    steps {
        dir(path: env.BUILD_ID) {
            unstash(name: 'compiled-results')
            sh "pwd"                                                             // #3
            sh "ls"                                                              // #4
            sh "docker run -v '${ABS_WS}/${JOB_WS}/sources:/src' ${IMAGE} 'ls'"  // #5
            sh "docker run -v ${ABS_WS}/${JOB_WS}/sources:/src ${IMAGE} 'ls'"    // #6
            sh "docker run -v ${VOLUME} ${IMAGE} 'ls'"                           // #7
        }
    }
}
The output and my questions for #1~#7:
#1: the ls output here includes the sources/*.py files that the qnib/pytest container can process.
#3: output: /var/jenkins_home/workspace/simple-python-pyinstaller-app/32
#4: the ls output here also includes the sources/*.py files we need.
#5: the ls output here did not include sources/*.py, because the Docker volume mount failed.
I already tried a different solution from a related answer, but it still does not work:
docker run -v '/home/myname/vol/workspace/${PWD##*/}/sources:/src' cdrx/pyinstaller-linux:python2 ls
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
ls
add2vals.spec
build
dist
BUT for #6, which is the same as #5 just without the single quotes, ls produced no output at all (WHY?):
docker run -v /home/myname/vol/workspace/32/sources:/src cdrx/pyinstaller-linux:python2 ls
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
ls
#7: the output is identical to #5:
docker run
-v /var/jenkins_home/workspace/simple-python-pyinstaller-app/32/sources:/src cdrx/pyinstaller-linux:python2 ls
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
ls
add2vals.spec
build
dist
My questions are:
In the Deliver stage, how can I map a Docker container volume to the host or to the Jenkins container?
In #3 and #4, the path inside the Jenkins container is /var/jenkins_home/workspace/simple-python-pyinstaller-app/32, and that path includes sources/*.py. In #7 the mount is /var/jenkins_home/workspace/simple-python-pyinstaller-app/32/sources:/src, so I thought it was mounted on the correct path to /src in the pyinstaller-linux container.
It is not clear to me why, in the Test stage, we don't need to mount any volume when running the pytest container.
And why doesn't the Deliver stage work the same way as the Test stage (like #2)?
What is the difference between #6 and #5?
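A hedged observation that may explain the behaviour above: the docker CLI inside the Jenkins container talks to the host's Docker daemon through the mounted /var/run/docker.sock, so the source path of a -v option is resolved on the host, not inside the Jenkins container. Under that assumption (and assuming the Jenkins container was started from /home/myname, as ABS_WS suggests), the Jenkins-container path has to be translated to the host path that backs it, roughly:

# sketch only: translate the Jenkins-container path to the host path behind it,
# based on the -v "$PWD"/vol:/var/jenkins_home mapping used to start Jenkins
#   /var/jenkins_home/...  ->  /home/myname/vol/...
docker run -v /home/myname/vol/workspace/simple-python-pyinstaller-app/32/sources:/src \
    cdrx/pyinstaller-linux:python2 ls /src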

Run host Docker from within Jenkins Docker

Is it possible to create and run Docker containers for CI/CD from within a running Jenkins Docker Container? So basically access Docker on the host server from within a running container.
On my host server (Ubuntu 19.04), Docker (version 19.03.3) is installed. By running the following commands I create a Jenkins container that, so I thought, I had given access to Docker:
mkdir /home/myuser/Desktop/jenkins_home
docker run -dit --name jenkins -v /home/myuser/Desktop/jenkins_home:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock -p 8080:8080 jenkins/jenkins:lts
Within Jenkins I create a Pipeline that loads a Jenkinsfile from Git that looks like this:
pipeline {
    agent {
        docker {
            image 'ubuntu:19.04'
            args '-u root:sudo -p 3000:3000'
        }
    }
    stages {
        stage('Install') {
            steps {
                sh 'apt-get update'
                sh 'apt-get install -y curl'
                sh 'curl -sL https://deb.nodesource.com/setup_13.x | sh -'
                sh 'curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -'
                sh 'echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list'
                sh 'apt-get update'
                sh 'apt-get install -y nodejs yarn'
            }
        }
        stage('Build') {
            steps {
                sh './build.sh'
            }
        }
    }
}
When I run the Pipeline it crashes when trying to instruct Docker to pull the ubuntu:19.04 Docker image. The error is docker: not found.
Somewhere a connection between my Jenkins Container and the host Docker access files is misconfigured. What configuration is necessary to run Docker commands on the host server from within the Docker Container?
If you want to create and run Docker containers for CI/CD from the Jenkins container, this can be achieved by adding a shell build step to the Jenkins job that runs an ssh command against the Docker host.
The prerequisite is that the Jenkins container's ssh public key is authorized on the Docker host, i.e. it appears in the authorized_keys file on the Docker host.
To make the same ssh keys available inside the Jenkins container, you can bind-mount them into it.
Example using docker-compose:
volumes:
- /home/user/.ssh/id_rsa:/var/jenkins_home/.ssh/id_rsa
- /home/user/.ssh/id_rsa.pub:/var/jenkins_home/.ssh/id_rsa.pub
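For context, a minimal docker-compose.yml sketch around that volumes fragment might look like the following; the service name, image, port and the jenkins_home host path are assumptions:

services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"
    volumes:
      - /home/user/jenkins_home:/var/jenkins_home
      - /home/user/.ssh/id_rsa:/var/jenkins_home/.ssh/id_rsa
      - /home/user/.ssh/id_rsa.pub:/var/jenkins_home/.ssh/id_rsa.pub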
This is an example of the shell command used to launch and update containers on the Docker host from a Jenkins job:
cat ./target/stack/installer-*.tar | ssh root@${DOCKER_HOST} \
/home/user/Build-Server/remote-installer.sh
In the command above, an installer is streamed to and launched on the Docker host; as a result, new containers are deployed/updated on the Docker host.
The remote-installer.sh script receives the file on standard input and unpacks it using the tar command:
TEMPDIR=`mktemp -d`
echo "unarchiving to $TEMPDIR"
tar xv -C "$TEMPDIR"
...
This works both when the application containers run on the same server as the Jenkins container and when they run on different servers.

Running shell script from PC in running Docker

I have pulled a Docker image, and the container is running successfully. But I want to run a shell script inside that running container. The shell script is located on my hard disk, and I cannot figure out which command to use, or how to pass the script's path, so that it gets executed inside the running container.
Please guide me.
Regards
TL;DR
There are two ways that could work in your case.
You can run a one-liner script using docker exec with sh or bash and the -c argument:
docker exec -i <your_container_id> sh -c 'sh-command-1 && sh-command-2 && sh-command-n'
You can copy the shell script into the container using docker cp and then run it in the container's context:
docker cp ~/your-shell-script.sh <your_container_id>:/tmp
docker exec -i <your_container_id> /tmp/your-shell-script.sh
Precaution
Not all containers allow running shell scripts in their context. You can check by executing a trivial shell command in the container:
docker exec -i <your_container_id> echo "Shell works"
For future reference, check the section "Understand how CMD and ENTRYPOINT interact" in the Dockerfile reference.
Docker Exec One-liner
docker exec -i <your_container_id> sh -c 'sh-command-1 && sh-command-2 && sh-command-n'
If your container has sh or bash, or a BusyBox shell wrapper (as the alpine image does), you can send a one-line shell script to the container's shell, as in the example after the limitations below.
Limitations:
only short scripts;
hard to pass command-line arguments;
only if your container has a shell.
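A concrete illustration of the one-liner form; the container name web and the paths are hypothetical:

docker exec -i web sh -c 'ls /usr/share/nginx/html && cat /etc/os-release'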
Docker Copy and Execute Script
docker cp ~/your-shell-script.sh <your_container_id>:/tmp
docker exec -i <your_container_id> /tmp/your-shell-script.sh -arg1 -arg2
You can copy the script from the host into the container and then execute it (a concrete sketch follows the limitations below).
You can pass arguments to the script.
You can run the script with root credentials with -u root: docker exec -i -u root <your_container_id> /tmp/your-shell-script.sh -arg1 -arg2
You can run the script interactively with -t: docker exec -it <your_container_id> /tmp/your-shell-script.sh -arg1 -arg2
Limitations:
one more command to execute;
only if your container has a shell.
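A concrete sketch of the copy-and-execute approach; the container name mydb, the script name and its arguments are hypothetical:

# the script must be executable (chmod +x) before or after copying
docker cp ./backup.sh mydb:/tmp/backup.sh
docker exec -i -u root mydb /tmp/backup.sh --target /var/backups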

How to run a docker image in Jenkins

I know how to build a Docker image in Jenkins. Basically, calling docker.build("foo") from my Jenkinsfile is the equivalent of executing docker build -t foo . from the command line.
My question is: how do I RUN the image? What's the equivalent of docker run foo, assuming that foo has a defined ENTRYPOINT?
I'm familiar with docker.image('foo').inside() { ... }, which allows me to run a shell script or something inside the container, but that's not what I'm looking for. I want to run the container from its ENTRYPOINT.
To run a Docker image from a Jenkinsfile you can use the docker CLI command below:
sh "docker run -it --entrypoint /bin/bash example"
It will start the container (run the Docker image); you can then ssh to the host where Docker is running and use the docker ps command to list the running containers.
You can also have a look at .withRun.
Run a normal command, or pass your entrypoint as an argument:
docker.image('python:2.7').withRun('-u root --entrypoint /bin/bash') {
    sh 'pip install version'
}
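If the goal is to start the container from its own ENTRYPOINT rather than overriding it, a hedged sketch using withRun could look like this; the image name foo and the published port are assumptions:

docker.image('foo').withRun('-p 8080:8080') { c ->
    // the container runs detached with its ENTRYPOINT while this block executes on the agent
    sh "docker logs ${c.id}"
    // the container is stopped and removed automatically when the block exits
}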

How to continue running scripts when exiting docker containers

My script is as follows:
# start a ubuntu container in the background
docker run -it --name ub -d ubuntu /bin/bash
sleep 1
# run a command in the container
docker exec -it ub bash
echo 234
# exit the container
exit
sleep 1
# do something else
echo 123
But the script just stops right after exit and hangs there. Does anyone know why that is?
p.s: My Docker version is: 17.03.0-ce, build 60ccb22
You have used -it, which opens an interactive /bin/bash in your container and waits there; the docker exec -it ub bash line blocks until that shell exits, so the next command won't get executed until the first one completes.
It's better to create a script file and copy it into the container when building the image, and to run the script when the container starts. You can specify that using CMD in the Dockerfile.
You then won't need an additional exec command.
The corresponding Dockerfile would be
FROM ubuntu:latest
COPY <path-to-script> <dest>
CMD ["<dest>"]
Create the script file alongside the Dockerfile, then build the image with:
docker build -t <image-name> <location of Dockerfile>
The command to run it would be:
docker run -d --name <name> <image-name>
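Alternatively, a minimal sketch of the original script that avoids the hang by keeping the container detached and dropping the interactive exec; the echoed values are just the ones from the question:

# start a ubuntu container in the background and keep it alive
docker run -dit --name ub ubuntu /bin/bash
sleep 1
# run a command in the container without attaching an interactive shell
docker exec ub bash -c 'echo 234'
sleep 1
# the rest of the script continues normally
echo 123
docker rm -f ub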
