How to run a docker image in Jenkins - docker

I know how to build a docker image in Jenkins. Basically, calling docker.build("foo") from my Jenkinsfile is the equivalent of executing, from the command line, docker build -t foo ..
My question is: how do I RUN the image? What's the equivalent of docker run foo, assuming that foo has a defined ENTRYPOINT?
I'm familiar with docker.image('foo').inside() { ... }, which lets me run a shell script or commands inside the container, but that's not what I'm looking for. I want to run the container from its ENTRYPOINT.

For running a Docker image from a Jenkinsfile you can use the docker CLI directly:
sh "docker run -it --entrypoint /bin/bash example"
This starts the container (runs the docker image); you can then SSH to the host where Docker is running and use docker ps to list the running containers.

You can also have a look at .withRun. Run a normal command, or override the entrypoint via the run arguments:
docker.image('python:2.7').withRun('-u root --entrypoint /bin/bash') {
    sh 'pip install version'
}
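If the goal is the direct equivalent of docker run foo, the Docker Pipeline plugin also provides docker.image('foo').run('<args>'), which starts the container with its own ENTRYPOINT and returns a handle you can later .stop(). Falling back to the plain CLI from a sh step works too; a minimal sketch (the image name foo is the question's placeholder, and this assumes a Docker daemon is reachable from the Jenkins agent):

```shell
# Run the image detached so its ENTRYPOINT executes untouched;
# keep the container id for later inspection and cleanup.
CID=$(docker run -d foo)
docker logs "$CID"     # output produced by the entrypoint
docker rm -f "$CID"    # stop and remove when the stage is done
```

In a Jenkinsfile each of these lines would be wrapped in a sh '...' step.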

Related

while starting a docker container I have to execute a script inside docker container

While starting a docker container I have to execute a script inside the container. Can I do this with the docker run command or the docker start command, mentioning the script's path? I know I could use CMD in a Dockerfile, but no Dockerfile is present.
Have you tried
docker run -it <image-name> bash -c "command-to-execute"
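One caveat: bash treats its first non-option argument as a script file to execute, so a command string needs the -c flag. A quick docker-free check of the difference:

```shell
# bash <file> runs a script file; bash -c '<string>' runs a command string.
echo 'echo ran-from-file' > /tmp/demo.sh
bash /tmp/demo.sh              # prints: ran-from-file
bash -c 'echo ran-from-string' # prints: ran-from-string
```

Inside a container the same rule applies, hence bash -c "command-to-execute" above.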
To enter a running Docker container (get a Bash prompt inside the container), please run the following:
docker container exec -it <container_id> /bin/bash
You can get the container_id by listing Docker containers (the -a flag includes stopped ones) with:
docker container ps -a or docker ps -a
docker run --name TEST -d image sh -c " CMD "
In the CMD part of the command you can give the path of your shell script.

Persisting gradle daemon after docker exec -it <container_name> gradle build

I'm using docker exec -it <container_name> gradle build to run gradle (5.6.2/JDK 11) builds in a docker container. This approach works fine, but the daemon is destroyed after the command is completed. How can I keep the daemon running in the container after my build is complete?
I have tried gradle --foreground but have learned that this creates incompatible daemons and is an undesirable option.
This problem cannot be solved using Gradle. The daemons are lost because the Docker container stops. You should drop into an interactive shell instead:
docker run --rm -it gradle:5.6.2-jdk11 bash
Now run Gradle commands, and the daemon will be reused on subsequent commands:
root@014faa72d745:/home/gradle# gradle help
Once you're done, exit from the container:
root@014faa72d745:/home/gradle# exit
Note: You can use a bind mount to get your current working directory's files accessible in the container:
docker run --rm -it --mount type=bind,src=$PWD,dst=/app -w /app gradle:5.6.2-jdk11 bash
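If the builds are scripted rather than interactive, another option (a sketch, not part of the answer above; the container name gradled is made up) is to keep one container alive and exec each build into it, so the daemon survives between builds:

```shell
# A long-lived container: tail -f /dev/null never exits,
# so the container (and any Gradle daemon inside it) stays up.
docker run -d --name gradled \
    --mount type=bind,src=$PWD,dst=/app -w /app \
    gradle:5.6.2-jdk11 tail -f /dev/null
docker exec gradled gradle build   # first run starts the daemon
docker exec gradled gradle build   # later runs reuse it
docker rm -f gradled               # tear down when finished
```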

re-running a script in a docker container

I have created a docker image that includes some python code and a shell script that can execute it. It is going to process a bunch of images from the host system.
This command should create a new contaier and run it.
sudo docker run -v /host/folder:/container/folder opencv:latest bash /extract-embeddings.sh
At the end, the container exits. If I type the same command, another container is created and exits on completion. But what is the correct usage of containers? Should I use restart, start, or run (and then clean up exited containers afterwards)? It just seems unnecessary to create a new container each time.
I basically just want a docker image containing some code and 3-4 different commands I can execute whenever needed.
And the docker start command doesn't seem to accept "bash /extract-embeddings.sh" as parameters; instead it thinks bash and extract-embeddings.sh are container names. So maybe I am misunderstanding the lifecycle or usage of containers.
edit:
Got it to work with:
docker run -t -d --name opencv -v /host/folder:/container/folder opencv:latest
docker exec -it opencv bash /extract-embeddings.sh
You can write a Dockerfile to create your docker image and keep the scripts inside it:
Dockerfile:
FROM opencv:latest
COPY ./your-script /some_folder
Create image:
docker build -t my_image .
Run your container:
docker run -d --name my_container my_image
Run the script inside the container:
docker exec -it <container_id_or_name> bash /some_folder/your-script
Build your own docker image that starts with opencv:latest and give the command you run as the entrypoint. Dockerfile could be like
FROM opencv:latest
CMD ["/bin/bash", "/extract-embeddings.sh"]
Use docker create to create a named container.
sudo docker create --name=processmyimage -v /host/folder:/container/folder myopencv:latest
Then use docker start each time you want to run it.
sudo docker start processmyimage
This works well if there is only one command you want to run. If there is more than one, I would take the approach of building an image that runs an unrelated command forever (like tail -f /dev/null). Then you can use
sudo docker exec -d processmyimage /bin/bash -c "<cmd-to-run>"
for each command.
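Sketching that last suggestion concretely (the image and script names are carried over from the question, and this assumes the image's default command is something that never exits, such as tail -f /dev/null):

```shell
# Start one long-lived container...
sudo docker run -d --name processmyimage \
    -v /host/folder:/container/folder myopencv:latest
# ...then exec each of the 3-4 commands on demand:
sudo docker exec processmyimage /bin/bash -c "bash /extract-embeddings.sh"
```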

Jenkins docker - running a container, executing shell script etc

I'm trying to run docker containers in the Jenkins Pipeline.
I have the following in my Jenkinsfile:
stage('test') {
    steps {
        script {
            parallel (
                "gatling" : {
                    sh 'bash ./test-gatling.sh'
                },
                "python" : {
                    sh 'bash ./test-python.sh'
                }
            )
        }
    }
}
In the test-gatling.sh I have this:
#!/bin/bash
docker cp RecordedSimulation.scala denvazh/gatling:/RecordedSimulation.scala
docker run -it -m denvazh/gatling /bin/bash
ls
./gatling.sh
The ls command is there just as a test, but when it's executed it lists the files and folders of my GitHub repository rather than the files inside the denvazh/gatling container. Why is that? I thought the docker run -it [...] command would open the container so that subsequent commands could be run inside it?
================
Also, how do I run a container and just have it running, without executing any commands inside it? (In the Jenkins Pipeline ofc)
I'd like to run: docker run -d -p 8080:8080 -t [my_container] and access it on port 8080. How do I do that...?
If anyone has the same or similar problem, here are the answers:
Use the docker exec [name of container] command; to run any terminal commands inside a container, add /bin/bash -c "[command]".
To access a container/app that is running on some port from a second container, start the second container with the --net=host parameter.
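Put together, the detached run plus --net=host access might look like this (the image names are the question's placeholders, and curlimages/curl is just one convenient image that ships curl):

```shell
# Start the app detached and publish its port on the host:
docker run -d -p 8080:8080 -t my_container
# A second container reaches it via the host's network stack:
docker run --rm --net=host curlimages/curl curl -s http://localhost:8080/
```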

Execute host shell script from meteor container

I have a shell script on my host. I've installed a docker container with the meteord image and have it running; however, I would like to execute this shell script inside the meteord docker container. Is that possible?
Yes, that is possible, but you will have to copy the script into the container first:
docker cp <script> <container-name/id>:<path>
docker exec <container-name/id> <path>/<script>
For example:
docker cp script.sh silly_nightingale:/root
docker exec silly_nightingale /root/script.sh
Just make sure the script has executable permissions. You can also copy the script into the image at build time in the Dockerfile and run it with docker exec afterwards.
Updated:
You can also try docker volume for it as follow:
docker run -d -v /absolute/path/to/script/dir:/path/in/container <IMAGE>
Now run the script as follow:
docker exec -it <Container-name> bash /path/in/container/script.sh
Afterwards you will be able to see the generated files in /absolute/path/to/script/dir on host. Also, make sure to use absolute paths in scripts and commands to avoid redirection issues. I hope it helps.
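The executable-permission point above is easy to get wrong; a docker-free sketch of the check and the fix:

```shell
# A script keeps the mode bits it had when it was copied or mounted in,
# so set the executable bit before handing it to the container.
echo 'echo ok' > /tmp/script.sh
chmod +x /tmp/script.sh
/tmp/script.sh           # prints: ok
test -x /tmp/script.sh   # verifies the executable bit is set
```

If the bit is missing inside a running container, docker exec <container> chmod +x /path/script.sh fixes it in place.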
