Testing a tkinter-based function on Jenkins in a Docker container on AWS

I have Python code that passes all the tests on my local machine. The code uses tkinter and provides a GUI. However, none of the test functions actually open the GUI (they do call tk.Tk(), though).
I created a Docker container locally and could use X11 forwarding to pass the tests in that "local" container as well.
Now I'm trying to run the tests on Jenkins, which I have set up on an EC2 instance. Jenkins is supposed to build a Docker image from the Dockerfile in my repository and then call "docker run -e ... -v ..." (similar to what I had on my local computer) to run the tests. I understand that my EC2 instance does not have a GUI, so X11 forwarding is not as simple as it was on my computer. There should still be a way for tests that use a GUI to run through a Jenkins setup on AWS. Any help is appreciated.
EDIT
Here is the build script that I have on AWS; it builds the Docker image from the Dockerfile and runs the container:
IMAGE_NAME="test-image"
CONTAINER_NAME="deidentifier_clinical"
echo "Check current working directory"
pwd
echo "Build docker image and run container"
docker build -t $IMAGE_NAME .
echo $DISPLAY
docker run --name "$CONTAINER_NAME" -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY $IMAGE_NAME bash -c "cd /$CONTAINER_NAME; make test"
echo "Copy coverage.xml into Jenkins container"
rm -rf reports; mkdir reports
docker cp "$CONTAINER_NAME:/deidentifier_clinical/htmlcov/." reports/
echo "Cleanup"
docker stop $CONTAINER_NAME
docker rm $CONTAINER_NAME
docker rmi $IMAGE_NAME
This fails at the docker run line. The same script runs with no problem on my local computer after setting up X11 forwarding.
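One approach commonly used for headless CI (not something confirmed in the question) is to run the tests under a virtual framebuffer inside the container, so that no X server is needed on the EC2 host. A rough sketch, assuming xvfb can be installed in the image and that make test runs the Python tests:
# Sketch only: add Xvfb to the image (exact package name/manager depends on the base image),
# e.g. in the Dockerfile:  RUN apt-get update && apt-get install -y xvfb
# Then run the tests under a virtual display instead of forwarding X11 from the host:
docker run --name "$CONTAINER_NAME" $IMAGE_NAME bash -c "cd /$CONTAINER_NAME; xvfb-run -a make test"
With xvfb-run, the -v /tmp/.X11-unix and -e DISPLAY options are no longer needed, because tk.Tk() talks to the virtual display created inside the container.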

Related

Cannot copy file from jenkins container to host through jenkins pipeline

Working in zsh (Mac M1), I used this docker run command:
docker run --name container2 --link container1:alias -v /Users/omerboucris/Desktop/Devops_Final_Project/jenkins-data:/var/jenkins_home -p 8090:8080 -d jenkins/jenkins:lts
I have set up the Jenkins job and I'm trying to docker cp a local file, created at /workspace/job1/file.txt, to my host path:
/Users/omerboucris/Desktop/Devops_Final_Project/jenkins-data
Why do I have no access to my host machine? Jenkins runs as root only. If I print pwd in my job, I get: /var/jenkins_home/workspace/MonitorJob
So how can I use docker cp?
This command was not helpful:
docker cp d6dac560c25b:/var/jenkins_home/workspace/FinalProject_Devops/OurLandPage.jsp <HOME>/Desktop
Even if I try to cd to my Desktop, I can't.
thanks!
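No answer is shown for this question, but one common suggestion (an assumption on my part, not the poster's solution) is that docker cp has to be run on the host, outside the Jenkins container, and that the bind-mounted jenkins_home already exposes the workspace on the host:
# Sketch only: run this on the Mac host, not inside the Jenkins container
docker cp container2:/var/jenkins_home/workspace/MonitorJob/file.txt ~/Desktop/
# Alternatively, because /var/jenkins_home is bind-mounted, the workspace should already
# be visible on the host under the jenkins-data folder:
ls /Users/omerboucris/Desktop/Devops_Final_Project/jenkins-data/workspace/MonitorJob/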

Keep Docker running Azure CI/CD Pipeline

I have a release pipeline with two steps: the first one copies a .jar file to a server over SSH, and the second step runs a shell script with the following commands:
docker container stop $(sudo docker ps -aqf "name=crmbackend")
docker rm crmbackend
cd /home/artotor/crmbackend
docker build -t crmbackend .
docker run -d --name crmbackend -p 9090:8888 crmbackend:latest &
exit 0
And this is the stage in azure devops:
What I am trying to achieve is to keep the Docker container running (step 5) without killing it, and also to tell the pipeline that everything went OK. What could be the way to achieve it? I've tried adding the -t -i options to docker run, without success.
This is the Dockerfile that I am using:
FROM openjdk:11
ADD target/crm-backend.jar crm-backend.jar
EXPOSE 8888
ENTRYPOINT ["java", "-jar", "crm-backend.jar"]

Execute local shell script using docker run interactive

Can I execute a local shell script within a Docker container using docker run -it?
Here is what I can do:
$ docker run -it 5ee0b7440be5
bash-4.2# echo "Hello"
Hello
bash-4.2# exit
exit
I have a shell script on my local machine
hello.sh:
echo "Hello"
I would like to execute the local shell script within the container and read the value returned:
$ docker run -it 5e3337440be5 #Some way of passing a reference to hello.sh to the container.
Hello
A specific design goal of Docker is that you can't. A container can't access the host filesystem at all, except to the extent that an administrator explicitly mounts parts of the filesystem into the container. (See @tentative's answer for a way to do this for your use case.)
In most cases this means you need to COPY all of the scripts and support tools into your image. You can create a container running any command you want, and one typical approach is to set the image's CMD to "the thing the container normally does" (like running a Web server) but to allow running the container with a different command (an admin task, a background worker, ...).
# Dockerfile
FROM alpine
...
COPY hello.sh /usr/local/bin
...
EXPOSE 80
CMD httpd -f -h /var/www
docker build -t my/image .
docker run -d -p 8000:80 --name web my/image
docker run --rm --name hello my/image \
hello.sh
In normal operation you should not need docker exec, though it's really useful for debugging. If you are in a situation where you're really stuck, you need more diagnostic tools to understand how to reproduce a situation, and you don't have a choice but to look inside the running container, you can also docker cp the script or tool into the container before you docker exec there. If you do this, remember that the image also needs to contain any dependencies for the tool (interpreters like Python or GNU Bash, C shared libraries), and that any docker cp'd files will be lost when the container exits.
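For illustration, the copy-then-exec debugging flow mentioned above might look like this, reusing the web container from the earlier example (the target path is an arbitrary choice):
# Sketch only: copy a helper script into the running container and execute it there
docker cp hello.sh web:/tmp/hello.sh
docker exec web sh /tmp/hello.sh
# note: anything copied this way is gone once the container is removed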
You can use a bind mount to mount a local file into the container and execute it. When you do that, however, be aware that the container process needs read and execute access to the folder or the specific script you want to run. Depending on your objective, using Docker for this purpose may not be the best idea.
See @David Maze's answer for reasons why. However, here's how you can do it:
Assuming you're on a Unix-based system and the hello.sh script is in your current directory, you can mount that single script into the container with -v $(pwd)/hello.sh:/home/hello.sh.
This command will mount the file into your container, set the working directory to the folder where it's mounted, and start a shell:
docker run -it -v $(pwd)/hello.sh:/home/hello.sh --workdir /home ubuntu:20.04 /bin/sh
root@987eb876b:/home# ./hello.sh
Hello World!
This command will run that script directly and save the output into the variable output:
output=$(docker run -it -v $(pwd)/hello.sh:/home/hello.sh ubuntu:20.04 /home/hello.sh)
echo $output
Hello World!
References for more information:
https://docs.docker.com/storage/bind-mounts/#start-a-container-with-a-bind-mount
https://docs.docker.com/storage/bind-mounts/#use-a-read-only-bind-mount

Jenkins does not wait for docker exec command to complete

Here's the situation:
I have a Docker container (Jenkins). I've mounted the Docker socket into my container so that I can run docker commands inside my Jenkins container.
Manually, everything works in the container. However, when Jenkins executes the job, it doesn't "wait" for the docker exec command to run to completion.
Below is an extract from the Jenkinsfile. The short-lived printenv command runs correctly and prints the environment variables. The next command (python) just gets started, and then Jenkins moves on immediately without waiting for completion. The Jenkins agent (slave) is running on an Ubuntu image. Running all these commands outside Jenkins works as expected.
echo "Running the app docker container in detached tty mode to keep it up"
docker run --detach --tty --name "${CONTAINER_NAME}" "${IMAGE_NAME}"
echo "Listing environment variables"
docker exec --interactive "${CONTAINER_NAME}" bash -c "printenv"
echo "Running test coverage"
docker exec --interactive "${CONTAINER_NAME}" bash -c "python -m coverage run --source . --branch -m pytest -vs"
It seems it may be related to this question.
Please can anyone explain how to get Jenkins to wait for the docker exec command to complete before proceeding to the next step?
I have considered alternatives, like the Docker Pipeline plugin, but would much prefer to use something close to what I have above where possible.
OK, as another approach, I've tried using the Docker Pipeline plugin here.
You can mount docker.sock as a volume to orchestrate containers on your host machine, like this in your docker-compose.yml:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Depending on your setup you might need to run
chmod 666 /var/run/docker.sock
to get going in the first place.
This works on macOS as well as Linux.
Ugh. This was down to the way that I'd set up Docker support on the slave container.
I'd used socat to provide a TCP server proxy. Instead, I switched that out for a plain old docker.sock volume shared between host and container.
volumes:
- /var/run/docker.sock:/var/run/docker.sock
The very first time, I had to also sort out a permissions issue by doing (inside the container):
rm -Rf ~/.docker
chmod 666 /var/run/docker.sock
After that, everything "just worked". Very painful experience.
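For reference, the plain socket mount described above is roughly equivalent to starting the agent container like this (image and container names are placeholders, not the poster's actual setup):
# Sketch only: share the host Docker socket with the agent container
docker run -d --name jenkins-agent \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-jenkins-agent-image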

run docker commands from command prompt versus jenkins script

I have a test Ubuntu server with docker-machine installed. I have a number of Docker containers running on the server, including a Jenkins container. I run Jenkins with the following command:
docker run -d --name jenkins -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker --restart=always -p 8080:8080 -v ~/jenkinsHome:/var/jenkins_home docker-jenkins
I am working on managing my images through Jenkins. I can start all but one of my containers via a Jenkins shell script. The one container that fails appears to start in the script (I do a docker ps after the docker run in the script). However, the container stops after the script completes. I am using the same docker run command that works at the command prompt, but it fails in the Jenkins script:
sudo docker run -d --net=host -v ~/plex-config:/config -v ~/Media:/media -p 32400:32400 wernight/plex-media-server
I have double-checked the folder permissions and they are correct. Can anyone direct me to possible reasons why the run command is failing in Jenkins, but not at the command prompt?
Using docker ps -a I was able to get an ID for the stopped container. Then, using docker logs, I was able to see that the error was a folder permission issue. Digging deeper, it was a user-permission mismatch: the user Jenkins runs as inside its container could not access the mounted folder correctly. I decided to circumvent the problem by using docker stop and docker start commands instead of docker run.
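The diagnosis described above roughly corresponds to these commands (the container ID is a placeholder):
# Sketch of the diagnostic steps mentioned above
docker ps -a                 # list all containers, including stopped ones, to find the ID
docker logs <container-id>   # read the container's output to see the permission error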
