How to configure Jenkins in Docker?

I'm new to both Jenkins and Docker. I am currently working on a project where I allow users to submit jobs to Jenkins, and I was wondering if there is a way to use Docker to dynamically spin up a Jenkins server, redirect all the user jobs coming from my application to this server, and then destroy this Jenkins instance once the work is done. Is it possible? If yes, how? If no, why? Also, I need to set up Maven for this Jenkins server; do I need another container for that?

You can try the following, but I can't guarantee that it will be easy to move your Jenkins content from your dedicated server to your Docker container; I have not tried it myself before.
The main steps are the following:
Create a backup of the content of your dedicated Jenkins server using tar.
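For example, the backup could be created like this (a sketch that assumes the Jenkins home on the dedicated server is /var/lib/jenkins; adjust the path to your installation):
$ sudo tar -cvpzf jenkins-backup.tar -C /var/lib/jenkins .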
Create a named docker volume:
$ docker volume create --name jenkins-volume
Extract your jenkins-backup.tar into your Docker volume. This volume is located at /var/lib/docker/volumes/jenkins-volume/_data:
$ sudo tar -xvpzf jenkins-backup.tar -C /var/lib/docker/volumes/jenkins-volume/_data/
Now you can start your Jenkins container and tell it to use the jenkins-volume. Use a Jenkins image of the same version as the dedicated Jenkins:
$ docker run -d -u jenkins --name jenkins -p 50000:50000 -p 443:8443 -v jenkins-volume:/var/jenkins_home --restart=always jenkins:your-version
For me this worked when moving the content from our Jenkins container on AWS to a Jenkins container at another cloud provider. I did not try it with a dedicated server, but you can't break anything in your existing Jenkins by trying it.

Related

How to build Docker Container outside of dockerized Jenkins

I have a Jenkins container, set up with Docker Compose, running on Ubuntu 20.04. When I push my code, a job is automatically started in Jenkins: a docker build is executed, and after that a docker compose up -d is executed as well. My application is up and running but not reachable.
When I run docker ps on Ubuntu, there is no container for my application, because the Docker container of my app was built inside the Jenkins container, and docker compose up was also executed inside the Jenkins container.
I want my app to run side by side with Jenkins and the other containers on Ubuntu, not inside the Jenkins container. How can I achieve that?
Assuming you have an application that is dockerized, and Jenkins is also running in a Docker container exposed at some port, you can open Jenkins and write a freestyle job for deployment which does the following:
Have a deploy folder containing a shell script written as per below. This could live in the same repository as the source code or somewhere else.
ssh -i key user@server '
# The commands here are run within this SSH session.
# Check out your code from version control
git clone your_repo
# Build your docker services
cd your_repo
docker-compose build service1
docker-compose build service2
...
# Run your built services
docker-compose up -d service1
docker-compose up -d service2
...
'
Now the application should be deployed and accessible for you, and Jenkins should be able to report the same.
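On the Jenkins side, the freestyle job then only needs an "Execute shell" build step that calls this script. A minimal sketch, assuming the script is committed as deploy/deploy.sh in the checked-out repository (the name and path are just illustrations):
chmod +x deploy/deploy.sh
./deploy/deploy.sh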

How to run shell script on Host from jenkins docker container?

I know my issue is already discussed in How to run shell script on host from docker container? but I think my issue is a little bit more complicated.
First, let me explain my situation. I'm using Jenkins 2.x in a Docker container on a CentOS VM (the host). In Jenkins I created a job which checks out 3 files from SVN (2 shell scripts and 1 .jar file). These files are downloaded into the Jenkins workspace inside the Jenkins Docker container, and also onto the host in a mounted directory like this:
volumes:
- ${DATA_HOME}/jenkins/data:/var/jenkins_home
One of these scripts is executed by the Jenkins job, and it in turn executes the other script. The second script checks out an SVN directory and does much more.
So I want a new mounted volume in which all results of the executed second script are placed on the host. I think connecting to the host over SSH and executing the script there would be fine, but how can I do that?
I hope I explained my issue understandably.
I will answer regarding "I think connecting to the host over SSH and executing the script there would be fine, but how can I do that?"
Pass the host machine's IP to your docker run command:
docker run --name redis --env pass=pass_my --add-host="hostmachine:192.168.1.23" -dit redis
Now,
docker exec -it redis ash
and run this command, which will SSH from the container to the host:
ssh user_name@hostmachine 'ls; bash /home/user_name/Desktop/test.sh; docker run --name db -dit db; docker ps'
If you want something without a password, then set up an SSH key in the container, or you can also try:
sshpass -p $pass ssh user_name@hostmachine 'ls; /home/user_name/Desktop/test.sh; docker run --name db -dit db; docker ps'
Or, if you want to run a script that is inside the container, you can do that too; just pass the script to ssh:
sshpass -p $pass ssh user_name@hostmachine < ./ab.sh
Note: $pass is the host's password taken from the environment variable set above, and hostmachine is the host entry we set during the run command.
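For the key-based (passwordless) variant, a minimal sketch run once inside the container could look like this (it assumes an OpenSSH client and ssh-copy-id are available in the container and that user_name exists on the host):
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519   # generate a key pair without a passphrase
ssh-copy-id user_name@hostmachine                   # copy the public key to the host (asks for the password once)
ssh user_name@hostmachine 'bash /home/user_name/Desktop/test.sh'   # later logins no longer prompt for a password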
Based on the comments on this answer:
We can simply install an SSH plugin (SSH or Publish over SSH) and it will work after providing a username/password.
The only thing to watch out for is that host name resolution does not work, so we will need to provide an IP address instead.
As pointed out, this is not the best approach, but sometimes when migrating from older systems we need to move one step at a time, and this is the easiest step to take.

Add file to jenkins workspace with docker

I have installed Jenkins in Docker successfully. When I create a new job and I would like to execute an sh file from my workspace, what is the best way to add a file to my workspace with Docker? I started my container with this: docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /var/jenkins_home jenkins
You could copy a file from your file system to the container with a simple command from your terminal.
docker cp [OPTIONS] LOCALPATH|- CONTAINER:PATH
https://docs.docker.com/engine/reference/commandline/cp/
example:
docker cp /yourpath/yourfile <containerId>:/var/jenkins_home
It depends a bit on what the planned lifecycle of your Jenkins container is. If it is just used temporarily and it does no harm if the data is gone, docker cp as NickGnd suggested will do the trick.
But the working data of Jenkins, like job configs, system configs and workspaces, will only live inside the container, and all of it will be gone once the container is removed. So if you plan to have a longer-running Jenkins environment, you might want to persist the data outside of the container so it survives recreating the container, launching new container versions and so on. This can be done with the option --volume /path/on/host:/path/in/container, or its short form -v, on docker run.
There is also the --volumes-from option, which you can use to keep the data in one "data container" and mount it into your Jenkins container.
For further information, have a look at the Docker volumes documentation.
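For example, a persistent setup with a host directory could look like this (the host path /srv/jenkins_home is only an illustration):
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /srv/jenkins_home:/var/jenkins_home jenkins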

How to properly start Docker inside Jenkins that is also running in Docker

I'm trying to run Docker inside a Jenkins container that is also running in Docker (i.e. Docker in Docker). What I want to know is how to properly start the Docker service when booting Jenkins. The only solution I've found today is to build my own Jenkins image based on the official Jenkins image but change the jenkins script loaded by the entry point to also start up Docker:
# I've added this line just before Jenkins is started from the script:
sudo service docker start
# I've also removed "exec" from the original file which used "exec java $JAVA_OPTS ..." but that didn't work
java $JAVA_OPTS -jar /usr/share/jenkins/jenkins.war $JENKINS_OPTS "$@"
This works when I run a new container (using docker run), but the problem is that if I do docker start on a stopped container, the Docker service is not started.
I strongly suspect that this is not the right way to start my Docker service. My plan is to perhaps use supervisord to start Jenkins and Docker separately (I suppose container linking is out of the question since Docker should be executed as a service on the same container that Jenkins is running on?). My concern with this approach is that I'm going to lose the EntryPoint specified in the Jenkins Dockerfile which allows me to pass arguments to the Jenkins container when starting the container, for example:
docker run -p 8080:8080 -v /your/home:/var/jenkins_home jenkins -- <jenkins_arguments>
Does anyone have any recommendations on a good way to solve this preferably by not forking the official Jenkins image?
I'm pretty sure you don't need to do that.
Docker in Docker doesn't mean you have to run Docker inside Docker with 3 levels: host > first-level container > second-level container.
In fact, you just need to share Docker with the host, and it is your host that will run the other containers.
To do that, you have to mount a volume with the -v parameter:
-v /var/run/docker.sock:/var/run/docker.sock
With this in place, when you run docker run inside your Jenkins container, the Docker client will communicate with the Docker daemon of your host in order to run the new container.
For that to work, you should run your Jenkins container with the privileged flag:
--privileged
To summarize, here is the full command line:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock --privileged myimage
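A quick sanity check of this setup (assuming the Docker CLI is available inside the image, and where <jenkins-container> stands for the name or ID of your Jenkins container) is to run docker info from inside the container and confirm that it reports your host's daemon:
docker exec -it <jenkins-container> docker info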
And you don't need to create a new Jenkins image for that.
Hope this helps.
http://container-solutions.com/running-docker-in-jenkins-in-docker/

Starting new Docker container with every new Bamboo build run and using the container to run the build in

I am new to Bamboo and am trying to get the following process flow working with Bamboo and Docker:
Developer commits code to a Bitbucket branch
Build plan detects the change
Build plan then starts a Docker container on a dedicated AWS instance where Docker is installed. In the Docker container a remote agent is started as well. I use the atlassian/bamboo-java-agent:latest docker container.
Remote agent registers with Bamboo
The rest of the build plan runs in the container
Container and agent gets removed when plan completes
I set up a test build plan, and in the plan my first task is to start a Docker instance as follows:
sudo docker run -d --name "${bamboo.buildKey}_${bamboo.buildNumber}" \
-e HOME=/root/ -e BAMBOO_SERVER=http://x.x.x.x:8085/ \
-i -t atlassian/bamboo-java-agent:latest
The second task is to get the source code and deploy it, the third task runs the tests, and the fourth task shuts down the container.
There are other agents online in Bamboo as well, and my build plan sometimes uses those instead of the Docker container that I started as part of the build plan.
Is there a way for me to do the above?
I hope it all makes sense. I am truly new to this and any help will be appreciated.
We (Atlassian Build Engineering) have created a set of plugins to run Docker based agents in a cluster (ECS) that comes online, builds a single job and then exits. We've recently open sourced the solution.
See https://bitbucket.org/atlassian/per-build-container for more details.
First you need to make sure the "main" Docker container is not exiting when you run it.
Check with:
docker ps -a
You should see that it is running.
Now, assuming it is running, you can execute commands inside the container.
To get into the container:
docker exec -it containerName bash
To execute a command inside the container from outside the container:
docker exec -it containerName commandToExecuteInsideTheContainer
You could, as part of the container's Dockerfile, COPY a script into it that does something.
Then you can execute that script from outside the container using the above approach, as sketched below.
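A minimal sketch of that idea (the script name run-build.sh and its target path are only illustrations):
# Dockerfile
FROM atlassian/bamboo-java-agent:latest
COPY run-build.sh /usr/local/bin/run-build.sh
RUN chmod +x /usr/local/bin/run-build.sh
Then, from the host:
docker exec -it containerName /usr/local/bin/run-build.sh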
Hope this gives some insight.
