I have a Jenkins container managed by Docker Compose running on Ubuntu 20.04. When I push my code, a job is automatically started in Jenkins, which runs a docker build followed by a docker compose up -d. My application is up and running but not reachable...
When I run docker ps on Ubuntu, my application's container is not listed, because the image was built inside the Jenkins container and docker compose up was also executed inside the Jenkins container.
I want my app to run side by side with Jenkins and the other containers on Ubuntu, not inside the Jenkins container. How can I achieve that?
Assuming you have a dockerized application, and Jenkins is also running in a Docker container exposed on some port, you can open Jenkins and write a freestyle job for deployment that does the following:
Have a deploy folder containing a shell script like the one below. This deploy script can live in the same repository as the source code or somewhere else.
ssh -i key user@server '
# The commands below run on the Docker host, inside this SSH session.
# Check out your code from version control
git clone your_repo
# Build your docker services
cd your_repo
docker-compose build service1
docker-compose build service2
...
# Run the built services
docker-compose up -d service1
docker-compose up -d service2
...
'
Now the application should be deployed on the host and accessible to you, and Jenkins should be able to report the outcome of the job.
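Once the job has run, a quick check on the Ubuntu host (not inside the Jenkins container) should show the services running side by side with Jenkins; the port below is only a hypothetical placeholder:
# On the Ubuntu host, not inside the Jenkins container:
docker ps                       # the application containers should now be listed here
curl -I http://localhost:8000   # hypothetical published port; confirms the app is reachable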
I am running Jenkins on EKS with the Kubernetes plugin.
I have one cloud set up, and a pod template running my own Alpine-based container image with Docker installed (to execute docker commands).
I currently have only one job, which just runs "docker service ls" as a shell step.
I get the error:
"/tmp/jenkins8475081645730667159.sh: line 2: docker: command not found"
However, when I go inside the container using exec and switch to the "jenkins" user, I am able to run "docker".
It looks like my pod contains both the jnlp container and my alpine-docker container: when I write to a file, it goes to the Alpine container, but when I run "docker" it tries to run in the jnlp container. Does this make any sense? Thanks
You have to run docker from the container that has the Docker client installed.
In your pipeline:
container('mycontainer') {
    sh 'docker service ls'
}
You can't use a container other than the jnlp one if you are using freestyle jobs; that is only possible with pipeline jobs.
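One way to see the split described in the question (the pod and container names below are assumptions) is to check each container in the agent pod for the docker binary:
# Hypothetical pod "jenkins-agent-abc12" with containers "jnlp" and "docker-alpine"
kubectl exec jenkins-agent-abc12 -c jnlp -- which docker           # usually fails: the jnlp image has no docker client
kubectl exec jenkins-agent-abc12 -c docker-alpine -- which docker  # succeeds if your image ships the docker CLI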
I have been playing around with Jenkins, and I'm now able to connect GitHub and set up triggers. I want to build my code using make and docker; however, when I execute make or docker in the shell build step, they are not found. How do I configure Jenkins' build step to run make and docker?
I would install make and the Docker daemon on your Jenkins server. This will allow you to build and push Docker images from within your Jenkins build pipelines using the Execute shell build step. You will also be able to run make commands there.
docker build -t <USER>/<REPO_NAME>:<TAG> .
docker push <USER>/<REPO_NAME>:<TAG>
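On a Debian/Ubuntu Jenkins server, installing the prerequisites might look roughly like this (a sketch; package and service names vary by distribution):
# Install make and the Docker daemon, then let the jenkins user talk to Docker
sudo apt-get update
sudo apt-get install -y make docker.io
sudo usermod -aG docker jenkins
sudo systemctl restart jenkins   # pick up the new group membership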
There are also Jenkins plugins available for building your Docker images.
I would NOT recommend running Jenkins in a Docker container and then running Docker inside that container. This is known as Docker in Docker (aka DinD), and should be avoided for the reasons stated in this article.
You can install Docker on the same machine where your Jenkins is running.
Or you can run a Docker container which contains both Jenkins and Docker.
If your purpose is to learn Jenkins, I suggest running Jenkins in a Docker container with the Docker daemon on your host machine.
Just have Docker installed on your host machine.
Then run the following command:
docker run \
--rm -u root -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock --name myjenkinsserver jenkinsci/blueocean
Then you are ready to go.
Add a pipeline job as follows:
pipeline {
    agent { docker 'gcc:latest' }
    stages {
        stage('build') {
            steps {
                sh 'make --version'
            }
        }
    }
}
Now you can run make commands.
In general, it is better to run Jenkins jobs on separate Jenkins agent machines (historically called slaves). You can create custom Jenkins agents which include the necessary tools, in your case make.
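A custom agent image along those lines could be sketched roughly like this (the base image tag and package list are assumptions):
# Build a custom agent image that bundles make and the docker CLI
cat > Dockerfile.agent <<'EOF'
FROM jenkins/inbound-agent:latest
USER root
RUN apt-get update && apt-get install -y make docker.io && rm -rf /var/lib/apt/lists/*
USER jenkins
EOF
docker build -t my-custom-agent:latest -f Dockerfile.agent .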
I've got a Jenkins declarative pipeline build that runs Gradle and uses a Gradle plugin to create a Docker image. I'm also using a dockerfile agent directive, so the entire thing runs inside a Docker container. This was working great with Jenkins itself installed in Docker (I know, that's a lot of Docker). I had Jenkins installed in a Docker container on Docker for Mac, with -v /var/run/docker.sock:/var/run/docker.sock (DooD) per https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/. With this setup, the pipeline's docker agent ran fine, and the docker build command within that agent ran fine as well. I assumed Jenkins also mounted the Docker socket into its inner Docker container.
Now I'm trying to run this with Jenkins installed on an EC2 instance with Docker installed properly. The jenkins user has the docker group as its primary group and is able to run "docker run hello-world" successfully. My pipeline build starts the docker agent container (based on the gradle image with various things added), but when Gradle attempts to run the docker build command, I get the following:
* What went wrong:
Execution failed for task ':docker'.
> Docker execution failed
Command line [docker build -t config-server:latest /var/lib/****/workspace/nfig-server_feature_****-HRUNPR3ZFDVG23XNVY6SFE4P36MRY2PZAHVTIOZE2CO5EVMTGCGA/build/docker] returned:
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Is it possible to build docker images inside a docker agent using declarative pipeline?
Yes, it is.
The problem is not with Jenkins' declarative pipeline, but with how you're setting things up and running them.
From the error above, it looks like there's a missing permission that needs to be granted.
Maybe if you share what your configuration looks like and how you're running things, more people can help.
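A few quick checks on the EC2 host usually narrow this kind of error down (a sketch of the usual suspects, not specific to this setup):
ls -l /var/run/docker.sock     # the socket should be group-owned by "docker"
id jenkins                     # the jenkins user should list "docker" among its groups
sudo -u jenkins docker info    # confirms the daemon is reachable as the jenkins user
If those all pass on the host but the build still fails, the Docker socket is probably not visible inside the pipeline's docker agent container, since it has to be mounted into that container as well.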
I'm trying to run Docker inside a Jenkins container that is also running in Docker (i.e. Docker in Docker). What I want to know is how to properly start the Docker service when booting Jenkins. The only solution I've found so far is to build my own Jenkins image based on the official Jenkins image, but change the jenkins script loaded by the entrypoint to also start up Docker:
# I've added this line just before Jenkins is started from the script:
sudo service docker start
# I've also removed "exec" from the original file, which used "exec java $JAVA_OPTS ...", but that didn't work
java $JAVA_OPTS -jar /usr/share/jenkins/jenkins.war $JENKINS_OPTS "$@"
This works when I run a new container (using docker run), but the problem is that if I do docker start on a stopped container, the Docker service is not started.
I strongly suspect that this is not the right way to start my Docker service. My plan is to perhaps use supervisord to start Jenkins and Docker separately (I suppose container linking is out of the question since Docker should be executed as a service on the same container that Jenkins is running on?). My concern with this approach is that I'm going to lose the EntryPoint specified in the Jenkins Dockerfile which allows me to pass arguments to the Jenkins container when starting the container, for example:
docker run -p 8080:8080 -v /your/home:/var/jenkins_home jenkins -- <jenkins_arguments>
Does anyone have any recommendations on a good way to solve this preferably by not forking the official Jenkins image?
I'm pretty sure you cannot do that.
Docker in Docker doesn't mean you have to run Docker inside Docker with three levels: host > first-level container > second-level container.
In fact, you just need to share Docker with the host, and it is your host that will run the other containers.
To do that, you have to mount a volume with the -v parameter:
-v /var/run/docker.sock:/var/run/docker.sock
With this option, when you run docker inside your Jenkins container, the Docker client will communicate with the Docker daemon on your host in order to run the new containers.
To do that, you should also run your Jenkins container as privileged:
--privileged
To sum up, here is the full command line:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock --privileged myimage
And you don't need to create a new Jenkins image for that.
Hope this helps.
http://container-solutions.com/running-docker-in-jenkins-in-docker/
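A quick way to confirm that the socket sharing works, assuming the image includes a Docker client (the container name below is a placeholder):
# From the host: run "docker ps" inside the Jenkins container
docker exec -it <jenkins_container> docker ps   # should list the host's containers, since the daemon is shared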
I am new to Bamboo and am trying to get the following process flow working with Bamboo and Docker:
Developer commits code to a Bitbucket branch
Build plan detects the change
Build plan then starts a Docker container on a dedicated AWS instance where Docker is installed. In the Docker container a remote agent is started as well. I use the atlassian/bamboo-java-agent:latest docker container.
Remote agent registers with Bamboo
The rest of the build plan runs in the container
Container and agent get removed when the plan completes
I set up a test build plan, and in the plan my first task is to start a Docker container as follows:
sudo docker run -d --name "${bamboo.buildKey}_${bamboo.buildNumber}" \
-e HOME=/root/ -e BAMBOO_SERVER=http://x.x.x.x:8085/ \
-i -t atlassian/bamboo-java-agent:latest
The second task is to get the source code and deploy, the third task runs the tests, and the fourth task shuts down the container.
There are other agents online in Bamboo as well, and my build plan sometimes uses those instead of the Docker container that I started as part of the build plan.
Is there a way for me to do the above?
I hope it all makes sense. I am truly new to this and any help will be appreciated.
We (Atlassian Build Engineering) have created a set of plugins to run Docker-based agents in a cluster (ECS) that come online, build a single job and then exit. We've recently open-sourced the solution.
See https://bitbucket.org/atlassian/per-build-container for more details.
First you need to make sure the "main" Docker container is not exiting when you run it.
Check with:
docker ps -a
You should see that it is running.
Now, assuming it is running, you can execute commands inside the container.
To get into the container:
docker exec -it containerName bash
To execute a command inside the container from outside the container:
docker exec -it containerName commandToExecuteInsideTheContainer
You could, as part of the container's Dockerfile, COPY a script into it that does something.
Then you can execute that script from outside the container using the above approach.
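As a rough illustration (the script name and path are made up), the Dockerfile would copy the script in and you would then call it from outside:
# In the image's Dockerfile:
#   COPY run-inside.sh /usr/local/bin/run-inside.sh
#   RUN chmod +x /usr/local/bin/run-inside.sh
# Then, from outside the running container:
docker exec -it containerName /usr/local/bin/run-inside.sh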
Hope this gives some insight.