How to run a docker container on a remote machine - docker

I am trying to run this Jenkins pipeline code via Docker, using an AWS EC2 instance (ec2-user) as the VM. The code works fine, but...
node {
    stage('SCM CHECKOUT') {
        git 'https://bitbucket.org/rajesh212/myapp.git'
    }
    stage('MVN BUILD') {
        def mvnHome = tool name: 'maven', type: 'maven'
        sh "${mvnHome}/bin/mvn clean package"
    }
    stage('DEPLOYMENT VIA DOCKER') {
        def customImage = docker.build("image:${env.BUILD_ID}")
        docker.image("image:${env.BUILD_ID}").withRun('-p 9090:8080') { sleep 10000 }
    }
}
If I don't include the sleep command, the job runs successfully but my docker container starts and stops immediately, i.e. I am not able to get the output. How do I solve this problem?
I also want to run this docker image on a remote machine. How do I do that?

In order to run on a remote server, you must use the docker.withServer command.
As for the container stopping, try changing the withRun call to withRun('-d -p 9090:8080'). A sketch is shown below.
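A minimal sketch of both ideas together, assuming the Docker Pipeline plugin, a remote daemon reachable over TCP, and a hypothetical credentials ID remote-docker-creds:
docker.withServer('tcp://REMOTE_HOST:2376', 'remote-docker-creds') {
    // both the build and the container run against the remote Docker daemon
    def customImage = docker.build("image:${env.BUILD_ID}")
    // run() starts the container detached and leaves it running
    customImage.run('-p 9090:8080')
}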

If you are using declarative pipelines, try this ssh command. As a prerequisite you need to set up a key pair that allows Jenkins to ssh into the remote server. A dedicated ssh key pair for deployment is recommended for security reasons:
stage('Deploy to Production') {
    steps {
        sh 'ssh -i path/to/deploy_private_key user@DNS_REMOTE_SERVER "docker run -d REGISTRY/YOUR_DOCKER_IMAGE:TAG"'
    }
}
Use the -d parameter to run the container in detached mode.
Hope it helps.

Related

How to pass docker run arguments in Jenkins?

I am trying to set up my Jenkins pipeline using this docker image. It needs to be executed as follows:
docker run --rm \
-v $PROJECT_DIR:/input \
-v $PROJECT_DIR:/output \
-e PLATFORM_ID=$PLATFORM_ID \
particle/buildpack-particle-firmware:$VERSION
The implementation in my Jenkins pipeline looks like this:
stage('build firmware') {
    agent {
        docker {
            image 'particle/buildpack-particle-firmware:4.0.2-tracker'
            args '-v application:/input -v application:/output -e PLATFORM_ID=26 particle/buildpack-particle-firmware:4.0.2-tracker'
        }
    }
    steps {
        archiveArtifacts artifacts: 'application/target/*.bin', fingerprint: true, onlyIfSuccessful: true
    }
}
Executing this on my local machine works just fine.
Upon executing the Jenkins pipeline, I am eventually getting this error:
java.io.IOException: Failed to run image 'particle/buildpack-particle-firmware:4.0.2-tracker'. Error: docker: Error response from daemon: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: exec: "-w": executable file not found in $PATH: unknown.
I read through the documentation of Jenkins + Docker, but I couldn't find out how to use such an image. All the guides usually explain how to run a docker image and execute shell commands.
If I understand it correctly, this Dockerfile is the layout of the docker image in question.
How do I get around this issue and call a docker container with run arguments?
The agent mode is intended for running Jenkins build steps inside a container; in your example, that means running the archiveArtifacts step instead of the thing the container normally does. You could imagine putting an image that contains only a build tool, like golang or one of the Java images, in the agent { docker { image } } line, and Jenkins would inject several lines of docker command-line options so that the tool runs against the workspace tree.
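A tiny sketch of that pattern, with maven:3-alpine standing in for any build-tool image:
stage('Build') {
    agent { docker { image 'maven:3-alpine' } }
    steps {
        // runs inside the maven container, against the checked-out workspace
        sh 'mvn -B package'
    }
}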
The Jenkins Docker interface may not have a built-in way to wait for a container to complete. Instead, you can launch it as a "sidecar" container, then run docker wait (still from outside the container) to wait for it to finish. That would look roughly like this:
stage('build firmware') {
    steps {
        script {
            docker
                .image('particle/buildpack-particle-firmware:4.0.2-tracker')
                // note: the image name must not be repeated in the withRun args,
                // or docker run would treat it as the container command
                .withRun('-v application:/input -v application:/output -e PLATFORM_ID=26') { c ->
                    sh "docker wait ${c.id}"
                }
        }
        archiveArtifacts artifacts: 'application/target/*.bin', fingerprint: true, onlyIfSuccessful: true
    }
}
In the end, it is up to Jenkins how the docker run command is executed and which entrypoint is used. Unfortunately, I can't change the settings of the Jenkins server, so I had to find a workaround.
The solution for me is similar to my initial approach and looks like this:
agent {
    docker {
        image 'particle/buildpack-hal'
    }
}
environment {
    APPDIR = "$WORKSPACE/tracker-edge"
    DEVICE_OS_PATH = "$WORKSPACE/device-os"
    PLATFORM_ID = "26"
}
steps {
    sh 'make sanitize -s'
}
My guess is that invoking the docker container the expected way simply doesn't work on my Jenkins server; the container has to be started first, and the shell commands executed from within it.

Run a job inside a docker container on a specific Jenkins agent

I am trying to run Jenkins pipeline steps in a Docker container on a specific agent.
I can use docker to run them, but the container will run on a random agent, and I need it to run on a specific agent.
Here's what I tried:
pipeline {
    agent { label 'agent-007' }
    stages {
        stage("Unit Tests") {
            agent { docker 'image-name' }
            steps {
                sh 'pwd'
                sh 'hostname'
            }
        }
    }
}
The documentation on Specifying a Docker Label says that in the configuration for your Jenkins job you can specify which agent you want docker to run on. In your case, you could set "Docker Label" to "agent-007" in your job configuration. You can also specify which docker registry to pull from, which is really handy if you use Artifactory, for instance. In a declarative pipeline the same thing can be expressed inline, as in the sketch below.
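A minimal sketch, assuming agent-007 is the label of the node that should host the container:
agent {
    docker {
        image 'image-name'   // the container the steps run in
        label 'agent-007'    // pin the container to this agent
    }
}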

Using a docker command in a Jenkinsfile gives me inconsistent results (works sometimes, not found other times)

pipeline {
    agent any
    stages {
        stage('BuildImage') {
            steps {
                withCredentials([string(credentialsId: 'docker_pw', variable: 'DOCKER_PW')]) {
                    sh '''
                        docker login -u ... -p ${DOCKER_PW} <dockerhub>
                        docker -v
                    '''
                }
            }
        }
        ...
I am building a Jenkins pipeline using a Jenkinsfile. I am trying to build a docker image in the Jenkinsfile and push it to Docker Hub.
This works sometimes, but sometimes it fails with the message line 2: docker: command not found.
This doesn't make sense to me, because it works some of the time.
Do I have to use a different agent or something?
This may be because the job sometimes runs on agents where docker is not installed. The best solution is to use labels: add a label to the agents where docker is installed, so it is clear what those agents can be used for. You can then specify agent { label 'docker' } in the pipeline, as sketched below.
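A minimal sketch, assuming you have tagged every Docker-capable node with the label docker:
pipeline {
    // only schedule this build on agents carrying the 'docker' label
    agent { label 'docker' }
    stages {
        stage('BuildImage') {
            steps {
                sh 'docker -v'   // docker is guaranteed to be present here
            }
        }
    }
}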

Jenkins: Connect to a Docker container from a stage that is run with an agent (another Docker container)

I am in the process of reworking a pipeline to use the Declarative Pipeline approach so that I can use Docker images for each stage.
At the moment I have the following working code, which performs integration tests connecting to a DB that runs in a Docker container.
node {
    // checkout, build, test stages...
    stage('Integration Tests') {
        docker.image('mongo:3.4').withRun('-p 27017:27017') { c ->
            sh "./gradlew integrationTest"
        }
    }
}
Now, with Declarative Pipelines, the same code would look something like this:
pipeline {
    agent none
    stages {
        // checkout, build, test stages...
        stage('Integration Test') {
            agent { docker { image 'openjdk:11.0.4-jdk-stretch' } }
            steps {
                script {
                    docker.image('mongo:3.4').withRun('-p 27017:27017') { c ->
                        sh "./gradlew integrationTest"
                    }
                }
            }
        }
    }
}
Problem: the stage now runs inside a Docker container, and calling docker.image() leads to a docker: not found error (it looks for docker inside the openjdk image, which is what the stage now runs in).
Question: How to start a DB container and connect to it from a stage in Declarative Pipelines?
What you are essentially trying to use is DinD (Docker in Docker).
You are using a Jenkins agent that is created from a Docker image via agent { docker { image 'openjdk:11.0.4-jdk-stretch' } }.
Once that container is running, you try to execute a docker command inside it. The docker: not found error is correct, because no docker CLI is installed in that image. You need to update the Dockerfile / create a custom image based on openjdk:11.0.4-jdk-stretch with the docker client installed.
Once the client is installed, you need to volume-mount /var/run/docker.sock so that the client talks to the host's docker daemon via the socket.
The user should be root or a privileged user to avoid permission-denied issues. A sketch of such an image follows.
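A minimal sketch of such a custom image, assuming the static docker binaries from download.docker.com are acceptable (the version is just an example):
FROM openjdk:11.0.4-jdk-stretch
# install only the docker client; the daemon stays on the host and is reached
# through the mounted /var/run/docker.sock
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-19.03.2.tgz \
    | tar -xzf - -C /usr/local/bin --strip-components=1 docker/docker
It would then be used with something like args '-v /var/run/docker.sock:/var/run/docker.sock -u root' in the agent block.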
So if I understand this correctly, your tests need two things:
Java environment
DB connection
In that case, have you tried a different approach, like Docker in Docker (DinD)?
You can have a custom image that uses docker:dind as a base image and contains your Java environment, use it in the agent section, and then the rest of the pipeline steps will be able to use the docker command as you expected.
In your example, you are trying to run a container inside openjdk:11.0.4-jdk-stretch. If this image has no docker client installed, you will not be able to execute docker; and even if it did, that would be running Docker inside Docker, which you should not do.
So it depends on what you want.
Using multiple containers:
In this case you can combine multiple docker images, but they do not depend on each other:
pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Front-end') {
            agent {
                docker { image 'node:7-alpine' }
            }
            steps {
                sh 'node --version'
            }
        }
    }
}
Running "sidecar" containers:
This example show you to use two containers simultaneously, which will be able to interacts each others:
node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
Please refer to the official documentation: https://jenkins.io/doc/book/pipeline/docker/
I hope this helps.
I have had a similar problem, where I wanted to be able to use an off-the-shelf Maven Docker image to run my builds in, while also being able to build a Docker image containing the application.
I accomplished this by starting the Maven container in which the build runs with access to the host's Docker endpoint.
Partial example:
docker run -v /var/run/docker.sock:/var/run/docker.sock maven:3.6.1-jdk-11
Then, inside the build-container, I download the Docker binaries and set the Docker host:
export DOCKER_HOST=unix:///var/run/docker.sock
wget -nv https://download.docker.com/linux/static/stable/x86_64/docker-19.03.2.tgz
tar -xvzf docker-*.tgz
cp docker/docker /usr/local/bin
Now I can run the docker command inside my build container.
As a (for me positive) side effect, any Docker image built inside a container in one step of the build is available to subsequent steps, even when those also run in containers, because the images are retained on the host. A declarative sketch of the whole pattern follows.
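A minimal declarative sketch tying the pieces above together; myapp:latest is just a hypothetical tag:
pipeline {
    agent {
        docker {
            image 'maven:3.6.1-jdk-11'
            // expose the host's Docker daemon inside the build container
            args '-v /var/run/docker.sock:/var/run/docker.sock -u root'
        }
    }
    stages {
        stage('Build image') {
            steps {
                // fetch a docker client, then build against the host daemon
                sh '''
                    export DOCKER_HOST=unix:///var/run/docker.sock
                    wget -nv https://download.docker.com/linux/static/stable/x86_64/docker-19.03.2.tgz
                    tar -xzf docker-*.tgz
                    cp docker/docker /usr/local/bin
                    docker build -t myapp:latest .
                '''
            }
        }
    }
}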

Jenkins with make and docker

I have been playing around with Jenkins, and I'm now able to connect GitHub and set up triggers. I want to build my code using make and docker; however, when I execute make or docker in the shell, they are not found. How do I configure Jenkins' build step to run make and docker?
I would install make and the docker daemon on your Jenkins server. This will allow you to build and push docker images from within your Jenkins build pipelines using the Execute Shell build step. You will also be able to run make commands there.
docker build -t <USER>/<REPO_NAME>:<TAG> .
docker push <USER>/<REPO_NAME>:<TAG>
There are also Jenkins plugins available for building your docker images.
I would NOT recommend running Jenkins in a Docker container and then running Docker inside that container. This is known as Docker in Docker (aka DinD) and should be avoided for the reasons stated in this article.
You can install Docker on the same machine where your Jenkins is running, or you can run a docker container that contains both Jenkins and docker.
If your purpose is to learn Jenkins, I suggest running Jenkins in Docker with the Docker daemon on your host machine.
Just have Docker installed on your host machine, then issue this command:
docker run --rm -u root -p 8080:8080 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --name myjenkinsserver jenkinsci/blueocean
Then you are ready to go. Add a pipeline job as follows:
pipeline {
    agent { docker 'gcc:latest' }
    stages {
        stage('build') {
            steps {
                sh 'make --version'
            }
        }
    }
}
Now you can run make commands.
In general, it is better to run Jenkins jobs on dedicated machines, in other terms, Jenkins agents. You can create custom Jenkins agent images that include the necessary tools, in your case make, as in the sketch below.
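A minimal sketch of such a custom agent image, assuming the Debian-based jenkins/inbound-agent base (the package names are examples):
FROM jenkins/inbound-agent
USER root
# add the build tools the jobs need; here make and the docker CLI
RUN apt-get update && \
    apt-get install -y --no-install-recommends make docker.io && \
    rm -rf /var/lib/apt/lists/*
USER jenkins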
