Creating a Docker image and running it as a service in Jenkins

I have a JSP website. I am building a DevOps pipeline and am looking for help integrating Jenkins with Docker.
I already have a Dockerfile that deploys the WAR file to the Tomcat server.
(Command1)
From the command line I can build an image from the Dockerfile.
I can run the created image as a service and browse the website.
(Command2)
I want to do these two steps in Jenkins. I need help integrating these two commands into Jenkins so that I don't have to run them manually one after the other.

I think that you can use the "Docker Pipeline Plugin" for that.
For the first command, you can have a stage that runs:
myImage = docker.build("my-image:my-tag")
If you need to, you can have another stage where you run some tests inside the image:
myImage.inside {
    sh './run-test.sh'
}
Finally, you can push the image to your registry with:
docker.withRegistry('https://your-registry.com', 'credentials_id') { // use the second parameter if your registry requires authentication
    myImage.push('new_tag') // you can push it with a new tag
}
Please note that if you want to use the docker.* methods in a declarative pipeline, you must do it inside a script step or in a function.
(More info in the plugin's user guide)
For the second command, you only have to update the running image on the server. There are many options for doing that (docker service update if you're using Docker Swarm, for example), and I think that part is outside the scope of this post.
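For reference, here is a minimal sketch of a declarative Jenkinsfile tying the build and push stages together; the image name, registry URL, and credentials ID are placeholders to replace with your own:

pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                script {
                    // docker.* methods must be called inside a script step in declarative pipelines
                    myImage = docker.build("my-image:${env.BUILD_ID}")
                }
            }
        }
        stage('Push image') {
            steps {
                script {
                    // placeholder registry URL and credentials ID
                    docker.withRegistry('https://your-registry.com', 'credentials_id') {
                        myImage.push('my-tag')
                    }
                }
            }
        }
    }
}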

Docker Save through Jenkins Pipeline (+ other commands)

I see that there is some documentation on using Docker through Jenkins Pipelines here:
https://www.jenkins.io/doc/book/pipeline/docker/
They have an example for building a Docker image:
node {
    checkout scm
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.inside {
        sh 'make test'
    }
}
But I was unable to find a complete list (with examples) of the supported Docker commands.
Here are some places I've looked:
https://plugins.jenkins.io/docker-workflow/
https://github.com/jenkinsci/docker-workflow-plugin/
https://docs.cloudbees.com/docs/admin-resources/latest/plugins/docker-workflow
What I was looking to do is docker save. Does anyone know if something like this is supported, or where it might be documented:
// Tar ball or filename+path
def imageTar = docker.save("${ImageFileName}.tar", "${ProjectImage}:${ProjectRelease}")
The commands under the docker keyword are made available by the Docker Pipeline plugin, which is usually installed by default with Jenkins. The full documentation of the plugin is available here.
In addition, because this plugin adds its methods as global variables (which are available in Pipeline directly, not as steps), you can see the available options, based on the version of the plugin you have installed, in the Global Variable Reference documentation within your Jenkins instance. There are two ways to reach it:
Navigate to: [JENKINS_URL]/pipeline-syntax/globals
Go to one of your pipeline jobs, on the left menu click on the Pipeline Syntax link, then on the left menu select Global Variable Reference
Search for the docker section and you will see all available options.
Back to your original question: it seems that this plugin currently does not support the save command. It only supports tag, push, pull and run (of all kinds).
If you find it useful, you can open a feature request on the plugin's Report an issue (Jira) page asking for this capability to be added.
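In the meantime, one possible workaround (not part of the plugin API, just the plain Docker CLI invoked from a sh step) would be something like the following, reusing the placeholder variables from the question:

node {
    // docker save -o writes the image to a tar archive on the agent
    sh "docker save -o ${ImageFileName}.tar ${ProjectImage}:${ProjectRelease}"
    // optionally keep the archive with the build
    archiveArtifacts artifacts: "${ImageFileName}.tar"
}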

Mount repo into docker image when running yaml-pipeline in Azure DevOps

I am running a Docker image in an Azure DevOps YAML pipeline using a container step. However, I have problems mounting the content of the repo so that it is accessible from inside the Docker image.
The Azure Devops pipeline.yml file is as follows:
container:
  image: 'image-name'
  endpoint: 'foo'
  options: '-v $(Build.SourcesDirectory):/testing'

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script inside docker image'
This fails with the error message:
Error response from daemon: create $(Build.SourcesDirectory): "$(Build.SourcesDirectory)" includes
invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended
to pass a host directory, use absolute path
I also tried replacing $(..) with $[..] (see here), but this results in the same error. With ${{..}} the pipeline will not even start (error: "A template expression is not allowed in this context" in the UI).
If I remove options, the script runs, but the repo is not mounted.
For non-YAML pipelines, the question was addressed here.
Any ideas how to accomplish this? Or do I need to create a new Docker image to which the repo files have been added?
When you specify the container using the YAML schema directly, Azure DevOps automatically runs an extra Initialize containers task before the checkout-source task and your real tasks:
container:
  image: 'image-name'

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script inside docker image'
During the Initialize containers task, the predefined variables like Agent.BuildDirectory, Build.Repository.LocalPath and Build.SourcesDirectory are not yet expanded (they are undefined at that point).
So you can't use Build.SourcesDirectory in this way, because the value of this variable is only expanded after the Initialize containers task.
1. About why the link you shared above works: it's in a docker task/step, so it can resolve the $(Build.SourcesDirectory) variable. (The real build tasks run after the build variables are defined.)
2. If you're using a specific Microsoft-hosted agent, you can try hard-coding the path, as shown below. You can check this similar issue.
Usually for a Windows-hosted agent: $(Build.SourcesDirectory) => D:\a\1\s
For an Ubuntu-hosted agent: $(Build.SourcesDirectory) => /home/vsts/work/1/s
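For example, a hard-coded mount for an Ubuntu-hosted agent might look like the following; note that /home/vsts/work/1/s is only the conventional location noted above, not a guarantee:

container:
  image: 'image-name'
  endpoint: 'foo'
  # Hard-coded host path for an Ubuntu-hosted agent, since
  # $(Build.SourcesDirectory) is not expanded when the container is created.
  options: '-v /home/vsts/work/1/s:/testing'

steps:
- script: ls /testing
  displayName: 'Verify the repo is visible inside the container'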

How to use multiple docker repositories in a Jenkins pipeline

I have a Jenkins pipeline in which I need to log into two different Docker registries. I know how to authenticate to one registry using the following command:
docker.withRegistry('https://registry.example.com', 'credentials-id')
but I don't know how to do it for more than one registry.
Nesting docker.withRegistry calls actually works as expected. Each call adds an entry to /home/jenkins/.dockercfg with the provided credentials.
// An empty registry ('') means the default Docker Hub, `https://index.docker.io/v1/`
docker.withRegistry('', 'dockerhub-credentials-id') {
    docker.withRegistry('https://private-registry.example.com', 'private-credentials-id') {
        // your build steps ...
    }
}
This allows you to pull base images from Docker Hub using the provided credentials (to avoid the recently introduced pull limits) and push the results into another Docker registry.
This is a partial answer, only applicable when you are using two registries but only need credentials for one. You can nest the calls, since they mostly just perform a docker login that stays active for the scope of the closure and prepend the registry domain name to docker pushes and similar operations.
Use this in a scripted Jenkins pipeline or in a script { } section of a declarative Jenkins pipeline:
docker.withRegistry('https://registry.example1.com') { // no credentials here
    docker.withRegistry('https://registry.example2.com', 'credentials-id') { // credentials needed
        def container = docker.build('myImage')
        container.inside() {
            sh "ls -al" // example command to run in the newly built image
        }
    }
}
Sometimes you can use two non-nested docker.withRegistry() calls one after the other (sketched below), but building is an example of when you can't: for instance, when the base image for the first FROM in the Dockerfile comes from one registry and the base image for a second FROM is in another.
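For cases where sequential calls do suffice, a rough sketch with placeholder registry names and credentials IDs:

// Pull a base image from the first registry...
docker.withRegistry('https://registry.example1.com', 'credentials-id-1') {
    docker.image('base-image:latest').pull()
}
// ...then push the built image to the second registry.
docker.withRegistry('https://registry.example2.com', 'credentials-id-2') {
    docker.build('my-image:latest').push()
}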

How to get job variables injected into the docker execution?

I wonder if this is already part of the system...
I need the current GitLab user id and email ($GITLAB_USER_ID, $GITLAB_USER_EMAIL) injected into the execution of the Docker image (to later configure the git repository).
Is there a magic way to do this, or should I explicitly write the export commands into my .gitlab-ci.yml file (as a before_script, for example)?
Thanks.
I got my answer by trying the env command in a build.
So yes, all job variables are available in the Docker execution environment.
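To illustrate, a minimal .gitlab-ci.yml job (the image name and job name are placeholders) that reads those variables inside the container, with no extra export needed:

configure-git:
  image: alpine/git   # placeholder image that ships with git
  script:
    # predefined CI/CD variables are already in the container environment
    - env | grep GITLAB_USER
    # e.g. use them to configure the repository for later commits
    - git config --global user.email "$GITLAB_USER_EMAIL"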

Jenkins Pipeline push Docker image

My Jenkins job is a Pipeline that runs in Docker:
node('docker') {
    // Git checkout
    git url: 'ssh://blah.blah:29411/test.git'
    // Build
    sh 'make'
    // Verify/Run
    sh './runme'
}
I'm working on the kernel, and getting my sources from Git takes a lot of time (they're about 2 GB). I'm looking into how I can push the Docker image so it can be used for the next build and will already contain most of the sources. I probably need to do:
docker push blahdockergit.blah/myjenkinsslaveimage
but it should run outside of the container.
I found in the pipeline syntax documentation that the following class can be used for building external jobs.
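Alternatively, if the goal is just to push the agent image from a pipeline, the docker.* methods shown earlier on this page could do it; a sketch, where the registry URL and credentials ID are placeholders:

node('docker') {
    git url: 'ssh://blah.blah:29411/test.git'
    // Bake the checked-out sources into an image so the next
    // build starts from a warm copy instead of a full clone.
    def slaveImage = docker.build('blahdockergit.blah/myjenkinsslaveimage')
    // placeholder credentials ID; the push must run where a Docker daemon is available
    docker.withRegistry('https://blahdockergit.blah', 'registry-credentials-id') {
        slaveImage.push('latest')
    }
}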
