Make a deployment re-download an image with Jenkins

I wrote a pipeline for a Hello World web app, nothing big, it's a simple hello world page.
I made it so that if the tests pass, it'll deploy it to a remote Kubernetes cluster.
My problem is that if I change the HTML page and try to redeploy into k8s, the page remains the same (the pods aren't re-rolled and the image is outdated).
I have the imagePullPolicy set to Always. I thought of using specific tags within the deployment YAML, but I have no idea how to integrate that with my Jenkins (as in, how do I make Jenkins set the BUILD_NUMBER as the tag for the image in the deployment).
Here is my pipeline:
pipeline {
    agent any
    environment {
        user = "NAME"
        repo = "prework"
        imagename = "${user}/${repo}"
        registryCreds = 'dockerhub'
        containername = "${repo}-test"
    }
    stages {
        stage ("Build") {
            steps {
                // Building artifact
                sh '''
                docker build -t ${imagename} .
                docker run -p 80 --name ${containername} -dt ${imagename}
                '''
            }
        }
        stage ("Test") {
            steps {
                sh '''
                IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' ${containername})
                STATUS=$(curl -sL -w "%{http_code} \n" $IP:80 -o /dev/null)
                if [ $STATUS -ne 200 ]; then
                    echo "Site is not up, test failed"
                    exit 1
                fi
                echo "Site is up, test succeeded"
                '''
            }
        }
        stage ("Store Artifact") {
            steps {
                echo "Storing artifact: ${imagename}:${BUILD_NUMBER}"
                script {
                    docker.withRegistry('https://registry.hub.docker.com', 'dockerhub') {
                        def customImage = docker.image(imagename)
                        customImage.push(BUILD_NUMBER)
                        customImage.push("latest")
                    }
                }
            }
        }
        stage ("Deploy to Kubernetes") {
            steps {
                echo "Deploy to k8s"
                script {
                    kubernetesDeploy(configs: "deployment.yaml", kubeconfigId: "kubeconfig")
                }
            }
        }
    }
    post {
        always {
            echo "Pipeline has ended, deleting image and containers"
            sh '''
            docker stop ${containername}
            docker rm ${containername} -f
            '''
        }
    }
}
EDIT:
I used sed to replace the latest tag with the build number every time I run the pipeline, and it works. I'm wondering if any of you have other ideas because it seems so messy right now.
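For illustration, that sed step might look roughly like this inside the pipeline (a sketch; the exact command isn't shown in the question, and it assumes deployment.yaml references the image as ${imagename} plus a tag):
// hypothetical sketch of the sed workaround: rewrite the image tag before kubernetesDeploy runs
sh "sed -i 's|${imagename}:.*|${imagename}:${BUILD_NUMBER}|' deployment.yaml"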
Thanks.

According to the Kubernetes Continuous Deploy Plugin documentation (point 6), you can add enableConfigSubstitution: true to the kubernetesDeploy() call and use ${BUILD_NUMBER} instead of latest in deployment.yaml:
By checking "Enable Variable Substitution in Config", the variables (in the form of $VARIABLE or ${VARIABLE}) in the configuration files will be replaced with the values from corresponding environment variables before they are fed to the Kubernetes management API. This allows you to dynamically update the configurations according to each Jenkins task, for example, using the Jenkins build number as the image tag to be pulled.
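A minimal sketch of the resulting Deploy stage (assuming the image line in deployment.yaml is changed to something like image: NAME/prework:${BUILD_NUMBER}):
stage ("Deploy to Kubernetes") {
    steps {
        echo "Deploy to k8s"
        script {
            // enableConfigSubstitution makes the plugin replace ${BUILD_NUMBER} in
            // deployment.yaml with this build's number before applying it
            kubernetesDeploy(
                configs: "deployment.yaml",
                kubeconfigId: "kubeconfig",
                enableConfigSubstitution: true
            )
        }
    }
}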

Related

Using a docker command as a variable in a Jenkins sh script?

I am working on a CI/CD pipeline where I have a DEV, TEST and PROD server. With a Jenkins pipeline I deploy my newest image onto my DEV server. Now I want to take the image from my DEV server by reading out its sha256 ID and put it on my TEST server.
I have a Jenkins Pipeline for that:
pipeline {
    agent any
    tools {
        dockerTool 'docker-19.03.9'
    }
    environment {
    }
    stages {
        stage('DeployToTEST-Angular') {
            steps {
                script {
                    echo 'Deploying image...'
                    sh 'docker stop mycontainertest'
                    sh 'docker rm mycontainertest'
                    sh 'docker run -d --name mycontainertest [cannot show the environment variables I pass] --restart always angular:latest'
                }
            }
        }
    }
}
As you can see, I currently use the :latest tag, but I want something like this:
pipeline {
    agent any
    tools {
        dockerTool 'docker-19.03.9'
    }
    environment {
    }
    stages {
        stage('DeployToTEST-Angular') {
            steps {
                script {
                    echo 'Deploying image...'
                    sh 'docker stop mycontainertest'
                    sh 'docker rm mycontainertest'
                    sh 'docker run -d --name mycontainertest [cannot show the environment variables I pass] --restart always \$imageofDev'
                }
            }
        }
    }
}
$imageofDev = docker inspect mycontainerdev | grep -o 'sha256:[^"]*' // this command works and gives me back the raw sha256 ID
So that it uses the actual sha256 of my dev image.
I don't know how I can define this variable and later use its value in this Jenkins pipeline. How can I do this?
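(Illustrative sketch, not part of the original question or answer: the literal "capture a shell value into a pipeline variable" part is usually done with returnStdout, roughly like this, assuming the dev container is named mycontainerdev and its image exists on the same Docker host:)
script {
    // sketch: read the image ID (sha256:...) of the running dev container into a Groovy variable
    def imageOfDev = sh(
        script: "docker inspect --format '{{.Image}}' mycontainerdev",
        returnStdout: true
    ).trim()
    sh 'docker stop mycontainertest || true'
    sh 'docker rm mycontainertest || true'
    sh "docker run -d --name mycontainertest --restart always ${imageOfDev}"
}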
When you build the image, choose an image name and tag that you can remember later. Jenkins provides several environment variables that you can use to construct this, such as BUILD_NUMBER; in a multibranch pipeline you also have access to BRANCH_NAME and CHANGE_ID; or you can directly run git in your pipeline code.
def shortCommitID = sh script: 'git rev-parse --short HEAD', returnStdout: true
def dockerImage = "project:${shortCommitID.trim()}"
def registry = 'registry.example.com'
def fullDockerImage = "${registry}/${dockerImage}"
Now that you know the Docker image name you're going to use everywhere, you can just use it; you never need to go off and look up the image ID. Using the scripted pipeline Docker integration, for example:
docker.withRegistry("https://${registry}") {
    def image = docker.build(dockerImage)
    image.push()
}
Since you know the registry/image:tag name, you can just use it in other Jenkins directives too
def containerName = 'container'
// stop an old container, if any
sh "docker stop ${containerName} || true"
sh "docker rm ${containerName} || true"
// start a new container that outlives this pipeline
sh "docker run -d --name ${containerName} ${fullDockerImage}"

Jenkins start same docker container with different compose files

I'm new to Jenkins, and I have a project, but I need a few instances of it with different configurations, meaning different docker-compose files (due to different mounts / ports), while the rest of the project is the same.
I could not find any information about an issue like this.
If it helps, here is my Jenkinsfile:
pipeline {
    agent any
    environment {
        PATH = "$PATH:/usr/local/bin"
    }
    stages {
        stage("build docker image") {
            steps {
                sh """
                docker build . -t application:development --pull=false
                """
            }
        }
        stage("run compose") {
            steps {
                sh """
                docker-compose up -d
                """
            }
        }
    }
}
Yes! This is possible.
You need to create 2 docker-compose files with different configurations.
Ex:
docker-compose-a.yml
docker-compose-b.yml
Then:
pipeline {
    agent any
    environment {
        PATH = "$PATH:/usr/local/bin"
    }
    stages {
        stage("build docker image") {
            steps {
                sh """
                docker build . -t application:development --pull=false
                """
            }
        }
        stage("run compose") {
            steps {
                sh """
                docker-compose -f docker-compose-a.yml up -d
                docker-compose -f docker-compose-b.yml up -d
                """
            }
        }
    }
}
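One caveat not covered above: if both compose files declare the same service names and are run from the same directory, the second docker-compose up will try to recreate the first stack's containers, because both default to the same project name. Giving each stack its own project name with -p keeps them apart; a rough sketch (app-a and app-b are arbitrary project names chosen for this example):
sh """
docker-compose -p app-a -f docker-compose-a.yml up -d
docker-compose -p app-b -f docker-compose-b.yml up -d
"""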

Copy build artifacts from inside docker to host

This is my Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo '####################################################'
                echo 'Building Docker container'
                echo '####################################################'
                script {
                    sh 'docker build -t my-gcc:1.0 .'
                }
            }
        }
        stage('Run') {
            steps {
                echo '##########################################################'
                echo 'Running Docker Image'
                echo '##########################################################'
                script {
                    sh 'docker run --privileged -i my-gcc:1.0'
                    sh 'docker cp my-gcc:1.0:/usr/src/myCppProject/build/*.hex .'
                }
            }
        }
        stage('Program') {
            steps {
                echo '#######################################################'
                echo 'Programming target '
                echo '#######################################################'
                script {
                    sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
                }
            }
        }
    }
}
The docker image is built and then run; after this I would like to extract the hex file from the container to the Jenkins working directory so that I can flash it to the board.
But when I try to copy the file I get this error:
+ docker cp my-gcc:1.0:1.0:/usr/src/myCppProject/build/*.hex .
Error: No such container:path: my-gcc:1.0:1.0:/usr/src/myCppProject/build/*.hex
I tried to access other folders in the container and copy the contents, but I always get the same error. It seems that I cannot access any folder or file in the container.
What am I doing wrong?
Regards
Martin
Jenkins has some standard support for Docker; this is described in Using Docker with Pipeline in the Jenkins documentation. In particular, Jenkins knows how to use a Docker image that contains just tools, combined with the project's workspace directory. I'd use that support instead of trying to script docker cp.
That might look roughly like so:
pipeline {
    agent none
    stages {
        stage('Build') {
            // Jenkins will run `docker build` for you
            agent { dockerfile { args '--privileged' } }
            steps {
                // The current working directory is bind-mounted into the container;
                // the image's `ENTRYPOINT`/`CMD` is ignored.
                // Copy the file out of the container:
                sh "cp /usr/src/myCppProject/build/*.hex ."
            }
        }
        stage('Program') {
            agent any // so not in Docker
            steps {
                sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
            }
        }
    }
}
If you use this approach, also consider whether you should run the main build sequence via Jenkins pipeline steps, or a sh invocation that runs a shell script, or a Makefile, or if a Dockerfile is actually right. It might make sense to build a Docker image out of your customized compiler, but then use the Jenkins pipeline support to build the image for the target board rather than trying to do it all in a Dockerfile.
In the invocation you show, you can't directly docker cp a file out of an image. When you start the container, use docker run --name to give it a name, then docker cp from that container name.
sh 'docker run --name builder ... my-gcc:1.0'
// docker cp does not expand shell globs inside the container path,
// so copy the build directory's contents instead of *.hex
sh 'docker cp builder:/usr/src/myCppProject/build/. .'
sh 'docker rm builder'

How to run a "sidecar" container in Jenkins Blue Ocean?

I am fairly new to Jenkins and CI/CD in general, but believe that I have searched long enough to conclude things are not as I expect.
I want to do some frontend tests on my website and just as in real life I want to test with the site in one Docker container and the database in another container. Jenkins has this documented as "sidecar" containers which can be part of a pipeline.
Their example:
node {
    checkout scm
    /*
     * In order to communicate with the MySQL server, this Pipeline explicitly
     * maps the port (`3306`) to a known port on the host machine.
     */
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw" -p 3306:3306') { c ->
        /* Wait until mysql service is up */
        sh 'while ! mysqladmin ping -h0.0.0.0 --silent; do sleep 1; done'
        /* Run some tests which require MySQL */
        sh 'make check'
    }
}
The thing is that I do not have a 'traditional' Jenkins pipeline, but I am running Jenkins Blue Ocean instead. This gives me a fancy pipeline editor, but also my pipeline code (Jenkinsfile) looks really different from the example:
pipeline {
    agent {
        docker {
            image 'php'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'composer --version'
                sh 'composer install'
            }
        }
        stage('Tests') {
            steps {
                echo 'Do test'
            }
        }
    }
}
So how would I be spawning (and tearing down) these "sidecar" containers in a Blue Ocean pipeline? Currently the Pipeline editor has no available options if I want to add a step related to Docker. Can I still use docker.image? I do have the Docker Pipeline plugin installed.
The example provided by Jenkins in the link is actually a fully functional pipeline, with one exception. You need to comment out checkout scm if you provide the pipeline script directly in Jenkins.
node {
    // checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
What may be confusing to you is that the code style in the example above is very different from the one generated by the Blue Ocean pipeline editor. That is because the example is written as a Scripted Pipeline, while Blue Ocean has generated a Declarative Pipeline. Both are fully supported in Jenkins and both use the same engine underneath, but the syntax differences may lead to confusion at first.
You can use the Scripted Pipeline example above just fine, but if you want to keep the Declarative Pipeline, you can run the scripted part inside the script step. In both cases you need to change the docker images and executed commands according to your needs.
pipeline {
    agent any
    stages {
        stage('Build and test') {
            steps {
                script {
                    node {
                        docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
                            docker.image('mysql:5').inside("--link ${c.id}:db") {
                                /* Wait until mysql service is up */
                                sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
                            }
                            docker.image('centos:7').inside("--link ${c.id}:db") {
                                /*
                                 * Run some tests which require MySQL, and assume that it is
                                 * available on the host name `db`
                                 */
                                sh 'make check'
                            }
                        }
                    }
                }
            }
        }
    }
}
Please note:
The Docker container link feature used in this example is a legacy feature and may eventually be removed.
The pipeline will fail at make check, as make is not provided in the centos:7 image.
More than half a year later I finally figured out it was much simpler than I thought. It can be done with docker-compose.
You need to make sure that your Jenkins has access to docker-compose. So if you are running Jenkins as a Docker container, ensure it has access to the Docker socket. Also, Jenkins is not likely to ship with docker-compose included (JENKINS-51898), so you will have to build your own Blue Ocean image to install docker-compose.
Rather than copying the file below, check https://docs.docker.com/compose/install/ for the latest version!
# Dockerfile
FROM jenkinsci/blueocean
USER root
RUN curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose && \
chmod +x /usr/local/bin/docker-compose
USER jenkins
Once you have Jenkins and Docker up and running you can deploy a test version of your application with a regular docker-compose file, including all database and other containers you might need. You can install dependencies and start the tests by using docker-compose exec to execute commands inside containers started with docker-compose.
Note that docker-compose -f docker-compose.test.yml exec -T php composer install executes the composer install command in the container that was defined as the php service in the docker-compose file.
In the end, no matter the outcome of the tests, all containers and associated volumes (-v flag) are shut down and removed.
// Jenkinsfile
pipeline {
    agent any
    stages {
        stage('Start') {
            steps {
                sh 'docker-compose -f docker-compose.test.yml up -d'
            }
        }
        stage('Composer install') {
            steps {
                sh 'docker-compose -f docker-compose.test.yml exec -T php composer install --no-interaction --no-progress --optimize-autoloader'
            }
        }
        stage('Test') {
            steps {
                sh 'docker-compose -f docker-compose.test.yml exec -T php <run test script>'
            }
        }
    }
    post {
        always {
            sh 'docker-compose -f docker-compose.test.yml down -v'
        }
    }
}

Best solution to deploy (copy) the latest version to the server using Jenkins Pipeline

Here is my Jenkins Pipeline:
pipeline {
    agent {
        docker {
            image 'node:6-alpine'
            args '-p 3000:3000'
        }
    }
    environment {
        CI = 'true'
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
                sh 'npm build'
            }
        }
        stage('Deliver') {
            steps {
                sh './jenkins/scripts/deliver.sh'
                input message: 'Finished using the web site? (Click "Proceed" to continue)'
                sh './jenkins/scripts/kill.sh'
            }
        }
        stage('Deploy') {
            steps {
                sh './jenkins/scripts/deploy.sh'
            }
        }
    }
}
I use Docker and the jenkinsci/blueocean image to run Jenkins. The first two stages are kind of standard to build a NodeJS app; the third one, however, is the part where I want Jenkins to copy new files to the server. Here is the deploy.sh file:
#!/usr/bin/env sh
set -x
scp -o StrictHostKeyChecking=no -r dist/* deviceappstore:/var/www/my_website/static/
There are two problems: first, jenkinsci/blueocean does not have scp (not set up), and second, ~/.ssh/config does not exist inside the Jenkins docker image, so scp will fail to authenticate. My solution was to build a custom image extending jenkinsci/blueocean, set up scp, and copy the config file and SSH key into it.
There are some plugins like Publish Over SSH, but it seems they're not useful for Pipeline projects.
Is there any better solution? Is the whole scenario right or am I doing something wrong? I'm looking for the most secure and standard solution to this problem.
OK, I think I found a good solution.
Thanks to the SSH Agent plugin I can easily pass the credentials to the scp command and copy the files to the server. Something like this:
...
stage('Deploy') {
    steps {
        sshagent(['my SSH']) {
            echo 'this works...'
            sh 'scp -o StrictHostKeyChecking=no -r dist/* my_server:/var/www/my_site/static/'
        }
    }
}
...
This is perfect because all the credentials are inside the Jenkins server and there's nothing about them in the repo.
And to be able to use this, there's just one more thing: you need to use apk inside the jenkinsci/blueocean (Alpine) image and install openssh:
apk add openssh
Or, better, create a new Dockerfile and build your own image.
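A minimal sketch of such a Dockerfile, mirroring the docker-compose one earlier on this page (an illustration, not from the original answer):
# Dockerfile
FROM jenkinsci/blueocean
USER root
# install the openssh package mentioned above so ssh/scp are available to pipeline steps
RUN apk add --no-cache openssh
USER jenkins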
