Using a docker command as a variable in a Jenkins sh script? - docker

I am working on a CI/CD pipeline where I have a DEV, TEST and PROD server. With a Jenkins pipeline I deploy my newest image onto my DEV server. Now I want to take the image from my DEV server by reading out its sha256 ID and deploy it to my TEST server.
I have a Jenkins Pipeline for that:
pipeline {
    agent any
    tools {
        dockerTool 'docker-19.03.9'
    }
    environment {
    }
    stages {
        stage('DeployToTEST-Angular') {
            steps {
                script {
                    echo 'Deploying image...'
                    sh 'docker stop mycontainertest'
                    sh 'docker rm mycontainertest'
                    sh 'docker run -d --name mycontainertest [environment variables omitted] --restart always angular:latest'
                }
            }
        }
    }
}
As you can see, I currently use the :latest tag, but I want something like this:
pipeline {
    agent any
    tools {
        dockerTool 'docker-19.03.9'
    }
    environment {
    }
    stages {
        stage('DeployToTEST-Angular') {
            steps {
                script {
                    echo 'Deploying image...'
                    sh 'docker stop mycontainertest'
                    sh 'docker rm mycontainertest'
                    sh 'docker run -d --name mycontainertest [environment variables omitted] --restart always $imageofDev'
                }
            }
        }
    }
}
$imageofDev = docker inspect mycontainerdev | grep -o 'sha256:[^"]*' // this command works and gives me back the raw sha256 hash
so that it uses the actual sha256 hash of my dev image.
I don't know how I can define this variable and later use its value in this Jenkins pipeline. How can I do this?

When you build the image, choose an image name and tag that you can remember later. Jenkins provides several environment variables that you can use to construct this, such as BUILD_NUMBER; in a multibranch pipeline you also have access to BRANCH_NAME and CHANGE_ID; or you can directly run git in your pipeline code.
def shortCommitID = sh script: 'git rev-parse --short HEAD', returnStdout: true
def dockerImage = "project:${shortCommitID.trim()}"
def registry = 'registry.example.com'
def fullDockerImage = "${registry}/${dockerImage}"
Now that you know the Docker image name you're going to use, you can use it everywhere; you never need to go off and look up the image ID. Using the scripted pipeline Docker integration, for example:
docker.withRegistry("https://${registry}") {
    def image = docker.build(dockerImage)
    image.push()
}
Since you know the registry/image:tag name, you can just use it in other Jenkins directives too:
def containerName = 'container'
// stop an old container, if any
sh "docker stop ${containerName} || true"
sh "docker rm ${containerName} || true"
// start a new container that outlives this pipeline
sh "docker run -d --name ${containerName} ${fullDockerImage}"

Related

How to build and upload docker-compose file inside jenkins pipeline

I need to build a docker image using a docker-compose.yaml file and then push it to Docker Hub.
I used the following command to build the Dockerfile and push it to Docker Hub with the Jenkins Docker plugin:
stage('Building image and Push') {
    steps {
        script {
            customImage = docker.build("my-image:${env.BUILD_ID}")
            customImage.push()
        }
    }
}
Is there a similar method for writing this kind of command to build and push a docker-compose file?
PS: I heard about a plugin called Docker Compose Build Step. Can I do the same as above with this plugin?
This does not seem supported in a scripted pipeline.
Using purely a declarative pipeline on an agent based on the docker-compose image (so with docker-compose preinstalled), I suppose you can shell script those commands directly:
stage('Build Docker Image') {
    steps {
        sh 'docker-compose build'
        echo 'Docker-compose-build Build Image Completed'
    }
}
And then log in to DockerHub and push, as described in "Simple Jenkins Declarative Pipeline to Push Docker Image To Docker Hub":
stage('Login to Docker Hub') {
    steps {
        sh 'echo $DOCKERHUB_CREDENTIALS_PSW | sudo docker login -u $DOCKERHUB_CREDENTIALS_USR --password-stdin'
        echo 'Login Completed'
    }
}
stage('Push Image to Docker Hub') {
    steps {
        sh 'sudo docker push <dockerhubusername>/<dockerhubreponame>:$BUILD_NUMBER'
        echo 'Push Image Completed'
    }
}
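
For the login stage to work, the DOCKERHUB_CREDENTIALS_USR and DOCKERHUB_CREDENTIALS_PSW variables must come from a credentials() binding in the environment block, roughly like this (the credential ID 'dockerhub-credentials' is an assumption):

environment {
    // assumed ID of a username/password credential stored in Jenkins;
    // binding it also exposes the _USR and _PSW suffixed variables used above
    DOCKERHUB_CREDENTIALS = credentials('dockerhub-credentials')
}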
This is a manual approach, since a docker-compose agent is not directly available (you have the docker, dockerfile, and kubernetes agent options, but no docker-compose option); one approximation is sketched below.
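
One possible approximation is to run the stage on an agent created from the docker/compose image, which ships with docker-compose as its entrypoint; the tag, the socket mount, and the entrypoint reset below are assumptions about your setup:

pipeline {
    agent {
        docker {
            image 'docker/compose:1.29.2'
            // mount the host Docker socket so compose can reach the daemon,
            // and clear the image's docker-compose entrypoint so Jenkins can
            // start its own shell inside the container
            args '-v /var/run/docker.sock:/var/run/docker.sock --entrypoint=""'
        }
    }
    stages {
        stage('Build Docker Image') {
            steps {
                sh 'docker-compose build'
            }
        }
    }
}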

make a deployment redownload an image with jenkins

I wrote a pipeline for a Hello World web app, nothing big, it's a simple hello world page.
I made it so that if the tests pass, it deploys to a remote Kubernetes cluster.
My problem is that if I change the HTML page and try to redeploy into k8s, the page remains the same (the pods aren't re-rolled and the image is outdated).
I have the imagePullPolicy set to Always. I thought of using specific tags within the deployment YAML, but I have no idea how to integrate that with my Jenkins (as in, how do I make Jenkins set the BUILD_NUMBER as the tag for the image in the deployment).
Here is my pipeline:
pipeline {
    agent any
    environment {
        user = "NAME"
        repo = "prework"
        imagename = "${user}/${repo}"
        registryCreds = 'dockerhub'
        containername = "${repo}-test"
    }
    stages {
        stage ("Build") {
            steps {
                // Building artifact
                sh '''
                docker build -t ${imagename} .
                docker run -p 80 --name ${containername} -dt ${imagename}
                '''
            }
        }
        stage ("Test") {
            steps {
                sh '''
                IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' ${containername})
                STATUS=$(curl -sL -w "%{http_code} \n" $IP:80 -o /dev/null)
                if [ $STATUS -ne 200 ]; then
                    echo "Site is not up, test failed"
                    exit 1
                fi
                echo "Site is up, test succeeded"
                '''
            }
        }
        stage ("Store Artifact") {
            steps {
                echo "Storing artifact: ${imagename}:${BUILD_NUMBER}"
                script {
                    docker.withRegistry('https://registry.hub.docker.com', 'dockerhub') {
                        def customImage = docker.image(imagename)
                        customImage.push(BUILD_NUMBER)
                        customImage.push("latest")
                    }
                }
            }
        }
        stage ("Deploy to Kubernetes") {
            steps {
                echo "Deploy to k8s"
                script {
                    kubernetesDeploy(configs: "deployment.yaml", kubeconfigId: "kubeconfig")
                }
            }
        }
    }
    post {
        always {
            echo "Pipeline has ended, deleting image and containers"
            sh '''
            docker stop ${containername}
            docker rm ${containername} -f
            '''
        }
    }
}
EDIT:
I used sed to replace the latest tag with the build number every time I run the pipeline, and it works. I'm wondering if any of you have other ideas, because it seems so messy right now.
Thanks.
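
For reference, the sed workaround described in the EDIT presumably looks something like this sketch (the deployment file path and the :latest placeholder are assumptions):

// rewrite the image tag in deployment.yaml in place before deploying
sh "sed -i 's|:latest|:${BUILD_NUMBER}|' deployment.yaml"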
According to the information from the Kubernetes Continuous Deploy Plugin (p. 6), you can add enableConfigSubstitution: true to the kubernetesDeploy() section and use ${BUILD_NUMBER} instead of latest in deployment.yaml:
By checking "Enable Variable Substitution in Config", the variables (in the form of $VARIABLE or ${VARIABLE}) in the configuration files will be replaced with the values from corresponding environment variables before they are fed to the Kubernetes management API. This allows you to dynamically update the configurations according to each Jenkins task, for example, using the Jenkins build number as the image tag to be pulled.
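
Applied to the pipeline above, the deploy stage might then look like the sketch below; the option name comes from the plugin's documentation, and it assumes deployment.yaml references the image as NAME/prework:${BUILD_NUMBER}:

stage ("Deploy to Kubernetes") {
    steps {
        script {
            // substitute ${BUILD_NUMBER} and other variables inside deployment.yaml
            kubernetesDeploy(
                configs: "deployment.yaml",
                kubeconfigId: "kubeconfig",
                enableConfigSubstitution: true
            )
        }
    }
}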

How to create new docker container and run it from Jenkinsfile

I've inherited this Jenkinsfile stage that will run a new docker image using withRun:
stage('Deploy') {
    steps {
        script {
            docker.image('deployscript:latest').withRun("""\
                -e 'IMAGE=${IMAGE_NAME}:${BUILD_ID}' \
                -e 'CNAME=${IMAGE_NAME}' \
                -e 'PORT=${PORT_1}:80' \
                -e 'PORT=${PORT_2}:443'""") { c ->
                sh "docker logs ${c.id}"
            }
        }
    }
}
However, I believe this method is only meant for testing purposes, and it actually stops the container once the block is finished. I want this step to actually run the container, and stop/restart the previous one if necessary. The documentation on this is surprisingly sparse. Please help.
If you want the docker container to run throughout all the stages, then the example would look like below:
Scripted Pipeline
node('master') {
    /* Requires the Docker Pipeline plugin to be installed */
    docker.image('alpine:latest').inside {
        stage('01') {
            sh 'echo STAGE01'
        }
        stage('02') {
            sh 'echo STAGE02'
        }
    }
}
Declarative Pipeline
pipeline {
    agent {
        docker {
            image 'alpine:latest'
            label 'master'
            args '-v /tmp:/tmp'
        }
    }
    stages {
        stage('01') {
            steps {
                sh "echo STAGE01"
            }
        }
        stage('02') {
            steps {
                sh "echo STAGE02"
            }
        }
    }
}
In both scripted and declarative pipelines, the docker container from the alpine image stays active until all the stages finish, and is only deleted when the build ends, whether in success or failure.
But if you want to control starting, stopping, and restarting the container yourself in different stages, you can do it with shell commands, or by writing a small Groovy script wrapping the docker commands, like below:
node {
    stage('init') {
        sh 'docker create --name myImage1 -v $(pwd):/var/jenkins -w /var/jenkins imageName:tag'
    }
    stage('build') {
        // make use of docker commands to start, stop, and execute some script inside the container
        // same goes for the other stages
        // once all done you can remove the container
        sh 'docker rm myImage1'
    }
}
The following will stop the existing container and run a new one with the new image:
stage('Deploy') {
    steps {
        sh "docker stop ${IMAGE_NAME} || true && docker rm ${IMAGE_NAME} || true"
        sh "docker run -d \
            --name ${IMAGE_NAME} \
            --publish ${PORT}:443 \
            ${IMAGE_NAME}:${BUILD_ID}"
    }
}

Copy build artifacts from inside docker to host

This is my Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo '####################################################'
                echo 'Building Docker container'
                echo '####################################################'
                script {
                    sh 'docker build -t my-gcc:1.0 .'
                }
            }
        }
        stage('Run') {
            steps {
                echo '##########################################################'
                echo 'Running Docker Image'
                echo '##########################################################'
                script {
                    sh 'docker run --privileged -i my-gcc:1.0'
                    sh 'docker cp my-gcc:1.0:/usr/src/myCppProject/build/*.hex .'
                }
            }
        }
        stage('Program') {
            steps {
                echo '#######################################################'
                echo 'Programming target'
                echo '#######################################################'
                script {
                    sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
                }
            }
        }
    }
}
The docker image is built and then run; after this I would like to extract the hex file from the container into the Jenkins working directory so that I can flash it to the board.
But when I try to copy the file I get this error:
+ docker cp my-gcc:1.0:1.0:/usr/src/myCppProject/build/*.hex .
Error: No such container:path: my-gcc:1.0:1.0:/usr/src/myCppProject/build/*.hex
I tried to access other folders in the container and copy their content, but I always get the same error. It seems that I cannot access any folder or file in the container.
What am I doing wrong?
Regards
Martin
Jenkins has some standard support for Docker; this is described in Using Docker with Pipeline in the Jenkins documentation. In particular, Jenkins knows how to use a Docker image that contains just tools, combined with the project's workspace directory. I'd use that support instead of trying to script docker cp.
That might look roughly like so:
pipeline {
    agent none
    stages {
        stage('Build') {
            // Jenkins will run `docker build` for you
            agent { dockerfile { args '--privileged' } }
            steps {
                // The current working directory is bind-mounted into the container;
                // the image's `ENTRYPOINT`/`CMD` is ignored.
                // Copy the file out of the container:
                sh "cp /usr/src/myCppProject/build/*.hex ."
            }
        }
        stage('Program') {
            agent any // so not in Docker
            steps {
                sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
            }
        }
    }
}
If you use this approach, also consider whether you should run the main build sequence via Jenkins pipeline steps, or a sh invocation that runs a shell script, or a Makefile, or if a Dockerfile is actually right. It might make sense to build a Docker image out of your customized compiler, but then use the Jenkins pipeline support to build the image for the target board rather than trying to do it all in a Dockerfile.
In the invocation you show, you can't directly docker cp a file out of an image. When you start the container, use docker run --name to give it a name, then docker cp from that container name.
sh 'docker run --name builder ... my-gcc:1.0'
// note: docker cp does not expand shell wildcards, so copy the whole build directory
sh 'docker cp builder:/usr/src/myCppProject/build/ .'
sh 'docker rm builder'

How to pull & run docker image on remote server through jenkins pipeline

I have 2 AWS Ubuntu instances: 1st-server and 2nd-server.
Below is my Jenkins pipeline script, which creates a docker image, runs a container on 1st-server, and pushes the image to my Docker Hub repo. That's working fine.
I want to pull the image and deploy it on 2nd-server.
When I ssh to the 2nd server through the pipeline script below, it logs in to the 1st server, even though the ssh credentials ('my-ssh-key') are for the 2nd server. I'm confused how it is logging in to the 1st server; I checked with touch commands and the file is created on the 1st server.
pipeline {
    environment {
        registry = "docker-user/docker-repo"
        registryCredential = 'docker-cred'
        dockerImage = ''
    }
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git url: 'https://github.com/git-user/jenkins-flask-tutorial.git/'
            }
        }
        stage('Building image') {
            steps {
                script {
                    sh "sudo docker build -t flask-app-one ."
                    sh "sudo docker run -p 5000:5000 --name flask-app-one -d flask-app-one"
                    sh "docker tag flask-app-one:latest docker-user/myrepo:flask-app-push-test"
                }
            }
        }
        stage('Push Image') {
            steps {
                script {
                    docker.withRegistry( '', registryCredential ) {
                        sh "docker push docker-user/docker-repo:flask-app-push-test"
                        sshagent(['my-ssh-key']) {
                            sh 'ssh -o StrictHostKeyChecking=no ubuntu@2ndserver && cd /home/ubuntu/ && sudo touch test-file && docker pull docker-user/docker-repo:flask-app-push-test'
                        }
                    }
                }
            }
        }
    }
}
My question is: how do I log in to the 2nd server and pull the docker image on the 2nd server through the Jenkins pipeline script? Help me out where I'm going wrong.
This is more of an alternative than a solution. You can execute the remote commands as part of the ssh call; this will execute the command on the remote server and then disconnect:
ssh name@ip "ls -la /home/ubuntu/"
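
Applied to the pipeline above, each deployment command can be sent to the 2nd server the same way; this is a rough sketch where the server address placeholder, container name, and port are assumptions carried over from the question:

sshagent(['my-ssh-key']) {
    // each command runs on the remote host; the connection closes afterwards
    sh 'ssh -o StrictHostKeyChecking=no ubuntu@<2nd-server-ip> "docker pull docker-user/docker-repo:flask-app-push-test"'
    sh 'ssh -o StrictHostKeyChecking=no ubuntu@<2nd-server-ip> "docker stop flask-app-one || true; docker rm flask-app-one || true"'
    sh 'ssh -o StrictHostKeyChecking=no ubuntu@<2nd-server-ip> "docker run -d -p 5000:5000 --name flask-app-one docker-user/docker-repo:flask-app-push-test"'
}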
