How to build and upload docker-compose file inside jenkins pipeline - docker

I need to build a Docker image from a docker-compose.yaml file and then push it to Docker Hub.
I used the following command to build a Dockerfile and push the image to Docker Hub with the Jenkins Docker plugin:
stage('Building image and Push') {
    steps {
        script {
            customImage = docker.build("my-image:${env.BUILD_ID}")
            customImage.push()
        }
    }
}
Is there a similar method to build and push from a docker-compose file?
PS: I heard about a plugin called Docker Compose Build Step. Can I do the same as above with this plugin?

This does not seem supported in a scripted pipeline.
Using purely a declarative pipeline on an agent based on the docker-compose image (so with docker-compose preinstalled), I suppose you can shell script those commands directly:
stage('Build Docker Image') {
    steps {
        sh 'docker-compose build'
        echo 'Docker-compose-build Build Image Completed'
    }
}
Then log in to Docker Hub and push, as described in "Simple Jenkins Declarative Pipeline to Push Docker Image To Docker Hub":
stage('Login to Docker Hub') {
    steps {
        sh 'echo $DOCKERHUB_CREDENTIALS_PSW | sudo docker login -u $DOCKERHUB_CREDENTIALS_USR --password-stdin'
        echo 'Login Completed'
    }
}
stage('Push Image to Docker Hub') {
    steps {
        sh 'sudo docker push <dockerhubusername>/<dockerhubreponame>:$BUILD_NUMBER'
        echo 'Push Image Completed'
    }
}
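For reference, the $DOCKERHUB_CREDENTIALS_USR and $DOCKERHUB_CREDENTIALS_PSW variables above come from binding a Jenkins username/password credential in an environment block, which is not shown in the snippet. A minimal sketch, assuming a credential with the ID 'dockerhub' exists in Jenkins:
environment {
    // credentials() exposes DOCKERHUB_CREDENTIALS plus the derived
    // DOCKERHUB_CREDENTIALS_USR and DOCKERHUB_CREDENTIALS_PSW variables
    DOCKERHUB_CREDENTIALS = credentials('dockerhub')
}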
This is a manual approach, since a docker-compose agent is not directly available (you have the docker, dockerfile, and kubernetes agent types, but no docker-compose option).
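One workaround is a generic docker agent pointing at an image that bundles docker-compose. A sketch, where the docker/compose image tag is an assumption on my part, and the entrypoint must be cleared and the Docker socket mounted so the build can reach the host daemon:
agent {
    docker {
        image 'docker/compose:1.29.2'  // assumed image/tag with docker-compose preinstalled
        // clear the image's docker-compose entrypoint and let the
        // container talk to the host's Docker daemon
        args '--entrypoint= -v /var/run/docker.sock:/var/run/docker.sock'
    }
}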


Jenkins pipeline and Docker multi-stage builds howto

Question
I have to configure CI/CD for a number of Git repositories with the help of Jenkins (and Docker Hub as the CD target). I did that with the help of Docker multi-stage builds (see Considerations). I'm afraid I have misunderstood or overcomplicated a simple idea.
Is Jenkins + Docker multi-stage builds a good/best practice? Am I applying the idea in the correct way?
Considerations
From this presentation I assume using Docker inside Jenkins is a good idea. After reading the article Using Multi-Stage Builds to Simplify and Standardize Build Processes, Docker multi-stage builds look to be the next step in using Jenkins + Docker.
Answers to a similar question also say Docker multi-stage builds make sense, but they don't provide an example implementation.
Implementation
Jenkins creates the pipeline from the SCM repository.
Git repository
|-Dockerfile
|-Jenkinsfile
|-project-folder
| |-src
| |-pom.xml
Dockerfile
FROM alpine as source
RUN apk --update --no-cache add git
COPY project-folder repo
FROM maven:3.6.3-jdk-8 as test
COPY --from=source repo repo
WORKDIR repo
RUN mvn clean test
FROM maven:3.6.3-jdk-8 as build
COPY --from=test repo repo
WORKDIR repo
RUN mvn clean package
FROM openjdk:8 as final
MAINTAINER xxx <xxx@gmail.com>
LABEL owner="xxx"
COPY --from=build repo/target/some-lib-1.8.jar /usr/local/some-lib.jar
ENTRYPOINT ["java", "-jar", "/usr/local/some-lib.jar"]
Jenkinsfile
I used docker build --target for more granularity in the Jenkins UI.
#!/usr/bin/env groovy
def imageId = "use-name/image-name:1.$BUILD_NUMBER"
pipeline {
    agent {
        label 'docker' // separate agent (launched as a JAR on the host machine) to avoid running Docker inside Docker
    }
    stages {
        stage('Test') {
            steps {
                script {
                    sh "docker build --no-cache --target test -t ${imageId} ."
                }
            }
        }
        stage('Build') {
            steps {
                script {
                    sh "docker build --target build -t ${imageId} ."
                }
            }
        }
        stage('Image') {
            steps {
                script {
                    sh "docker build --target final -t ${imageId} ."
                }
            }
        }
        stage('Deploy') {
            steps {
                script {
                    docker.withRegistry('', 'dockerhub') {
                        dockerImage = docker.build("${imageId}")
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Clean') {
            steps {
                sh "docker rmi ${imageId}"
            }
        }
    }
}
Following taleodor's answer, I would suggest the following Jenkinsfile:
pipeline {
    agent {
        label 'docker' // separate agent (launched as a JAR on the host machine) to avoid running Docker inside Docker
    }
    environment {
        imageId = "use-name/image-name:1.${BUILD_NUMBER}"  // double quotes so BUILD_NUMBER is interpolated
        docker_registry = 'your_docker_registry'
        docker_creds = credentials('your_docker_registry_creds')
    }
    stages {
        stage('Docker build') {
            steps {
                sh "docker build --no-cache --force-rm -t ${imageId} ."
            }
        }
        stage('Docker push') {
            steps {
                sh '''
                    docker login $docker_registry --username $docker_creds_USR --password $docker_creds_PSW
                    docker push $imageId
                    docker logout
                '''
            }
        }
        stage('Clean') {
            steps {
                sh "docker rmi ${imageId}"
            }
        }
    }
}
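As a variation (my own suggestion, not part of the answer above), the cleanup could live in a post block instead of a dedicated stage, so the image is removed even when an earlier stage fails:
post {
    always {
        // remove the built image regardless of build outcome;
        // '|| true' keeps the post step from failing if the image is already gone
        sh "docker rmi ${imageId} || true"
    }
}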

Copy build artifacts from inside Docker to the host

This is my Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo '####################################################'
                echo 'Building Docker container'
                echo '####################################################'
                script {
                    sh 'docker build -t my-gcc:1.0 .'
                }
            }
        }
        stage('Run') {
            steps {
                echo '##########################################################'
                echo 'Running Docker Image'
                echo '##########################################################'
                script {
                    sh 'docker run --privileged -i my-gcc:1.0'
                    sh 'docker cp my-gcc:1.0:/usr/src/myCppProject/build/*.hex .'
                }
            }
        }
        stage('Program') {
            steps {
                echo '#######################################################'
                echo 'Programming target'
                echo '#######################################################'
                script {
                    sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
                }
            }
        }
    }
}
The Docker image is built and then run; after this I would like to extract the hex file from the container into the Jenkins working directory so that I can flash it to the board.
But when I try to copy the file I get this error:
+ docker cp my-gcc:1.0:1.0:/usr/src/myCppProject/build/*.hex .
Error: No such container:path: my-gcc:1.0:1.0:/usr/src/myCppProject/build/*.hex
I tried to access other folders in the container and copy their content, but I always get the same error. It seems that I cannot access any folder or file in the container this way.
What am I doing wrong?
Regards
Martin
Jenkins has some standard support for Docker; this is described in Using Docker with Pipeline in the Jenkins documentation. In particular, Jenkins knows how to use a Docker image that contains just tools, combined with the project's workspace directory. I'd use that support instead of trying to script docker cp.
That might look roughly like so:
pipeline {
    agent none
    stages {
        stage('Build') {
            // Jenkins will run `docker build` for you
            agent { dockerfile { args '--privileged' } }
            steps {
                // The current working directory is bind-mounted into the container;
                // the image's `ENTRYPOINT`/`CMD` is ignored.
                // Copy the file out of the container:
                sh "cp /usr/src/myCppProject/build/*.hex ."
            }
        }
        stage('Program') {
            agent any // so not in Docker
            steps {
                sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
            }
        }
    }
}
If you use this approach, also consider whether you should run the main build sequence via Jenkins pipeline steps, or a sh invocation that runs a shell script, or a Makefile, or if a Dockerfile is actually right. It might make sense to build a Docker image out of your customized compiler, but then use the Jenkins pipeline support to build the image for the target board rather than trying to do it all in a Dockerfile.
In the invocation you show, you can't directly docker cp a file out of an image. When you start the container, use docker run --name to give it a name, then docker cp from that container name.
sh 'docker run --name builder ... my-gcc:1.0'
sh 'docker cp builder:/usr/src/myCppProject/build/*.hex .'
sh 'docker rm builder'
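One caveat worth adding (my note, not part of the answer above): docker cp does not expand wildcards like *.hex inside the container path, so copying the whole build directory is more reliable:
// docker cp copies directories recursively, which sidesteps the
// unsupported wildcard in the container-side path
sh 'docker cp builder:/usr/src/myCppProject/build ./build-output'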

How to pull & run docker image on remote server through jenkins pipeline

I have 2 AWS Ubuntu instances: 1st-server and 2nd-server.
Below is my Jenkins pipeline script, which creates a Docker image, runs a container on 1st-server, and pushes the image to a Docker Hub repo. That works fine.
I want to pull the image and deploy it on 2nd-server.
When I SSH to the 2nd server through the pipeline script below, it logs in to 1st-server, even though the SSH credentials ('my-ssh-key') are for 2nd-server. I'm confused how it is logging in to 1st-server; I checked with touch commands, and the file is created on 1st-server.
pipeline {
    environment {
        registry = "docker-user/docker-repo"
        registryCredential = 'docker-cred'
        dockerImage = ''
    }
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git url: 'https://github.com/git-user/jenkins-flask-tutorial.git/'
            }
        }
        stage('Building image') {
            steps {
                script {
                    sh "sudo docker build -t flask-app-one ."
                    sh "sudo docker run -p 5000:5000 --name flask-app-one -d flask-app-one "
                    sh "docker tag flask-app-one:latest docker-user/myrepo:flask-app-push-test"
                }
            }
        }
        stage('Push Image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        sh "docker push docker-user/docker-repo:flask-app-push-test"
                        sshagent(['my-ssh-key']) {
                            sh 'ssh -o StrictHostKeyChecking=no ubuntu@2ndserver && cd /home/ubuntu/ && sudo touch test-file && docker pull docker-user/docker-repo:flask-app-push-test'
                        }
                    }
                }
            }
        }
    }
}
My question is: how do I log in to the 2nd server and pull the Docker image on it through the Jenkins pipeline script? Help me out with where I'm going wrong.
This is more of an alternative than a solution. You can execute the remote commands as part of ssh. This will execute the command on the server and disconnect.
ssh name@ip "ls -la /home/ubuntu/"
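Applied to the pipeline above, that means passing the remote commands as a single quoted argument to ssh, instead of chaining them with && (which runs everything after the ssh on the 1st server once the SSH session exits). A sketch using the names from the question, with the port mapping assumed from the build stage:
sshagent(['my-ssh-key']) {
    // everything inside the inner quotes runs on 2ndserver, not on the Jenkins agent
    sh 'ssh -o StrictHostKeyChecking=no ubuntu@2ndserver "docker pull docker-user/docker-repo:flask-app-push-test && docker run -d -p 5000:5000 docker-user/docker-repo:flask-app-push-test"'
}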

Docker is listening to port specified in run command

I created a pipeline in Jenkins which takes an app from GitHub, builds the app, builds an image, and finally runs that image with the app.
The Dockerfile is:
FROM javastreets/mule:latest
COPY ./target/jenkins-demo-api-1.0.0-1.0.0-SNAPSHOT-mule-application.jar /opt/mule/apps/
CMD [ "/opt/mule/bin/mule"]
Here, jenkins-demo-api-1.0.0-1.0.0-SNAPSHOT-mule-application.jar is the app built in Jenkins from GitHub.
The pipeline script is:
pipeline {
    agent any
    tools {
        maven 'M3'
    }
    stages {
        stage('git pull') {
            steps {
                git branch: 'master', credentialsId: '025fbee3-18cc-4298-ac9b-adac*****', url: 'https://github.com/treadston-e/mule-jenkins.git'
            }
        }
        stage('Build') {
            steps {
                bat "mvn clean package"
            }
        }
        stage('build image') {
            steps {
                bat 'docker build -t docker-demo .'
            }
        }
        stage('run image') {
            steps {
                bat 'docker run -d -p 127.0.0.1:8081:8081 docker-demo'
            }
        }
    }
}
The pipeline executes successfully, but when I try to hit http://localhost:8081 the response I receive is "This page isn't working".
What should I do?
The localhost you are referring to is the localhost of the Docker container, which is not the same as your client's. Try specifying the network in your docker run command:
docker run -d --network host -p 8081:8081 docker-demo
If you would like to check which IP address the bridge network is using, you can inspect it as follows:
docker network inspect bridge
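To double-check what the container actually publishes, listing the port mappings can also help (a quick check on my part, not from the original answer):
docker ps --format "table {{.Names}}\t{{.Ports}}"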

Uploading non-Maven projects to Nexus

I'm new to Docker and Jenkins. I'm trying to upload a non-Maven project to Nexus using the Jenkins pipeline. Below is a snippet of my Jenkinsfile script. I want to do a Maven upload of the resulting Docker build image. Any help?
node {
    def app
    stage('Clone repository') {
        checkout scm
    }
    stage('Build image') {
        app = bat "docker build -t myapp ."
    }
    stage('Test image') {
        bat 'echo "Tests successful"'
    }
    stage('Deploy image') {
        "
    }
}
I haven't used this plugin before, so this is my best guess. I think the plugin below might help you do what you are looking for.
https://github.com/spotify/dockerfile-maven
Configure your pom.xml to point to the Nexus repository.
Usage - https://github.com/spotify/dockerfile-maven/blob/master/docs/usage.md
dockerfile:build -> Builds a Docker image from a Dockerfile.
dockerfile:tag -> Tags a Docker image.
dockerfile:push -> Pushes a Docker image to a repository.
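With the plugin configured, the Deploy stage could then invoke those goals directly. A sketch, assuming your pom.xml sets the plugin's <repository> to your Nexus Docker registry:
stage('Deploy image') {
    // assumes pom.xml configures com.spotify:dockerfile-maven-plugin
    // with <repository> pointing at the Nexus Docker registry
    bat "mvn dockerfile:build dockerfile:push"
}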
