I am trying to write a pipeline which first clones a repository, then builds a Docker image, and finally pushes the image to Docker Hub. Following is my Jenkinsfile:
pipeline {
    agent { dockerfile true }
    environment {
        APPLICATION = 'connect'
        ENVIRONMENT = 'dev'
        BUILD_VERSION = '0.9.5'
        MAINTAINER_NAME = 'Shoaib'
        MAINTAINER_EMAIL = 'shoaib@email.com'
        BUILD_DOCKER_REPO = 'repo1/images'
        DOCKER_IMAGE_TAG = 'repo1/images:connect_dev_0.9.5'
    }
    stages {
        stage('clone repository') {
            steps {
                checkout Jenkins-Integration
            }
        }
        stage('Build Image') {
            steps {
                image = docker.build("-f Dockerfile.local", "--no-cache", "-t ${DOCKER_IMAGE_TAG}", "--build-arg envior=${ENVIRONMENT} .", "--build-arg build_version=${BUILD_VERSION} .", "--build-arg maintainer_name=${MAINTAINER_NAME} .", "--build-arg maintainaer_email=${MAINTAINER_EMAIL} .")
            }
        }
        stage('Deploy') {
            steps {
                script {
                    docker.withRegistry('https://registry.example.com', 'docker-hub-credentials') {
                        image.push(${DOCKER_IMAGE_TAG})
                    }
                }
            }
        }
    }
}
But when I run this job in Blue Ocean I get the following error. I have tried googling it but could not find a satisfactory answer. Any help is appreciated.
Put the docker.build call in the stage below into a script block, as follows:
stage('Build Image') {
    steps {
        script {
            def image = docker.build("-f Dockerfile.local", "--no-cache", "-t ${DOCKER_IMAGE_TAG}", "--build-arg envior=${ENVIRONMENT}", "--build-arg build_version=${BUILD_VERSION}", "--build-arg maintainer_name=${MAINTAINER_NAME}", "--build-arg maintainaer_email=${MAINTAINER_EMAIL} .")
        }
    }
}
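Note that the Docker Pipeline plugin's docker.build usually takes just two arguments: the image name, and a single string of extra docker build flags ending with the build context. A hedged sketch of what that stage might look like, reusing the variables from the question's environment block (build-arg names kept as spelled there):

stage('Build Image') {
    steps {
        script {
            // Image name first; all build flags plus the context path "." in one string.
            def image = docker.build(
                "${DOCKER_IMAGE_TAG}",
                "--no-cache -f Dockerfile.local " +
                "--build-arg envior=${ENVIRONMENT} " +
                "--build-arg build_version=${BUILD_VERSION} " +
                "--build-arg maintainer_name=${MAINTAINER_NAME} " +
                "--build-arg maintainaer_email=${MAINTAINER_EMAIL} ."
            )
        }
    }
}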
Is it somehow possible to copy data from one stage for use in another?
For example, I have one stage where I want to clone my repo, and another that runs Kaniko, which copies (via the Dockerfile) all the data into the container and builds it.
How can I do this? Because:
the stages are independent, so I am not able to work with the same data in both, and
in the Kaniko container I am not able to install Git to clone the repo there.
Thanks in advance.
Example code:
pipeline {
    agent none
    stages {
        stage('Clone repository') {
            agent {
                label 'builder'
            }
            steps {
                sh 'git clone ssh://git@myrepo.com/repo.git'
                sh 'cd repo'
            }
        }
        stage('Build application') {
            agent {
                docker {
                    label 'builder'
                    image 'gcr.io/kaniko-project/executor:debug'
                    args '-u 0 --entrypoint=""'
                }
            }
            steps {
                sh '''#!/busybox/sh
                /kaniko/executor -c `pwd` -f Dockerfile
                '''
            }
        }
    }
}
P.S. In the Dockerfile I use something like:
ADD . /
You can try to use stash:
stage('Clone repository') {
    agent {
        label 'builder'
    }
    steps {
        sh 'git clone ssh://git@myrepo.com/repo.git'
        script {
            stash includes: 'repo/', name: 'myrepo'
        }
    }
}
stage('Build application') {
    agent {
        docker {
            label 'builder'
            image 'gcr.io/kaniko-project/executor:debug'
            args '-u 0 --entrypoint=""'
        }
    }
    steps {
        script {
            unstash 'myrepo'
        }
        sh '''#!/busybox/sh
        /kaniko/executor -c `pwd` -f Dockerfile
        '''
    }
}
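Note that as written the Kaniko executor only builds the image; it is not told where to push it. Kaniko normally also wants a --destination for the target registry/repository (the value below is a placeholder) and registry credentials mounted at /kaniko/.docker/config.json, or --no-push if you only want to build. A hedged sketch of the final step:

sh '''#!/busybox/sh
# --destination is a placeholder; point it at your own registry/repository.
/kaniko/executor \
    --context `pwd` \
    --dockerfile Dockerfile \
    --destination registry.example.com/myteam/myapp:latest
'''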
I have a sample Jenkins script which I am using to build my image:
pipeline {
    environment {
        registry = "leexha/sampleadd"
        registyCredential = 'dockerhub'
        dockerImage = ''
    }
    agent any
    stages {
        stage('Git clone') {
            steps {
                git branch: 'master', url: 'https://github.com/leeadh/terraform_simpleapp.git'
            }
        }
        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build(registry + ":development .","--build-arg endpoint_arg='http://22222:8200' --build-arg token_arg='s.aaaaa'")
                }
            }
        }
        stage('Pushing to Docker Hub') {
            steps {
                script {
                    println dockerImage.id
                    docker.withRegistry('', registyCredential) {
                        dockerImage.push()
                    }
                }
            }
        }
    }
}
The build is successful. However, when I println the dockerImage in the 'Pushing to Docker Hub' stage, I notice it prints:
leexha/sampleadd:development .
As a result tagging fails and it can't push. I'm wondering why it puts an additional . at the end of the image name when I did not put it anywhere.
In the line
dockerImage = docker.build(registry + ":development .","--build-arg endpoint_arg='http://22222:8200' --build-arg token_arg='s.aaaaa'")
if I don't put the . after development, the build fails.
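For what it's worth, the trailing . is the build context that docker build needs, and because it sits inside the first argument it ends up as part of the image name and tag. A hedged fix is to keep only the tag in the first argument and move the context into the second argument, which the Docker Pipeline plugin appends to the docker build command:

script {
    // Tag only in the first argument; build flags plus the context "." in the second.
    dockerImage = docker.build(
        registry + ":development",
        "--build-arg endpoint_arg='http://22222:8200' --build-arg token_arg='s.aaaaa' ."
    )
}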
Using the OpenShift oc new-app command, I have built a container image. Instead of pushing the image to a local container registry, I want to push the generated image to a private registry. As I am using Jenkins for CI/CD, I want to automate the process of generating the image and pushing to the private registry.
I am able to achieve the generation part, but I am stuck with pushing the image to a private registry through the Jenkinsfile. Any pointers on how to achieve this are appreciated.
The article Jenkins Building Docker Image and Sending to Registry discusses how this might be done:
pipeline {
    environment {
        registry = "gustavoapolinario/docker-test"
        registryCredential = 'dockerhub'
        dockerImage = ''
    }
    agent any
    tools { nodejs "node" }
    stages {
        stage('Cloning Git') {
            steps {
                git 'https://github.com/gustavoapolinario/node-todo-frontend'
            }
        }
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'
            }
        }
        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }
        stage('Deploy Image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Remove Unused docker image') {
            steps {
                sh "docker rmi $registry:$BUILD_NUMBER"
            }
        }
    }
}
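Since the question is about a private registry rather than Docker Hub, the empty registry URL in docker.withRegistry('', registryCredential) can be swapped for the private registry's URL, with a matching Jenkins credentials ID. A hedged sketch (the URL below is a placeholder):

stage('Deploy Image') {
    steps {
        script {
            // First argument: registry URL; second argument: Jenkins credentials ID.
            docker.withRegistry('https://registry.example.com', registryCredential) {
                dockerImage.push()
            }
        }
    }
}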
This is my Jenkinsfile for building a Docker image and pushing it to Docker Hub. Everything works just great.
I would like to clean up the untagged images after the build process. Currently I run docker system prune -f manually on the Jenkins node. Is there any way to incorporate this when the agent is none?
pipeline {
    agent none
    stages {
        stage('Build Jar') {
            agent {
                docker {
                    image 'maven:3.6.0'
                    args '-v $HOME/.m2:/root/.m2'
                }
            }
            steps {
                sh 'mvn clean package'
            }
        }
        stage('Build Image') {
            steps {
                script {
                    app = docker.build("myimagename")
                }
            }
        }
        stage('Push Image') {
            steps {
                script {
                    app.push("latest")
                }
            }
        }
    }
}
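One hedged way to fold the manual docker system prune -f into this pipeline, given the top-level agent none, is a final stage that explicitly picks the Docker-capable node (the label 'docker' below is a placeholder for whatever label that node actually has) and prunes there:

stage('Cleanup') {
    // 'docker' is a hypothetical label for the node that runs the Docker builds.
    agent { label 'docker' }
    steps {
        // Remove dangling images and other unused Docker data on that node.
        sh 'docker system prune -f'
    }
}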
I have a pipeline with multiple stages, and I want to reuse a Docker container between only some of the stages, rather than all of them:
pipeline {
    agent none
    stages {
        stage('Install deps') {
            agent {
                docker { image 'node:10-alpine' }
            }
            steps {
                sh 'npm install'
            }
        }
        stage('Build, test, lint, etc') {
            agent {
                docker { image 'node:10-alpine' }
            }
            parallel {
                stage('Build') {
                    agent {
                        docker { image 'node:10-alpine' }
                    }
                    // This fails because it runs in a new container, and the node_modules created during the first installation are gone at this point.
                    // How do I reuse the same container created in the install deps step?
                    steps {
                        sh 'npm run build'
                    }
                }
                stage('Test') {
                    agent {
                        docker { image 'node:10-alpine' }
                    }
                    steps {
                        sh 'npm run test'
                    }
                }
            }
        }
        // Later on, there is a deployment stage which MUST deploy using a specific node,
        // which is why "agent none" is used in the first place.
    }
}
See the reuseNode option for the Jenkins Pipeline docker agent:
https://jenkins.io/doc/book/pipeline/syntax/#agent
pipeline {
    agent any
    stages {
        stage('NPM install') {
            agent {
                docker {
                    /*
                     * Reuse the workspace on the agent defined at top-level of
                     * Pipeline, but run inside a container.
                     */
                    reuseNode true
                    image 'node:12.16.1'
                }
            }
            environment {
                /*
                 * Change HOME, because default is usually root dir, and
                 * Jenkins user may not have write permissions in that dir.
                 */
                HOME = "${WORKSPACE}"
            }
            steps {
                sh 'env | sort'
                sh 'npm install'
            }
        }
    }
}
You can use a Scripted Pipeline, where you can put multiple stages inside a docker step, e.g.:
node {
    checkout scm
    docker.image('node:10-alpine').inside {
        stage('Build') {
            sh 'npm run build'
        }
        stage('Test') {
            sh 'npm run test'
        }
    }
}
(code untested)
For a Declarative Pipeline, one solution is to use a Dockerfile in the root of the project. For example:
Dockerfile
FROM node:10-alpine
# Further instructions
Jenkinsfile
pipeline {
    agent {
        dockerfile true
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm run build'
            }
        }
        stage('Test') {
            steps {
                sh 'npm run test'
            }
        }
    }
}