I'm trying to make Jenkins build and publish a docker image of my project after every successful build of my Maven Java project.
I have a Dockerfile and a Jenkinsfile at the root of the project.
Here's my Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /root/.m2:/root/.m2'
        }
    }
    options {
        skipStagesAfterUnstable()
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
        stage('Build image') {
            steps {
                script {
                    docker.withRegistry('http://localhost:5000') {
                        def customImage = docker.build("my-image:${env.BUILD_ID}")
                        customImage.push()
                    }
                }
            }
        }
    }
    post {
        always {
            archiveArtifacts artifacts: 'target/**/*.jar', fingerprint: true
        }
    }
}
When Jenkins tries to build this project, it returns the following error message:
+ docker build -t my-image:13 .
/var/jenkins_home/workspace/powertiss-eureka_master#tmp/durable-b9e45059/script.sh: line 1: docker: not found
What am I doing wrong here?
Edit: my Dockerfile:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY target/projectName-SNAPSHOT.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
The maven:3-alpine image most likely does not include Docker. You need to choose an image that has Docker included.
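One way around this (a sketch, not the only option) is to keep the Maven build inside the maven container but give the image-building stage an agent that actually has the docker CLI, stashing the jar between stages. The 'docker' label below is an assumption; use whatever label identifies a node with Docker installed in your setup:
node('... (label)') is not needed here; per-stage agents handle it:
pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'maven:3-alpine'
                    args '-v /root/.m2:/root/.m2'
                }
            }
            steps {
                sh 'mvn -B -DskipTests clean package'
                // Carry the jar over to the next stage, which runs on a different agent
                stash includes: 'target/*.jar', name: 'app'
            }
        }
        stage('Build image') {
            agent { label 'docker' } // assumption: a node with Docker installed
            steps {
                unstash 'app'
                script {
                    docker.withRegistry('http://localhost:5000') {
                        docker.build("my-image:${env.BUILD_ID}").push()
                    }
                }
            }
        }
    }
}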
I am trying to build a Docker image with Jenkins, but I keep getting an "ADD failed: no source files were specified" error, even though it builds fine from the command line.
My pipeline script is below:
pipeline {
    agent any
    environment {
        dockerImage = ''
    }
    tools {
        // Install the Maven version configured as "maven3" and add it to the path.
        maven "maven3"
    }
    stages {
        stage('Clone') {
            steps {
                // Get some code from a GitHub repository
                git branch: 'main', credentialsId: '9e078e67-9fad-43ca-b0c5-feadb03fc060', poll: false, url: 'https://github.com/phelumie/Birthday-Scheduling-App.git'
            }
        }
        stage('Build jar') {
            steps {
                sh "mvn -f scheduler/pom.xml package -DskipTests -X"
                echo 'Build jar Completed'
            }
        }
        stage('Build image') {
            steps {
                echo 'Starting to build docker image'
                script {
                    def dockerImage = docker.build("phelumiess/demo", "-f scheduler/Dockerfile .")
                    echo 'Build Image Completed'
                }
            }
        }
    }
}
Below is my Dockerfile:
FROM openjdk:8-jdk-alpine
EXPOSE 8080
ADD target/birthday-scheduler*.jar birthday-scheduler.jar
ENTRYPOINT ["java","-jar","/birthday-scheduler.jar" ]
Change the directory before executing the docker build command. Refer to the following:
stage('Build image') {
    steps {
        echo 'Starting to build docker image'
        dir('Birthday-Scheduling-App/scheduler') {
            script {
                def dockerImage = docker.build("phelumiess/demo")
                echo 'Build Image Completed'
            }
        }
    }
}
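Alternatively, if you want to keep the -f flag, note that the real problem is the build context: with "-f scheduler/Dockerfile ." the context is the repository root, so ADD target/... matches nothing. A sketch (assuming the jar ends up under scheduler/target) that passes scheduler as the context instead:
stage('Build image') {
    steps {
        echo 'Starting to build docker image'
        script {
            // The context is `scheduler`, so ADD target/... in the Dockerfile resolves
            def dockerImage = docker.build("phelumiess/demo", "-f scheduler/Dockerfile scheduler")
        }
    }
}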
In my Jenkins pipeline, the pipeline code and Dockerfile are available on GitLab:
pipeline {
    agent { dockerfile true }
    stages {
        stage('Test') {
            steps {
                sh '''
                    java -version
                    chmod 777 /data
                '''
            }
        }
    }
}
The image gets built from the Dockerfile and the container starts, but it is missing some privileges: it cannot even create a directory.
I need to start the container with enough privileges to perform chmod, mkdir, etc.
agent { dockerfile ... } supports arguments. See the docs:
agent {
    dockerfile {
        // Build the agent image from Dockerfile.build; args are passed to `docker run`
        filename 'Dockerfile.build'
        args '--privileged'
    }
}
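If all you actually need is mkdir/chmod on root-owned paths, running the container as root is often enough and is much narrower than --privileged. A sketch using the same agent block (the filename is carried over from the example above):
agent {
    dockerfile {
        filename 'Dockerfile.build'
        // Run as root inside the container; less sweeping than --privileged
        args '-u root'
    }
}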
Question
I have to configure CI/CD for a number of Git repositories with the help of Jenkins (and DockerHub as the CD target). I did that with the help of Docker multi-stage builds (see Considerations). I'm afraid I may have misunderstood or overcomplicated a simple idea.
Is Jenkins + Docker multi-stage build a good or best practice? Am I applying the idea in the correct way?
Considerations
From this presentation I assume using Docker inside Jenkins is a good idea. After reading the article Using Multi-Stage Builds to Simplify and Standardize Build Processes, Docker multi-stage builds look like the next step of using Jenkins + Docker.
Answers to a similar question also say Docker multi-stage builds make sense, but they don't provide an implementation example.
Implementation
Jenkins creates the pipeline from the SCM repository.
Git repository
|-Dockerfile
|-Jenkinsfile
|-project-folder
  |-src
  |-pom.xml
Dockerfile
FROM alpine as source
RUN apk --update --no-cache add git
COPY project-folder repo
FROM maven:3.6.3-jdk-8 as test
COPY --from=source repo repo
WORKDIR repo
RUN mvn clean test
FROM maven:3.6.3-jdk-8 as build
COPY --from=test repo repo
WORKDIR repo
RUN mvn clean package
FROM openjdk:8 as final
MAINTAINER xxx <xxx@gmail.com>
LABEL owner="xxx"
COPY --from=build repo/target/some-lib-1.8.jar /usr/local/some-lib.jar
ENTRYPOINT ["java", "-jar", "/usr/local/some-lib.jar"]
Jenkinsfile
I used docker build --target for more granularity in the Jenkins UI.
#!/usr/bin/env groovy
def imageId = "use-name/image-name:1.$BUILD_NUMBER"
pipeline {
    agent {
        // separate agent (launched as a JAR on the host machine) to avoid running Docker inside Docker
        label 'docker'
    }
    stages {
        stage('Test') {
            steps {
                script {
                    sh "docker build --no-cache --target test -t ${imageId} ."
                }
            }
        }
        stage('Build') {
            steps {
                script {
                    sh "docker build --target build -t ${imageId} ."
                }
            }
        }
        stage('Image') {
            steps {
                script {
                    sh "docker build --target final -t ${imageId} ."
                }
            }
        }
        stage('Deploy') {
            steps {
                script {
                    docker.withRegistry('', 'dockerhub') {
                        dockerImage = docker.build("${imageId}")
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Clean') {
            steps {
                sh "docker rmi ${imageId}"
            }
        }
    }
}
Following taleodor's answer, I would suggest the following Jenkinsfile:
pipeline {
    agent {
        // separate agent (launched as a JAR on the host machine) to avoid running Docker inside Docker
        label 'docker'
    }
    environment {
        // Double quotes so Groovy expands BUILD_NUMBER when the value is set
        imageId = "use-name/image-name:1.${BUILD_NUMBER}"
        docker_registry = 'your_docker_registry'
        docker_creds = credentials('your_docker_registry_creds')
    }
    stages {
        stage('Docker build') {
            steps {
                sh "docker build --no-cache --force-rm -t ${imageId} ."
            }
        }
        stage('Docker push') {
            steps {
                sh '''
                    docker login $docker_registry --username $docker_creds_USR --password $docker_creds_PSW
                    docker push $imageId
                    docker logout
                '''
            }
        }
        stage('Clean') {
            steps {
                sh "docker rmi ${imageId}"
            }
        }
    }
}
My Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Compile') {
            steps {
                withMaven(maven: 'maven_3_6_3') {
                    sh 'mvn clean compile'
                }
            }
        }
        stage('unit test and Package') {
            steps {
                withMaven(maven: 'maven_3_6_3') {
                    sh 'mvn package'
                }
            }
        }
        stage('Docker build') {
            steps {
                sh 'docker build -t dockerId/cakemanager .'
            }
        }
    }
}
docker build -t dockerId/cakemanager .
/Users/Shared/Jenkins/Home/workspace/CDCI-Cake-Manager_master#tmp/durable-e630df16/script.sh: line 1: docker: command not found
First, install the Docker plugin from Manage Jenkins >> Manage Plugins >> Available; search for Docker and install it.
Then configure it under Manage Jenkins >> Global Tool Configuration.
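With the tool configured, you can reference it from a declarative pipeline so the docker CLI is put on the PATH for the build. A sketch, assuming you named the installation 'docker' (dockerTool comes with the Docker plugin):
tools {
    // Must match the Docker installation name from Global Tool Configuration
    dockerTool 'docker'
}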
You need to manually install docker on your Jenkins master or on agents if you're running builds on them.
Here's the doc for installing Docker on OS X: https://docs.docker.com/docker-for-mac/install/
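Once Docker is installed, a quick way to confirm the agent can actually see it (a minimal sanity check):
pipeline {
    agent any
    stages {
        stage('Sanity check') {
            steps {
                // Fails fast if the docker CLI is missing from the agent's PATH
                sh 'which docker && docker version'
            }
        }
    }
}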
I have Jenkins running on an EC2 instance. I have the EC2 Plugin configured in a peered VPC, and when a job is tagged 'support_ubuntu_docker' it will spin up a Jenkins slave with Docker pre-installed.
I am able to follow the examples, and get my job to connect to the local docker running on the Slave, and run commands inside the container.
Working: https://pastebin.com/UANvjhnA
pipeline {
    agent {
        docker {
            image 'node:7-alpine'
            label 'support_ubuntu_docker'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}
Not Working https://pastebin.com/MsSZaPha
pipeline {
    agent {
        docker {
            image 'hashicorp/terraform:light'
            label 'support_ubuntu_docker'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'terraform --version'
            }
        }
    }
}
I have tried with the ansible/ansible:default image, as well as an image I created myself.
FROM alpine:3.7
RUN apk add --no-cache terraform
RUN apk add --no-cache ansible
ENTRYPOINT ["/bin/ash"]
This image behaves as expected locally:
$ docker exec -it 3843653932c8 ash
/ # terraform --version
Terraform v0.11.0
/ # ansible --version
ansible 2.4.6.0
config file = None
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15 (default, Aug 22 2018, 13:24:18) [GCC 6.4.0]
/ #
I really just want to be able to clone my Terraform Git repo and use the terraform binary in the container to run my init/plan/applies.
The error I'm getting for all of these is:
java.io.IOException: Failed to run top 'c9dfeda21b718b9df1035500adf2ef80c5c3807cf63e724317d620d4bcaa14b3'. Error: Error response from daemon: Container c9dfeda21b718b9df1035500adf2ef80c5c3807cf63e724317d620d4bcaa14b3 is not running
The question really should have been a Docker question; what's the difference between node:7-alpine and hashicorp/terraform:light?
hashicorp/terraform:light has an ENTRYPOINT entry, pointing to /bin/terraform.
Basically that means you run it this way:
docker run hashicorp/terraform:light --version
And it will exit right away, i.e., it's not interactive.
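You can confirm the ENTRYPOINT without running the image at all (a quick check; the output should look something like this):
$ docker inspect --format '{{.Config.Entrypoint}}' hashicorp/terraform:light
[/bin/terraform]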
So if you want an interactive shell within that Docker container, you'll have to override the ENTRYPOINT to point at a shell, say /bin/bash (on an Alpine-based image like this one, /bin/sh is what's actually available), and also tell Docker to run interactively:
pipeline {
    agent {
        docker {
            image 'hashicorp/terraform:light'
            args '-it --entrypoint=/bin/bash'
            label 'support_ubuntu_docker'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'terraform --version'
            }
        }
    }
}
In a scripted pipeline you can do this:
docker.image(dockerImage).inside("--entrypoint=''") {
    // code to run on the container
}
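For example (a sketch, using the image from this question and a hypothetical 'support_ubuntu_docker' node label):
node('support_ubuntu_docker') {
    // An empty entrypoint lets Jenkins start the container with its own keep-alive command
    docker.image('hashicorp/terraform:light').inside("--entrypoint=''") {
        sh 'terraform --version'
    }
}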
If you are creating the image to use in Jenkins from a base image that already has an ENTRYPOINT instruction, you can override it by adding this line to the end of your own Dockerfile:
ENTRYPOINT []
Then the --entrypoint override is no longer needed.
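For example, a minimal wrapper image (hypothetical, just to illustrate the override):
FROM hashicorp/terraform:light
# Neutralize the upstream ENTRYPOINT so the container runs whatever command it is given
ENTRYPOINT []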
I had to change the entrypoint to empty to get it working. With the following script, it worked like a charm:
pipeline {
    agent {
        docker {
            image 'hashicorp/terraform:light'
            args '-i --entrypoint='
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'terraform --version'
            }
        }
    }
}