Unable to run docker build inside Jenkinsfile

When running docker build inside my Jenkinsfile, i.e.
docker build -f ./Dockerfile -t datastore:1.0.1 .
I get the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
This is my Jenkinsfile:
#!/usr/bin/groovy
node {
    checkout scm
    // Here are the important data for this build
    def DOCKER_REGISTRY = "XXX"
    def DATASTORE = "datastore"
    def DOCKER_TAG_DATASTORE = "${DOCKER_REGISTRY}/XXX"
    def APP_VERSION = "1.0.1"

    stage('Build') {
        dockerInside('XXX/db-server:1.0.114', '') {
            echo "Setting up artifactory location to push docker image ${DATASTORE}:${APP_VERSION}"
            sh "docker build -f ./Dockerfile -t ${DATASTORE}:${APP_VERSION} ."
            sh "docker tag ${DATASTORE}:${APP_VERSION} ${DOCKER_TAG_DATASTORE}:${APP_VERSION}"
            withCredentials([
                usernamePassword(
                    credentialsId: CORE_IZ_USER,
                    usernameVariable: 'LOG',
                    passwordVariable: 'PAS'
                )]) {
                // Doing some upload commands (see artifactory or docker upload commands from Jenkins)
                sh "docker push ${DOCKER_TAG_DATASTORE}:${APP_VERSION}"
                echo "Push to ${DOCKER_TAG_DATASTORE}:${APP_VERSION}"
            }
        }
    }
    stage('Docker image creation') {
        echo "Docker image creation"
    }
    stage('Docker image upload') {
        echo "Docker image upload"
    }
}
This is my Dockerfile:
FROM XXX/rhel:7.5
USER root
RUN yum -y install gcc && yum install -y git && yum install -y docker
# Install Go
RUN curl -O -s https://dl.google.com/go/go1.10.2.linux-amd64.tar.gz
RUN tar -xzf go1.10.2.linux-amd64.tar.gz -C /usr/local
ENV PATH /usr/local/go/bin:$PATH
ENV GOPATH /gopath
ENV GOBIN /usr/local/go/bin
WORKDIR /gopath/src/XXX
RUN mkdir -p /gopath/src/XXX
ADD . /gopath/src/XXX
RUN GOOS=linux GOARCH=amd64 CGO_ENABLED=1 go build -tags netgo -installsuffix netgo -o ./db-server /gopath/src/XXX/datastore/main.go
ADD ./db-server /db-server
ENTRYPOINT ["/db-server"]
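The error means the docker client inside the build container cannot reach a Docker daemon. A common remedy (the same mount appears in the SonarQube question below) is to bind the host's Docker socket into the container. A minimal sketch, assuming the agent host runs a Docker daemon and that the custom dockerInside step passes its second argument through as docker run options (as docker.image().inside does):

dockerInside('XXX/db-server:1.0.114', '-v /var/run/docker.sock:/var/run/docker.sock') {
    // docker commands in here now talk to the host's daemon
    sh "docker build -f ./Dockerfile -t ${DATASTORE}:${APP_VERSION} ."
    // ...
}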

Related

Running sonarqube as container with same network as host

I am trying to run a SonarQube container that is created from the Dockerfile below:
FROM node:15-buster
################
# Install java #
################
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive \
    apt-get -y install openjdk-11-jre-headless && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
############################
# Install SonarQube client #
############################
WORKDIR /root
RUN apt-get install -y curl grep sed unzip
RUN curl --insecure -o ./sonarscanner.zip -L https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.4.0.2170-linux.zip
RUN unzip -q sonarscanner.zip
RUN rm sonarscanner.zip
RUN mv sonar-scanner-4.4.0.2170-linux sonar-scanner
ENV SONAR_RUNNER_HOME=/root/sonar-scanner
ENV PATH $PATH:/root/sonar-scanner/bin
# Include Sonar configuration and project paths
COPY ./sonar/sonar-runner.properties ./sonar-scanner/conf/sonar-scanner.properties
# Ensure Sonar uses the provided Java for musl instead of a borked glibc one
RUN sed -i 's/use_embedded_jre=true/use_embedded_jre=false/g' /root/sonar-scanner/bin/sonar-scanner
My Sonar link is not accessible. I have confirmed all the network checks, such as reachability from my Jenkins host, and that is fine. It is only from the SonarQube container that the link is unreachable:
ERROR: SonarQube server [https://sonar.***.com] can not be reached
Below is my Jenkinsfile stage for SonarQube:
stage('SonarQube') {
    agent {
        dockerfile {
            filename 'sonar/Dockerfile'
            args '-u root:root'
        }
    }
    steps {
        withCredentials([string(credentialsId: 'trl-mtr-sonar-login', variable: 'SONAR_LOGIN')]) {
            script {
                sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=. || true'
            }
        }
    }
}
The 'withCredentials' plugin is used in the snippet above. I want the container to join the same network as the host. While browsing I found the manual commands to do this, as well as the docker.image().inside plugin, but I still can't consolidate it all into my SonarQube pipeline:
# Start a container attached to a specific network
docker run --network [network] [container]
# Attach a running container to a network
docker network connect [network] [container]
I also created the stage below, but even that seems to be failing:
stage('SonarTests') {
    steps {
        docker.image('sonar/Dockerfile').inside('-v /var/run/docker.sock:/var/run/docker.sock --entrypoint="" --net bridge') {
            sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=. || true'
        }
    }
}
Could someone please assist here?
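One approach worth trying, sketched below under the assumption that the agent runs on Linux (where --network host makes the container share the host's network stack, so the SonarQube server should be reachable exactly as it is from the Jenkins host), is to pass the network option through the agent's docker run arguments:

stage('SonarQube') {
    agent {
        dockerfile {
            filename 'sonar/Dockerfile'
            args '-u root:root --network host'
        }
    }
    steps {
        withCredentials([string(credentialsId: 'trl-mtr-sonar-login', variable: 'SONAR_LOGIN')]) {
            script {
                sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=. || true'
            }
        }
    }
}

As for the failing SonarTests stage: docker.image() expects an image name, not a Dockerfile path, so the scripted variant would first need something like docker.build('sonar-scanner', '-f sonar/Dockerfile .') before calling .inside(...) (the image name here is illustrative).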

Getting error if I use docker build-arg in jenkins pipeline

I need to use the host's SSH key inside Docker. For this purpose I build the image like this:
docker build -t example --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" -f dockerfile-dev .
If I use the docker command directly it works fine, but if I use it inside the Jenkins pipeline script I get the error below:
Running in Durability level: MAX_SURVIVABILITY
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 92: expecting '}', found 'ssh_prv_key' # line 92, column 116.
ev:${GIT_COMMIT} "--build-arg ssh_prv_ke
This is the step I used in the Jenkins pipeline:
sh "docker build -t ${service_name}-dev:${GIT_COMMIT} --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" -f dockerfile-dev ."
And the Dockerfile uses it like this:
ARG ssh_prv_key
# Authorize SSH Host
# Add the keys and set permissions
RUN mkdir -p /root/.ssh
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa
I solved a similar issue as follows:
Jenkins pipeline
sh "cp ~/.ssh/id_rsa id_rsa"
sh "docker build -t ${service_name}-dev:${GIT_COMMIT} -f dockerfile-dev ."
sh "rm id_rsa"
Dockerfile
# Some instructions...
ADD id_rsa id_rsa
# Now use the "id_rsa" file inside the image...
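As an aside on the original error: the Groovy compiler fails because the inner double quotes terminate the double-quoted sh string early. Escaping both the quotes and the shell substitution should also let the original build-arg approach work (an untested sketch; \$ leaves $(cat ...) for the shell to expand instead of Groovy):

sh "docker build -t ${service_name}-dev:${GIT_COMMIT} --build-arg ssh_prv_key=\"\$(cat ~/.ssh/id_rsa)\" -f dockerfile-dev ."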

Build and Run Docker Container in Jenkins

I need to run a Docker container in Jenkins so that installed libraries like pycodestyle can be run in the following steps.
I successfully built the Docker container (in the Dockerfile).
How do I access the container so that I can use it in the next step? (Please look for the >> << markers in the Build stage below.)
Thanks
stage('Build') {
    // Install python libraries from requirements.txt (Check Dockerfile for more detail)
    sh "docker login -u '${DOCKER_USR}' -p '${DOCKER_PSW}' ${DOCKER_REGISTRY}"
    sh "docker build \
        --tag '${DOCKER_REGISTRY}/${DOCKER_TAG}:latest' \
        --build-arg HTTPS_PROXY=${PIP_PROXY} ."
    >> sh "docker run -ti ${DOCKER_REGISTRY}/${DOCKER_TAG}:latest sh" <<
}
stage('Linting') {
    sh '''
        awd=$(pwd)
        echo '===== Linting START ====='
        for file in $(find . -name '*.py'); do
            filename=$(basename $file)
            if [[ ${file:(-3)} == ".py" ]] && [[ $filename = *"test"* ]]; then
                echo "perform PEP8 lint (python pylint blah) for $filename"
                cd $awd && cd $(dirname "${file}") && pycodestyle "${filename}"
            fi
        done
        echo '===== Linting END ====='
    '''
}
You need to mount the workspace of your Jenkins job (containing your Python project) as a volume (see the docker run -v option) into your container, and then run the "next step" build step inside this container. You can do this by providing a shell script as part of your project's source code which performs the "next step", or by writing this script in a previous build stage.
It would be something like this:
sh "chmod +x build.sh"
sh "docker run -v $WORKSPACE:/workspace ${DOCKER_REGISTRY}/${DOCKER_TAG}:latest /workspace/build.sh"
build.sh is an executable script, which is part of your project's workspace and performs the "next step".
$WORKSPACE is the folder that is used by your Jenkins job (normally /var/jenkins_home/jobs//workspace); it is provided by Jenkins as a build variable.
Please note: This solution requires that the Docker daemon is running on the same host as Jenkins! Otherwise the workspace will not be available to your container.
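For the linting use case above, build.sh might look like this (a sketch, assuming pycodestyle is installed in the image and the workspace is mounted at /workspace as shown):

#!/bin/sh
# Runs inside the container; /workspace is the mounted Jenkins workspace.
cd /workspace
echo '===== Linting START ====='
for file in $(find . -name '*test*.py'); do
    # Same PEP8 check as in the Linting stage above
    pycodestyle "$file"
done
echo '===== Linting END ====='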
Another solution would be to run Jenkins itself as a Docker container, so you can easily share the Jenkins home/workspaces with the containers you run within your build jobs, as described here:
Running Jenkins tests in Docker containers build from dockerfile in codebase
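Such a setup usually mounts the host's Docker socket into the Jenkins container so builds can talk to the host's daemon. A sketch using the official jenkins/jenkins image (a docker CLI still has to be available inside the container, and the volume name is illustrative):

docker run -d --name jenkins \
    -p 8080:8080 \
    -v jenkins_home:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    jenkins/jenkins:lts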

How to write a multi-stage Dockerfile without from flag

This is actually the continuation of this question that I asked today.
I have a multi-stage Dockerfile that uses the --from flag:
FROM docker.m.our-intra.net/microsoft/dotnet:2.1-sdk
WORKDIR /app
COPY . ./aspnetapp/
WORKDIR /app/aspnetapp
RUN dotnet publish -c Release -o out
FROM docker.m.our-intra.net/microsoft/dotnet:2.1.4-aspnetcore-runtime
WORKDIR /app
COPY --from=docker.m.our-intra.net/microsoft/dotnet:2.1-sdk /app/aspnetapp/MyProject.WebApi/out ./
ENTRYPOINT ["dotnet", "MyProject.WebApi.dll"]
With the help of this file I am able to build the image locally successfully.
BUT I can't use this Dockerfile in my Jenkins pipeline, because the Docker engine on the Jenkins server is older than version 17.05 and it's not going to be updated (maybe later, but not now).
I'm very new to Docker and Jenkins. I would appreciate it if anyone could help me modify the Dockerfile so that I can use it without the --from flag.
UPDATE:
The above-mentioned Dockerfile is wrong. The working version of the Dockerfile, with which I successfully build the image on my local machine and also run the app, is as follows:
FROM docker.m.our-intra.net/microsoft/dotnet:2.1-sdk AS build
WORKDIR /app
COPY . ./aspnetapp/
WORKDIR /app/aspnetapp
RUN dotnet publish -c Release -o out
FROM docker.m.our-intra.net/microsoft/dotnet:2.1.4-aspnetcore-runtime AS runtime
WORKDIR /app
COPY --from=build /app/aspnetapp/MyProject.WebApi/out ./
ENTRYPOINT ["dotnet", "MyProject.WebApi.dll"]
UPDATE 2:
I'm trying to follow Carlos' advice, so now I have two Dockerfiles.
This is my Dockerfile-build:
FROM docker.m.our-intra.net/microsoft/dotnet:2.1-sdk
WORKDIR /app
COPY . ./aspnetapp/
WORKDIR /app/aspnetapp
RUN dotnet publish -c Release -o out
This is my Dockerfile:
FROM docker.m.our-intra.net/microsoft/dotnet:2.1.4-aspnetcore-runtime
COPY . .
ENTRYPOINT ["dotnet", "MyProject.WebApi.dll"]
This is my Jenkinsfile:
def docker_repository_url = 'docker.m.our-intra.net'
def artifact_group = 'some-artifact-group'
def artifact_name = 'my-service-api'
pipeline {
    agent {
        label 'build'
    }
    stages {
        stage('Checkout') {
            steps {
                script {
                    echo 'Checkout...'
                    checkout scm
                    echo 'Checkout Completed'
                }
            }
        }
        stage('Build') {
            steps {
                script {
                    echo 'Build...'
                    sh 'docker version'
                    sh 'docker build -t fact:v${BUILD_NUMBER} -f Dockerfile-build .'
                    echo 'Build Completed'
                }
            }
        }
        stage('Extract artifact') {
            steps {
                script {
                    echo 'Extract...'
                    sh 'docker create --name build-stage-container fact:v${BUILD_NUMBER}'
                    sh 'docker cp build-stage-container:/app/aspnetapp/MyProject.WebApi/out .'
                    sh 'docker rm -f build-stage-container'
                    echo 'Extract Completed'
                }
            }
        }
        stage('Copy compiled artifact') {
            steps {
                script {
                    echo 'Copy artifact...'
                    sh "docker build -t ${docker_repository_url}/${artifact_group}/${artifact_name}:v${BUILD_NUMBER} -f Dockerfile ."
                    echo 'Copy artifact Completed'
                }
            }
        }
        stage('Push image') {
            steps {
                script {
                    withCredentials([[
                        $class: 'UsernamePasswordMultiBinding',
                        credentialsId: 'jenkins',
                        usernameVariable: 'USERNAME',
                        passwordVariable: 'PASSWORD'
                    ]]) {
                        def username = env.USERNAME
                        def password = env.PASSWORD
                        echo 'Login...'
                        sh "docker login ${docker_repository_url} -u ${username} -p ${password}"
                        echo 'Login Successful'
                        echo 'Push image...'
                        sh "docker push ${docker_repository_url}/${artifact_group}/${artifact_name}:v${BUILD_NUMBER}"
                        echo 'Push image Completed'
                    }
                }
            }
        }
    }
}
All steps succeed, but when I try to run the image locally (after pulling it from Maven) or run it on an OpenShift cluster, it fails and says:
Did you mean to run dotnet SDK commands? Please install dotnet SDK from:
http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409
What am I doing wrong?
TL;DR: You need to replicate the underlying functionality yourself, outside of Docker
Firstly, you are using the --from option incorrectly. To copy from a previous build stage, you must refer to its index or its name, e.g.:
FROM docker.m.our-intra.net/microsoft/dotnet:2.1-sdk
...
FROM docker.m.our-intra.net/microsoft/dotnet:2.1.4-aspnetcore-runtime
COPY --from=0 /app/aspnetapp/MyProject.WebApi/out ./
or
FROM docker.m.our-intra.net/microsoft/dotnet:2.1-sdk AS build-stage
...
FROM docker.m.our-intra.net/microsoft/dotnet:2.1.4-aspnetcore-runtime
COPY --from=build-stage /app/aspnetapp/MyProject.WebApi/out ./
With your current Dockerfile, it would try to copy the file from the upstream docker image, not from the previous build stage.
Secondly, you can't do multi-stage Docker builds with a Docker version prior to 17.05. You need to replicate the underlying functionality yourself, outside of Docker.
To do so, you can have one Dockerfile to build your artifact, then run a throwaway container based on that image from which to extract the artifact. You don't actually need to run the container; you can simply create it with docker create (this creates the writeable container layer):
docker create --name build-stage-container build-stage-image
docker cp build-stage-container:/app/aspnetapp/MyProject.WebApi/out .
Then you can have a second Dockerfile to build an image copying the artifact extracted from the previous stage, with a simple COPY from the build context.
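That second Dockerfile might look like this (a sketch, assuming the extracted out directory sits next to it in the build context):

FROM docker.m.our-intra.net/microsoft/dotnet:2.1.4-aspnetcore-runtime
WORKDIR /app
# 'out' was extracted from the throwaway build container in the previous step
COPY out/ .
ENTRYPOINT ["dotnet", "MyProject.WebApi.dll"]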
@Carlos' answer is perfectly valid. However, as you are using Jenkins with pipelines, you might be happy with the following alternative solution:
If you are using Jenkins with dynamic pod provisioning on a Kubernetes distribution, you can do the following:
Use a pod template for your build which is based on <registry>/microsoft/dotnet:2.1-sdk, and compile your application within that pod in the regular dotnet way.
Keep the second part of your Dockerfile, but just copy the compiled artifact into the Docker image.
In summary, you move the first part of your Dockerfile out into the Jenkinsfile to do the application build; the second part remains, doing the Docker build from the already compiled binary.
The Jenkinsfile would look similar to this:
podTemplate(
    ...,
    containers: ['microsoft/dotnet:2.1-sdk', 'docker:1.13.1'],
    ...
) {
    container('microsoft/dotnet:2.1-sdk') {
        stage("Compile Code") {
            sh "dotnet restore"
            sh "dotnet publish -c Release -o out"
        }
    }
    container('docker:1.13.1') {
        stage("Build Docker image") {
            docker.build("mydockerimage:1.0")
        }
    }
}
This Jenkinsfile is far from complete and only illustrates how it would work.
Find more documentation here:
Jenkins kubernetes plugin
Jenkins docker global variable in scripted pipeline
This is my final working solution. (The WORKDIR /output in the runtime Dockerfile appears to be what fixes the earlier runtime error: dotnet is now started from the directory that actually contains MyProject.WebApi.dll.)
Dockerfile-build:
FROM docker.m.our-intra.net/microsoft/dotnet:2.1-sdk
WORKDIR /app
COPY . ./aspnetapp/
WORKDIR /app/aspnetapp
RUN dotnet publish -c Release -o out
Dockerfile:
FROM docker.m.our-intra.net/microsoft/dotnet:2.1.4-aspnetcore-runtime
ADD output/out /output
WORKDIR /output
ENTRYPOINT ["dotnet", "MyProject.WebApi.dll"]
Jenkinsfile:
def docker_repository_url = 'docker.m.our-intra.net'
def artifact_group = 'some-artifact-group'
def artifact_name = 'my-service-api'
pipeline {
    agent {
        label 'build'
    }
    stages {
        stage('Checkout') {
            steps {
                script {
                    echo 'Checkout...'
                    checkout scm
                    echo 'Checkout Completed'
                }
            }
        }
        stage('Build') {
            steps {
                script {
                    echo 'Build...'
                    sh 'docker version'
                    sh "docker build -t sometag:v${BUILD_NUMBER} -f Dockerfile-build ."
                    echo 'Build Completed'
                }
            }
        }
        stage('Extract artifact') {
            steps {
                script {
                    echo 'Extract...'
                    sh "docker run -d --name build-stage-container sometag:v${BUILD_NUMBER}"
                    sh 'mkdir output'
                    sh 'docker cp build-stage-container:/app/aspnetapp/MyProject.WebApi/out output'
                    sh 'docker rm -f build-stage-container'
                    sh "docker rmi -f sometag:v${BUILD_NUMBER}"
                    echo 'Extract Completed'
                }
            }
        }
        stage('Copy compiled artifact') {
            steps {
                script {
                    echo 'Copy artifact...'
                    sh "docker build -t ${docker_repository_url}/${artifact_group}/${artifact_name}:v${BUILD_NUMBER} -f Dockerfile ."
                    echo 'Copy artifact Completed'
                }
            }
        }
        stage('Push image') {
            steps {
                script {
                    withCredentials([[
                        $class: 'UsernamePasswordMultiBinding',
                        credentialsId: 'jenkins',
                        usernameVariable: 'USERNAME',
                        passwordVariable: 'PASSWORD'
                    ]]) {
                        def username = env.USERNAME
                        def password = env.PASSWORD
                        echo 'Login...'
                        sh "docker login ${docker_repository_url} -u ${username} -p ${password}"
                        echo 'Login Successful'
                        echo 'Push image...'
                        sh "docker push ${docker_repository_url}/${artifact_group}/${artifact_name}:v${BUILD_NUMBER}"
                        echo 'Push image Completed'
                        sh "docker rmi -f ${docker_repository_url}/${artifact_group}/${artifact_name}:v${BUILD_NUMBER}"
                    }
                }
            }
        }
    }
}

Building Go app with "vendor" directory on Jenkins with Docker

I'm trying to set up a Jenkins pipeline to build and deploy my first Go project using a Jenkinsfile and docker.image().inside. I can't figure out how to get go to pick up the dependencies in the vendor/ directory.
When I run the build, I get a bunch of errors:
+ goapp test ./...
src/dao/demo_dao.go:8:2: cannot find package "github.com/dgrijalva/jwt-go" in any of:
    /usr/lib/go_appengine/goroot/src/github.com/dgrijalva/jwt-go (from $GOROOT)
    /usr/lib/go_appengine/gopath/src/github.com/dgrijalva/jwt-go (from $GOPATH)
    /workspace/src/github.com/dgrijalva/jwt-go
...why isn't it picking up the vendor directory?
When I throw in some logging, it seems that after running sh "cd /workspace/src/bitbucket.org/nalbion/go-demo", the next sh command is still in the original ${WORKSPACE} directory. I really like the idea of the Jenkinsfile, but I can't find any decent documentation for it.
(Edit: there is decent documentation here, but dir("/workspace/src/bitbucket.org/nalbion/go-demo") {} doesn't seem to work within docker.image().inside.)
My Dockerfile resembles:
FROM golang:1.6.2
# Google's App Engine Go SDK
RUN wget https://storage.googleapis.com/appengine-sdks/featured/go_appengine_sdk_linux_amd64-1.9.40.zip -q -O go_appengine_sdk.zip && \
    unzip -q go_appengine_sdk.zip -d /usr/lib/ && \
    rm go_appengine_sdk.zip
ENV PATH /usr/lib/go_appengine:/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV GOPATH /usr/lib/go_appengine/gopath
# Add Jenkins user
RUN groupadd -g 132 jenkins && useradd -d "/var/jenkins_home" -u 122 -g 132 -m -s /bin/bash jenkins
And my Jenkinsfile:
node('docker') {
    currentBuild.result = "SUCCESS"
    try {
        stage 'Checkout'
        checkout scm

        stage 'Build and Test'
        env.WORKSPACE = pwd()
        docker.image('nalbion/go-web-build:latest').inside(
                "-v ${env.WORKSPACE}:/workspace/src/bitbucket.org/nalbion/go-demo " +
                "-e GOPATH=/usr/lib/go_appengine/gopath:/workspace") {
            // Debugging
            sh 'echo GOPATH: $GOPATH'
            sh "ls -al /workspace/src/bitbucket.org/nalbion/go-demo"
            sh "cd /workspace/src/bitbucket.org/nalbion/go-demo"
            sh "pwd"
            sh "go vet ./src/..."
            sh "goapp test ./..."
        }

        stage 'Deploy to DEV'
        docker.image('nalbion/go-web-build').inside {
            sh "goapp deploy --application go-demo --version v${v} app.yaml"
        }
        timeout(time: 5, unit: 'DAYS') {
            input message: 'Approve deployment?', submitter: 'qa'
        }

        stage 'Deploy to PROD'
        docker.image('nalbion/go-web-build').inside {
            sh "goapp deploy --application go-demo --version v${v} app.yaml"
        }
    } catch (err) {
        currentBuild.result = "FAILURE"
        // send notifications
        throw err
    }
}
I managed to get it working by including the cd in the same sh statement. Each sh step starts its own shell, so a cd in one step doesn't carry over to the next; a single multi-line script keeps everything in one shell:
docker.image('nalbion/go-web-build:latest')
        .inside("-v ${env.WORKSPACE}:/workspace/src/bitbucket.org/nalbion/go-demo " +
                "-e GOPATH=/usr/lib/go_appengine/gopath:/workspace") {
    sh """
        cd /workspace/src/bitbucket.org/nalbion/go-demo
        go vet ./src/...
        goapp test ./...
    """
}
