How to push build artifacts into a Docker image in Jenkins

How can I create a Docker image of the build artifacts using Dockerfile instructions? I am using "Build inside a Docker container" in my Jenkins job.
These are the instructions in the Dockerfile:
# install openjdk 8
RUN apt-get update && \
    apt-get install -y openjdk-8-jdk && \
    apt-get install -y ant && \
    apt-get clean
Then control needs to return to the Jenkins job to perform the build.
In the build step of the Jenkins job, "Execute Shell" commands are executed and the artifacts are created.
There are also post-build actions that run the JUnit test cases and generate a coverage report.
At the end, I need the Dockerfile to run an instruction that copies the artifacts into the image, for example:
ADD sourcefile destinationfile
Please suggest how to write the Docker instructions so that control passes to the Jenkins job and returns after the build process is done.

You can use the Docker Pipeline plugin to do this (see the docker object):
node("docker") {
docker.withRegistry('<<your-docker-registry>>', '<<your-docker-registry-credentials-id>>') {
git url: "<<your-git-repo-url>>", credentialsId: '<<your-git-credentials-id>>'
sh "git rev-parse HEAD > .git/commit-id"
def commit_id = readFile('.git/commit-id').trim()
println commit_id
def app;
stage("build") {
app = docker.build "your-project-name"
}
stage("publish") {
app.push 'master'
app.push "${commit_id}"
}
}
}
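To cover the ADD/COPY part of the question: the Dockerfile referenced by docker.build can copy the build artifacts into the image. A minimal sketch, assuming the Jenkins "Execute Shell" step leaves a Java artifact at build/app.jar (a hypothetical path):
# Base image for running the artifact (assumption: a Java build)
FROM openjdk:8-jre
# Copy the artifact produced by the Jenkins build step into the image
# (build/app.jar is a placeholder; adjust to your actual build output)
COPY build/app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
Because docker.build runs after the shell build steps in the same workspace, the artifacts are already on disk when the image is built, which is how control "returns" to Docker at the end of the job.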

Related

Running SonarQube as a container with the same network as the host

I am trying to run a SonarQube container that is built from the Dockerfile below:
FROM node:15-buster
################
# Install java #
################
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive \
    apt-get -y install openjdk-11-jre-headless && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
############################
# Install SonarQube client #
############################
WORKDIR /root
RUN apt-get install -y curl grep sed unzip
RUN curl --insecure -o ./sonarscanner.zip -L https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.4.0.2170-linux.zip
RUN unzip -q sonarscanner.zip
RUN rm sonarscanner.zip
RUN mv sonar-scanner-4.4.0.2170-linux sonar-scanner
ENV SONAR_RUNNER_HOME=/root/sonar-scanner
ENV PATH $PATH:/root/sonar-scanner/bin
# Include Sonar configuration and project paths
COPY ./sonar/sonar-runner.properties ./sonar-scanner/conf/sonar-scanner.properties
# Ensure Sonar uses the provided Java for musl instead of a borked glibc one
RUN sed -i 's/use_embedded_jre=true/use_embedded_jre=false/g' /root/sonar-scanner/bin/sonar-scanner
My SonarQube link is not accessible from it. I confirmed all the network checks, such as its reachability from my Jenkins host, and that is fine; it is only from the SonarQube container that the link is unreachable:
ERROR: SonarQube server [https://sonar.***.com] can not be reached
Below is my Jenkinsfile stage for SonarQube:
stage('SonarQube') {
    agent {
        dockerfile {
            filename 'sonar/Dockerfile'
            args '-u root:root'
        }
    }
    steps {
        withCredentials([string(credentialsId: 'trl-mtr-sonar-login', variable: 'SONAR_LOGIN')]) {
            script {
                sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=. || true'
            }
        }
    }
}
The withCredentials step is used in the snippet above. I would like the container to use the same network as the host.
While researching I found the manual commands to do this, as well as the docker.image(...).inside step, but I still cannot consolidate them into my SonarQube pipeline stage:
# Start a container attached to a specific network
docker run --network [network] [container]
# Attach a running container to a network
docker network connect [network] [container]
I also created the stage below, but it seems to fail as well:
stage('SonarTests') {
    steps {
        docker.image('sonar/Dockerfile').inside('-v /var/run/docker.sock:/var/run/docker.sock --entrypoint="" --net bridge') {
            sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=. || true'
        }
    }
}
Could someone please assist here?
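One way to consolidate this, sketched here as an assumption rather than a confirmed fix, is to pass the network option through the dockerfile agent's args, which are forwarded to docker run (the equivalent of docker run --network host):
stage('SonarQube') {
    agent {
        dockerfile {
            filename 'sonar/Dockerfile'
            // forwarded to "docker run", so this attaches the scanner container to the host network
            args '-u root:root --network host'
        }
    }
    steps {
        withCredentials([string(credentialsId: 'trl-mtr-sonar-login', variable: 'SONAR_LOGIN')]) {
            sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=.'
        }
    }
}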

Jenkins pipeline fails to run a remote script on a Jenkins node

I am trying to run the following cleanup script but it keeps failing on specific Jenkins nodes:
def BUILDERS = [:].asSynchronized()
def NODE_NAMES = [
    'cleanuptooldean', // test
]
node('master') {
    stage('Prepare the Pipeline') {
        // get deploy pattern from params
        for (NODE_NAME in NODE_NAMES) {
            // Groovy closures stuff, need to copy it over
            def FINAL_NODE_NAME = NODE_NAME
            BUILDERS[FINAL_NODE_NAME] = {
                node(FINAL_NODE_NAME) {
                    timeout(time: 5, unit: "MINUTES") {
                        echo "Started Cleaning process of unused docker images from Jenkins Instance, Agent: " + env.NODE_NAME
                        sh "docker system prune -a --volumes -f"
                        echo "Cleaning up space from unused packages (orphaned dependencies), remove old kernels in Ubuntu, Agent: " + env.NODE_NAME
                        sh "sudo apt-get -y autoremove --purge"
                        echo "clean the apt cache on Ubuntu " + env.NODE_NAME
                        sh "sudo apt-get -y clean"
                        echo "Finished Cleaning process of unused docker images from Jenkins Instance, Agent: " + env.NODE_NAME
                    }
                }
            }
        }
    }
}
The error I get if I put "sudo" at the beginning of "apt-get -y autoremove --purge" and "apt-get -y clean" is: "sudo: no tty present and no askpass program specified". Needless to say, I have edited the sudoers file and added "jenkins ALL=(ALL) NOPASSWD: ALL" at the end of the file in order to test it.
If I remove the "sudo" command, the error I get is: "dial unix /var/run/docker.sock: connect: permission denied", which I tried to resolve by adding the "jenkins" user to the "docker" group.
I must say that when I run the commands locally as the "jenkins" user, both with and without "sudo", they work; but when I run them remotely from Jenkins using the pipeline, they fail.
This specific script works perfectly on other nodes.
Thanks in advance.
Apparently each node used a different user, so I had to add all those users to the docker group and add them to the sudoers file via visudo.
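A sketch of the per-node setup that fix implies, assuming each agent runs its Jenkins process under its own user account:
# On each affected node, as root, replace <agent-user> with that node's Jenkins user
usermod -aG docker <agent-user>   # grants access to /var/run/docker.sock
# Add a passwordless-sudo rule, e.g. via "visudo -f /etc/sudoers.d/jenkins-cleanup":
#   <agent-user> ALL=(ALL) NOPASSWD: /usr/bin/apt-get
# Reconnect the agent so the new group membership takes effect.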

Executing gcloud command in Jenkins pipeline

I'm trying to run the gcloud command in a Jenkins declarative pipeline just like in the following example:
pipeline {
    agent any
    stages {
        stage('Run gcloud version') {
            steps {
                sh 'gcloud --version'
            }
        }
    }
}
I installed the "GCloud SDK Plugin" and configured it in Jenkins under "Global Tool Configuration", but when I try to build the pipeline using the above Jenkinsfile, I get a 'gcloud: not found' error in the pipeline.
I was able to run the command using the following Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Run gcloud') {
            steps {
                withEnv(['GCLOUD_PATH=/var/jenkins_home/google-cloud-sdk/bin']) {
                    sh '$GCLOUD_PATH/gcloud --version'
                }
            }
        }
    }
}
Note: I'm running Jenkins in Kubernetes, so first I had to install the gcloud SDK in the Jenkins pod.
I am running Jenkins 2.176.2 in containers, and the GCloud plugin was not able to install the SDK in the agent containers.
I used the Dockerfile to install it when deploying the agents:
RUN echo "deb http://packages.cloud.google.com/apt cloud-sdk-stretch main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list \
&& curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
&& apt-get update -y && apt-get install google-cloud-sdk -y \
&& PATH=$PATH:/root/google-cloud-sdk/bin
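With an agent image built from such a Dockerfile, the original declarative pipeline should work unchanged; a sketch, assuming the image is published as my-registry/jenkins-agent-gcloud (a hypothetical name):
pipeline {
    // use the agent image that has the Cloud SDK baked in
    agent { docker { image 'my-registry/jenkins-agent-gcloud:latest' } }
    stages {
        stage('Run gcloud version') {
            steps {
                // gcloud is on the default PATH because the apt package installs it system-wide
                sh 'gcloud --version'
            }
        }
    }
}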

Run sudo within Jenkins dockerfile pipeline

I am setting up a Jenkins pipeline (declarative script) using a Docker container agent built from a Dockerfile. I want one of the build stages to fetch dependent packages (Debian packages, from Artifactory, in my case) and then install them within the Docker container. Installing those packages (using dpkg, in my case) needs super-user permission, and thus sudo. How do I set up the pipeline and/or Dockerfile to enable that?
At present, my Jenkinsfile is somewhat like this:
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile.jenkins'
        }
    }
    stages {
        stage('Set up dependencies') {
            steps {
                sh 'rm -rf dependent-packages && mkdir dependent-packages'
                script { // Fetch packages from Artifactory
                    def packageserver = Artifactory.server 'deb-repo-srv'
                    def downloadSpec = ...
                    packageserver.download(downloadSpec)
                }
                sh 'sudo dpkg -i -R dependent-packages/'
            }
        }
        ...
    }
}
And my Dockerfile is like this:
# Set up the O/S environment
FROM debian:9
# Add the build and test tools
RUN apt-get -y update && apt-get -y install \
    cmake \
    doxygen \
    g++ \
    libcppunit-dev \
    make \
    libxerces-c-dev
Because I am using a Dockerfile agent, simply adding the jenkins user to the sudoers file of the Jenkins server will not work.
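A common workaround, sketched here as an assumption rather than an answer from this thread, is to install sudo inside the agent image and grant passwordless sudo to a user whose UID matches the one Jenkins starts the container with (the dockerfile agent passes the host user's UID/GID; 1000 is assumed here):
# Appended to Dockerfile.jenkins (sketch; UID/GID 1000 are assumptions)
RUN apt-get -y update && apt-get -y install sudo \
    && groupadd -g 1000 jenkins \
    && useradd -m -u 1000 -g jenkins jenkins \
    && echo 'jenkins ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/jenkins \
    && chmod 0440 /etc/sudoers.d/jenkins
With that in place, the sh 'sudo dpkg -i -R dependent-packages/' step runs as the jenkins user inside the container and elevates via sudo there only.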

How to use git credentials from a Jenkins pipeline input in a Dockerfile?

I am trying to load a Jenkins pipeline script from SCM. I have to build a Docker image and push it to GCR. Inside the Docker image I need to install private git repositories. Here, I am trying to get the git username and password from a Jenkins input step, but I'm not sure how to use them in the Dockerfile to pull the git repo. These are my Jenkinsfile and Dockerfile in SCM. Any suggestions?
Jenkinsfile:
node {
    def app
    stage('Clone repository') {
        checkout scm
        def COMMITHASH = sh(returnStdout: true, script: "git log -n 1 --pretty=format:'%h'").trim()
        echo("Commit hash: " + COMMITHASH.substring(0, 7))
    }
    stage('Build image') {
        timeout(time: 600, unit: 'SECONDS') {
            gitUser = input(
                id: 'gitUser',
                message: 'Please enter git credentials :',
                parameters: [
                    [$class: 'TextParameterDefinition', defaultValue: "", description: 'Git user name', name: 'username'],
                    [$class: 'PasswordParameterDefinition', defaultValue: "", description: 'Git password', name: 'password']
                ])
        }
        /* Build docker image */
        println('Build image stage')
        app = docker.build("testBuild")
    }
    stage('Push image') {
        /* Push image to GCR */
        docker.withRegistry('https://us.gcr.io', 'gcr:***') {
            app.push("${env.BUILD_NUMBER}")
            app.push("latest")
        }
    }
}
Dockerfile:
# use a ubuntu 16.04 base image
FROM ubuntu:16.04
MAINTAINER "someuser@company.com"
# Set environment variables
ENV DEBIAN_FRONTEND noninteractive
ENV LC_ALL C.UTF-8
# Upgrade the system
RUN apt-get update && apt-get -y upgrade && apt-get install -y python-software-properties software-properties-common
# Install cert bot and apache
RUN apt-get install -y apache2
# Enable apache modules
RUN a2enmod ssl
RUN a2enmod headers
RUN a2enmod rewrite
# Create directory for web application
RUN mkdir -p /var/www/myApp
# Expose ssl port
EXPOSE 443
I want to clone my private Bitbucket repository into /var/www/myApp. Also, I want to avoid SSH authentication.
Do you have the requirement to always prompt for the credentials?
If not, you could store them in the Jenkins credentials store and retrieve them via the withCredentials step from the Jenkins Credentials Binding plugin. That way they are masked in the logs if you do the build within the closure.
withCredentials([usernamePassword(
    credentialsId: 'privateGitCredentials',
    usernameVariable: 'USERNAME',
    passwordVariable: 'PASSWORD'
)]) {
    sh "docker build --build-arg username=$USERNAME --build-arg password=$PASSWORD -t <your tag> ."
}
You should pass your git username and password as build arguments during docker build and then reference these variables inside the Dockerfile.
Example Dockerfile:
FROM test
ARG username
ARG password
RUN git clone https://${username}:${password}@github.com/private-repo-name.git
Build command:
docker build --build-arg username=$git_username --build-arg password=$git_password -t <your tag> .
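If you keep the question's scripted docker.build step instead of a shell call, the same build arguments can be passed as its second parameter (a sketch reusing the privateGitCredentials id from the answer above; note that Docker image names must be lowercase):
withCredentials([usernamePassword(
    credentialsId: 'privateGitCredentials',
    usernameVariable: 'USERNAME',
    passwordVariable: 'PASSWORD'
)]) {
    // the second argument is appended to "docker build"; the trailing "." is the build context
    app = docker.build("testbuild", "--build-arg username=$USERNAME --build-arg password=$PASSWORD .")
}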
