I am setting up a Jenkins pipeline (declarative script) using a Docker container agent built from a Dockerfile. I want one of the build stages to fetch dependent packages (Debian packages, from Artifactory, in my case) and then install them within the Docker container. Installing those packages (using dpkg, in my case) needs super-user permission, and thus sudo. How do I set up the pipeline and/or Dockerfile to enable that?
At present, my Jenkinsfile is somewhat like this:
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile.jenkins'
        }
    }
    stages {
        stage('Set up dependencies') {
            steps {
                sh 'rm -rf dependent-packages && mkdir dependent-packages'
                script { // Fetch packages from Artifactory
                    def packageserver = Artifactory.server 'deb-repo-srv'
                    def downloadSpec = ...
                    packageserver.download(downloadSpec)
                }
                sh 'sudo dpkg -i -R dependent-packages/'
            }
        }
        ...
    }
}
And my Dockerfile is like this:
# Set up the O/S environment
FROM debian:9
# Add the build and test tools
RUN apt-get -y update && apt-get -y install \
    cmake \
    doxygen \
    g++ \
    libcppunit-dev \
    make \
    libxerces-c-dev
Because I am using a Dockerfile agent, simply adding the jenkins user to the sudoers file of the Jenkins server will not work.
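One way around this is to bake sudo and a passwordless rule into the image itself. A minimal sketch, not a definitive answer: the rule names ALL users because Jenkins runs the container under the host user's UID, which typically has no /etc/passwd entry inside the container, and the command list is restricted to dpkg as a precaution:

```dockerfile
# Set up the O/S environment
FROM debian:9
# Add the build and test tools, plus sudo
RUN apt-get -y update && apt-get -y install \
    sudo \
    cmake \
    doxygen \
    g++ \
    libcppunit-dev \
    make \
    libxerces-c-dev
# Allow any user to run dpkg as root without a password; a drop-in file
# under /etc/sudoers.d keeps the main sudoers file untouched
RUN echo 'ALL ALL=(root) NOPASSWD: /usr/bin/dpkg' > /etc/sudoers.d/jenkins \
    && chmod 0440 /etc/sudoers.d/jenkins
```

If sudo then complains that the invoking user does not exist, the injected UID may additionally need a passwd entry created in the Dockerfile.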
Related
I am trying to run a SonarQube scanner container that is created from the Dockerfile below:
FROM node:15-buster
################
# Install java #
################
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive \
    apt-get -y install openjdk-11-jre-headless && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
############################
# Install SonarQube client #
############################
WORKDIR /root
RUN apt-get install -y curl grep sed unzip
RUN curl --insecure -o ./sonarscanner.zip -L https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.4.0.2170-linux.zip
RUN unzip -q sonarscanner.zip
RUN rm sonarscanner.zip
RUN mv sonar-scanner-4.4.0.2170-linux sonar-scanner
ENV SONAR_RUNNER_HOME=/root/sonar-scanner
ENV PATH $PATH:/root/sonar-scanner/bin
# Include Sonar configuration and project paths
COPY ./sonar/sonar-runner.properties ./sonar-scanner/conf/sonar-scanner.properties
# Ensure Sonar uses the provided Java for musl instead of a borked glibc one
RUN sed -i 's/use_embedded_jre=true/use_embedded_jre=false/g' /root/sonar-scanner/bin/sonar-scanner
My SonarQube link is not reachable. I have confirmed all the network checks, including its reachability from my Jenkins host, and that is fine; it is only from the SonarQube container that the link is unreachable:
ERROR: SonarQube server [https://sonar.***.com] can not be reached
Below is my Jenkinsfile stage for Sonarqube:
stage('SonarQube') {
    agent {
        dockerfile {
            filename 'sonar/Dockerfile'
            args '-u root:root'
        }
    }
    steps {
        withCredentials([string(credentialsId: 'trl-mtr-sonar-login', variable: 'SONAR_LOGIN')]) {
            script {
                sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=. || true'
            }
        }
    }
}
The 'withCredentials' step is used in the snippet above. I want to attach the container to a network just like the host.
While browsing I found the manual commands to do this, as well as the docker.image(...).inside() step, but I still cannot consolidate them into my SonarQube pipeline:
# Start a container attached to a specific network
docker run --network [network] [container]
# Attach a running container to a network
docker network connect [network] [container]
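In a declarative dockerfile agent, the place for extra docker run flags is the args parameter, so the --network option can be passed there. A sketch, assuming host networking is what resolves the reachability problem (host is a built-in network name that replaces the default bridge):

```groovy
agent {
    dockerfile {
        filename 'sonar/Dockerfile'
        // args are appended to the docker run command line,
        // so --network=host shares the host's network stack
        args '-u root:root --network=host'
    }
}
```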
I also created the stage below, but it also seems to be failing:
stage('SonarTests') {
    steps {
        docker.image('sonar/Dockerfile').inside('-v /var/run/docker.sock:/var/run/docker.sock --entrypoint="" --net bridge') {
            sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=. || true'
        }
    }
}
Could someone please assist here?
I am trying to run the following cleanup script but it keeps failing on specific Jenkins nodes:
def BUILDERS = [:].asSynchronized()
def NODE_NAMES = [
    'cleanuptooldean', // test
]
node('master') {
    stage('Prepare the Pipeline') {
        // get deploy pattern from params
        for (NODE_NAME in NODE_NAMES) {
            // Groovy closures stuff, need to copy it over
            def FINAL_NODE_NAME = NODE_NAME
            BUILDERS[FINAL_NODE_NAME] = {
                node(FINAL_NODE_NAME) {
                    timeout(time: 5, unit: "MINUTES") {
                        echo "Started Cleaning process of unused docker images from Jenkins Instance, Agent: " + env.NODE_NAME
                        sh "docker system prune -a --volumes -f"
                        echo "Cleaning up space from unused packages (orphaned dependencies), remove old kernels in Ubuntu, Agent: " + env.NODE_NAME
                        sh "sudo apt-get -y autoremove --purge"
                        echo "clean the apt cache on Ubuntu " + env.NODE_NAME
                        sh "sudo apt-get -y clean"
                        echo "Finished Cleaning process of unused docker images from Jenkins Instance, Agent: " + env.NODE_NAME
                    }
                }
            }
        }
    }
}
The error I get if I put "sudo" at the beginning of "apt-get -y autoremove --purge" and "apt-get -y clean" is: "sudo: no tty present and no askpass program specified". Needless to say, I have edited the sudoers file and, to test it, added "jenkins ALL=(ALL) NOPASSWD: ALL" at the end of the file.
If I remove "sudo", the error I get is: "dial unix /var/run/docker.sock: connect: permission denied", which I tried to resolve by adding the "jenkins" user to the "docker" group.
** I must say that when I run the commands locally as the "jenkins" user, they work both with and without "sudo", but when I run them remotely from the Jenkins pipeline they fail.
*** This specific script works perfectly on other nodes.
Thanks in advance.
It turned out that each node used a different user, so I had to add all of those users to the docker group and to the sudoers file (via visudo).
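That per-node fix can be scripted as the following one-off commands, run as root on each agent. This is a sketch: jenkins-agent is a hypothetical user name, so substitute whatever user each node's agent actually runs as:

```shell
# Let the agent user talk to the Docker daemon socket
usermod -aG docker jenkins-agent
# Grant passwordless sudo via a drop-in file instead of editing sudoers directly
echo 'jenkins-agent ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/jenkins-agent
chmod 0440 /etc/sudoers.d/jenkins-agent
```

Note that the new group membership only takes effect once the agent process is restarted.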
I have the following Jenkinsfile:
pipeline {
    agent {
        dockerfile {
            args "-u root -v /var/run/docker.sock:/var/run/docker.sock"
        }
    }
    environment {
        ESXI_CREDS = credentials('ESXI_CREDS')
        PACKER_LOG = 1
    }
    stages {
        stage('Build Base image') {
            steps {
                sh "ansible-galaxy install -r ./requirements.yml"
            }
        }
    }
}
requirements.yml:
- src: ssh://tfsserver/_git/ansible-sshd
  scm: git
  name: ansible-sshd
Which uses the following Dockerfile
FROM hashicorp/packer:full
RUN apk --no-cache add git openssh-client rsync jq py2-pip py-boto py2-six py2-cryptography py2-bcrypt py2-asn1crypto py2-jsonschema py2-pynacl py2-asn1 py2-markupsafe py2-paramiko py2-dateutil py2-docutils py2-futures py2-rsa py2-libxml2 libxml2 libxslt && \
apk --no-cache add gcc python2-dev musl-dev linux-headers libxml2-dev libxslt-dev && \
pip install ansible jsonmerge awscli boto3 hvac ansible-modules-hashivault molecule python-gilt python-jenkins lxml openshift docker docker-compose mitogen yamale ansible-lint && \
apk del gcc python2-dev musl-dev linux-headers libxml2-dev libxslt-dev
USER root
ENTRYPOINT []
When running the Jenkinsfile build above, it appears to get stuck on authentication with our TFS server, and I get the following error:
+ ansible-galaxy install -r ./requirements.yml
[WARNING]: - ansible-sshd was NOT installed successfully: - command
/usr/bin/git clone
ssh://tfsserver/_git/ansible-sshdtmp5VN20Z (rc=128)
ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.
I am using Git with TFS, and I don't know how I can authenticate the agent with the Git repo. I also don't really want to store the private key on the build agent and volume-map it into the Docker container (I'm not even sure that would work). I have even tried dynamically adding the private key to the container during the build, but it does not appear to work:
withCredentials([sshUserPrivateKey(credentialsId: 'tfs', keyFileVariable: 'keyfile')]) {
    sh "mkdir -p ~/.ssh && cp ${keyfile} ~/.ssh/id_rsa"
    sh "ansible-galaxy install -r ./requirements.yml"
}
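For reference, a variant of the same attempt that also handles two things ssh silently requires — key file permissions and a known_hosts entry for the server. This is a sketch under those assumptions, not something verified against TFS:

```groovy
withCredentials([sshUserPrivateKey(credentialsId: 'tfs', keyFileVariable: 'keyfile')]) {
    // ssh ignores private keys that are readable by others, hence chmod 600;
    // ssh-keyscan pre-accepts the host key so the clone stays non-interactive
    sh 'mkdir -p ~/.ssh && cp "$keyfile" ~/.ssh/id_rsa && chmod 600 ~/.ssh/id_rsa'
    sh 'ssh-keyscan tfsserver >> ~/.ssh/known_hosts'
    sh 'ansible-galaxy install -r ./requirements.yml'
}
```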
I had the same problem but ended up solving it using sed.
withCredentials([usernamePassword(credentialsId: 'GIT_AUTHENTICATION', passwordVariable: 'password', usernameVariable: 'username')]) {
    sh "sed -i 's/${git_url}/${username}:${password}@${git_url}/g' roles/requirements.yml"
    sh "ansible-galaxy install -c -r roles/requirements.yml -p roles/"
    sh "ansible-playbook site.yml -i ${inventory}"
}
Most remote repositories allow URL authentication or OAuth token URLs; both work the same way:
{protocol}://{username}:{password}@{git_url}/{repo}
example:
https://username:password#github.com/username/repository.git
If your password has special characters, percent-encode it (e.g. with https://www.urlencoder.org/), and remember to use this only inside withCredentials, so that the sensitive data is masked.
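The encoding can also be done inside the pipeline itself, so the raw password never reaches the URL by hand. A sketch using the JDK's URLEncoder (it performs form-encoding, which turns spaces into +, hence the extra replace; a sandboxed pipeline may need script approval for the call):

```groovy
// Percent-encode the credential before splicing it into the URL
def encoded = java.net.URLEncoder.encode(password, 'UTF-8').replace('+', '%20')
sh "sed -i 's|${git_url}|${username}:${encoded}@${git_url}|g' roles/requirements.yml"
```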
I'm trying to run the gcloud command in a Jenkins declarative pipeline just like in the following example:
pipeline {
    agent any
    stages {
        stage('Run gcloud version') {
            steps {
                sh 'gcloud --version'
            }
        }
    }
}
I downloaded the "GCloud SDK Plugin" and configured it under "Global Tool Configuration" in Jenkins, but when I try to build the pipeline using the above Jenkinsfile, I get a 'gcloud: not found' error.
I was able to run the command using the following Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Run gcloud') {
            steps {
                withEnv(['GCLOUD_PATH=/var/jenkins_home/google-cloud-sdk/bin']) {
                    sh '$GCLOUD_PATH/gcloud --version'
                }
            }
        }
    }
}
Note: I'm running Jenkins in Kubernetes, so first I had to install the gcloud SDK in the Jenkins pod.
I am running Jenkins 2.176.2 in containers, and the GCloud plugin was not able to install the SDK in the agent containers.
I used the Dockerfile to install it when deploying the agents:
RUN echo "deb http://packages.cloud.google.com/apt cloud-sdk-stretch main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list \
    && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
    && apt-get update -y && apt-get install google-cloud-sdk -y
# The apt package places gcloud on the default PATH (/usr/bin),
# so no PATH adjustment is needed
How can I create a Docker image of the artifacts using Docker instructions? I am using "Build inside a Docker container" in my Jenkins job.
This is the instruction in the Dockerfile:
# Install OpenJDK 8
RUN apt-get update && \
    apt-get install -y openjdk-8-jdk && \
    apt-get install -y ant && \
    apt-get clean
Then I need control to return to the Jenkins job to perform the build process.
So in the build process of the Jenkins job, "Execute Shell" commands are executed and the artifacts are created.
It has some post-build actions to run the JUnit test cases and the coverage report.
In the end, I need the Dockerfile to run an instruction that creates an image of the artifacts:
ADD sourcefile destinationfile
Please suggest how to write the Docker instructions so that control passes to the Jenkins job and returns after the build process is done.
You can use the Docker Pipeline plugin to do this (see the docker object):
node("docker") {
    docker.withRegistry('<<your-docker-registry>>', '<<your-docker-registry-credentials-id>>') {
        git url: "<<your-git-repo-url>>", credentialsId: '<<your-git-credentials-id>>'
        sh "git rev-parse HEAD > .git/commit-id"
        def commit_id = readFile('.git/commit-id').trim()
        println commit_id
        def app
        stage("build") {
            app = docker.build "your-project-name"
        }
        stage("publish") {
            app.push 'master'
            app.push "${commit_id}"
        }
    }
}