Pass Jenkins credentials to Docker build for Composer usage

I've got Composer packages in our company's private repository on Bitbucket. To access them I need to use credentials stored in Jenkins. The whole build is currently based on a Declarative Pipeline and a Dockerfile. To let Composer authenticate, I need those credentials available in the build stage so I can pass them to the Dockerfile.
How can I achieve this?
I've tried:
// Jenkinsfile
agent {
    dockerfile {
        label 'mylabel'
        filename '.docker/php/Dockerfile'
        args '-v /net/jenkins-ex-work/workspace:/net/jenkins-ex-work/workspace'
        additionalBuildArgs '--build-arg jenkins_usr=${JENKINS_CREDENTIALS_USR} --build-arg jenkins_credentials=${JENKINS_CREDENTIALS} --build-arg test_arg=test'
    }
}
// Dockerfile
ARG jenkins_usr
ARG jenkins_credentials
ARG test_arg
But the args are empty.

TL;DR
Use Jenkins' withCredentials([sshUserPrivateKey()]) and echo the private key into id_rsa in the container.
EDITED: Removed the "run as root" step, as I think it caused issues. Instead, a jenkins user is created inside the Docker container with the same UID as the jenkins user that builds the container (no idea if that matters, but we need a user with a home dir so we can create ~/.ssh/id_rsa).
For those that suffered like me... my solution is below. It is NOT ideal, as:
it risks exposing your private key in the build logs if you are not careful (the below is careful, but it's easy to forget). (Although with that in mind, it appears extracting Jenkins credentials is extremely easy for anyone with naughty intentions?)
So use with caution...
In my (legacy) git project, a simple php app with internal git based composer dependencies, I have
Dockerfile.build
FROM php:7.4-alpine
# install git, openssh, composer... whatever u need here, then:
# create a jenkins user inside the docker image
ARG UID=1001
RUN adduser -D -g jenkins -s /bin/sh -u $UID jenkins \
    && mkdir -p /home/jenkins/.ssh \
    && touch /home/jenkins/.ssh/id_rsa \
    && chmod 600 /home/jenkins/.ssh/id_rsa \
    && chown -R jenkins:jenkins /home/jenkins/.ssh
USER jenkins
# I think only ONE of the below is needed, not sure.
RUN echo "Host bitbucket.org\n\tStrictHostKeyChecking no\n" >> /home/jenkins/.ssh/config \
    && ssh-keyscan bitbucket.org >> /home/jenkins/.ssh/known_hosts
Then in my Jenkinsfile:
def sshKey = ''
pipeline {
    agent any
    environment {
        userId = sh(script: "id -u ${USER}", returnStdout: true).trim()
    }
    stages {
        stage('Prep') {
            steps {
                script {
                    withCredentials([
                        sshUserPrivateKey(
                            credentialsId: 'bitbucket-key',
                            keyFileVariable: 'keyFile',
                            passphraseVariable: 'passphrase',
                            usernameVariable: 'username'
                        )
                    ]) {
                        sshKey = readFile(keyFile).trim()
                    }
                }
            }
        }
        stage('Build') {
            agent {
                dockerfile {
                    filename 'Dockerfile.build'
                    additionalBuildArgs "--build-arg UID=${userId}"
                }
            }
            steps {
                // Turn off command trace for the next line, as we don't want to log the ssh key
                sh '#!/bin/sh -e\n' + "echo '${sshKey}' > /home/jenkins/.ssh/id_rsa"
                // .. proceed with whatever else, like composer install, etc
            }
        }
    }
}
To be fair, I think some of the RUN commands in the docker container aren't even necessary, or could be run from the Jenkinsfile? ¯\_(ツ)_/¯

There was a similar issue, supposedly fixed in PR 327, with pipeline-model-definition-1.3.9.
So start by checking the version of your plugin.
But heed also the Dockerfile warning:
It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc.
Build-time variable values are visible to any user of the image with the docker history command.
Using BuildKit with --secret is a better approach for that.
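For illustration, a minimal sketch of that BuildKit route, assuming BuildKit is available on the agent; the credential id, secret id, and target path here are hypothetical, and the image still needs git/openssh/composer plus known_hosts handling as in the answer above:
# syntax=docker/dockerfile:1
FROM php:7.4-alpine
# the key is mounted only for this RUN step and is never stored in an image layer
RUN --mount=type=secret,id=bitbucket_key,target=/root/.ssh/id_rsa \
    composer install --no-interaction
Built from the pipeline with something like:
withCredentials([sshUserPrivateKey(credentialsId: 'bitbucket-key', keyFileVariable: 'KEY_FILE')]) {
    sh 'DOCKER_BUILDKIT=1 docker build --secret id=bitbucket_key,src=$KEY_FILE .'
}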

Related

Cloning Meson git subproject repositories from Docker?

My project is hosted on GitHub and uses the Meson build system. The project heavily uses Meson's subproject feature: there is one top project repo which uses several "child" repositories. Child repos are cloned from GitHub by Meson at the "setup" stage (see below).
I tried to build the project using Jenkins and Docker, but failed. The problem is GitHub access from the Docker container.
Here is the Jenkins pipeline:
pipeline
{
    agent { label 'ag1' }
    stages
    {
        stage('testrun')
        {
            agent
            {
                dockerfile
                {
                    label "ag2"
                }
            }
            steps
            {
                sh "meson setup builddir"
                sh "meson compile -C builddir"
            }
        }
    }
}
The Jenkins test job works up to the point where Meson tries to fetch subproject repositories from GitHub (meson setup builddir). The error is: ERROR: Git command failed.
How would I go about this problem? How could I allow Jenkins to access GitHub from the Docker container?
Here is a solution.
Add this to the Jenkinsfile:
dockerfile
{
    args ('-v /home/jenkins/.ssh:/home/jenkins/.ssh')
    additionalBuildArgs ('--build-arg UID=$(id -u) --build-arg GID=$(id -g)')
}
The host machine needs to have an SSH key installed for accessing GitHub, and this key needs to be shared with the Docker container. That's what the args line does, using the -v (volume) option.
The additionalBuildArgs line passes the host's user and group ids so a matching jenkins user can be created inside the image, which also requires changes in the Dockerfile:
# Create a Jenkins user
ARG UNAME=jenkins
# Get group and user id from outside "additionalBuildArgs"
ARG GID=
ARG UID=
# Add Jenkins user with proper host group id and user id
RUN groupadd --gid $GID $UNAME
RUN useradd --gid $GID --uid $UID --home-dir /home/$UNAME --create-home $UNAME
Now the jenkins user can access GitHub repositories and meson commands can be used.
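Before wiring this into meson, a quick smoke test in a pipeline step can confirm the shared key actually works from inside the container (ssh -T against GitHub exits non-zero even on success, hence the || true; the greeting in the log is what matters):
sh 'ssh -T git@github.com || true'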

How to correctly pass ssh key file from Jenkins credentials variable into docker build command?

This question is a follow-up to this question:
How to pass jenkins credentials into docker build command?
I am getting the ssh key file from the jenkins credential store in my groovy pipeline and
passing it into the docker build command via --build-arg, so that I can check out and build artifacts from the private git repos from within my docker container.
credentials store id: cicd-user, which works for checking out my private repos as expected from my groovy Jenkinsfile
checkout([$class: 'GitSCM',
    userRemoteConfigs: [[credentialsId: 'cicd-user', url: 'ssh://git@bitbucket.myorg.co:7999/A/software.git']]
])
I access it and try to pass the same to docker build command:
withCredentials([sshUserPrivateKey(credentialsId: 'cicd-user', keyFileVariable: 'FILE')]) {
    sh "cd ${WORKSPACE} && docker build -t ${some-name} --build-arg USERNAME=cicd-user --build-arg PRIV_KEY_FILE=\$FILE --network=host -f software/tools/jenkins/${some-name}/Dockerfile ."
}
In the Dockerfile I do:
RUN echo "$PRIV_KEY_FILE" > /home/"$USERNAME"/.ssh/id_rsa && \
    chmod 700 /home/"$USERNAME"/.ssh/id_rsa
RUN echo "Host bitbucket.myorg.co\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
But I am seeing the following issue:
Load key "/home/cicd-user/.ssh/id_rsa": invalid format
git@bitbucket.myorg.co: Permission denied (publickey)
fatal: could not read from remote repository
In the past I have passed the ssh priv key as --build-arg from outside by cat'ing like below
--build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)"
Should I do something similar
--build-arg PRIV_KEY_FILE="$(cat $FILE)"
Any idea on what might be going wrong, or where I should be looking to debug this correctly?
I ran into the same issue yesterday and I think I've come up with a workable solution.
Here are the basic steps I took, using the sshagent plugin to manage the ssh-agent within the Jenkins job. You could probably use withCredentials as well, though that's not what I ended up finding success with.
The ssh-agent (or alternatively the key) can be made available to specific build steps using the docker build command's --ssh flag. (Feature reference) It's important to note that for this to work (at the current time) you need to set DOCKER_BUILDKIT=1. If you forget to do this, then it seems like it ignores this configuration and the ssh connection will fail. Once that's set, the ssh-agent can be forwarded into the build.
A cut-down look at the pipeline:
pipeline {
    agent {
        // ...
    }
    environment {
        // Necessary to enable Docker buildkit features such as --ssh
        DOCKER_BUILDKIT = "1"
    }
    stages {
        // other stages
        stage('Docker Build') {
            steps {
                // Start ssh agent and add the private key(s) that will be needed in docker build
                sshagent(['credentials-id-of-private-key']) {
                    // Make the default ssh agent (the one configured above) accessible in the build
                    sh 'docker build --ssh default .'
                }
            }
        }
        // other stages
    }
}
In the Dockerfile it's necessary to explicitly give the lines that need it access to the ssh agent. This can be done by including --mount=type=ssh in the relevant RUN command.
For me, this looked roughly like this:
FROM node:14
# Retrieve bitbucket host key
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts
...
# Mount ssh agent for install
RUN --mount=type=ssh npm i
...
With this configuration, the npm install was able to install a private git repo stored on Bitbucket by utilizing the SSH private key within docker build via sshagent.
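If you want to sanity-check the same build outside Jenkins, the equivalent manual invocation looks roughly like this (the key path is hypothetical; this assumes a BuildKit-capable docker locally):
export DOCKER_BUILDKIT=1
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa    # hypothetical path to a key that can read the private repo
docker build --ssh default .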
After spending a week on this, I found a somewhat reasonable way to do it.
Just add
RUN git config --global url."https://${GIT_ACCESS_TOKEN}@github.com".insteadOf "ssh://git@github.com"
to your Dockerfile, and it will be able to install private packages as well.
Then pass your GIT_ACCESS_TOKEN (you can create one in your GitHub account settings with the proper permissions) where you are building your image, like:
docker build --build-arg GIT_ACCESS_TOKEN=yourtoken -t imageNameAndTag .
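Note that the Dockerfile also has to declare the build arg before the git config line can use it; a minimal sketch (the base image is just a placeholder):
FROM node:14
# declare the token as a build-time arg so --build-arg can set it
ARG GIT_ACCESS_TOKEN
RUN git config --global url."https://${GIT_ACCESS_TOKEN}@github.com".insteadOf "ssh://git@github.com"
# package installs can now fetch private GitHub repos over HTTPS
Keep in mind the docker history caveat quoted earlier applies to tokens passed as build args, too.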

Best solution to deploy (copy) the latest version to the server using a Jenkins Pipeline

Here is my Jenkins Pipeline:
pipeline {
    agent {
        docker {
            image 'node:6-alpine'
            args '-p 3000:3000'
        }
    }
    environment {
        CI = 'true'
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
                sh 'npm build'
            }
        }
        stage('Deliver') {
            steps {
                sh './jenkins/scripts/deliver.sh'
                input message: 'Finished using the web site? (Click "Proceed" to continue)'
                sh './jenkins/scripts/kill.sh'
            }
        }
        stage('Deploy') {
            steps {
                sh './jenkins/scripts/deploy.sh'
            }
        }
    }
}
I use Docker and the jenkinsci/blueocean image to run Jenkins. The first two stages are fairly standard for building a NodeJS app; the third one, however, is where I want Jenkins to copy the new files to the server. Here is the deploy.sh file:
#!/usr/bin/env sh
set -x
scp -o StrictHostKeyChecking=no -r dist/* deviceappstore:/var/www/my_website/static/
There are two problems: first, jenkinsci/blueocean does not have scp installed, and second, ~/.ssh/config does not exist inside the Jenkins Docker image, so scp will fail to authenticate. My solution was to build a custom image extending jenkinsci/blueocean, set up scp, and copy the config file and SSH key into it.
There are some plugins like Publish Over SSH, but they don't seem useful for Pipeline projects.
Is there any better solution? Is the whole scenario right, or am I doing something wrong? I'm looking for the most secure and standard solution to this problem.
OK, I think I found a good solution.
Thanks to SSH Agent plugin I can easily pass the credentials to the SCP command and copy the files to the server. Something like this:
...
stage('Deploy') {
    steps {
        sshagent(['my SSH']) {
            echo 'this works...'
            sh 'scp -o StrictHostKeyChecking=no -r dist/* my_server:/var/www/my_site/static/'
        }
    }
}
...
This is perfect because all the credentials are inside of Jenkins server and there's nothing about it in the repo.
And to be able to use this, there's just one catch: you need to use apk inside the jenkinsci/blueocean (alpine) image and set up openssh:
apk add openssh
Or, better, create a new Dockerfile and build your own version.
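A minimal sketch of such a Dockerfile (the package name is the alpine one; pin the base tag as you prefer):
FROM jenkinsci/blueocean
USER root
# openssh provides the ssh/scp clients missing from the base image
RUN apk add --no-cache openssh
USER jenkins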

run jenkins pipeline agent with sudo

I have a Jenkins server running in a Docker container, with access to Docker on the host system; so far it is working well. Now I want to set up a pipeline that tests a script inside a Docker container.
Jenkinsfile:
pipeline {
    agent { docker 'nginx:1.11' }
    stages {
        stage('build') {
            steps {
                sh 'nginx -t'
            }
        }
    }
}
Error Message:
> + docker pull nginx:1.11
>
> Warning: failed to get default registry endpoint from daemon (Got
> permission denied while trying to connect to the Docker daemon socket
> at unix:///var/run/docker.sock: Get
> http://%2Fvar%2Frun%2Fdocker.sock/v1.29/info: dial unix
> /var/run/docker.sock: connect: permission denied). Using system
> default: https://index.docker.io/v1/
>
> Got permission denied while trying to connect to the Docker daemon
> socket at unix:///var/run/docker.sock: Post
> http://%2Fvar%2Frun%2Fdocker.sock/v1.29/images/create?fromImage=nginx&tag=1.11:
> dial unix /var/run/docker.sock: connect: permission denied
>
> script returned exit code 1
My problem is that jenkins needs to run the docker command with sudo, but how do I tell the agent to run the command with sudo?
I have faced the same issue. After analysing the console log, I found that the reason is that the Docker Jenkins Plugin starts a new container with a specific option -u 107:112:
...
docker run -t -d -u 107:112 ...
...
After trying many options, such as adding jenkins to the sudo group (it did not work because the jenkins user does not exist in the container) and adding USER root to the Dockerfile, none of them did the trick.
Finally I found a solution: using args in the docker agent to overwrite the -u option. This is my Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'ubuntu'
            args '-u root:sudo -v $HOME/workspace/myproject:/myproject'
        }
    }
    stages {
        stage("setup_env") {
            steps {
                sh 'apt-get update -y'
                sh 'apt-get install -y git build-essential gcc cmake make'
            }
        }
        stage("install_dependencies") {
            steps {
                sh 'apt-get install -y libxml2-dev'
            }
        }
        stage("compile_dpi") {
            steps {
                sh 'cd /myproject && make clean && make -j4'
            }
        }
        stage("install_dpi") {
            steps {
                sh 'cd /myproject && make install'
            }
        }
        stage("test") {
            steps {
                sh 'do some test here'
            }
        }
    }
    post {
        success {
            echo 'Do something when it is successful'
            bitbucketStatusNotify(buildState: 'SUCCESSFUL')
        }
        failure {
            echo 'Do something when it is failed'
            bitbucketStatusNotify(buildState: 'FAILED')
        }
    }
}
There may be a security issue here, but it is not a problem in my case.
I'd solve the problem differently, matching the jenkins group id inside the container to the group id of the docker socket you've mounted as a volume. I do this with an entrypoint that runs as root, looks up the gid of the socket, and if that doesn't match the gid inside the current container, does a groupmod to correct it. Then I drop privileges to the jenkins user to launch Jenkins. This entrypoint runs on every startup, but fairly transparently to the Jenkins app that is launched.
All the steps to perform this are included in this github repo: https://github.com/sudo-bmitch/jenkins-docker/
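A minimal sketch of that entrypoint idea (the real, more robust script lives in the repo above; su-exec for dropping privileges is an assumption here):
#!/bin/sh
# entrypoint.sh - runs as root, aligns the in-container docker group with the mounted socket
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
    SOCK_GID=$(stat -c '%g' "$SOCK")
    CUR_GID=$(getent group docker | cut -d: -f3)
    # correct the in-container docker group if it doesn't match the socket's gid
    if [ -n "$SOCK_GID" ] && [ "$SOCK_GID" != "$CUR_GID" ]; then
        groupmod -g "$SOCK_GID" docker
    fi
fi
# drop privileges and hand off to Jenkins
exec su-exec jenkins "$@"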
You can work around that by:
1- In your Dockerfile add jenkins to the sudoers file:
RUN echo "jenkins ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
2- Add an extra step in your Jenkinsfile to give jenkins the right permissions to use docker:
pipeline {
    agent none
    stages {
        stage("Fix the permission issue") {
            agent any
            steps {
                sh "sudo chown root:jenkins /run/docker.sock"
            }
        }
        stage('Step 1') {
            agent {
                docker {
                    image 'nezarfadle/tools'
                    reuseNode true
                }
            }
            steps {
                sh "ls /"
            }
        }
    }
}
As others have suggested, the issue is that jenkins does not have permission to run docker containers. Let's go over the ways you could launch jenkins first, and then see what could be done in each of them.
1. running jenkins manually
Surely you could download & run jenkins with java as suggested here. In this method, you could do several things to allow your jenkins user to use docker:
a. give the jenkins user root access:
I do not suggest this way; after all, you are giving your pipelines access to everything! So you probably do not want this to happen.
b. add jenkins user to docker group
As explained here, you can manage docker as a non-root user: just add your user to the docker group and that's all (a sketch follows this list of options). I recommend it if you know who is going to use docker (because, well, you are giving them root access via docker in a way).
c. make docker rootless
This is a new feature docker added to its arsenal recently. You can read in detail what it implies here. To tell you the truth, I am not a fan of this feature! The reason is that I could not find a way to make it work for a user in a container (as you need to stop the docker service to make it happen), and I also had some difficulties configuring DNS when using rootless mode. But it should be fine if you are not in a container.
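A sketch of option (b), assuming a systemd host and a service account named jenkins:
# add the jenkins user to the docker group and restart so it picks up the membership
sudo usermod -aG docker jenkins
sudo systemctl restart jenkins
# verify: this should now work without sudo
sudo -u jenkins docker ps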
2. running jenkins in docker
This method is actually more troublesome! I struggled with the ways I could use docker in a jenkins container, but in the end got the results needed, so it was worth the effort.
To run docker in jenkins (which is itself a docker container) you have three ways:
1. use dind (docker in docker)
It is pretty straightforward: you run a dind image & connect docker in the jenkins container to it; without any special permission handling you can use docker at will (see the sketch after this list).
2. use dood (docker outside of docker)
Mount the docker socket as a volume in the docker run command for your jenkins container; note that you then need one of the two ways I explained above (in running jenkins manually) to be able to use docker. It can be a bit tricky, but possible (also sketched below).
3. run the agent as a docker container in a different environment & connect the remote agent in jenkins
Lastly, it is possible to run the agent separately & connect the remote agent in jenkins. Although this does not exactly answer your question, it is a way you could use.
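Minimal sketches of options 1 and 2, assuming the official docker:dind and jenkins/jenkins:lts images (the names, network, and TLS-off shortcut are illustrative only, and the jenkins image still needs a docker CLI installed):
# 1. dind: a separate docker daemon in its own container; jenkins talks to it over TCP
docker network create jenkins
docker run -d --name dind --privileged --network jenkins --network-alias docker \
    -e DOCKER_TLS_CERTDIR="" docker:dind
docker run -d --name jenkins --network jenkins \
    -e DOCKER_HOST=tcp://docker:2375 jenkins/jenkins:lts
# 2. dood: reuse the host daemon by mounting its socket into the jenkins container
docker run -d --name jenkins \
    -v /var/run/docker.sock:/var/run/docker.sock jenkins/jenkins:lts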
These are the ways to just get docker running in jenkins; you will probably still have some issues after you run a docker container as an agent, like permission problems in the agent container itself, which are most likely because of the agent's user (if you like, you can check that user with the command
docker exec -it [agent container id] whoami
e.g. in this sample the user in the agent is node:
agent {
    docker { image 'node:14-alpine' }
}
steps {
    sh 'npm i -g random'
}
so it would throw an error, because the node user does not have permission to install npm modules globally (I know, it is weird!)
So, as luongnv89 mentioned, you could change the user running the docker container like this:
agent {
    docker {
        image 'node:14-alpine'
        args '-u root'
    }
}
Hope this was helpful in understanding the whole picture. 😊
What worked for me was:
node() {
    String jenkinsUserId = sh(returnStdout: true, script: 'id -u jenkins').trim()
    String dockerGroupId = sh(returnStdout: true, script: 'getent group docker | cut -d: -f3').trim()
    String containerUserMapping = "-u $jenkinsUserId:$dockerGroupId "
    docker.image('image')
        .inside(containerUserMapping + ' -v /var/run/docker.sock:/var/run/docker.sock:ro') {
            sh "..."
        }
}
This way the user in the container still uses the jenkins user id + group id, avoiding permission conflicts with shared data, but is also a member of the docker group inside the container, which is required to access the docker socket (/var/run/docker.sock).
I prefer this solution, as it doesn't require any additional scripts or dockerfiles.
I just had the same exact issue. You need to add the jenkins user to the docker group:
DOCKER_SOCKET=/var/run/docker.sock
DOCKER_GROUP=docker
JENKINS_USER=jenkins
if [ -S ${DOCKER_SOCKET} ]; then
    DOCKER_GID=$(stat -c '%g' ${DOCKER_SOCKET})
    sudo groupadd -for -g ${DOCKER_GID} ${DOCKER_GROUP}
    sudo usermod -aG ${DOCKER_GROUP} ${JENKINS_USER}
fi
# Start Jenkins service
sudo service jenkins restart
After you run the above, pipelines can successfully start docker.
I might have found a reasonably good solution for this.
Setup
I run Jenkins as a container and use it to build containers on the dockerhost it's running on. To do this, I pass /var/run/docker.sock as a volume to the container.
Just to reiterate the disclaimer some other people already stated: Giving access to the docker socket is essentially like giving root access to the machine - be careful!
I assume that you've already installed docker into your Jenkins Image.
Solution
This is based on the fact that the docker binary is not in the first directory of $PATH. We basically place a shell script that runs sudo docker instead of the plain docker command (and passes the parameters along).
Add a file like this to your jenkins repository and call it docker_sudo_overwrite.sh:
#! /bin/sh
# This is basically a workaround to add sudo to the docker command, because aliases don't seem to work.
# To be honest, this is a horrible workaround that depends on the order in $PATH.
# This file needs to be placed in /usr/local/bin with execute permissions.
sudo /usr/bin/docker "$@"
Then extend your Jenkins Dockerfile like this:
# Now we need to allow jenkins to run docker commands! (This is not elegant, but at least it's semi-portable...)
USER root
## allowing jenkins user to run docker without specifying a password
RUN echo "jenkins ALL=(ALL) NOPASSWD: /usr/bin/docker" >> /etc/sudoers
# Create our alias file that allows us to use docker as sudo without writing sudo
COPY docker_sudo_overwrite.sh /usr/local/bin/docker
RUN chmod +x /usr/local/bin/docker
# switch back to the jenkins-user
USER jenkins
This gives the jenkins service user the ability to run the docker binary as root with sudo (without providing a password). Then we copy our script to /usr/local/bin/docker which "overlays" the actual binary and runs it with sudo. If it helps, you can look at my example on Github.
Same issue here.
[...]
agent { docker 'whatever_I_try_doesnt_work' } // sudo, jenkins user in dockerroot group, etc.
[...]
So my workaround is to add it as one of the steps in the build stage of the pipeline, as follows:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'sudo docker pull python:3.5.1'
            }
        }
    }
}

Docker Plugin for Jenkins Pipeline - No user exists for uid 1005

I'm trying to execute an SSH command from inside a Docker container in a Jenkins pipeline. I'm using the CloudBees Docker Pipeline Plugin to spin up the container and execute commands, and the SSH Agent Plugin to manage my SSH keys. Here's a basic version of my Jenkinsfile:
node {
    step([$class: 'WsCleanup'])
    docker.image('node').inside {
        stage('SSH') {
            sshagent (credentials: [ 'MY_KEY_UUID' ]) {
                sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a"
            }
        }
    }
}
When the SSH command runs, I get this error:
+ ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a
No user exists for uid 1005
I combed through the logs and realized the Docker Pipeline Plugin is automatically telling the container to run with the same user that is logged in on the host by passing a UID as a command line argument:
$ docker run -t -d -u 1005:1005 [...]
I decided to check what users existed in the host and the container by running cat /etc/passwd in each environment. Sure enough, the list of users was different in each. 1005 was the jenkins user on the host machine, but that UID didn't exist in the container. To solve the issue, I mounted /etc/passwd from the host to the container when spinning it up:
node {
    step([$class: 'WsCleanup'])
    docker.image('node').inside('-v /etc/passwd:/etc/passwd') {
        stage('SSH') {
            sshagent (credentials: [ 'MY_KEY_UUID' ]) {
                sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a"
            }
        }
    }
}
The solution provided by @nathan-thompson is awesome, but in my case I was unable to find the user even in the /etc/passwd of the host machine! It means mounting the passwd file did not fix the problem. This question https://superuser.com/questions/580148/users-not-found-in-etc-passwd suggested some users are logged into the host using an identity provider like LDAP.
The solution was finding a way to add the proper line to the passwd file on the container. Calling getent passwd $USER on the host will provide the passwd line for the Jenkins user running the container.
I added a step running on the node (and not the docker agent) to get the line and save it in a file. Then in the next step I mounted the generated passwd to the container:
stages {
    stage('Create passwd') {
        steps {
            sh """echo \$(getent passwd \$USER) > /tmp/tmp_passwd
            """
        }
    }
    stage('Test') {
        agent {
            docker {
                image '*******'
                args '***** -v /tmp/tmp_passwd:/etc/passwd'
                reuseNode true
                registryUrl '*****'
                registryCredentialsId '*****'
            }
        }
        steps {
            sh """ssh -i ********
            """
        }
    }
}
I just found another solution to this problem that I want to share. It differs from the existing solutions in that it allows running the complete pipeline in one agent, instead of per stage.
The trick is to, instead of directly using an image, refer to a Dockerfile (which may be built FROM the original) and then add the user:
# Dockerfile
FROM node
ARG jenkinsUserId=
RUN if ! id ${jenkinsUserId}; then \
        usermod -u ${jenkinsUserId} jenkins; \
        groupmod -g ${jenkinsUserId} jenkins; \
    fi
// Jenkinsfile
pipeline {
    agent {
        dockerfile {
            additionalBuildArgs "--build-arg jenkinsUserId=\$(id -u jenkins)"
        }
    }
}
agent {
    docker {
        image 'node:14.10.1-buster-slim'
        args '-u root:root'
    }
}
environment {
    SSH_deploy = credentials('e99988ea-6bdc-45fc-b9e1-536b875bcac7')
}
stage('build') {
    steps {
        sh '''#!/bin/bash
            eval $(ssh-agent -s)
            cat $SSH_deploy | tr -d '\r' | ssh-add -
            touch .env
            echo 'REACT_APP_BASE_API = "//172.22.132.115:8080"' >> .env
            echo 'REACT_APP_ADMIN_PANEL_URL = "//172.22.132.115"' >> .env
            yarn install
            CI=false npm run build
            ssh -t -o StrictHostKeyChecking=no root@172.22.132.115 'rm -rf /usr/local/src/build'
            scp -r -o StrictHostKeyChecking=no build root@172.22.132.115:/usr/local/src/
            ssh -t -o StrictHostKeyChecking=no root@172.22.132.115 'systemctl restart nginx'
        '''
    }
}
Starting from the solution provided by Nathan Thompson, I modified it this way for a Jenkins Docker build container which runs inside a Jenkins Docker slave (docker in docker):
if (validated_parameters.custom_gradle_image) {
    docker.image(validated_parameters.custom_gradle_image).inside(" -v /etc/passwd:/etc/passwd -v /var/lib/jenkins/.ssh/:/var/lib/jenkins/.ssh/ ") {
        sshagent(['jenkins-git-io']) {
            sh "${gradleCommand}"
        }
    }
}
