Unable to use gcloud in a jenkins docker-agent - docker

I'm trying to run a Jenkins pipeline with a Docker agent (google/cloud-sdk:alpine) to deploy my code to App Engine. Unfortunately, it seems I don't have permission to do that, even though I'm root inside the container.
The issue looks the same as in this post: Jenkins Pipeline gcloud problems in docker
But there is no working answer to that question.
When I run these commands by hand, everything works.
My Jenkinsfile is:
pipeline {
    agent {
        docker {
            image 'registry.hub.docker.com/google/cloud-sdk:alpine'
            args '-v $HOME:/home -w /home'
        }
    }
    stages {
        stage('Deploy') {
            steps {
                withCredentials([file(credentialsId: 'bnc-hub', variable: 'SECRET_JSON')]) {
                    sh '''
                    set +x
                    gcloud auth activate-service-account --key-file $SECRET_JSON
                    gcloud config set project bnc-hub
                    gcloud app deploy app.yaml
                    '''
                }
            }
        }
    }
}
The output in Jenkins is:
[workspace] Running shell script
+ set +x
WARNING: Could not setup log file in /.config/gcloud/logs, (Error: Could not create directory [/.config/gcloud/logs/2018.12.28]: Permission denied.
Please verify that you have permissions to write to the parent directory.)
script returned exit code 1

By default, HOME=/ inside the container.
Adding
HOME=$WORKSPACE
before
gcloud auth activate-service-account --key-file=${GCP_SA}
worked for me.
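Applied to the pipeline from the question (which uses SECRET_JSON rather than GCP_SA for the key file), the fix would look roughly like this sketch; it simply points gcloud's config directory at the writable Jenkins workspace:
withCredentials([file(credentialsId: 'bnc-hub', variable: 'SECRET_JSON')]) {
    sh '''
    set +x
    # gcloud writes its config and logs under $HOME; the workspace is writable for the build user
    export HOME=$WORKSPACE
    gcloud auth activate-service-account --key-file $SECRET_JSON
    gcloud config set project bnc-hub
    gcloud app deploy app.yaml
    '''
}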

Related

"EACCES: permission denied" when running npm install by jenkins and using docker agent

I've created a simple Node.js project containing a Jenkinsfile with this content:
pipeline {
    agent { docker { image 'node:12-alpine' } }
    stages {
        stage('build') {
            steps {
                sh 'npm install'
            }
        }
        stage('test') {
            steps {
                sh 'npm run test'
            }
        }
    }
}
The Jenkins service is running under its own user named "jenkins". I've added that user to the docker group due to previous permission issues (permission denied while trying to connect to the Docker daemon), and I already have the Docker plugin installed in Jenkins.
But when I run the build job, I get the following error and the build fails:
EACCES: permission denied, mkdir '/.npm'
If I use
agent any
in the Jenkinsfile, I don't get the mentioned error, but I want to use Docker agents.
Why is this happening? Am I missing something?
The user can't write to the default cache directory. You can override it with this environment variable at the beginning of your pipeline:
environment {
    NPM_CONFIG_CACHE = "${WORKSPACE}/.npm"
}
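Placed in context, the pipeline from the question would look roughly like this sketch; the cache simply ends up inside the job workspace, which the container user can write to:
pipeline {
    agent { docker { image 'node:12-alpine' } }
    environment {
        // redirect npm's cache away from /.npm, which the build user cannot create
        NPM_CONFIG_CACHE = "${WORKSPACE}/.npm"
    }
    stages {
        stage('build') {
            steps {
                sh 'npm install'
            }
        }
        stage('test') {
            steps {
                sh 'npm run test'
            }
        }
    }
}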

Docker: not found when running cmds in jenkinsfile

I am new to Docker and CI. I am trying to create a Jenkinsfile that builds and tests my application, then builds a Docker image with the Dockerfile I've composed and pushes it to AWS ECR. The step I am stuck on is building the image with Docker; I receive the error message docker: not found. I downloaded the Docker plugin and configured it in the Global Tool Configuration tab. Am I not adding it into tools correctly?
There was another post where you could use Kubernetes to do that, however Kubernetes no longer supports Docker.
Screenshot of how I configured Docker in Global Tool Configuration: [image: global tool config]
The error:
/var/jenkins_home/workspace/client-pipeline_feature-jenkins#tmp/durable-41220eb0/script.sh: 1: /var/jenkins_home/workspace/client-pipeline_feature-jenkins#tmp/durable-41220eb0/script.sh: docker: not found
[image: error with permission to docker.sock]
def gv
containerVersion = "1.0"
appName = "foodcore"
imageName = appName + ":" + containerVersion
pipeline {
    agent any
    environment {
        CI = 'true'
    }
    tools {
        nodejs "node"
        docker "docker"
    }
    stages {
        stage("init") {
            steps {
                script {
                    gv = load "script.groovy"
                    CODE_CHANGES = gv.getGitChanges()
                }
            }
        }
        stage("build frontend") {
            steps {
                dir("client") {
                    sh 'npm install'
                }
            }
        }
        stage("build backend") {
            steps {
                dir("server") {
                    sh 'npm install'
                }
            }
        }
        stage("test") {
            when {
                expression {
                    script {
                        CODE_CHANGES == false
                    }
                }
            }
            steps {
                dir("client") {
                    sh 'npm test'
                }
            }
        }
        stage("build docker image") {
            when {
                expression {
                    script {
                        env.BRANCH_NAME.toString().equals('Main') && CODE_CHANGES == false
                    }
                }
            }
            steps {
                sh "docker build -t ${imageName} ."
            }
        }
        stage("push docker image") {
            when {
                expression {
                    env.BRANCH_NAME.toString().equals('Main')
                }
            }
            steps {
                sh 'aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin repoURI'
                sh 'docker tag foodcore:latest ...repoURI'
                sh 'docker push repoURI'
            }
        }
    }
}
Use echo hello world to make...
Docker should be installed on the server Jenkins is running on. The Docker plugin provided by Jenkins is just a tool to generate snippets for the pipeline scripts; installing and configuring the plugin doesn't install a Docker daemon. Please check whether Docker is installed on the OS or not.
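A quick way to verify is to run the Docker CLI directly on the Jenkins host (or wrapped in an sh step); if the command below fails with "docker: not found", the CLI simply isn't installed on that node:
docker --version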
As we can see in the thread, you then started getting permission denied on docker.sock.
The docker.sock permissions will be lost if you restart the system or the Docker service.
To make the change persistent, set up a cron job to change the permissions after each reboot:
@reboot chmod 777 /var/run/docker.sock
And whenever you restart Docker, make sure to run the command below:
chmod 777 /var/run/docker.sock
Or you can also put it in a cron job that executes every 5 minutes.
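For example, a crontab entry like this sketch would reapply the permissions every 5 minutes:
# reapply world-writable permissions on the Docker socket every 5 minutes
*/5 * * * * chmod 777 /var/run/docker.sock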
To use Docker inside a Jenkins build, there are 2 methods:
Use the Jenkins Docker plugins as described in the solution above.
Or install Docker itself in the Jenkins container and mount the docker.sock file.
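A minimal sketch of the second method, assuming the stock jenkins/jenkins:lts image and a named volume for the Jenkins home (you would still need to install the Docker CLI inside the container, for example via a derived image):
docker run -d --name jenkins \
    -p 8080:8080 \
    -v jenkins_home:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    jenkins/jenkins:lts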

Passing jenkins secrets file to docker image run

I'm building a Jenkins pipeline. I have a builder image in my repo, and I've uploaded a secret file that my build job needs to the Jenkins credentials as a Secret file. I need to copy this file to the working directory of a docker run command that runs a command on the builder image.
I'm using this to retrieve the file as an env variable:
withCredentials([
    file(credentialsId: 'keystore', variable: 'KEYSTORE')]) {
    try {
        sh "docker run parameters ${image} -e ${KEYSTORE} command..."
    }
}
Any ideas on how I can make that file available inside the container when I run the Docker image?
If you want the credentials file inside a Docker container, you'll also need to volume mount it:
docker run parameters -e ${KEYSTORE} -v ${KEYSTORE}:${KEYSTORE}:ro ${image}
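Put together with the question's withCredentials block, a fuller sketch might look like this; the image variable, the extra parameters, and the trailing command are placeholders from the question, and passing the path as KEYSTORE=${KEYSTORE} is an assumption about how the containerized command expects to find the file:
withCredentials([file(credentialsId: 'keystore', variable: 'KEYSTORE')]) {
    // mount the secret file read-only at the same path inside the container
    // and hand that path to the command via an environment variable
    sh "docker run parameters -e KEYSTORE=${KEYSTORE} -v ${KEYSTORE}:${KEYSTORE}:ro ${image} command..."
}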
In case you use the built-in Docker support, Jenkins will take care of that:
pipeline {
    agent {
        dockerfile {
            dir 'build'
        }
    }
    stages {
        stage('Build') {
            steps {
                withCredentials([file(credentialsId: 'keystore', variable: 'KEYSTORE')]) {
                    sh 'ls -l'
                }
            }
        }
    }
}

ssh-agent not working on jenkins pipeline

I am a newbie trying to implement CI/CD for my hello-world reactive Spring project. After releasing the image to the Docker repo, the next step is to connect to AWS EC2 and run the created image. I have already installed the SSH Agent plugin and successfully tested the SSH connection configured in Manage Jenkins -> Configure System -> SSH client.
Also, my system env variables have path=C:\Windows\System32\OpenSSH\ssh-agent.exe
In the last step I am getting:
Could not find ssh-agent: IOException: Cannot run program "ssh-agent": CreateProcess error=2, The system cannot find the file specified
Check if ssh-agent is installed and in PATH
[ssh-agent] FATAL: Could not find a suitable ssh-agent provider
My pipeline code:
pipeline {
    agent any
    tools {
        maven 'maven'
        jdk 'jdk1.8'
    }
    environment {
        registry = "my-registry"
        registryCredential = credentials('docker-credentials')
    }
    stages {
        stage('SCM') {
            steps {
                git branch: 'master',
                    credentialsId: 'JenkinsGitlab',
                    url: 'https://www.gitlab.com/my-repo/panda-app'
            }
        }
        stage('Build') {
            steps {
                bat 'mvn clean package spring-boot:repackage'
            }
        }
        stage('Dockerize') {
            steps {
                bat "docker build -t ${registry}:${BUILD_NUMBER} ."
            }
        }
        stage('Docker Login') {
            steps {
                bat "docker login -u ${registryCredential_USR} -p ${registryCredential_PSW}"
            }
        }
        stage('Release to Docker hub') {
            steps {
                bat "docker push ${registry}:${BUILD_NUMBER}"
            }
        }
        stage('Deploy to AWS') {
            steps {
                sshagent(['panda-ec2']) {
                    bat "ssh -o StrictHostKeyChecking=no ubuntu@my-aws-host sudo docker run -p 8080:8080 ${registry}:${BUILD_NUMBER}"
                }
            }
        }
    }
}
The built-in SSH agent of Windows is incompatible with the Jenkins SSH Agent plugin.
I'm using the ssh-agent from the Git installation. Make sure to insert the directory(!) path of Git's ssh-agent.exe before any other path, to prevent the Windows SSH agent from being used.
With a default Git for Windows installation, you can set the PATH environment variable like this:
path=c:\Program Files\Git\usr\bin;%path%
For me, setting the env var from within the Jenkins UI didn't work; I added it through the Windows settings app instead. When doing so, make sure to insert it before "%SystemRoot%\system32\OpenSSH".

Best solution to deploy (copy) the latest version to the server using Jenkins Pipeline

Here is my Jenkins Pipeline:
pipeline {
    agent {
        docker {
            image 'node:6-alpine'
            args '-p 3000:3000'
        }
    }
    environment {
        CI = 'true'
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
                sh 'npm build'
            }
        }
        stage('Deliver') {
            steps {
                sh './jenkins/scripts/deliver.sh'
                input message: 'Finished using the web site? (Click "Proceed" to continue)'
                sh './jenkins/scripts/kill.sh'
            }
        }
        stage('Deploy') {
            steps {
                sh './jenkins/scripts/deploy.sh'
            }
        }
    }
}
I use Docker and the jenkinsci/blueocean image to run Jenkins. The first two stages are fairly standard for building a Node.js app; the third one, however, is where I want Jenkins to copy the new files to the server. Here is the deploy.sh file:
#!/usr/bin/env sh
set -x
scp -o StrictHostKeyChecking=no -r dist/* deviceappstore:/var/www/my_website/static/
There are two problems: first, jenkinsci/blueocean does not have scp set up, and second, ~/.ssh/config does not exist inside the Jenkins Docker image, so SCP will fail to authenticate. My solution was to build a custom image extending jenkinsci/blueocean, set up SCP, and copy the config file and SSH key into it.
There are some plugins like Publish Over SSH, but it seems they're not useful for Pipeline projects.
Is there any better solution? Is the whole scenario right, or am I doing something wrong? I'm looking for the most secure and standard solution to this problem.
OK, I think I found a good solution.
Thanks to the SSH Agent plugin I can easily pass the credentials to the SCP command and copy the files to the server. Something like this:
...
stage('Deploy') {
    steps {
        sshagent(['my SSH']) {
            echo 'this works...'
            sh 'scp -o StrictHostKeyChecking=no -r dist/* my_server:/var/www/my_site/static/'
        }
    }
}
...
This is perfect because all the credentials stay inside the Jenkins server and nothing about them ends up in the repo.
To be able to use this, there's just one more thing: you need to use apk inside the jenkinsci/blueocean (Alpine-based) image and set up OpenSSH:
apk add openssh
Or, as a better solution, create a new Dockerfile and build your own image.
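As a sketch of that better solution (the image tag my-jenkins-blueocean is just an example, and it assumes the base image runs as the jenkins user and needs root for apk), the Dockerfile could look like this:
# Dockerfile: jenkinsci/blueocean with the OpenSSH client baked in
FROM jenkinsci/blueocean
USER root
RUN apk add --no-cache openssh
USER jenkins
Build it once with docker build -t my-jenkins-blueocean . and run that image instead of the stock jenkinsci/blueocean.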
