Jenkins Pipeline gcloud problems in Docker

I'm trying to set up a Jenkins pipeline according to
this article, but using Google Container Registry to push the Docker images to instead.
The Problem: The part that fails for me is this Jenkinsfile stage block:
stage ('Push Docker Image To Container Registry') {
    docker.image('google/cloud-sdk:alpine').inside {
        sh "echo ${env.GOOGLE_AUTH} > gcp-key.json"
        sh 'gcloud auth activate-service-account --key-file ./service-account-creds.json'
    }
}
The Error:
ERROR: (gcloud.components.update) Could not create directory [/.config/gcloud]: Permission denied.
Please verify that you have permissions to write to the parent directory.
I can't run any gcloud command; the error above is what I get every time.
I tried creating the /.config directory manually (logged into the AWS instance) and opening up the directory's permissions to everyone, but that didn't help either.
I also can't find anywhere how to properly set up Google Cloud for a Jenkins pipeline using Docker.
Any suggestions are greatly appreciated :)

It looks like gcloud is trying to write data directly into your root filesystem directory.
The .config directory for gcloud would normally be in one of the following locations, for a regular user or the root user:
/home/yourusername/.config/gcloud
/root/.config/gcloud
It looks like, for some reason, Jenkins thinks the parent directory should be /.
I would check where your Cloud SDK config directories are on the machine you are running this on (and for the user the script runs as):
$ sudo find / -iname "gcloud"
and look for locations similar to those listed above.
Could it be that the Cloud SDK is installed in a non-standard location on the machine?
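One common workaround is to point gcloud's configuration directory at somewhere the build user can always write, such as the job workspace, via the CLOUDSDK_CONFIG environment variable. A minimal sketch (it assumes the key written in the first step is the one passed to --key-file, and the .gcloud directory name is arbitrary):
stage ('Push Docker Image To Container Registry') {
    // Keep gcloud's config inside the writable workspace instead of /.config
    docker.image('google/cloud-sdk:alpine').inside("-e CLOUDSDK_CONFIG=${env.WORKSPACE}/.gcloud") {
        sh "echo ${env.GOOGLE_AUTH} > gcp-key.json"
        sh 'gcloud auth activate-service-account --key-file gcp-key.json'
    }
}
Setting HOME to the workspace (for example -e HOME=${env.WORKSPACE}) achieves much the same effect, since gcloud derives its default config path from $HOME.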

Related

Problems transferring build artifacts from Jenkins running in a docker container

I'm a little bit of a newbie with this CI/CD container stuff, so please correct me anywhere I'm wrong.
I can't seem to find out how to send my npm build files created on my Jenkins instance (workspace) to a remote server. I have a pipeline that successfully pulls in my GitHub repo and does all my fun npm stuff (npm install, test, build). I can see my build dir in my Jenkins instance's workspace.
My environment is as follows: we have a server where Docker (with Portainer) is installed. Jenkins is running in a container with a volume mounted (my React build dir goes here). No issues with the pipeline or building etc. I just can't figure out how to push my artifacts from my Jenkins workspace directory to my 'remote' dev server.
I can successfully open a console in my Jenkins container (via Portainer, as the jenkins user) and scp files from the workspace directory using my remote server creds (but a password is required).
I installed and used the "Publish Over SSH" Jenkins plugin and get a successful "Test Configuration" from my setup.
I created my RSA keys on the REMOTE machine (that I'm trying to push my build files to).
I then pasted the private key (created without a password) into the plugin at the 'Use password authentication, or use a different key' section. Again, I get a successful test connection.
In my pipeline the last step is deploying, and I use this command:
sh 'scp -r build myusername@xx.xx.xx.xx:/var/files/react-tester'
I get a 'Permission denied (publickey,password).' error. I have no password associated with the RSA key. I tried it both ways: creating the RSA key on the remote machine as my remote user, and on the Jenkins machine as the jenkins user. I've read examples of people creating the keys both ways, but I'm not sure on which machine and as which user to create the keys, and which section of the 'Publish Over SSH' plugin to paste them into.
I'm out of ideas.
First, go to "Manage Jenkins" > "Credentials" and add a new credential of type "SSH Username with private key": fill in the "Username" field and paste your private key (generate one if you haven't done so yet; you can also upload a key file). Don't forget that you have to copy the generated public key to the ~/.ssh/authorized_keys file of that user on the remote server.
I'm assuming you're using a scripted or DSL pipeline here. In your code, after you've built your application, you can push it to your server by adding this step to your pipeline:
pipeline {
    stages {
        stage("Pushing changes to remote server") {
            steps {
                script {
                    def remote_server = "1.2.3.4"
                    withCredentials([sshUserPrivateKey(credentialsId: 'my-key', keyFileVariable: 'SSH_KEY', passphraseVariable: '', usernameVariable: 'SSH_USERNAME')]) {
                        sh "scp -r -i \${SSH_KEY} build/ \${SSH_USERNAME}@${remote_server}:/var/files/react-tester/"
                    }
                }
            }
        }
    }
}
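As an alternative, the SSH Agent plugin's sshagent step can load the same credential so scp picks the key up from the agent; a minimal sketch, assuming that plugin is installed, the 'my-key' credential from above, and a placeholder remote username:
script {
    def remote_server = "1.2.3.4"      // placeholder address, as above
    def remote_user = "deploy-user"    // hypothetical username on the remote server
    sshagent(credentials: ['my-key']) {
        // The agent supplies the private key, so no -i flag is needed here.
        sh "scp -r build/ ${remote_user}@${remote_server}:/var/files/react-tester/"
    }
}
Either way, make sure the remote host's key is already in known_hosts (for example via ssh-keyscan), or the copy will fail with a host-key prompt.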
Best regards.

Jenkins Pipeline - connect to Docker host using SSH credentials

We would like to use Jenkins Pipelines to work with AWS ECR images on a remote host that has Docker installed, but does not (and will not) expose the Docker socket over port 2376.
A couple of simpler options include:
using the existing Jenkins SSH/scripts
using the pipeline ssh-agent and running commands in-line there
However, because the declarative Docker plugin seems to have everything needed, it would be cleaner to use it, since tags, etc., will all align with other parts of the pipeline.
All examples on the internet show
docker.withServer("tcp://X:2376","credentialsId") {...}
However, from configuring Jenkins Cloud Config -> Docker Templates, it seems SSH is supported, so we tried the following:
stages {
    stage('Deploy to Remote Host') {
        steps {
            script {
                docker.withServer("ssh://ec2-x-x-x-x.mars-1.compute.amazonaws.com:22", "ssh-credentials-id") {
                    docker.withRegistry("https://1234567890.dkr.ecr.mars-1.amazonaws.com", "ecr:mars-1:ecr-credentials-id") {
                        docker.pull('my-image:latest')
                    }
                }
            }
        }
    }
}
Unfortunately, we get the following connection error:
error during connect: Post http://docker/v1.40/auth: command [ssh -p 22 ec2-x-x-x-x.mars-1.compute.amazonaws.com -- docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=Host key verification failed.
We have Docker v19 on the server, and the ssh key is fine using ssh-agent.
Any ideas about what we need to do to get this working?
I solved this by adding SSH commands (invoked on the agent) to the remote/target host, then invoking 'docker pull ...', 'docker run ...' etc. via the shell. However, doing this in Jenkins Pipeline is fragile: the SSH commands turn into scripts, the credentials lookups are fairly complex and messy to look at, and the overall outcome, while presenting a nice pipeline in Blue Ocean, is going to become difficult to code and maintain.
So, following the advice that 'doing everything in Jenkins Pipeline is an anti-pattern', I broke the process into two pieces: building/pushing the image using Jenkins Pipeline + the Docker plugin, which is documented and supported as a first-class concern, and then invoking an Ansible 'build job' from the pipeline, passing dev/stage/live parameters and letting Ansible take care of all the other variables and secrets needed. The end result is similar in Blue Ocean, but the complex deployment logic is coded more cleanly in an appropriate tool designed to handle my use case, and looping through environments/hosts is much cleaner in Ansible [or substitute your preferred deployment tool here].
To get rid of the "Host key verification failed" error I used this in my pipeline:
sh '''
[ -d ~/.ssh ] || mkdir ~/.ssh && chmod 0700 ~/.ssh && ssh-keyscan HOSTNAME >> ~/.ssh/known_hosts
'''
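Putting the two pieces together, one stage can seed known_hosts before the Docker-over-SSH calls; a sketch reusing the placeholder host and credential IDs from the question (docker.image(...).pull() is the documented form of the pull call):
stage('Deploy to Remote Host') {
    steps {
        script {
            // Accept the remote host key up front so the ssh transport does not fail.
            sh 'mkdir -p ~/.ssh && chmod 0700 ~/.ssh && ssh-keyscan ec2-x-x-x-x.mars-1.compute.amazonaws.com >> ~/.ssh/known_hosts'
            docker.withServer("ssh://ec2-x-x-x-x.mars-1.compute.amazonaws.com:22", "ssh-credentials-id") {
                docker.withRegistry("https://1234567890.dkr.ecr.mars-1.amazonaws.com", "ecr:mars-1:ecr-credentials-id") {
                    docker.image('my-image:latest').pull()
                }
            }
        }
    }
}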

Access denied on a Jenkins build to a remote Windows file server

I'm new to Jenkins and I am trying to play around with it.
I'm trying to run a pipeline with a command that will run a simple dir on a remote Windows file server (using a UNC path).
pipeline {
    agent any
    stages {
        stage('Read File') {
            steps {
                bat 'whoami'
                bat label: 'check directory', script: 'dir \\\\filesrv\\C$\\NewUser'
            }
        }
    }
}
The whoami command returns the Jenkins AD user I configured to run the service on the slave,
but after that I get an "Access is denied" error.
I tried giving the Jenkins AD service user local admin permissions on the Jenkins master and slave servers and also on the file server; that didn't help.
I also tried explicitly giving that user full control permissions on the folder I'm trying to access (located on the file server); that didn't help either.
I also tried giving permissions to the computer accounts, as many threads suggest, pointing to this link: https://serverfault.com/questions/135867/how-to-grant-network-access-to-localsystem-account. That also didn't help.
I would appreciate some assistance in understanding which permission is missing.
Thanks in advance
I'll post the solution for future reference for those who are using a Windows environment.
The thing I was missing was making the target folder a shared folder.
So instead of \\\\filesrv\\C$\\NewUser, the path that worked is \\\\filesrv\\NewUser,
where NewUser is the name of the share.
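In pipeline form, the working step from the question then becomes (filesrv and NewUser being the server and share names used above):
stage('Read File') {
    steps {
        // UNC path to the share, not to the administrative C$ path
        bat label: 'check directory', script: 'dir \\\\filesrv\\NewUser'
    }
}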

Jenkins user not in passwd on dynamic jnlp slave in kubernetes

I am building a system to do C++ CMake builds, primarily. I have Jenkins firing up the dynamic pods, running shell scripts, etc., but I can't get it to check out the code. My Jenkinsfile launches a container that the actual compile is supposed to run in; that "sub" container is tuned to compile C++ code. Jenkins runs scripts and such in that pod, but when I try
checkout scm
I'm getting errors saying:
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --force --progress git#gitlab.com:mystuff/hello-world-cmake.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout:
stderr: No user exists for uid 1000080000
fatal: Could not read from remote repository.
My home folder is the standard /home/jenkins and the workspace folder is there, etc. But when I dump the /etc/passwd file, the jenkins user isn't listed in it.
What's the appropriate way to add the jenkins user to that file?
What image are you using for the Jenkins slave? Does it have a jenkins user? If it does, you need to specify this in your spec for the Jenkins slave:
spec:
  securityContext:
    runAsUser: 1000
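If the pod is defined from the pipeline with the Kubernetes plugin, the same securityContext can be set there; a minimal sketch, in which the container name, the build image, and uid 1000 are assumptions (the uid must exist in the agent image):
podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  securityContext:
    runAsUser: 1000                      # must be a uid that exists in the agent image
  containers:
  - name: cmake
    image: my-cpp-build-image:latest     # hypothetical C++/CMake build image
    command: ['sleep']
    args: ['infinity']
''') {
    node(POD_LABEL) {
        container('cmake') {
            // git now finds a valid user for the running uid
            checkout scm
        }
    }
}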
UPDATE:
You cannot run the default Jenkins image in OpenShift, because OpenShift runs containers as a random user. You should run Jenkins from the built-in "Jenkins Persistent" template. If you don't have this template and don't have a Jenkins image stream, you can try to use the image openshift/jenkins-2-centos7. See details at:
https://github.com/openshift/jenkins/issues/168
https://github.com/openshift/jenkins

How can I change permissions for a folder in Jenkins?

I have Jenkins running on a local server on my Mac, and for a job I need to run a shell script which needs to read a JSON file. I tried to put it in the workflow-libs folder. But when I run the job for testing, the script returns an error which says:
/Users/****/.jenkins/workflow-libs/testCollections: Permission denied
Build step 'Run a shell script' marked build as failure
Finished: FAILURE
So I know that the script can read the JSON file, but I don't know how I can give it this permission.
Thank you for helping.
Run this command to give the permissions:
sudo chown -R <jenkins user>:<jenkins group> /jenkins_root_path
Here, <jenkins user> and <jenkins group> are the user and group you are running Jenkins under.
If the jenkins user is a member of the sudoers list:
sudo rm -rf /Users/****/.jenkins/workflow-libs/testCollections
But this can be dangerous.
I would highly suggest you run the Jenkins process as the jenkins user/group and not muck around in there as your own user account.
This will ensure your file permissions are proper and that the jenkins process only has access to the areas it needs as well as letting you spin up new slaves without worrying about permissions and custom settings.
You can always use sudo to become the 'jenkins' user to work with the files.
