Jenkins ssh-agent is not working on a specific slave machine

Suddenly my sshagent step in the Jenkins pipeline stopped working and throws an error.
The same step with the same credentials works fine on the master and on other slave agents. I can also see that ssh-agent is running on the slave node:
$ ps -ef|grep -i ssh-agent
root 8548 1 0 11:40 ? 00:00:00 ssh-agent -s
root 9622 1 0 12:29 ? 00:00:00 ssh-agent -s
The stage that is failing:
stage('Pull ansible roles') {
    steps {
        sshagent(credentials: [XX_XX_KEY_ID]) {
            sh '''
                ansible-galaxy install -r requirements.yml -p roles
            '''
        }
    }
}
Could you please guide me on what the issue might be?

I have found that the issue is with the /tmp permissions: the Jenkins slave user was unable to execute the ssh-agent command.
[devops@01 ~]$ ssh-agent
mkdtemp: private socket dir: Permission denied
[devops@01 ~]$ sudo chmod 1777 /tmp
[devops@01 ~]$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-B3lS2R3p2bpT/agent.10399; export SSH_AUTH_SOCK;
SSH_AGENT_PID=10400; export SSH_AGENT_PID;
echo Agent pid 10400;
[devops@01 ~]$
Reference - https://github.com/babun/babun/issues/815
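For reference, a healthy /tmp is world-writable with the sticky bit set (mode 1777) and owned by root; a quick way to double-check on the affected node might look like this (a sketch, exact output will vary):
$ stat -c '%a %U:%G %n' /tmp
1777 root:root /tmp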

Related

Add user in Docker container, UID mismatch when running Jenkins job

I am running a Jenkins pipeline in a Docker container. The Docker container creates an unprivileged user to run as:
RUN useradd jenkins --shell /bin/bash --create-home
RUN mkdir -p /home/jenkins/src && chown -R jenkins:jenkins /home/jenkins
USER jenkins
WORKDIR /home/jenkins/src
Jenkins runs this as:
docker run -t -d -u 1000:1000 [-v and -e flags etc.]
This works when I run Jenkins manually under my personal account (uid 1000) on the host. But now I have changed it so that Jenkins is started automatically by systemd, using a dedicated jenkins user with uid 1006 and gid 1009:
docker run -t -d -u 1006:1009 [-v and -e flags etc.]
This mismatch causes my build to fail. I also get all kinds of problems, like this prompt in the container:
I have no name!@6d3b27a803e4:/$
Creating a jenkins user in the container seems like something there should be a recipe for. How do I get the UIDs on the host and in the container to match? What is the best practice?
Add something like usermod --uid $HOST_UID jenkins to the Dockerfile?
There seems to be no way to tell Docker to map host uid 1006 to container uid 1000, is there?
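For illustration only, the usermod idea above might look roughly like this in the Dockerfile; HOST_UID and HOST_GID are hypothetical build args you would pass yourself (e.g. --build-arg HOST_UID=1006), not values Jenkins supplies:
# Hypothetical sketch: align the container user's IDs with the host's at build time
ARG HOST_UID=1000
ARG HOST_GID=1000
RUN useradd jenkins --shell /bin/bash --create-home \
 && groupmod --gid ${HOST_GID} jenkins \
 && usermod --uid ${HOST_UID} --gid ${HOST_GID} jenkins \
 && chown -R jenkins:jenkins /home/jenkins
USER jenkins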
I faced the same problem when running a Jenkins job and found a solution that works for me.
As you wrote above, when you run the job in a container, Jenkins automatically starts Docker like this:
docker run -t -d -u 113:117 [-v and -e flags etc.]
It takes the UID and GID from the local host and uses them in the docker run command. But when the step commands start executing in the container, Docker doesn't have these values inside, which causes this trouble:
I have no name!@6d3b27a803e4:/$
The solution for me was to mount the passwd and group files inside the container, like this:
pipeline {
    agent {
        docker {
            image 'your_image'
            args '-v /etc/passwd:/etc/passwd -v /etc/group:/etc/group'
        }
    }
    stages {
        stage('stage_name') {
            steps {
                sh '''
                    some_commands
                '''
            }
        }
    }
}
And create the user in the Dockerfile like this:
RUN groupadd jenkins && useradd -m -d /var/lib/jenkins -g jenkins -G root jenkins
Hope this helps someone)
This worked for me:
docker {
    image 'your_image'
    args '-e HOME=/tmp/home'
}
/tmp is writable by everyone, so there is no need to create a user.

Run host Docker from within Jenkins Docker

Is it possible to create and run Docker containers for CI/CD from within a running Jenkins Docker container? So, basically, access Docker on the host server from within a running container.
On my host server (Ubuntu 19.04), Docker (version 19.03.3) is installed. By running the following commands I create a Jenkins container that I thought I had given access to Docker:
mkdir /home/myuser/Desktop/jenkins_home
docker run -dit --name jenkins -v /home/myuser/Desktop/jenkins_home:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock -p 8080:8080 jenkins/jenkins:lts
Within Jenkins I create a Pipeline that loads a Jenkinsfile from Git that looks like this:
pipeline {
    agent {
        docker {
            image 'ubuntu:19.04'
            args '-u root:sudo -p 3000:3000'
        }
    }
    stages {
        stage('Install') {
            steps {
                sh 'apt-get update'
                sh 'apt-get install -y curl'
                sh 'curl -sL https://deb.nodesource.com/setup_13.x | sh -'
                sh 'curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -'
                sh 'echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list'
                sh 'apt-get update'
                sh 'apt-get install -y nodejs yarn'
            }
        }
        stage('Build') {
            steps {
                sh './build.sh'
            }
        }
    }
}
When I run the pipeline it crashes when trying to instruct Docker to pull the ubuntu:19.04 image. The error is docker: not found.
Somewhere the connection between my Jenkins container and the host's Docker is misconfigured. What configuration is necessary to be able to run Docker commands on the host server from within the Docker container?
If you want to create and run Docker containers for CI/CD from the Jenkins container, this can be achieved with a shell step in the Jenkins job that runs an ssh command on the Docker host.
This requires that the Jenkins container's ssh public key is authorized on the Docker host, i.e. it must be listed in the authorized_keys file on the Docker host.
To make the same ssh keys available inside the Jenkins container, you can bind mount them into the Jenkins container.
Example using docker-compose:
volumes:
  - /home/user/.ssh/id_rsa:/var/jenkins_home/.ssh/id_rsa
  - /home/user/.ssh/id_rsa.pub:/var/jenkins_home/.ssh/id_rsa.pub
This is an example of the content of a shell step used to launch and update containers on the Docker host from a Jenkins job:
cat ./target/stack/installer-*.tar | ssh root@${DOCKER_HOST} \
/home/user/Build-Server/remote-installer.sh
In the command above, an installer is launched on the Docker host; as a result, new containers are deployed or updated on the Docker host.
The remote-installer.sh script receives the file on standard input and unpacks it using the tar command:
TEMPDIR=`mktemp -d`
echo "unarchiving to $TEMPDIR"
tar xv -C "$TEMPDIR"
...
This works both when the Docker containers are on the same server as the Jenkins container and when the Docker containers and the Jenkins container are on different servers.
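As a minimal sketch of that idea, a Jenkins shell step could run Docker commands on the host over ssh; it assumes the container's key is authorized for root on the Docker host and that DOCKER_HOST holds the host's address:
#!/bin/bash
# Everything Docker-related runs on the Docker host; the Jenkins container only needs ssh
ssh -o StrictHostKeyChecking=no root@"${DOCKER_HOST}" \
    'docker pull ubuntu:19.04 && docker run --rm ubuntu:19.04 echo "docker reachable from Jenkins"'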

Jenkins: dockerfile agent commands not run in container

This is my Jenkinsfile:
pipeline {
    agent {
        dockerfile true
    }
    stages {
        stage('Run tests') {
            steps {
                sh 'pwd'
                sh 'vendor/bin/phpunit'
            }
        }
    }
}
I'm running Jenkins, and although I am able to build the image successfully, "Run tests" runs outside of the new container, on the host. This is not good; I want the commands to run inside the new container built with the help of the dockerfile agent.
I know that the shell commands run on the host, because I've already tried debugging with sh 'pwd', which gave me /var/jenkins_home/workspace/youtube-delete-tracker_jenkins.
Here is the end of the output in the console for the Jenkins job:
Step 18/18 : RUN chmod a+rw database/
---> Using cache
---> 0fedd44ea512
Successfully built 0fedd44ea512
Successfully tagged e74bf5ee4aa59afc2c4524c57a81bdff8a341140:latest
[Pipeline] dockerFingerprintFrom
[Pipeline] }
[Pipeline] // stage
[Pipeline] sh
+ docker inspect -f . e74bf5ee4aa59afc2c4524c57a81bdff8a341140
.
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 112:116 -w /var/jenkins_home/workspace/ube-delete-tracker_stackoverflow -v /var/jenkins_home/workspace/ube-delete-tracker_stackoverflow:/var/jenkins_home/workspace/ube-delete-tracker_stackoverflow:rw,z -v /var/jenkins_home/workspace/ube-delete-tracker_stackoverflow#tmp:/var/jenkins_home/workspace/ube-delete-tracker_stackoverflow#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** e74bf5ee4aa59afc2c4524c57a81bdff8a341140 cat
$ docker top 64bbdf257492046835d7cfc17413fb2d78c858234aa5936d7427721f0038742b -eo pid,comm
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Run tests)
[Pipeline] sh
+ pwd
/var/jenkins_home/workspace/ube-delete-tracker_stackoverflow
[Pipeline] sh
+ vendor/bin/phpunit
/var/jenkins_home/workspace/ube-delete-tracker_stackoverflow#tmp/durable-4049973d/script.sh: 1: /var/jenkins_home/workspace/ube-delete-tracker_stackoverflow#tmp/durable-4049973d/script.sh: vendor/bin/phpunit: not found
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
$ docker stop --time=1 64bbdf257492046835d7cfc17413fb2d78c858234aa5936d7427721f0038742b
$ docker rm -f 64bbdf257492046835d7cfc17413fb2d78c858234aa5936d7427721f0038742b
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
As you can see, pwd gives me a path on the host (the Jenkins job folder), and vendor/bin/phpunit was not found (even though it should be in the container, because the package manager installed it successfully, as shown in the docker build output that I didn't include).
So how can I get the sh commands to run inside the container? Or, alternatively, how do I get the image tag generated by the dockerfile agent, so that I could run docker run manually myself to start the new container?
INFO: The issue doesn't seem to be related to Declarative Pipelines, because I also got the same pwd (the Jenkins container's path) when I tried the imperative (Scripted) style: https://github.com/amcsi/youtube-delete-tracker/blob/4bf584a358c9fecf02bc239469355a2db5816905/Jenkinsfile.groovy#L6
INFO 2: At first I thought this was a Jenkins-within-Docker issue, and I wrote my question as such... but it turned out I was getting the same issue when I ran Jenkins on the host rather than in a container.
INFO 3: Versions...
Jenkins ver. 2.150.1
Docker version 18.09.0, build 4d60db4
Ubuntu 18.04.1 LTS
EDIT:
Jenkins mounts its local workspace into the Docker container and cds into it automatically. Therefore, the WORKDIR you set in the Dockerfile gets overridden. In your console output, the docker run command shows this:
docker run -t -d -u 112:116 -w /var/jenkins_home/workspace/ube-delete-tracker_stackoverflow -v /var/jenkins_home/workspace/ube-delete-tracker_stackoverflow:/var/jenkins_home/workspace/ube-delete-tracker_stackoverflow:rw,z
From docker run man page (man docker run):
-w, --workdir=""
    Working directory inside the container

    The default working directory for running binaries within a container is the root
    directory (/). The developer can set a different default with the Dockerfile WORKDIR
    instruction. The operator can override the working directory by using the -w option.

-v|--volume[=[[HOST-DIR:]CONTAINER-DIR[:OPTIONS]]]
    Create a bind mount.

    If you specify -v /HOST-DIR:/CONTAINER-DIR, Docker bind mounts /HOST-DIR in the host
    to /CONTAINER-DIR in the Docker container. If 'HOST-DIR' is omitted, Docker
    automatically creates the new volume on the host. The OPTIONS are a comma delimited
    list and can be:
So, you need to manually cd into your own $WORKDIR before executing those commands. By the way, you might also want to create a symbolic link (ln -s) from the Jenkins volume mount dir to your own $WORKDIR; this lets you browse the workspace via the Jenkins UI etc.
Finally, to reiterate for clarity: Jenkins runs all of its build stages inside a Docker container if you specify the docker agent at the top.
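Put together, a sketch of the adjusted stage could look like this, assuming the Dockerfile sets WORKDIR to /app and installs vendor/ there (adjust the path to your image):
stage('Run tests') {
    steps {
        // cd back into the image's own WORKDIR first, since Jenkins starts the shell
        // in the mounted workspace directory instead
        sh '''
            cd /app
            vendor/bin/phpunit
        '''
    }
}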
Old answer:
Jenkins runs all of its build stages inside a Docker container if you specify the docker agent at the top. You can verify which executor and slave the build is running on using environment variables.
So, first find an environment variable that is specific to your slave. I use the NODE_NAME env var for this purpose. You can find all available environment variables in your Jenkins instance via http://localhost:8080/env-vars.html (replace the host name to match your instance).
...
stages {
    stage('Run tests') {
        steps {
            echo "I'm executing in node: ${env.NODE_NAME}"
            sh 'vendor/bin/phpunit'
        }
    }
}
...
A note regarding "vendor/bin/phpunit was not found": this seems to be due to a typo, if I'm not mistaken.

Do docker pull using jenkins

I would like to perform the following steps using Jenkins:
1- docker pull <image_name>
2- docker run -i -t <command>
I've installed the Docker plugin in Jenkins, but is this possible? The documentation on the Docker plugin page is very poor.
These steps are executed programmatically by the plugin.
Alternatively, you can execute a script on a Jenkins slave that has Docker installed, in Build -> Execute shell:
#!/bin/bash
# Pull the httpd image only if it is not present yet
# (docker images prints a header line, so the count is 2 or more when the image exists)
export image=`docker images httpd | wc -l`
echo image $image
if [ "$image" -lt "2" ];
then
    docker pull httpd
fi
# Remove any existing webcontainer before starting a fresh one
export container=`docker ps -a -f "name=webcontainer" | wc -l`
echo container $container
if [ "$container" -gt "1" ];
then
    echo "Deleting webcontainer"
    docker rm -f webcontainer
fi
# BUILD_ID=dontKillMe stops the Jenkins process tree killer from killing the detached container
BUILD_ID=dontKillMe docker run -d -t -p8888:80 --name webcontainer httpd
You can interact with the created container using the command below:
`docker exec -it webcontainer /bin/bash`
These days (mid 2017, more than a year after the OP's question), you would use the inside directive of a Jenkins pipeline to pull a Docker image and run commands within it.
For instance (Using Jenkins Pipelines with Docker), using the Docker Pipeline plugin:
docker.image('ruby:2.3.1').inside {
    stage("Install Bundler") {
        sh "gem install bundler --no-rdoc --no-ri"
    }
    stage("Use Bundler to install dependencies") {
        sh "bundle install"
    }
}
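If you specifically want the two original steps (pull, then run) rather than inside, the same Docker Pipeline plugin exposes them on the image object; a rough sketch reusing the httpd example from the earlier answer:
node {
    def img = docker.image('httpd')
    img.pull()                                           // 1- docker pull httpd
    def web = img.run('-p 8888:80 --name webcontainer')  // 2- docker run -d -p 8888:80 ...
    try {
        sh 'docker exec webcontainer ls /usr/local/apache2'  // interact with the running container
    } finally {
        web.stop()                                       // stops and removes the container
    }
}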

Cannot run sudo from Hudson job, but sudo works directly on the system

We have a Linux system that we do not have full control of. Basically, we cannot modify the sudoers file there (it is on a remote, read-only file system).
Our "solution" for giving the hudson user sudo privileges was to add the user to the sudo group in the /etc/group file. With this approach I can execute sudo as the hudson user once I ssh to the machine. However, when I try to execute sudo from a Hudson job on this system, I get the following error:
+ id
uid=60000(hudson) gid=60000(hudson) groups=60000(hudson),31(sudo)
+ cat passfile
+ sudo -S -v
Sorry, user hudson may not run sudo on sc11136681.
+ cat passfile
+ sudo -S ls /root
hudson is not allowed to run sudo on sc11136681. This incident will be reported.
The above is trying to execute:
cat passfile | sudo -S -v
cat passfile | sudo -S ls /root
Why does it work when I ssh to the machine directly, but not when Hudson uses ssh? Is there a way to make sudo work in a Hudson job without adding the hudson user to the sudoers file?
Edit: here is the output when executing the sudo commands after I ssh to the system as the hudson user:
[hudson@sc11136681 ~]$ cat passfile | sudo -S -v
[sudo] password for hudson: [hudson@sc11136681 ~]$
[hudson@sc11136681 ~]$
[hudson@sc11136681 ~]$ cat passfile | sudo -S ls /root
anaconda-ks.cfg install.log.syslog jaytest
install.log iscsi_pool_protocol_fields_file subnets
The solution to this problem that worked for us was to reinstall sudo locally on the system. Command used:
sudo yum reinstall sudo
Once installed, we had to make sure the right sudo was picked up:
export PATH=/usr/bin:$PATH
The above can be added to the slave configuration so it works for all jobs on that slave.
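For a quick sanity check from a job's shell step, something like the following could confirm which sudo binary the job resolves; it reuses the passfile approach from the question and assumes the locally installed sudo lives in /usr/bin:
#!/bin/bash
export PATH=/usr/bin:$PATH
echo "Using sudo at: $(which sudo)"    # should point at the locally installed /usr/bin/sudo
cat passfile | sudo -S -v && echo "sudo accepted the credentials for $(id -un)"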
