Error when using withCredentials in a Jenkinsfile - jenkins

In one of the stages of my Jenkins pipeline, I do
stage('SSH into the Server') {
    steps {
        withCredentials([sshUserPrivateKey(
                credentialsId: '<ID>',
                keyFileVariable: 'KEY_FILE')]) {
            sh '''
                cat ${KEY_FILE} > ./key_key.key
                eval $(ssh-agent -s)
                chmod 600 ./key_key.key
                ssh-add ./key_key.key
                ssh-add -L
                ssh <username>@<server> docker ps
            '''
        }
    }
}
I just want to SSH into a server and run docker ps.
The credentialsId comes from the Global Credentials in my Jenkins server.
However, when running this,
I get
+ cat ****
++ ssh-agent -s
+ eval 'SSH_AUTH_SOCK=/tmp/ssh-.../agent.57271;' export 'SSH_AUTH_SOCK;' 'SSH_AGENT_PID=57272;' export 'SSH_AGENT_PID;' echo Agent pid '57272;'
++ SSH_AUTH_SOCK=/tmp/ssh-.../agent.57271
++ export SSH_AUTH_SOCK
++ SSH_AGENT_PID=57272
++ export SSH_AGENT_PID
++ echo Agent pid 57272
Agent pid 57272
+ chmod 600 ./key_key.key
+ ssh-add ./key_key.key
And then it just fails with no further messages.
Am I doing it wrong?

Based on your intention, I think that's a very complicated way to do it.
I'd strongly recommend using the SSH Agent plugin.
https://jenkins.io/doc/pipeline/steps/ssh-agent/
You can achieve it in one step.
sshagent(credentials: ['<ID>']) {
    sh 'ssh <username>@<server> docker ps'
}
Use the same sshUserPrivateKey credentialsId from the Global Credentials that you mentioned above.
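For reference, a minimal declarative-pipeline sketch of that one step (assuming the SSH Agent plugin is installed and '<ID>' is the same sshUserPrivateKey credential; StrictHostKeyChecking=no is added here only as an assumption, to avoid the interactive host-key prompt on a fresh agent):

stage('SSH into the Server') {
    steps {
        // The plugin starts ssh-agent, loads the key, and kills the agent afterwards.
        sshagent(credentials: ['<ID>']) {
            sh 'ssh -o StrictHostKeyChecking=no <username>@<server> docker ps'
        }
    }
}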

Related

jenkins how to use ssh-agent in docker

My Jenkins runs in Docker, and I wrote a demo to connect to my remote server with ssh-agent.
Here is my pipeline
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                sshagent(credentials: ['hehu']) {
                    sh 'ssh -o StrictHostKeyChecking=no -l yunwei xxx.xxx.xx.25 -a'
                    sh 'pwd'
                    sh 'whoami'
                }
            }
        }
    }
}
Output
It looks like the pwd and whoami commands still run in the Jenkins Docker container, not on my server. I have no idea how to use this plugin; I can't find any usage examples in the ssh-agent documentation.
You should use:
sh 'ssh -o StrictHostKeyChecking=no -l yunwei x.x.x.x "pwd && whoami && cmd..."'
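For completeness, a minimal sketch of the whole block (assuming the same 'hehu' credential as above): quoting the command string is what makes pwd and whoami run on the remote host rather than on the Jenkins agent.

sshagent(credentials: ['hehu']) {
    // Everything inside the double quotes is executed by the remote shell.
    sh 'ssh -o StrictHostKeyChecking=no -l yunwei xxx.xxx.xx.25 "pwd && whoami"'
}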

Jenkins inconsistency (file changes every time)

I am new to Jenkins and still trying to understand how it actually works.
What I am trying to do is pretty simple: I trigger the build whenever I push to my GitHub repo.
Then, I try to SSH into a server.
My pipeline looks like this:
pipeline {
    agent any
    stages {
        stage('SSH into the server') {
            steps {
                withCredentials([sshUserPrivateKey(
                        credentialsId: '<id>',
                        keyFileVariable: 'KEY_FILE')]) {
                    sh '''
                        cd ~/.ssh
                        ls
                        cat ${KEY_FILE} > ./deployer_key.key
                        eval $(ssh-agent -s)
                        chmod 600 ./deployer_key.key
                        ssh-add ./deployer_key.key
                        ssh root@<my-server> ps -a
                        ssh-agent -k
                    '''
                }
            }
        }
    }
}
It's literally a simple ssh task.
However, I am getting inconsistent results.
When I check the log,
Failed Case
Masking supported pattern matches of $KEY_FILE
[Pipeline] {
[Pipeline] sh
+ cd /bms/home/pdsint/.ssh
+ ls
authorized_keys
known_hosts
known_hosts.old
+ cat ****
++ ssh-agent -s
+ eval 'SSH_AUTH_SOCK=/tmp/ssh-hb6yX48CJPQA/agent.51702;' export 'SSH_AUTH_SOCK;' 'SSH_AGENT_PID=51703;' export 'SSH_AGENT_PID;' echo Agent pid '51703;'
++ SSH_AUTH_SOCK=/tmp/ssh-hb6yX48CJPQA/agent.51702
++ export SSH_AUTH_SOCK
++ SSH_AGENT_PID=51703
++ export SSH_AGENT_PID
++ echo Agent pid 51703
Agent pid 51703
+ chmod 600 ./deployer_key.key
+ ssh-add ./deployer_key.key
Identity added: ./deployer_key.key (./deployer_key.key)
+ ssh root@<my-server> docker ps -a
Host key verification failed.
When I ls inside the .ssh directory, it has those files.
In the success case,
Success Case
+ cd /bms/home/pdsint/.ssh
+ ls
authorized_keys
authorized_keys.bak <----------
known_hosts
known_hosts.old
+ cat ****
++ ssh-agent -s
+ eval 'SSH_AUTH_SOCK=/tmp/ssh-yDNVe51565/agent.51565;' export 'SSH_AUTH_SOCK;' 'SSH_AGENT_PID=51566;' export 'SSH_AGENT_PID;' echo Agent pid '51566;'
++ SSH_AUTH_SOCK=/tmp/ssh-yDNVe51565/agent.51565
++ export SSH_AUTH_SOCK
++ SSH_AGENT_PID=51566
++ export SSH_AGENT_PID
++ echo Agent pid 51566
Agent pid 51566
+ chmod 600 ./deployer_key.key
+ ssh-add ./deployer_key.key
Identity added: ./deployer_key.key (./deployer_key.key)
+ ssh root@<my-server> docker ps -a
Warning: Permanently added '<my-server>' (RSA) to the list of known hosts.
It has the authorized_keys.bak file.
I don't really think that file makes the difference, but all the success logs have it while all the failure logs do not. Also, I really don't get why each build sees different files. Aren't builds supposed to be independent of each other? Isn't that the point of Jenkins (building/testing/deploying in a clean environment)?
Any help would be appreciated. Thanks.
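One way to make the host-key step deterministic (a hedged sketch, not from the original post; it assumes the Jenkins node can reach <my-server> and that trusting its key on first scan is acceptable) is to record the server's host key explicitly instead of relying on whatever happens to be in ~/.ssh/known_hosts:

withCredentials([sshUserPrivateKey(
        credentialsId: '<id>',
        keyFileVariable: 'KEY_FILE')]) {
    sh '''
        # Append the server's current host key so the build no longer depends
        # on the existing contents of ~/.ssh/known_hosts.
        ssh-keyscan <my-server> >> ~/.ssh/known_hosts
        ssh -i "${KEY_FILE}" root@<my-server> docker ps -a
    '''
}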

SSH into a remote server in a Jenkins pipeline using withCredentials

I want to SSH into a server to perform some tasks in my Jenkins pipeline.
Here are the steps that I went through.
On my remote server, I used ssh-keygen to create id_rsa and id_rsa.pub.
I copied the contents of id_rsa and pasted them into the Private Key field in the Global Credentials menu on my Jenkins server.
In my Jenkinsfile, I do
stage('SSH into the server') {
    steps {
        withCredentials([sshUserPrivateKey(
                credentialsId: '<ID>',
                keyFileVariable: 'KEY_FILE')]) {
            sh '''
                more ${KEY_FILE}
                cat ${KEY_FILE} > ./key_key.key
                eval $(ssh-agent -s)
                chmod 600 ./key_key.key
                ssh-add ./key_key.key
                cd ~/.ssh
                echo "ssh-rsa ... (the string from the server's id_rsa.pub)" >> authorized_keys
                ssh root@<server_name> docker ps
            '''
        }
    }
}
It pretty much starts an ssh-agent using the private key of the remote server and adds the public key to authorized_keys.
As a result, this gives me Host key verification failed.
I just want to SSH into the remote server, but I keep facing this issue. Any help?
LOG
++ ssh-agent -s
+ eval 'SSH_AUTH_SOCK=/tmp/ssh-xfcQYEfiyfRs/agent.26353;' export 'SSH_AUTH_SOCK;' 'SSH_AGENT_PID=26354;' export 'SSH_AGENT_PID;' echo Agent pid '26354;'
++ SSH_AUTH_SOCK=/tmp/ssh-xfcQYEfiyfRs/agent.26353
++ export SSH_AUTH_SOCK
++ SSH_AGENT_PID=26354
++ export SSH_AGENT_PID
++ echo Agent pid 26354
Agent pid 26354
+ chmod 600 ./key_key.key
+ ssh-add ./key_key.key
Identity added: ./key_key.key (./key_key.key)
+ ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ./key_key.key root@<server> docker ps
Warning: Permanently added '<server>, <IP>' (ECDSA) to the list of known hosts.
WARNING!!!
READ THIS BEFORE ATTEMPTING TO LOGON
This System is for the use of authorized users only. ....
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
It is failing because StrictHostKeyChecking is enabled. Change your ssh command as below and it should work fine.
ssh -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" root@<server_name> docker ps
StrictHostKeyChecking=no disables the prompt for host key verification.
UserKnownHostsFile=/dev/null skips recording the host key by sending it to /dev/null.
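Put together, a minimal sketch of the whole step (assuming the same sshUserPrivateKey credential and that the matching public key is already in the server's authorized_keys), using the key file directly instead of copying it and starting an agent, as the masked command in the log above suggests:

stage('SSH into the server') {
    steps {
        withCredentials([sshUserPrivateKey(
                credentialsId: '<ID>',
                keyFileVariable: 'KEY_FILE')]) {
            sh '''
                # -i points ssh at the temporary key file Jenkins provides.
                ssh -o UserKnownHostsFile=/dev/null \
                    -o StrictHostKeyChecking=no \
                    -i "${KEY_FILE}" root@<server_name> docker ps
            '''
        }
    }
}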

execute commands on remote host in a Jenkinsfile

I am trying to SSH into a remote host and then execute certain commands in the remote host's shell. Following is my pipeline code.
pipeline {
    agent any
    environment {
        // comment added
        APPLICATION = 'app'
        ENVIRONMENT = 'dev'
        MAINTAINER_NAME = 'jenkins'
        MAINTAINER_EMAIL = 'jenkins@email.com'
    }
    stages {
        stage('clone repository') {
            steps {
                // cloning repo
                checkout scm
            }
        }
        stage('Build Image') {
            steps {
                script {
                    sshagent(credentials: ['jenkins-pem']) {
                        sh "echo pwd"
                        sh 'ssh -t -t ubuntu@xx.xxx.xx.xx -o StrictHostKeyChecking=no'
                        sh "echo pwd"
                        sh 'sudo -i -u root'
                        sh 'cd /opt/docker/web'
                        sh 'echo pwd'
                    }
                }
            }
        }
    }
}
But upon running this job, it executes sh 'ssh -t -t ubuntu@xx.xxx.xx.xx -o StrictHostKeyChecking=no' successfully, but it stops there and does not execute any further commands. I want to execute the commands written after the ssh command inside the remote host's shell. Any help is appreciated.
I would try something like this:
sshagent(credentials: ['jenkins-pem']) {
    sh "echo pwd"
    sh 'ssh -t -t ubuntu@xx.xxx.xx.xx -o StrictHostKeyChecking=no "echo pwd && sudo -i -u root && cd /opt/docker/web && echo pwd"'
}
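One caveat with that one-liner (not from the original answer, just a shell detail): sudo -i -u root starts an interactive root shell, so the cd and echo after && only run once that shell exits, and not as root. A hedged variation (assuming passwordless sudo for the ubuntu user and the same 'jenkins-pem' credential) that keeps the root-level commands in a single remote shell:

sshagent(credentials: ['jenkins-pem']) {
    sh '''
        # The double-quoted string is the remote command; bash -c runs the grouped commands as root.
        ssh -tt -o StrictHostKeyChecking=no ubuntu@xx.xxx.xx.xx \
            "sudo bash -c 'cd /opt/docker/web && pwd'"
    '''
}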
I resolved this issue with:
script {
    sh """ssh -tt login@host << EOF
your command
exit
EOF"""
}
stage("DEPLOY CONTAINER"){
steps {
script {
sh """
#!/bin/bash
sudo ssh -i /path/path/keyname.pem username#serverip << EOF
sudo bash /opt/filename.sh
exit 0
<< EOF
"""
}
}
}
There is a better way to run commands on a remote host over SSH. I know this is a late answer, but I just explored this, so I would like to share it; it should help others resolve this problem easily.
I found a blog post helpful on how to run multiple commands on a remote host over SSH. We can also run multiple commands conditionally, as mentioned in that blog.
By going through it, I found the syntax:
ssh username@hostname "command1; command2; commandN"
Now, how do you run commands on a remote host over SSH in a Jenkins pipeline?
Here is the solution:
pipeline {
    agent any
    environment {
        /*
        define your command in variable
        */
        remoteCommands =
            """java --version;
            java --version;
            java --version """
    }
    stages {
        stage('Login to remote host') {
            steps {
                sshagent(['ubnt-creds']) {
                    /*
                    Provide variable as argument in ssh command
                    */
                    sh 'ssh -tt username@hostname $remoteCommands'
                }
            }
        }
    }
}
First, and optionally, you can define a variable that holds all the commands separated by ; (semicolon) and then pass it as an argument to the ssh command.
Alternatively, you can pass your commands directly to the ssh command:
sh "ssh -tt username@hostname 'command1; command2; commandN'"
I have used it in my code and it works great!
Happy Learning :)

Docker Plugin for Jenkins Pipeline - No user exists for uid 1005

I'm trying to execute an SSH command from inside a Docker container in a Jenkins pipeline. I'm using the CloudBees Docker Pipeline Plugin to spin up the container and execute commands, and the SSH Agent Plugin to manage my SSH keys. Here's a basic version of my Jenkinsfile:
node {
    step([$class: 'WsCleanup'])
    docker.image('node').inside {
        stage('SSH') {
            sshagent(credentials: ['MY_KEY_UUID']) {
                sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a"
            }
        }
    }
}
When the SSH command runs, I get this error:
+ ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a
No user exists for uid 1005
I combed through the logs and realized the Docker Pipeline Plugin is automatically telling the container to run with the same user that is logged in on the host by passing a UID as a command line argument:
$ docker run -t -d -u 1005:1005 [...]
I decided to check what users existed in the host and the container by running cat /etc/passwd in each environment. Sure enough, the list of users was different in each. 1005 was the jenkins user on the host machine, but that UID didn't exist in the container. To solve the issue, I mounted /etc/passwd from the host to the container when spinning it up:
node {
    step([$class: 'WsCleanup'])
    docker.image('node').inside('-v /etc/passwd:/etc/passwd') {
        stage('SSH') {
            sshagent(credentials: ['MY_KEY_UUID']) {
                sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a"
            }
        }
    }
}
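A small variation on the same idea (just a sketch, not part of the original answer): mounting the file read-only keeps the container from being able to modify the host's passwd file.

node {
    step([$class: 'WsCleanup'])
    // ':ro' makes the bind mount read-only inside the container.
    docker.image('node').inside('-v /etc/passwd:/etc/passwd:ro') {
        stage('SSH') {
            sshagent(credentials: ['MY_KEY_UUID']) {
                sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a"
            }
        }
    }
}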
The solution provided by @nathan-thompson is awesome, but in my case I was unable to find the user even in the /etc/passwd of the host machine! It means mounting the passwd file did not fix the problem. This question https://superuser.com/questions/580148/users-not-found-in-etc-passwd suggests that some users are logged in on the host using an identity provider like LDAP.
The solution was finding a way to add the proper line to the passwd file on the container. Calling getent passwd $USER on the host will provide the passwd line for the Jenkins user running the container.
I added a step running on the node (and not on the Docker agent) to get the line and save it in a file. Then, in the next step, I mounted the generated passwd file into the container:
stages {
    stage('Create passwd') {
        steps {
            sh """echo \$(getent passwd \$USER) > /tmp/tmp_passwd
            """
        }
    }
    stage('Test') {
        agent {
            docker {
                image '*******'
                args '***** -v /tmp/tmp_passwd:/etc/passwd'
                reuseNode true
                registryUrl '*****'
                registryCredentialsId '*****'
            }
        }
        steps {
            sh """ssh -i ********
            """
        }
    }
}
I just found another solution to this problem that I want to share. It differs from the existing solutions in that it allows running the complete pipeline in one agent, instead of one per stage.
The trick is, instead of directly using an image, to refer to a Dockerfile (which may be built FROM the original) and then add the user:
# Dockerfile
FROM node
ARG jenkinsUserId=
RUN if ! id $jenkinsUserId; then \
        usermod -u ${jenkinsUserId} jenkins; \
        groupmod -g ${nodeId} jenkins; \
    fi
// Jenkinsfile
pipeline {
    agent {
        dockerfile {
            additionalBuildArgs "--build-arg jenkinsUserId=\$(id -u jenkins)"
        }
    }
}
agent {
    docker {
        image 'node:14.10.1-buster-slim'
        args '-u root:root'
    }
}
environment {
    SSH_deploy = credentials('e99988ea-6bdc-45fc-b9e1-536b875bcac7')
}
stage('build') {
    steps {
        sh '''#!/bin/bash
            eval $(ssh-agent -s)
            cat $SSH_deploy | tr -d '\r' | ssh-add -
            touch .env
            echo 'REACT_APP_BASE_API = "//172.22.132.115:8080"' >> .env
            echo 'REACT_APP_ADMIN_PANEL_URL = "//172.22.132.115"' >> .env
            yarn install
            CI=false npm run build
            ssh -t -o StrictHostKeyChecking=no root@172.22.132.115 'rm -rf /usr/local/src/build'
            scp -r -o StrictHostKeyChecking=no build root@172.22.132.115:/usr/local/src/
            ssh -t -o StrictHostKeyChecking=no root@172.22.132.115 'systemctl restart nginx'
        '''
    }
}
Building on the solution provided by Nathan Thompson, I modified it this way for a Jenkins Docker build container that runs inside a Jenkins Docker slave (Docker-in-Docker).
if (validated_parameters.custom_gradle_image) {
    docker.image(validated_parameters.custom_gradle_image).inside(" -v /etc/passwd:/etc/passwd -v /var/lib/jenkins/.ssh/:/var/lib/jenkins/.ssh/ ") {
        sshagent(['jenkins-git-io']) {
            sh "${gradleCommand}"
        }
    }
}
