I am new to Jenkins and still trying to understand how it actually works.
What I am trying to do is pretty simple: the build is triggered whenever I push to my GitHub repo, and then I try to SSH into a server.
My pipeline looks like this:
pipeline {
    agent any
    stages {
        stage('SSH into the server') {
            steps {
                withCredentials([sshUserPrivateKey(
                        credentialsId: '<id>',
                        keyFileVariable: 'KEY_FILE')]) {
                    sh '''
                        cd ~/.ssh
                        ls
                        cat ${KEY_FILE} > ./deployer_key.key
                        eval $(ssh-agent -s)
                        chmod 600 ./deployer_key.key
                        ssh-add ./deployer_key.key
                        ssh root@<my-server> docker ps -a
                        ssh-agent -k
                    '''
                }
            }
        }
    }
}
It is literally a simple SSH task. However, I am getting inconsistent results. When I check the logs:
Failed Case
Masking supported pattern matches of $KEY_FILE
[Pipeline] {
[Pipeline] sh
+ cd /bms/home/pdsint/.ssh
+ ls
authorized_keys
known_hosts
known_hosts.old
+ cat ****
++ ssh-agent -s
+ eval 'SSH_AUTH_SOCK=/tmp/ssh-hb6yX48CJPQA/agent.51702;' export 'SSH_AUTH_SOCK;' 'SSH_AGENT_PID=51703;' export 'SSH_AGENT_PID;' echo Agent pid '51703;'
++ SSH_AUTH_SOCK=/tmp/ssh-hb6yX48CJPQA/agent.51702
++ export SSH_AUTH_SOCK
++ SSH_AGENT_PID=51703
++ export SSH_AGENT_PID
++ echo Agent pid 51703
Agent pid 51703
+ chmod 600 ./deployer_key.key
+ ssh-add ./deployer_key.key
Identity added: ./deployer_key.key (./deployer_key.key)
+ ssh root@<my-server> docker ps -a
Host key verification failed.
When I ls inside the .ssh directory, it has those files.
In the success case,
Success Case
+ cd /bms/home/pdsint/.ssh
+ ls
authorized_keys
authorized_keys.bak <----------
known_hosts
known_hosts.old
+ cat ****
++ ssh-agent -s
+ eval 'SSH_AUTH_SOCK=/tmp/ssh-yDNVe51565/agent.51565;' export 'SSH_AUTH_SOCK;' 'SSH_AGENT_PID=51566;' export 'SSH_AGENT_PID;' echo Agent pid '51566;'
++ SSH_AUTH_SOCK=/tmp/ssh-yDNVe51565/agent.51565
++ export SSH_AUTH_SOCK
++ SSH_AGENT_PID=51566
++ export SSH_AGENT_PID
++ echo Agent pid 51566
Agent pid 51566
+ chmod 600 ./deployer_key.key
+ ssh-add ./deployer_key.key
Identity added: ./deployer_key.key (./deployer_key.key)
+ ssh root@<my-server> docker ps -a
Warning: Permanently added '<my-server>' (RSA) to the list of known hosts.
It has the authorized_keys.bak file.
I don't really think that file makes the difference, but all success logs have that file while all failure logs do not. I also really don't get why each build sees different files. Aren't builds supposed to be independent of each other? Isn't that the point of Jenkins (building/testing/deploying in a fresh environment)?
Any help would be appreciated. Thanks.
Related
In one of the stages of my Jenkins pipeline, I do
stage('SSH into the Server') {
    steps {
        withCredentials([sshUserPrivateKey(
                credentialsId: '<ID>',
                keyFileVariable: 'KEY_FILE')]) {
            sh '''
                cat ${KEY_FILE} > ./key_key.key
                eval $(ssh-agent -s)
                chmod 600 ./key_key.key
                ssh-add ./key_key.key
                ssh-add -L
                ssh <username>@<server> docker ps
            '''
        }
    }
}
Just to simply SSH into a server and check docker ps.
The credentialsId is from the Global Credentials on my Jenkins server.
However, when running this,
I get
+ cat ****
++ ssh-agent -s
+ eval 'SSH_AUTH_SOCK=/tmp/ssh-.../agent.57271;' export 'SSH_AUTH_SOCK;' 'SSH_AGENT_PID=57272;' export 'SSH_AGENT_PID;' echo Agent pid '57272;'
++ SSH_AUTH_SOCK=/tmp/ssh-.../agent.57271
++ export SSH_AUTH_SOCK
++ SSH_AGENT_PID=57272
++ export SSH_AGENT_PID
++ echo Agent pid 57272
Agent pid 57272
+ chmod 600 ./key_key.key
+ ssh-add ./key_key.key
It just fails with no further messages. Am I doing it wrong?
Based on your intention, I think that's a very complicated way to do it.
I'd strongly recommend using the SSH Agent plugin.
https://jenkins.io/doc/pipeline/steps/ssh-agent/
You can achieve it in one step.
sshagent(credentials: ['<ID>']) {
    sh 'ssh <username>@<server> docker ps'
}
Use the same sshUserPrivateKey credentialsId from the Global Credentials that you mentioned above.
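For context, here is a minimal sketch of how that step might sit inside a full declarative pipeline; the stage name and the <ID>/<username>/<server> placeholders are illustrative:

pipeline {
    agent any
    stages {
        stage('SSH into the server') {
            steps {
                // Provided by the SSH Agent plugin: starts ssh-agent,
                // loads the key from the credential, and kills the agent afterwards.
                sshagent(credentials: ['<ID>']) {
                    sh 'ssh <username>@<server> docker ps'
                }
            }
        }
    }
}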
I want to SSH into a server to perform some tasks in my Jenkins pipeline.
Here are the steps that I went through.
On my remote server, I used ssh-keygen to create id_rsa and id_rsa.pub.
I copied the contents of id_rsa and pasted them into the Private Key field in the Global Credentials menu on my Jenkins server.
In my Jenkinsfile, I do
stage('SSH into the server') {
    steps {
        withCredentials([sshUserPrivateKey(
                credentialsId: '<ID>',
                keyFileVariable: 'KEY_FILE')]) {
            sh '''
                more ${KEY_FILE}
                cat ${KEY_FILE} > ./key_key.key
                eval $(ssh-agent -s)
                chmod 600 ./key_key.key
                ssh-add ./key_key.key
                cd ~/.ssh
                echo "ssh-rsa ... (the string from the server's id_rsa.pub)" >> authorized_keys
                ssh root@<server_name> docker ps
            '''
        }
    }
}
It pretty much starts an ssh-agent using the private key of the remote server and appends the public key to authorized_keys.
As a result, this gives me Host key verification failed.
I just wanted to SSH into the remote server, but I keep facing this issue. Any help?
LOG
++ ssh-agent -s
+ eval 'SSH_AUTH_SOCK=/tmp/ssh-xfcQYEfiyfRs/agent.26353;' export 'SSH_AUTH_SOCK;' 'SSH_AGENT_PID=26354;' export 'SSH_AGENT_PID;' echo Agent pid '26354;'
++ SSH_AUTH_SOCK=/tmp/ssh-xfcQYEfiyfRs/agent.26353
++ export SSH_AUTH_SOCK
++ SSH_AGENT_PID=26354
++ export SSH_AGENT_PID
++ echo Agent pid 26354
Agent pid 26354
+ chmod 600 ./key_key.key
+ ssh-add ./key_key.key
Identity added: ./key_key.key (./key_key.key)
+ ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ./key_key.key root@<server> docker ps
Warning: Permanently added '<server>, <IP>' (ECDSA) to the list of known hosts.
WARNING!!!
READ THIS BEFORE ATTEMPTING TO LOGON
This System is for the use of authorized users only. ....
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
It is failing because StrictHostKeyChecking is enabled. Change your ssh command as below and it should work fine.
ssh -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" root@<server_name> docker ps
StrictHostKeyChecking=no disables the interactive prompt for host key verification.
UserKnownHostsFile=/dev/null points the known-hosts file at /dev/null, so no host keys are ever recorded or checked.
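Keep in mind that disabling host key checking gives up protection against man-in-the-middle attacks. A sketch of an alternative that keeps verification on, assuming ssh-keyscan is available on the build agent:

# Record the server's host key once, then ssh with verification intact.
mkdir -p ~/.ssh
ssh-keyscan -H <server_name> >> ~/.ssh/known_hosts
ssh root@<server_name> docker ps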
I am trying to create a Jenkins pipeline where I need to execute multiple shell commands and use the result of one command in the next. I found that wrapping the commands in a pair of triple single quotes (''') lets me run them as one multi-line script. However, I am facing issues when using a pipe to feed the output of one command to another. For example:
stage('Test') {
    sh '''
        echo "Executing Tests"
        URL=`curl -s "http://localhost:4040/api/tunnels/command_line" | jq -r '.public_url'`
        echo $URL
        RESULT=`curl -sPOST "https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=$URL" | jq -r '.code'`
        echo $RESULT
    '''
}
The commands with pipes are not working properly. Here is the Jenkins console output:
+ echo Executing Tests
Executing Tests
+ curl -s http://localhost:4040/api/tunnels/command_line
+ jq -r .public_url
+ URL=null
+ echo null
null
+ curl -sPOST https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=null
I tried entering all these commands in the Jenkins Snippet Generator for pipelines, and it gave the following output:
sh ''' echo "Executing Tests"
URL=`curl -s "http://localhost:4040/api/tunnels/command_line" | jq -r \'.public_url\'`
echo $URL
RESULT=`curl -sPOST "https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=$URL" | jq -r \'.code\'`
echo $RESULT
'''
Notice the escaped single quotes in the commands jq -r \'.public_url\' and jq -r \'.code\'. Using the code this way solved the problem.
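As a side note, an alternative that sidesteps shell quoting entirely is to capture each command's output with returnStdout and do the string handling in Groovy; a sketch assuming the same endpoints as above:

script {
    // Run the pipe inside the shell, but hand the result back to Groovy.
    def url = sh(
        script: 'curl -s "http://localhost:4040/api/tunnels/command_line" | jq -r .public_url',
        returnStdout: true
    ).trim()
    echo "Tunnel URL: ${url}"
}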
UPDATE: After a while, even that started to give problems. Certain commands were executing prior to these, one of them being grunt serve and the other ./ngrok http 9000. I added some delay after each of these commands, and that solved the problem for now.
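If the fixed delays ever turn flaky again, a more robust pattern is to poll until the dependency is actually up; a sketch against the ngrok API endpoint used above:

# Wait up to ~30 seconds for the ngrok API instead of sleeping a fixed time.
for i in $(seq 1 30); do
    curl -sf http://localhost:4040/api/tunnels/command_line >/dev/null && break
    sleep 1
done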
The following shows a real example that needs multiline shell commands. Say you are using a plugin like Publish Over SSH and you need to execute a set of commands on the destination host in a single SSH session:
stage ('Prepare destination host') {
    sh '''
        ssh -t -t user@host 'bash -s << 'ENDSSH'
        if [[ -d "/path/to/some/directory/" ]];
        then
            rm -f /path/to/some/directory/*.jar
        else
            sudo mkdir -p /path/to/some/directory/
            sudo chmod -R 755 /path/to/some/directory/
            sudo chown -R user:user /path/to/some/directory/
        fi
ENDSSH'
    '''
}
Special Notes:
- The last ENDSSH' should not have any characters before it, so it must start at the beginning of a new line.
- Use ssh -t -t if you have sudo within the remote shell command.
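For comparison, a sketch of the same idea with more conventional quoting; quoting the heredoc delimiter (<<'ENDSSH') stops the local shell from expanding anything in the body, and the terminator still has to sit flush left:

ssh -t -t user@host 'bash -s' <<'ENDSSH'
if [[ -d "/path/to/some/directory/" ]]; then
    rm -f /path/to/some/directory/*.jar
else
    sudo mkdir -p /path/to/some/directory/
fi
ENDSSH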
I split the commands with &&
node {
    def FOO = 'world'
    stage('Preparation') { // for display purposes
        sh "ls -a && pwd && echo ${FOO}"
    }
}
The example outputs:
- ls -a (the files in your workspace)
- pwd (the workspace location)
- echo world
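One detail worth noting in this example: because the sh step uses double quotes, ${FOO} is interpolated by Groovy before the shell ever runs. A small sketch contrasting the two quoting styles, using withEnv to pass the value when single quotes are used:

node {
    def FOO = 'world'
    // Double quotes: Groovy substitutes ${FOO} before the shell sees the command.
    sh "echo ${FOO}"
    // Single quotes: the shell expands $FOO itself, so it must be in the environment.
    withEnv(["FOO=${FOO}"]) {
        sh 'echo $FOO'
    }
}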
I am trying to execute the following script via a Jenkins "execute shell" build step:
rm -r -f _dpatch;
mkdir _dpatch;
mkdir _dpatch/deploy;
from_revision='HEAD';
to_revision='2766920';
git diff --name-only $from_revision $to_revision > "_dpatch/deploy/files.txt";
for file in $(<"_dpatch/deploy/files.txt"); do cp --parents "$file" "_dpatch"; done;
whoami
The build ends successfully with this console output:
[Deploy to production] $ /bin/sh -xe /tmp/hudson8315034696077699718.sh
+ rm -r -f _dpatch
+ mkdir _dpatch
+ mkdir _dpatch/deploy
+ from_revision=HEAD
+ to_revision=2766920
+ git diff --name-only HEAD 2766920
+
+ whoami
jenkins
Finished: SUCCESS
The problem is that the "for file in" line is just ignored, and I do not understand why.
The content of files.txt is not empty and looks like this:
addons/tiny_mce/plugins/image/plugin.min.org.js
addons/webrtc/adapter-latest.js
templates/standard/style/review.css
Moreover, when I execute the same script via SSH in the same Jenkins workspace folder under the same user (jenkins), the "for file in" line executes normally and creates files in the "_dpatch" subfolder as it should.
My environment:
Debian 8,
Jenkins 2.45
Thanks
Possibly your /bin/sh is a POSIX Bourne shell. I think that the $(< construct is a bash-ism, so it will not work with /bin/sh.
Try to replace
$(<"_dpatch/deploy/files.txt")
with
$(cat "_dpatch/deploy/files.txt")
Alternatively, prepend your build step with #!/bin/bash.
If your login shell is bash, then this also explains why everything works fine via ssh.
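Concretely, a minimal sketch of the shebang approach for the same build step:

#!/bin/bash
# With bash forced, the $(<file) construct works as expected.
for file in $(<"_dpatch/deploy/files.txt"); do
    cp --parents "$file" "_dpatch"
done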
Try substituting the for loop with a while loop, and also add some more logging:
rm -r -f _dpatch;
mkdir _dpatch;
mkdir _dpatch/deploy;
from_revision='HEAD';
to_revision='2766920';
git diff --name-only $from_revision $to_revision > "_dpatch/deploy/files.txt" && echo "git diff finished"
while IFS= read -r line; do
    echo "$line"
    cp --parents "$line" _dpatch
done < _dpatch/deploy/files.txt
whoami
Problem statement:
I can sign repos from a normal terminal (also inside Docker). From a Jenkins job, repo creation/signing fails and the job hangs.
Configuration:
Jenkins spawns a docker container to create/sign the deb repository.
Private and public keys are all present.
gpg-agent is installed in the docker container to sign the packages.
The ~/.gnupg/gpg.conf file has "use-agent" enabled.
Progress:
I can start gpg-agent via Jenkins in the docker container.
I can use gpg-preset-passphrase to cache the passphrase.
Outside Jenkins, I can use
reprepro --ask-passphrase -Vb . includedeb ${_repo_name} ${_pkg_location}
to fetch the passphrase from gpg-agent and sign the repo.
Problem:
From inside a Jenkins job, the command "reprepro --ask-passphrase -Vb ..." hangs.
Code:
Starting gpg-agent:
GPGAGENT=/usr/bin/gpg-agent
GNUPG_PID_FILE=${GNUPGHOME}/gpg-agent-info
GNUPG_CFG=${GNUPGHOME}/gpg.conf
GNUPG_AGENT_CFG=${GNUPGHOME}/gpg-agent.conf   # gpg-agent's own config file

function start_gpg_agent {
    GPG_TTY=$(tty)
    export GPG_TTY
    if [ -r "${GNUPG_PID_FILE}" ]
    then
        source "${GNUPG_PID_FILE}"
        count=$(ps lax | grep "${GPGAGENT}" | grep "$SSH_AGENT_PID" | wc -l)
        if [ $count -eq 0 ]
        then
            if ! ${GPGAGENT} 2>/dev/null; then
                $GPGAGENT --debug-all --options ${BASE_PATH}/sign/gpg-agent.options \
                    --daemon --enable-ssh-support \
                    --allow-preset-passphrase --write-env-file ${GNUPG_PID_FILE}
                if [[ $? -eq 0 ]]
                then
                    echo "INFO::agent started"
                else
                    echo "INFO::Agent could not be started. Exit."
                    exit 101
                fi
            fi
        fi
    else
        $GPGAGENT --debug-all --options ${BASE_PATH}/sign/gpg-agent.options \
            --daemon --allow-preset-passphrase --write-env-file ${GNUPG_PID_FILE}
    fi
}
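To verify from inside the job that an agent is actually reachable before signing, one option (assuming gpg-connect-agent is installed in the container) is a quick probe like this:

# Asks the running agent for its cached keys; fails if no agent answers.
gpg-connect-agent 'keyinfo --list' /bye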
Options file:
default-cache-ttl 31536000
default-cache-ttl-ssh 31536000
max-cache-ttl 31536000
max-cache-ttl-ssh 31536000
enable-ssh-support
debug-all
Saving the passphrase:
/usr/lib/gnupg2/gpg-preset-passphrase -v --preset --passphrase ${_passphrase} ${_fp}
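For what it is worth, the cache id passed here (${_fp}) is typically the key's keygrip rather than its fingerprint; assuming a GnuPG recent enough to support the flag (2.1+, which may differ from the version in this container), the keygrip can be listed with:

# Prints a "Keygrip = ..." line for each (sub)key.
gpg --list-secret-keys --with-keygrip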
Finally (for completeness), signing the repo:
reprepro --ask-passphrase -Vb . includedeb ${_repo_name} ${_pkg_location}