I want to SSH into a server to perform some tasks in my Jenkins pipeline.
Here are the steps that I went through.
On my remote server, I used ssh-keygen to create id_rsa and id_rsa.pub.
I copied the contents of id_rsa and pasted them into the Private Key field of the Global Credentials menu on my Jenkins server.
In my Jenkinsfile, I do
stage('SSH into the server') {
    steps {
        withCredentials([sshUserPrivateKey(
            credentialsId: '<ID>',
            keyFileVariable: 'KEY_FILE')]) {
            sh '''
                more ${KEY_FILE}
                cat ${KEY_FILE} > ./key_key.key
                eval $(ssh-agent -s)
                chmod 600 ./key_key.key
                ssh-add ./key_key.key
                cd ~/.ssh
                echo "ssh-rsa ... (the string from the server's id_rsa.pub)" >> authorized_keys
                ssh root@<server_name> docker ps
            '''
        }
    }
}
It pretty much creates an ssh-agent using the private key of the remote server and adds the public key to authorized_keys.
As a result, this gives me: Host key verification failed
I simply want to SSH into the remote server, but I keep facing this issue. Any help?
LOG
++ ssh-agent -s
+ eval 'SSH_AUTH_SOCK=/tmp/ssh-xfcQYEfiyfRs/agent.26353;' export 'SSH_AUTH_SOCK;' 'SSH_AGENT_PID=26354;' export 'SSH_AGENT_PID;' echo Agent pid '26354;'
++ SSH_AUTH_SOCK=/tmp/ssh-xfcQYEfiyfRs/agent.26353
++ export SSH_AUTH_SOCK
++ SSH_AGENT_PID=26354
++ export SSH_AGENT_PID
++ echo Agent pid 26354
Agent pid 26354
+ chmod 600 ./key_key.key
+ ssh-add ./key_key.key
Identity added: ./key_key.key (./key_key.key)
+ ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ./key_key.key root@<server> docker ps
Warning: Permanently added '<server>, <IP>' (ECDSA) to the list of known hosts.
WARNING!!!
READ THIS BEFORE ATTEMPTING TO LOGON
This System is for the use of authorized users only. ....
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
It is failing because StrictHostKeyChecking is enabled. Change your ssh command as below and it should work fine.
ssh -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" root@<server_name> docker ps
StrictHostKeyChecking=no disables the prompt for host key verification.
UserKnownHostsFile=/dev/null skips the host key check by writing the recorded host keys to /dev/null.
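If you would rather not disable host key checking entirely, an alternative sketch is to record the host key once with ssh-keyscan and let verification run normally afterwards (the first scan trusts whatever key the network returns, so compare the fingerprint out of band if you can); <server_name> is a placeholder:

mkdir -p ~/.ssh
ssh-keyscan -H <server_name> >> ~/.ssh/known_hosts   # record the server's host key once
ssh root@<server_name> docker ps                     # later connections verify against it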
My Jenkins has lost connection to the Tomcat server. I have also added the private key to the Jenkins credentials.
This is my Jenkinsfile for the 'Deploy-toTomcat' stage:
steps {
    sshagent(['tomcat']) {
        sh 'scp -o StrictHostKeyChecking=no target/*.war ubuntu@35.239.69.247:/home/nat/prod/apache-tomcat-9.0.41/webapps/webapp.war'
    }
}
}
This is the error when I am trying to build the pipeline in Jenkins
+ scp -o StrictHostKeyChecking = no target/WebApp.war ubuntu@35.239.69.247:/home/nat/prod/apache-tomcat-9.0.41/webapps/webapp.war
command-line line 0: missing argument.
lost connection
script returned exit code 1
error
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 139377 killed;
[ssh-agent] Stopped.
I also ran chmod 777 webapps.
I am following this link https://www.youtube.com/watch?v=dSMSHGoHVJY&list=PLjNII-Jkdjfz5EXWlGMBRk63PC8uJsHMo&index=7 to deploy to Tomcat.
I hope someone can answer my question on how to deploy to Tomcat. The source code I am using to test the pipeline is from https://github.com/cehkunal/webapp.git. Thank you.
The error is because it did not recognize which of the keys was authorized. What I've done:
Delete all previous keys in the ~/.ssh directory, then:
ssh-keygen -t rsa
mv id_rsa.pub authorized_keys
chmod 0600 /home/username/.ssh/authorized_keys
chmod 0700 /home/username/.ssh
cat id_rsa
Finally, insert the contents of id_rsa into Manage Credentials in Jenkins.
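For reference, here is the same server-side setup as a single sketch; the paths and the empty passphrase are assumptions, and appending with >> (rather than replacing authorized_keys) is safer if other keys are already in use:

# Run on the remote server as the account Jenkins will log in as.
cd ~/.ssh
ssh-keygen -t rsa -f ./id_rsa -N ""      # new key pair with an empty passphrase (assumption)
cat ./id_rsa.pub >> ./authorized_keys    # append, so existing authorized keys are kept
chmod 0600 ~/.ssh/authorized_keys
chmod 0700 ~/.ssh
cat ./id_rsa                             # paste this private key into the Jenkins credential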
I have the weirdest error in GitHub Actions that I have been trying to resolve for multiple hours now and I am all out of ideas.
I currently use a very simple GitHub Action. The end goal is to run specific bash commands via ssh in other workflows.
Dockerfile:
FROM ubuntu:latest
COPY entrypoint.sh /entrypoint.sh
RUN apt update && apt install openssh-client -y
RUN chmod +x entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
mkdir -p ~/.ssh/
echo "$1" > ~/.ssh/private.key
chmod 600 ~/.ssh/private.key
echo "$2" > ~/.ssh/known_hosts
echo "ssh-keygen"
ssh-keygen -y -e -f ~/.ssh/private.key
echo "ssh-keyscan"
ssh-keyscan <IP>
ssh -i ~/.ssh/private.key -tt <USER>@<IP> "echo test > testfile1"
echo "known hosts"
cat ~/.ssh/known_hosts
wc -m ~/.ssh/known_hosts
action.yml:
name: "SSH Runner"
description: "Runs bash commands in remote server via SSH"
inputs:
  ssh_key:
    description: 'SSH Key'
  known_hosts:
    description: 'Known Hosts'
runs:
  using: 'docker'
  image: 'Dockerfile'
  args:
    - ${{ inputs.ssh_key }}
    - ${{ inputs.known_hosts }}
current workflow file in the same repo:
on: [push]
jobs:
  try-ssh-commands:
    runs-on: ubuntu-latest
    name: SSH MY_TEST
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: test_ssh
        uses: ./
        with:
          ssh_key: ${{secrets.SSH_PRIVATE_KEY}}
          known_hosts: ${{secrets.SSH_KNOWN_HOSTS}}
In the GitHub Actions online console I get the following output:
ssh-keygen
---- BEGIN SSH2 PUBLIC KEY ----
Comment: "2048-bit RSA, converted by root#844d5e361d21 from OpenSSH"
AAAAB3NzaC1yc2EAAAADAQABAAABAQDaj/9Guq4M9V/jEdMWFrnUOzArj2AhneV3I97R6y
<...>
9f/7rCMTJwae65z5fTvfecjIaUEzpE3aen7fR5Umk4MS925/1amm0GKKSa2OOEQnWg2Enp
Od9V75pph54v0+cYfJcbab
---- END SSH2 PUBLIC KEY ----
ssh-keyscan
# <IP>:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# <IP>:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# <IP>:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# <IP>:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# <IP>:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
<IP> ssh-ed25519 AAAAC3NzaC1lZD<...>9r5SNohBUitk
<IP> ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRNWiDWO65SKQnYZafcnkVhWKyxxi5r+/uUS2zgYdXvuZ9UIREw5sumR95kbNY1V90<...>
qWXryZYaMqMiWlTi6ffIC5ZoPcgGHjwJRXVmz+jdOmdx8eg2llYatRQbH7vGDYr4zSztXGM77G4o4pJsaMA/
***
Host key verification failed.
known hosts
***
175 /github/home/.ssh/known_hosts
As far as I understand, *** is used to mask GitHub secrets, which in my case is the host key in known_hosts. Getting *** as the output of both ssh-keyscan and cat known_hosts should mean that the known_hosts file is correct and a connection should be possible, because in both cases the console output is successfully masked by GitHub. And since the file contains 175 characters, I can assume it contains the actual key. But as one can see, the script fails with Host key verification failed.
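(One way to confirm that the client is actually consulting this file, rather than a known_hosts under a different HOME, is to query it directly and run ssh verbosely; a sketch, with <USER> and <IP> as placeholders:)

# Does the file contain an entry for the host?
ssh-keygen -F <IP> -f /github/home/.ssh/known_hosts

# Which known_hosts files does the client actually read?
ssh -v -i ~/.ssh/private.key <USER>@<IP> true 2>&1 | grep -i known_hosts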
When I do the same steps manually in another workflow with the exact same input data, I succeed. The same goes for ssh from my local computer with the same private key and known_hosts files.
This, for example, works with the exact same secrets:
- name: Create SSH key
  run: |
    mkdir -p ~/.ssh/
    echo "$SSH_PRIVATE_KEY" > ../private.key
    sudo chmod 600 ../private.key
    echo "$SSH_KNOWN_HOSTS_PROD" > ~/.ssh/known_hosts
  shell: bash
  env:
    SSH_PRIVATE_KEY: ${{secrets.SSH_PRIVATE_KEY}}
    SSH_KNOWN_HOSTS: ${{secrets.SSH_KNOWN_HOSTS}}

- name: SSH into DO and run
  run: >
    ssh -i ../private.key -tt ${SSH_USERNAME}@${SERVER_IP}
    "
    < commands >
    "
Using the -o "StrictHostKeyChecking no" flag on the ssh command in the entrypoint.sh also works. But I would like to avoid this for security reasons.
I have been trying to solve this issue for hours, but I seem to be missing a critical detail. Has anyone encountered a similar issue or know what I am doing wrong?
So after hours of searching I found out what the issue was.
When force-accepting all host keys with the -o "StrictHostKeyChecking no" option, no ~/.ssh/known_hosts file is created, meaning the openssh-client I installed in the container does not seem to read from that file.
So telling the ssh command where to look for the file solved the issue:
ssh -i ~/.ssh/private.key -o UserKnownHostsFile=/github/home/.ssh/known_hosts -tt <USER>@<IP> "echo test > testfile1"
Apparently one can also change the location of the known_hosts file within the ssh_config permanently (see here).
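A minimal shell sketch of that permanent variant, assuming the same /github/home path as in the log above and <IP> as a placeholder:

# Point every ssh invocation at the known_hosts file written by the entrypoint.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host <IP>
    IdentityFile ~/.ssh/private.key
    UserKnownHostsFile /github/home/.ssh/known_hosts
    StrictHostKeyChecking yes
EOF
chmod 600 ~/.ssh/config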
Hope this helps someone at some point.
First, add a chmod 600 ~/.ssh/known_hosts as well in your entrypoint.
For testing, I would check if options around ssh-keyscan make any difference:
ssh-keyscan -H <IP>
# or
ssh-keyscan -t rsa -H <IP>
Check that your key was generated using the default RSA public-key cryptosystem.
The HostKeyAlgorithms used might be set differently, in which case:
ssh-keyscan -H -t ecdsa-sha2-nistp256 <IP>
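To check whether a key-type mismatch is the problem, comparing fingerprints can help; a sketch, with <IP> as a placeholder and the known_hosts path taken from the question (reading from stdin with -f - needs a reasonably recent ssh-keygen):

# Fingerprints of the keys stored in the known_hosts secret
ssh-keygen -l -f /github/home/.ssh/known_hosts

# Fingerprints of the keys the server actually presents
ssh-keyscan -t rsa,ecdsa,ed25519 <IP> 2>/dev/null | ssh-keygen -l -f -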
I am new to Jenkins and still trying to understand how it actually works.
What I am trying to do is pretty simple. I trigger the build whenever I push to my GitHub repo.
Then, I try to ssh into a server.
My pipeline looks like this:
pipeline {
    agent any
    stages {
        stage('SSH into the server') {
            steps {
                withCredentials([sshUserPrivateKey(
                    credentialsId: '<id>',
                    keyFileVariable: 'KEY_FILE')]) {
                    sh '''
                        cd ~/.ssh
                        ls
                        cat ${KEY_FILE} > ./deployer_key.key
                        eval $(ssh-agent -s)
                        chmod 600 ./deployer_key.key
                        ssh-add ./deployer_key.key
                        ssh root@<my-server> ps -a
                        ssh-agent -k
                    '''
                }
            }
        }
    }
}
It's literally a simple ssh task.
However, I am getting inconsistent results.
When I check the log,
Failed Case
Masking supported pattern matches of $KEY_FILE
[Pipeline] {
[Pipeline] sh
+ cd /bms/home/pdsint/.ssh
+ ls
authorized_keys
known_hosts
known_hosts.old
+ cat ****
++ ssh-agent -s
+ eval 'SSH_AUTH_SOCK=/tmp/ssh-hb6yX48CJPQA/agent.51702;' export 'SSH_AUTH_SOCK;' 'SSH_AGENT_PID=51703;' export 'SSH_AGENT_PID;' echo Agent pid '51703;'
++ SSH_AUTH_SOCK=/tmp/ssh-hb6yX48CJPQA/agent.51702
++ export SSH_AUTH_SOCK
++ SSH_AGENT_PID=51703
++ export SSH_AGENT_PID
++ echo Agent pid 51703
Agent pid 51703
+ chmod 600 ./deployer_key.key
+ ssh-add ./deployer_key.key
Identity added: ./deployer_key.key (./deployer_key.key)
+ ssh root@<my-server> docker ps -a
Host key verification failed.
When I ls inside the .ssh directory, it has those files.
In the success case,
Success Case
+ cd /bms/home/pdsint/.ssh
+ ls
authorized_keys
authorized_keys.bak <----------
known_hosts
known_hosts.old
+ cat ****
++ ssh-agent -s
+ eval 'SSH_AUTH_SOCK=/tmp/ssh-yDNVe51565/agent.51565;' export 'SSH_AUTH_SOCK;' 'SSH_AGENT_PID=51566;' export 'SSH_AGENT_PID;' echo Agent pid '51566;'
++ SSH_AUTH_SOCK=/tmp/ssh-yDNVe51565/agent.51565
++ export SSH_AUTH_SOCK
++ SSH_AGENT_PID=51566
++ export SSH_AGENT_PID
++ echo Agent pid 51566
Agent pid 51566
+ chmod 600 ./deployer_key.key
+ ssh-add ./deployer_key.key
Identity added: ./deployer_key.key (./deployer_key.key)
+ ssh root@<my-server> docker ps -a
Warning: Permanently added '<my-server>' (RSA) to the list of known hosts.
It has the authorized_keys.bak file.
I don't really think that file makes the difference, but all success logs have that file while all failure logs do not. Also, I really don't get why each build has different files in it. Aren't they supposed to be independent of each other? Isn't that the point of Jenkins (trying to build/test/deploy in a new environment)?
Any help would be appreciated. Thanks.
In one of the stages of my Jenkins pipeline, I do
stage('SSH into the Server') {
    steps {
        withCredentials([sshUserPrivateKey(
            credentialsId: '<ID>',
            keyFileVariable: 'KEY_FILE')]) {
            sh '''
                cat ${KEY_FILE} > ./key_key.key
                eval $(ssh-agent -s)
                chmod 600 ./key_key.key
                ssh-add ./key_key.key
                ssh-add -L
                ssh <username>@<server> docker ps
            '''
        }
    }
}
Just to simply ssh into a server and check docker ps.
The credentialsId is from the Global Credentials in my Jenkins server.
However, when running this,
I get
+ cat ****
++ ssh-agent -s
+ eval 'SSH_AUTH_SOCK=/tmp/ssh-.../agent.57271;' export 'SSH_AUTH_SOCK;' 'SSH_AGENT_PID=57272;' export 'SSH_AGENT_PID;' echo Agent pid '57272;'
++ SSH_AUTH_SOCK=/tmp/ssh-.../agent.57271
++ export SSH_AUTH_SOCK
++ SSH_AGENT_PID=57272
++ export SSH_AGENT_PID
++ echo Agent pid 57272
Agent pid 57272
+ chmod 600 ./key_key.key
+ ssh-add ./key_key.key
And it just fails with no further messages.
Am I doing it wrong?
Based on your intention, I think that's a very complicated way to do it.
I'd strongly recommend using the SSH Agent plugin.
https://jenkins.io/doc/pipeline/steps/ssh-agent/
You can achieve it in one step.
sshagent (credentials: ['<ID>']) {
    sh 'ssh <username>@<server> docker ps'
}
Use the credentialsId of the same SSH user private key credential from the Global Credentials that you mentioned above.
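Note that sshagent only supplies the private key; if the Jenkins node has never connected to the server before, host key verification can still fail. A sketch of the shell body under that assumption, with <username> and <server> as placeholders:

# Record the host key once if it is not already known, then connect.
mkdir -p ~/.ssh
ssh-keygen -F <server> -f ~/.ssh/known_hosts > /dev/null || ssh-keyscan -H <server> >> ~/.ssh/known_hosts
ssh <username>@<server> docker ps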
Problem statement:
Can sign repos from inside a normal terminal (also inside Docker). From a Jenkins job, repo creation/signing fails; the job hangs.
Configuration:
Jenkins spawns docker container to create/sign deb repository.
Private and public keys all present.
gpg-agent installed on the docker container to sign the packages.
~/.gnupg/gpg.conf file has "use-agent" enabled
Progress:
Can start gpg-agent using jenkins on the docker container.
Can use gpg-preset-passphrase to cache passphrase.
Can use [OUTSIDE JENKINS]
reprepro --ask-passphrase -Vb . includedeb ${_repo_name} ${_pkg_location}
to fetch the passphrase from gpg-agent and sign the repo.
Problem:
From inside a Jenkins job, the command "reprepro --ask-passphrase -Vb ..." hangs.
Code:
starting gpg-agent:
GPGAGENT=/usr/bin/gpg-agent
GNUPG_PID_FILE=${GNUPGHOME}/gpg-agent-info
GNUPG_CFG=${GNUPGHOME}/gpg.conf
GNUPG_CFG=${GNUPGHOME}/gpg-agent.conf

function start_gpg_agent {
    GPG_TTY=$(tty)
    export GPG_TTY
    if [ -r "${GNUPG_PID_FILE}" ]
    then
        # reuse an existing agent if its env file is readable
        source "${GNUPG_PID_FILE}"
        count=$(ps lax | grep "${GPGAGENT}" | grep "$SSH_AGENT_PID" | wc -l)
        if [ $count -eq 0 ]
        then
            if ! ${GPGAGENT} 2>/dev/null; then
                $GPGAGENT --debug-all --options ${BASE_PATH}/sign/gpg-agent.options \
                    --daemon --enable-ssh-support \
                    --allow-preset-passphrase --write-env-file ${GNUPG_PID_FILE}
                if [[ $? -eq 0 ]]
                then
                    echo "INFO::agent started"
                else
                    echo "INFO::Agent could not be started. Exit."
                    exit -101
                fi
            fi
        fi
    else
        # no env file yet: start a fresh agent
        $GPGAGENT --debug-all --options ${BASE_PATH}/sign/gpg-agent.options \
            --daemon --allow-preset-passphrase --write-env-file ${GNUPG_PID_FILE}
    fi
}
options file:
default-cache-ttl 31536000
default-cache-ttl-ssh 31536000
max-cache-ttl 31536000
max-cache-ttl-ssh 31536000
enable-ssh-support
debug-all
Saving the passphrase:
/usr/lib/gnupg2/gpg-preset-passphrase -v --preset --passphrase ${_passphrase} ${_fp}
Finally (for completion), sign the repo:
reprepro --ask-passphrase -Vb . includedeb ${_repo_name} ${_pkg_location}
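For what it's worth, a hang at this point is often the Jenkins shell not talking to the same agent that received the preset passphrase (different GNUPGHOME, or gpg falling back to a pinentry prompt with no TTY). A small diagnostic sketch, assuming gpg 2.x and the same GNUPGHOME as above, run inside the Jenkins job right before reprepro:

echo "GNUPGHOME=${GNUPGHOME}"                   # must match the environment where the agent was started
gpg-connect-agent 'GETINFO socket_name' /bye    # which agent socket this shell will use
gpg-connect-agent 'KEYINFO --LIST' /bye         # a '1' in the cached field means the preset passphrase is there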