My Jenkins has lost its connection with the Tomcat server. I have also added the private key to the Jenkins credentials.
This is my Jenkinsfile for the 'Deploy-toTomcat' stage:
steps {
    sshagent(['tomcat']) {
        sh 'scp -o StrictHostKeyChecking=no target/*.war ubuntu@35.239.69.247:/home/nat/prod/apache-tomcat-9.0.41/webapps/webapp.war'
    }
}
}
This is the error I get when building the pipeline in Jenkins:
+ scp -o StrictHostKeyChecking = no target/WebApp.war ubuntu@35.239.69.247:/home/nat/prod/apache-tomcat-9.0.41/webapps/webapp.war
command-line line 0: missing argument.
lost connection
script returned exit code 1
error
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 139377 killed;
[ssh-agent] Stopped.
I also ran chmod 777 on the webapps directory.
I am following this tutorial https://www.youtube.com/watch?v=dSMSHGoHVJY&list=PLjNII-Jkdjfz5EXWlGMBRk63PC8uJsHMo&index=7 to deploy to Tomcat.
I hope someone can answer my question about how to deploy to Tomcat. The source code I am using to test the pipeline is from https://github.com/cehkunal/webapp.git. Thank you.
The error occurs because the server did not recognize the authorized key. What I did:
delete all previous keys in the ~/.ssh directory,
ssh-keygen -t rsa
mv id_rsa.pub authorized_keys
chmod 0600 /home/username/.ssh/authorized_keys
chmod 0700 /home/username/.ssh
cat id_rsa
Finally, paste the contents of id_rsa into Manage Credentials in Jenkins.
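For completeness, here is a minimal sketch of that scp step written as a single line (user, IP, and paths are the ones from the question); note that the build log above shows the option split as StrictHostKeyChecking = no with spaces around the =, which is what ssh reports as "missing argument":
# Sketch only: the option must be a single token, with no spaces around '='
scp -o StrictHostKeyChecking=no target/*.war ubuntu@35.239.69.247:/home/nat/prod/apache-tomcat-9.0.41/webapps/webapp.war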
In my Dockerfile, I'm trying to pull a Python lib from a private repo:
RUN --mount=type=ssh .venv/bin/pip install SOME_LIB --extra-index-url https://example.com/pypi/ -U
Then I tried to run the build using the following command:
docker buildx build --ssh /path/to/the/private/key/id_rsa .
For some reason, it gave me the following error:
#0 0.831 Host key verification failed.
#0 0.831 fatal: Could not read from remote repository.
I've double-checked that the private key is correct. Did I miss any step in using --mount=type=ssh?
The error has nothing to do with your private key; it is "host key verification failed". That means that ssh doesn't recognize the key being presented by the remote host. Its default behavior is to ask whether it should trust the host key, and when it runs in an environment where it can't prompt interactively, it will simply reject the key.
You have a few options to deal with this. In the following examples, I'll be cloning a GitHub private repository (so I'm interacting with github.com), but the process is the same for any other host to which you're connecting with ssh.
1. Inject a global known_hosts file when you build the image.
First, get the host key for the hosts to which you'll be connecting and save it alongside your Dockerfile:
$ ssh-keyscan github.com > known_hosts
Configure your Dockerfile to install this where ssh will find it:
COPY known_hosts /etc/ssh/ssh_known_hosts
RUN chmod 600 /etc/ssh/ssh_known_hosts; \
    chown root:root /etc/ssh/ssh_known_hosts
2. Configure ssh to trust unknown host keys:
RUN sed -i '/^StrictHostKeyChecking/d' /etc/ssh/ssh_config; \
    echo StrictHostKeyChecking no >> /etc/ssh/ssh_config
3. Run ssh-keyscan in your Dockerfile when building the image:
RUN ssh-keyscan github.com > /etc/ssh/ssh_known_hosts
All three of these solutions will ensure that ssh trusts the remote host key. The first option is the most secure (the known hosts file will only be updated by you explicitly when you run ssh-keyscan locally). The last option is probably the most convenient.
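As a hedged sketch of the first option applied to the original --mount=type=ssh build (github.com and the key path are taken from the question; the default= form of --ssh is my assumption about how the key is forwarded):
# Run locally, next to the Dockerfile
ssh-keyscan github.com > known_hosts        # capture GitHub's current host keys
ssh-keygen -lf known_hosts                  # print the fingerprints so they can be verified out of band
# Then build, forwarding the key for RUN --mount=type=ssh steps
docker buildx build --ssh default=/path/to/the/private/key/id_rsa .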
I want to SSH into a server to perform some tasks in my Jenkins pipeline.
Here are the steps that I went through.
On my remote server, I used ssh-keygen to create id_rsa and id_rsa.pub.
I copied the contents of id_rsa and pasted them into the Private Key field in the Global Credentials menu of my Jenkins server.
In my Jenkinsfile, I do:
stage('SSH into the server') {
    steps {
        withCredentials([sshUserPrivateKey(
            credentialsId: '<ID>',
            keyFileVariable: 'KEY_FILE')]) {
            sh '''
                more ${KEY_FILE}
                cat ${KEY_FILE} > ./key_key.key
                eval $(ssh-agent -s)
                chmod 600 ./key_key.key
                ssh-add ./key_key.key
                cd ~/.ssh
                echo "ssh-rsa ... (the string from the server's id_rsa.pub)" >> authorized_keys
                ssh root@<server_name> docker ps
            '''
        }
    }
}
It essentially starts an ssh-agent with the private key of the remote server and appends the public key to authorized_keys.
As a result, this gives me Host key verification failed.
I simply want to ssh into the remote server, but I keep facing this issue. Any help?
LOG
++ ssh-agent -s
+ eval 'SSH_AUTH_SOCK=/tmp/ssh-xfcQYEfiyfRs/agent.26353;' export 'SSH_AUTH_SOCK;' 'SSH_AGENT_PID=26354;' export 'SSH_AGENT_PID;' echo Agent pid '26354;'
++ SSH_AUTH_SOCK=/tmp/ssh-xfcQYEfiyfRs/agent.26353
++ export SSH_AUTH_SOCK
++ SSH_AGENT_PID=26354
++ export SSH_AGENT_PID
++ echo Agent pid 26354
Agent pid 26354
+ chmod 600 ./key_key.key
+ ssh-add ./key_key.key
Identity added: ./key_key.key (./key_key.key)
+ ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ./key_key.key root@<server> docker ps
Warning: Permanently added '<server>, <IP>' (ECDSA) to the list of known hosts.
WARNING!!!
READ THIS BEFORE ATTEMPTING TO LOGON
This System is for the use of authorized users only. ....
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
It is failing because StrictHostKeyChecking is enabled. Change your ssh command as below and it should work fine.
ssh -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" root@<server_name> docker ps
StrictHostKeyChecking=no disables the prompt for host key verification.
UserKnownHostsFile=/dev/null skips host key checking by writing the recorded host keys to /dev/null.
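If you would rather keep host key verification enabled, a sketch of an alternative (using the same <server_name> placeholder) is to record the host key once before the ssh call:
# Sketch: pin the host key instead of disabling verification
mkdir -p ~/.ssh
ssh-keyscan <server_name> >> ~/.ssh/known_hosts
ssh root@<server_name> docker ps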
I am attempting to use sshagent in Jenkins to pass my private key into the terraform container to allow terraform to source a module in a private repo.
stage('TF Plan') {
    steps {
        container('terraform') {
            sshagent(credentials: ['6c92998a-bbc4-4f27-b925-b50c861ef113']) {
                sh 'ssh-add -L'
                sh 'terraform init'
                sh 'terraform plan -out myplan'
            }
        }
    }
}
When running the job it fails with the following output:
[ssh-agent] Using credentials (id_rsa_jenkins)
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
Executing shell script inside container [terraform] of pod [gcp-tf-builder-h79rb-h5f3m]
Executing command: "ssh-agent"
exit
SSH_AUTH_SOCK=/tmp/ssh-2xAa2W04uQV6/agent.20; export SSH_AUTH_SOCK;
SSH_AGENT_PID=21; export SSH_AGENT_PID;
echo Agent pid 21;
SSH_AUTH_SOCK=/tmp/ssh-2xAa2W04uQV6/agent.20
SSH_AGENT_PID=21
Running ssh-add (command line suppressed)
Identity added: /home/jenkins/agent/workspace/demo@tmp/private_key_2729797926.key (user@workstation.local)
[ssh-agent] Started.
[Pipeline] {
[Pipeline] sh
+ ssh-add -L
ssh-rsa REDACTED user@workstation.local
[Pipeline] sh
+ terraform init
Initializing modules...
- module.demo_proj
  Getting source "git::ssh://git@bitbucket.org/company/terraform-module"
Error downloading modules: Error loading modules: error downloading 'ssh://git@bitbucket.org/company/deploy-kickstart-project': /usr/bin/git exited with 128: Cloning into '.terraform/modules/e11a22f40c64344133a98e564940d3e4'...
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
[Pipeline] }
Executing shell script inside container [terraform] of pod [gcp-tf-builder-h79rb-h5f3m]
Executing command: "ssh-agent" "-k"
exit
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 21 killed;
[ssh-agent] Stopped.
I've triple-checked and I am definitely using the correct key pair. I am able to git clone the repo locally from my Mac with no issues.
An important note is that this Jenkins deployment is running within Kubernetes. The master stays up and uses the Kubernetes plugin to spawn agents.
What does the Host key verification failed. error mean? From my research it can be caused by known_hosts not being set up properly. Is ssh-agent responsible for that?
It turns out it was an issue with known_hosts not being set. As a workaround, we added this to our Jenkinsfile:
environment {
    GIT_SSH_COMMAND = "ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
}
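If disabling host key checking is not acceptable, a stricter sketch is to populate known_hosts on the agent before terraform init runs (bitbucket.org is taken from the module source in the log above):
# Sketch: run inside the terraform container before `terraform init`
mkdir -p ~/.ssh
ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts
terraform init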
Use case:
I have a Jenkins pipeline to update my development environment.
My dev env is an AWS EC2 instance with docker-compose.
The automation was written along the lines of:
withAWS(profile: 'default') {
    sh "ssh -o StrictHostKeyChecking=no -i ~/my-key.pem user@123.456.789 /bin/bash -c 'run some command like docker pull'"
}
Now, I have other test environments, and they all have some sort of docker-compose file, configuration and property files that require me to go over all of them when something needs to change.
To help with that, I created a new repository to keep all the different environment configurations. My plan is to have a clone of this repo in each of my development and test environments, so when I need to change something I can just do it locally, push it, and have my Jenkins pipeline update the repository in whichever environment it is updating.
My Jenkins has an ssh credential for my repo (it is used in another job that clones the repo and runs tests on the source code), so I want to use that same credential.
Question: can I somehow, through ssh'ing into another machine, use Jenkins ssh-agent credentials to clone/update a bitbucket repository?
Edit:
I changed the pipeline to:
script {
    def hgCommand = "hg clone ssh://hg@bitbucket.org/my-repo"
    sshagent(['12345']) {
        sh "ssh -o StrictHostKeyChecking=no -i ~/mykey.pem admin@${IP_ADDRESS} /bin/bash -c '\"${hgCommand}\"'"
    }
}
And I am getting:
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-FOburguZZlU0/agent.662
SSH_AGENT_PID=664
Running ssh-add (command line suppressed)
Identity added: /home/jenkins/workspace/abc@tmp/private_key_12345.key (rsa w/o comment)
[ssh-agent] Started.
[Pipeline] {
[Pipeline] sh
[test-env-config] Running shell script
+ ssh -o StrictHostKeyChecking=no -i /home/jenkins/mykey.pem admin@123.456.789 /bin/bash -c "hg clone ssh://hg@bitbucket.org/my-repo"
remote: Warning: Permanently added the RSA host key for IP address '765.432.123' to the list of known hosts.
remote: Permission denied (publickey).
abort: no suitable response from remote hg!
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 664 killed;
[ssh-agent] Stopped.
First, some background to understand the reasoning (this is pure ssh, nothing Jenkins- or Mercurial-specific): the ssh-agent utility works by creating a UNIX domain socket that is then used by ssh. The ssh command attempts to communicate with the agent if it finds the environment variable SSH_AUTH_SOCK. In addition, ssh can be instructed to forward the agent via -A. For more details, see the man pages of ssh-agent and ssh.
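As an illustration of that mechanism outside of Jenkins, a minimal shell sketch (the key path is only an example):
eval "$(ssh-agent -s)"                 # starts the agent and exports SSH_AUTH_SOCK / SSH_AGENT_PID
ssh-add ~/.ssh/id_rsa                  # load a key into the agent
ssh -A user@remote-host 'ssh-add -L'   # with -A, the remote shell sees the forwarded agent's keys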
So, assuming that your withAWS context makes the environment variable SSH_AUTH_SOCK (set by the plugin) available, I think it should be enough to:
add -A to your ssh invocation
in the part 'run some command like docker pull', add the hg clone command, ensuring you are using the ssh:// scheme for the Mercurial URL.
Security observation: -o StrictHostKeyChecking=no should be used as a last resort. From your example, the IP address of the target is fixed, so you should do the following:
remove the -o StrictHostKeyChecking=no
one-shot: get the host fingerprint of 123.456.789 (for example by ssh-ing into it and then looking for the associated line in your $HOME/.ssh/known_hosts). Save that line in a file, say 123.456.789.fingerprint
make the file 123.456.789.fingerprint available to Jenkins when it is invoking your sample code. This can be done by committing that file in the repo that contains the Jenkins pipeline, it is safe to do so since it doesn't contain secrets.
Finally, change your ssh invocation to something like ssh -o UserKnownHostsFile=/path/to/123.456.789.fingerprint ...
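Putting these suggestions together, the invocation might look something like this sketch (the IP, key path, and repository are the placeholders from the question; the fingerprint file is the one described above):
# Sketch: agent forwarding plus a pinned host key, without StrictHostKeyChecking=no
ssh -A -o UserKnownHostsFile=/path/to/123.456.789.fingerprint -i ~/mykey.pem admin@123.456.789 /bin/bash -c '"hg clone ssh://hg@bitbucket.org/my-repo"'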
In Docker, how do you cope with the requirement of configuring known_hosts, authorized_keys, and ssh connectivity in general when a container has to talk to external systems?
For example, I'm running a Jenkins container and try to check out a project from GitHub in a job, but the connection fails with the error host key verification failed.
This could be solved by logging into the container, connecting to GitHub manually, and trusting the host key when prompted. However, this isn't a proper solution, as everything needs to be 100% automated (I'm building a CI pipeline with ansible and docker). Another (clunky) solution would be to provision the running container with ansible, but this would make things messy and hard to maintain. The Jenkins container doesn't even have an ssh daemon, and I'm not sure how to ssh into the container from another host. A third option would be to use my own Dockerfile extending the jenkins image, with ssh configured, but that would be hardcoding and would lock the container to this specific environment.
So what is the correct way to manage (and automate) connectivity with external systems in Docker?
To trust the github.com host, you can issue this command when you start or build your container:
ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts
This will add GitHub's public key to your known_hosts file.
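For example, a sketch of doing this at container start time rather than at build time (the container and user names are placeholders):
# Sketch: trust github.com from inside an already-running Jenkins container
docker exec -u jenkins my-jenkins bash -c 'mkdir -p ~/.ssh && ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts'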
If everything is done in the Dockerfile it's easy.
In my Dockerfile:
ARG PRIVATE_SSH_KEY

# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    ssh-keyscan example.com > /root/.ssh/known_hosts && \
    # Add the keys and set permissions
    echo "$PRIVATE_SSH_KEY" > /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa

# ...do stuff with the private key

# Remove SSH keys
RUN rm -rf /root/.ssh/
Obviously, you need to pass the private key as a build argument (to docker-compose build or docker build).
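For example, a hedged sketch of passing the key when building directly with docker build (the variable name matches the ARG above; the key path and image tag are examples):
# Sketch: pass the private key contents as a build argument
docker build --build-arg PRIVATE_SSH_KEY="$(cat ~/.ssh/id_rsa)" -t my-image .
Keep in mind that values passed with --build-arg can remain visible in the image history, so this is best limited to images that are not published.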
One solution is to mount the host's ssh keys into Docker with the following options:
docker run -v /home/<host user>/.ssh:/home/<docker user>/.ssh <image>
This works perfectly for git.
There is a small trick, but your git version should be > 2.3:
export GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
git clone git@gitlab.com:some/another/repo.git
or simply
GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" git clone git#...
you can even point to private key file path like this:
GIT_SSH_COMMAND="ssh -i /path/to/private_key_file -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" git clone git#...
This is how I do it; not sure if you will like this solution, though. I have a private git repository containing authorized_keys with a collection of public keys. Then, I use ansible to clone this repository and replace authorized_keys:
- git: repo=my_repo dest=my_local_folder force=yes accept_hostkey=yes
- shell: "cp my_local_folder/authorized_keys ~/.ssh/"
Using accept_hostkey is what actually allows me to automate the process (I trust the source, of course).
Try this:
Log into the host, then:
sudo mkdir /var/jenkins_home/.ssh/
sudo ssh-keyscan -t rsa github.com >> /var/jenkins_home/.ssh/known_hosts
The Jenkins container maps its home directory to this persistent location, so running this on the host system produces the required result.
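To confirm it worked, a quick check from inside the container (the container name is a placeholder):
# Sketch: verify the key landed where the Jenkins user will look for it
docker exec my-jenkins cat /var/jenkins_home/.ssh/known_hosts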
A more detailed version of the answer provided by @Konstantin Suvorov, if you are going to use a Dockerfile.
In my Dockerfile I just added:
COPY my_rsa /root/.ssh/my_rsa # copy rsa key
RUN chmod 600 /root/.ssh/my_rsa # make it accessible
RUN apt-get -y install openssh-server # install openssh
RUN ssh-keyscan my_hostname >> ~/.ssh/known_hosts # add hostname to known_hosts
Note that "my_hostname" and "my_rsa" are your host-name and your rsa key
This made ssh work in docker without any issues, so I could connect to DBs