Configuring Jenkins with SSH jump hosts - jenkins

How do I configure SSH connections in jenkins, when I have an intermediate bastion with its own user and key like this:
Host jump
User user1
HostName jumpdns
IdentityFile /Users/myname/.ssh/jumpkey.pem
Host server
User user2
HostName serverdns
IdentityFile /Users/myname/.ssh/serverkey.pem
ForwardAgent yes
ProxyJump jump
This works on the CLI as ssh server. But I don't know how to encode that into my Jenkins, which runs locally on my laptop under my own user rather than a separate jenkins user, i.e. JENKINS_HOME=/Users/myname/.jenkins
I looked into the Publish over SSH plugin, and it does provide an option for the jump host's DNS name, but not for the jump host's own user and key. It seems others have been looking for this without finding a solution.
What is the best way to configure Jenkins for my SSH setup?

Assuming you are on Jenkins version 2.303.2 (the latest as of this writing):
If your master has an SSH version that supports the jump host option (-J, available since OpenSSH 7.3; OpenSSH_7.4p1 for example), then you can try this:
- Select Launch method 'Launch agent via execution of command on the controller'
- Launch command: ssh -tt -J user@jump_host_name user@destination_host
https://www.tecmint.com/access-linux-server-using-a-jump-host/
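The plain -J form assumes both hops use the same user and key. Since the question's jump host has its own user and key, and Jenkins here runs as the same macOS user that owns ~/.ssh/config, two sketches (paths and host names taken from the question's config; not tested against a real bastion):

```shell
# Option 1: Jenkins runs as the same user, so the launch command can simply
# reuse the Host aliases already defined in ~/.ssh/config.
ssh -tt server

# Option 2: fully explicit, no config file needed. ProxyCommand runs a first
# ssh to user1@jumpdns with the jump key and tunnels stdio (-W) to the target,
# where user2's key is used.
ssh -tt -i /Users/myname/.ssh/serverkey.pem \
    -o ProxyCommand="ssh -i /Users/myname/.ssh/jumpkey.pem -W %h:%p user1@jumpdns" \
    user2@serverdns
```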

Related

Getting 'Host key verification failed' when using ssh in docker context

I am setting up a Docker context as described here and configured the SSH key and the context. Unfortunately, I keep getting an error from Docker while I'm in the new context:
docker context use myhostcontext
docker ps
error during connect: Get "http://docker.example.com/v1.24/containers/json": command [ssh -l user -- myhost docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory
Host key verification failed.
Surprisingly, when I ssh into user@myhost, the connection is established as it should be.
ssh -vv user@myhost shows that it uses the key given in ~/.ssh/config
Additional Info:
Platform: Ubuntu 20.04
Docker: 20.10.23
OpenSSH_8.2p1 Ubuntu-4ubuntu0.5, OpenSSL 1.1.1f 31 Mar 2020
Here is what I've done:
I've created a Docker context with
docker context create myhostcontext --docker "host=ssh://user@myhost"
I also created a new SSH key pair with ssh-keygen (tried both rsa and ecdsa),
executed ssh-add /path/to/key and ssh-copy-id -i /path/to/key user@myhost.
I tried using "id_rsa" as the key name as well as "myhost", to make sure it's not just a default-naming problem.
Looking at several instructions (e.g. this question) unfortunately did not help. I also checked the authorized_keys file on the remote host against the public key on my local machine; they match.
My ~/.ssh/config looks like this
Host myhost
HostName myhost
User user
StrictHostKeyChecking no
IdentityFile ~/.ssh/myhost
Also removing entries from known_host did not help.
Using the remote hosts IP instead of its name did not help either.
Installing ssh-askpass just shows me that the authenticity could not be established (the default message when using ssh on a host for the first time). Since I later want to use the Docker context in a CI/CD environment, I don't want any non-CLI stuff.
The only other possible "issue" that comes to my mind is that the user on the remote host is different from the one I am using on the client. But, if I understood correctly, that should not be an issue, and I also would not know how to manage that.
Any help or suggestion is highly appreciated, since I have been struggling with this for days.
Thanks in advance :)
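One hedged thing to check, assuming myhost resolves: "Host key verification failed" from a non-interactive ssh usually means the host key is missing from known_hosts for the exact name Docker dials, so it tries to prompt and fails. Pre-seeding known_hosts can be sketched as:

```shell
# Pre-populate known_hosts so the non-interactive ssh spawned by
# "docker context" never needs to prompt for host key confirmation.
# "myhost" is the host alias from the question above.
ssh-keyscan -H myhost >> ~/.ssh/known_hosts 2>/dev/null
```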

forwarding ssh-agent on vagrant to other user than vagrant

I am currently setting up a virtual machine for my company's testing environment in vagrant. Of course, this machine needs to be able to pull from our github repositories. This should be achieved using the host machine's ssh keys. I have already set
config.ssh.forward_agent = true
in my vagrantfile, and connecting to github works fine in the vagrant user. However, since that machine needs to run jenkins, this needs to work for the jenkins user as well. Running ssh-add as jenkins does not add the host's key, though.
I found several semi-related discussions here on stackoverflow and on superuser, but none seemed to address or even solve the issue. I have no idea how to make this work, or whether this is possible at all in vagrant, so I am grateful for any pointers.
As you have not included the exact errors or what you have tried:
Let's say you are on the VM, and you want to git pull from a remote Git repo.
You also have an SSH private key on the VM that is authorized to pull from the Git repo via SSH.
Try this on the VM's CLI:
git config core.sshCommand 'ssh -i /root/.ssh/git_private.key -F /dev/null' && ssh-agent sh -c 'ssh-add /root/.ssh/git_private.key; git pull'
and of course reference the correct path to the private SSH key that you would use to authenticate to the Git repo.
I ran the su command to switch to root, using the default password vagrant.
From there, su jenkins switches to the jenkins user, with no password this time.
Then I ran ssh-keygen to generate the keys and stored them in the suggested default folder /var/lib/jenkins/ (actually overwriting the existing ones). That is the home folder of this jenkins user, because it is not a regular user account but a so-called "service account", I believe.
After that I just uploaded that .pub key to my Bitbucket account, and everything ran fine; my Jenkins could authenticate.
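The steps above can be sketched as a non-interactive sequence (paths follow the answer; the ed25519 key type is an assumption, any type ssh-keygen supports will do):

```shell
# As root on the VM (default root password "vagrant" per the answer above),
# switch to the jenkins service account, whose home is /var/lib/jenkins,
# and generate a key pair without a passphrase.
su - jenkins -s /bin/bash -c \
  'mkdir -p "$HOME/.ssh" && ssh-keygen -t ed25519 -N "" -f "$HOME/.ssh/id_ed25519"'

# Print the public key so it can be pasted into the Bitbucket account settings.
su - jenkins -s /bin/bash -c 'cat "$HOME/.ssh/id_ed25519.pub"'
```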

SSH keys keep getting deleted from Google Compute Engine VM

Background:
I am running a Google Compute Engine VM, called host.
There is a Docker container running on the machine called container.
I connect to the VM using an account called user@gmail.com.
I need to connect through ssh from the container to the host, without being prompted for the user password.
Problem:
Minutes after successfully connecting from the container to the host, the user's ~/.ssh/authorized_keys gets "modified" by some process from Google itself. As far as I understand, this process appends the SSH keys needed to connect to the VM. In my case, though, the process seems to overwrite the key that I generated from the container.
Setup:
I connect to host using Google Compute Engine GUI, pressing on the SSH button.
Then I follow the steps described in this answer on AskUbuntu.
I set the password for user on host:
user@host:~$ sudo passwd user
I set PasswordAuthentication to yes in sshd_config, and I restart sshd:
user@host:~$ sudo nano /etc/ssh/sshd_config
user@host:~$ sudo systemctl restart sshd
I enter the Docker container using bash, generate the key, and copy it to the host:
user@host:~$ docker exec -it container /bin/bash
(base) root@container-id:# ssh-keygen
(base) root@container-id:# ssh-copy-id user@host
The key is successfully copied to the host, the host is added to the known_hosts file, and I am able to connect from the container to the host without being prompted for the password (as I gave it during the ssh-copy-id execution).
Now, if I detach from the host, let some time pass, and attach again, I find that the user's ~/.ssh/authorized_keys file contains some keys generated by Google, but there is no trace of my key (the one that allows the container to connect to the host).
What puzzles me most is that we consistently used this process before and never had such a problem. Some accounts on this same host still have keys from containers that no longer exist!
Does anyone have any idea about this behavior? Do you know of any solution that lets me keep the key for as long as it is needed?
It looks like the accounts daemon is doing this task. You can refer to this discussion thread for more details.
You might find the OS Login API an easier management option. Once enabled, you can use a single gcloud command or API call to add SSH keys.
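With OS Login, the single gcloud call can be sketched as follows (the instance name "host" comes from the question; the key path and the optional --ttl value are placeholders):

```shell
# Enable OS Login on the instance, so the accounts daemon stops managing
# authorized_keys directly.
gcloud compute instances add-metadata host --metadata enable-oslogin=TRUE

# Register the public key for the authenticated account; it is then accepted
# on all OS Login-enabled instances in the project.
gcloud compute os-login ssh-keys add --key-file="$HOME/.ssh/id_rsa.pub" --ttl=30d
```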
In case anyone has trouble with this even AFTER adding SSH keys to the GCE metadata:
Make sure your username is in the SSH key description section!
For example, if your SSH key is
ssh-rsa AAAA...zzzz
and your login is ubuntu, make sure you actually enter
ssh-rsa AAAA...zzzz ubuntu
since it appears Google copies the key to the authorized_keys of the user specified inside the key.
In case anyone is still looking for a solution to this: I solved the issue by storing the SSH keys in Compute Engine metadata. https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys

jenkins-cli not authenticating with provided ssh private key

[Help]
Description of Problem
jenkins-cli not authenticating with provided ssh private key
Observed
when passing the jenkins-cli command:
java -jar ~/jenkins-cli.jar -s http://localhost:8080 -i ~/.ssh/ccdevops who-am-i
The console output is:
Authenticated as: anonymous
Authorities:
Desired
Jenkins should authenticate as the user with the matching public key in their profile
Relevant Information
jenkins v 2.46.3 and using the correct cli jar for the version
Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-78-generic x86_64)
only using recommended plugins
running on azure cloud in china east datacenter
azure network security group for the vm is configured to allow traffic
the SSH key being used was created on the Ubuntu machine Jenkins is running on, and the public key is in the user's entry in the Jenkins user database
Key was created using the instructions on the github site
jenkins-cli is running on the server and not from a remote host
Steps Tried Already
tried different keys with and without passphrases
tried the web address with localhost and the ip address
tried other jenkins-cli commands with same result
tried creating other users and putting a public SSH key in their profiles (no duplicate keys between users)
tried moving the location of the jenkins-cli jar from server root to jenkins home directory
You should also specify the SSH method and the user on the command line with -ssh and -user USER_NAME respectively. After that, your command would look like this:
java -jar ~/jenkins-cli.jar -s http://localhost:8080 -i ~/.ssh/ccdevops who-am-i -ssh -user USER_NAME
Also note that you'll need to be able to access the server via SSH as well.

Unable to ssh to master node in mesos local cluster installed system

I am a newbie to Mesos. I have installed a DCOS cluster locally in one system (Centos 7).
Everything came up properly and I am able to access the DCOS GUI, but when I try to connect through the CLI, it asks me for a password.
I have not been prompted for any kind of password during local installation through vagrant.
But when I issue the following command:
[root@blade7 dcos-vagrant]# dcos node ssh --master-proxy --leader
Running `ssh -A -t core@192.168.65.90 ssh -A -t core@192.168.65.90 `
core@192.168.65.90's password:
Permission denied, please try again.
core@192.168.65.90's password:
I don't know what password to give.
Kindly help me resolve this issue.
Since the local installation is based on Vagrant, you can use the following convenient workaround: log directly into the virtual machines using Vagrant's ssh.
open a terminal and enter vagrant global-status to see a list of all running vagrant environments (name/id)
switch into your dcos installation directory (e.g., cd ~/dcos-vagrant), which contains the file Vagrantfile
run vagrant ssh <name or (partial) id> in order to ssh into the virtual machine. For example, vagrant ssh m1 connects to the master/leader node, which gives you essentially the same shell as dcos node ssh --master-proxy --leader would do.
Two more tips:
within the virtual machine, the directory /vagrant is mounted to the current directory of the host machine, which is nice for transferring files into/from the VM
you may try to find out the correct SSH credentials of the default vagrant user and then add them (rather than a .pem file retrieved from a cloud service provider) via ssh-add on your host machine. This should let you log in via dcos node ssh --master-proxy --leader --user=vagrant without a password
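That second tip can be sketched as follows; the path assumes the box still uses Vagrant's well-known shared insecure key (newer Vagrant versions replace it with a per-machine key unless config.ssh.insert_key = false):

```shell
# Load the vagrant user's private key into the local ssh-agent.
# ~/.vagrant.d/insecure_private_key is Vagrant's default shared key;
# per-machine keys live under .vagrant/machines/<name>/<provider>/private_key.
ssh-add ~/.vagrant.d/insecure_private_key

# Then the passwordless login should work:
dcos node ssh --master-proxy --leader --user=vagrant
```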
The command shows that you are trying to log in to the server using the user ID "core". If you do not know the password of user "core", I suggest resetting the "core" user's password and trying again.
