warning REMOTE HOST IDENTIFICATION HAS CHANGED - ruby-on-rails

Yesterday I was trying to update my Ruby on Rails application by deploying it with Capistrano, but I had to cancel the deploy in the middle of the process. Immediately afterwards I tried to access the server via SSH with ssh deploy@my_ip_server, but it just hung, so I ended up restarting the AWS instance.
Today I am trying to access the server via ssh and I get this alert:
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:Adfadssdgdfg......
Please contact your system administrator.
Add correct host key in /home/jeff/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/jeff/.ssh/known_hosts:23
remove with:
ssh-keygen -f "/home/jeff/.ssh/known_hosts" -R "my_ip_server"
ECDSA host key for my_ip_server has changed and you have requested strict checking.
The IP of my instance changed, which I imagine is because I restarted it, and I immediately updated the .ssh/authorized_keys file with a new access key.
For SSH access I have a security rule in AWS that only allows connections from my machine's IP.
Should I be worried about this alert, given that the instance is new and currently only in the testing phase?
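For reference, the fix the warning itself suggests is usually all that is needed after a restart like this; a minimal sketch, where my_ip_server stands in for the instance's address and the new fingerprint should be checked out of band (e.g. against the EC2 console) before trusting it:

# Drop the stale entry the warning points at:
ssh-keygen -f ~/.ssh/known_hosts -R "my_ip_server"
# Fetch the key the server now presents and print its fingerprint
# (OpenSSH 7.2+ lets ssh-keygen -lf read the key from stdin):
ssh-keyscan -t ecdsa my_ip_server | ssh-keygen -lf -
# If the fingerprint matches what you expect, reconnect and accept it:
ssh deploy@my_ip_server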

Related

How to load known host key from String instead of filesystem?

I have a Spring Boot application with Apache SSHD. Therefore, the application needs a known host key. How to provide this known host key?
For production I can use a static known_hosts file, but for integration test I need a dynamically generated known host key, because the port of the SSH server is not static and SSH doesn't support portless known host keys, see SSH_KNOWN_HOSTS FILE FORMAT.
Known hosts file
[localhost]:2222 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC2tFA40NT1KDyQxjld9fZlInZ3iOinn/hHZrcPq9XBzW09HeQVVz4hhPZ3sThxLdU/ZXTyaTY/RG3SahtRTI6pqK96etZa9gS6w13SIBtyrfsTuqc0TpfuMAYZFhXpOubKFzGpV+///1qJ61PmLRxEklpPnIb++uMMCaEyG1chtRli2ywokjCiNyUqJ6HLKPW3LIzDNtHA+I5EKkIP/vCFma3xBI2N74chaT6eL67zuULT3hai8g57hrgR3LeIvg6e2E6PlYJYmNHqO9L+TCbs3VN+OJxcl//bLMWxHQaGBUgi5nlKLzuGwNFl+KdMo8a+igwuGg9vlaFv5YSkvA+s99AxtOoiwViVX8+V7WwBvNmfh2Spp1jUNIclKdqKO2y8qxKb70KrDIPQ0pgdKs+Dm2v3FxIO6dNi6FXzwds0DLmNiAcUnPBzBQ5DRkGSz/Ih/OA3BwjSFQVS+j+tAOH1NftPi+U7SelOqJMYBk7Q48F3ZQ5Tr7yCbSgC2Jxz9i3XmEBTKInwq/dz9rM4Hae7g+SLWHxHvav4so9c01cfgG+dyouphXvh7mpuGlt/Jieg0B24GY5Mgr7EVT3+e77922yNzvsHNNnOuEW/uCmpsq2BXWpxhpoKLvOZcgVD8XdQfboO8PxBURFJ9/Lg3F0LQrbNMlaWcV3P1p7TgmevoQ==
Documentation
The documentation lists an implementation for known host keys saved in the filesystem and an implementation for one static server key; see ServerKeyVerifier:
ServerKeyVerifier
client.setServerKeyVerifier(...); sets up the server key verifier. As part of the SSH connection initialization protocol, the server proves its "identity" by presenting a public key. The client can examine the key (e.g., present it to the user via some UI) and decide whether to trust the server and continue with the connection setup. By default the client is initialized with an AcceptAllServerKeyVerifier that simply logs a warning that an un-verified server key was accepted. There are other out-of-the-box verifiers available in the code:
RejectAllServerKeyVerifier - rejects all server keys - usually used in tests or as a fallback verifier if none of its predecessors validated the server key
RequiredServerKeyVerifier - accepts only one specific server key (similar to certificate pinning for SSL)
KnownHostsServerKeyVerifier - uses the known_hosts file to validate the server key. One can use this class + some existing code to update the file when new servers are detected and their keys are accepted.
But I couldn't find an example for RequiredServerKeyVerifier.
Research
I could disable server key validation in my integration tests, but I want to test the configuration code for server key validation, too.
I could dynamically change the known_hosts file in the filesystem to change the port (sketched below), but that is error-prone and increases complexity (file permissions, parallel access).
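For reference, that filesystem workaround is only a couple of lines; a minimal sketch, assuming the test server is already listening on localhost and the test framework exposes the randomly chosen port (PORT and the output path are hypothetical):

# Regenerate a known_hosts file for the port picked by this test run;
# for non-default ports ssh-keyscan emits [localhost]:2222-style lines:
PORT=2222
ssh-keyscan -p "$PORT" localhost > target/test-known_hosts
# Then point KnownHostsServerKeyVerifier at target/test-known_hosts.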
Question
How to load known host key from String instead of filesystem?

Jenkins cannot connect to EC2 using private key, but I can connect using PuTTY

I recently inherited a Jenkins instance running on an AWS EC2 server. It has several pipelines to different EC2 servers that are running successfully. I'm having trouble adding a new node to a new EC2 web server.
I have an account on that new web server named jenkins. I generated keys, added the ssh-rsa key to ~/.ssh/authorized_keys, and verified I was able to connect with the jenkins user via PuTTY.
In Jenkins, under Dashboard > Credentials > System > Global Credentials, I created new credentials as follows:
Username: jenkins
Private Key -> Enter Key Directly: pasted in the key beginning with "BEGIN RSA PRIVATE KEY".
Finally, I created a new node using those credentials, to connect via SSH and use the "Known hosts file Verification Strategy."
Unfortunately, I'm getting the following error when I attempt to launch the agent:
[01/04/22 22:16:43] [SSH] WARNING: No entry currently exists in the
Known Hosts file for this host. Connections will be denied until this
new host and its associated key is added to the Known Hosts file. Key
exchange was not finished, connection is closed.
I verified I have the correct Host name configured in my node.
I don't know what I'm missing here, especially since I can connect via PuTTY.
Suggestions?
Have you added the new node to the known hosts file on the Controller node?
I assume you were running PuTTY from your local machine rather than from the controller?
See this support article for details
https://support.cloudbees.com/hc/en-us/articles/115000073552-Host-Key-Verification-for-SSH-Agents#knowhostsfileverificationstrategy
It sounds like your system doesn't automatically add host keys to the known_hosts file. You can check for the UpdateHostKeys flag in the SSH config file of your user, of the system, or of whatever user Jenkins runs under; the flag is documented in ssh_config(5).
If you need to add that host key manually, a sketch follows.
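A minimal sketch of the manual route, assuming Jenkins runs as the jenkins user with the default home of /var/lib/jenkins, and with agent-host as a stand-in for the new node's address:

# On the controller, fetch the agent's host key and append it to the
# known_hosts file of the user Jenkins runs under:
ssh-keyscan agent-host | sudo -u jenkins tee -a /var/lib/jenkins/.ssh/known_hosts
# Verify the fingerprints out of band before relying on them:
sudo -u jenkins ssh-keygen -lf /var/lib/jenkins/.ssh/known_hosts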

Is there any way I can fix this problem with my coral dev board?

I was using the Coral board with my login credentials before, but SSH didn't seem to work, so I removed the keys from the Coral in order to generate new ones, and now it's not letting me into the board. I'm a noob at this, so if you answer please be specific; it's for my college project. How do I change the access keys in the directory?
Waiting for a device...
Connecting to green-horse at 192.168.101.2
Key not present on green-horse -- pushing
Couldn't connect to keymaster on green-horse: [Errno 111] Connection refused.
Did you previously connect from a different machine? If so,
mdt-keymaster will not be running as it only accepts a single key.
You will need to either:
1) Remove the key from /home/mendel/.ssh/authorized_keys on the
device via the serial console
- or -
2) Copy the mdt private key from your home directory on this host
in ~/.config/mdt/keys/mdt.key to the first machine and use
'mdt pushkey mdt.key' to add that key to the device's
authorized_keys file.
Failed to push via keymaster -- will attempt password login as a fallback.
Can't login using default credentials: Bad authentication type; allowed types: ['publickey']
ssh should also work (it is what I use), but you'll need to generate a key on your host machine and then put it in ~/.ssh/authorized_keys on the board. There can be multiple keys in that file, and mdt's key needs to be one of them.
To recover mdt access, you can check here: https://coral.ai/docs/dev-board/mdt/#recover-mdt-access
To ssh into the board, generate your own ssh key:
ssh-keygen
and your new key will be in ~/.ssh/id_rsa.pub; you can put that key on the board in order to SSH in.
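A minimal sketch of that recovery path, assuming the board is reachable at 192.168.101.2 (as in the log above) and that you can log in as mendel over the serial console:

# 1) On the host, generate a fresh key pair (accept the defaults):
ssh-keygen
# 2) Print the public key and copy the whole line:
cat ~/.ssh/id_rsa.pub
# 3) On the board, over the serial console, append that line
#    (shown here as a placeholder) to the authorized_keys file:
echo 'ssh-rsa AAAA... you@your-host' >> /home/mendel/.ssh/authorized_keys
# 4) Back on the host, SSH should now work:
ssh mendel@192.168.101.2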

Jenkins ssh: Recover deleted ssh known host or recreate it?

Recently I got an error doing SSH to another remote server from a Jenkins pipeline. I forgot to save the whole log, but here's part of it:
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
In the error log, there's a suggestion to run this command to fix it
sudo ssh-keygen -f "/var/lib/jenkins/.ssh/known_hosts" -R "<<remote ssh ip>>"
so I ran it.
Previously, at least some of the remote SSH commands did run before the error appeared.
But now it can't connect to the remote at all; the remote SSH commands fail from the beginning:
Failed to add the host to the list of known hosts (/var/lib/jenkins/.ssh/known_hosts).
Following "How can I get rid of 'WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!'" I tried running ssh-keygen -R <<remote ssh ip>>, but the error still appears.
How can I recover the deleted line, or recreate a new one?
First, you have to understand what the message means before you "get rid of it".
The message means either that the destination server has changed its identity, or that someone is performing a man-in-the-middle attack and the server you are trying to reach is not the server you think it is.
So first of all, make sure there is no man-in-the-middle attack going on.
Then go into the known_hosts file and delete just the line for the server you are about to connect to.
After saving, you will be asked whether you want to trust the server, just as with a connection to a yet unknown host.
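As for the "Failed to add the host" error: one plausible cause (an assumption, since the full log is missing) is that running the removal under sudo left known_hosts owned by root, so the jenkins user can no longer write to it. A minimal sketch of the recovery, assuming Jenkins runs as the jenkins user with home /var/lib/jenkins:

# Restore ownership so the jenkins user can write the file again:
sudo chown jenkins:jenkins /var/lib/jenkins/.ssh/known_hosts
# Recreate the deleted entry; verify the fingerprint before trusting it:
ssh-keyscan <<remote ssh ip>> | sudo -u jenkins tee -a /var/lib/jenkins/.ssh/known_hosts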

The authenticity of host 'bitbucket.org (131.103.20.168)' can't be established

In Cloud9 I do:
$ git push -u origin --all
The authenticity of host 'bitbucket.org (131.103.20.168)' can't be established.
RSA key fingerprint is 97:8c:1b:f2:6f:14:6b:5c:3b:ec:aa:46:46:74:7c:40.
Are you sure you want to continue connecting (yes/no)?
I added the SSH key from Cloud9 to Bitbucket. Shouldn't that be enough to have Bitbucket authenticated by Cloud9?
No. When you first connect to Bitbucket, the SSH client on your machine stores the server's RSA key in a file called known_hosts. Then, before each connection, the server's fingerprint is validated against the stored one (to avoid a man-in-the-middle attack).
So you need to accept this fingerprint only once (if you're diligent, you should compare it with the fingerprint published by Bitbucket).
If your key is added, you might be missing this important step...
When you get the prompt Are you sure you want to continue connecting (yes/no)?, type yes before hitting the Return/Enter key.
Good luck.
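If you want to check the fingerprint before typing yes, a minimal sketch (assuming a reasonably recent OpenSSH, where ssh-keygen -lf - reads the key from stdin):

# Print the fingerprint of the key bitbucket.org is currently serving:
ssh-keyscan -t rsa bitbucket.org 2>/dev/null | ssh-keygen -lf -
# Compare it with the fingerprint Bitbucket publishes in its docs,
# then answer yes; the key is then saved to ~/.ssh/known_hosts.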
