How to load known host key from String instead of filesystem? - apache-sshd

I have a Spring Boot application that uses Apache SSHD as an SSH client, so the application needs a known host key to verify the server. How can I provide this known host key?
For production I can use a static known_hosts file, but for integration tests I need a dynamically generated known host entry, because the port of the SSH server is not static and SSH doesn't support portless known host keys, see SSH_KNOWN_HOSTS FILE FORMAT.
Known hosts file
[localhost]:2222 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC2tFA40NT1KDyQxjld9fZlInZ3iOinn/hHZrcPq9XBzW09HeQVVz4hhPZ3sThxLdU/ZXTyaTY/RG3SahtRTI6pqK96etZa9gS6w13SIBtyrfsTuqc0TpfuMAYZFhXpOubKFzGpV+///1qJ61PmLRxEklpPnIb++uMMCaEyG1chtRli2ywokjCiNyUqJ6HLKPW3LIzDNtHA+I5EKkIP/vCFma3xBI2N74chaT6eL67zuULT3hai8g57hrgR3LeIvg6e2E6PlYJYmNHqO9L+TCbs3VN+OJxcl//bLMWxHQaGBUgi5nlKLzuGwNFl+KdMo8a+igwuGg9vlaFv5YSkvA+s99AxtOoiwViVX8+V7WwBvNmfh2Spp1jUNIclKdqKO2y8qxKb70KrDIPQ0pgdKs+Dm2v3FxIO6dNi6FXzwds0DLmNiAcUnPBzBQ5DRkGSz/Ih/OA3BwjSFQVS+j+tAOH1NftPi+U7SelOqJMYBk7Q48F3ZQ5Tr7yCbSgC2Jxz9i3XmEBTKInwq/dz9rM4Hae7g+SLWHxHvav4so9c01cfgG+dyouphXvh7mpuGlt/Jieg0B24GY5Mgr7EVT3+e77922yNzvsHNNnOuEW/uCmpsq2BXWpxhpoKLvOZcgVD8XdQfboO8PxBURFJ9/Lg3F0LQrbNMlaWcV3P1p7TgmevoQ==
Documentation
The documentation lists an implementation for known host keys stored in the filesystem and an implementation for a single static server key, see ServerKeyVerifier:
ServerKeyVerifier
client.setServerKeyVerifier(...); sets up the server key verifier. As part of the SSH connection initialization protocol, the server proves its "identity" by presenting a public key. The client can examine the key (e.g., present it to the user via some UI) and decide whether to trust the server and continue with the connection setup. By default the client is initialized with an AcceptAllServerKeyVerifier that simply logs a warning that an un-verified server key was accepted. There are other out-of-the-box verifiers available in the code:
RejectAllServerKeyVerifier - rejects all server keys - usually used in tests or as a fallback verifier if none of its predecessors validated the server key
RequiredServerKeyVerifier - accepts only one specific server key (similar to certificate pinning for SSL)
KnownHostsServerKeyVerifier - uses the known_hosts file to validate the server key. One can use this class + some existing code to update the file when new servers are detected and their keys are accepted.
But I couldn't find an example for RequiredServerKeyVerifier.
Research
I could disable server key validation in my integration tests, but I want to test the configuration code for server key validation, too.
I could dynamically rewrite the known_hosts file in the filesystem to change the port, but that is error prone and increases complexity (file permissions, parallel access).
Question
How to load known host key from String instead of filesystem?
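For illustration, this is roughly the kind of setup I am looking for (an untested sketch based on the Javadoc; it assumes the AuthorizedKeyEntry and PublicKeyEntryResolver utilities and the RequiredServerKeyVerifier constructor from sshd-core 2.x, and the exact resolvePublicKey signature differs between versions):

import java.security.PublicKey;

import org.apache.sshd.client.SshClient;
import org.apache.sshd.client.keyverifier.RequiredServerKeyVerifier;
import org.apache.sshd.common.config.keys.AuthorizedKeyEntry;
import org.apache.sshd.common.config.keys.PublicKeyEntryResolver;

public class PinnedServerKeyExample {
    public static void main(String[] args) throws Exception {
        // The known-hosts line as a String, e.g. injected via Spring configuration.
        String knownHostLine =
            "[localhost]:2222 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC2tFA4...";

        // Drop the "[localhost]:2222" host part so that only "ssh-rsa AAAA..." remains.
        String keyPart = knownHostLine.substring(knownHostLine.indexOf(' ') + 1);

        // Parse the OpenSSH-formatted key and resolve it into a java.security.PublicKey.
        AuthorizedKeyEntry entry = AuthorizedKeyEntry.parseAuthorizedKeyEntry(keyPart);
        PublicKey serverKey = entry.resolvePublicKey(null, PublicKeyEntryResolver.IGNORING);

        SshClient client = SshClient.setUpDefaultClient();
        // Accept only this one server key (similar to certificate pinning).
        client.setServerKeyVerifier(new RequiredServerKeyVerifier(serverKey));
        client.start();
    }
}

The port problem would then disappear, because only the key itself is pinned, not the host/port combination.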

Related

How Should I Encrypt Files At Rest in an S3 Bucket From an ECS Docker Container using AES?

I have been tasked with adding application-level file-at-rest encryption to files stored in an S3 bucket and accessed from within a Docker container in ECS.
I have been told that the default server-side encryption of the S3 bucket is insufficient and that I should implement additional encryption of the files at rest at the application level using AES encryption.
We are currently using GPG asymmetric encryption for files coming from upstream and going downstream. Our private key and the downstream application's public key are both obtained from Secrets Manager.
For files at rest, it seems appropriate to use a symmetric key also stored in Secrets Manager.
Everywhere I search, however, AES encryption seems to be intended for files that are stored on the server where they are used. Typically, to my understanding, a passphrase is provided and used to secure a symmetric key that gpg stores somewhere on the server.
Since the application is running within a Docker container, it does not seem appropriate for the key to be stored only on the application server as it would be lost when the container is rebuilt and deployed.
How can I accomplish AES encryption/decryption of the files in the bucket with a pre-existing key that is stored outside the container (i.e. Secrets Manager in this case)?
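Roughly the flow I have in mind (an untested sketch only; it assumes the AWS SDK for Java v2 and AES-GCM from javax.crypto, and the secret, bucket, and object names are made up):

import java.security.SecureRandom;
import java.util.Base64;

import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;

public class AtRestEncryptionSketch {
    public static void main(String[] args) throws Exception {
        try (SecretsManagerClient secrets = SecretsManagerClient.create();
             S3Client s3 = S3Client.create()) {
            // Fetch a base64-encoded 256-bit AES key from Secrets Manager (secret name is hypothetical).
            String secret = secrets.getSecretValue(
                    GetSecretValueRequest.builder().secretId("app/file-at-rest-key").build())
                    .secretString();
            SecretKeySpec key = new SecretKeySpec(Base64.getDecoder().decode(secret), "AES");

            // Encrypt with AES-GCM; keep the random IV next to the ciphertext so decryption can find it.
            byte[] plaintext = "file contents".getBytes();
            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ciphertext = cipher.doFinal(plaintext);

            byte[] blob = new byte[iv.length + ciphertext.length];
            System.arraycopy(iv, 0, blob, 0, iv.length);
            System.arraycopy(ciphertext, 0, blob, iv.length, ciphertext.length);

            // Upload the encrypted blob; decryption would be the same steps in reverse after getObject.
            s3.putObject(PutObjectRequest.builder().bucket("my-bucket").key("file.enc").build(),
                    RequestBody.fromBytes(blob));
        }
    }
}

The key would never touch the container filesystem; it only lives in memory for the duration of the call.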

warning REMOTE HOST IDENTIFICATION HAS CHANGED

Yesterday I was trying to update my Ruby on Rails application by deploying it with Capistrano, but I had to cancel the deployment in the middle of the process. Immediately afterwards I tried to access the server via SSH with ssh deploy@my_ip_server, but it hung while connecting, so I ended up restarting the AWS instance.
Today I am trying to access the server via ssh and I get this alert:
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:Adfadssdgdfg......
Please contact your system administrator.
Add correct host key in /home/jeff/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/jeff/.ssh/known_hosts:23
remove with:
ssh-keygen -f "/home/jeff/.ssh/known_hosts" -R "my_ip_server"
ECDSA host key for my_ip_server has changed and you have requested strict checking.
The IP of my instance changed, I imagine because I restarted the instance. I immediately updated the .ssh/authorized_keys file with a new access key.
For SSH access I have a security rule in AWS that only allows connections from my machine's IP.
Should I be worried about this alert, given that the instance is new and currently only in a testing phase?

What is the proper way of adding trust certificates to confluent kafka connect docker image

I have a kafka connect cluster (cp_kafka_connect_base) on docker, and I need to include a .pem file in order to connect to a source over TLS. It seems there are already a number of trusted certificates included in connect, so how would I add a new trusted certificate without invalidating the old ones?
Specific problem
I want to use MongoDB Source Connector, alongside a number of other connectors. As per documentation, I have imported my .pem certificate in a .jks store, and added the following envvars to my kafka connect containers:
KAFKA_OPTS="
-Djavax.net.ssl.trustStore=mystore.jks
-Djavax.net.ssl.trustStorePassword=mypass
This lets me connect to my data source, but invalidates other TLS connections, unless I add them all to my .jks. Since all other TLS connections work out of the box, I shouldn't need to manually import every single one of them to a .jks, just to make one connector implementation happy.
I have also tried setting:
CONNECT_SSL_TRUSTSTORE_TYPE: "PEM"
CONNECT_SSL_TRUSTSTORE_LOCATION: "myloc"
but the truststore location config isn't known, and TLS doesn't work:
WARN The configuration 'ssl.truststore.location' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:384)
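A direction I am considering (an untested sketch; it assumes the JDK default cacerts location and password, which may differ in the Confluent image, and the file names are made up): build a truststore that contains both the JVM defaults and my extra certificate, then point javax.net.ssl.trustStore at that merged file, so the existing TLS connections are not invalidated.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;

public class MergeTruststore {
    public static void main(String[] args) throws Exception {
        // Load the JVM's default truststore (path and "changeit" password are the usual JDK defaults).
        KeyStore store = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(
                System.getProperty("java.home") + "/lib/security/cacerts")) {
            store.load(in, "changeit".toCharArray());
        }

        // Add the extra certificate from the .pem file on top of the existing entries.
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        try (FileInputStream pem = new FileInputStream("my-ca.pem")) {
            Certificate cert = cf.generateCertificate(pem);
            store.setCertificateEntry("my-extra-ca", cert);
        }

        // Write the merged store; -Djavax.net.ssl.trustStore would then point at this file.
        try (FileOutputStream out = new FileOutputStream("merged-truststore.jks")) {
            store.store(out, "mypass".toCharArray());
        }
    }
}

The same merge could presumably be done with keytool -importcert against a copy of cacerts at image build time.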

Jenkins cannot connect to EC2 using private key, but I can connect using Putty

I recently inherited a Jenkins instance running on an AWS EC2 server. It has several pipelines to different EC2 servers that are running successfully. I'm having trouble adding a new node to a new EC2 web server.
I have an account on that new web server named jenkins. I generated keys, added the ssh-rsa key to ~/.ssh/authorized_keys, and verified I was able to connect with the jenkins user via Putty.
In Jenkins, under Dashboard > Credentials > System > Global Credentials, I created new credentials as follows:
Username: jenkins
Private Key -> Enter Key Directly: Pasted in the key beginning with "BEGIN RSA PRIVATE KEY":
Finally, I created a new node using those credentials, to connect via SSH and use the "Known hosts file Verification Strategy."
Unfortunately, I'm getting the following error when I attempt to launch the agent:
[01/04/22 22:16:43] [SSH] WARNING: No entry currently exists in the
Known Hosts file for this host. Connections will be denied until this
new host and its associated key is added to the Known Hosts file. Key
exchange was not finished, connection is closed.
I verified I have the correct Host name configured in my node.
I don't know what I'm missing here, especially since I can connect via Putty.
Suggestions?
Have you added the new node to the known hosts file on the Controller node?
I assume Putty was your local machine rather than the controller?
See this support article for details
https://support.cloudbees.com/hc/en-us/articles/115000073552-Host-Key-Verification-for-SSH-Agents#knowhostsfileverificationstrategy
Sounds like your system doesn't allow host keys to be added to the known_hosts file automatically. You can check for the UpdateHostKeys flag in the SSH config file of your user, the system, or potentially whatever user Jenkins runs under. You can read more about the specific flag I'm talking about here.
If you need to add that hostkey manually, here's a nice write up for how to do it.

neo4j docker image (vps managed with plesk), cannot assign certificates for secure bolt connection with Let's encrypt certificate

I'm trying to run neo4j community on a vps via a docker image managed with plesk.
I am however having issues configuring the SSL certificate so I can connect to it securely from nodejs.
Currently, the error I'm getting is quite straightforward in node:
Neo4jError: Failed to connect to server.
Please ensure that your database is listening on the correct host and port and that you have
compatible encryption settings both on Neo4j server and driver. Note that the default encryption
setting has changed in Neo4j 4.0. Caused by: Server certificate is not trusted. If you trust the
database you are connecting to, use TRUST_CUSTOM_CA_SIGNED_CERTIFICATES and add the signing
certificate, or the server certificate, to the list of certificates trusted by this driver using
`neo4j.driver(.., { trustedCertificates:['path/to/certificate.crt']}). This is a security measure
to protect against man-in-the-middle attacks. If you are just trying Neo4j out and are not
concerned about encryption, simply disable it using `encrypted="ENCRYPTION_OFF"` in the driver
options. Socket responded with: DEPTH_ZERO_SELF_SIGNED_CERT
I've mapped the volumes as follows:
/certificates to the letsencrypt live folder for the domain db.example.com
Then I'm trying to connect to it via: bolt://db.example.com:32771
When I check via browser, the certificate being served is self-signed. I have tried adding this certificate to the trusted certificates in Windows, but it didn't do anything at all.
Also added the path to the trusted certificates when instantiating the driver:
this._driver = neo4j.driver(process.env.Neo4jUri, token, {
encrypted: true,
trustedCertificates: ['ssl/neo4j.crt'],
});
I've also tried copying the files within that certificate folder so that the appropriate files are named as mentioned in this article.
