We have a Rails app running on Windows that uses Puma. So far we've stored our SSL/TLS certificates on the filesystem, which seems to be the standard approach in general and the way Puma is designed to take in that data at startup.
We would like to instead keep only an encrypted PKCS#12 file (.p12) on disk that holds all certificate data (one or more certificates and the private key), pull the specific certs out into variables during Puma startup, and then feed those directly into Puma's ssl_bind command.
So I'm trying to figure out whether Puma can accept variables that hold certificate data, as opposed to being given the expected cert and key paths that point at plaintext files.
I've tried a few different ways of replacing the file paths with variables, but so far I only get errors (see below for an example). I've printed the cert and key alongside the same data from the filesystem, and they look identical to me. I've read other somewhat related SO threads suggesting I may need to add newlines or otherwise slightly manipulate the data in my variables, but that line of thinking has confused me and I'm not sure it really pertains to my scenario. I think it comes down to ssl_bind expecting a file path, and likely running "file open" logic under the hood. Does it simply not support taking the data directly?
Here is an example of what works today:
# tls.key and tls.crt are already sitting on filesystem
ssl_bind '0.0.0.0', '443', {
  key: 'certs/tls.key',
  cert: 'certs/tls.crt',
  no_tlsv1: true,
  no_tlsv1_1: true,
  verify_mode: 'none'
}
Here is an example of what we want to do:
require 'openssl'
# get p12 password out of secrets at runtime
p12_password = Rails.application.credentials.p12[:password].to_s
# open encrypted p12 file
p12 = OpenSSL::PKCS12.new(File.binread('certs/tls.p12'), p12_password)
# pull out certificate and key from p12
leafkey = p12.key.to_pem
leafcertificate = p12.certificate.to_pem
ssl_bind '0.0.0.0', '443', {
  key: leafkey,
  cert: leafcertificate,
  no_tlsv1: true,
  no_tlsv1_1: true,
  verify_mode: 'none'
}
The error we receive from the above is:
C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/puma-4.3.8/lib/puma/minissl.rb:209:in `key=': No such key file '-----BEGIN EC PRIVATE KEY-----MDECAQEEIBccaYhSLodf 4TRzzWkOE5rr8t Ul0oQHcjYmmoiuvloAoGCCqGSM4jdu73-----END EC PRIVATE KEY-----' (ArgumentError)
This is the valid EC key data, but Puma/ssl_bind is (not surprisingly) confused because it's not the expected path to a file on disk containing that data. Can we trick Puma into accepting it directly this way?
Thank you for reading and taking the time to express any thoughts you may have!
This capability was added as an enhancement in this PR.
So far it looks like I was able to update Puma from 4.3.8 directly to 5.6.2 without any fuss. We did need to update two options to the *_pem versions, i.e.,
cert becomes cert_pem and
key becomes key_pem
With this in place, it JUST WORKED.
Example running with Puma 5.6.2:
require 'openssl'
# using cleartext password for testing/simplicity
p12 = OpenSSL::PKCS12.new(File.binread('certs/tls.p12'), "1234")
leafcertificate = p12.certificate.to_pem
leafkey = p12.key.to_pem
ssl_bind '0.0.0.0', '443', {
  cert_pem: leafcertificate,
  key_pem: leafkey,
  no_tlsv1: true,
  no_tlsv1_1: true
}
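The cleartext password above was just for testing; in production, the credentials lookup from the question should slot straight in. A sketch combining the two (same certs/tls.p12 path and Rails credentials key as above):
require 'openssl'
# pull the p12 password from Rails credentials instead of hardcoding it
p12_password = Rails.application.credentials.p12[:password].to_s
p12 = OpenSSL::PKCS12.new(File.binread('certs/tls.p12'), p12_password)
ssl_bind '0.0.0.0', '443', {
  cert_pem: p12.certificate.to_pem,
  key_pem: p12.key.to_pem,
  no_tlsv1: true,
  no_tlsv1_1: true
}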
Personal lessons learned: I should have prioritized digging into the Puma repo: pull requests, issues, etc.
Related
I am trying to connect to the HiveMQ broker using an ESP32, a SIM7020 NB-IoT module and the Magellan_SIM7020E library found at: https://github.com/AIS-DeviceInnovation/Magellan_SIM7020E.
It requires an SSL certificate to be pasted in a header file with this format (here abbreviated), where the lines of XXXXXX are to be replaced with the characters of the certificate:
/*= Certificate Authority info =*/
/*= CA Cert in PEM format =*/
const char rootCA[] = {"-----BEGIN CERTIFICATE-----"
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
...
...
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
"XXXXXXXXXXXXXXXXXXXXXXXXXXXX"
"-----END CERTIFICATE-----"};
const char clientCA[] = {"-----BEGIN CERTIFICATE-----"
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
...
...
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
"XXXXXXXXXXXXXXXX"
"-----END CERTIFICATE-----"};
const char clientPrivateKey[] = {"-----BEGIN RSA PRIVATE KEY-----"
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
...
...
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
"-----END RSA PRIVATE KEY-----"};
I managed to get this setup working with the free service (using the non-SSL client in the library), but I am failing with the SSL version used to access the paid broker.
By free service I mean broker.hivemq.com, and by paid I mean xxxxxxx.s2.eu.hivemq.cloud.
I tried to use the certificate indicated in the FAQ (https://community.hivemq.com/t/frequently-asked-questions/514), which gives me a file called isrgrootx1.pem, but it has a different format.
I tried cutting the .pem into sections with lengths that match each of the three entries required in the header file, but the total length does not match.
The downloaded .pem file has a single block of 1856 characters, whereas the ESP32 seems to need three blocks for root, client and client private key with lengths of 1116, 1232 and 1592 respectively, adding up to much more than the PEM file length.
Is this the right file?
If so, how do I convert it to the format that I need?
If not, where from can I get the certificate?
I tried to follow a previous answer (Can't connect ESP32 to MQTT). That is for WiFi, not NB-IoT, but it seems to require a similar format for the certificate. It suggests installing and using OpenSSL, but I can't figure out how to install it, and I don't even know whether it would actually do what I need if I did manage to install it.
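My best guess so far is that the conversion is mostly a text transformation: wrap each line of the .pem file in double quotes so it can be pasted between the braces in the header. Something like this Ruby sketch (untested, run once on a PC rather than on the ESP32) is what I have in mind:
# pem_to_header.rb: print each line of the downloaded PEM file wrapped in
# double quotes, ready to paste into the rootCA[] entry of the header file.
pem_lines = File.read('isrgrootx1.pem').strip.split("\n")
puts pem_lines.map { |line| "\"#{line}\"" }.join("\n")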
As an alternative, is there any way I can access the paid broker without SSL? I don't really need that level of security, and it is an added complication, especially with regard to keeping the certificates up to date.
I'm facing a problem regarding the shared secret in the "clients.conf" file in FreeRADIUS server 3.0.25.
I tried to follow the documentation, but with no luck. In particular, I'm trying to use the exact example from the documentation of the octal representation of the secret "AB":
clients.conf:
secret = "\101\102"
Then I run radtest:
./radtest -x testing password localhost 0 "AB"
In the server debug log I find:
"Dropping packet without response because of error: Received packet from 127.0.0.1 with invalid Message-Authenticator! (Shared secret is incorrect.)"
I tried every combination that came to mind: with or without quotes, with the "-t eap-md5" parameter in radtest, and so on.
Of course, if I write secret = "AB" in clients.conf everything works, but I need the octal representation because a client of ours uses special non-printable characters in the secret.
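Just to illustrate what I expect those escapes to mean (this is Ruby, not FreeRADIUS's own parser, so it only shows the intended decoding):
# \101 and \102 are octal for ASCII 65 ('A') and 66 ('B')
secret = "\101\102"
puts secret          # => AB
puts secret == "AB"  # => true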
Any help is appreciated
Thanks
I was able to make it work by changing the default value of the parameter correct_escapes in the file radiusd.conf:
correct_escapes = false <-- it was 'true' by default
It is still not clear to me why it doesn't work with correct_escapes set to 'true'; maybe it's a bug?
What is the best, simple way to authenticate the Vision API on Heroku?
In development I just use:
@vision = Google::Cloud::Vision.new(project: "instacult",
                                    keyfile: "path/to/keyfile.json")
Where keyfile is a json produced by google after creating service account (https://cloud.google.com/vision/docs/common/auth).
But obviously I can't just upload the keyfile to GitHub.
I tried saving the whole JSON to Heroku's config vars and running:
Rails.env.production? ? ENV["GOOGLE_CREDENTIALS"] : path
But I got "is not a valid file" in Heroku's logs. That seems logical, since I'm not passing a file path but the JSON contents. But how do I get around it?
Cheers,
Kai
SOLVED:
It turns out you can provide the JSON contents in an environment variable, but there is a naming convention.
Here are the environment variables (in the order they are checked) for credentials:
VISION_KEYFILE - Path to JSON file
GOOGLE_CLOUD_KEYFILE - Path to JSON file
VISION_KEYFILE_JSON - JSON contents
GOOGLE_CLOUD_KEYFILE_JSON - JSON contents
source: https://googlecloudplatform.github.io/google-cloud-ruby/#/docs/google-cloud-vision/v0.23.0/guides/authentication
So I ended up calling:
@vision = Google::Cloud::Vision.new(project: "instacult")
Having set VISION_KEYFILE_JSON in my ~/.bashrc:
export VISION_KEYFILE_JSON='the_json_content'
and on Heroku (https://devcenter.heroku.com/articles/config-vars#limits).
Amazon S3, using Rails and fog.
I'm trying to precompile my assets with rake assets:precompile and get this message:
[WARNING] fog: followed redirect to myproject.de.s3-us-west-2.amazonaws.com, connecting to the matching region will be more performant
rake aborted!
hostname does not match the server certificate (OpenSSL::SSL::SSLError)
So there is something going on with OpenSSL.
What I tried already:
I have already tried to configure certificates in application.rb like this, with no success:
AWS.config(:http_handler => AWS::Http::HTTPartyHandler.new(:ssl_ca_path => "/etc/ssl/certs"))
I also installed OpenSSL on Ubuntu 12.04 from here.
The question is:
How does Amazon S3 deal with certificates?
Actually, you can use a bucket name with a dot. All you have to do is add :path_style => true to your config.fog_credentials.
In your example, it would give:
config.fog_credentials = {
  :provider => 'AWS',
  :aws_access_key_id => ENV['S3_KEY'],
  :aws_secret_access_key => ENV['S3_SECRET'],
  :region => ENV['S3_REGION'],
  :path_style => true
}
config.fog_directory = "myproject.de"
TLDR; Solution
In order to access your S3 bucket URLs via httpS, you will need to either:
Choose a bucket name such that it contains no periods '.' and use the "Virtual Hosted–Style" URL, such as https://simplebucketname.s3.amazonaws.com/myObjectKey OR
Use the "Path Style" URL form that specifies the bucket name separately, after the host name, for example: https://s3.amazonaws.com/mybucket.mydomain.com/myObjectKey
With fog, you can set the option: :path_style => true as this solution explained.
The Problem & Explanation
The SSL Certificate Validation problem arises from using dots '.' in the S3 Bucket Name along with the "Virtual Hosted–Style Method" URL format.
The Amazon S3 Documentation states that it allows two main URL formats for accessing S3 Buckets and Objects:
Path Style Method (being deprecated)
Virtual Hosted–Style Method
So what's happening is this:
Fog is trying to request a URL to your bucket like: https://myproject.de.s3-us-west-2.amazonaws.com/foo/bar
The Hostname in the request is myproject.de.s3-us-west-2.amazonaws.com
An SSL Cert for *.amazonaws.net is served during SSL/TLS negotiation
Fog tries to validate the SSL Cert & CA Cert Chain
Fog tries to match the Cert's CN *.s3.amazonaws.com against myproject.de.s3-us-west-2.amazonaws.com
According to Certificate CN wildcard matching rules in RFC 2818, the sub-subdomain does not match wildcard CN: *.s3.amazonaws.com
The connection fails with hostname does not match the server certificate because certificate hostname verification fails
The dots in S3 URL problem is mentioned around the internet such as in the Drupal Project, AWS Forums, Python Boto Library and is very well explained in this blog post entitled: Amazon S3 Gotcha: Using Virtual Host URLs with HTTPS <-- I highly recommend reading this one for further clarification.
The problem is with the naming of the bucket, in this case myproject.de: a name containing a dot does not work here (the name should have no dots).
I changed the bucket name from myproject.de to myprojectde and it works now.
Rules for Bucket Naming
In all regions except the US Standard region, a bucket name must comply with the following rules, which result in a DNS-compliant bucket name:
Bucket names must be at least 3 and no more than 63 characters long.
A bucket name must be a series of one or more labels separated by a period (.), where each label:
Must start with a lowercase letter or a number
Must end with a lowercase letter or a number
Can contain lowercase letters, numbers and dashes
Bucket names must not be formatted as an IP address (e.g., 192.168.5.4).
The following are examples of valid bucket names:
myawsbucket
my.aws.bucket
myawsbucket.1
The following are examples of invalid bucket names:
.myawsbucket (bucket name cannot start with a period)
myawsbucket. (bucket name cannot end with a period)
my..examplebucket (there can only be one period between labels)
Note if you want to access a bucket using a virtual hosted-style request, for example http://mybucket.s3.amazonaws.com over SSL, the bucket name cannot include a period (.).
Further reference is here.
I have an FTP server that only accepts connections over FTPS (explicit FTP over TLS). I need to be able to connect to it from a Ruby on Rails app.
Does anybody know of a way to do this? I have tried the Net::FTP library, but it does not appear to support FTPS connections.
How about using Net::FTPTLS?
Since Ruby 2.4, TLS over FTP has been available with Net::FTP... this has caused gems like double-bag-ftps to become archived and all your Google searches to yield outdated answers.
If you can do explicit FTP over TLS (connect to FTP normally, then issue an AUTH TLS command to switch to TLS mode), then great... that should work with Ruby's Net::FTP out of the box by just passing {ssl: true} in the options, as in the sketch below.
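A minimal explicit-FTPS sketch along those lines (host and credentials are placeholders):
require 'net/ftp'
# Connect on the normal FTP port, then upgrade the session via AUTH TLS;
# the ssl: true option handles the upgrade on Ruby >= 2.4.
Net::FTP.open('ftp.example.com', ssl: true, username: 'user', password: 'pass') do |ftp|
  puts ftp.list
end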
Implicit FTP over TLS (which runs over TLS from the get-go) does not work out of the box, however; you must override Net::FTP's connect method to establish an SSL socket and then optionally send commands to the FTP server.
Inidka K posted a Github Gist, but since those are bad form (can go stale), I've posted my version that works against a ShareFile Implicit FTP setup (which seems to only support Implicit FTP):
require 'net/ftp'
class ImplicitFtp < Net::FTP
  FTP_PORT = 990

  def connect(host, port = FTP_PORT)
    synchronize do
      @host = host
      @bare_sock = open_socket(host, port)
      begin
        # Wrap the connection in TLS from the start (implicit FTPS), then
        # read the server greeting and protect the data channel.
        ssl_sock = start_tls_session(Socket.tcp(host, port))
        @sock = BufferedSSLSocket.new(ssl_sock, read_timeout: @read_timeout)
        voidresp
        if @private_data_connection
          voidcmd("PBSZ 0")
          voidcmd("PROT P")
        end
      rescue OpenSSL::SSL::SSLError, Net::OpenTimeout
        @sock.close
        raise
      end
    end
  end
end
Then, in your code:
ftp_options = {
  port: 990,
  ssl: true,
  debug_mode: true, # If you want to see what's going on
  username: FTP_USER,
  password: FTP_PASS
}
ftp = ImplicitFtp.open(FTP_HOST, ftp_options)
puts ftp.list
ftp.close
I did something like this with implicit/explicit FTPS. I used the double-bag-ftps gem, which I patched to support reuse of the SSL session; that's a requirement for a lot of FTPS servers.
I put the code on GitHub here: https://github.com/alain75007/double-bag-ftps
EDIT: I figured out how to get it running locally, but am having issues getting it to work on Heroku. That's a bit of a departure from this question, so I've created a new one:
Heroku with FTPTLS - Error on SSL Connection
require 'net/ftptls'
ftp = Net::FTPTLS.new()
ftp.passive = true
#make sure you define port_number
ftp.connect('host.com', port_number)
ftp.login('Username', 'Password')
ftp.gettextfile('filename.ext', 'where/to/save/file.ext')
ftp.close
If you want to use Implicit FTPS, please try this gist.
For Explicit FTPs, you can use the ruby gem ftpfxp.
Support for implicit FTPS was merged into ruby/net-ftp in January 2022 (in this PR). If you want to make use of this straight away, you can include the latest version directly in your Gemfile:
gem "net-ftp", github: "ruby/net-ftp", branch: "master"
Then you just need:
options = {
  ssl: true,
  port: 990,
  implicit_ftps: true,
  username: "your-user",
  password: "*********",
  debug_mode: true
}
Net::FTP.open("yourhost.com", options) do |ftp|
ftp.list.map{ |f| puts f }
end
I implemented an FTPS solution using the double-bag-ftps gem.