I have a Rails 3.2.8 app with Ruby 1.9.3 on Ubuntu 12.04. It uses Mechanize to connect to an HTTPS web site.
I am seeing this error intermittently:
SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
I do set the CA file:
Mechanize.new do |agent|
  agent.ssl_version = "SSLv3"
  agent.ca_file = Rails.root.join("lib/cacert.pem").to_s
end
I have also tried using cert_store:
cert_store = OpenSSL::X509::Store.new
cert_store.set_default_paths
Mechanize.new do |agent|
  agent.ssl_version = "SSLv3"
  agent.cert_store = cert_store
end
And setting the store explicitly:
cert_store = OpenSSL::X509::Store.new
cert_store.add_file Rails.root.join("lib/cacert.pem").to_s
Mechanize.new do |agent|
  agent.ssl_version = "SSLv3"
  agent.cert_store = cert_store
end
These errors appear regardless of which method I use to specify the CA file/certificates (including relying on the default behaviour). When I run the code manually from the Rails console, it works fine. Which of the above approaches, if any, is correct? What else can I do to debug this?
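One way to narrow this down is to run the same fetch repeatedly from a bare Ruby script outside Rails, with the same CA file, and compare the failure rate against the console runs; a minimal sketch (the URL is a placeholder):

require 'mechanize'

# Hit the endpoint repeatedly with the same CA bundle to see how often
# verification fails outside of the Rails app.
agent = Mechanize.new do |a|
  a.ssl_version = "SSLv3" # as in the app
  a.ca_file = File.expand_path("lib/cacert.pem")
end

20.times do |i|
  begin
    agent.get("https://example.com/")
    puts "#{i}: ok"
  rescue OpenSSL::SSL::SSLError => e
    puts "#{i}: #{e.message}"
  end
end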
I'm using Rails 5 with Sentry installed. I have tested it locally and it works, but after moving to production I get a certificate error when I boot up the console and test Sentry.capture_message("new test 2"):
Event sending failed: SSL_connect returned=1 errno=0 state=error: certificate verify failed (certificate has expired)
Unreported Event: new test 2
exception happened in background worker: SSL_connect returned=1 errno=0 state=error: certificate verify failed (certificate has expired)
My code is as follows:
Sentry.init do |config|
  config.dsn = ENV["SENTRY_DNS"]
  config.breadcrumbs_logger = [:active_support_logger, :http_logger]
  config.traces_sample_rate = 0.25
  config.enabled_environments = %w[ staging ]
end
Your issue is that your server is attempting to verify the SSL certificate when connecting to Sentry, and that verification is failing. As a quick fix, you can disable verification:
Sentry.init do |config|
  config.transport.ssl_verification = false
  config.dsn = ENV["SENTRY_DNS"]
  config.breadcrumbs_logger = [:active_support_logger, :http_logger]
  config.traces_sample_rate = 0.25
  config.enabled_environments = %w[ staging ]
end
When attempting to send to Sentry, your server is failing to verify the SSL certificate correctly. You can skip verification by adding the option above. This is a bit of a security hole, so the more correct way would be to set:
config.transport.ssl_ca_file = 'path to a valid local cert file'
instead.
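For example (the bundle path below is hypothetical):

Sentry.init do |config|
  config.dsn = ENV["SENTRY_DNS"]
  # Keep SSL verification on, but trust a known-good CA bundle shipped with the app.
  config.transport.ssl_ca_file = Rails.root.join("config", "cacert.pem").to_s
end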
I'm trying to build a web crawler that accesses a certain site via SSL and queries my data on that site. Authentication on this site uses a self-signed digital certificate. When I want to access the site, I upload this certificate in .pfx format to my API, convert it to .pem, and when I try to access the site with this certificate, the response comes back with status 403 (Forbidden).
However, when I access the site through a browser with the certificate in .pfx format, it usually works.
I already tried using Mechanize, and it worked until a few months ago, but then it started giving this error:
SSL_connect returned = 1 errno = 0 state = SSLv3 read finished A: sslv3 alert bad certificate
The site is old and does not receive updates frequently.
After that I tried the net/http lib and the error persisted; then I tried the httprb gem and lastly Faraday. All attempts ended either with the error quoted above or with a 403 response status.
What can I do to be able to connect? Is there something wrong with my script? Is it missing any information I need to get through?
Code:
# Faraday custom method:
class FaradayHttp
  def with_openssl
    # Convert the .pfx bundle into a .pem file containing both the cert and the key
    system "openssl pkcs12 -in my-certificate-path -out certificate-output-path -nodes -password pass:certificate-password"

    # create client certificate
    def cert_object
      OpenSSL::X509::Certificate.new File.read("certificate-output-path")
    end

    # create PKey
    def key_object
      OpenSSL::PKey.read File.read("certificate-output-path")
    end

    faraday = Faraday::Connection.new 'https://example-site.com',
      :ssl => {
        certificate: cert_object,
        private_key: key_object,
        version: :SSLv3,
        verify: false
      }
    faraday
  end
end
# Controller code that tries to connect to the SSL server:
agent = FaradayHttp.new.with_openssl
page = agent.get '/login_path'
# mypki will prompt you for certificates
require 'mypki'
# faraday will use certificates from mypki
require 'faraday'
faraday = Faraday::Connection.new 'https://example-site.com'
faraday.get '/login_path'
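If mypki is not a fit, another option is to load the .pfx directly with OpenSSL::PKCS12 instead of shelling out to the openssl binary; a sketch reusing the placeholder path and password from the question:

require 'openssl'
require 'faraday'

# Read the client certificate and key straight from the .pfx bundle.
pkcs12 = OpenSSL::PKCS12.new(File.read("my-certificate-path"), "certificate-password")

faraday = Faraday::Connection.new 'https://example-site.com',
  :ssl => {
    certificate: pkcs12.certificate,
    private_key: pkcs12.key,
    verify: true # keep verification on unless the server certificate itself is untrusted
  }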
I am currently running into an issue where my background workers, which communicate with Elasticsearch via elasticsearch-client, are running into SSL errors inside Faraday.
The error is this:
SSL_connect returned=1 errno=0 state=SSLv3 read server hello A: sslv3 alert handshake failure
The configuration works fine some of the time (around 50%), and it has never failed for me inside of a console session.
The trace of the command is this:
curl -X GET 'https://<host>/_alias/models_write?pretty'
The client config is this:
Thread.current[:chewy_client] ||= begin
  client_configuration[:reload_on_failure] = true
  client_configuration[:reload_connections] = 30
  client_configuration[:sniffer_timeout] = 0.5
  client_configuration[:transport_options] ||= {}
  client_configuration[:transport_options][:ssl] = { :version => :TLSv1_2 }
  client_configuration[:transport_options][:headers] = { content_type: 'application/json' }
  client_configuration[:trace] = true
  client_configuration[:logger] = Rails.logger

  ::Elasticsearch::Client.new(client_configuration) do |f|
    f.request :aws_signers_v4,
              credentials: AWS::Core::CredentialProviders::DefaultProvider.new,
              service_name: 'es',
              region: ENV['ES_REGION'] || 'us-west-2'
  end
end
As you can see, I explicitly set the SSL version to TLSv1_2, but I am still getting an SSLv3 error.
I thought maybe it was a race condition, so I ran a script spawning about 10 processes with 50 threads each, calling the Sidekiq perform method inside, and I was still not able to reproduce it.
I am using the managed AWS Elasticsearch 2.3, if that is at all relevant.
Any help or guidance in the right direction would be greatly appreciated, I would be happy to attach as much info as needed.
Figured it out. The problem was that the elasticsearch-ruby gem autodetects and loads an HTTP adapter if one is not specified. The one used in my console was not the one getting autoloaded into Sidekiq.
The Sidekiq job was using the HTTPClient adapter, which did not respect the SSL version option, hence the error. After explicitly defining the Faraday adapter, it worked.
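For reference, a sketch of what that looks like, reusing the client block from the question (the adapter name is an example; use whichever adapter your console session was loading):

::Elasticsearch::Client.new(client_configuration) do |f|
  f.request :aws_signers_v4,
            credentials: AWS::Core::CredentialProviders::DefaultProvider.new,
            service_name: 'es',
            region: ENV['ES_REGION'] || 'us-west-2'
  f.adapter :net_http # pin the adapter explicitly instead of relying on autodetection
end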
I am trying to parse an HTTPS XML feed via Nokogiri but I get this OpenSSL error:
SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (OpenSSL::SSL::SSLError)
The SSL_CERT_FILE environment variable is also set:
echo $SSL_CERT_FILE
/home/user/certs/cacert.pem
This is how I am trying to parse it:
@feed = "https://example.com/feed1.xml"
doc = Nokogiri::XML(open(@feed))
I tried to bypass the OpenSSL verification, but I still get the same error:
doc = Nokogiri::XML(open(@feed, { ssl_verify_mode: OpenSSL::SSL::VERIFY_NONE }))
Can anyone help?
This problem usually appears on Windows.
One quick solution is to pass ssl_verify_mode to open:
require 'open-uri'
require 'openssl'
open(some_url, ssl_verify_mode: OpenSSL::SSL::VERIFY_NONE)
Another quick one is overriding OpenSSL::SSL::VERIFY_PEER at the beginning of your script by doing:
require 'openssl'
OpenSSL::SSL::VERIFY_PEER = OpenSSL::SSL::VERIFY_NONE
Those who want a real solution can try the method described at https://gist.github.com/fnichol/867550.
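If you would rather keep verification enabled, open-uri also accepts an ssl_ca_cert option, so you can point it at a CA bundle instead (reusing the path from the question above):

require 'open-uri'
require 'nokogiri'

# Keep verification on, but tell open-uri which CA bundle to trust.
doc = Nokogiri::XML(
  open("https://example.com/feed1.xml",
       ssl_ca_cert: "/home/user/certs/cacert.pem")
)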
I am using the https://github.com/ruby-ldap/ruby-net-ldap (net-ldap) gem to verify the authenticity of a user in my Rails app. But before passing data to the LDAP server, I need to verify that I am talking to the right, secure server.
Is there a workaround which allows me to verify the certificate in Ruby?
Additional details (things I have tried):
The certificate which was passed on to me is the same as the one I see when I run:
openssl s_client -showcerts -connect "<host>:<port>" </dev/null 2>/dev/null|openssl x509 -outform PEM
I used http://www.ldapsoft.com/ to connect to the client's server.
Unless I add the certificate file given to me under Security > Manage server certificates, I get a warning about an unknown security certificate.
I tried to do it manually first in plain Ruby (without the gem), but I get the following error:
test-ssl.rb:23:in `connect': SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (OpenSSL::SSL::SSLError)
Code:
cert_store = OpenSSL::X509::Store.new
cert_store.add_file "server-wildcard.crt"
io = TCPSocket.new("SECURELDAP.MYSITE.EDU","636")
ctx = OpenSSL::SSL::SSLContext.new
#ctx.cert = OpenSSL::X509::Certificate.new(File.read("server-wildcard.crt"))
#ctx.client_ca = OpenSSL::X509::Certificate.new(File.read("server-wildcard.crt"))
#ctx.ca_file = "server-wildcard.crt"
#ctx.ca_path = "./"
ctx.verify_mode = OpenSSL::SSL::VERIFY_PEER | OpenSSL::SSL::VERIFY_FAIL_IF_NO_PEER_CERT
ctx.cert_store = cert_store
conn = OpenSSL::SSL::SSLSocket.new(io, ctx)
conn.connect
I am posting my solution here for the sake of completeness.
A net-ldap gem override to support certificate validation:
https://gist.github.com/mintuhouse/9931865
Ideal solution:
Maintain a list of trusted root CAs on your server (if you are lazy like me, have a cron job download the copy maintained by curl from http://curl.haxx.se/ca/cacert.pem weekly).
Override Net::HTTP to always use this trusted certificate list (see the sketch below).
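A rough per-request sketch of that last point; the URL and bundle path here are placeholders:

require 'net/http'
require 'openssl'

uri  = URI("https://example.com/")             # placeholder URL
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl     = true
http.ca_file     = "/etc/ssl/certs/cacert.pem" # the bundle refreshed by the cron job
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
http.start { |h| puts h.head("/").code }       # raises OpenSSL::SSL::SSLError if the chain can't be verified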
As of today (late 2016), ruby-net-ldap supports this upstream! However, tls_options needs to be passed with verify_mode set to a value other than the default VERIFY_NONE.
# optional: create/pass your own cert_store
cert_store = OpenSSL::X509::Store.new
cert_store.set_default_paths # or add your own CAdir, &c.

# attributes documented for OpenSSL::SSL::SSLContext are valid here
tls_options = {
  verify_mode: OpenSSL::SSL::VERIFY_PEER,
  cert_store: cert_store
}

ldap = Net::LDAP.new(
  :host => host,
  :port => port,
  :encryption => {
    :method => :simple_tls, # could also be :start_tls
    :tls_options => tls_options
  }
)
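A short, hypothetical usage example for the connection built above (the bind DN and password are placeholders):

ldap.auth("cn=user,dc=example,dc=com", "secret") # placeholder credentials
if ldap.bind
  puts "authenticated over verified TLS"
else
  puts ldap.get_operation_result.message
end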