Weird behavior on form data signature with a GET request - ruby-on-rails

Case study
I have 2 apps (Rails => the sender and Laravel => the receiver) in which I sign data using a private and public key in order to ensure the accuracy of the information passed between requests. The data is sent from one app to the other using GET parameters:
domain.com/callback?order_id=12&time=2015-10-01T22:38:20Z&signature=VX2WxlTaGK5N12GhZ5oqXU5h3wW/I70MYZhLbAYNQ79pFquuhdOerwBwqaq2BRuGyhKoY6VEHJkNnFjLAJkQD6Q5z4Vmk...
Problem
I'm experiencing odd behavior between the staging server and the local one regarding the signature of that data.
When testing on staging, the generated link (GET) looks like this (source code from Chrome):
And on the local server it's the exact same formatted HTML (except for the data, which of course differs). By the way, I'm using Haml.
The callback URL is generated by a decorator:
def url_to_store
  params = url_params.to_a.map { |a| a.join('=') }.join('&')
  signature = Shield::Crypto.new(params).signature

  "#{object.referer}?#{params}&signature=#{signature}"
end

def url_params
  {
    order_id: object.id,
    transaction_id: object.transaction_id,
    user_id: object.user_id,
    status: object.status,
    time: Time.now.utc.iso8601,
    reference: object.success? ? object.reference : ''
  }
end
When clicking the link from staging, I get redirected to the other app, which validates the signature. Everything works.
However, the same does not apply to the local server (my machine). When clicking the link, the signature contains spaces (%20):
signature=wP5EmeIGzXynwJc+BDV+jGVzyYhZOJuu7PzCXgnP2qbBfdqrAceEjxgh1EH2%20%20%20%20%20%20%20%20%20%20%20%20tvR66o3IA
Which of course makes the other app reject the request, as the signature is invalid. That's my issue. The exact same app. The exact same code base and version (i.e. the same commit SHA). Different behavior.
I don't know how to reproduce it. I was hoping some of you have already run into similar cases and could give me a hint.
Ideas?
NOTE: I'm using the exact same callback URL (a local PHP app) to test both the staging and the local server. I don't think the problem comes from the PHP app. Something related to Rails debug settings, maybe?

The problem came from Haml and its ugly mode. In development it is set to false by default, which causes the HTML output to be padded with whitespace and was somehow messing with the signature.
The related GitHub issues can be found here: https://github.com/haml/haml/issues/636 and here: https://github.com/haml/haml/issues/828
So to fix it, I created an initializer to enable it by default:
config/initializers/haml.rb
require 'haml/template'
Haml::Template.options[:ugly] = true
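Independently of the Haml setting, it may also be worth URL-encoding the signature before appending it to the query string, so that Base64 characters such as +, / and = (and any stray whitespace) survive the round trip. This is a defensive sketch of the decorator method shown above, not part of the original fix; it assumes the receiving app URL-decodes query parameters before verifying (PHP does this automatically for $_GET values):
require 'cgi'

def url_to_store
  params = url_params.to_a.map { |a| a.join('=') }.join('&')
  signature = Shield::Crypto.new(params).signature

  # Escape only the signature value; the signed params string itself stays untouched.
  "#{object.referer}?#{params}&signature=#{CGI.escape(signature)}"
end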

Related

Migrating U2F to WebAuthn gem in Ruby, where to get the parameters for AuthenticatorAttestationResponse

I have a couple of questions about the WebAuthn gem and the use of U2fMigrator.
I hope someone can point me in the right direction about it.
I am in the step just after converting my old U2F credentials using U2fMigrator.
migrated_credential = WebAuthn::U2fMigrator.new(
  app_id: my_domain,
  certificate: u2f_registration.certificate,
  key_handle: u2f_registration.key_handle,
  public_key: u2f_registration.binary_public_key,
  counter: u2f_registration.counter
)
The documentation says: "U2fMigrator class quacks like WebAuthn::AuthenticatorAttestationResponse" but without a verify implementation.
Does that mean I need to create an instance of this AuthenticatorAttestationResponse for authentication?
If so, where should I get this data from?
assertion_response = WebAuthn::AuthenticatorAssertionResponse.new(
  credential_id: '',
  authenticator_data: '',
  client_data_json: '',
  signature: ''
)
I am guessing that will allow me to authenticate the new migrated credentials like this:
assertion_response.verify(
  WebAuthn::Credential.options_for_get(:extensions => { appid: my_domain }).challenge,
  allowed_credentials: migrated_credential.credential,
  rp_id: my_domain
)
And also, I am guessing I don't need to re-register these credentials yet.
I am following this documentation:
https://github.com/cedarcode/webauthn-ruby/blob/master/docs/u2f_migration.md
https://github.com/castle/ruby-u2f
https://github.com/cedarcode/webauthn-ruby/blob/master/README.md#authentication
UPDATE 1
I've found this cool explanation in this guide
I will dig into it and I'll post the solution if I can find it.
UPDATE 2
I've spent the whole week trying to get the authenticatorAssertionResponse.
Unfortunately, I only get a message saying I don't have a key registered.
I'm passing through the extension and appid where the U2F credential was originally registered. I wonder if it stopped working now that the deprecation is complete.
U2fMigrator is instantiated with data that's already stored in your database. Instances of it respond to the same methods as AuthenticatorAttestationResponse, except it lacks a verify method, since the data was already verified in the past. In other words: the migrator behaves nearly the same as a freshly WebAuthn-registered authenticator and is meant to be used as such.
Does that mean I need to create an instance of this AuthenticatorAttestationResponse for authentication?
Yes. The assertion response (WebAuthn::AuthenticatorAssertionResponse, as in your snippet) is instantiated with browser data from the WebAuthn navigator.credentials.get call. This in itself is unrelated to the U2F migration question, except for where the data for its verify method comes from: either from a migrator instance (in the "real time conversion" approach) or from the database.
Hope that makes sense, PRs welcome to improve the docs!
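To make that concrete, here is a minimal sketch of the authentication step, assuming webauthn-ruby's 2.x high-level API (WebAuthn::Credential.from_get) and assuming the public key and counter produced by the migrator were persisted at migration time; credential_json, stored_credential and the session key are placeholder names, not part of the gem:
# credential_json is the parsed JSON produced by navigator.credentials.get on the
# client; how it reaches the controller (param name, format) is up to your app.
webauthn_credential = WebAuthn::Credential.from_get(credential_json)

webauthn_credential.verify(
  session[:authentication_challenge],         # the challenge issued via options_for_get
  public_key: stored_credential.public_key,   # e.g. persisted from migrated_credential.credential.public_key
  sign_count: stored_credential.sign_count    # e.g. persisted from the old U2F counter
)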

Verify Shopify webhook

I believe that to have a Shopify webhook integrate with a Rails app, the Rails app needs to disable the default verify_authenticity_token method, and implement its own authentication using the X_SHOPIFY_HMAC_SHA256 header. The Shopify docs say to just use request.body.read. So, I did that:
def create
  verify_webhook(request)

  # Send back a 200 OK response
  head :ok
end

def verify_webhook(request)
  header_hmac = request.headers["HTTP_X_SHOPIFY_HMAC_SHA256"]

  digest = OpenSSL::Digest.new("sha256")
  request.body.rewind
  calculated_hmac = Base64.encode64(OpenSSL::HMAC.digest(digest, SHARED_SECRET, request.body.read)).strip

  puts "header hmac: #{header_hmac}"
  puts "calculated hmac: #{calculated_hmac}"
  puts "Verified:#{ActiveSupport::SecurityUtils.secure_compare(calculated_hmac, header_hmac)}"
end
The Shopify webhook is directed to the correct URL and the route gives it to the controller method shown above. But when I send a test notification, the output is not right. The two HMACs are not equal, and so it is not verified. I am fairly sure that the problem is that Shopify is using the entire request as their seed for the authentication hash, not just the POST contents. So, I need the original, untouched HTTP request, unless I am mistaken.
This question seemed like the only promising thing on the Internet after at least an hour of searching. It was exactly what I was asking and it had an accepted answer with 30 upvotes. But his answer... is absurd. It spits out an unintelligible, garbled mess of all kinds of things. Am I missing something glaring?
Furthermore, this article seemed to suggest that what I am looking for is not possible. It seems that Rails is never given the unadulterated request, but it is split into disparate parts by Rack, before it ever gets to Rails. If so, I guess I could maybe attempt to reassemble it, but I would have to even get the order of the headers correct for a hash to work, so I can't imagine that would be possible.
I guess my main question is, am I totally screwed?
The problem was in my SHARED_SECRET. I assumed this was the API secret key, because a few days ago it was called the shared secret in the Shopify admin page. But now I see a tiny paragraph at the bottom of the notifications page that says,
All your webhooks will be signed with ---MY_REAL_SHARED_SECRET--- so you can verify their integrity.
This is the secret I need to use to verify the webhooks. Why there are two of them, I have no idea.
Have you tried doing it in the order they show in their guides? They have a working sample for Ruby.
def create
  request.body.rewind
  data = request.body.read
  header = request.headers["HTTP_X_SHOPIFY_HMAC_SHA256"]
  verified = verify_webhook(data, header)

  head :ok
end
They say in their guides:
Each Webhook request includes a X-Shopify-Hmac-SHA256 header which is generated using the app's shared secret, along with the data sent in the request.
the keywords being "generated using the shared secret AND the DATA sent in the request", so all of this should be available on your end, both the DATA and the shared secret.
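For completeness, a minimal sketch of the verify_webhook helper that the snippet above calls, assuming SHARED_SECRET holds the webhook signing secret shown on the notifications page (not the API secret key):
def verify_webhook(data, hmac_header)
  digest = OpenSSL::Digest.new("sha256")
  calculated_hmac = Base64.strict_encode64(OpenSSL::HMAC.digest(digest, SHARED_SECRET, data))

  # Constant-time comparison against the X-Shopify-Hmac-SHA256 header value.
  ActiveSupport::SecurityUtils.secure_compare(calculated_hmac, hmac_header)
end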

Rails refuses to forget code, leaving ssl and initialize method errors

I am noticing a pattern of Rails acting as if a line of code is still present after it has been deleted, and I think it may have something to do with changing its defaults too much. I have two examples.
In the first, I set config.force_ssl = true in my config file (a major mistake) and immediately got an error on a page where I was loading an API via a script tag:
My server gave me an error because the response length of the input wasn't known. I tried enabling streaming in my controller, and it failed. I even tried setting config.force_ssl = false, but this too was useless. So I deleted the config.force_ssl = true line, but in Firefox the page with the error continued to route to an "https://" URL and then give me the same error. Chromium did not, so I switched to using that, but to this day I still cannot load the page in Firefox without an error.
Now for the second issue. More recently, I created a model where I wanted to create a custom initialize method with four parameters.
association.rb
def initialize(tag_index, relative_index, type, relevance)
  # assigning variables
end
In my controller, I assigned these accordingly.
tags_controller.rb
a = Association.new(id, tag_two.id, type, relevance)
Immediately, I get an error saying I have the wrong number of arguments (4 for 2). Thinking it's just Rails being picky, I take away "type" and "relevance". Now, though, I get an error message telling me there is no method 'check_validity!' for 30:Fixnum. So I remove the initialize method altogether, and just as before, Rails refuses to recognize that the lines of code have been deleted, giving me errors when I pass arguments to Association.new, and telling me I'm missing arguments when I don't pass any at all.
If anyone can help with the little pieces, such as how to fix a response-length error with SSL or how to deal with the 'check_validity!' method, that would be great. Better, though, would be if someone could explain why Rails refuses to let old pieces of code be deleted. This is something that has frustrated me to no end, and I can't find anything on any of these forums about how to fix it.
Thanks so much!
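A note on the second example: ActiveRecord defines initialize(attributes = nil) itself, so a four-argument initialize will clash with it regardless of code reloading. A minimal sketch of the usual workaround, assuming Association is an ActiveRecord model with those four attributes (type is renamed to assoc_type here, because ActiveRecord reserves a type column for single-table inheritance):
class Association < ActiveRecord::Base
  # Keep ActiveRecord's own constructor and build records through a factory method,
  # e.g. a = Association.build_for(id, tag_two.id, assoc_type, relevance)
  def self.build_for(tag_index, relative_index, assoc_type, relevance)
    new(
      tag_index:      tag_index,
      relative_index: relative_index,
      assoc_type:     assoc_type,
      relevance:      relevance
    )
  end
end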

vcr does not know how to handle this request

Hi, I am trying to test Google auth with Cucumber, using VCR with a tag.
Everything goes fine until the token expires. I think when it expires this happens:
But I have a cassette file with this content:
http_interactions:
- request:
    method: post
    uri: https://accounts.google.com/o/oauth2/token
    body:
If I allow VCR to record new requests, the content of this cassette changes. I don't understand why, since the method and URI do not change (POST to https://accounts.google.com/o/oauth2/token).
I changed the tag to record new episodes and now the test is passing... I am clueless.
I ran the test again and now I am getting this when the POST to the token URL is made:
Completed 500 Internal Server Error in 449ms
Psych::BadAlias (Unknown alias: 70317249293120):
Maybe you have some parameters inside the POST which are different for every request? If so, you can tell VCR to ignore these parameters by adding match_requests_on: [:method, VCR.request_matchers.uri_without_params("your_param")] to your VCR configuration.
Analyse your request in depth and find out which parameters are changing. You can also tell VCR to match on other criteria; have a look here: https://www.relishapp.com/vcr/vcr/v/2-4-0/docs/request-matching
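For reference, a sketch of what that configuration could look like, assuming the volatile parameter is called your_param (a placeholder) and that the matcher is applied via the default cassette options:
VCR.configure do |c|
  c.default_cassette_options = {
    # Match recorded requests on the HTTP method and on the URI with the volatile parameter stripped.
    match_requests_on: [:method, VCR.request_matchers.uri_without_params("your_param")]
  }
end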
Ok, here's a solution...
The problem comes, as I said in the comment, from refreshing the token. When using OAuth you have a token, which may or may not have expired. If you run the test while the token is fresh, that request isn't made. But if the token has expired it has to be refreshed, and thus VCR throws an error.
To solve that, what I did is add the refresh-token URL to VCR's ignored requests:
VCR.configure do |c|
  c.cassette_library_dir = 'fixtures/vcr_cassettes'
  c.hook_into :webmock # or :fakeweb
  c.ignore_request { |request| request.uri == 'https://accounts.google.com/o/oauth2/token' }
end
It's not the best solution, since sometimes the token gets refreshed in the tests... but it's the best solution I could find...
I was getting the same issue with the same URL. For me, the problem was that my code was attempting to make the same call to https://accounts.google.com/o/oauth2/token more than once.
One of the potential solutions suggested in the VCR error message is exactly what's needed:
The cassette contains an HTTP interaction that matches this request, but it has already been played back. If you wish to allow a single HTTP interaction to be played back multiple times, set the :allow_playback_repeats cassette option
In my case, adding this option fixed the problem, as it tells VCR to revert back to its 1.x functionality of not re-recording duplicate requests, but simply playing back the result of a previously recorded duplicate request.
I am using Cucumber, so my solution was to add the following to my features/support/vcr.rb:
VCR.cucumber_tags do |t|
  t.tag '@vcr', use_scenario_name: true
  t.tag '@new_episodes', record: :new_episodes
  t.tag '@allow_playback_repeats', use_scenario_name: true, allow_playback_repeats: true, record: :new_episodes
end
Notice the @allow_playback_repeats tag. I simply tagged my scenario with this tag, and everything worked properly thereafter:
@allow_playback_repeats
Scenario: Uploading a video initiates an upload to YouTube
Note that it doesn't work if you specify both @vcr and @allow_playback_repeats.
If you're using RSpec, you'll need to adapt the solution accordingly, but, it should be as simple as:
it "does something", :vcr => { allow_playback_repeats: true } do
...
end
I met the same problem, and finally found that there is a parameter that changes every time.
So my solution is: copy and paste the mocked parameter and the real parameter side by side and compare them, and also make sure your next unit test run would generate a new parameter.

Serving files over HTTPS dynamically based on request.ssl? with Attachment_fu

I see there is a :use_ssl option in attachment_fu which checks the amazon_s3.yml file in order to serve files via https://
In the s3_backend.rb you have this method:
def self.protocol
  @protocol ||= s3_config[:use_ssl] ? 'https://' : 'http://'
end
But this then makes it serve ALL S3 attachments over SSL. I'd like to make it dynamic, depending on whether the current request was made with https://, i.e.:
if request.ssl?
  @protocol = "https://"
else
  @protocol = "http://"
end
How can I make it work this way? I've tried modifying the method and then get NameError: undefined local variable or method `request' for Technoweenie::AttachmentFu::Backends::S3Backend:Module.
The problem is that the method you're modifying (Technoweenie::AttachmentFu::Backends::AWS::S3.protocol) is static and does not have access to the file or request in question. The one you want to modify is Technoweenie::AttachmentFu::Backends::AWS::S3#s3_url(thumbnail). You'll have to add an options argument so your controller can pass in whether it wants SSL or not, since this model-level package has no understanding of controller-level issues like "current request" (nor should it).
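If you do go that route, a rough alternative to patching the backend is to swap the scheme at the call site based on the current request. A sketch, assuming the attachment exposes the standard s3_url helper; the attachment_url_for helper name is made up:
# In a controller or view helper.
def attachment_url_for(attachment)
  url = attachment.s3_url

  # Serve over HTTPS only when the current request itself came in over SSL.
  request.ssl? ? url.sub(%r{\Ahttp://}, "https://") : url
end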
The real answer, though, is "you probably don't want to do this." If the customer is saying something like "we have a freemium model wherein only our paying customers get SSL transfers of their photos," you should push back: "it's actually harder to cripple SSL file transfers, and it's likely to just introduce bugs down the road. Let's think of another freemium option to offer." If the customer doesn't really care, you might as well just turn SSL on for all uploads.
This is a significant issue that needs to be solved correctly, or the implications are quite nasty (particularly if you don't test in IE, the errors and warnings may slip past you). My solution is to put the following in ApplicationController:
around_filter :set_attachment_fu_protocol

def set_attachment_fu_protocol
  protocol = Technoweenie::AttachmentFu::Backends::S3Backend.instance_variable_get(:@protocol)
  Technoweenie::AttachmentFu::Backends::S3Backend.instance_variable_set(:@protocol, request.protocol)
  yield
ensure
  Technoweenie::AttachmentFu::Backends::S3Backend.instance_variable_set(:@protocol, protocol)
end
This solution was designed to have the following properties:
Doesn't require patching attachment_fu
Sets the protocol for S3 Backend per request
Resets the protocol even if an exception occurs
Preserves the default :use_ssl setting if you are running from the console
Doesn't require the around_filter to be universal since it always resets it to the original state after each request
