PayPal Custom Payment Page with SSL image sources from S3 WITHOUT certificate problems? - ruby-on-rails

On the PayPal payment page customization screen you can supply your own URLs for the logo and header background. Obviously a plain HTTP (non-443) URL is a bad idea, but it turns out that a secure S3 link can fail as well.
Initially I used my own CNAME alias, cdn.mydomain.com, and after seeing an error message from Firefox I thought I needed to pass the full AWS S3 path to the bucket instead, which is: https://cdn.mydomain.com.s3.amazonaws.com/paypal/my-logo.png
Even after doing this, Firefox still shows the same certificate warning.
I figure there is a different way to refer to a bucket via a path rather than via the bucket subdomain, but I'm not readily finding the answer. Or is there an altogether different way to handle this so that there's no certificate issue?

Use a subdomain/bucketname that doesn't have a dot in it. The cert is only good for *.s3.amazonaws.com, not *.*.s3.amazonaws.com.
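To make the difference concrete, here is a quick Ruby sketch of the two addressing styles, using the asker's bucket and key (the example is mine, not part of the original answer):

# Sketch only: bucket name and key are taken from the question above.
bucket = "cdn.mydomain.com"
key    = "paypal/my-logo.png"

# Virtual-hosted style: the dots in the bucket name produce the host
# cdn.mydomain.com.s3.amazonaws.com, which *.s3.amazonaws.com does not cover,
# so browsers show a certificate warning.
virtual_hosted_url = "https://#{bucket}.s3.amazonaws.com/#{key}"

# Path-style: the host is exactly s3.amazonaws.com, so the wildcard
# certificate matches even though the bucket name contains dots.
path_style_url = "https://s3.amazonaws.com/#{bucket}/#{key}"

puts virtual_hosted_url
puts path_style_url

Either a dot-free bucket name (virtual-hosted style) or the path-style form avoids the certificate mismatch.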

Related

Can Amazon Echo OAuth take an authorization url that is a registered domain name?

I am trying to set up OAuth with the Amazon Echo. Unfortunately, I get the error
Error: A URL must be between 9 and 2000 characters.
when I try to set up OAuth/account linking on the Amazon Developer portal. It seems to be an issue with the authorization URL I am giving Amazon.
I substituted out the actual URL for privacy.
When I visit the URL (a web page served through a Flask application), everything works fine. My thought is that since the URL is not a registered domain name, maybe Amazon won't let me use it.
When I looked at Amazon's documentation, their example URL is similar in length to mine, which is why I think the error may be a bit misleading.
What could be causing this error and how can I go about fixing it? Thank you :)
I believe it is failing because your page is not served via HTTPS. The documentation states, among other requirements, 'It must be served over HTTPS.'

Customize S3 403 message

I have a website hosted on Heroku, and using Ruby on Rails with the paperclip gem.
I am trying to prevent hotlinking to all the files in my S3 bucket, so I have set everything to private and only allow users to access files via an expiring URL.
I want to provide a more user-friendly page when a user tries to reuse an expired URL. Currently it shows the message below:
<Error>
<Code>AccessDenied</Code>
<Message>Request has expired</Message>
<X-Amz-Expires>300</X-Amz-Expires>
<Expires>2016-04-15T19:41:33Z</Expires>
<ServerTime>2016-04-15T19:41:39Z</ServerTime>
<RequestId>D5DD935553A2CF88</RequestId>
<HostId>
55+rFtFbksDMyBWf5cWwgJ+aWvJKwe5umSXgTEWYKgfoT5QR5sbJY9fRNFIiBAqd35OR2MoiCzQ=
</HostId>
</Error>
Is there a way to customize the error page on S3?
S3 offers custom error pages through the website endpoints, but not the REST endpoints... and signed URLs only work on the REST endpoints, not the website endpoints.
So, no, there is not a way to directly solve this using only S3.
One option is to use CloudFront, which offers the ability to replace the standard error pages with a custom static page, but the error content is lost and all you have is that static page. You also have to use the CloudFront URL signing mechanism, which is different from S3's (though it has some advantages, such as wildcard support in a signed URL).
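If you do go the CloudFront route, a minimal sketch of its URL signing with the aws-sdk-cloudfront gem might look like the following; the key pair ID, private key path, and distribution domain are placeholders, not taken from the answer:

require "aws-sdk-cloudfront"

# Placeholder key pair and distribution; substitute your own values.
signer = Aws::CloudFront::UrlSigner.new(
  key_pair_id:      "APKAEXAMPLE",
  private_key_path: "/path/to/cloudfront_private_key.pem"
)

# Sign a URL that expires in five minutes, mirroring the 300-second
# expiry shown in the question's error XML.
signed_url = signer.signed_url(
  "https://d111111abcdef8.cloudfront.net/private/report.pdf",
  expires: Time.now + 300
)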
In this answer to a similar (but not duplicate) question, I demonstrated how I've used an XSL transform to "style" the S3 error XML: the XML returned to the browser is modified to inject a link to an XSL stylesheet, and the browser does the rest of the work... see the screenshots there.
I'm quite pleased with the solution, though it has what some people would consider a drawback: it requires that all of the S3 requests be served via a proxy server running HAProxy in EC2. There's a small additional cost for the EC2 instance, but no added cost for bandwidth, since transfer from S3 into EC2 is free and transfer from EC2 to the Internet is the same price as transfer from S3 to the Internet. With this setup, the S3 signed URLs still work. The additional advantages in my application are that it lets me use my own SSL certs with S3 static content (although that capability is also available through CloudFront), and that the proxy's access logs are available in real time.
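The answer above does the XML rewriting with HAProxy; purely to illustrate the stylesheet-injection idea, here is a rough Rack sketch (the stylesheet URL, path-style bucket addressing, and error handling are my assumptions, not the answer's actual setup):

require "rack"
require "net/http"

# Pass-through proxy sketch: forward the request to S3 and, on error
# responses, inject an xml-stylesheet processing instruction so the
# browser renders the S3 error XML through an XSL template.
class S3ErrorStyler
  S3_HOST    = "s3.amazonaws.com"                        # assumes path-style bucket access
  STYLESHEET = "https://assets.example.com/s3-error.xsl" # hypothetical XSL location

  def call(env)
    req   = Rack::Request.new(env)
    query = req.query_string.empty? ? nil : req.query_string
    uri   = URI::HTTPS.build(host: S3_HOST, path: req.path, query: query)
    resp  = Net::HTTP.get_response(uri)
    body  = resp.body.to_s

    if resp.code.to_i >= 400 && resp["Content-Type"].to_s.include?("xml")
      pi = %(<?xml-stylesheet type="text/xsl" href="#{STYLESHEET}"?>)
      # Keep the XML declaration (if any) first, then add the stylesheet reference.
      body = body.start_with?("<?xml") ? body.sub("?>", "?>\n#{pi}") : "#{pi}\n#{body}"
    end

    [resp.code.to_i, { "Content-Type" => resp["Content-Type"].to_s }, [body]]
  end
end

# In config.ru: run S3ErrorStyler.new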

Cloudflare + Heroku SSL

I have a Rails app running on Heroku and am using Cloudflare Pro with their Full SSL to encrypt traffic along the whole path, User <-SSL-> Cloudflare <-SSL-> Heroku, as detailed in http://mikecoutermarsh.com/adding-ssl-to-heroku-with-cloudflare/.
I am also using the rack-ssl-enforcer gem to force all http requests to go through https.
This is working properly, except I have the following issues, by browser:
1) Firefox: I have to add a security exception on the first visit to the site, after getting the "This site is not trusted" warning. Once on the site, I also see a warning in the address bar.
2) Chrome: the page loads the first time, but the lock in the address bar has a warning triangle on it, which when clicked displays:
Your connection is encrypted with 128-bit encryption. However, this
page includes other resources which are not secure. These resources
can be viewed by others while in transit, and can be modified by an
attacker to change the look of the page. The connection uses TLS 1.2.
The connection is encrypted and authenticated using AES_128_GCM and
uses ECDHE_RSA as the key exchange mechanism.
3) Safari: initially loads with the https badge, but it immediately drops off.
Is there a way to leverage Cloudflare SSL piggybacked on Heroku's native SSL without running into these security warnings? If not, I don't see much value in the configuration.
My apologies for slinging erroneous accusations against Cloudflare and Heroku :-)
Turns out the issue was not the fault of either, but instead that images on the app (being served from AWS S3) were being served up without https.
If anyone runs into this situation, lessons learned across a wasted day:
S3 only lets you serve content over https from your bucket's default URL: s3.amazonaws.com/your-bucket-name/etc..
a) I tried setting the bucket up for static website hosting so I could use the URL "your-bucket-name.your-url.s3-website-us-east-1.amazonaws.com/etc...", and then set up a CNAME in my DNS pointing "your-bucket-name.your-url" to "your-bucket-name.your-url.s3-website-us-east-1.amazonaws.com/etc...", to pretty up the URLs
b) this works, but AWS only lets you serve over https from the full URL (s3.amazonaws.com/your-bucket-name/etc..) or from "*.s3-website-us-east-1.amazonaws.com/etc...", which doesn't work if you have a dot in your bucket name (your-bucket-name.your-url), and a dotted name was required for my CNAME redirect
If you want to use AWS's CDN with https on your custom domain, the only option is CloudFront with a custom SSL certificate, which they charge $600/mo for, per region. No thanks!
In the end, I sucked it up and have ugly image URLs that look like "https://s3-website-us-east-1.amazonaws.com/mybucketname...", and with paperclip I force https by setting ":s3_protocol => :https" in my model. Other than that, everything is working properly now.
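For anyone tidying this up, a minimal paperclip sketch of the setup described above; the model name, bucket, and credential sources are placeholders:

# Sketch of the model configuration described in this answer; the model
# name, bucket, and credential sources are placeholders.
class Product < ActiveRecord::Base
  has_attached_file :image,
    :storage        => :s3,
    :s3_protocol    => :https,   # force https URLs so pages stay fully secure
    :s3_credentials => {
      :bucket            => "mybucketname",   # no dots, so *.s3.amazonaws.com still matches
      :access_key_id     => ENV["AWS_ACCESS_KEY_ID"],
      :secret_access_key => ENV["AWS_SECRET_ACCESS_KEY"]
    },
    :path => "/:class/:attachment/:id/:style/:filename"

  validates_attachment_content_type :image, :content_type => /\Aimage\/.*\z/
end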

Switching between http and https for images located on a sub-domain

My ASP.NET MVC3 site, www.mysite.com, pulls images from images.mysite.com. When I'm not logged into my site and using SSL, it works flawlessly. However, when logged in, I get the
Only secure content is displayed.
message in IE9. I understand that. What's the best way to deal with switching URLs for my images? Should I check whether I'm currently using SSL and point my images to https://images.mysite.com, otherwise http://images.mysite.com?
EDIT: This is an e-commerce site, so most of the time the site is browsed unsecured. But after login, I still need to pull some of those same images, and of course if the user browses back to a regular catalog page, it needs to access the images too. Perhaps I will just have to always use https://images.mysite.com. It just seemed like overkill.
I believe the problem only happens when you're on a secure page accessing content over http. So, for pages that can be viewed over either http or https, it might be as easy as always using https for the images, regardless of which one you're on.
You will always get that message if you are pulling content from a non-SSL site when viewing over SSL. If your site is mostly SSL protected, just always pull images from https://images.mysite.com, as you do not get the error when you pull SSL content into a non-SSL site.
Otherwise, you will need to know which pages are only viewable over SSL and which ones are not, and link appropriately.
Lastly, if your site is available over both, you will probably need to look at the HTTPS server variable to determine whether you are on SSL or not, and use that to determine your link (http or https).
Did you try prefixing with ~ instead of ../ or /?
This worked for me.

Rails implementation for securing S3 documents

I would like to protect my S3 documents behind my Rails app such that if I go to:
www.myapp.com/attachment/5, it should authenticate the user prior to displaying/downloading the document.
I have read similar questions on stackoverflow but I'm not sure I've seen any good conclusions.
From what I have read there are several things you can do to "protect" your S3 documents.
1) Obfuscate the URL. I have done this. I think this is a good thing to do so no one can guess the URL. For example, it would be easy to "walk" the URLs if your S3 URLs are obvious: https://s3.amazonaws.com/myapp.com/attachments/1/document.doc. Having a URL such as:
https://s3.amazonaws.com/myapp.com/7ca/6ab/c9d/db2/727/f14/document.doc seems much better.
This is great to do but doesn't resolve the issue of passing around URLs via email or websites.
2) Use an expiring URL as shown here: Rails 3, paperclip + S3 - Howto Store for an Instance and Protect Access
For me, however, this is not a great solution because the URL is exposed (even if only for a short period of time) and another user could conceivably reuse it within that window. You have to make the window long enough to allow the download without leaving too much time for copying. It just seems like the wrong solution.
3) Proxy the document download via the app. At first I tried to just use send_file: http://www.therailsway.com/2009/2/22/file-downloads-done-right but the problem is that send_file only works with static/local files on your server, not files served via another site (S3/AWS). I can, however, use send_data to load the document into my app and immediately serve it to the user. The problem with this solution is obvious: twice the bandwidth and twice the time (loading the document into my app and then sending it back out to the user). A rough sketch of this proxying approach is shown after this list.
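A rough sketch of option 3 (not the asker's code); the controller, association, and bucket names are hypothetical:

# Authenticate in the controller, pull the object from S3 with the
# aws-sdk-s3 gem, and stream it back with send_data. This doubles the
# bandwidth, as noted above, but the object itself is never exposed.
class AttachmentsController < ApplicationController
  before_action :authenticate_user!          # e.g. Devise

  def show
    attachment = current_user.attachments.find(params[:id])

    s3  = Aws::S3::Client.new(region: "us-east-1")
    obj = s3.get_object(bucket: "myapp-attachments", key: attachment.s3_key)

    send_data obj.body.read,
              :filename    => attachment.file_name,
              :type        => obj.content_type,
              :disposition => "attachment"
  end
end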
I'm looking for a solution that provides the full security of #3 but does not require the additional bandwidth and time for loading. It looks like Basecamp is "protecting" documents behind their app (via authentication) and I assume other sites are doing something similar but I don't think they are using my #3 solution.
Suggestions would be greatly appreciated.
UPDATE:
I went with a 4th solution:
4) Use Amazon bucket policies to control access to the files based on referrer:
http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?UsingBucketPolicies.html
UPDATE AGAIN:
Well, #4 can easily be worked around via a browser's developer tools, so I'm still in search of a solid solution.
You'd want to do two things:
Make the bucket and all objects inside it private. The naming convention doesn't actually matter, the simpler the better.
Generate signed URLs, and redirect to them from your application. This way, your app can check whether the user is authenticated and authorized, then generate a fresh signed URL and redirect to it with a 301 HTTP status code. The file never passes through your servers, so there's no load or bandwidth on your side. Here are the docs for presigning a GET object request:
https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Presigner.html
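A minimal sketch of that redirect flow with the aws-sdk-s3 gem, reusing the hypothetical controller from the earlier sketch but redirecting instead of streaming (bucket, key, and the authentication helper are placeholders):

require "aws-sdk-s3"

# Sketch of the redirect approach described above: authenticate, presign,
# then send the user straight to S3 so the file never passes through the app.
class AttachmentsController < ApplicationController
  before_action :authenticate_user!

  def show
    attachment = current_user.attachments.find(params[:id])

    url = Aws::S3::Presigner.new.presigned_url(
      :get_object,
      bucket:     "myapp-attachments",
      key:        attachment.s3_key,
      expires_in: 60                    # short-lived link
    )

    redirect_to url, status: :moved_permanently   # the 301 suggested above; a 302/303 also works
  end
end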
I would vote for number 3; it is the only truly secure approach, because once you hand the user an S3 URL it remains valid until its expiration time. A crafty user could exploit that hole; the only question is, would that affect your application?
Perhaps you could set the expiry time lower, which would minimise the risk?
Take a look at an excerpt from this post:
Accessing private objects from a browser
All private objects are accessible via an authenticated GET request to the S3 servers. You can generate an authenticated URL for an object like this:
S3Object.url_for('beluga_baby.jpg', 'marcel_molina')
By default, authenticated URLs expire 5 minutes after they were generated. Expiration options can be specified either as an absolute time since the epoch with the :expires option, or as a number of seconds relative to now with the :expires_in option:
I have been trying to do something similar for quite some time now. If you don't want to use the bandwidth twice, then the only way to do it is to let S3 serve the file. I am totally with you about the exposed URL, though. Were you able to come up with any alternative?
I found something that might be useful in this regard: http://docs.aws.amazon.com/AmazonS3/latest/dev/AuthUsingTempFederationTokenRuby.html
Once a user logs in, an AWS session can be created with their IP as part of the policy, and that session can then be used to generate the signed URLs. If somebody else grabs the URL, the request will fail because it comes from a different IP. Let me know if this makes sense and is secure enough.
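For what it's worth, a rough sketch of that idea with the aws-sdk gems; the bucket name, durations, and the IP-pinning policy itself are just the commenter's suggestion spelled out, not a vetted design:

require "aws-sdk-core"
require "aws-sdk-s3"
require "json"

# Issue temporary federation credentials whose policy only allows GetObject
# from the signed-in user's IP, then presign the download with them.
def presigned_url_for(key, client_ip_cidr)            # e.g. "203.0.113.4/32"
  policy = {
    "Version"   => "2012-10-17",
    "Statement" => [{
      "Effect"    => "Allow",
      "Action"    => "s3:GetObject",
      "Resource"  => "arn:aws:s3:::myapp-attachments/#{key}",
      "Condition" => { "IpAddress" => { "aws:SourceIp" => client_ip_cidr } }
    }]
  }.to_json

  creds = Aws::STS::Client.new(region: "us-east-1")
              .get_federation_token(name: "doc-download",
                                    policy: policy,
                                    duration_seconds: 900)
              .credentials

  s3 = Aws::S3::Client.new(
    region:            "us-east-1",
    access_key_id:     creds.access_key_id,
    secret_access_key: creds.secret_access_key,
    session_token:     creds.session_token
  )

  Aws::S3::Presigner.new(client: s3)
                    .presigned_url(:get_object,
                                   bucket:     "myapp-attachments",
                                   key:        key,
                                   expires_in: 300)
end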
