As far as I understand, a custom origin server with CloudFront only works if CloudFront is able to access files from my website URL:
e.g. www.domain.com/hello.html
However, my website has a login requirement in order to view hello.html. How can I have the login mechanism and still cache my real hello.html page in CloudFront using a custom origin server?
I am using Ruby on Rails btw, but this is applicable to other stacks as well.
I'm pretty sure this is not possible. As you said, CloudFront has to be able to access the file to serve and cache it. I've never seen an option to tell CloudFront to use a password to access the file.
An idea: maybe you can check in your Rails app, before you require the user to enter a password, whether the request comes from CloudFront (I'm sure there are some headers indicating that) and, if so, bypass the login requirement?
Edit:
It says in the docs:
Do not configure your origin server to request client authentication.
One thing I'm pretty sure is set, though, is the User-Agent. Check for user_agent =~ /cloudfront/i and bypass authentication?
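For illustration, here is a minimal Rails sketch of that idea. The controller and helper names are hypothetical, and keep in mind that the User-Agent header is trivially spoofable, so this effectively makes the cached pages public to anyone who fakes it:

class PagesController < ApplicationController
  # Skip the login requirement when the request appears to come from CloudFront.
  before_action :authenticate, unless: :cloudfront_request?

  private

  # CloudFront fetches from the origin with a User-Agent of "Amazon CloudFront".
  def cloudfront_request?
    request.user_agent =~ /cloudfront/i
  end

  def authenticate
    redirect_to login_path unless session[:user_id]
  end
end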
Related
I am implementing a cloudfront solution and would like to test / run it on my staging server, however staging is "protected" from the outside world by basic_auth.
I have tried entering a URL with the basic_auth username / password in it, e.g. user:pass@example-staging.com, but CloudFront rejects this URL.
How can I allow CloudFront / an origin to access my staging server?
(I am hosting on Heroku, using Rails 4)
Because of the way web content caching works, most HTTP request headers are not forwarded from CloudFront to the origin server by default, including the Authorization header needed for basic auth.
You'll need to whitelist the Authorization header in the appropriate cache behavior(s).
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesWhitelistHeaders
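If you would rather script the change than click through the console, a sketch along these lines with the aws-sdk-cloudfront gem should work, assuming the distribution uses the classic forwarded_values-style cache behavior (the distribution ID is a placeholder):

require "aws-sdk-cloudfront" # gem "aws-sdk-cloudfront"

client = Aws::CloudFront::Client.new(region: "us-east-1")

# Fetch the current config together with its ETag, which must be
# passed back as if_match when updating.
resp = client.get_distribution_config(id: "EXXXXXXXXXXXXX")
config = resp.distribution_config

# Whitelist the Authorization header on the default cache behavior.
config.default_cache_behavior.forwarded_values.headers = {
  quantity: 1,
  items: ["Authorization"],
}

client.update_distribution(
  id: "EXXXXXXXXXXXXX",
  if_match: resp.etag,
  distribution_config: config,
)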
I have a Rails app with carrierwave uploaders configured to use carrierwave-aws on an S3 bucket.
The permissions for that bucket WERE bad, but hopefully I fixed them, and now uploads seem to work fine (I checked the permissions of a single file; public read is checked).
The Rails app is using CloudFront, which has been configured to handle "normal" assets (css, js, etc.) and to work with carrierwave-aws.
However I am still getting 401 errors, and worse, when this happens an HTTP Basic auth popup appears on screen, asking for a password for my distribution:
"NetworkError: 401 Unauthorized - https://xxxxxxx.cloudfront.net/uploads/user/avatar/xxxxxx/thumb_avatar.jpg"
The above error triggers an HTTP Basic auth window asking for the user/pw for xxx.cloudfront.net.
In case it is related: it turns out I do indeed have this kind of auth on my Rails website itself (until we move to production).
On CloudFront, I have configured two origins: my Rails server (css/js are OK, so I guess this one is fine) and the S3 bucket (I don't know how I can really test this one, though).
So:
How can I check that my Rails -> carrierwave-aws -> CloudFront pipeline is working fine? (Uploads are fine, I just can't read from the browser after an upload.)
How can I disable HTTP Basic auth on the website when a 401 error appears?
EDIT: I set up Basic Auth in my Rails ApplicationController:

def authenticate
  # Only enforce Basic Auth when the env flag is set (e.g. on staging).
  if ENV["HTTP_BASIC_AUTH"] == "true"
    authenticate_or_request_with_http_basic do |username, password|
      username == "wxx" && password == "xxx!"
    end
  end
end
A 401 HTTP response is, of course, supposed to trigger a browser pop-up prompt. If you don't want that, the solution is to not require auth in your application.
But it seems like the solution that would be most helpful to you at this point would be to go ahead and enable pass-through of the browser's attempt to send credentials back to the origin server. To do this, CloudFront needs to forward the Authorization: header to your origin. By default, this request header (like most request headers) is discarded by CloudFront and not sent to the origin.
Whitelist this header in the appropriate cache behavior so that CloudFront will forward it and your access control mechanism should work as expected.
Remember that changes to CloudFront distributions take a few minutes. Wait for the distribution to return to the deployed status before testing.
I am preparing to work on a project where I need to display a dashboard from an online application. Unfortunately, the use of an API is currently not possible. The dashboard can be embedded in an iFrame. However, when it is displayed it will prompt the user viewing the dashboard to login to an account.
I have one paid account for this service. Are there any Rails gems to log in to the service before the iFrame is processed?
Or would a proxy within my Rails app be a better route to go?
Any pointers are appreciated!
Neither a Rails gem nor a proxy within your Rails app will work; they both have the same limitation.
They are both running on the back-end, server side.
The authentication you need is client side.
Unless you mean proxying the ENTIRE thing: the auth request and all subsequent requests and user interactions with this dashboard. That could work, but see below.
The way authentication works (pretty much universally) is: once you log in to any system, it stores a cookie on your browser and then the browser sends that cookie for every subsequent request.
If you authenticate on the backend, that cookie will be sent to your Rails code and will die there; the user's browser will never know about it.
Also, it is not possible to do the auth server side, capture the cookie, and then have the user browse the site with their browser directly, for two reasons:
Sometimes auth cookies use information about the browser or HTTP client to encrypt the cookie, so sending the same cookie from a different client won't work.
You cannot tell a browser to send a cookie to a domain different from your own.
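To make that limitation concrete, here is roughly what a server-side login looks like (the URL and form field names are made up):

require "net/http"
require "uri"

# Hypothetical login endpoint and form fields.
uri = URI("https://dashboard.example.com/login")
res = Net::HTTP.post_form(uri, "username" => "me@example.com", "password" => "secret")

# The session cookie lands here, in server-side code. The end user's
# browser never sees it, which is exactly the dead end described above.
session_cookie = res["Set-Cookie"]
puts session_cookie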
So your options are, off the top of my head right now:
If there is a login page that accepts form submissions from other domains, you could try to simulate a form submission directly to that site's "after login" page (the page the user gets directed to once they fill in the login form). But any modern web framework has XSRF protection (Cross-Site Request Forgery protection) and will disallow this approach for security reasons.
See if the auth this site uses has any kind of OAuth, Single Sign-On (SSO) or similar type of authentication integration that you can use. (Similar to an API, so you may have already explored this option.)
Proxy all requests to this site through your server (a minimal sketch follows this list). You will have to rewrite the entire HTML so that all images, CSS, stylesheets, and other assets are also routed through the proxy, or else rewrite the URLs in the HTML so they are no longer relative. You might hit various walls if the site wasn't designed for this use case: the site using relative URLs for assets that you aren't proxying, the site referencing non-relative URLs and causing cross-domain errors, etc. Note it's really hard to rewrite every single last asset reference; it's not only the HTML you need to worry about, JavaScript can have URLs in it too, and CSS can as well.
You could write a bookmarklet or a browser extension that logs the user into the site.
Have everyone install LastPass.
Have everyone install the TamperMonkey browser extension (or others like it for other browsers), and write a small user script to run custom JavaScript automatically to log the user in on that site.
Scrape that site for the info you need and serve it on your own site.
OK I'm out of ideas. :)
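For what it's worth, here is a bare-bones sketch of the full-proxy option from the list above. The upstream URL, the route, and the cookie handling are all assumptions, and it does not begin to cover the asset-rewriting caveats already mentioned:

require "net/http"

# Assumed route: get "/dashboard_proxy/*path" => "dashboard_proxy#show"
class DashboardProxyController < ApplicationController
  UPSTREAM = URI("https://dashboard.example.com") # hypothetical upstream

  def show
    uri = UPSTREAM.dup
    uri.path = "/#{params[:path]}"

    req = Net::HTTP::Get.new(uri)
    # A session cookie captured during a server-side login, stored per user.
    req["Cookie"] = session[:dashboard_cookie].to_s

    res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }

    # Naive rewrite so absolute links route back through this proxy;
    # real pages need far more (CSS, JavaScript, relative URLs, etc.).
    body = res.body.to_s.gsub("https://dashboard.example.com", "/dashboard_proxy")
    render html: body.html_safe, status: res.code.to_i
  end
end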
I'm getting ready to have an SSL cert installed on my hosting.
It is my understanding that (and correct me if I'm wrong...):
Once the hosting guys install the cert, I will be able to browse my site over HTTP or HTTPS (nothing will stop me from continuing to use HTTP)?
The only thing I need to do is add logic (in the case of MVC, controller attributes/filters) to force certain pages, of my choosing, to redirect to HTTPS (for instance, adding a [RequiresHttps] attribute sparingly).
Do I have to worry about doing anything extra with these things to make sure I'm using SSL properly? I'm not sure if I need to change something with logic having to do with:
Cookies
PayPal Express integration
Also, I plan on adding [RequiresHttps] only on the shopping cart, checkout, login, account, and administration pages. I wish to leave my product browsing/shopping pages on HTTP, since I heard there is more overhead when using HTTPS. Is this normal/acceptable/OK?
One more question... I know ASP.NET stores some login information in the form of an auth cookie. Is it okay that a user logs in on an HTTPS page, but can then go back and browse on an HTTP page? I'm wondering if that creates a security weakness, since the user is logged in and browsing over HTTP again. Does that ruin the point of using SSL?
I'm kind of a newb at this... so help would be appreciated.
Starting with your questions: (1) Yes, nothing will stop you from serving the same pages over either HTTP or HTTPS.
And (2) yes, you need to add your own logic for which pages will be shown only as HTTPS and which as HTTP. If someone is wondering why not show everything as HTTPS: the reason is speed. HTTPS responses carry extra overhead and the encoding/decoding takes a little more time, so if you do not need HTTPS, just switch to HTTP.
Switching Between HTTP and HTTPS Automatically is a very good piece of code to use for implementing the switching logic quickly and easily.
Cookies
When the cookie has to do with the user's credentials, you need to force it to be transmitted only on secure pages. What this means is that if you set a secure cookie over HTTPS, this cookie is NOT transmitted on non-secure pages, so it stays secure and a man in the middle cannot steal it. The catch is that this cookie cannot be read on HTTP pages, so you can only know whether the user is A or B on a secure page.
Cart - Products
Yes, this is normal: leaving the products and the cart on an unsecured connection is fine because the information is not that sensitive. You start using HTTPS pages when you get to real user data, like name, email, address, etc.
Auth cookie
If you set it as secure-only, then this cookie does not show up / get read / exist on unsecured pages. It is a security issue if you do not make it secure-only.
Response.Cookies[s].Secure = true;
A few more words
What we do with secure and non-secure pages is actually split the user data into two parts: one that is secure and one that is not. So we actually use two cookies, one secure and one not secure.
The non-secure cookie is, for example, the one that ties together the products in the cart, or maybe the user's history (what products they have viewed). We do not actually care much if someone gets it, because even a proxy can see the user's history, or what the user views, from the URLs.
The secure cookie is the authentication one, which keeps critical information about the user. So the non-secure cookie is with the user everywhere on the site, while the secure one is sent only on checkout, logged-in pages, etc.
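Since other answers on this page are Rails-based, here is the same two-cookie split sketched in Rails terms (the cookie names and values are illustrative only):

class SessionsController < ApplicationController
  def create
    # ... verify credentials here ...

    # Non-sensitive cart/history cookie: sent on both HTTP and HTTPS pages.
    cookies[:cart_token] = { value: SecureRandom.hex(16) }

    # Auth cookie: secure, so the browser only ever sends it over HTTPS,
    # and httponly, so page JavaScript cannot read it.
    cookies[:auth_token] = {
      value: SecureRandom.hex(32),
      secure: true,
      httponly: true,
    }

    redirect_to account_path
  end
end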
Related
MSDN, How To: Protect Forms Authentication in ASP.NET 2.0
Setting up SSL page only on login page
Can some hacker steal the cookie from a user and login with that name on a web site?
1) Yes, you are right.
2) Yes. You can optionally handle the HTTP 403.4 status code (SSL required) more gracefully by automatically redirecting the client to the HTTPS version of the page.
As for authentication cookies, I've found this MSDN article for you. Basically, you can set up your website (and the client's browser) to only transmit the authentication cookie via HTTPS. This way it won't be subject to network snooping over an unencrypted channel.
Of course, this is only possible if all of your [Authorize] actions are HTTPS-only.
I have a website (www.mydomain.com) that is secured with an SSL certificate. It is an ASP.NET website and I have forced certain pages via code to be required to use the https:// prefix. If they don't it will redirect them to the https:// equivalent. Is this a good practice? Is there an easier way to do this? Not every single page requires SSL.
Also, when the users use my URL in the form of mydomain.com instead of www.mydomain.com they get a certificate error because the certificate was registered for www.mydomain.com. Should I use the same approach as I am with the http:// and https:// issue I mentioned above? Or is there a better way of handling this?
Your approach sounds fine. In my current project, I force HTTPS when a user goes to my login page (based on a config flag, which lets me test locally without needing a cert). This allows me to access other pages unsecured, which is handy.
I have a couple of places where our server grabs the output of other pages (rendering HTML to PDF and fetching dynamic images, for example). Because of our environment, our server can't resolve its public name, so if we were to force SSL site-wide, we'd have to add our internal IP address (or fake the domain name).
As for your second question, you have two options to handle www.example.com vs example.com. You can buy a certificate that allows you to have multiple domain names. These are known as UCC certificates.
Your second option is to redirect example.com to www.example.com, or the other way around. Redirecting is a great option if you want your content to be indexed by Google or other search engines: without it, they will see www.example.com and example.com as two separate sites, which means links to your site will be split between the two, reducing your overall page rank.
You can configure sites in IIS to require a cert, but that would A) generate an error if someone isn't visiting with HTTPS and B) require all pages to use HTTPS. So that won't work. You could put a filter on IIS that checks all requests and redirects them as HTTPS calls if they are on your encryption list. The obvious drawback here is the need to update your list of pages every time a new page is added (e.g. from an XML file or database) and restart the filter.
I think that you are probably correct in building code into the pages that require https that redirects to an https version if they arrive via http. As far as your cert error goes, you could redirect with a full path (that includes the www) instead of a relative path to fix this problem. If you have any questions about how to detect whether the call uses https OR how to get the full path of the current request please let me know. Both are pretty straightforward but I've got sample code if you need it.
UPDATE - Josh, the certs that handle multiple subdomains are called wildcard certs. The problem is that they are quite a bit more expensive than standard certs.
UPDATE 2: One other thing to consider is to use a Master page or derived class for the pages that need SSL. That way, instead of duplicating the code in each page you can just declare it as type SSLPage (or use the corresponding Master page) and have the Master/Parent class handle the redirect. Again, you'll need to do some URL processing if you take this approach but it is pretty trivial.
The following is something that can help you:
If it is fine to display all your website pages over https://, then you can simply update your code to use https:// and set up two bindings in IIS, one for HTTP and another for HTTPS. That way, your website is accessible through either protocol.
Your visitors are receiving a name-mismatch error because the common name used in your SSL certificate is www.mydomain.com. Namecheap provides RapidSSL certificates through which you can secure both names under a single SSL certificate. You can purchase this SSL certificate for www.mydomain.com and it will automatically secure mydomain.com (i.e. without www).
Another option is to write code that redirects your visitors to www.mydomain.com even if they browse to mydomain.com.
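As a sketch of what that redirect code can look like, here is a small Rack middleware (in Ruby, to match the earlier answers on this page; the same logic ports directly to an ASP.NET HttpModule or an IIS rewrite rule):

require "rack"

# Redirects any request whose host is not the canonical one,
# e.g. mydomain.com -> www.mydomain.com, preserving path and query string.
class CanonicalHost
  def initialize(app, canonical_host)
    @app = app
    @canonical_host = canonical_host
  end

  def call(env)
    request = Rack::Request.new(env)
    return @app.call(env) if request.host == @canonical_host

    location = "#{request.scheme}://#{@canonical_host}#{request.fullpath}"
    [301, { "Location" => location }, []]
  end
end

# In config.ru or a Rails initializer:
# use CanonicalHost, "www.mydomain.com"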