AWS Cloudfront + Load Balancer, try to change domain from load balancer subdomain - ruby-on-rails

I face the same situation, except that my framework is Ruby on Rails 4.2.6 (Ruby version 2.2.4).
I have done exactly what the solution says, but when I try to log in, it always redirects to the root page (still not logged in).
And I checked the server log; the login status was 200 success.
Another clue is that when I go to a page which does not enable
before_action :authenticate_user!
everything works fine (the domain is not redirected to the ELB domain).
I think the problem is in the login part, but I still have not found the exact bug and solution.
How do I make EC2 receive the host we expect (example.com), not the ELB host (elb.example.com)?

Configure the CloudFront Cache Behavior settings to whitelist the Host header for forwarding. You may also need to whitelist one or more cookies, and possibly query strings. CloudFront forwards minimal headers by default, and no query parameters or cookies.
As a rule, the more things you forward, the lower your cache hit ratio... but obviously certain things must be forwarded unless the site is entirely static.
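As an added check (example code, not part of the answer above), the audit below walks a CloudFront distribution config and reports which cache behaviours would stop a Rails session from working: the Host header not forwarded, the session cookie not forwarded, or the query string dropped. It works on the symbol-keyed hash you get from `Aws::CloudFront::Client#get_distribution_config(id: ...).distribution_config.to_h`; the two sample configs are constructed here and no AWS call is made. The session cookie name (`_myapp_session`) is an assumption; use the key from your `session_store` config.

```ruby
# Added example: audit CloudFront "forwarded values" for a Rails app behind
# CloudFront + ELB. Input: a symbol-keyed distribution_config hash as returned by
# aws-sdk-cloudfront's get_distribution_config(...).distribution_config.to_h
# (the SDK is not called here; both samples below are constructed).

REQUIRED_HEADERS = %w[Host].freeze

# Collect the default behaviour plus every path-pattern behaviour as [label, behaviour].
def cache_behaviours(dist_config)
  pairs = [["default", dist_config[:default_cache_behavior]]]
  (dist_config.dig(:cache_behaviors, :items) || []).each do |b|
    pairs << [b[:path_pattern], b]
  end
  pairs
end

# Gaps for one behaviour. session_cookie is the name Rails stores the session
# under (config.session_store key), assumed here to be "_myapp_session".
def behaviour_gaps(label, behaviour, session_cookie:)
  fv = behaviour[:forwarded_values] || {}
  gaps = []

  headers = fv.dig(:headers, :items) || []
  REQUIRED_HEADERS.each do |h|
    unless headers.include?(h) || headers.include?("*")
      gaps << "#{label}: header #{h} is not forwarded (Rails will see the origin hostname)"
    end
  end

  cookies = fv[:cookies] || {}
  case cookies[:forward]
  when "all"
    nil # every cookie reaches the origin; nothing to flag
  when "whitelist"
    names = cookies.dig(:whitelisted_names, :items) || []
    unless names.include?(session_cookie)
      gaps << "#{label}: session cookie #{session_cookie} is not whitelisted (logins will not stick)"
    end
  else
    gaps << "#{label}: cookies are not forwarded at all (forward=#{cookies[:forward].inspect})"
  end

  gaps << "#{label}: query string is not forwarded" unless fv[:query_string]
  gaps
end

def forwarding_gaps(dist_config, session_cookie:)
  cache_behaviours(dist_config).flat_map do |label, b|
    behaviour_gaps(label, b, session_cookie: session_cookie)
  end
end

# Constructed sample 1: CloudFront's defaults (no headers, cookies or query string).
DEFAULT_ONLY = {
  default_cache_behavior: {
    forwarded_values: {
      query_string: false,
      cookies: { forward: "none" },
      headers: { quantity: 0, items: [] }
    }
  },
  cache_behaviors: { quantity: 0, items: [] }
}.freeze

# Constructed sample 2: what the answer recommends for the app, with a separate
# static-only behaviour for /assets/* that deliberately forwards nothing.
FIXED = {
  default_cache_behavior: {
    forwarded_values: {
      query_string: true,
      cookies: { forward: "whitelist", whitelisted_names: { quantity: 1, items: ["_myapp_session"] } },
      headers: { quantity: 1, items: ["Host"] }
    }
  },
  cache_behaviors: {
    quantity: 1,
    items: [{
      path_pattern: "/assets/*",
      forwarded_values: { query_string: false, cookies: { forward: "none" }, headers: { quantity: 0, items: [] } }
    }]
  }
}.freeze

if $PROGRAM_NAME == __FILE__
  puts forwarding_gaps(DEFAULT_ONLY, session_cookie: "_myapp_session")
  puts forwarding_gaps(FIXED, session_cookie: "_myapp_session")
end
```

The static `/assets/*` behaviour in the second sample is reported too; that is expected and is the trade-off the answer mentions: behaviours that forward less cache better, so only the behaviours that actually reach Rails need Host, the session cookie and the query string. On distributions created with cache policies instead of legacy forwarded values, the same settings live in the attached origin request policy and this check would need to read that instead.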

Related

Rails APIs and path based load balancer routing

We're breaking our monolithic Rails application into microservices. Our services are hosted on AWS and are behind ALBs. We cannot use host based routing as we are multi-tenant via subdomain, and it would be an SSL nightmare to maintain the required certs for each tenant/environment/service combination. So we are using path-based API routing with rules on the load balancer. A request looks like this:
Client -> www.example.com/api/:service_name/the_rest_of_the_path -> ALB -> route to rails service by name of :service_name
Because ALB cannot modify the path of a request before it sends it on to the service, when it reaches the Rails services the path is still /api/:service_name/the_rest_of_the_path. This means in order to route to the proper controllers/actions in this case, we'd need to actually create a rails scope on namespace of /api/:service_name. This would work in theory but it has two drawbacks.
Firstly it means local developers have to deal with ALB/client specific concerns -- the path used for external service/cluster routing for ALB.
The second is that it couples the application to that path. If the load balancer decided the path should be /:service_name/the_rest_of_the_path instead then it would mean changing the application code in conjunction with the load balancer rules to accommodate it. It's not optimal and I'd prefer to avoid it if at all possible.
I thought then perhaps we could introduce a webserver to the mix, in between the load balancer and the application layer. I worked on a proof of concept for this and had it stripping out /api/:service_name before it got to the service -- leaving the Rails app with just "the_rest_of_the_path" which is all it cares about. Great! Perfect! Or so I thought.
It works well enough to route initial requests. It falls flat, however, when any sort of redirects or links are generated that take the current path (as Rails sees it) into consideration.
In the event /api/:service_name is stripped off before it hits the service, any subsequent links or redirects made from the Rails server itself naturally do not include it in there any longer. You may be on www.example.com/api/:service_name/foo/bar but Rails only thinks you're at /foo/bar. When it tries to tack something on to the path for a redirect or link like /foo/bar/baz, it loses the thing that identifies what service to send it to so the route dies at the load balancer.
This has particularly been an issue with Omniauth/OAuth2 flows for us. Omniauth wants to live at /auth/:provider by default. If the request path is actually /api/:service_name/auth/:provider then it won't match and the OAuth flow won't initiate. Further, if there is a failure with the OAuth flow, Omniauth will hard redirect to www.example.com/auth/failure, which of course does not resolve as the LB does not know where to route the request to.
If we provide a path_prefix to Omniauth as /api/:service_name/auth then it won't match when testing locally at /auth and it won't initiate the flow there.
We won't have control over all of the gems we use and where they redirect to so my question is: Is there a proper way of hanging Rails API microservices off a path on a load balancer, and not have to pull teeth to preserve the necessary prefix in all routes and links and redirects? Something that is essentially a global base href that we can set there, but not set locally so that we can continue to develop at localhost:3000/path instead of remembering to use (and coupling with) an LB path like localhost:3000/api/:service_name/path ?
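As an added illustration (not part of the original thread, and not a statement about how the poster's setup was eventually solved): Rack already has a slot for exactly this "global base href", the SCRIPT_NAME env key. Rack routes on PATH_INFO, and Rails' route helpers prepend SCRIPT_NAME when they generate paths (that is what `map "/prefix"` in config.ru relies on). So instead of having the intermediate web server strip /api/:service_name, it can pass the full path through and tell the app what prefix it routed on; a middleware then moves that prefix from PATH_INFO into SCRIPT_NAME. Locally no header is sent, SCRIPT_NAME stays empty and the app runs at /path as before. The header name X-Forwarded-Prefix is an assumption (ALB does not add it; the proxy in between would have to), and the mini app below stands in for Rails so the block runs without the rack gem.

```ruby
# Added example: carry the load-balancer path prefix in Rack's SCRIPT_NAME
# instead of stripping it. Assumes the proxy in front of Rails forwards the
# full path and sets X-Forwarded-Prefix (a convention, not an ALB feature).
# MiniApp stands in for a Rails app; Rails' own path helpers prepend
# SCRIPT_NAME the same way path_for does here.

class ForwardedPrefix
  HEADER = "HTTP_X_FORWARDED_PREFIX"
  # One or more "/segment" pieces, no trailing slash, no dots. Rejects "/../x",
  # "//evil" and similar values a client could smuggle in if the proxy ever
  # fails to overwrite the header, since the prefix ends up in redirect URLs.
  SAFE_PREFIX = %r{\A(/[A-Za-z0-9_-]+)+\z}

  def initialize(app)
    @app = app
  end

  def call(env)
    prefix = env[HEADER].to_s
    path = env["PATH_INFO"].to_s
    if SAFE_PREFIX.match(prefix) && (path == prefix || path.start_with?("#{prefix}/"))
      env["SCRIPT_NAME"] = env["SCRIPT_NAME"].to_s + prefix
      rest = path[prefix.length..-1]
      env["PATH_INFO"] = rest.empty? ? "/" : rest
    end
    @app.call(env)
  end
end

# A Rails path helper in miniature: generated paths start with SCRIPT_NAME.
def path_for(env, path)
  "#{env['SCRIPT_NAME']}#{path}"
end

# Stand-in for the Rails app: routes on PATH_INFO only, and its redirect
# Location comes from the helper above, so it carries the prefix automatically.
class MiniApp
  def call(env)
    case env["PATH_INFO"]
    when "/auth/github"
      [302, { "Location" => path_for(env, "/auth/failure") }, []]
    else
      [404, {}, []]
    end
  end
end

def build_app
  ForwardedPrefix.new(MiniApp.new)
end

# Constructed Rack envs: one as seen behind the ALB + proxy, one from local dev.
def behind_alb_env
  { "PATH_INFO" => "/api/billing/auth/github", "SCRIPT_NAME" => "",
    "HTTP_X_FORWARDED_PREFIX" => "/api/billing" }
end

def local_env
  { "PATH_INFO" => "/auth/github", "SCRIPT_NAME" => "" }
end

if $PROGRAM_NAME == __FILE__
  app = build_app
  p app.call(behind_alb_env)[1] # {"Location"=>"/api/billing/auth/failure"}
  p app.call(local_env)[1]      # {"Location"=>"/auth/failure"}
end
```

The caveat a practitioner would check before relying on this: the middleware has to run before anything that inspects the path, and each third-party gem must be checked individually for whether it matches on PATH_INFO (prefix-independent) or on the full path, and whether it builds redirects through Rails' helpers or from a hard-coded string.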

Why buy ssl-endpoint heroku addon?

So I just joined a company that is changing DNS name. So, in order to redirect traffic from www.oldsite.com to newsite.com, I needed to do a redirection (301) on the DNS register. Of course, I also removed oldsite.com from the heroku app's settings. For the bare oldsite.com, I needed to create a second rails app called oldsite-redirect so when https://oldsite.com is requested, it's redirected to https://newsite.com via javascript (window.location...).
I forgot to mention that somehow, all the http://oldsite.com requests were redirected in the browser to https://oldsite.com.
So on this second app, I loaded the oldsite's certs (otherwise https:// was raising a "domain doesn't match" warning), made the arrangements in the DNS Config Panel, and now it's redirected without any problems. https://oldsite.com redirects to https://newsite.com and I didn't have to buy the SSL-Endpoint heroku addon. So, why do people buy it if you can serve https content without buying it?
Btw, I have been googling and yahooing and I haven't found such an answer.
Also do you have any opinions/suggestion on my monkeypatch redirection?
Thanks in advance for your clarification.

Route 53 - Special domain for a single page on existing server

I have a complex web app at example-app.com, hosting fully on AWS using ELB and Route 53 for DNS. It's a Rails app.
I'm running an experiment that I'm using in the rails app, at example-app.com/test. I want to set up new-domain-app.com to point at example-app.com/test, and have the URL cloaked to always be new-domain-app.com. It's a single page site, so it shouldn't require any navigation.
I'm having a lot of trouble figuring out how to set up my DNS on Route 53 to accomplish this. Does anyone have good ideas on what this Route 53 configuration should look like?
AWS offers a very simple way to implement this -- with CloudFront. Forget about the fact that it's marketed as a CDN. It's also a reverse proxy that can prepend a fixed value onto the path, and send a different hostname to the back-end server than the one typed into the browser, which sounds like what you need.
Create a CloudFront web distribution.
Configure the new domain name as an alternate domain name for the distribution.
For the origin server, put your existing hostname.
For the origin path, put /test -- or whatever string you want prefixed onto the path sent by the browser.
Configure the cache behavior as needed -- enable forwarding of the query string or cookies if needed and any headers your app wants to see, but not Host.
Point your new domain name at CloudFront... But before you do that, note that your CloudFront distribution has a dxxxexample.cloudfront.net hostname. After the distribution finishes setting up (the "In Progress" status goes away, usually in 5 to 20 minutes) your site should be accessible at the cloudfront.net hostname.
How this works: When you type http://example.com into the browser, CloudFront will add the origin path onto the path the browser sends, so GET / HTTP/1.1 becomes GET /test/ HTTP/1.1. This configuration just prefixes every request's path with the string you specified as the origin path, and sends it on to the server. The browser address bar does not change, because this is not a redirect. The host header sent by the browser is replaced with the hostname of the origin server when the request is sent to the origin.
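As an added illustration (example code, not part of the answer above, and not a client for the real service), the block below models the transformation the answer describes: the origin path is prefixed onto the browser's path, the Host header is replaced by the origin hostname unless it is whitelisted, and cookies and the query string are dropped unless forwarded. It then shows the consequence a practitioner should expect on the Rails side: any absolute URL Rails builds from the Host it received (a `redirect_to`, a canonical link) will name the origin, not the cloaked domain. The distribution settings and the sample request are constructed; the hostnames are the ones used in the question.

```ruby
# Added example: model of the request CloudFront sends to the origin for the
# "origin path + alternate domain name, Host not forwarded" setup described
# above. Constructed data only; nothing here talks to AWS.

# Constructed stand-in for the console fields named in the answer.
def cloaked_distribution
  { origin_domain: "example-app.com", origin_path: "/test",
    forwarded_headers: [], forward_query: false,
    cookie_forward: "none", cookie_names: [] }
end

# browser: { method:, path:, query:, headers: {}, cookies: {} } as sent by the client.
def origin_request(browser, dist)
  # CloudFront's rule for the origin path: empty, or starts with "/" and has no trailing "/".
  unless dist[:origin_path].empty? || dist[:origin_path] =~ %r{\A/.*[^/]\z}
    raise ArgumentError, "origin path must start with / and not end with /"
  end

  fwd = dist[:forwarded_headers].map(&:downcase)
  headers = browser[:headers].select { |k, _| fwd.include?("*") || fwd.include?(k.downcase) }
  # Host is special: replaced by the origin's own name unless explicitly whitelisted.
  host = fwd.include?("host") ? browser[:headers]["Host"] : dist[:origin_domain]

  cookies = case dist[:cookie_forward]
            when "all" then browser[:cookies]
            when "whitelist" then browser[:cookies].select { |k, _| dist[:cookie_names].include?(k) }
            else {}
            end

  { method: browser[:method],
    host: host,
    path: dist[:origin_path] + browser[:path],   # GET / becomes GET /test/
    query: dist[:forward_query] ? browser[:query] : nil,
    headers: headers,
    cookies: cookies }
end

# What Rails on the origin does for `redirect_to "/"`: an absolute URL built
# from the Host header it received (request.host). With Host not forwarded,
# that is the origin's name, so the cloak is gone after the first redirect.
def rails_redirect_location(origin_req, target_path, scheme: "http")
  "#{scheme}://#{origin_req[:host]}#{target_path}"
end

# Constructed browser request for the cloaked domain.
def sample_browser_request
  { method: "GET", path: "/", query: "utm_source=test",
    headers: { "Host" => "new-domain-app.com", "User-Agent" => "Mozilla/5.0 (constructed)" },
    cookies: { "_app_session" => "abc123" } }
end

if $PROGRAM_NAME == __FILE__
  req = origin_request(sample_browser_request, cloaked_distribution)
  p req                                   # host "example-app.com", path "/test/"
  puts rails_redirect_location(req, "/")  # http://example-app.com/
end
```

That is the trade-off between the two CloudFront answers on this page: not forwarding Host keeps the origin's ELB and virtual host happy but leaks the origin name into every absolute URL the app generates, while whitelisting Host (as the first answer recommends) keeps redirects on the cloaked domain but requires the origin to accept that hostname. For a single page with no navigation the first choice is fine; the moment the experiment page redirects, the user ends up at example-app.com.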
What you are trying to do is not possible. Route53 is a DNS system, and you cannot configure a hostname (e.g. new-domain-app.com) to point to a URL (e.g. http://example-app.com/test) using DNS.
However, you are probably using the wrong tool for the job. If example-app.com/test is indeed a simple, static, single page site, then you do not need to host it inside the Rails app. Instead, you can host it in an AWS S3 bucket, and then you can point new-domain-app.com to that bucket using Route53.
See the following for details:
http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/RoutingToS3Bucket.html
DNS knows about domains, not URLs. DNS simply converts names to IP addresses.
You can't do what you are asking for just using DNS and ELB; however, what you can do is have a separate VHOST for new-domain-app.com that points to your example-app.com site and accomplishes what you want using some sort of redirection rule that only fires for new-domain-app.com.
I'm not sure that this qualifies as an SO question, and more likely is a serverfault question. Specifics about your webserver and OS platform would be helpful in getting more specific advice.
So here's some details:
You already have example-app.com setup and working
You create a CNAME entry pointing new-domain-app.com to example-app.com or you can make an A record pointing to the same IP. If you already have example-app.com pointing to a different IP address, then use a subdomain (test.example-app.com) to isolate it.
Setup a new vhost on your server that basically duplicates the existing vhost for new-domain-app.com. The only thing you need to change is the server name configuration.
Why does this work? Because HTTP 1.1 included the HOST header that browsers send along, and web servers use in vhosting to determine which virtual host to route an incoming request to. When it sees that the client browser wanted "example-app.com" it routes the request to the appropriate vhost.
Rather than having to do some fancy proxying, which certainly can be used to get to a similar result, you can just add a redirection rule that looks for requests for the host example-app.com and redirects those to example-app.com. In apache that uses mod_rewrite, which people often utilize by putting rules in the ubiquitous .htaccess file, but it can also be done in nginx and other common web servers. The specifics are slightly different for each.

Cloudflare + Heroku SSL

I have a rails app that is running on heroku and am using Cloudflare Pro with their Full SSL to encrypt traffic between: User <-SSL-> Cloudflare <-SSL-> Heroku, as detailed in: http://mikecoutermarsh.com/adding-ssl-to-heroku-with-cloudflare/ .
I am also using the rack-ssl-enforcer gem to force all http requests to go through https.
This is working properly, except I have the following issues, by browser:
1) Firefox. I have to add a security exception the first visit to the site, getting the "This site is not trusted" warning. Once on the site, I also have the warning in the address bar:
2) Chrome: page loads first time, but the lock in the address bar has a warning triangle on it, when clicked displays:
Your connection is encrypted with 128-bit encryption. However, this
page includes other resources which are not secure. These resources
can be viewed by others while in transit, and can be modified by an
attacker to change the look of the page. The connection uses TLS 1.2.
The connection is encrypted and authenticated using AES_128_GCM and
uses ECDHE_RSA as the key exchange mechanism.
Safari: initially loads with https badge, but it immediately drops off
Is there a way to leverage Cloudflare SSL + piggyback of Heroku native SSL without running into these security warnings? If not, I don't see much value in the configuration.
My apologies for slinging erroneous accusations against Cloudflare and Heroku :-)
Turns out the issue was not the fault of either, but instead that images on the app (being served from AWS S3) were being served up without https.
If anyone runs into this situation, lessons learned across a wasted day:
S3 only lets you serve up content via https if you serve from your bucket's dedicated url: s3.amazonaws.com/your-bucket-name/etc..
a) I tried setting the bucket up for static website hosting, so I could use the url "your-bucket-name.your-url.s3-website-us-east-1.amazonaws.com/etc...", and then set up a CNAME within my DNS that sends "your-bucket-name.your-url" to "your-bucket-name.your-url.s3-website-us-east-1.amazonaws.com/etc...", to pretty up urls
b) this works, but AWS only lets you serve via https with your full url (s3.amazonaws.com/your-bucket-name/etc..) or *.s3-website-us-east-1.amazonaws.com/etc...", which doesn't work if you have a dot in your bucket name (your-bucket-name.your-url), which was required for me to do the CNAME redirect
If you want to use AWS CDN with https, on your custom domain, AWS' only option is CloudFront with a SSL certificate, which they charge $600/mo, per region. No thanks!
In the end, I sucked it up and have ugly image URLs that look like: https://s3-website-us-east-1.amazonaws.com/mybucketname...", and using paperclip, I specify https: with ":s3_protocol => :https," in my model. Other than that all is working properly now.

Rails/Passenger/Apache: Simple one-off URL redirect to catch stale DNS after server move

One of my rails apps (using passenger and apache) is changing server hosts. I've got the app running on both servers (the new one in testing) and the DNS TTL to 5 minutes. I've been told (and experienced something like this myself) by a colleague that sometimes DNS resolvers slightly ignore the TTL and may have the old IP cached for some time after I update DNS to the new server.
So, after I've thrown the switch on DNS, what I'd like to do is hack the old server to issue a forced redirect to the IP address of the new server for all visitors. Obviously I can do a number of redirects (301, 302) in either Apache or the app itself. I'd like to avoid the app method since I don't want to do a checkin and deploy of code just for this one instance so I was thinking a basic http url redirect would work. Buuttt, there are SEO implications should google visit the old site etc. etc.
How best to achieve the re-direct whilst maintaining search engine niceness?
I guess the question is - where would you redirect to? If you are redirecting to the domain name, the browser (or bot) would just get the same old IP address and end up in a redirect loop.
If you redirect to an IP address.. well, that's not going to look very user friendly in someone's browser.
Personally, I wouldn't do anything. There may be some short period where bots get errors trying to access your site, but it should all work itself out in a couple days without any "SEO damage"
One solution might be to use Mod_Proxy instead of a rewrite to proxy traffic to the new host. This way you shouldn't see any "SEO damage".
I used rinetd to redirect the IP traffic from the old server to the new one on IP level. No web server or virtual hosts config needed. Runs very smoothly and absolutely transparent to any client.
