In production, my app opens an iframe of content hosted on S3. Because of the CORS issues that come up, I chose to mask my S3 host to appear as a subdomain of my app with a CNAME. So by going to:
host.mywebsite.com
It's actually going to my S3 bucket, which is also named host.mywebsite.com, but this bypasses all my CORS issues because my site now believes the content is local.
To test that this works, though, I want to set the same thing up locally. On my machine I use Pow, which lets me use subdomains and host my local server. Would it be possible to somehow alias a specific subdomain to my remote S3 address?
That way, going to:
https://host.mylocalVersionOfMyWebsite.dev/
...would actually access the remote S3 host, but as part of my local site. My local app, mylocalApp.dev, would then think the content comes from the same domain, avoiding any CORS issues.
Using just Rails, I'm limited to redirects, and some Rack middleware can issue 301s, but neither gives me a true alias. I've also considered using /etc/hosts, but haven't had any luck with it.
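What I really want, I think, is a proxy rather than a redirect. Here's a sketch of the idea using the rack-proxy gem; the class name and hostnames are illustrative, and I haven't gotten this working:

require "rack/proxy"

# Dev-only sketch: forward requests for the aliased subdomain to the
# remote bucket, and pass everything else through to the Rails app.
class S3AliasProxy < Rack::Proxy
  def perform_request(env)
    if env["HTTP_HOST"] == "host.mylocalversionofmywebsite.dev"
      # Send the bucket's hostname upstream while the browser still
      # sees the local subdomain.
      env["HTTP_HOST"] = "host.mywebsite.com.s3.amazonaws.com"
      super(env)
    else
      @app.call(env)
    end
  end
end

# In config/environments/development.rb:
# config.middleware.use S3AliasProxy, backend: "https://s3.amazonaws.com"

Is something like that a sane approach, or is there a more standard way?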
There is an FTP server that I can connect to on my development machine using FileZilla or the Rails app I'm working on. But as soon as I deploy the app to Heroku, the exact same connection parameters time out. My best guess is that the server blocks IP ranges that include Heroku, or dynamic IPs in general. It is not a configuration problem because the deployed app can connect to other FTP servers without issue.
To get around this problem, I'm trying to use a QuotaGuard static URL as a proxy; I've already provisioned the add-on and have an ENV variable for it. The problem is that this static URL is in the form http://username:password@subdomain.domain.com:9293.
How can I use this to handle an FTP connection?
Current code (works locally, times out on Heroku):
require "net/ftp"

Net::FTP.open(host, username, password) do |ftp|
  ftp.chdir(some_directory)
  # some logic here about which files to download
end
I've checked the Ruby docs for Net::FTP and Net::HTTP for more information. Net::FTP only seems able to use a SOCKS proxy, but Net::HTTP seems more flexible. Could I use the static URL as a SOCKS proxy by ignoring the http:// prefix? Could I restructure the logic so that I can GET each FTP URL I need via HTTP?
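To make that last idea concrete, here's a sketch of what I mean, assuming (a big assumption) that the remote host also exposes the same files over plain HTTP:

require "net/http"
require "uri"

# The QuotaGuard proxy URL, e.g. http://username:password@subdomain.domain.com:9293
proxy = URI.parse(ENV["QUOTAGUARDSTATIC_URL"])

# Hypothetical file URL; assumes an HTTP mirror of the FTP content exists.
target = URI.parse("http://ftp.example.com/some_directory/file.txt")

Net::HTTP.start(target.host, target.port,
                proxy.host, proxy.port, proxy.user, proxy.password) do |http|
  response = http.get(target.path)
  File.binwrite("file.txt", response.body)
end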
I've also looked into using ProxyChainRB to do this, but so far I'm not having any luck, since I run into the same issue of passing the proxy into an FTP connection.
Are there existing libraries that do this? Is there maybe a simpler solution I'm not seeing here?
I have a complex web app at example-app.com, hosted fully on AWS using ELB and Route 53 for DNS. It's a Rails app.
I'm running an experiment inside the Rails app, at example-app.com/test. I want to set up new-domain-app.com to point at example-app.com/test, with the URL cloaked so it always reads new-domain-app.com. It's a single-page site, so it shouldn't require any navigation.
I'm having a lot of trouble figuring out how to set up my DNS on Route 53 to accomplish this. Does anyone have good ideas on what this Route 53 configuration should look like?
AWS offers a very simple way to implement this -- with CloudFront. Forget about the fact that it's marketed as a CDN. It's also a reverse proxy that can prepend a fixed value onto the path, and send a different hostname to the back-end server than the one typed into the browser, which sounds like what you need.
Create a CloudFront web distribution.
Configure the new domain name as an alternate domain name for the distribution.
For the origin server, put your existing hostname.
For the origin path, put /test -- or whatever string you want prefixed onto the path sent by the browser.
Configure the cache behavior as needed -- enable forwarding of the query string or cookies if needed and any headers your app wants to see, but not Host.
Point your new domain name at CloudFront... But before you do that, note that your CloudFront distribution has a dxxxexample.cloudfront.net hostname. After the distribution finishes setting up (the "In Progress" status goes away, usually in 5 to 20 minutes) your site should be accessible at the cloudfront.net hostname.
How this works: when you type http://new-domain-app.com into the browser, CloudFront adds the origin path onto the path the browser sends, so GET / HTTP/1.1 becomes GET /test/ HTTP/1.1. This configuration simply prefixes every request's path with the string you specified as the origin path and sends it on to the server. The browser address bar does not change, because this is not a redirect. The Host header sent by the browser is replaced with the hostname of the origin server when the request is forwarded to the origin.
What you are trying to do is not possible. Route 53 is a DNS system, and you cannot configure a hostname (e.g. new-domain-app.com) to point to a URL (e.g. http://example-app.com/test) using DNS.
However, you are probably using the wrong tool for the job. If example-app.com/test is indeed a simple, static, single-page site, then you do not need to host it inside the Rails app. Instead, you can host it in an AWS S3 bucket and point new-domain-app.com at that bucket using Route 53.
See the following for details:
http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/RoutingToS3Bucket.html
DNS knows about domains, not URLs; it simply converts names to IP addresses.
You can't do what you are asking with just DNS and ELB. What you can do, however, is set up a separate vhost for new-domain-app.com that points at your example-app.com site and accomplishes what you want with a rewrite rule that only fires for new-domain-app.com.
I'm not sure that this qualifies as an SO question; it's more likely a Server Fault question. Specifics about your web server and OS platform would help in getting more specific advice.
So here are some details:
You already have example-app.com set up and working.
Create a CNAME entry pointing new-domain-app.com to example-app.com, or make an A record pointing at the same IP. If you already have example-app.com pointing at a different IP address, use a subdomain (test.example-app.com) to isolate it.
Set up a new vhost on your server for new-domain-app.com that basically duplicates the existing vhost; the only thing you need to change is the server name configuration.
Why does this work? Because HTTP/1.1 introduced the Host header, which browsers send along and web servers use in name-based vhosting to determine which virtual host should receive an incoming request. When the server sees that the client asked for "new-domain-app.com", it routes the request to the matching vhost.
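As a toy sketch of that mechanism (nothing you'd deploy; the hostnames are just the ones from the question), name-based vhosting is essentially dispatch on the Host header:

require "rack"

# Map each hostname to a handler, the way a web server maps ServerName
# entries to vhosts.
VHOSTS = {
  "example-app.com"    => ->(env) { [200, { "Content-Type" => "text/plain" }, ["main app"]] },
  "new-domain-app.com" => ->(env) { [200, { "Content-Type" => "text/plain" }, ["the /test experiment"]] }
}

app = lambda do |env|
  host = env["HTTP_HOST"].to_s.split(":").first # strip any port
  handler = VHOSTS[host]
  handler ? handler.call(env) : [404, { "Content-Type" => "text/plain" }, ["unknown host"]]
end

# run app   (from a config.ru)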
Rather than doing any fancy proxying, which certainly could get you a similar result, you can just add a rewrite rule that matches requests for the host new-domain-app.com and internally serves them from the /test path. In Apache that means mod_rewrite, which people often use by putting rules in the ubiquitous .htaccess file; the same can be done in nginx and other common web servers, with slightly different specifics for each.
I'm trying to set up my Heroku app to have a static IP using QuotaGuard (I know Proximo is the other option, but it's pretty expensive).
I added the Heroku QuotaGuard Static add-on and got the two IPs it generates, as well as the proxy URL.
What is my next step? (i.e., how do I tell my Rails app to use the proxy provided by QuotaGuard?)
I see they have Ruby code samples using rest-client and HTTParty, but where do I put that code? Somewhere like application.rb?
Most likely a bit too late to answer this question, but still.
Like you said, the first step to configuring QuotaGuard Static is provisioning the addon on Heroku (either via the Web Interface or the Heroku CLI). From there, you are able to get your two outbound IPs, and your proxy URL. The two IPs you were given should be whitelisted on whichever remote service you are trying to access.
As you mentioned, the documentation gives a couple of samples using RestClient for Ruby on Rails. This snippet can go pretty much anywhere you need to access a resource via the static IP addresses. Assuming you want to access a web service hosted on an Amazon EC2 instance with Elastic IP 1.2.3.4, you would write:
RestClient.proxy = ENV["QUOTAGUARDSTATIC_URL"]
res = RestClient.get("http://1.2.3.4/yourWebService")
From there, process the response stored in res appropriately. This code would go in, say, whichever controller method you'll be using to access the remote web service. In that case you also need to make RestClient available to your controller, so at the top of that file add require "rest-client". Don't forget to add the rest-client gem to your Gemfile.
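Putting the pieces together, a hypothetical controller action might look like this (the controller and endpoint names are made up for illustration):

require "rest-client"

class RemoteServiceController < ApplicationController
  def show
    # Route this request through the QuotaGuard static proxy so it
    # originates from one of the two whitelisted IPs.
    RestClient.proxy = ENV["QUOTAGUARDSTATIC_URL"]
    res = RestClient.get("http://1.2.3.4/yourWebService")
    render plain: res.body
  end
end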
Summing up, basically the snippets from the documentation go wherever it is you want to use the proxy to access a remote service requiring a fixed, whitelisted set of IP addresses.
Source: https://devcenter.heroku.com/articles/quotaguardstatic
I have a Rails app running on Heroku, using Cloudflare Pro with their Full SSL to encrypt traffic: User <-SSL-> Cloudflare <-SSL-> Heroku, as detailed in http://mikecoutermarsh.com/adding-ssl-to-heroku-with-cloudflare/.
I am also using the rack-ssl-enforcer gem to force all http requests to go through https.
This is working properly, except I have the following issues, by browser:
1) Firefox: I have to add a security exception on my first visit to the site, after getting the "This site is not trusted" warning. Once on the site, I also see the warning in the address bar.
2) Chrome: the page loads the first time, but the lock in the address bar has a warning triangle on it, which when clicked displays:
Your connection is encrypted with 128-bit encryption. However, this page includes other resources which are not secure. These resources can be viewed by others while in transit, and can be modified by an attacker to change the look of the page. The connection uses TLS 1.2. The connection is encrypted and authenticated using AES_128_GCM and uses ECDHE_RSA as the key exchange mechanism.
3) Safari: the page initially loads with the https badge, but the badge immediately drops off.
Is there a way to leverage Cloudflare SSL piggybacked on Heroku's native SSL without running into these security warnings? If not, I don't see much value in this configuration.
My apologies for slinging erroneous accusations against Cloudflare and Heroku :-)
Turns out the issue was not the fault of either, but instead that images on the app (being served from AWS S3) were being served up without https.
If anyone runs into this situation, lessons learned across a wasted day:
S3 only lets you serve content over https from your bucket's dedicated URL: s3.amazonaws.com/your-bucket-name/etc...
a) I tried setting the bucket up for static website hosting, so I could use the URL "your-bucket-name.your-url.s3-website-us-east-1.amazonaws.com/etc...", and then set up a CNAME in my DNS that sends "your-bucket-name.your-url" to that hostname, to pretty up the URLs.
b) This works, but AWS only lets you serve via https from the full URL (s3.amazonaws.com/your-bucket-name/etc...) or from *.s3-website-us-east-1.amazonaws.com, which doesn't work if you have a dot in your bucket name (your-bucket-name.your-url), and the dot was required for the CNAME redirect.
If you want to use AWS's CDN with https on your custom domain, the only option is CloudFront with a custom SSL certificate, which they charge $600/mo for, per region. No thanks!
In the end, I sucked it up and have ugly image URLs that look like https://s3-website-us-east-1.amazonaws.com/mybucketname..., and using Paperclip I specify https with :s3_protocol => :https in my model. Other than that, all is working properly now.
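For reference, here's a sketch of what that Paperclip model configuration looks like (the attachment name, bucket, and credential keys are illustrative, not my real values):

class Track < ActiveRecord::Base
  # Serve the attachment from S3 over https so the page has no
  # mixed-content warnings.
  has_attached_file :audio,
    storage: :s3,
    s3_protocol: :https,
    s3_credentials: {
      bucket: "mybucketname",
      access_key_id: ENV["AWS_ACCESS_KEY_ID"],
      secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"]
    }
end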
I have a RoR app hosted on an Amazon EC2 instance. The instance has a public IP, but no Elastic IP is assigned. The application is pointed at a domain managed through DreamHost.
We use Amazon S3 to store audio files uploaded through the web application, then load those files back into the site and play them in a player.
This is where I'm facing a weird issue: sometimes the files play fine, but sometimes I get an error saying
No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin http://XX.XXX.XX.XXX is therefore not allowed access.
Yet if I copy-paste the S3 URL into the browser, outside my application, it loads.
Why does the error show the IP address instead of saying that mydomain.com is not allowed access?
I'm guessing the issue comes from some domain/IP configuration.
An Elastic IP on Amazon is an IP address that is reserved for you. Without one, every time you stop and start your instance, it gets assigned a different IP.
You don't have to use an Elastic IP; you could, for example, point your domain at an ELB (Elastic Load Balancer) CNAME, which remains constant while it load-balances across one or more instances of your application.
I'm not sure this has anything to do with the error given, which is explained in this answer:
Site B uses Access-Control-Allow-Origin to tell the browser that the content of this page is accessible to certain domains. By default, site B's pages are not accessible to any other domain; using the ACAO header opens a door for cross-domain access by specific domains.
Site B should serve its pages with
Access-Control-Allow-Origin: http://sitea.com
It seems that the problematic page is being addressed by its explicit IP rather than by the domain; I have no idea why that should happen. Look at the source of the page from which the request fails and try to figure it out.
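If the fix turns out to be simply allowing that origin on the bucket, here is a hedged sketch using the aws-sdk-s3 gem to set the bucket's CORS rules (the bucket name and origins are illustrative; the IP origin mirrors the one in the error message):

require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-east-1")

# Allow GETs from both the domain and the bare-IP origin, so playback
# works however the page happens to be addressed.
s3.put_bucket_cors(
  bucket: "my-audio-bucket",
  cors_configuration: {
    cors_rules: [{
      allowed_methods: ["GET"],
      allowed_origins: ["http://mydomain.com", "http://XX.XXX.XX.XXX"],
      allowed_headers: ["*"]
    }]
  }
)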