How to use FTP via a proxy in Rails? - ruby-on-rails

There is an FTP server that I can connect to on my development machine using FileZilla or the Rails app I'm working on. But as soon as I deploy the app to Heroku, the exact same connection parameters time out. My best guess is that the server blocks IP ranges that include Heroku, or dynamic IPs in general. It is not a configuration problem because the deployed app can connect to other FTP servers without issue.
To get around this problem, I'm trying to use a QuotaGuard Static URL as a proxy; I've already provisioned the add-on and have an ENV variable for it. The problem is that this static URL is in the form http://username:password@subdomain.domain.com:9293.
How can I use this to handle an FTP connection?
Current code (works locally, times out on Heroku):
require 'net/ftp'

Net::FTP.open(host, username, password) do |ftp|
  ftp.chdir(some_directory)
  # some logic here about which files to download
end
I've checked the Ruby docs for Net::FTP and Net::HTTP for more information. FTP only seems able to use a SOCKS proxy, but HTTP seems more flexible. Could I use the static URL as a SOCKS proxy by ignoring the http:// prefix? Could I restructure the logic so that I can GET each FTP URL I need via HTTP?
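For reference, the SOCKS route I'm imagining would look roughly like the sketch below. It assumes QuotaGuard actually exposes a SOCKS5 endpoint at that host and port (something I'd still need to verify), that the socksify gem is in the Gemfile, and that Net::FTP's sockets go through the patched TCPSocket, which depends on the Ruby version:

require 'uri'
require 'socksify'   # assumption: gem 'socksify' in the Gemfile
require 'net/ftp'

proxy = URI.parse(ENV['QUOTAGUARDSTATIC_URL'])
TCPSocket::socks_username = proxy.user       # SOCKS5 auth, if the proxy requires it
TCPSocket::socks_password = proxy.password

Socksify::proxy(proxy.host, proxy.port) do
  Net::FTP.open(host, username, password) do |ftp|
    ftp.chdir(some_directory)
    # same download logic as before
  end
end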
I've also looked into using ProxyChainRB to do this, but so far I haven't had any luck, since I run into the same issue of passing the proxy into an FTP connection.
Are there existing libraries that do this? Is there maybe a simpler solution I'm not seeing here?

Related

Can I alias a localhost subdomain to a remote S3 Host?

On production, my app opens an iFrame of content hosted on my S3. Because of the CORS issues that come up, I chose to mask my S3 host to appear as a subdomain of my app with a CNAME. So by going to:
host.mywebsite.com
It's actually going to my S3 bucket, also called host.mywebsite.com, but it bypasses all my CORS issues because my site now believes it is local.
In order to test that it works, though, I want to also set this up locally. Locally, I use Pow, which allows me to use subdomains and host my local server. Would it be possible to somehow alias a specific subdomain to my remote S3 address?
In this way, going to:
https://host.mylocalVersionOfMyWebsite.dev/
...would actually access the remote S3 host, but as my local site. This way, my local site, mylocalApp.dev, would think that it's coming from the same domain, thus avoiding any CORS issues.
Using just Rails, I'm limited to redirects. Some Rack middleware gives the ability to write 301s, but not aliases. I've also considered using /etc/hosts, but haven't had any luck.
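To illustrate what I mean by an alias rather than a redirect, something like the following Rack middleware sketch might work in development (it assumes the rack-proxy gem; the dev subdomain and bucket host are placeholders, not the real names):

require 'rack/proxy'

# Sketch: forward requests for the masked dev subdomain to the S3 bucket host
# instead of redirecting, so the browser still believes it is on the local domain.
class S3DevProxy < Rack::Proxy
  def perform_request(env)
    request = Rack::Request.new(env)
    if request.host == 'host.mylocalversionofmywebsite.dev'        # placeholder dev subdomain
      env['HTTP_HOST'] = 'host.mywebsite.com.s3.amazonaws.com'     # placeholder bucket host
      super(env)
    else
      @app.call(env)
    end
  end
end

# config/environments/development.rb:
#   config.middleware.use S3DevProxy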

How to setup QuotaGuard Static for a Rails app hosted on heroku?

I'm trying to set up my Heroku app to have a static IP using QuotaGuard (I know Proximo is the other option, but it's pretty expensive).
I added the Heroku QuotaGuard Static add-on and got the two IPs it generates as well as the proxy URL.
What is my next step? (aka how do I tell my Rails app to use the proxy provided by QuotaGuard)
I see they have Ruby code samples using REST Client and HTTParty, but do I put that somewhere like application.rb?
Most likely a bit too late to answer this question, but still.
Like you said, the first step to configuring QuotaGuard Static is provisioning the addon on Heroku (either via the Web Interface or the Heroku CLI). From there, you are able to get your two outbound IPs, and your proxy URL. The two IPs you were given should be whitelisted on whichever remote service you are trying to access.
As you mentioned, the documentation gives you a couple of samples using Rest Client for Ruby on Rails. The snippet goes pretty much anywhere you need to reach a resource via the static IP addresses. Assuming you want to access a web service hosted on an Amazon EC2 instance with elastic IP 1.2.3.4, you would write:
# Route this request through the QuotaGuard Static proxy
RestClient.proxy = ENV["QUOTAGUARDSTATIC_URL"]
res = RestClient.get("http://1.2.3.4/yourWebService")
And from there, process the response stored in res appropriately. This code would go in, say, whichever controller method you'll be using to access the remote web service. In that case, you also need to require the Rest Client in that controller, so at the top of the file you should add require "rest-client". Don't forget to add the rest-client gem to your Gemfile.
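The docs also show an HTTParty variant; as a sketch (assuming the httparty gem and the same ENV variable), the equivalent call parses the proxy URL and passes its pieces as options:

require "uri"
require "httparty"

# Sketch: split the QuotaGuard proxy URL into host/port/credentials for HTTParty.
proxy = URI.parse(ENV["QUOTAGUARDSTATIC_URL"])
res = HTTParty.get("http://1.2.3.4/yourWebService",
                   http_proxyaddr: proxy.host,
                   http_proxyport: proxy.port,
                   http_proxyuser: proxy.user,
                   http_proxypass: proxy.password)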
Summing up: the snippets from the documentation go wherever you want to use the proxy to access a remote service that requires a fixed, whitelisted set of IP addresses.
Source: https://devcenter.heroku.com/articles/quotaguardstatic

Remote IP is 127.0.0.1 returned when using SSL / HTTPS

When using https, request.remote_ip returns 127.0.0.1, which prevents geocode lookup.
Is there a way to get the correct remote IP?
I have seen a few possible workarounds:
request.env['REMOTE_ADDR']
request.env['HTTP_X_FORWARDED_FOR']
which return 10.102.1.1
request.env['HTTP_X_REAL_IP']
which returns ""
It turns out this is a limitation of the way the servers at Ninefold are set up.
"Since our Rails stack is Apache Passenger, the client side IP headers are actually stripped off when they pass through the HA Proxy load balancer. In the CItrix implementation of this service, we are unable to pass those headers through to the rails app. At this stage its not possible to access the remote user's IP address."
As a possible work around, you could use a service like Fastly to do your load balancing, then point it directly at your app servers' IPs to bypass HAProxy on Ninefold. You'd get a nice, fast CDN in the process too.
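If whatever sits in front of the app does forward the header, a common way to recover the original client address is to take the left-most X-Forwarded-For entry, as in the sketch below (not specific to Ninefold or Fastly; Rails' request.remote_ip does essentially this when the headers survive):

# Sketch: prefer the left-most X-Forwarded-For entry, fall back to REMOTE_ADDR.
def client_ip
  forwarded = request.env["HTTP_X_FORWARDED_FOR"]
  forwarded ? forwarded.split(",").first.strip : request.env["REMOTE_ADDR"]
end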

Cloudflare + Heroku SSL

I have a rails app that is running on heroku and am using Cloudflare Pro with their Full SSL to encrypt traffic between: User <-SSL-> Cloudflare <-SSL-> Heroku, as detailed in: http://mikecoutermarsh.com/adding-ssl-to-heroku-with-cloudflare/ .
I am also using the rack-ssl-enforcer gem to force all http requests to go through https.
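(For reference, enabling that gem is normally a one-line middleware entry; this is the standard setup from its README, shown as a sketch with default options:)

# config/application.rb
config.middleware.use Rack::SslEnforcer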
This is working properly, except I have the following issues, by browser:
1) Firefox: I have to add a security exception on the first visit to the site, getting the "This site is not trusted" warning. Once on the site, I also have a warning in the address bar.
2) Chrome: the page loads the first time, but the lock in the address bar has a warning triangle on it which, when clicked, displays:
Your connection is encrypted with 128-bit encryption. However, this page includes other resources which are not secure. These resources can be viewed by others while in transit, and can be modified by an attacker to change the look of the page. The connection uses TLS 1.2. The connection is encrypted and authenticated using AES_128_GCM and uses ECDHE_RSA as the key exchange mechanism.
3) Safari: initially loads with the https badge, but it immediately drops off.
Is there a way to leverage Cloudflare SSL and piggyback off Heroku's native SSL without running into these security warnings? If not, I don't see much value in the configuration.
My apologies for slinging erroneous accusations against Cloudflare and Heroku :-)
Turns out the issue was not the fault of either, but instead that images on the app (being served from AWS S3) were being served up without https.
If anyone runs into this situation, lessons learned across a wasted day:
S3 only lets you serve up content via https if you serve from your bucket's dedicated url: s3.amazonaws.com/your-bucket-name/etc..
a) I tried setting the bucket up for static website hosting, so I could use the url "your-bucket-name.your-url.s3-website-us-east-1.amazonaws.com/etc...", and then set up a CNAME within my DNS that sends "your-bucket-name.your-url" to "your-bucket-name.your-url.s3-website-us-east-1.amazonaws.com/etc...", to pretty up urls
b) this works, but AWS only lets you serve via https with your full url (s3.amazonaws.com/your-bucket-name/etc..) or *.s3-website-us-east-1.amazonaws.com/etc..., which doesn't work if you have a dot in your bucket name (your-bucket-name.your-url), which was required for me to do the CNAME redirect
If you want to use AWS' CDN with https on your custom domain, AWS' only option is CloudFront with an SSL certificate, for which they charge $600/mo per region. No thanks!
In the end, I sucked it up and have ugly image URLs that look like https://s3-website-us-east-1.amazonaws.com/mybucketname..., and using Paperclip I specify https with ":s3_protocol => :https" in my model. Other than that, all is working properly now.
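For anyone wanting the concrete Paperclip setting, a minimal sketch of the model is below (the attachment name, bucket, and credentials path are placeholders, not taken from the app above):

class Photo < ActiveRecord::Base
  # Serve attachment URLs over https so they don't trigger mixed-content warnings.
  has_attached_file :image,
                    :storage        => :s3,
                    :s3_credentials => "#{Rails.root}/config/s3.yml",  # placeholder path
                    :bucket         => "mybucketname",                 # placeholder
                    :s3_protocol    => :https
end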

https URL redirecting to external site

Hi, I have a website that I will be developing in the future.
Upon looking at the current website, I noticed something weird that I have never seen before; I also Googled it and found nothing.
If you go to: http://www.smartrainer.com.au you get the normal site
But, if you go to: https://www.smartrainer.com.au you get redirected to another website and are also given an SSL warning beforehand (in Chrome)
The site is hosted on a UNIX / PHP server and the .htaccess file currently has nothing that would suggest that it's redirecting to this other website.
Any help or insight would be appreciated, because I've never heard of or seen this before. The client also has no idea why it would be redirecting to that company, which we've never heard of.
Thanks!
It sounds like you're using a shared hosting server.
In plain HTTP, the server can know which host the client is requesting using the Host header in the request (this is based on the URL). Apache Httpd supports this with what it calls Name-based virtual hosts.
The HTTPS configuration is separate from the HTTP configuration in Apache Httpd (and presumably a number of other servers). Having virtual hosts (typically on a shared host) for the HTTP configuration doesn't mean that the same configuration is replicated for HTTPS.
HTTPS presents another problem: choosing which certificate to send before being able to see the Host header. Indeed, the server needs to send the client a certificate with the correct name during the SSL/TLS handshake, which happens before any HTTP traffic is sent (so before the Host header can be read). To overcome this problem, some hosts will set up a certificate valid for multiple host names (typically multiple Subject Alternative Names, or sometimes wildcards), while others will use Server Name Indication (which isn't supported by all clients).
To get your server to host your site for HTTPS, you'd need:
To make sure the certificate it serves is valid for your host name (otherwise, there will be a warning message).
That the virtual hosts (or equivalent) it serves are configured for your host too.
In your case it seems that (a) your server is serving a single certificate that is not valid for your host and (b) your host isn't configured for HTTPS anyway, since you're falling back to what's probably the default host.
You may be able to work around this by redirecting HTTPS URLs to HTTP from your .htaccess. This error is likely because of shared hosting. If you cannot solve it from your .htaccess, you may also contact your hosting provider about the issue.
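For completeness, a common .htaccess rule for that kind of redirect looks like the sketch below (assuming mod_rewrite is enabled). Note, though, that the certificate is presented during the TLS handshake, before any rewrite runs, so the browser warning itself will still appear:

RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]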