I want to enforce HTTPS for a Spring Boot application to be hosted on Pivotal Cloud Foundry, and I think most applications would want this today. The common way of doing it, as far as I know, is using
http.requiresChannel().anyRequest().requiresSecure()
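In context, that line sits in a standard Spring Security Java configuration, something like this (a minimal sketch, not my exact config):

import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // Ask Spring Security to redirect any plain-HTTP request to HTTPS.
        http.requiresChannel().anyRequest().requiresSecure();
    }
}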
But this is causing a redirect loop. The cause, as I understand from referring to posts like this, is that the load balancer terminates SSL and forwards the request on as plain http. That means it has to be handled at the load balancer level.
So, is there some option to tell Cloud Foundry to enforce HTTPS for an application? If not, shouldn't this be a feature request? And what could be a good way to achieve this today?
Update: Did any of you from the Cloud Foundry or Spring Security teams see this post? I think this is an essential feature before one can host an application on Cloud Foundry. Googling, I found no easy solution other than telling users to use https instead of http. But even if I tell them so, when an anonymous user tries to access a restricted page, Spring Security redirects him back to the http login page.
Update 2: Of course, we have the x-forwarded-proto header, as many answers suggest, but I don't know how hard it would be to customize Spring Security's features to use it. Then we have other things like Spring Social integrating with Spring Security, and I just faced an issue there as well. I think either Spring Security and tons of other frameworks will need to come out with solutions that use x-forwarded-proto, or Cloud Foundry needs some way to handle it transparently. I think the latter would be far more convenient.
Normally, when you push a WAR file to Cloud Foundry, the Java buildpack will take that and deploy it to Tomcat. This works great because the Java buildpack can configure Tomcat for you and automatically include a RemoteIpValve, which is what takes the x-forwarded-* headers and reconfigures your request object.
If you're using Spring Boot and pushing a JAR file, you'll have an embedded Tomcat in your application. Because Tomcat is embedded in your app, the Java buildpack cannot configure it for the environment (i.e. it cannot configure the RemoteIpValve). This means you need to configure it yourself. Instructions for doing that with Spring Boot can be found here.
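For a Spring Boot 1.x app this typically comes down to two application.properties entries that enable the embedded Tomcat's RemoteIpValve (a sketch; the exact property names have shifted between Boot versions):

server.tomcat.remote-ip-header=x-forwarded-for
server.tomcat.protocol-header=x-forwarded-proto

With those set, the valve rewrites the request object so that request.isSecure() and Spring Security's requiresSecure() see the scheme the client actually used, which breaks the redirect loop described in the question.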
If you're deploying a web application as a JAR file but using a different framework or embedded container, you'll need to look up the docs for your framework / container and see whether it handles the x-forwarded-* headers automatically. If not, you'll need to handle them manually, as the other answers suggest.
You need to check the x-forwarded-proto header. Here is a method to do this.
public boolean isSecure(HttpServletRequest request) {
    // SSL is terminated at the load balancer, so request.isSecure() is
    // always false here; the original scheme arrives in x-forwarded-proto.
    String protocol = request.getHeader("x-forwarded-proto");
    // Treat a missing header, or anything other than "https", as insecure.
    return "https".equals(protocol);
}
Additionally, I have created an example servlet that does this as well.
https://hub.jazz.net/git/jsloyer/sslcheck
git clone https://hub.jazz.net/git/jsloyer/sslcheck
The app is running live at http://sslcheck.mybluemix.net and https://sslcheck.mybluemix.net.
Requests forwarded by the load balancer will have an http header called x-forwarded-proto set to https or http. You can use this to affect the behavior of your application with regard to SSL termination.
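If you want to enforce HTTPS outright rather than just detect it, the same check can drive a redirect from a plain servlet filter. A minimal sketch (the class name is hypothetical, and the query string is ignored for brevity):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HttpsEnforcementFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        // SSL is terminated at the load balancer, so trust its
        // x-forwarded-proto header rather than request.isSecure().
        if ("http".equalsIgnoreCase(request.getHeader("x-forwarded-proto"))) {
            // Rebuild the URL with the https scheme and redirect there.
            String url = request.getRequestURL().toString();
            response.sendRedirect("https" + url.substring("http".length()));
            return;
        }
        chain.doFilter(req, res);
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}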
Related
There is an FTP server that I can connect to from my development machine, using either FileZilla or the Rails app I'm working on. But as soon as I deploy the app to Heroku, the exact same connection parameters time out. My best guess is that the server blocks IP ranges that include Heroku, or dynamic IPs in general. It is not a configuration problem, because the deployed app can connect to other FTP servers without issue.
To get around this problem, I'm trying to use a QuotaGuard static URL as a proxy; I've already provisioned the add-on and have an ENV variable for it. The problem is that this static URL is in the form http://username:password@subdomain.domain.com:9293.
How can I use this to handle an FTP connection?
Current code (works locally, times out on Heroku):
Net::FTP.open(host, username, password) do |ftp|
  ftp.chdir(some_directory)
  # some logic here about which files to download
end
I've checked the Ruby docs for Net::FTP and Net::HTTP for more information. FTP only seems able to use a SOCKS proxy, but HTTP seems more flexible. Could I use the static URL as a SOCKS proxy by ignoring the http:// prefix? Could I restructure the logic so that I can GET each FTP URL I need via HTTP?
I've also looked into using ProxyChainRB to do this, but so far I'm not having any luck, since I run into the same issue of passing the proxy into an FTP connection.
Are there existing libraries that do this? Is there maybe a simpler solution I'm not seeing here?
I'm having great difficulty working out what's going on with a Grails 2.2.5 application which uses the Shiro plugin (v.1.2.1). This is on a system which has been working fine for a couple of years. It sits behind an nginx remote proxy server, which has hitherto been listening on ports 80/443. We've just moved the test rig, and it now shares a server with an Apache installation which has those ports, so we have nginx listening on ports 8070 for http and 8443 for https. It's largely working, but there are some puzzling problems with redirects when a user is not authenticated, and these problems seem to be coming from Shiro (although I'm having difficulty being certain).
Basically what's happening is that when an unauthenticated user goes to 'https://myapp.com:8443/admin/', the Grails application is issuing a redirect which takes them to 'https://myapp.com:8443/auth/login?targetUri=%2F' - i.e., the context has been stripped out. It should be 'https://myapp.com:8443/admin/auth/login?targetUri=%2F', and is so on the live server, which uses the standard ports (80/443). In fact, when I look at the Location header in the response, what it's actually responding with is 'http://myapp.com:8070/auth/login?targetUri=%2F' (i.e., with the http port, which is no problem as nginx is handling SSL).
Because my code, in AuthController.groovy, doesn't actually get involved until it receives the /auth/login request, this problem doesn't seem to be coming from anywhere in my code, and must be coming from the Shiro plugin. But why would the non-standard port be causing this problem (stripping out the context)? And more importantly, what can I do to solve it?
I think I may have been wrongly ascribing blame to Shiro or the Shiro plugin here. I have solved the problem, which seems to be caused by a quirk in Grails itself.
In version 1 of Grails, one had to set grails.serverURL correctly in Config.groovy in order for redirects to work properly, but with version 2 that was no longer necessary, and in fact this property was commented out in a newly created app.
However, whatever replaced that mechanism doesn't seem to play nicely with non-standard ports. If the server port is something like 8070 or 8443, as I was using, the redirect is formed incorrectly.
I have resolved the issue by reinstating grails.serverURL and making sure it's configured correctly. Now redirects work as they should again.
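For completeness: the setting goes in Config.groovy and needs to carry the scheme, the non-standard port, and the context path. In my setup it looks something like this (hostname anonymized as in the rest of this post):

grails.serverURL = "https://myapp.com:8443/admin"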
I am hosting an ASP.NET MVC 4 site on AppHarbor (which uses Amazon EC2), and I'm using CloudFlare for Flexible SSL. I'm having a problem with redirect loops (310) when trying to use RequireHttps. The problem is that, like EC2, CloudFlare terminates the SSL before forwarding the request on to the server. However, whereas Amazon sets the X-Forwarded-Proto header so that you can handle the request with a custom filter, CloudFlare does not appear to. Or if they do, I don't know how they're doing it, since I can't intercept traffic at that level. I've tried the solutions for Amazon EC2, but they don't seem to help with CloudFlare.
Has anyone experienced this issue, or know enough about CloudFlare to help?
The X-Forwarded-Proto header is intentionally overwritten by AppHarbor's load balancers with the actual scheme of the request.
Note that while CloudFlare's flexible SSL option may add slightly more security, there is still unencrypted traffic travelling over the public internet from CloudFlare to AppHarbor. This arguably defeats the purpose of SSL for anything other than appearances and reducing the number of attack vectors (like packet sniffing on the user's local network) - i.e. it may look "professional" to your users, but it is actually still insecure.
That's less than ideal, particularly since AppHarbor both supports installing your own certificates and includes piggyback SSL out of the box. CloudFlare also recommends using "Full SSL" for scenarios where the origin server/service supports SSL. So you have a couple of options:
Continue to use the insecure "Flexible SSL" option, but instead of inspecting the X-Forwarded-Proto header in your custom RequireHttps filter, inspect the scheme attribute of the CF-Visitor header (see the sketch after this list). There are more details in this discussion.
Use "Full SSL" and point CloudFlare to your *.apphb.com hostname. This way you can use the complimentary piggyback SSL that is enabled by default with your AppHarbor app. You'll have to override the Host header on CloudFlare to make this work and here's a blog post on how to do that. This will of course make requests to your app appear like they were made to your *.apphb.com domain - so if for instance you automatically redirect requests to a "canonical" URL or generate absolute URLs you'll likely have to take this into consideration.
Upload your certificate and add a custom hostname to AppHarbor. Then turn on "Full SSL" on CloudFlare. This way the host header will remain the same and your application will continue to work without any modifications. You can read more about the SSL options offered by AppHarbor in this knowledge base article.
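To illustrate the first option: CloudFlare sends CF-Visitor as a small JSON object, so the check in your filter boils down to a one-line inspection. A rough sketch in Java terms (the method name is made up; your ASP.NET filter would do the equivalent in C#):

public boolean isSecure(HttpServletRequest request) {
    // CloudFlare sends something like: CF-Visitor: {"scheme":"https"}
    String cfVisitor = request.getHeader("CF-Visitor");
    // A real implementation should parse the JSON properly; a substring
    // check is enough to illustrate the idea.
    return cfVisitor != null && cfVisitor.contains("\"scheme\":\"https\"");
}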
This is interesting.
I recently had a discussion with one of our clients, who asked me about "flexible" SSL and suggested that we (Incapsula) also offer such an option.
After some discussion we both came to the conclusion that such a feature would be misleading, since it will provide the end-user with a false sense of security while also exposing the site owner to liability claims.
Simply put, a visitor on one of these "flexible" SSL connections may feel absolutely safe behind the encryption and be willing to provide sensitive data, not knowing that the 'server to cloud' route is not encrypted at all and can be intercepted (e.g. by backdoor shells).
It was interesting to visit here and see others reach the same conclusion. +1
Please know that, as the website owner, you may be liable for any unwanted exposure such a setup may cause.
My suggestion is to do the responsible thing and invest in an SSL certificate, or even create a self-signed one (to use for encrypting the 'cloud to server' route).
Or you could just get a free one-year SSL cert signed by StartCom and upload that to AppHarbor.
Then you can call it a day and pat yourself on the back! That is, until future you, one year from now, has to purchase a cert =).
Reading this article on the nginx website, I'm interested in using the X-Accel-Redirect header in the way that Apache or Lighttpd users might use the X-Sendfile header, to help with serving large files.
Most tutorials I've found require you to modify the nginx config file.
Can I modify the nginx config file on Heroku and if so, how?
Secondly,
I found this X-Accel-Redirect plugin on GitHub which looks like it removes the need to manually alter the nginx config file - it seems to let you add the redirect location in your controller code. Does anyone know if this works on Heroku? I can't test it out until tonight.
NB - I have emailed both Heroku support and goncalossilva to ask them the same questions, but I have no idea when they will get back to me. I will post back with whatever they tell me, though.
Although Heroku seems to be using Nginx for its reverse-proxy component, the thing about a platform-as-a-service stack like this is that no individual tenant has to (nor even gets to) configure or tune distinct elements of the stack for any given application.
Requests in and out could be routed through any number of different elements on their way to and from your Rails app, so it's the platform infrastructure (and not any particular tenant) that manages all of the internal configuration and behavior. You give up fine-grained control in exchange for the other conveniences offered by a PaaS like this.
If you really need what you've described, then I'd suggest you might need to look elsewhere for Rails app hosting. I'd be surprised if their answer were anything but no.
How can I future-proof my client URL links to my server for future HTTPS migration?
I have a .net winforms client talking to my ruby on rails backend. If I move the website in the future I want to make sure that my API links from the client don't have to change.
Or is this something a hosting provider can let you configure?
Oh, and when I do migrate, I will not want any non-HTTPS traffic to occur.
PS1 - I am not talking about moving servers here, just upgrading the existing web application server with a certificate and moving to HTTPS-only traffic.
Place a base URL as a config parameter in your client application, then run all new links through a getLinkURL(String relativeDestination) method which will give you a full URL.
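Something like this, sketched in Java (the class name and config handling are just illustrative):

public class ApiLinks {

    // e.g. "https://api.example.com" - read once from client config, so a
    // later switch to HTTPS-only is a config change, not a code change.
    private final String baseUrl;

    public ApiLinks(String baseUrl) {
        // Normalize away a trailing slash so concatenation stays predictable.
        this.baseUrl = baseUrl.endsWith("/")
                ? baseUrl.substring(0, baseUrl.length() - 1)
                : baseUrl;
    }

    public String getLinkURL(String relativeDestination) {
        return relativeDestination.startsWith("/")
                ? baseUrl + relativeDestination
                : baseUrl + "/" + relativeDestination;
    }
}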
If you're worried about clients that haven't been updated still making plain-HTTP requests, just put a Redirect 301 / https:// rule in the http (non-secure) vhost on your server.
If I understand the question correctly, I think you can solve this by using relative links everywhere; unless there's a reason you can't do that?
I think you need to look into DNS and how it works. It won't protect you against an HTTP-to-HTTPS migration, but it would allow you to move servers without re-engineering your code. Ideally, I think you'd want a config setting in your code to switch from HTTP to HTTPS (and back) when necessary.