I'm trying to use Juggernaut 2 on a website that uses HTTPS. I don't need Juggernaut itself to use HTTPS per se.
So, in the layout of my Rails app, I'm loading the required application.js over HTTP from Juggernaut's own webserver on port 8080.
That works fine.
Then I notice Juggernaut trying to read socket.io from port 8080 over HTTPS, and of course failing, since its own webserver uses HTTP and not HTTPS.
So I either need to make Juggernaut's own webserver on port 8080 use HTTPS, or I need to get Juggernaut to load everything it needs from port 8080 over HTTP.
I could of course locate its application.js and hardcode HTTP usage in there, but is there a better way to solve this?
With some searching I found this solution:
<script type="text/javascript" charset="utf-8">
var jug = new Juggernaut({protocol: 'http', host: 'www.mysite.com', port: '8080', secure: false});
</script>
This will let Juggernaut load socket.io through the host, protocol, and port that you specify.
You could also host the socket.io and Juggernaut JS files on your own site and reference them over HTTPS.
That way your users won't get warnings about insecure content on a secure site.
The downside, of course, is that you will need to keep them up to date whenever you upgrade Juggernaut.
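For example, assuming you copy both files into your Rails app's public directory (the /juggernaut path here is just a placeholder), the layout can reference them with same-origin paths:

<script type="text/javascript" src="/juggernaut/socket.io.js"></script>
<script type="text/javascript" src="/juggernaut/application.js"></script>

Served from the same origin, they automatically inherit the page's HTTPS.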
I have a backend server (powered by Rails) whose APIs are used by an HTML5 frontend that runs on a simple Node development server.
Both are on the same host: my machine.
When I log in from the frontend to the backend, Rails sends me the session cookie. I can see it in the response headers; the problem is that the browser does not save it.
The policies are right: if I serve the same frontend directly from the Rails app, the cookies are set correctly.
The only difference I can see is that when the frontend runs on the Node server, it runs on port 8080 while Rails is on port 3000. I know that cookies are not supposed to be port-specific, so I am missing what is happening here.
Any thoughts? Solutions?
(I need to be able to keep the setup this way, with the frontend served from Node and the backend on Rails, on different ports.)
You're correct that cookies are port-agnostic and that the browser will send the same cookies to myapp.local:3000 as to myapp.local:8080. The exception is XMLHttpRequest (XHR, a.k.a. AJAX): by default it does not include cookies on cross-origin (CORS) requests.
Solution: The request can be told to include cookies and auth headers by setting withCredentials to true on any XMLHttpRequest object. See: https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/withCredentials
Or if using the Fetch API, set the option credentials: 'include'. See: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch
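A minimal sketch of both options, assuming the Rails API is reachable at http://myapp.local:3000 (hostname and path are placeholders):

var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://myapp.local:3000/api/session');
xhr.withCredentials = true; // include cookies on this cross-origin request
xhr.send();

// Fetch API equivalent
fetch('http://myapp.local:3000/api/session', { credentials: 'include' })
    .then(function (res) { return res.json(); });

Note that the server must also respond with Access-Control-Allow-Credentials: true and an explicit (non-wildcard) Access-Control-Allow-Origin, or the browser will still refuse to store the cookie.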
Alternative: since you tagged webpack-dev-server in your question, you might be interested in proxying requests to your Rails API through the webpack-dev-server to avoid any CORS issues in the first place. This is done in your webpack config:
devServer: {
  proxy: {
    '/some/path': {
      target: 'https://other-server.example.com',
      secure: false // don't verify the proxy target's SSL certificate
    }
  }
}
See: https://webpack.js.org/configuration/dev-server/#devserverproxy
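With the proxy in place, the frontend just makes same-origin requests and the dev server forwards them, so the browser never sees a cross-origin request at all (the path matches the placeholder in the config above):

// Served from :8080, this request is same-origin; webpack-dev-server
// forwards it to https://other-server.example.com behind the scenes.
fetch('/some/path/items').then(function (res) { return res.json(); });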
I have hosted my website at http://www.example.com, and it works fine.
When I try to access it via https://www.example.com, my browser says it is unable to connect.
Is this normal? (Is it a DNS issue or a Rails app issue?)
This probably isn't a Rails issue, but it's hard to say without more information. The most likely explanation is that your server isn't configured with port 443 open, which is the default port for HTTPS connections.
If you are on Amazon EC2, you'll need to manually open port 443 in the EC2 security group configuration.
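For example, with the AWS CLI the rule can be added like this (a sketch; the security-group ID is a placeholder):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0

Keep in mind that opening the port only allows traffic through; your web server or load balancer still has to be configured to terminate SSL on 443.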
I have a pretty unique predicament here. I'm using Twilio and need to test my TwiML response on my local machine. The go-to solution for that is ngrok, but the problem is that the site I'm working on relies on subdomains for proper routing. There is no mysite.com, only sub.mysite.com. In the local environment I've modified my hosts file to point sub.mysite.dev to 127.0.0.1, but I haven't a clue how to solve this over a tunnel. Any thoughts?
I'm the creator of ngrok.
You can still make this work with ngrok; you'll just need to decide up front on a few subdomains that you want to use for testing. ngrok lets you forward multiple tunnels via the configuration file (https://ngrok.com/usage#config). Example configuration file:
tunnels:
  one.mysite:
    proto:
      http: 80
  two.mysite:
    proto:
      http: 80
  three.mysite:
    proto:
      http: 80
This will forward
one.mysite.ngrok.com -> 127.0.0.1:80
two.mysite.ngrok.com -> 127.0.0.1:80
three.mysite.ngrok.com -> 127.0.0.1:80
It's not a wildcard (ngrok doesn't support wildcards at the moment), but having a few subdomains setup should be good enough for testing, I imagine.
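Assuming the start syntax from the same usage docs, you can then bring all of the named tunnels up at once:

ngrok start one.mysite two.mysite three.mysite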
In production my Grails app runs on Tomcat on port 8080 and sits behind an Apache proxy on port 80. All functions work except authentication: when a user tries to log into the app, Spring Security appends :8080 to the target URL and the connection times out, since the request can't be routed.
I have already set the following in Config.groovy, but without success:
grails.serverURL = "http://domain.tld/${appName}"
grails.plugins.springsecurity.portMapper.httpPort = "80"
grails.plugins.springsecurity.portMapper.httpsPort = "443"
The issue occurs with both built-in authentication and OpenID. The app had been working well for over six months before my hosting provider plugged a hole and started blocking port 8080 from the outside.
I just need Spring Security to write the URL without :8080.
Any help is appreciated, thanks!
UPDATE
In the end the issue was with the host I was using. Something to do with Apache ProxyPass. The same application works fine on the new production VPS. Thanks for the input guys.
Add the following to your Config.groovy file:
grails.server.port.http = 80
I would consider changing Tomcat to port 80 and forwarding all Apache requests to it. Pay attention to the X-Forwarded-Proto header, which Spring Security uses.
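For the Apache-in-front variant, a minimal proxy vhost might look like this (a sketch using mod_proxy and mod_headers; adjust the backend address to your deployment):

<VirtualHost *:80>
    ProxyPreserveHost On
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
    # Tell the app which scheme the client used, so Spring Security
    # can rebuild redirect URLs without the :8080 port
    RequestHeader set X-Forwarded-Proto "http"
</VirtualHost>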
Already answered in the question
mod_rewrite not working to secure grails application
The Tomcat configuration change works like a charm.
I have a Ruby on Rails website on which I force all connections to use SSL. I need all connections from that site to use HTTPS as well. Also, Google Chrome will automatically switch to HTTPS even if I connect on another port.
This means that I cannot connect to
http://www.mysite.com:8080
I have to serve the Juggernaut JS file over HTTPS. But that doesn't work, since Juggernaut doesn't want to use HTTPS instead of HTTP on its internal webserver. So I copied the application.js file from the Juggernaut folder /usr/local/lib/node_modules/juggernaut/public/application.js into my Rails folder public/juggernaut and changed the following line in my HTML code:

<script type="text/javascript" src="http://www.mysite.com:8080/application.js"></script>

to

<script type="text/javascript" src="/juggernaut/application.js"></script>
Now I seem to be able to at least instantiate a Juggernaut object. The problem arises when I actually start listening. I get this error:
Not found: https://www.mysite.com:8080/socket.io/1/?t=1340749304426&jsonp=0
So either I need to
a) change things so that Juggernaut's webserver actually uses HTTPS instead of HTTP. This is preferable.
or
b1) fix Juggernaut so it doesn't try to access socket.io over port 8080, and
b2) add socket.io to my server, preferably under the www.mysite.com/juggernaut folder instead of the root.
Any ideas?
Thanks!
Might be a little late, but I was able to get it to work using this. (My Juggernaut is hosted on Heroku.)
var jug = new Juggernaut({
  secure: true,                                 // connect over HTTPS/WSS
  host: 'yourHostHere',
  port: 443,                                    // standard HTTPS port
  transports: ['xhr-polling', 'jsonp-polling']  // long-polling fallbacks only
});
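Limiting the transports to the two long-polling options is likely what makes this work on Heroku, whose routing stack did not support WebSocket connections at the time.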