(This is a follow-up to rails app fast on server, but slow when accessed from another machine.)
I have a Rails web app that's incredibly slow when I access it via its hostname, but runs at normal speed when I access it via its IP address (or via localhost, if I access it on the same server machine it's running on). This makes me think the problem is with DNS. (Also, all these machines are on the same corporate intranet.)
However, when I ping the hostname from a terminal, the ping seems to run fine. Does the fact that pinging works suggest that the problem is not with the DNS? (I don't really know much about DNS or servers and networking, so I'm kind of floundering around a bit here.)
Update to add: I also ran a simple "Hello world" Sinatra app, and this also runs super slowly when accessed via hostname (but not when accessed via IP address).
A fast ping from your terminal suggests that DNS resolution between you and your DNS server is fine, and that the network between you and the server is fine.
It still tells you nothing about DNS on the server itself, though. Does your server perform any network operations of its own? If so, you need to make sure those destinations are reachable.
I suggest deploying a simple "hello world" Rails application there and seeing whether the issue is server-wide (Rails-related) or specific to your application (this is very easy to do).
My other suggestion is to profile your Rails app and see which operation is taking the time to complete.
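One lightweight way to start that profiling from the outside: time the same request against the hostname and against the IP with curl, and compare where the time goes (the port and addresses here are placeholders for your own):

$ curl -s -o /dev/null -w 'dns: %{time_namelookup}s  total: %{time_total}s\n' http://yourhost:3000/
$ curl -s -o /dev/null -w 'dns: %{time_namelookup}s  total: %{time_total}s\n' http://192.168.1.50:3000/

If time_namelookup is small in both cases but time_total is much larger for the hostname, the delay is happening after name resolution, which points away from your client's DNS.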
Your ping command is probably using a cached DNS answer instead of hitting the DNS server every time. Google "flushdns" to find the right syntax to purge the cache on your particular operating system, then try it. You'll need to do this every time if you want to use ping to gauge DNS response.
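For reference, the usual commands (these vary by OS version, so treat them as a starting point):

$ dscacheutil -flushcache; sudo killall -HUP mDNSResponder   # Mac OS
C:\> ipconfig /flushdns                                      # Windows
$ sudo service nscd restart                                  # Linux, if nscd is doing the caching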
I'm sure this is possible somehow, but I've never really had a need to do it before. I have a bunch of Docker containers running on an Ubuntu host. One of the containers is an NGINX server that acts as a webserver and reverse proxy. What I would like to do is set up some sort of 'switch' or mechanism, on the host or preferably on another device, that does something like what's described below. The server has both production and development versions of the web applications, so I really just want to set a header when it is running but in maintenance.
1. Set up something on the host or another device that indicates or detects the state of the NGINX server:
a. Running normally.
b. Running but undergoing maintenance (would have to be manually set into that mode)
c. Not running.
2. Depending upon the state in 1, it would do the following:
a. Just pass the request through to the server.
b. Pass through the request, but possibly set a header or something to indicate it is in maintenance mode.
c. Redirect the request to an external URL, basically the public facing page for the business.
Not really sure how to approach that, since it seems I would need an HTTP listener on the host, or possibly on a router, firewall, or other device (we have a FortiGate and a WatchGuard), that would check the HTTP request and then take the appropriate action based upon which "mode" we are in. The Ubuntu host is pretty much bare bones, i.e. without Apache or another web server, because everything is pretty much in the Docker package.
If I were to set that up on the Ubuntu host, it seems we could just have an environment variable that defaults to PROD, set it to DEV while we are working on the code, and set it back to PROD when we are done; the process would then:
ping the Docker NGINX instance to see if it is running (i.e., check the status code).
if it is running and PROD, just forward on.
if it is running and DEV, set a header to indicate so and forward on.
if it is not running, redirect externally.
If the server is completely down though, that would fail.
Any ideas as to how to actually do that? Ideally the 'processor' would not reside on the Ubuntu server at all and would always be running.
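For what it's worth, the decision logic described in the question is small enough to sketch in a few lines of shell. Everything here is a placeholder (the mode file, the internal address, the public URL), and the actual pass-through and header-setting would still have to live in whatever proxy sits in front:

#!/bin/sh
# placeholder values throughout; adapt to the real setup
MODE=$(cat /etc/app-mode 2>/dev/null || echo PROD)   # PROD or DEV, toggled manually
if curl -fsS -o /dev/null --max-time 2 http://10.0.0.5/; then
  if [ "$MODE" = "DEV" ]; then
    echo "up, maintenance: proxy the request and add an X-Maintenance: 1 header"
  else
    echo "up, normal: proxy the request straight through"
  fi
else
  echo "down: redirect the client to https://public.example.com/"
fi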
I have SSH'd into my Raspberry Pi and built a Rails application.
Now how do I load the rails app from another machine?
I have tried IP:port in a web browser, but this fails.
Can I use ssh from a web browser to load the rails server process?
Are there gems I need to install to do this?
Is there any good documentation that I have missed?
SOLUTION
Use ngrok to tunnel: https://medium.com/@karimbutt/using-ngrok-to-create-a-publicly-accessible-web-facing-raspberry-pi-server-35deef8c816a#.sraso7zar
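The basic recipe, assuming the Rails app listens on port 3000 and you're on ngrok 2.x:

$ rails server -p 3000   # on the Pi
$ ngrok http 3000        # prints a public URL that forwards to localhost:3000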
Maybe the problem is with the IP address you're trying to use. Servers don't necessarily forward their public IP traffic to localhost automatically.
Perhaps you could configure the IP address somehow, I don't know (others might?). Alternatively, you could use a "local tunnel" service like ngrok or localtunnel. What these do is create a public URL for your localhost (i.e. your "loopback" address) so anyone can access it.
I spoke with an ngrok author via email. He assured me that I shouldn't expect any downtime from the service or need to restart it manually. Keep in mind, though, that if you're on the free plan, you'll get a different URL whenever you restart ngrok. He also described it as something like a "souped up SSH -R".
Problem:
My Rails app (on my local machine) only responds to requests sent from the same machine to localhost, 127.0.0.1, or my internal IP address. When I try to hit it using my Internet IP, or from any other machine inside or outside of my network, it just times out. I'm on Mac OS 10.9.1, Ruby 1.9.3, Rails 4.0.0.
I've done a lot of searching, but all I can find are problems where people didn't forward their ports or bind the right IP.
Here are the areas I've investigated:
Ports -
I've tried several different ports. I configured my router to forward every port I tried, but got the same result. I thought maybe there was a problem with the router, so I built a simple server in Java and bound all the same ports I was binding with my Rails app. Sure enough, when I hit the Java app using my Internet IP it worked just fine, so the router/firewall/port forwarding isn't the problem. Also, I run an Apache server on port 80 and that has never had any problems. I turned Apache off and tried port 80 for my Rails app, but that didn't fix the problem.
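(For anyone without a Java server handy, netcat does the same reachability test; <public-ip> is whatever your router exposes:)

$ nc -l 3000                # on the server: listen on the port
$ nc -vz <public-ip> 3000   # from another machine: probe it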
Rails Server -
I started with WEBrick and thought that perhaps some setting inside it blocked external requests. I searched Google extensively and found nothing on the matter. Just to be safe, I installed Thin and got the exact same result as with WEBrick. One interesting thing: when the Rails server is started, the external request takes a long time to time out, but the server console displays no output at all. If I send the same request without starting the server at all, it fails immediately.
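One quick sanity check before blaming the server itself: confirm which address it actually bound (port 3000 assumed; WEBrick under Rails 4.0 should bind 0.0.0.0 by default, but it's worth verifying):

$ lsof -iTCP:3000 -sTCP:LISTEN      # 127.0.0.1:3000 means localhost only; *:3000 means all interfaces
$ rails server -b 0.0.0.0 -p 3000   # forces an explicit bind to all interfaces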
User Permissions -
I started the server as root (I'm starting to just shoot in the dark here) and it had no effect.
Environment -
I was originally starting in the development environment because I'm developing, but just for fun I tried starting in production, and it also made no difference.
PLEASE HELP ME SMART PEOPLE
Update:
I installed the app on my Ubuntu machine and it doesn't have this problem! So that suggests the problem may have something to do with Mac OS.
SOLVED:
It turns out that under System Preferences -> Security & Privacy -> Firewall in Mac OS, incoming connections to Ruby 1.9.3 were somehow set to be blocked. I must have accidentally set that some time ago.
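For anyone else who hits this, the application firewall can also be inspected from a terminal (paths as of OS X 10.9; flags may differ on other versions):

$ /usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate
$ /usr/libexec/ApplicationFirewall/socketfilterfw --listapps   # look for a ruby entry marked as blocked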
The problem is that you are probably requesting the page from your local machine (or another computer on your local network, behind your firewall) using your public IP and expecting a result. That won't work unless you set up routes through your firewall for it (NAT loopback, which is not usually available on a consumer-level router: Linksys, D-Link, etc.).
So forward port 80 to your local machine if you are using something like Pow, or port 3000 for WEBrick's default.
Then have someone outside your local LAN request your external (public) IP.
This may be related: Rails 3.1 on Ubuntu 11.10 under VirtualBox very slow
Your mention of slowness combined with the use of WEBrick makes me think you've got some reverse-DNS lookup awfulness going on. A quick first step is hacking /etc/hosts to bypass this lookup.
The situation I dealt with on Ubuntu was solved in the short term by hacking /etc/hosts. You could do this quick hack to see if it is indeed just WEBrick's reverse-DNS lookup. Edit /etc/hosts and add a line for the external user's IP address, something like this:
156.123.48.55 TestPerson
Replace the IP address with the tester's IP address. Since you said you can get the external request to hit an Apache server on port 80, you can grab their IP address from the Apache access logs if necessary, otherwise just ask the person testing.
You could also try a different web server, such as Unicorn, which may help. Add gem 'unicorn-rails' to your Gemfile, run bundle install, and then (according to their docs) rails server will just use Unicorn directly.
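Concretely, that comes down to (gem name per their docs, as above):

$ echo "gem 'unicorn-rails'" >> Gemfile
$ bundle install
$ rails server   # now serves with Unicorn instead of WEBrick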
With any local server, you'll need to correctly configure port forwarding on your firewall. As CaptChrisD said, tests must be done from an external IP/browser (if you own a server, SSH into it, then use w3m to test).
I have had the same symptom before (server started => timeout, server stopped => immediate failure), and the origin was an issue with the firewall configuration. I think that is your problem.
With Mac OS, Pow is really awesome: installation is easy and no configuration is required (no /etc/hosts…). Moreover, it gives you a hook for external access to your virtual hosts (but you still need port forwarding on your firewall).
Otherwise, there are other solutions, like Forward, that do it without any firewall configuration (30-day free trial).
Hope this helps!
I am not very experienced, but I have played around with Rails a little in the past. When I did, it was easy to test the app without actually exposing anything to the internet, since I could just point my browser at localhost. But this app will be getting input from a cellphone, so I think it needs to be exposed. What I have done so far is push it to Heroku and test there, but that doesn't seem like a good solution at all, since every time I make a change I have to push it. I am thinking I have to open a port on my router and expose the server, which I think I can figure out how to do fairly quickly. Any suggestions on how to keep this as safe as possible? Or is there a better solution that I am missing?
If your server and your cell phone are on the same network, you can test it by finding the local IP address of the machine running the server. You would then go into the browser on the cell phone and type the server machine's IP, a colon, and the port the server is listening on (most likely 3000 for a Rails server).
So, for example, if the server's IP was 192.168.0.1, it would be 192.168.0.1:3000.
Since you are doing this from an app, just put in 192.168.0.1 for the IP of the connection and 3000 for the port. Or, if using a URL, 192.168.0.1:3000 (just like in the browser):
// requires the Apache HttpClient 4.x classes (org.apache.http.*)
HttpClient client = new DefaultHttpClient();
// note plain http://, since the Rails dev server doesn't speak TLS on port 3000
HttpPost post = new HttpPost("http://192.168.0.1:3000");
A very simple way is to use Pow in combination with xip.io.
The former is a local webserver that will run any Rack application behind the scenes for you.
Installing is as simple as:
$ curl get.pow.cx | sh
and linking your app in:
$ ln -s <path-to-app> ~/.pow/myapp
Your app is now accessible at http://myapp.dev/ locally.
Assuming your computer's IP is 10.0.1.1 and your cell phone is on the same Wifi network, your app will be accessible on the phone from http://myapp.10.0.1.1.xip.io.
Caveat: you'll be getting Wifi performance, not cellular performance.
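The trick behind xip.io is wildcard DNS: any hostname ending in an IP resolves to that IP, so nothing needs to be configured on the phone. You can verify the resolution yourself:

$ dig +short myapp.10.0.1.1.xip.io
10.0.1.1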
I've got everything working with Ferret and acts_as_ferret in development (with a localhost DRb), but I can't get my multiple-host deployment working. All of the remote systems get ECONNREFUSED when accessing the port. On the ferret server, the daemon is listening on localhost only, despite the configuration listing the FQDN as the host.
I also tried switching to a UNIX socket to share data between the ferret DRb daemon and the app code, but it too gets ECONNREFUSED. (The socket is available to all of the machines via an NFS mount.)
Is there a better way to do this or should I be looking for another search indexer? Thanks.
I did figure out that if the address is changed to druby://0.0.0.0:port, the DRb server will listen on all IPs; however, that doesn't provide any protection against bad code being injected into the DRb process.
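A quick way to confirm which interface the daemon actually bound (substitute the port from your ferret server config for 9010):

$ netstat -an | grep 9010 | grep -i listen
# 127.0.0.1 on that port means localhost only; * or 0.0.0.0 means all interfaces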
Basically, don't use Ferret. I've moved on to Xapian with acts_as_xapian for Rails. It supports multiple processes reading but only one writing, so it's an offline index. However, I will be able to share the same index between multiple servers via the shared file system (NFS).
Check out "Pitfalls of acts_as_ferret, with DrbServer to the rescue":
http://www.subelsky.com/2007/03/pitfalls-of-actsasferret-with-drbserver.html
It worked pretty well for me. The only thing I'd add is to be sure to set the host value to wherever your ferret is running.