Ruby on Rails (based on Mephisto) - Unable to contact server

I am completely new to Ruby, and I inherited a Ruby system for a product catalogue. Most of my users are able to view everything as they should, but overseas users (specifically in Mexico) cannot contact the server once logged in, even though they are active users. I'm sorry I cannot be more specific, and the system is private, so I cannot grant access.
Has anyone had any issues similar to this before? Is it a user-end issue or a system error?

Speaking as somebody who regularly ends up on your users' side of the fence, the number one culprit for this symptom is "clueless administrator". There are many, many sites which generically block large blocks of IP space, or which geolocate and carve out big portions of the world.
For example, a surprising number of American blogs block Asian countries (including Japan) out of a misplaced effort to avoid DDoS attacks (which probably actually originated in Russia or China, but hey, this species of administrator isn't very good at fine-tuning solutions). I have to hop over to my American proxy server to access those sites.
So the first thing I'd do to diagnose your problems is to see whether your Mexican users are making it to the server at all, or whether they're being blocked somewhere earlier (router? firewall? etc). Then, to determine whether the problem is on your end or their end, I'd try to replicate the issue with you proxying your connection through a Mexican proxy and repeating the actions they took to cause the issue.
The fact that they get blocked after logging in could indicate that you have HTTPS issues, for example with an HTTPS accelerator installed [1], or it could be that your frontend server is properly serving up the static content but applying its checks to dynamic requests only.
[1] We've seen some really weird bugs at work caused by a malfunctioning HTTPS accelerator.

If it's working for everyone else, then it would appear that the problem is not with Ruby or Rails themselves, since they are demonstrably working...
My first thought would be to check for a network issue: are the Mexican users all behind the same proxy server and/or firewall?
Is login handled within the Rails application or via some other resource? Can you see any evidence that requests from Mexican users are reaching your web server at all?

Login is handled by the Rails app. I am currently trying to hunt down the logs; it's taking some time as, again, I am new to this system.
Cheers guys

Maybe INS is cracking down on cyber-immigration.

Related

Are there any situations (e.g. failures) when a browser clears cookies on its own?

We have two sites on different subdomains. Sometimes our employees lose their cookies (they are just gone) on both domains at the same time, so they get logged out.
I don't really see how our app can be responsible, because the sites have different server configurations (and each site runs on multiple servers, by the way). I guess only the nginx versions (1.10.3) are the same. Plus, this does not explain why they get logged out on both sites at the same time.
If it helps, we use Rails (3/5) and Unicorn (4.8.3/5.3.0); on the older app sessions are stored in Redis, and in the newer one in cookies.
So I wonder whether there are some browser (security) policies under which it clears cookies, maybe on some SSL connection error, an IP change, or whatever.
I understand that this is not a definitive problem description, but it seems like magic to us at the moment, so I hope that someone has encountered something like this.
P.S. By the way, we asked one of our employees to use Firefox instead of Chrome (which all of them use), but it does not seem to make any difference (he wasn't logged out for a week, but then he was logged out roughly every 20 minutes).

Is this dangerous to my Rails App?

We are hosted on Heroku and have the New Relic add-on. Every day I check the errors, and almost every day this error comes up.
Action and Type
Middleware/Rack/Rack::MethodOverride#call
EOFError
Message
bad content body
This is a Rails application, so I figure it's not doing anything in particular other than returning a 404 response status, because there is nothing at the URL they are trying to access.
URL
/wp-admin/admin-ajax.php
Through some google-fu I found an article describing this as a brute-force attack on WordPress sites.
My specific question is:
Do I worry about this?
I inherited the site and am not sure if this is just something that happens, and whether it is something that Rails applications don't have to worry about. It seems fairly targeted towards WordPress, but I can't find any documentation on whether I should be doing more to stop this.
Other frequently pinged urls that don't exist on my application
/sites/all/libraries/elfinder/php/connector.minimal.php
/license.php
/tiny_mce/plugins/tinybrowser/upload_file.php
Any enlightenment on the subject would be great. Stack trace available if needed. Thanks in advance, overflowers.
As long as you don't have a route configured to handle those requests, you only have to worry about being spammed with these requests and losing network resources. They'll receive a 404 Not Found error when they try to reach those paths, so there is nothing they can really do except slow your site down if they flood it with requests. If they do it often, you can ban their IP address.
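If the noise ever becomes more than a nuisance, one common approach (my suggestion, not something the question requires) is the rack-attack gem. A minimal sketch, assuming rack-attack is in your Gemfile, that short-circuits the scanner paths from the question before they reach the full Rails stack:
# config/initializers/rack_attack.rb (assumes the rack-attack gem is installed)
class Rack::Attack
  # Respond with 403 Forbidden immediately for common CMS scanner paths,
  # rather than routing each probe through the whole Rails stack.
  blocklist("block CMS scanners") do |req|
    req.path.start_with?("/wp-admin", "/license.php", "/tiny_mce")
  end
end
rack-attack also supports throttling and per-IP blocklists if you'd rather ban specific addresses.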

Verifying Googlebot in Rails

I am looking to implement First Click Free in my Rails application. Google has information here on how to verify whether Googlebot is viewing your site.
I have been searching to see if there is anything existing for Rails that does this, but I have been unable to find anything. So firstly, does anyone know of anything? If not, could anyone point me in the right direction on how to implement the verification that page suggests?
Also, that solution has to do a DNS lookup every time it tries to detect Google, which seems like a big performance hit if it runs on every page load. I could cache an IP once it has been verified, but Google has stated that their IPs change, so at some point an IP may no longer belong to them. That probably doesn't happen often, though, so it may not be that big of an issue.
Many thanks!!
Check out the browser gem: https://github.com/fnando/browser
What I'd do is use the browser.bot? method to check whether your site is being accessed by a bot. If you care about Googlebot specifically, you could check whether browser.name includes "googlebot". Keep in mind that this gem just checks the user agent sent by the client, which could of course be spoofed. Sounds like that isn't a huge concern for your purposes.
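If you do want the full verification Google's page describes (a reverse DNS lookup on the requesting IP, a domain check on the resulting host name, then a forward lookup to confirm), here's a minimal sketch using only Ruby's stdlib. The method name and the in-memory cache are illustrative; a real app would more likely use Rails.cache with an expiry so stale IPs age out:
require 'resolv'

GOOGLEBOT_CACHE = {} # illustrative; prefer Rails.cache with a TTL in production

def verified_googlebot?(ip)
  GOOGLEBOT_CACHE.fetch(ip) do
    GOOGLEBOT_CACHE[ip] =
      begin
        host = Resolv.getname(ip) # reverse DNS lookup
        # Verified Google crawlers resolve to these domains...
        host.end_with?('.googlebot.com', '.google.com') &&
          Resolv.getaddress(host) == ip # ...and must forward-resolve back to the same IP
      rescue Resolv::ResolvError
        false
      end
  end
end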
I've built a Ruby gem for that recently; it's called "legitbot".
You may learn if a Web request comes from a supported bot using
bot = Legitbot.bot(userAgent, ip)
"legitbot" does this looking into User-agent and searching for a bot signature, i.e. how bots identify themselves. This doesn't guarantee that the Web request IP really comes from e.g. Googlebot. To make sure it is, call
bot.detected_as # => "Google"
bot.valid? # => true
bot.fake? # => false
Supported bots include Googlebot, Yandex bots, Bing, Baidu, and DuckDuckGo.

Are any HTTP log files saved on my system by default?

I have an application hosted on Amazon EC2 on an Ubuntu machine, written in Ruby (on Rails), deployed with Capistrano, and running on Nginx.
Last Friday one module of my application crashed, and nobody in the company noticed until this morning. We spent some money on Facebook and Google ads and received a few hundred visits, but nobody created an account due to this bug.
I wonder if this configuration saves the HTTP requests and their bodies somewhere in a log file. We didn't explicitly set that up, so it would only happen if any of these technologies does it by default.
Do you know whether there is such log or not?
Nope, that wouldn't be anywhere in a usable form (I'm inferring that you want to try to recreate the accounts from request bodies in log files). You'll have the requests themselves in your nginx logs, and the Rails logs will contain more info about each request, but as a matter of security, by default any sensitive information (e.g. passwords) is scrubbed from them. You may still be able to get some info from them.
To answer your question a little more specifically, the usual place for these logs on your system would be:
/var/log/nginx/
/path/to/your/rails/app/log/production.log
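The scrubbing mentioned above is Rails' parameter filtering. In a generated app it is configured in an initializer, and you can extend the list if you log other sensitive fields (the :credit_card entry below is just an example):
# config/initializers/filter_parameter_logging.rb
# Values of matching parameters appear as [FILTERED] in log/production.log.
Rails.application.config.filter_parameters += [:password, :credit_card]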
On a separate note, I would recommend looking into an error-reporting service like Honeybadger, Airbrake, Raygun, AppSignal, or others, so that you don't have silent failures like this going forward.

How to block an IP address on Linode?

I have a Rails application running on a Linode server. Some guy is continuously spamming (writing bullsh*t on my site). Can anybody tell me how to block that person's IP? Any other help would be appreciated.
I would suggest you don't only look into blocking that one person but rather into making sure this can't happen again.
Spam usually originates from bots that randomly try to fill their marketing message into input fields on pages they encounter.
You block one, and another will find your page and continue.
The only ways to prevent this kind of automated spam I know of are either using some sort of CAPTCHA or by securing your site through authentication.
There are some very nice CAPTCHA gems like reCAPTCHA, or look around in the captcha category on Ruby Toolbox, and you should be up and running soon.
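For example, a minimal sketch with the recaptcha gem, assuming it is installed and your reCAPTCHA keys are configured (the comments controller is purely illustrative):
# app/views/comments/_form.html.erb: render the widget inside the form
#   <%= recaptcha_tags %>

# app/controllers/comments_controller.rb
def create
  @comment = Comment.new(comment_params)
  # verify_recaptcha adds an error to the model when the challenge fails
  if verify_recaptcha(model: @comment) && @comment.save
    redirect_to @comment
  else
    render :new
  end
end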
If it's really a person who is annoying you by writing bad stuff on your site, then, while not ideal, an IP block is easily set up through Apache. Just put the following into your VirtualHost file inside the <Directory> node, and then enable the mod_authz_host module with a2enmod authz_host:
Deny from 192.168.205
You can also do this in the web server config file; here is an example for Nginx:
http://www.cyberciti.biz/faq/linux-unix-nginx-access-control-howto/
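In short, it's a deny directive inside your site's server (or location) block; the address below is a placeholder for the spammer's IP, and Nginx needs a reload after the change:
# inside the server { } block of your site's Nginx config
deny 203.0.113.7;  # placeholder for the spammer's address
allow all;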
