I have an application hosted on Amazon EC2 on an Ubuntu machine, written in Ruby (on Rails), deployed with Capistrano, and running on Nginx.
Last Friday one module of my application crashed, and nobody in the company noticed until this morning. We spent some money on Facebook and Google ads and received a few hundred visits, but nobody could create an account due to this bug.
I wonder whether this stack saves the HTTP requests and their bodies somewhere in a log file. We didn't explicitly set anything like that up, so it would only happen if one of these technologies does it by default.
Do you know whether such a log exists?
Nope, that wouldn't be anywhere in a usable form (I'm inferring you want to try to recreate the accounts from request bodies in log files). You'll have the requests themselves in your nginx logs, and the Rails logs will contain more information about each request, but as a matter of security, any sensitive information (e.g. passwords) is scrubbed from them by default. You may still be able to get some info from them.
To answer your question a little more specifically, the usual places for these logs on your system would be:
/var/log/nginx/
/path/to/your/rails/app/log/production.log
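For reference, that scrubbing comes from Rails' parameter filtering. A minimal sketch of what controls it, assuming a reasonably recent Rails app (the initializer below is generated by default):
# config/initializers/filter_parameter_logging.rb
# Any parameter whose name matches an entry here is written to
# log/production.log as [FILTERED] instead of its real value.
Rails.application.config.filter_parameters += [:password]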
On a separate note, I would recommend looking into an error reporting service like Honeybadger, Airbrake, Raygun, Appsignal, or others so that you don't have silent failures like this moving forward.
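As a rough sketch of what that looks like (assuming the honeybadger gem; Airbrake and the others have similar APIs, and Account.create! here is just a hypothetical call that might fail):
# Gemfile
gem 'honeybadger'

# wherever you rescue an error you don't want to fail silently
begin
  Account.create!(account_params)   # hypothetical call that might raise
rescue => e
  Honeybadger.notify(e)             # report the exception to the service
  raise                             # re-raise so the request still fails loudly
end
Most of these services also hook into Rails and report unhandled exceptions automatically, so the explicit notify call is only needed for errors you rescue yourself.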
We have two sites on different subdomains. Sometimes our employees lose their cookies (they are just gone) on both domains at the same time, so they get logged out.
I don't really see how our app can be responsible, because the two sites have different server configurations (and each site runs on multiple servers, by the way). I guess only the nginx version (1.10.3) is the same. Plus, this does not explain why they get logged out on both sites at the same time.
If it helps, we use Rails (3/5) and Unicorn (4.8.3/5.3.0); on the older app sessions are stored in Redis, and on the new one in cookies.
So I wonder whether there are browser (security) policies that make it clear cookies, maybe on an SSL connection error, an IP change, or something like that.
I understand that this is not a definitive problem description, but it seems like magic to us at the moment, so I hope someone has encountered something like this.
P.S. We asked one of our employees to use Firefox instead of Chrome (which all of them use), but it does not seem to make any difference (he wasn't logged out for a week, but then he was logged out roughly every 20 minutes).
I deployed a Rails app on an EC2 instance, and this morning when I clicked on a section of the app it redirected to http://testp2.czar.bielawa.pl/.
I would like to know whether this is malware or something else, because this link is not part of the app.
Thanks.
Yes, it's a type of malware. You might not be the actual target; rather, your server may be being used as a source of spam, port scanning, and DDoS attacks.
There is a pretty extensive abuse history for
http://testp2.czar.bielawa.pl/
See here: http://www.abuseipdb.com/report-history/185.25.151.159
To get rid of this, follow the excellent instructions from Server Fault below:
https://serverfault.com/questions/218005/how-do-i-deal-with-a-compromised-server
Alternatively, if there is nothing important on the instance, just delete the EC2 instance and start again.
I have a Rails application running on a Linode server. Someone is continuously spamming (writing garbage on my site). Can anybody tell me how to block that person's IP? Any other help would be appreciated.
I would suggest that you don't only look into blocking that one person, but rather into making sure this can't happen again.
Spam usually originates from bots that randomly try to fill their marketing message into input fields on pages they encounter.
If you block one, another will find your page and continue.
The only ways I know of to prevent this kind of automated spam are either using some sort of CAPTCHA or securing your site through authentication.
There are some very nice CAPTCHA gems like reCaptcha; or look around in the captcha category on Ruby Toolbox and you should be up and running soon.
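As a rough sketch of the CAPTCHA route, assuming the recaptcha gem (Comment and comment_params are just placeholder names):
# Gemfile
gem 'recaptcha'

# in the form view (ERB)
<%= recaptcha_tags %>

# in the controller action that receives the form
def create
  @comment = Comment.new(comment_params)
  if verify_recaptcha(model: @comment) && @comment.save
    redirect_to @comment
  else
    render :new
  end
end
The gem also needs your reCAPTCHA site key and secret key configured (via environment variables or an initializer).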
If it really is a person annoying you by writing bad stuff on your site, then, while not ideal, an IP block is easily set up through Apache. Just put the following into your VirtualHost file inside the <Directory> node, and then enable the mod_authz_host module with a2enmod authz_host:
Deny from 192.168.205
You can also do this in the web server config file; here is an example for Nginx:
http://www.cyberciti.biz/faq/linux-unix-nginx-access-control-howto/
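In short, it boils down to the deny directive from Nginx's access module; a minimal sketch (the address below is just a placeholder):
# inside the server { } or location { } block of your site's Nginx config
deny 192.168.205.0/24;   # block the offending address or range (placeholder)
allow all;               # keep everyone else allowed
Reload Nginx afterwards (e.g. sudo nginx -s reload) for the change to take effect.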
I use sendmail to send emails from my application. I always send the emails from SOME_NAME@MY_DOMAIN.com, but they always end up in the spam folder.
I know that I should do some things on the DNS side so my emails aren't marked as spam, but I don't know what they are.
I am a newbie, and this is my first time setting up a production server, a domain, and everything else myself. I'd appreciate it if someone could help me.
What sort of environment are you deploying to?
This frequently happens to applications deployed to cloud services like Amazon or RackSpace. Their entire IP blocks are registered as spam sources at services like Spamhaus, which is a sensible precaution, or we'd be getting even more spam than usual. You should enter your server's IP address into Spamhaus's lookup form to see if you're listed as a spammer.
If you are, you can ask Spamhaus to lift the block. Getting in touch with Amazon's support staff also helps. Finally, you can get around the issue entirely by using a mail-sending service of some sort -- Amazon SES is pretty good, and there's even a gem that provides Rails integration.
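As a starting point on the DNS side, the usual first step is publishing an SPF record (and ideally DKIM too) so receiving servers can verify which hosts are allowed to send for your domain. A minimal sketch, assuming you send from your own server and/or Amazon SES (the IP below is a placeholder):
; TXT record on MY_DOMAIN.com
MY_DOMAIN.com.  IN  TXT  "v=spf1 ip4:203.0.113.10 include:amazonses.com ~all"
If you move to SES, Amazon's console also walks you through the DKIM records to add.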
I am completely new to Ruby, and I inherited a Ruby system for a product catalogue. Most of my users are able to view everything as they should, but overseas users (specifically in Mexico) cannot contact the server once logged in. They are active users. I'm sorry I cannot be more specific, and the system is private, so I cannot grant access.
Has anyone had issues similar to this before? Is it a user-end issue or a system error?
Speaking as somebody who regularly ends up on your users' side of the fence, the number one culprit for this symptom is "clueless administrator". There are many, many sites which generically block large blocks of IP space, or which geolocate and carve out big portions of the world.
For example, a surprising number of American blogs block Asian countries (including Japan) out of a misplaced effort to avoid DDoS attacks (which probably actually originated in Russia or China, but this species of administrator isn't very good at fine-tuning solutions). I have to hop over to my American proxy server to access those sites.
So the first thing I'd do to diagnose your problem is to see whether your Mexican users are making it to the server at all, or whether they're being blocked somewhere earlier (a router? a firewall? etc.). Then, to determine whether the problem is on your end or their end, I'd try to replicate the issue by proxying your connection through a Mexican proxy and repeating the actions they took to trigger it.
The fact that they get blocked after logging in could indicate that you have HTTPS issues, for example with an HTTPS accelerator installed [1], or it could be that your frontend server is properly serving up the static content but doing the checking on dynamic requests only.
[1] We've seen some really weird bugs at work caused by a malfunctioning HTTPS accelerator.
If it's working for everyone else, then it would appear that the problem is not with Ruby or Rails themselves, since they are working...
My first thought would be to check for a network issue: are the Mexican users all behind the same proxy server and/or firewall?
Is login handled within the Rails application or via some other resource? Can you see any evidence that requests from Mexican users are reaching your web server at all?
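A quick way to check that last point, assuming the default Nginx log location and a placeholder client IP:
# replace 203.0.113.45 with one of the affected users' actual IP addresses
grep '203.0.113.45' /var/log/nginx/access.log
If nothing shows up there, the requests are being dropped before they ever reach your stack.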
Login is handled by the Rails app. I'm currently trying to hunt down the logs; it's taking some time as, again, I am new to this system.
Cheers guys.
Maybe INS is cracking down on cyber-immigration.