Questions about Unicorn::ClientShutdown - ruby-on-rails

Can anyone help me better understand Unicorn::ClientShutdown errors? I see them occasionally in my web app's error logs and I have no idea what's causing them, how I can reproduce the issue, or whether it's safe to ignore them altogether.
From the documentation (https://www.rubydoc.info/gems/unicorn/2.0.0/Unicorn/ClientShutdown), it seems like this has something to do with interrupted sockets, but I'm not sure exactly what that means or how it relates to my app.
I believe I've only ever seen this on POST requests, and the error has almost exclusively been associated with a simple POST request that tracks page views (and is by far the most common non-GET request made by the web app).
Thanks in advance for your help!
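
For what it's worth, the linked docs describe this exception as being raised when the client closes the socket while Unicorn is still reading the request body, which would explain why you only see it on POSTs: an aborted page-view tracking request fits that pattern, and there's nothing the app can do for a client that has already gone away. If the goal is just to keep these out of your error tracker, something like the Rack middleware below might work. This is a minimal sketch: the class name and the 499 status (nginx's convention for "client closed request") are my own choices, and it assumes the app only ever runs under Unicorn.
# A minimal sketch: swallow Unicorn::ClientShutdown so it stops reaching the
# error tracker. Class name and status code are illustrative choices.
class SwallowClientShutdown
  def initialize(app)
    @app = app
  end

  def call(env)
    @app.call(env)
  rescue Unicorn::ClientShutdown
    # The client has already disconnected, so nobody will see this response.
    [499, { "Content-Type" => "text/plain" }, []]
  end
end

# e.g. in config/application.rb:
# config.middleware.insert_before 0, SwallowClientShutdown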

Related

Parts of https://jenkins.io are down. How do I report it?

So https://www.jenkins.io/ has been down for at least most of the afternoon. The main page is accessible, but blog posts, plugins, etc. aren't available. I get a 503 that looks like this:
I figured I'd try again later, but since it was still down I thought I'd better report it. So I went to their JIRA to report the issue at https://issues.jenkins-ci.org/, which seems to be up, but when trying to log in I get a 502 response, with the following error message:
I went to their GitHub, but they have issues disabled there. I'm running out of options, so I figured I'd ask here to see if anyone knows how to get in touch with someone who can fix it. I found a few tweets about it, but no responses from anyone who seems to be able to do anything about it.
After the issue was resolved and I was able to log in to JIRA, I found a way to report the issue, and apparently a few people already had. If this happens again, you can go to https://github.com/jenkins-infra/jenkins.io/issues/ and report the issue there.
Another place to check is the jenkins-infra channel on freenode, as that's where the issue was being discussed while it was investigated.
In case you were curious, it seems like this outage was due to a problem with the Kubernetes cluster where it was hosted. I don't know any more details than that.
I hope this might help someone in the future.

What is the `Rmch-Securitycookie` HTTP header?

I'm seeing some weird POST errors on my Rails 5 site. They are in the form of POST https://www.example.com/pages/:5054, with the weird :5054 at the end. I also don't have any route that accepts POSTs to /pages/ or /pages/:id, so I've got no idea what is causing these weird POSTs. The referrer URLs are from my own site, so as far as I can see it's not some weird bot or some such.
The only common denominator I see is the presence of the Rmch-Securitycookie: header on the bad requests. While I don't know if this is the root cause of my issue, it's a start at least. I'm guessing it's a bad extension or some monitoring software or some such. Google turns up nothing; has anyone encountered this header, and do you know what it is?
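
Not an answer, but while you're investigating, one option is to log extra detail for just those requests so you can see which clients and user agents are sending the header. Here's a minimal Rack middleware sketch; the class name is hypothetical, and Rack exposes the header as HTTP_RMCH_SECURITYCOOKIE:
# A diagnostic sketch, not a fix: log the method, path, and user agent of any
# request carrying the Rmch-Securitycookie header.
class LogRmchRequests
  def initialize(app)
    @app = app
  end

  def call(env)
    if env["HTTP_RMCH_SECURITYCOOKIE"]
      Rails.logger.info(
        "Rmch-Securitycookie request: #{env['REQUEST_METHOD']} #{env['PATH_INFO']} " \
        "UA=#{env['HTTP_USER_AGENT']}"
      )
    end
    @app.call(env)
  end
end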

How to test/check use of NSURLCache?

I have added the two lines of code to my project as described in this blog post.
However, as nothing different happens in the app itself, is there a way to test this behaviour and check it is working as expected? Is there a profile in instruments? Is there something I can print to the log in Xcode when a response is cached/cached response is used?
It just seems so simple to implement, I want to make sure it is actually having an effect on my app and that I won't have to implement a caching system myself.
Many thanks,
Sam
Go offline right after you've loaded the page, then reload the web view. The caching should take effect and the page should still load, I'd think.
You can verify this by analyzing HTTP traffic with a third-party tool like Charles. Whenever the app makes a request, the HTTP analyzer will capture it; if the response is served from the cache, that will be reflected there. (Check the contents of the response to confirm.)

How do I block my Rails app from being hit by bots?

I'm not even sure I'm using the right terminology, whether this is actually bots or not. I didn't want to use the word 'spam' because it's not like I have comments or posts that are being created/spammed. It looks more like something is making the same repeated request to my domain, which is what made me think it was some kind of bot.
I've opened my first Rails app to the 'public', which is really a small group of users, <50 currently. That was last Friday. I started having performance issues today, so I looked at the log and I see tons of these RoutingErrors:
ActionController::RoutingError (No route matches "/portalApp/APF/pages/business/util/whichServer.jsp" with {:method=>:get}):
They are filling up the log and I'm assuming this is causing the slowdown. Note the .jsp on the end; this is a Rails app, so I've got no URLs remotely like this. I don't even have a /portalApp path, so I don't know where this is coming from.
This is hosted at Dreamhost and I chatted with one of their support people, who suggested a couple of sites that detail using .htaccess to block things. But it looks like you need to know the IP or domain the requests are coming from, which I don't.
How can I block this? How can I find the IP or domain from the request? Any other suggestions?
Follow up info:
After looking at the access logs, it looks like it's not a bot. Maybe I'm not reading the logs right, but there are valid URL requests (generated from within my Flex app) coming from the same IP. So now I'm wondering if it's some kind of plugin generating the requests, but I really don't know. Now I'm wondering if it's possible to block a certain URL request based on a pattern, but I suppose that's a separate question.
Old question, but for people who are still looking for alternatives I suggest checking out Kickstarter's rack-attack gem. It allows not only blacklisting and whitelisting, but also throttling.
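For illustration, here's a minimal sketch of a config/initializers/rack_attack.rb; the .jsp pattern matches the bogus requests from the question, and the throttle numbers are placeholders rather than recommendations. Newer versions of the gem spell the methods blocklist/throttle, and depending on your setup you may also need to add the middleware yourself with config.middleware.use Rack::Attack.
# config/initializers/rack_attack.rb -- a minimal sketch; the pattern and
# limits below are illustrative placeholders.
class Rack::Attack
  # Drop anything asking for a .jsp path -- a Rails app has no such routes.
  blocklist("bogus jsp probes") do |req|
    req.path.end_with?(".jsp")
  end

  # Allow at most 60 requests per minute from a single IP.
  throttle("requests by ip", limit: 60, period: 60) do |req|
    req.ip
  end
end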
This page seems to offer some good advice:
Here
The section on blocking by user agent may be something you could look at implementing. Is there any way you can get the bot's user agent from your logs? If so, look for the unique part of the user agent that identifies the bot and add the following to .htaccess, replacing the relevant bits:
BrowserMatchNoCase SpammerRobot bad_bot
Order Deny,Allow
Deny from env=bad_bot
It's covered at that link in more detail, and of course, if you can't get the user agent from your logs then this will be of no use to you!
You can also update your public/robots.txt file to allow/disallow robots.
http://www.robotstxt.org/wc/robots.html
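For example, a public/robots.txt that asks all crawlers to stay away is just the two lines below; note that only well-behaved bots honour robots.txt, so the kind of traffic described above will most likely ignore it.
# public/robots.txt
User-agent: *
Disallow: /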

How do I validate OAuth requests?

I'm trying to use OAuth with Twitter, and I have my head wrapped around the pieces that need to be put in place to get a request token. But, it's not working. And the error message that I'm getting back isn't terribly helpful.
Luckily I found a tester, but again, the error message there isn't terribly helpful. "Invalid signature." OK, great. But since there are several steps involved in generating the signature (truth be told, all of which confuse the hell out of me), I'm at a loss.
Is there another tool out there that might be more helpful? Maybe one where I can see what the data should be at each step (check that the request concatenation is right, check that the initial signing is right (I'm using HMAC-SHA1), check that the Base64 is right, etc.).
Yes. Run, do not walk, to Hueniverse - one of the neatest pieces of JavaScript you'll see!
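If it helps to sanity-check each step by hand, here's a rough Ruby sketch of the OAuth 1.0a HMAC-SHA1 signing flow for a request-token call, so you can compare intermediate values (parameter string, base string, signature) with what the tester shows. The consumer key/secret are placeholders, and a real request-token call also needs an oauth_callback parameter and the finished Authorization header, so treat this as a checking aid rather than a complete client.
require "openssl"
require "base64"
require "erb"
require "securerandom"

# Placeholders -- substitute your own consumer key/secret.
consumer_key    = "YOUR_CONSUMER_KEY"
consumer_secret = "YOUR_CONSUMER_SECRET"

params = {
  "oauth_consumer_key"     => consumer_key,
  "oauth_nonce"            => SecureRandom.hex(16),
  "oauth_signature_method" => "HMAC-SHA1",
  "oauth_timestamp"        => Time.now.to_i.to_s,
  "oauth_version"          => "1.0"
}

# Step 1: percent-encode each key and value, sort by key, join with "&".
# ERB::Util.url_encode matches OAuth's RFC 3986 percent-encoding rules.
param_string = params.sort.map { |k, v|
  "#{ERB::Util.url_encode(k)}=#{ERB::Util.url_encode(v)}"
}.join("&")

# Step 2: the signature base string is METHOD & encoded-URL & encoded-params.
base_string = [
  "POST",
  ERB::Util.url_encode("https://api.twitter.com/oauth/request_token"),
  ERB::Util.url_encode(param_string)
].join("&")

# Step 3: the signing key is "consumer_secret&token_secret";
# there is no token secret yet for a request-token call.
signing_key = "#{ERB::Util.url_encode(consumer_secret)}&"

# Step 4: HMAC-SHA1 the base string with the signing key, then Base64-encode.
signature = Base64.strict_encode64(
  OpenSSL::HMAC.digest(OpenSSL::Digest.new("SHA1"), signing_key, base_string)
)

puts "Parameter string: #{param_string}"
puts "Base string:      #{base_string}"
puts "Signature:        #{signature}"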
