Accessing response status after it is sent - ruby-on-rails

I'm working on a feature to store requests made to specific endpoints in my app.
after_action :record_user_activity

def record_user_activity
  return unless current_user
  return if request.url =~ /assets|packs/

  @session.navigation.create!(
    uri: request.url,
    request_method: request.method,
    response_status: response.code.to_i,
    access_time: Time.now
  )
end
The problem is that, even when the request ends in an error, response.code still reads as a 2xx at this point (inside the after_action callback). I imagine that's because the server hasn't yet hit whatever problem it will face during the data-access process.
How can I properly store the status code that was actually sent to the user?

The Rails logs already store the requests and responses. If you just need to track all user requests, you can simply add the user id to your logs. Unless there's a specific reason to keep this data in the DB (which would grow quickly with the amount of user activity), add this to either config/application.rb or config/environments/production.rb:
MyAppName::Application.configure do
  # ...whatever you already have in configs
  # then add the following line:
  config.log_tags = [
    ->(request) { "user-#{request.cookie_jar.signed[:user_id]}" }
  ]
end
Then you can tail the logs and use grep, or write some other process to parse the logs for analytics; there are many tools available for this type of work. But here's a basic example:

tail -f log/production.log | grep user-101
# this shows all logged requests from the user with id 101
If you still need this, however, you may want to try prepend_after_action instead; see
http://api.rubyonrails.org/classes/AbstractController/Callbacks/ClassMethods.html#method-i-prepend_after_action
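For illustration, the swap is a one-line change (a sketch, reusing the callback from the question):

class ApplicationController < ActionController::Base
  # prepend_after_action puts the callback at the front of the chain;
  # because after callbacks run in reverse order, it then executes
  # after all other after_action callbacks have finished.
  prepend_after_action :record_user_activity
end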

Related

Allauth user_logged_in signal does not trigger in a TestCase

I'm using allauth for user management in my Django web app, and I have functionality that finds session data stored while the user was not logged in and stores it when they next log in on the same session.
It is triggered on login via the standard allauth signal:
@receiver(user_logged_in)
def update_on_login(request, user, **kwargs):
    print('Signal triggered')
    # Do some stuff with session data here, e.g.:
    for session_key in request.session.items():
        ...
This works perfectly when logging in via a web browser, but in TestCase functions the signal is not triggered at all when logging in with:

self.client = Client()
logged_in = self.client.login(username=self.username, password=self.password)

(I've also tried force_login and force_authenticate, but the documentation on these indicates that they actually skip all real validation and just assume the user is logged in.)
I think I've understood that client.login doesn't work because the test client doesn't handle the request in the same way a web browser would, but I can't say I really understand it.
I've also seen some indication that RequestFactory might be able to help in some way, but I haven't been able to trigger the signal with this either:

request = RequestFactory().post(reverse('account_login'),
                                {'username': self.username, 'password': self.password})
request.user = AnonymousUser()
response = login(request)
I've seen some comments about needing to call middleware as well, but pretty out of date and nothing clear enough for me to understand and try.
Any suggestions to point me in the right direction would be much appreciated.
Thanks in advance.

Find the right URL (endpoint) for creating a ServiceNow Change Request of type Emergency

I do not know the difference between these two endpoints:
a) /api/sn_chg_rest/v1/change/emergency
b) /api/now/table/change_request?sys_id=-1&sysparm_query=type=emergency
With b), the type changes to "normal" once submitted.
Issue: unable to submit a request of type Emergency, Standard, or Expedited.
Things I have tried:

url = 'https://xxxx.service-now.com/api/now/table/change_request?sys_id=-1&sysparm_query=type=expedited'
(the type changes to "normal"; the site only allows edits into emergency or normal once submitted with this link)

url = 'https://xxxx.service-now.com/api/sn_chg_rest/v1/change/emergency'
(this one seems to work only for emergency and normal; the user is also locked into emergency and normal when editing the type manually after submitting via script)

Outcome of the code below, in conjunction with the URLs above: a CHG#XXX is created, but no matter what value sysparm_query=type= is set to (Normal, Expedited, Emergency, or Standard), the type on the ServiceNow site defaults to "Normal" once the code runs and creates the request using the POST method.
# Need to install the requests package for Python:
#   easy_install requests
import requests

# Set the request parameters
url = 'https://xxxx.service-now.com/api/now/table/change_request?sysparm_fields=type'

# E.g. user name = "admin", password = "admin" for this code sample.
user = 'admin'
pwd = 'admin'

# Set proper headers
headers = {"Content-Type": "application/json", "Accept": "application/json"}

# Do the HTTP request
response = requests.post(url, auth=(user, pwd), headers=headers, data='{"type": "Emergency"}')

# Check for HTTP codes other than 200/201 (a successful POST returns 201 Created)
if response.status_code not in (200, 201):
    print('Status:', response.status_code, 'Headers:', response.headers, 'Error Response:', response.json())
    exit()

# Decode the JSON response into a dictionary and use the data
data = response.json()
print(data)
Alternative options for url that may not work, using 'https://xxxx.service-now.com' plus A, B, or C below:
A) POST /sn_chg_rest/change/standard/{standard_change_template_id}
B) POST /api/sn_chg_rest/change/normal
C) POST (versioned URL) /api/sn_chg_rest/{version}/change/emergency
Link for A, B, and C above: https://developer.servicenow.com/dev.do#!/reference/api/orlando/rest/change-management-api#changemgmt-POST-emerg-create-chng-req
Resources:
https://docs.servicenow.com/bundle/paris-it-service-management/page/product/change-management/task/t_AddNewChangeType.html
https://developer.servicenow.com/dev.do#!/reference/api/orlando/rest/change-management-api
API_URL = "/api/sn_chg_rest/v1/change/emergency"
This might have worked; going to confirm.
Yup, this works! I'm still unable to submit Standard or Expedited, but that might be a setting that needs to be enabled (not sure). Looking into it further. Some progress.
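For reference, hitting that emergency endpoint from Python looks something like the sketch below (following the earlier snippet; the instance URL, credentials, and the short_description payload are placeholders, not values confirmed in this thread):

import requests

# Minimal sketch: create an Emergency change via the dedicated
# Change Management endpoint instead of the generic Table API.
url = 'https://xxxx.service-now.com/api/sn_chg_rest/v1/change/emergency'
headers = {"Content-Type": "application/json", "Accept": "application/json"}

response = requests.post(url, auth=('admin', 'admin'), headers=headers,
                         json={"short_description": "Created via REST"})
print(response.status_code, response.json())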

Preventing rapid-fire login attempts with Rack::Attack

We have been reading the Definitive guide to form-based website authentication with the intention of preventing rapid-fire login attempts.
One example of this could be:
1 failed attempt = no delay
2 failed attempts = 2 sec delay
3 failed attempts = 4 sec delay
etc
Other methods appear in the guide, but they all require storage capable of recording previous failed attempts.
Blocklisting is discussed in one of the posts in this issue as a possible solution (it appears there under the old name "blacklisting", which the documentation has since changed to "blocklisting").
As for Rack::Attack specifically, one naive implementation could be:
Where the login fails:

StorageMechanism.increment("bad-login/#{req.ip}")

In rack-attack.rb:

Rack::Attack.blacklist('bad-logins') { |req|
  StorageMechanism.get("bad-login/#{req.ip}")
}
There are two parts here: returning the response if the request is blocklisted, and checking whether a previous failed attempt happened (the StorageMechanism).
The first part, returning the response, can be handled automatically by the gem. However, the second part is less clear to me, at least with the de-facto cache backend choice for the gem and the Rails world, Redis.
As far as I know, expired keys in Redis are removed automatically. That would make it impossible to access the information (even if expired), set a new value for the counter, and increase the timeout for the refractory period accordingly.
Is there any way to achieve this with Redis and Rack::Attack?
I was thinking that maybe the 'StorageMechanism' has to remain absolutely agnostic in this case and know nothing about Rack::Attack and its storage choices.
Sorry for the delay in getting back to you; it took me a while to dig out my old code relating to this.
As discussed in the comments above, here is a solution using a blacklist with a findtime:
# config/initializers/rack-attack.rb
class Rack::Attack
  (1..6).each do |level|
    blocklist("allow2ban login scrapers - level #{level}") do |req|
      Allow2Ban.filter(
        req.ip,
        maxretry: (20 * level),
        findtime: (8**level).seconds,
        bantime: (8**level).seconds
      ) do
        req.path == '/users/sign_in' && req.post?
      end
    end
  end
end
You may wish to tweak those numbers as desired for your particular application; the figures above are only what I decided was sensible for mine, and they do not come from any official standard.
One issue with the above is that when developing or testing the application (e.g. in your rspec test suite), you can easily hit those limits and inadvertently throttle yourself. This can be avoided by adding the following config to the initializer:
safelist('allow from localhost') do |req|
  '127.0.0.1' == req.ip || '::1' == req.ip
end
The most common brute-force login attack is a brute-force password attack, where an attacker simply tries a large number of emails and passwords to see if any credentials match.
You should mitigate this in the application by locking the account after a few failed login attempts. (For example, if you're using devise, there is a built-in Lockable module you can make use of; see the sketch below.)
However, this account-locking approach opens a new attack vector: an attacker can spam the system with login attempts, using valid emails and incorrect passwords, to continuously re-lock all accounts!
This configuration helps mitigate that attack vector by exponentially limiting the number of sign-in attempts from a given IP.
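A sketch of enabling Lockable, assuming a standard devise setup (thresholds such as maximum_attempts live in config/initializers/devise.rb):

class User < ApplicationRecord
  # :lockable uses the failed_attempts and locked_at columns
  # (plus unlock_token if you use the :email unlock strategy).
  devise :database_authenticatable, :registerable, :lockable
end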
I also added the following "catch-all" request throttle:
throttle('req/ip', limit: 300, period: 5.minutes, &:ip)
This is primarily to throttle malicious or poorly configured scrapers, to prevent them from hogging all of the app server's CPU.
Note: if you're serving assets through rack, those requests may be counted by rack-attack and this throttle may trigger too quickly. If so, add a condition to exclude them from tracking, as sketched below.
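A sketch of that condition (the asset path prefixes are assumptions; adjust them to your app):

throttle('req/ip', limit: 300, period: 5.minutes) do |req|
  # Returning nil skips the throttle for asset requests.
  req.ip unless req.path.start_with?('/assets', '/packs')
end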
I also wrote an integration test to ensure that my Rack::Attack configuration was doing its job. There were a few challenges in making this test work, so I'll let the code and comments speak for themselves:
class Rack::AttackTest < ActionDispatch::IntegrationTest
  setup do
    # Prevent subtle timing issues (==> intermittent test failures)
    # when the HTTP requests span across multiple seconds,
    # by FREEZING TIME(!!) for the duration of the test.
    travel_to(Time.now)

    @removed_safelist = Rack::Attack.safelists.delete('allow from localhost')

    # Clear the Rack::Attack cache, to prevent test failure when
    # running multiple times in quick succession.
    #
    # First, un-ban localhost, in case it is already banned after a previous test:
    (1..6).each do |level|
      Rack::Attack::Allow2Ban.reset('127.0.0.1', findtime: (8**level).seconds)
    end

    # Then, clear the 300-request rate limiter cache:
    Rack::Attack.cache.delete("#{Time.now.to_i / 5.minutes}:req/ip:127.0.0.1")
  end

  teardown do
    travel_back # Un-freeze time
    Rack::Attack.safelists['allow from localhost'] = @removed_safelist
  end

  test 'should block access on 20th successive /users/sign_in attempt' do
    19.times do |i|
      post user_session_url
      assert_response :success, "was not even allowed to TRY to login on attempt number #{i + 1}"
    end

    # For DOS protection: don't even let the user TRY to login; they're going way too fast.
    # Rack::Attack returns 403 for blocklists by default, but this can be reconfigured:
    # https://github.com/kickstarter/rack-attack/blob/master/README.md#responses
    post user_session_url
    assert_response :forbidden, 'login access should be blocked upon 20 successive attempts'
  end
end

Mechanize - Receiving Errno::EMFILE: Too many open files - socket(2) after a day

I'm running an application that uses mechanize to fetch some data every so often from an RSS feed.
It runs as a heroku worker and after a day or so I'm receiving the following error:
Errno::EMFILE: Too many open files - socket(2)
I wasn't able to find a "close" method within mechanize. Is there anything special I need to do in order to close out my browser sessions?
Here is how I create the browser + read information:
def mechanize_browser
  @mechanize_browser ||= begin
    agent = Mechanize.new
    agent.redirect_ok = true
    agent.request_headers = {
      'Accept-Encoding' => "gzip,deflate,sdch",
      'Accept-Language' => "en-US,en;q=0.8",
    }
    agent
  end
end
And actually fetching information:
response = mechanize_browser.get(url)
And then closing after the response:
def close_mechanize_browser
  @mechanize_browser = nil
end
Thanks in advance!
Since you can't manually close each instance of Mechanize, you can try invoking Mechanize as a block. According to the docs:
After the block executes, the instance is cleaned up. This includes closing all open connections.
So, rather than abstracting Mechanize.new into a custom function, try running Mechanize via the start class method, which should automatically close all your connections upon completion of the request:
Mechanize.start do |m|
  m.get("http://example.com")
end
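Applied to the fetch in the question, that might look something like this sketch (the helper name is mine; the headers are copied from the question's code):

def fetch_with_cleanup(url)
  page = nil
  # Mechanize.start yields the agent and closes its open connections
  # when the block exits, so no manual cleanup is needed.
  Mechanize.start do |agent|
    agent.redirect_ok = true
    agent.request_headers = {
      'Accept-Encoding' => "gzip,deflate,sdch",
      'Accept-Language' => "en-US,en;q=0.8",
    }
    page = agent.get(url)
  end
  page
end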
I ran into this same issue. The Mechanize.start example by @zeantsoi is the answer that I ended up following, but there is also a shutdown method on the agent if you want to do this manually without the block.
There is also the option of adding a lambda to post_connect_hooks:

Mechanize.new.post_connect_hooks << lambda { |agent, url, response, response_body| agent.shutdown }

Optimal way to structure polling external service (RoR)

I have a Rails application that has a Document model with an available flag. The document is uploaded to an external server, where it is not immediately available (it takes time to propagate). What I'd like to do is poll for availability and update the model when the document becomes available.
I'm looking for the most performant solution for this process (the service does not offer callbacks):
1. Document is uploaded to the app
2. App uploads it to the external server
3. App polls the url (http://external.server.com/document.pdf) until it is available
4. App updates the model: Document.available = true
I'm stuck on step 3. I'm already using sidekiq in my project. Is that an option, or should I use a completely different approach (a cron job)?
Documents will be uploaded all the time, so it seems relevant to first poll the database/redis to check for Documents which are not yet available.
See this answer: Making HTTP HEAD request with timeout in Ruby
Basically, you set up a HEAD request for the known url and then asynchronously loop until you get a 200 back (with a 5-second delay between iterations, or whatever).
Do this from your controller after the document is uploaded:

Document.delay.poll_for_finished(@document.id)

And then in your document model:

require 'net/http'

def self.poll_for_finished(document_id)
  document = Document.find(document_id)
  # make sure the document exists and should be polled for
  return unless document.continue_polling?

  if document.remote_document_exists?
    document.available = true
  else
    document.poll_attempts += 1 # assumes you care how many times you've checked; could be ignored
    Document.delay_for(5.seconds).poll_for_finished(document.id)
  end
  document.save
end

def continue_polling?
  # this can be more or less sophisticated
  !available && poll_attempts < 5
end

def remote_document_exists?
  # Net::HTTP.start takes a host name, not a full URL
  Net::HTTP.start('external.server.com', open_timeout: 2, read_timeout: 2) do |http|
    "200" == http.head(path).code
  end
end
This is still a blocking operation: opening the Net::HTTP connection will block if the server you're trying to contact is slow or unresponsive. If you're worried about that, use Typhoeus. See this answer for details: What is the preferred way of performing non blocking I/O in Ruby?
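For reference, the same existence check with Typhoeus might look something like this sketch (the host and document.path come from the example above; everything else is illustrative):

require 'typhoeus'

# Minimal sketch: queue a non-blocking HEAD request through a hydra
# and mark the document available when a 200 comes back.
request = Typhoeus::Request.new(
  "http://external.server.com/#{document.path}",
  method: :head,
  timeout: 2
)
request.on_complete do |response|
  document.update(available: true) if response.code == 200
end
Typhoeus::Hydra.hydra.queue(request)
Typhoeus::Hydra.hydra.run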
