How to disable Rack-Mini-Profiler temporarily? - ruby-on-rails

I'm using rack-mini-profiler in Rails just fine, but during some coding sessions, especially when I'm working on a lot of different client-side code, it gets in the way (mainly in my client-side debugging tools' network graphs, etc.).
I'm trying to turn it off with a before filter that also checks whether the user is authorized to see the profile at all, but "deauthorize" doesn't seem to do anything for me. Here's my code, called as a before filter:
def miniprofiler
  off = true
  if off || !current_user
    Rack::MiniProfiler.deauthorize_request
    return
  elsif current_user.role_symbols.include?(:view_page_profiles)
    Rack::MiniProfiler.authorize_request
    return
  end
  Rack::MiniProfiler.deauthorize_request
end
I also know there is a setting, Rack::MiniProfiler.config.authorization_mode, but I can't find docs on what the possible values are, and I'm not seeing it used in the code. Right now it's telling me :allow_all, but :allow_none doesn't do anything either.
Even if I can just temporarily set a value in the dev environment file and restart the server, that would serve my purposes.
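On the authorization_mode point: the authorize_request/deauthorize_request calls only take effect when the mode is :whitelist; under the default :allow_all every request is profiled and the deauthorize call is ignored, which matches what you're seeing. A minimal sketch, assuming a rack-mini-profiler version from this era (newer releases rename the mode :allow_authorized):
# config/initializers/mini_profiler.rb
# Only requests explicitly marked via Rack::MiniProfiler.authorize_request
# get profiled; everything else is skipped.
Rack::MiniProfiler.config.authorization_mode = :whitelist
With that in place, the miniprofiler before filter from the question should behave as intended.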

Get the latest version, and type:
http://mysite.com?pp=disable
When you are done type
http://mysite.com?pp=enable
See ?pp=help for all the options:
Append the following to your query string:
pp=help : display this screen
pp=env : display the rack environment
pp=skip : skip mini profiler for this request
pp=no-backtrace : don't collect stack traces from all the SQL executed (sticky, use pp=normal-backtrace to enable)
pp=normal-backtrace (*) : collect stack traces from all the SQL executed and filter normally
pp=full-backtrace : enable full backtraces for SQL executed (use pp=normal-backtrace to disable)
pp=sample : sample stack traces and return a report isolating heavy usage (experimental, works best with the stacktrace gem)
pp=disable : disable profiling for this session
pp=enable : enable profiling for this session (if previously disabled)
pp=profile-gc: perform gc profiling on this request, analyzes ObjectSpace generated by request (ruby 1.9.3 only)
pp=profile-gc-time: perform built-in gc profiling on this request (ruby 1.9.3 only)
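For example, to profile a page while skipping SQL backtrace collection, append the option to any URL on your site (the /dashboard path here is just an illustration):
http://mysite.com/dashboard?pp=no-backtrace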

You can also use Alt + p to toggle the profiler on Windows/Linux, and Option + p on macOS.

If you want the profiler to be disabled initially and then activated on demand, add a pre-authorize callback in an initializer file like:
Rack::MiniProfiler.config.pre_authorize_cb = lambda {|env| ENV['RACK_MINI_PROFILER'] == 'on'}
Then, in your application controller, add a before_filter that looks for the pp param:
before_filter :activate_profiler

def activate_profiler
  ENV['RACK_MINI_PROFILER'] = 'on' if params['pp']
  ENV['RACK_MINI_PROFILER'] = 'off' if params['pp'] == 'disabled'
end
Your environment will not have RACK_MINI_PROFILER set initially, but if you want to turn the profiler on, you can tack ?pp=enabled onto your URL. You can then disable it again later (pp=disabled only turns it off for the current session, but setting the ENV variable to 'off' kills it entirely until you force it back on).
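If you'd rather flip the switch in the dev environment file and restart the server, as the question suggests, the same callback works there too. A minimal sketch; the MINI_PROFILER_OFF variable name is hypothetical:
# config/environments/development.rb (or an initializer)
# MINI_PROFILER_OFF is a hypothetical kill switch; restart the server after changing it.
Rack::MiniProfiler.config.pre_authorize_cb = lambda { |env|
  ENV['MINI_PROFILER_OFF'] != '1'
}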

Related

Microsoft.Web.RedisSessionStateProvider not saving value

Problem
A session value is not being retrieved in a Razor view and is causing faulty logic.
Environment
Redis Sentinel, with Sentinel running on the web servers, but only a single Redis master and a single Redis slave. The Redis connection string points to both master and slave.
Code
In a controller before the view:
var fooLocal = fooMapper.Map(fooDbCall.GetFromDb(fooValue));
if (fooLocal != null)
{
    Session["FooSession"] = fooLocal.fooProperty;
}
else
{
    Session["FooSession"] = false;
}
In the view:
@if (fooRazorVal == 123)
{
    // show some stuff
}
else if (!((bool?)Session["FooSession"] ?? false) && (fooRazorVal2 == 456))
{
    // show error message
}
else
{
    // show other stuff
}
Result
The error message is shown even for an account that has been walked back through the code and database to verify that the value should not be false, let alone null. Other session values are stored and retrieved fine, or else you wouldn't even make it this far in the process.
Investigation
As I mentioned, all other code bits and the database have been verified. I added a logging class and there are lots of entries like so:
[Info]GetItemFromSessionStore => Session Id: ctps3urcqwm0tpezo5bbmqzj, Session provider object: 4686063 => Can not lock, Someone else has lock and lockId is 636901606595110722
[Info]GetItemFromSessionStore => Session Id: ctps3urcqwm0tpezo5bbmqzj, Session provider object: 26422156 => Lock taken with lockId: 636901606595110722
[Info]GetItemFromSessionStore => Session Id: ctps3urcqwm0tpezo5bbmqzj, Session provider object: 4686063 => Can not lock, Someone else has lock and lockId is 636901606595110722
However, given the sheer number of them, I'm wondering if this is actually an error or just the RedisSessionStateProvider working as intended. I did see that it uses SETNX to acquire locks. Unfortunately, I'm not well versed enough in Redis semantics to know if this is causing an issue.
I did see a note in the Redis docs about this being an old approach and to use Redlock instead. However, as I understand Redlock, a single-master/single-slave setup is not sufficient, although it does support retries, so maybe it would work anyway. I'm also curious whether I should roll a simple custom provider that lets StackExchange's ConnectionMultiplexer work without extra locks or custom scripts, and, if I do need locks, whether to use one of the C# libraries for Redlock.
By design, Redis commands are atomic: Redis uses a single thread to process commands, so keys are effectively locked during an update and you don't need to lock them yourself. Other clients are blocked while a given command is being processed, which is why you mustn't run commands with a long execution time, and why you are seeing this error.
To prevent that, one must implement a distributed lock. Distributed locks are a very useful primitive in many environments where different processes must operate on shared resources in a mutually exclusive way.
Implementations
Here are a few links to implementations, in different languages, that are already available and can be used for reference.
• Redlock-rb (Ruby implementation). There is also a fork of Redlock-rb that adds a gem for easy distribution and perhaps more.
• Redlock-py (Python implementation).
• Aioredlock (Asyncio Python implementation).
• Redlock-php (PHP implementation).
• PHPRedisMutex (further PHP implementation)
• Redsync.go (Go implementation).
• Redisson (Java implementation).
• Redis::DistLock (Perl implementation).
• Redlock-cpp (C++ implementation).
• Redlock-cs (C#/.NET implementation).
• RedLock.net (C#/.NET implementation). Includes async and lock extension support.
• ScarletLock (C# .NET implementation with configurable datastore)
• node-redlock (NodeJS implementation). Includes support for lock extension.
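For a flavor of the API, here is a minimal sketch with the Ruby client (Redlock-rb) from the list above, assuming a single Redis instance on localhost; the resource name and TTL are illustrative:
require 'redlock'

lock_manager = Redlock::Client.new(['redis://127.0.0.1:6379'])

# Try to hold the lock on "session-lock" for at most 2000 ms.
lock_manager.lock('session-lock', 2000) do |locked|
  if locked
    # Critical section: update the shared resource here.
  else
    # Another process holds the lock; retry later or give up.
  end
end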
See if this helps.

How to enable SQL logging in Aqueduct 3?

It would be very useful for me to see in the terminal what requests are executed and how long they take.
Logging of HTTP requests works fine, but I did not find a similar function for SQL.
Is there a way to enable logging globally using config.yaml or in prepare() of ApplicationChannel?
Looks like I found a dirty-hack solution:
Future prepare() async {
  logger.onRecord.listen((rec) => print("$rec ${rec.error ?? ""} ${rec.stackTrace ?? ""}"));
  logger.parent.level = Level.FINE;
  ...
}
We need to set the log level to something more verbose than the default INFO; all SQL queries log their requests at the FINE level.
I expected that this setting could be loaded from config.yaml, but I did not find anything similar.
More about log levels can be found here.

Rails ActiveRecord transaction blocks don't seem to roll back

I have a Rails app that I'm crashing on purpose. It's local, and I'm just hitting Ctrl+C and killing it midway through processing records.
To my mind, the records in the block shouldn't have been committed. Is this a Postgres "error", a Rails "error", or a Dave ERROR?
ActiveRecord::Base.transaction do
  UploadStage.where("id in (#{ids.join(',')})").update_all(:status => 2)
  records.each do |record|
    record.success = process_line(record.id, klas, record.hash_value).to_s[0..250]
    record.status = 1000
    record.save
  end
end
I generate my ids by reading out all the records where the status is 1.
Nothing but this function sets the status to 1000..
If the action crashes for whatever reason, I'd expect there to be no records in the database with status = 2.
This is not what I'm seeing, though. Half the records have status 1000, the other half have status 2.
Am I missing something?
How can I make sure there are no 2's if the app crashes?
EDIT:
I found this link http://coderrr.wordpress.com/2011/05/03/beware-of-threadkill-or-your-activerecord-transactions-are-in-danger-of-being-partially-committed/
As I suspected and as confirmed by dave's update, it looks like ActiveRecord will commit a half-finished transaction under some circumstances when you kill a thread. Woo, safe! See dave's link for detailed explanation and mitigation options.
If you're simulating a hard crash (host OS crash or plug-pull), Ctrl+C is absolutely not the right approach. Use Ctrl+\ to send a SIGQUIT, which is generally not handled, or use kill -KILL to hard-kill the process with no opportunity to do cleanup. Ctrl+C sends SIGINT, which is a gentle signal that's usually attached to a clean shutdown handler.
In general, if you're debugging issues like this, you should enable detailed query logging and see what Rails is doing. Use log_statement = 'all' in postgresql.conf then examine the PostgreSQL logs.
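For contrast, a normal in-process failure does roll back everything in the block; the partial commit described above only bites when a thread is killed mid-transaction. A minimal sketch reusing the UploadStage model from the question:
ActiveRecord::Base.transaction do
  UploadStage.where("id in (#{ids.join(',')})").update_all(:status => 2)
  # Raising rolls the whole transaction back: no record keeps status 2.
  raise ActiveRecord::Rollback
end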

cherrypy serve multiple requests / per connection

I have this code (on-the-fly compression and streaming):
@cherrypy.expose
def backup(self):
    path = '/var/www/httpdocs'
    zip_filename = "backup" + t.strftime("%d_%m_%Y_") + ".zip"
    cherrypy.response.headers['Content-Type'] = 'application/zip'
    cherrypy.response.headers['Content-Disposition'] = 'attachment; filename="%s"' % (zip_filename,)
    # https://github.com/gourneau/SpiderOak-zipstream/blob/3463c5ccb5d4a53fc5b2bdff849f25bae9ead761/zipstream.py
    return ZipStream(path)
backup._cp_config = {'response.stream': True}
The problem I faced is that while I'm downloading the file, I can't browse any other page or send any other request until the download is done.
I think the problem is that CherryPy can't serve more than one request at a time per user.
Any suggestions?
When you say "per user", do you mean that another request could come in for a different "session" and it would be allowed to continue?
In that case, your issue is almost certainly due to session locking in CherryPy. You can read more about it in the session code. Since sessions are unlocked late by default, the session is not available for use by other threads (connections) while the backup is still being processed.
Try setting tools.sessions.locking = 'explicit' in the _cp_config for that handler. Since you’re not writing anything to the session, it’s probably safe not to lock at all.
Good luck. Hope that helps.
Also, from the FAQ:
"CherryPy certainly can handle multiple connections. It’s usually your browser that is the culprit. Firefox, for example, will only open two connections at a time to the same host (and if one of those is for the favicon.ico, then you’re down to one). Try increasing the number of concurrent connections your browser makes, or test your site with a tool that isn’t a browser, like siege, Apache’s ab, or even curl."

ActiveResource timeout not functioning [duplicate]

This question already has an answer here:
Overriding/Modifying Rails Class (ActiveResource)
(1 answer)
Closed 3 years ago.
I'm trying to contact a REST API using ActiveResource on Rails 2.3.2.
I'm attempting to use the timeout functionality so that if the resource I'm contacting is down I can fail quickly - I'm doing this with the following:
class WorkspaceResource < ActiveResource::Base
  self.timeout = 5
  self.site = "http://mysite.com/restAPI"
end
However, when I try to contact the service when I know it isn't available, the class only times out after the default 60 seconds. I can see from the error stack that the timeout error does indeed come from an ActiveResource class in my gem folder that has the proper functions to allow timeout settings, but my set timeout never seems to work.
Any thoughts?
So apparently the issue is not that timeout is not functioning. I can run a server locally, make it not return a response within the timeout limit, and see that timeout works.
The issue is in fact that if the server does not accept the connection, timeout does not function as I expected it to - it doesn't function at all. It appears as though timeout only works when the server accepts the connection but takes too long to respond.
To me, this seems like an issue - shouldn't timeout also work when the server I'm contacting is down? If not, there should be another mechanism to stop a bunch of requests from hanging...anyone know of a quick way to do this?
The problem
If you're running on Ruby 1.8.x then the problem is its lack of real system threads.
As you can read first here and then here, there are systemic problems with timeouts in Ruby. An interesting discussion, but for you in particular, some comments suggest that the timeout is effectively ignored and defaults to 60 seconds - exactly what you are seeing.
Solutions ...
I have a similar issue with our own product when trying to send emails - if the email server is down the thread blocks. For me the solution was to spin the request off on a separate thread and therefore my main request-processing thread doesn't block.
There are non-blocking libraries out there for Ruby but perhaps you could take a look first at this System Timeout Gem.
An option open to anyone using Rails behind a proxy like nginx would be to set the upstream timeout to a lower number - that way you'll get notified if the server is taking too long. I'd only do this if I were really stuck for a solution.
Last but not least, it's possible that running Rails 2.3.2 on top of Ruby 1.9.1 will fix the issue.
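As a stopgap, you can also bound the whole call with Ruby's Timeout module so that a server that never accepts the connection fails fast (with the same Ruby 1.8.x reliability caveats discussed above). A minimal sketch using the WorkspaceResource class from the question:
require 'timeout'

begin
  workspace = Timeout.timeout(5) do
    WorkspaceResource.find(1) # any ActiveResource call
  end
rescue Timeout::Error
  # The server never accepted the connection or didn't answer in time; fail fast.
end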
Alternatively, you could try to catch these connection errors and retry once (after a certain period of time) just to make sure the connection is really down.
retried = false
begin
  @businesses = Business.find(:all, :params => { :shop_domain => @shop.domain })
  retried = false
rescue ActiveResource::TimeoutError => ex
  # raise ex
rescue ActiveResource::ConnectionError, ActiveResource::ServerError, ActiveResource::ClientError => ex
  unless retried
    sleep(((ex.respond_to?(:response) && ex.response['Retry-After']) || 5).to_i)
    retried = true
    retry
  else
    # raise ex
  end
end
Inspired by this solution from Shopify for paginating a large number of records. https://ecommerce.shopify.com/c/shopify-apis-and-technology/t/paginate-api-results-113066
