Rails: Mute logger on a single thread

For the application I am making, an old-school Multi-User Dungeon, this seems to be the best way to execute NPC behaviour: each NPC keeps track of when it last changed state, and when its perform method is called after enough time has passed since then (the interval may differ per NPC and per NPC state), the NPC changes state again.
A background Thread loops over the table of all NPCs and calls the perform method on each of them.
My problem is this: This background thread queries ActiveRecord multiple times per second. In the development environment, all of these queries are printed to STDOUT. This makes it impossible to see what else is going on.
I would like to mute or hide the logging messages that this specific thread makes. How can this be done in Rails (4.2)?

Adding a unique thread-local variable to your background thread could help to identify it. Then you could log conditionally for that thread.
Take a look at the RoR docs about threads.
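For example, here is a minimal sketch of one way to do it: the background thread tags itself with a thread-local flag and wraps its own queries in the logger's silence block (Npc and the loop interval are assumptions here; also note that in Rails 4.2 Logger#silence works by temporarily raising the logger's level, so it is not strictly scoped to this single thread).

    Thread.new do
      Thread.current[:npc_worker] = true         # identifies this thread elsewhere

      loop do
        # Queries issued inside this block are not written to the log.
        ActiveRecord::Base.logger.silence do
          Npc.find_each(&:perform)
        end
        sleep 0.2
      end
    end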

Related

Rails / ActiveRecord: avoid two threads updating model at the same time with locks

Let's say I need to be sure ModelName can't be updated at the same time by two different Rails threads; this can happen, for example, when a webhook POST to the application tries to modify it at the same time that some other code is running.
Per Rails documentation, I think the solution would be to use model_name_instance.with_lock, which also begins a new transaction.
This works fine and prevents simultaneous UPDATES to the model, but it does not prevent other threads from reading that table row while the with_lock block is running.
I can prove that with_lock does not prevent other READS by doing this:
Open 2 rails consoles;
On console 1, type something like ModelName.last.with_lock { sleep 30 }
On console 2, type ModelName.last. You'll be able to read the model no problem.
On console 2, type ModelName.last.update_columns(updated_at: Time.now). You'll see that it waits for the 30-second lock to be released before it finishes.
This proves that the lock DOES NOT prevent reading, and as far as I could tell there's no way to lock the database row from being read.
This is problematic because if 2 threads are running the same method at the EXACT same time, and I must decide whether to run the with_lock block based on some previous checks on the model data, thread 2 could be reading stale data that will soon be updated by thread 1 once it finishes the with_lock block that is already running. Thread 2 CAN READ the model while the with_lock block is in progress in thread 1; it only can't UPDATE it because of the lock.
EDIT: I found the answer to this question, so you can stop reading here and go straight to it below :)
One idea that I had was to begin the with_lock block by issuing a harmless update to the model (like model_name_instance.update_columns(updated_at: Time.now), for instance), and then follow it with a model_name_instance.reload to be sure it gets the most up-to-date data. So if two threads are running the same code at the same time, only one would be able to issue the first update, while the other would need to wait for the lock to be released; once it is released, the model_name_instance.reload would pick up any updates performed by the other thread.
The problem is this solution seems way too hacky for my taste, and I'm not sure I should be reinventing the wheel here (I don't know if I'm missing any edge cases). How does one ensure that, when two threads run the exact same method at the exact same time, one thread waits for the other to finish before even reading the model?
Thanks Robert for the optimistic locking info; I could definitely see myself going that route, but optimistic locking works by raising an exception at the moment of writing to the database (the SQL UPDATE), and I have a lot of complex business logic that I wouldn't even want to run against stale data in the first place.
This is how I solved it, and it was simpler than what I imagined.
First of all, I learned that pessimistic locking DOES NOT prevent any other threads from reading that database row.
But I also learned that with_lock acquires the lock immediately, regardless of whether you attempt a write or not.
So if you start 2 rails consoles (simulating two different threads), you can test that:
If you type ModelName.last.with_lock { sleep 30 } on Console 1 and ModelName.last on Console 2, Console 2 can read that record immediately.
However, if you type ModelName.last.with_lock { sleep 30 } on Console 1 and ModelName.last.with_lock { p "I'm waiting" } on Console 2, Console 2 will wait for the lock held by Console 1, even though it's not issuing any write whatsoever.
So that's a way of 'locking the read': if you have a piece of code that you want to be sure won't run simultaneously (not even for reads!), begin that method by opening a with_lock block and issue your model reads inside it, so they wait for any other locks to be released first. If you issue your reads outside it, they will be performed even though some other piece of code in another thread holds a lock on that table row.
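A minimal sketch of that pattern (Order, pending? and the status column are made-up names): both threads enter with_lock before reading, so the second one blocks until the first releases the row lock and then sees fresh data.

    order = Order.find(params[:id])

    order.with_lock do
      # with_lock reloads the record, so this read reflects whatever a
      # competing thread committed while it held the lock before us.
      if order.pending?
        order.update!(status: "processed")
      end
    end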
Some other nice things I learned:
As per the Rails documentation, with_lock will not only start a transaction with a lock, it will also reload your model for you, so you can be sure that inside the block your instance is in its most up-to-date state, since it issues a .reload on that instance.
There are some gems designed specifically to prevent the same piece of code from running at the same time in multiple threads (which, I believe, is the situation for most Rails apps in production), regardless of the database lock. Take a look at redis-mutex, redis-semaphore and redis-lock.
There are many articles on the web (I could find at least 3) that state that Rails' with_lock will prevent a READ on the database row, while we can easily see with the tests above that that's not the case. Take care and always confirm information by testing it yourself! I tried to comment on them warning about this.
You were close; you want optimistic locking instead of pessimistic locking: http://api.rubyonrails.org/classes/ActiveRecord/Locking/Optimistic.html
It won't prevent reading an object and submitting a form, but it can detect that the form was submitted while the user was seeing a stale version of the object.
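A minimal sketch of that approach, assuming the table has the conventional lock_version integer column (Product and the price attribute are made up):

    product = Product.find(1)

    begin
      product.price = 100
      product.save!               # raises if another thread saved the row first
    rescue ActiveRecord::StaleObjectError
      product.reload              # pick up the other thread's changes, then retry or bail out
    end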

grpc iOS stream, send only when GRXWriter.state is started?

I'm using grpc in iOS with bidirectional streams.
For the stream that I write to, I subclassed GRXWriter and I'm writing to it from a background thread.
I want to be as quick as possible. However, I see that GRXWriter's status switches between started and paused, and I sometimes get an exception when I write to it during the paused state. I found that before writing, I have to wait for GRXWriter.state to become started. Is this really a requirement? Is GRXWriter only allowed to write when its state is started? It switches very often between started and paused, and this feels like it may be slowing me down.
Another issue with this state check is that my code looks ugly. Is there any other way that I can use bidirectional streams in a nicer way? In C# grpc, I just get a stream that I write freely to.
Edit: I guess the reason I'm asking is this: in my thread that writes to GRXWriter, I have a while loop that keeps checking whether state is started and does nothing if it is not. Is there a better way to do this rather than polling the state?
The GRXWriter pauses because the gRPC core only accepts one pending write operation at a time; the next one has to wait until the first completes. So the GRPCCall instance blocks the writer, by modifying its state, until the previous write has completed.
As for the exception, I am not sure why you are getting it. GRXWriter is more like an abstract class, and it seems you did your own implementation by inheriting from it. If you really want to do so, it might be helpful to refer to GRXBufferedPipe, which is an internal implementation. In particular, if you want to avoid waiting in a loop before writing, issuing the next write from the setter of GRXWriter's state should be a good option.

Accessing thread from other rails controller action

I'm working on an application, but at the moment I'm stuck on multithreading with rails.
I have the following situation: when some action occurs (it could be after a user clicks a button, or when a scheduled task fires), I start a separate thread which parses some websites until the moment when I have to receive an SMS code to continue parsing. At this moment I call Thread.stop.
The SMS code comes in as a POST request to one of my controllers, so I want to pass it to my stopped thread and let it continue its job.
But how can I access that thread?
Where is the best place to keep a link to that thread?
So how can I handle multithreading? There may be a situation when there'll be a lot of threads and a lot of SMS requests, and I need to somehow correlate them.
For all practical purposes you can't, but you can have that other thread 'report' its status.
You can use redis-objects to create a lock object that uses Redis as its flag, some kind of counter, or just a true/false value store. You can then query Redis to see the corresponding state of the other thread and exit if needed.
https://github.com/nateware/redis-objects
The cool part about this is it not only works between threads, but between applications.
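A minimal sketch of the idea, assuming the redis-objects gem is configured and that some correlation id is shared between the parsing thread and the webhook controller (request_id and the parameter names are made up):

    require 'redis-objects'

    # In the parsing thread: poll a Redis value instead of calling Thread.stop.
    def wait_for_sms_code(request_id)
      code = Redis::Value.new("sms_code:#{request_id}")
      sleep 0.5 while code.value.nil?
      code.value
    end

    # In the controller action that receives the SMS webhook.
    def receive_sms
      Redis::Value.new("sms_code:#{params[:request_id]}").value = params[:code]
      head :ok
    end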

Letting something happen at a certain time with Rails

Like with browser games. User constructs building, and a timer is set for a specific date/time to finish the construction and spawn the building.
I imagined having something like a daemon, but how would that work? To me it seems that spinning + polling is not the way to go. I looked at async_observer, but is that a good fit for something like this?
If you only need the event to be visible to the owning player, then the model can report its updated status on demand and we're done, move along, there's nothing to see here.
If, on the other hand, it needs to be visible to anyone from the time of its scheduled creation, then the problem is a little more interesting.
I'd say you need two things. A queue into which you can put timed events (a database table would do nicely) and a background process, either running continuously or restarted frequently, that pulls events scheduled to occur since the last execution (or those that are imminent, I suppose) and actions them.
Looking at the list of options on the Rails wiki, it appears that there is no One True Solution yet. Let's hope that one of them fits the bill.
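A minimal sketch of that queue-plus-poller idea (ScheduledEvent, run_at, processed and perform are all made-up names), started with something like script/runner or rails runner and left looping in the background:

    loop do
      ScheduledEvent.where("run_at <= ? AND processed = ?", Time.now, false)
                    .find_each do |event|
        event.perform                  # e.g. spawn the finished building
        event.update!(processed: true)
      end
      sleep 5
    end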
I just did exactly this for a PBBG I'm working on (Big Villain; you can see the work in progress at MadGamesLab.com). Anyway, I went with a commands table, where each user command generates exactly one entry, and an events table with one or more entries per command (linking back to the command). A secondary daemon, started with script/runner, polls the events table periodically and runs events whose time has passed.
So far it seems to work quite well, and unless I see some problem when I throw a large number of users at it, I'm not planning to change it.
To a certain extent it depends on how much logic is on your front end and how much is in your model. If you know how much time will elapse before something happens, you can keep most of the logic on the front end.
I would use your model to determine the state of things, and on a particular request you can check to see whether it is built or not. I don't see why you would need a background worker for this.
I would use AJAX to start a timer (see Periodical Executor) for updating your UI. On the model side, just keep track of the created_at column for your building and only allow it to be used if its construction time has elapsed. That way you don't have to take a trip to your db every few seconds to see if your building is done.
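A minimal sketch of that model-side check (Building and BUILD_TIME are made up); completion is derived from created_at whenever the record is read, so no background job is needed:

    class Building < ActiveRecord::Base
      BUILD_TIME = 10.minutes

      def completed?
        created_at + BUILD_TIME <= Time.now
      end
    end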

Delaying event handling in Flash

I'd like to delay the handling for some captured events in ActionScript until a certain time. Right now, I stick them in an Array when captured and go through it when needed, but this seems inefficient. Is there a better way to do this?
Well, to me this seems a clean and efficient way of doing that.
What do you mean by delaying? Do you mean simply processing them later, or processing them after a given time?
You can always set a timeout for the actual processing function in your event handler (using flash.utils.setTimeout) to process the event at a precise moment in time. But that can become inefficient, since you may have many timeouts dangling about that need to be handled by the runtime.
Maybe you could specify your needs a little more.
edit:
Ok, basically the Flash Player is single-threaded - that is, bytecode execution is single-threaded. Any event that is dispatched is processed immediately, i.e. dispatchEvent(someEvent) will directly call all registered handlers (and thus AS bytecode).
Now there are events which are actually generated in the background. These come either from I/O (network, user input) or from timers (TimerEvents). It may happen that some of these events occur while bytecode is being executed. This usually happens in a background thread, which passes the event (in the abstract sense of the term) to the main thread through a (de)queue.
If the main thread is busy executing bytecode, it will ignore these messages until it is done (notice: nearly all bytecode execution is the implicit consequence of an event, be it enter frame, input, a timer, a load operation, or whatever). When it is idle, it will look in all the queues until it finds an available message, wrap the information into an ActionScript Event object, and dispatch it as previously described.
Thus this queueing is a very low-level mechanism that comes from thread-to-thread communication (and appears in many multi-threading scenarios), and it is inaccessible to you.
But as I said before, your approach both is valid and makes sense.
Store them in a Vector instead of an Array :p
I think it's all about how you structure your program. Maybe you can store each captured event on the related instance, so that it's natural to process the event together with that instance instead of querying a global vector.
