This is more of a theoretical question, and I'm struggling to find anything that mentions it outside my lecture notes.
In the 3-state process model for process management, you have 3 states: running, blocked, and ready. So my question is: when can a state transition from blocked to running occur, without the process first passing through the ready queue?
Thanks hope it makes sense :)
I'm not sure what, if any, specific domain you are asking about. However, I can translate this to a general manufacturing domain, based on concepts learned through APICS CPIM certification.
If you think of a production line, it may have one of three states:
RUNNING: Product is being produced and the line and its dependencies are operational.
BLOCKED: Product is not coming off the line because something (e.g., a machine being down) is blocking output.
READY: The line is operational, but product is not coming off the line because there is no product to be produced.
Let's imagine now that a line is RUNNING. Product is flowing from the start of the line to the end of the line. Let's now say that a machine on that line breaks down. The line is now BLOCKED, however there is still product on the line, some of which is probably not finished. As soon as that machine comes back on or gets replaced, the line immediately goes to RUNNING (and not READY) as there is already product in place and in queue. Now, if while the machine was down, the product on the line was removed, then when the machine came back online, the line as a unit would be READY.
It may also be worth noting that APICS actually defines 5 states for production. They are QUEUE, SETUP, RUN, WAIT, MOVE.
Related
The Erlang world doesn't use try-catch the way mainstream languages do. I want to know how the performance of restarting a process compares to using try-catch in a mainstream language.
An Erlang process has its own small stack and heap, which are actually allocated on the OS heap. Why is it effective to restart it?
I hope someone can give me deep insight into what BEAM does when a restart operation is invoked on a process.
Besides, what about using a gen_server, which maintains state in its process? Will restarting the gen_server cause a state-copy operation?
Thanks
I recommend having a read of https://ferd.ca/the-zen-of-erlang.html
Here's my understanding: a restart is effective for fixing a "Heisenbug", which only happens when the (Erlang) process is in some weird state and/or trying to handle a "weird" message.
The presumption is that you revert to a known good state (by restarting), which should handle all normal messages correctly. Restart is not meant to "fix all the problems", and certainly not things like bad configuration or a missing internet connection. By this definition, we can see it's very dangerous to copy the state when the crash happened and try to recover from that, because doing so defeats the whole point of going back to a known state.
The second point is: say this process only crashes when handling an action that only 0.001% (or whatever percentage is considered negligible) of all your users actually use, and it's not really important (e.g. a minor UI detail). Then it's totally fine to just let it crash and restart, without needing to fix it. I think this can be a productivity enabler for such cases.
Regarding your question in the OP comment: yes, it's just whatever your init callback returns; you can either build the entire starting state there or source it from other places. It totally depends on the use case.
Let's say I need to be sure ModelName can't be updated at the same time by two different Rails threads; this can happen, for example, when a webhooks post to the application tries to modify it at the same time some other code is running.
Per Rails documentation, I think the solution would be to use model_name_instance.with_lock, which also begins a new transaction.
This works fine and prevents simultaneous UPDATES to the model, but it does not prevent other threads from reading that table row while the with_lock block is running.
I can prove that with_lock does not prevent other READS by doing this:
Open 2 rails consoles;
On console 1, type something like ModelName.last.with_lock { sleep 30 }
On console 2, type ModelName.last. You'll be able to read the model no problem.
On console 2, type ModelName.last.update_columns(updated_at: Time.now). You'll see it wait for the 30-second lock to expire before it finishes.
This proves that the lock DOES NOT prevent reading, and as far as I could tell there's no way to lock the database row from being read.
This is problematic because, if two threads run the same method at the EXACT same time and I must decide whether to run the with_lock block based on some previous checks on the model data, thread 2 could be reading stale data that will soon be updated by thread 1 once it finishes the with_lock block already in progress. Thread 2 CAN READ the model while the with_lock block runs in thread 1; it only can't UPDATE it, because of the lock.
EDIT: I found the answer to this question, so you can stop reading here and go straight to it below :)
One idea I had was to begin the with_lock block by issuing a harmless update to the model (like model_instance_name.update_columns(updated_at: Time.now), for instance), and then follow it with a model_name_instance.reload to be sure it gets the most updated data. If two threads run the same code at the same time, only one would be able to issue the first update, while the other would need to wait for the lock to be released. Once it is released, the reload ensures it picks up any updates performed by the other thread.
The problem is that this solution seems way too hacky for my taste, and I'm not sure I should be reinventing the wheel here (I don't know if I'm missing any edge cases). How does one ensure that, when two threads run the exact same method at the exact same time, one thread waits for the other to finish before even reading the model?
Thanks Robert for the Optimistic Locking info. I could definitely see myself going that route, but optimistic locking works by raising an exception at the moment of writing to the database (SQL UPDATE), and I have a lot of complex business logic that I wouldn't even want to run with stale data in the first place.
This is how I solved it, and it was simpler than what I imagined.
First of all, I learned that pessimistic locking DOES NOT prevent any other threads from reading that database row.
But I also learned that with_lock initiates the lock immediately, regardless of whether you make a write or not.
So if you start 2 rails consoles (simulating two different threads), you can test that:
If you type ModelName.last.with_lock { sleep 30 } on Console 1 and ModelName.last on Console 2, Console 2 can read that record immediately.
However, if you type ModelName.last.with_lock { sleep 30 } on Console 1 and ModelName.last.with_lock { p 'I am waiting' } on Console 2, Console 2 will wait for the lock held by Console 1, even though it's not issuing any write whatsoever.
So that's a way of 'locking the read': if you have a piece of code that must not run simultaneously (not even for reads!), begin that method by opening a with_lock block and issue your model reads inside it, so that they'll wait for any other locks to be released first. If you issue your reads outside it, they will be performed even though some other piece of code in another thread holds a lock on that table row.
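The serialization idea above can be shown without a database at all. This is a rough in-process analogy using a plain Mutex, not ActiveRecord (the `record` hash and `:processed` flag are made up): both the guard check and the write happen under the lock, so the second thread is guaranteed to see the first thread's update rather than stale data.

```ruby
require "thread"

# A mutex playing the role of the database row lock in the analogy.
LOCK = Mutex.new

def process(record)
  LOCK.synchronize do
    # The "read" (guard check) happens under the lock too, so a second
    # thread can never act on a stale value of :processed.
    return :skipped if record[:processed]
    record[:processed] = true
    record[:count] += 1
    :done
  end
end

record = { processed: false, count: 0 }
results = 2.times.map { Thread.new { process(record) } }.map(&:value)
# Exactly one thread performs the update; the other sees the guard and skips.
```

If the guard check were moved outside `LOCK.synchronize`, both threads could pass it before either writes, which is exactly the stale-read race described in the question.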
Some other nice things I learned:
As per the Rails documentation, with_lock will not only start a transaction with a lock, but will also reload your model for you, so you can be sure that inside the block the instance is in its most up-to-date state, since with_lock issues a .reload on it.
There are some gems designed specifically to prevent the same piece of code running at the same time in multiple threads (which I believe most Rails apps are in a production environment), regardless of the database lock. Take a look at redis-mutex, redis-semaphore and redis-lock.
There are many articles on the web (I could find at least 3) stating that Rails' with_lock will prevent a READ on the database row, while we can easily see with the tests above that this is not the case. Take care, and always confirm information by testing it yourself! I tried to comment on them to warn about this.
You were close; you want optimistic locking instead of pessimistic locking: http://api.rubyonrails.org/classes/ActiveRecord/Locking/Optimistic.html .
It won't prevent reading an object and submitting a form. But it can detect that the form was submitted while the user was seeing a stale version of the object.
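The mechanism behind this can be sketched in plain Ruby. This is not the ActiveRecord implementation, just an illustration of the idea behind its lock_version column: every save checks that the version the caller read is still current, and raises instead of blocking when it is not.

```ruby
# Plain-Ruby sketch of optimistic locking (hypothetical classes, not Rails).
class StaleObjectError < StandardError; end

class Record
  attr_reader :lock_version, :value

  def initialize
    @lock_version = 0
  end

  # Save succeeds only if the caller read the current version;
  # otherwise the write is rejected as stale.
  def save(new_value, read_version)
    raise StaleObjectError if read_version != @lock_version
    @value = new_value
    @lock_version += 1
  end
end

record = Record.new
v = record.lock_version            # two "users" both read version 0
record.save("first write", v)      # succeeds and bumps the version to 1
begin
  record.save("second write", v)   # stale: the version is now 1, not 0
rescue StaleObjectError
  outcome = :stale
end
```

Unlike with_lock, nothing ever waits here; the losing writer simply gets an exception it must handle, which is why it suits forms better than long business-logic sections.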
I have an application written in Delphi 5, which runs fine on most (windows) computers.
However, occasionally the program begins to load (you can see it in Task Manager, using about 2.5-3 MB of memory), but then stalls for a number of minutes, sometimes hours.
If you leave it long enough, the FormShow event will eventually occur and the application window will pop up, but it seems as though some other application or Windows setting is preventing it from initially obtaining all the memory it needs to run (approx. 35-40 MB).
Also, on some of my client's workstations, if they have MS Outlook running, they can close it and my application will pop up. Does anyone know what is going on here, and/or how to fix it?
Since nobody has given a better answer I'll take a stab at how to solve this:
There's something in your initialization that is locking it up somehow. Without seeing your code I don't know what it is, so I'll only address how to go about finding it:
You need to log what you accomplish during startup. If you have any kind of screen showing, I find the window title useful for this, but it sounds like you don't; that means you need to write the log to a file. Let it get stuck, kill the task, and see how far it got.
Note that this means you need to cleanly write your data despite an abnormal program termination. How to go about this:
A) Append, write your line, close.
B) Write your line, then flush the file handle.
C) Initially write your file to consist of a large number of blanks--ensure this is larger than the actual log will be. Write your line. In case of abnormal termination it will retain the original larger file size.
I would write a timestamp on every log item so you can see if it's just processing something too slowly.
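Option B above can be sketched as follows. The original program is Delphi, so this Ruby snippet only illustrates the idea (the file name and log message are made up): write a timestamped line, then flush, so the line survives even if the process is killed immediately afterwards.

```ruby
require "time"

# Option B: flush the file handle after each line so an abnormal
# termination cannot lose the last logged startup step.
File.open("startup.log", "a") do |log|
  log.sync = true  # flush after every write instead of buffering
  log.puts "#{Time.now.iso8601} reached database init"  # hypothetical step
end
```

With buffering left on, the lines written just before a hang or kill are exactly the ones most likely to be lost, which defeats the purpose of the log.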
If examining the log shows you where the problem is, fine. If, as usually happens, it's not enough, you put a bunch more logging between the last item that did get logged and the next one that didn't; I've been known to log every line when hunting a cryptic problem that only happened on someone else's system.
If finding the line isn't enough to pinpoint the problem also dump the value of relevant variables.
Finally, if such intense scrutiny makes the bug go away start looking for an uninitialized variable. (While a memory stomp is also an option I doubt it's the culprit here.)
A year ago I found a post about how a program can kill itself. It suggested writing some values to the registry, the Windows directory, or a location on disk the first time it runs. When it tries to run a second time, the program just checks the value in that location; if it doesn't match, it terminates itself.
This is simple and a little naive, as any realtime anti-virus application could easily watch what value your program wrote to disk and where. And in a true sense, that method does not 'kill' the program: it just lies there and sleeps, intact and complete, only lacking its trigger.
Is there a method that, in the true sense, kills itself, such as deleting itself permanently, disemboweling itself, disrupting its classes or functions, or fragmenting itself?
Thank you.
+1 to this question.
It is so unfortunate that people often tend to vote down when somebody asks questions related to tricky ways of doing things! Nothing illegal, but at times this question may sound to other people as if the method is unnecessary. But there are situations where one wants a program to delete itself once it is executed.
To be clear - it is possible to delete the same exe once it is executed.
(1) As indicated in the earlier answer, it is not possible for an exe to be deleted from disk while it is executing, because the OS simply doesn't allow that.
(2) However, to achieve this, what we need to do is execute the EXE in memory! It is pretty easy, and the EXE can then be deleted from disk once it is running from memory.
read more on this unconventional technique here:
execute exe in memory
Please follow the above post to see how you can execute an exe from a memory stream; or you can even google it and find yet another way. There are numerous examples showing how to execute an exe in memory. Once it is executed, you can safely delete it from disk.
Hope this throws some light into your question.
An application cannot delete itself off the disk directly, because while the application is running the disk file is 'open' - hence it cannot be deleted.
See if MoveFileEx with the MOVEFILE_DELAY_UNTIL_REBOOT fits your requirement.
If you can't wait for a reboot, you'll have to write a second application (or batch file) that runs when the first application closes to wait for the first application to complete closing and then delete it.
It's chicken and egg though - how do you delete the second application/batch file? It can't delete itself. But you could put it in the %temp% directory and then use MoveFileEx() to delete it next time the machine is rebooted.
Like with browser games. User constructs building, and a timer is set for a specific date/time to finish the construction and spawn the building.
I imagined having something like a daemon, but how would that work? To me it seems that spinning + polling is not the way to go. I looked at async_observer, but is that a good fit for something like this?
If you only need the event to be visible to the owning player, then the model can report its updated status on demand and we're done, move along, there's nothing to see here.
If, on the other hand, it needs to be visible to anyone from the time of its scheduled creation, then the problem is a little more interesting.
I'd say you need two things. A queue into which you can put timed events (a database table would do nicely) and a background process, either running continuously or restarted frequently, that pulls events scheduled to occur since the last execution (or those that are imminent, I suppose) and actions them.
Looking at the list of options on the Rails wiki, it appears that there is no One True Solution yet. Let's hope that one of them fits the bill.
I just did exactly this for a PBBG I'm working on (Big Villain; you can see the work in progress at MadGamesLab.com). Anyway, I went with a commands table, where each user command generates exactly one entry, and an events table with one or more entries per command (linking back to the command). A secondary daemon, started using script/runner, polls the events table periodically and runs events whose time has passed.
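The polling pass at the heart of that design can be sketched like this. It is a minimal in-process version (the `EVENTS` array stands in for the database events table, and all names are made up): each poll pulls the events whose due time has passed and runs them.

```ruby
# Stand-in for the events table; each entry is a due time plus an action.
EVENTS = []

def schedule(due_at, &action)
  EVENTS << { due_at: due_at, action: action }
end

# One polling pass: run and remove every event whose time has passed,
# leaving future events queued for a later pass.
def run_due_events(now = Time.now)
  due, pending = EVENTS.partition { |e| e[:due_at] <= now }
  EVENTS.replace(pending)
  due.each { |e| e[:action].call }
end

log = []
schedule(Time.now - 1)    { log << :building_finished }  # already due
schedule(Time.now + 3600) { log << :too_early }          # due in an hour
run_due_events
```

In the real daemon the same pass would run on a timer inside script/runner, with the query and delete expressed as SQL against the events table.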
So far it seems to work quite well; unless I see some problem when I throw a large number of users at it, I'm not planning to change it.
To a certain extent it depends on how much logic is on your front end and how much is in your model. If you know how much time will elapse before something happens, you can keep most of the logic on the front end.
I would use your model to determine the state of things, and on a particular request you can check to see whether it is built or not. I don't see why you would need a background worker for this.
I would use AJAX to start a timer (see Periodical Executor) for updating your UI. On the model side, just keep track of the created_at column for your building and only allow it to be used if its construction time has elapsed. That way you don't have to take a trip to your db every few seconds to see if your building is done.
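The model side of that suggestion can be sketched as follows. The `Building` struct and `build_seconds` field are hypothetical; the point is that there is no timer at all: whenever the building is looked at, its state is derived from how much time has elapsed since creation.

```ruby
# "Check on demand": a building is finished when enough time has passed
# since created_at, computed at read time instead of by a background job.
Building = Struct.new(:created_at, :build_seconds) do
  def finished?(now = Time.now)
    now - created_at >= build_seconds
  end
end

barracks = Building.new(Time.now - 120, 60)  # started 2 min ago, takes 1 min
tower    = Building.new(Time.now, 600)       # just started, takes 10 min
```

In Rails this check would live on the model and compare against the created_at column, so the database is only touched when the player actually asks about the building.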