Should Proto.Actor ActorSystem.ShutdownAsync() stop all actors?

I was expecting ActorSystem.ShutdownAsync() to stop all actors before the returned Task completes, but this doesn't seem to happen. Does anyone know how I should terminate the ActorSystem correctly? I'd like IDisposable.Dispose() to be called on all the actors. Thank you for any ideas!
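One approach, as a minimal sketch assuming the Proto.Actor C# API: stop the top-level actors explicitly (StopAsync awaits the stop, including children) before calling ShutdownAsync(), and do the disposal yourself when the actor receives the Stopped system message, since the framework doesn't call Dispose() for you. MyActor and _resource are hypothetical names, not part of Proto.Actor.

using System;
using System.Threading.Tasks;
using Proto;

public class MyActor : IActor
{
    private readonly IDisposable _resource = null; // stand-in for a real disposable resource

    public Task ReceiveAsync(IContext context)
    {
        if (context.Message is Stopped)
            _resource?.Dispose(); // clean up when this actor stops
        return Task.CompletedTask;
    }
}

public static class Program
{
    public static async Task Main()
    {
        var system = new ActorSystem();
        var pid = system.Root.Spawn(Props.FromProducer(() => new MyActor()));

        // Stop the actor tree explicitly and wait for it, rather than
        // relying on ShutdownAsync() to do so.
        await system.Root.StopAsync(pid);
        await system.ShutdownAsync();
    }
}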

Related

Is there a way to exit a thread in a Parallel.ForEach loop which is stuck?

I am using OmniThreadLibrary's Parallel.ForEach(). There are instances where the loop takes a long time or gets stuck.
I would like to know: is it possible to time out each task in the Parallel.ForEach() loop?
In short: Nope, there isn't.
Unless you program the timeout handling into your 'thread body' code (what gets called in Execute); see the sketch below.
E.g. my database engine allows sending a CancelProcessing call to a running query from a thread other than the one running the query; this 'cleanly' ends the running subthread.
'Dirty' termination of the subthreads:
I added a feature request on OmniThreadLibrary's GitHub site to add a (dirty) Terminate method to the IOmniParallel interfaces (and the like). This has its drawback that killing subthreads will probably leave you with memory/resource leaks.
Meanwhile you might use this dirty shutdown solution/workaround, which actually comes down to fixing a similar problem (I had a deadlock in my parallel-processed routine, so my Parallel.WaitFor never returned true, and worse, my IOmniParallelTask interface variable was never released, causing the calling thread to block as well).
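To illustrate the cooperative approach, a rough sketch against OmniThreadLibrary's Parallel.ForEach, assuming the work for one item can be split into chunks; ItemCount, ItemDone and DoChunkOfWork are hypothetical placeholders for your own code:

uses
  System.Diagnostics, OtlParallel;

// Cooperative per-item timeout: check a deadline between chunks of work
// and bail out of the body when it is exceeded.
Parallel.ForEach(0, ItemCount - 1).Execute(
  procedure (const idx: integer)
  var
    Deadline: TStopwatch;
  begin
    Deadline := TStopwatch.StartNew;
    while not ItemDone(idx) do
    begin
      DoChunkOfWork(idx);
      if Deadline.ElapsedMilliseconds > 5000 then
        Exit; // give up on this item after 5 seconds
    end;
  end);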

Erlang: check on which scheduler a process is running?

I could use erlang:trace/3 to keep track of which scheduler the processes are running on at any given time, and then use the timestamp + pid to get the scheduler ID, but is there a simpler/more efficient way?
Perhaps something like self/0, but returning the scheduler ID instead of the process ID?
You may be looking for erlang:system_info(scheduler_id).
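For reference, a quick shell check; the returned ID is that of the scheduler currently running the calling process, so the value below is only an example and may change between calls if the process migrates:

1> erlang:system_info(scheduler_id).
2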

erlang supervisor restart strategy

I would like to start several processes as children of a given supervisor. The restart strategy is one_for_one. For my needs, every process which terminates should be restarted after a given amount of time (e.g. 20 seconds).
How can this be done? Maybe with a delay in the init or in the terminate functions in combination with:
Shutdown = brutal_kill | integer() >= 0 | infinity
Is there a better way to achieve this?
Don't use init/1 for this. While init is running, the supervisor is blocked. It is better to start up the process right away, but only let it register itself for operations like this after it has waited for 20 seconds. You could use a simple erlang:send_after/3 call in init/1 to trigger this startup delay (see the sketch below).
I don't like the termination thing either. Perhaps have a close-down state in which you linger for a bit before terminating. This could perhaps ensure that nobody else runs while you are shutting down. I'd recommend that if you are in control of when to close down: simply enter this state and then await a timer trigger like the one above. Note though that this solution will only free up external resources (files, ETS tables, sockets) after the grace period, unless they are explicitly freed.
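A minimal gen_server sketch of the delayed-registration idea: init/1 returns immediately so the supervisor is never blocked, and the process only makes itself available for work once the 20-second timer fires. register_for_work/0 is a hypothetical hook for whatever registration your system uses:

-module(delayed_worker).
-behaviour(gen_server).
-export([start_link/0, init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link() ->
    gen_server:start_link(?MODULE, [], []).

init([]) ->
    %% Return at once; do the 20 s wait asynchronously.
    erlang:send_after(20000, self(), register_now),
    {ok, waiting}.

handle_info(register_now, waiting) ->
    ok = register_for_work(),   %% hypothetical registration hook
    {noreply, ready}.

handle_call(_Req, _From, State) -> {reply, ok, State}.
handle_cast(_Msg, State) -> {noreply, State}.

register_for_work() -> ok.      %% placeholder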

delayed_job Won't Process My Queue?

I am using the delayed_job gem, but against two queues. I have mapped my models against the correct queues (databases) to establish the correct connections.
The jobs get entered fine; however, delayed_job will process one queue but not the other. I am trying to manually force it to process the email queue, but it simply won't.
Is there a way to configure/force it to? Or pass it the correct backend to process?
Below I am counting jobs and getting the correct count. However, if I try to work_off the queue, it reports 0 successes/failures.
I'm pretty sure that's because it's hitting the wrong queue. Any ideas?
Delayed::Worker::Email::Job.count
=> 12032
Delayed::Worker.new(:backend => Email::Job).work_off
=> [0, 0]
I ended up just going with one queue. This seemed to work best and saved the headache of juggling two. It would be cool if DJ eventually supported multiple backends/queues.
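For later readers: newer delayed_job versions support named queues on a single backend, which avoids juggling two databases entirely. If I have the options right, it looks roughly like this (EmailJob is a hypothetical payload object with a perform method):

# Enqueue onto a named queue on the one backend.
Delayed::Job.enqueue EmailJob.new(user.id), queue: 'email'

# Run a worker restricted to that queue.
Delayed::Worker.new(queues: ['email']).work_off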

is with_scope threadsafe?

Poking around in the rails code, I ran across with_scope.
From what I can tell, it takes the scope type and conditions, merges them into existing conditions for that scope type, yields to the block, then gets rid of the additional scope.
So my first thought is: in a multithreaded environment (like JRuby on Rails), what happens if, while thread 1 is executing the block, thread 2 decides to do a Model.find :all? It seems to me like a race condition waiting to happen.
Am I missing something?
So the trick here is that if you trace deep enough, the scopes are stored via Thread.current[key], i.e. in thread-local storage, so each thread only ever sees the scope stack it set up itself. I didn't even know that was possible in Ruby... guess you learn something new every day.
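A quick illustration of why that makes it thread-safe; the key name is hypothetical, not the one Rails actually uses:

def scope_stack
  Thread.current[:scope_stack] ||= []   # one array per thread
end

t1 = Thread.new do
  scope_stack << :active_only           # visible only inside t1
  p scope_stack                         # => [:active_only]
end
t2 = Thread.new { p scope_stack }       # => [] - t1's push is invisible here
[t1, t2].each(&:join)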
