How to run an exe asynchronously with two arguments in Ruby? - ruby-on-rails

The exe should run when I open the page; it needs to run as an asynchronous process.
Is there any way to run an exe asynchronously with two arguments in Ruby?
I have tried the Ruby commands system() and exec(), but they wait for the process to complete. I need to start the exe with its parameters without waiting for it to finish.
Is there any gem that would help with this?

You can use Process.spawn and Process.wait2:
pid = Process.spawn 'your.exe', '--option'
# Later...
pid, status = Process.wait2 pid
Your program will be executed as a child process of the interpreter. Besides that, it will behave as if it had been invoked from the command line.
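Since the question asks for fire-and-forget behaviour, you can also skip Process.wait2 entirely and detach the child instead. A minimal sketch (using a `ruby -e` one-liner as a stand-in for `your.exe` and its two arguments):

```ruby
# spawn returns the child's pid immediately; nothing blocks here.
pid = Process.spawn('ruby', '-e', 'sleep 1')

# Detach so the child is reaped in the background and never becomes
# a zombie; detach returns a watcher Thread.
watcher = Process.detach(pid)

puts "child started with pid #{pid}"
```

Your script (or page request) continues immediately while the child keeps running on its own.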
You can also use Open3.popen3:
require 'open3'
*streams, thread = Open3.popen3 'your.exe', '--option'
# Later...
streams.each &:close
status = thread.value
The main difference here is that you get access to three IO objects. The standard input, output and error streams of the process are redirected to them, in that order.
This is great if you intend to consume the output of the program, or communicate with it through its standard input stream. Text that would normally be printed on a terminal will instead be made available to your script.
You also get a thread which will wait for the program to finish executing, which is convenient and intuitive.
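For example, a sketch of consuming a child's output this way (again with a `ruby -e` one-liner standing in for `your.exe`):

```ruby
require 'open3'

stdin, stdout, stderr, thread = Open3.popen3('ruby', '-e', 'puts "hello"; warn "oops"')

stdin.close            # nothing to send to the child
out = stdout.read      # what the child printed to standard output
err = stderr.read      # what it printed to standard error
stdout.close
stderr.close

status = thread.value  # blocks until the child exits; a Process::Status
puts out, err, status.success?
```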

exec switches control to a new process and never returns. system creates a subprocess and waits for it to finish.
What you probably want to do is fork and then exec to create a new process without waiting for it to return. You can also use the win32ole library which might give you more control.
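A sketch of the fork-and-exec approach (note that Kernel#fork is not available in Ruby on Windows, so for an actual .exe Process.spawn above is the portable route; a `ruby -e` one-liner again stands in for the real command and its arguments):

```ruby
# In the child, exec replaces the interpreter with the target program;
# the parent falls through immediately without waiting.
pid = fork do
  exec('ruby', '-e', 'sleep 1')
end

# Reap the child in the background so it doesn't linger as a zombie.
Process.detach(pid)

puts "continuing while pid #{pid} runs"
```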


GNU Parallel: Halt on success -or- failure

Is it possible to set a --halt condition (or multiple --halt conditions?) such that all jobs will be halted as soon as any of them finishes, regardless of the exit code?
I want to monitor for an event (that I just triggered, separately, on a load balanced service). I can identify if the event passed or failed by viewing the logs, but I have to view logs on multiple servers at once. Perfect! Parallel it! I have an extra requirement though: I want to return success or failure based on the log result.
So I want to stop the parallel jobs as soon as any of them detects the event (i.e. "--halt now"), but I don't know whether the detection will return zero or non-zero (that's the point: I'm trying to find out that information). Neither "--halt now,success=1" nor "--halt now,fail=1" is correct; I need a way to do something like "--halt now,any=1".
I took a look through the source and, well, my Perl kung-fu is inadequate to tackle this (and exitstatus is used in so many different places in the source that it's difficult for me to tell whether this would even be feasible).
Note that ,success=1 and ,fail=1 both work perfectly (given the corresponding exit status) but I don't know if it will be success or fail before I run parallel.
The GNU Parallel manpage says:
--halt now,done=1
Exit when one of the jobs finishes. Kill running jobs.
Source: https://www.gnu.org/software/parallel/man.html (search for --halt - it's a big page)
If you (as a human) are viewing the logs, why not use Ctrl-C?
If you simply want all jobs to be killed when the first finishes, then append true to your command to force it to become a success:
parallel -Sserver{1..10} --halt now,success=1 dosomething {}\;true ::: {1..100}
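The appended `true` works because a shell command list takes the exit status of its last command, so each job always reports success no matter what dosomething returned. A plain-shell illustration of that part (no parallel required):

```shell
# grep exits 1 when the pattern is absent...
sh -c 'grep needle /dev/null'; echo "exit status: $?"         # exit status: 1

# ...but appending ";true" forces the job's status to 0 either way.
sh -c 'grep needle /dev/null; true'; echo "exit status: $?"   # exit status: 0
```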

What happens if I erase the process dictionary of a gen_server module?

I was playing with the process dictionary inside a gen_server module. I called the get() function and got something like this:
[{'$ancestors',[main_server,<0.30.0>]},
 {'$initial_call',{child_server,init,1}}]
What happens if I erase the process dictionary; what would go wrong?
I erased it and everything worked fine: even when I called a function that generates an exception in the child_server, the main_server could still get the exit signal.
$ancestors is used only in the initialization stage, to get the parent's PID, which is used to catch the EXIT message coming from the parent, so that the terminate stuff can get executed. Erasing this key when the server is up and running makes no difference.
$initial_call, on the other hand, is used in the crash report by proc_lib to dump the MFA info.
A quick grep in the OTP source tree can certainly help.
I think some debug functions may use the process dictionary, for example erlang:process_info/2.

Erlang: Is it OK to write an application without a supervisor?

I don't need a supervisor for a specific application I'm developing. Is it OK not to use one?
The doc says about start/2 that it
"should return {ok,Pid} or {ok,Pid,State} where Pid is the pid of the
top supervisor",
so I'm not sure whether it is OK not to start a supervisor and to return some invalid pid (I tried it and nothing bad happened).
Returning an {ok, self()} or something similar works fine until you start doing release upgrades. At that point, you'll need to use a supervisor with an empty child list. (The application and supervisor behaviours don't have colliding callback functions, so you can put both in the same module.)
Just to make sure: you are doing some kind of initialisation in your application module's start callback function, right? If not, you can just remove the mod directive from the .app file and the callback won't even be called, and thus there will be no supervisor, real or fake.

How to properly run a Symfony task in the background from an action?

$path = sfConfig::get('sf_app_module_dir')."/module/actions/MultiTheading.php";
foreach ($arr as $id)
{
    if ($id)
        passthru("php -q $path $id $pid &");
}
When I run the action, the script runs sequentially despite the "&".
Please help.
There are two common methods to achieve what you want.
Both involve creating a table in your database (kind of a to-do list). Your frontend saves work to do there.
The first one is easier, but it's only OK if you don't mind a slight latency. You start by creating a symfony task that is run periodically (every 10/30/whatever minutes). When it wakes up, it checks that table for anything to do and simply exits if there is nothing. Otherwise it does what it needs to, then marks the rows as processed.
The second one is more work and more error-prone, but can act instantly. You create a task that daemonizes itself when started (forks, detaches from its parent, and forks again), then goes to sleep. When you have some work to do, you wake it up by sending it a signal. Daemonizing and signal sending/receiving can be done with PHP's pcntl_* functions.

stop the kernel32 event

How can I stop or freeze a kernel32 event? For example, stop a file copy?
You can't stop the kernel from copying files. If you want to stop the user from copying files, then you need to write a hook that implements the ICopyHook interface.
I'm not sure what exactly you want to do, but if you are using the CopyFile winapi, then you should look at CopyFileEx.
You can pass it lpProgressRoutine - a pointer to your callback function - and return PROGRESS_CANCEL from it when you want to stop the file copy operation.
Also, starting from Vista, you can cancel synchronous I/O operations from a different thread with CancelSynchronousIo, so you should be able to stop a CopyFile operation.
