AutoHotkey - focus a window with no title

I have been trying to get focus on a window during a software installation process. The window has no title. The script keeps failing. Can someone let me know if there is something I can change in my script?
This should work to click the OK button:
Sleep, 4000
Send, {Control down}
MouseClick, Left, 300, 185
Send, {Control up}
The result is that it opens Google Chrome next to the Windows Start menu, instead of clicking the intended spot on the open window in the middle of the desktop.
My full script is below:
#NoEnv ; Recommended for performance and compatibility with future AutoHotkey releases.
;#Warn ; Enable warnings to assist with detecting common errors.
SendMode Input ; Recommended for new scripts due to its superior speed and reliability.
SetWorkingDir %A_ScriptDir% ; Ensures a consistent starting directory.
Sleep, 1000 ;language selection and next.
Send, {tab}
Sleep, 500
Send, {tab}
Sleep, 500
Send, {tab}
Sleep, 500
Sleep, 2000
Send, {Enter}
Sleep, 1000 ;directory and installation.
Send, {tab}
Sleep, 500
Send, {tab}
Sleep, 500
Send, {tab}
Sleep, 500
Send, {tab}
Sleep, 500
Send, {tab}
Sleep, 5000
Send, {Enter}
Sleep, 500
Send, {tab}
Sleep, 5000 ;for installation wait time.
Send, {Enter} ;finish.
Sleep, 7000
Run "myexecutable.exe"
Sleep, 4000 ;focus attempt 2.
Send, {Control down}
MouseClick, Left, 300, 185
Send, {Control up} ;for association OK.

You can use mouse clicks in relative mode:
CoordMode, Mouse, Window
This will make the coordinates of the click relative to the active window instead of the screen.
So calculate the new coordinates for the position on the window you need to click on, and make sure that window is active.
To identify the window, use the executable name of the program: instead of a WinTitle parameter, use ahk_exe followed by the process name. For example, to activate the window:
WinActivate, ahk_exe myexecutable.exe
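Putting the two together, a minimal sketch (assuming myexecutable.exe owns the untitled dialog and that the OK button really sits at 300, 185 in window coordinates):
WinActivate, ahk_exe myexecutable.exe
WinWaitActive, ahk_exe myexecutable.exe,, 5 ; wait up to 5 seconds for it to be active
if ErrorLevel ; the window never became active
{
    MsgBox, Could not activate the installer window.
    return
}
CoordMode, Mouse, Window ; click coordinates are now relative to the active window
MouseClick, Left, 300, 185
You can use Window Spy (right-click the tray icon) to read off the coordinates relative to the window rather than the screen.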

Related

erlang - monitor sends information to the shell and not to the gen_server

I'm trying to connect a gen_server to another gen_server. During the connect, the servers need to monitor each other and know when a server has crashed, whether the entire node or just the server process. After the first start_link, when one of the servers crashes, the other server gets a message from the monitor in the code (the handle_info function is activated). But when it happens a second time, the monitor sends the information directly to the shell (the message does not go through handle_info and is only visible using flush() in the shell), and the server that was supposed to be alerted by the monitor doesn't receive any message.
My code on the sending side:
handle_call({connect, Node, Who}, _From, _State) ->
    case Who of
        cdot ->
            ets:insert(address, {cdot, Node}),
            ets:insert(address, {Node, cdot}),
            monitor_node(Node, true);
        cact ->
            ets:insert(address, {cact, Node}),
            ets:insert(address, {Node, cdot}),
            monitor_node(Node, true);
        ctitles ->
            ets:insert(address, {ctitles, Node}),
            ets:insert(address, {Node, cdot}),
            monitor_node(Node, true);
        _ -> ok
    end,
    [{_, Pid2}] = ets:lookup(?name_table3, pidGui),
    Pid2 ! {db, "Node " ++ atom_to_list(Who) ++ " connected"}, % print to GUI which node was connected
    {reply, {{node(), self()}, connected}, node()};
and the one on the receiving side is:
connect() ->
    {{Node, Pid}, Connected} =
        gen_server:call(server_node(), {connect, node(), cact}),
    monitor_node(Node, true),
    monitor(process, Pid),
    Connected.
Can anyone please tell me why this is happening?
The same thing happens for both node and process monitoring.
If you get the second monitor message in the shell, it is because you call the connect function in the shell context.
Check how you call this function: it must be done in the server context, that is, inside a handle_call, handle_cast, or handle_info function.
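A minimal sketch of what that means (the reconnect cast is a hypothetical name): move the monitor calls into a callback so they run in the server process, and the nodedown / 'DOWN' messages will then arrive in handle_info instead of the shell:
handle_cast({reconnect, Node, Pid}, State) ->
    monitor_node(Node, true),       % runs in the gen_server process,
    erlang:monitor(process, Pid),   % so the monitors belong to the server
    {noreply, State}.

handle_info({nodedown, _Node}, State) ->
    %% delivered here, not to the shell
    {noreply, State};
handle_info({'DOWN', _Ref, process, _Pid, _Reason}, State) ->
    {noreply, State}.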
After the first start_link, when one of the servers crashes, the
other server gets a message from the monitor in the code, but when it
happens a second time
It sounds like you are starting a new server after a server crashes. Do you call monitor() on the new server's Pid?
A monitor is triggered only once, after that it is removed from
both monitoring process and the monitored entity. Monitors are fired
when the monitored process or port terminates, does not exist at the
moment of creation, or if the connection to it is lost. In the case
with connection, we lose knowledge about the fact if it still exists
or not. The monitoring is also turned off when demonitor/1 is called.
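So once the crashed server is restarted, a new monitor must be created on the new pid; a sketch (peer_server is a hypothetical registered name, and State is assumed to be a map):
handle_info({'DOWN', _Ref, process, _Pid, _Reason}, State) ->
    %% this monitor has fired and is gone; to watch the restarted
    %% server we must monitor the *new* pid
    case whereis(peer_server) of
        undefined ->
            {noreply, State}; % peer not restarted yet
        NewPid ->
            NewRef = erlang:monitor(process, NewPid),
            {noreply, State#{peer => NewPid, ref => NewRef}}
    end.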

What does it mean that `gen_server` dodges auto-connections on sends but not suspends?

The gen_server implementation has this fun little function:
do_send(Dest, Msg) ->
    case catch erlang:send(Dest, Msg, [noconnect]) of
        noconnect ->
            spawn(erlang, send, [Dest, Msg]);
        Other ->
            Other
    end.
The documentation for erlang:send/3 says of the noconnect option:
If the destination node would have to be auto-connected before doing the send, noconnect is returned instead.
The function here avoids the delay in setting up a connection between nodes by forcing a spawned process to do the waiting. Clever!
There's another option to erlang:send/3, nosuspend:
If the sender would have to be suspended to do the send, nosuspend is returned instead.
Per erlang:send_nosuspend/2, the sender will be suspended if the connection is overloaded. Why would gen_server not wish to pull the same trick to avoid suspension of the sending process?
It does this when Dest is on another Erlang node. It first tries to send the message without forcing a connection to be set up if the nodes aren't connected (the [noconnect] option). If the nodes are already connected, erlang:send/3 just sends the message. If they aren't, it spawns a process that does a plain send and waits for the connection to be set up. Setting up a connection between two nodes can take time; this is, of course, so we don't sit and wait unnecessarily for the send.
EDIT:
The gen_server doesn't handle the nosuspend case at all; it only worries about the case where sending a message to a remote process could take time because of the need to wait for a connection to be set up, in which case a process is spawned so we can go on. This does not change the semantics. Handling nosuspend means dealing with eventual network problems, which would probably need more complex handling than should be provided in a standard API.
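To make the asymmetry concrete, a sketch of what a nosuspend-aware variant might look like (this is not what gen_server actually does):
do_send_nosuspend(Dest, Msg) ->
    case catch erlang:send(Dest, Msg, [noconnect, nosuspend]) of
        noconnect ->
            %% connection setup can safely be offloaded to another process
            spawn(erlang, send, [Dest, Msg]);
        nosuspend ->
            %% the connection is overloaded: now what? Drop the message,
            %% buffer it, retry later? Each choice changes delivery
            %% semantics, which is why gen_server leaves this case alone.
            spawn(erlang, send, [Dest, Msg]);
        Other ->
            Other
    end.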

wxLua - How do I implement a Cancel button?

I have a wxLua GUI app that has a "Run" button. Depending on the selected options, Run can take a long time, so I would like to implement a "Cancel" button/feature. But it looks like everything in wxLua runs on one GUI thread, and once you hit Run, pressing Cancel does nothing; the Run always goes to completion.
Cancel basically sets a variable to true, and the running process regularly checks that variable. But the Cancel button press event never happens while Run is executing.
I have never used coroutines; if the Run process regularly yields to a "Cancel check" process, will the Cancel event happen then?
Or is there another way?
(The following assumes that by "Run" you mean a long-running operation in the same process, not running an external process using wxExecute or wxProcess.)
The "Cancel" event is not triggered because, while your Run logic is executing, the UI never gets a chance to handle the click event.
To avoid blocking the UI you need to do something like this. When the Run button is clicked, create a coroutine around the function you want to run:
coro = coroutine.create(myLongRunningFunction)
Your Run event handler is done at this point. Then, in the EVT_IDLE event handler, you resume this coroutine for as long as it's not complete. It will look something like this:
if coro then -- only if there is a coroutine to work on
  local ok, res = coroutine.resume(coro, additional, parameters)
  -- your function either yielded or returned;
  -- you may check ok to see if there was an error;
  -- res can tell you how far you are in the process
  -- (coro can return multiple values: just give them as parameters to yield)
  if coroutine.status(coro) == 'dead' then -- finished or stopped with an error
    coro = nil
    -- do whatever you need to do knowing the process is completed
  end
end
You will probably need to request more IDLE events for as long as your process is not finished, as some operating systems will not trigger IDLE events unless there is some other event triggered. Assuming your handler has an event parameter, you can call event:RequestMore(true) to ask for more IDLE events (RequestMore).
Your long-running process will need to call coroutine.yield() at the right interval: not too short, or you will waste time switching back and forth, and not too long, or users will notice delays in the UI. You will probably need to experiment with this, but something timer-based, with 100 ms or so between calls, may work.
You can check for Cancel values either in your IDLE event handler or in the long-running function as you do now. The logic I described will give your application UI a chance to process Cancel event as you expect.
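Putting the pieces together, a minimal sketch (frame, ID_RUN, and ID_CANCEL are assumptions about your app; myLongRunningFunction must call coroutine.yield() periodically):
local coro, cancelled = nil, false

frame:Connect(ID_RUN, wx.wxEVT_COMMAND_BUTTON_CLICKED, function()
  cancelled = false
  coro = coroutine.create(myLongRunningFunction)
end)

frame:Connect(ID_CANCEL, wx.wxEVT_COMMAND_BUTTON_CLICKED, function()
  cancelled = true -- this handler can now run between resumes
end)

frame:Connect(wx.wxEVT_IDLE, function(event)
  if coro then
    local ok = coroutine.resume(coro)
    if not ok or cancelled or coroutine.status(coro) == 'dead' then
      coro = nil -- finished, failed, or cancelled
    else
      event:RequestMore(true) -- keep the IDLE events coming
    end
  end
end)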
I don't use wxWidgets, but the way I implement cancel buttons in my Lua scripts, which use IUP, is to have a cancel flag that is set when the button is pressed and checked via the progress display during the run.
Usage is like this:
ProgressDisplay.Start('This is my progress box', 100)
for i = 1, 100 do
  ProgressDisplay.SetMessage(i .. " %")
  fhSleep(50, 40) -- emulate performing the task
  ProgressDisplay.Step(1)
  if ProgressDisplay.Cancel() then
    break
  end
end
ProgressDisplay.Reset()
ProgressDisplay.Close()
If you want to see the definition of ProgressDisplay, see:
http://www.fhug.org.uk/wiki/doku.php?id=plugins:code_snippets:progress_bar

Does erlang:disconnect_node/1 immediately stop queued messages?

If I send a lot of messages to a remote node and immediately call erlang:disconnect_node/1 to drop the connection, is there a chance some messages don't get through the wire? In other words, does that function perform a brutal disconnection, regardless of waiting messages?
There is no guarantee the messages arrive, even with two local nodes!
Setup: I have a node a@super, on which a dummy receive-print loop runs, registered as a. On another node, I run:
(b@super)1> [{a, a@super} ! X || X <- lists:seq(0, 10000)], erlang:disconnect_node(a@super).
That is, many messages, and then a brutal disconnection.
Result: the receiver printed the full 10001 messages only once over 10 runs.
So you definitely do not have any guarantee that the receiver got all the messages. You should use another technique (I'm a novice at Erlang, sorry), or use an ack message before the disconnect.
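A sketch of the ack idea (the sync / sync_ack messages are hypothetical names). Because message order between the same pair of processes is preserved, once the receiver answers the sync, everything sent before it has already been delivered:
%% sender side: ask for an ack before dropping the connection
flush_then_disconnect(Name, Node) ->
    {Name, Node} ! {sync, self()},
    receive
        {sync_ack, Node} -> ok
    after 5000 -> timeout
    end,
    erlang:disconnect_node(Node).

%% receiver side: the dummy receive-print loop with a sync clause
loop() ->
    receive
        {sync, From} ->
            From ! {sync_ack, node()},
            loop();
        Msg ->
            io:format("~p~n", [Msg]),
            loop()
    end.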

Debugging Erlang heart timeouts

I use the heart program to restart an Erlang node when it becomes unresponsive. However, I am finding it hard to understand why the node freezes. SASL logs don't show any errors, and my own logs don't seem to show anything remarkable happening at those times. Can anybody give advice on debugging this sort of thing?
By default the heart program issues a SIGKILL to kill off the unresponsive VM so it can quickly start a new one. This makes getting any useful information about the VM pretty much impossible. Something I've tried in the past is to patch the heart program to avoid the hard kill and instead get the VM to create a crash dump and a coredump. I used a patch like this (this one is for Erlang/OTP R14B02):
--- erts/etc/common/heart.c.orig 2011-04-17 12:11:24.000000000 -0400
+++ erts/etc/common/heart.c 2011-04-17 12:12:36.000000000 -0400
@@ -559,10 +559,11 @@
     int res;
     if(heart_beat_kill_pid != 0){
 	pid = (pid_t) heart_beat_kill_pid;
-	res = kill(pid,SIGKILL);
+	res = kill(pid,SIGUSR1);
+	sleep(4);
 	for(i=0; i < 5 && res == 0; ++i){
 	    sleep(1);
-	    res = kill(pid,SIGKILL);
+	    res = kill(pid,i < 2 ? SIGQUIT : SIGKILL);
 	}
 	if(errno != ESRCH){
 	    print_error("Unable to kill old process, "
As you can see, with this patch heart will first issue a SIGUSR1 to try to get the VM to create a crash dump. Since this can take a while, heart then sleeps for 4 seconds. You might have to increase this sleep time if you're not getting full crash dumps. After that, heart tries twice to issue a SIGQUIT in the hope of getting a coredump, and if that fails, issues a SIGKILL.
Note that this patch will slow down heart's VM restart due to the time required to wait for the crash dumps and coredumps. If you use it in production, be aware of this limitation.
You could try to call erlang:halt/1 from your HEART_COMMAND, thus creating a crash dump from the unresponsive node.
You can try using the erl_call tool, e.g. with -a erlang halt 123.
If the Erlang node can't respond to this, that is also interesting information.
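For reference, a sketch of the halt call itself (the slogan string is arbitrary): when erlang:halt/1 is given a string, the VM writes an erl_crash.dump with that string as the slogan before exiting:
%% run on the stuck node, e.g. via HEART_COMMAND and erl_call
erlang:halt("heart timeout").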
Did you try increasing HEART_BEAT_TIMEOUT? Maybe the node is just bogged down a bit and misses the timeout but doesn't freeze.
If you have any idea why it is freezing, you could try to trace the module using dbg.
http://www.erlang.org/doc/man/dbg.html
In short, try:
dbg:tracer(), dbg:p(all,c), dbg:tpl(Module, Function, x).
To stop this tracing, issue:
dbg:ctpl()
See the documentation for more info.
Note: change Module and Function to whatever you want to trace, and leave x as it is. You can also skip Function and give only Module and x.
Warning: Running this on a live system can be dangerous as the amount of information that is going to be printed to the shell can be enormous.