How to return a value from a Tk toplevel window?

I have a Tcl script that opens a Tk toplevel window. In this window the user can manipulate some widgets (buttons that call other procedures to do some work, checkbuttons, etc.). When the user is finished, I want them to click a button in this window that causes the toplevel to be destroyed and the procedure that opened it to return a value. Here is what I've got:
proc openWindow {} {
    set w .testwindow
    catch {destroy $w}
    toplevel $w
    <here I setup all the widgets of the window>
    button $w.btn -text "Exit" -command {
        set ret [finishTest]
        puts "returning $ret"
        return $ret
    }
    pack $w.btn
    pack $w
}
proc finishTest {} {
    <here I evaluate the state of the $w widgets>
    if {some condition} {
        destroy $w
        return 0
    } else {
        destroy $w
        return -1
    }
}
When I call openWindow, the window displays and behaves as it should, and when I click the Exit button it correctly prints "returning $ret". But when I call puts [openWindow], it only prints a new line with no other characters after I click the button.
I defined the $w variable globally so that I can access it outside openWindow, in the finishTest procedure.
Thank you for your advice!

Callbacks can't affect local variables, and your openWindow isn't currently waiting for the callback to run before returning. To make it work, you instead have to use vwait (or tkwait) to let the event loop run until the callback is performed. (It does this by running a subsidiary event loop, which does take stack space, so watch out for reentrancy problems.) Check out the marked lines below (your finishTest is unchanged).
proc openWindow {} {
    global ret; #########################
    set w .testwindow
    catch {destroy $w}
    toplevel $w
    <here I setup all the widgets of the window>
    button $w.btn -text "Exit" -command {
        set ret [finishTest]; #########################
    }
    pack $w.btn
    pack $w
    vwait ret; #########################
    puts "returning $ret"; #########################
    return $ret; #########################
}
It's possible to untangle this mess somewhat with Tcl 8.6's coroutines, provided the call into openWindow can be done from inside a coroutine (i.e., a suspendable stack context). That would make this part of the code look more like (pay attention to marked lines again):
proc openWindow {} {
    set w .testwindow
    catch {destroy $w}
    toplevel $w
    <here I setup all the widgets of the window>
    button $w.btn -text "Exit" -command [info coroutine]; #########################
    pack $w.btn
    pack $w
    yield; #########################
    set ret [finishTest]; #########################
    puts "returning $ret"; #########################
    return $ret; #########################
}

Related

How to stop parsing and reset yacc?

I am writing a job control shell. I use Yacc and Lex for parsing. The top rule in my grammar is pipeline_list, which is a list of pipelines separated by semicolons. Thus, examples of pipeline lists are as follows:
cmd1 | cmd2; cmd3; cmd4 | cmd5 <newline>
cmd1 <newline>
<nothing> <newline>
I represent a pipeline with the pipeline rule (shown below). Within that rule, I do the following:
1. Call execute_pipeline() to execute the pipeline. execute_pipeline() returns -1 if anything went wrong in the execution of the pipeline.
2. Check the return value of execute_pipeline(); if it is -1, then STOP parsing the rest of the input, AND make sure Yacc starts fresh when called again in the main function (shown below). The rationale for doing this is as follows:
Take, for example, the following pipeline list: cd ..; ls -al. My intent here would be to move one directory up and then list its contents. However, if execution of the first pipeline (i.e., "cd ..") in the pipeline list fails, then carrying on to execute the second pipeline (i.e., "ls -al") would list the contents of the current directory (not the parent), which is wrong! For this reason, when parsing a pipeline list of length n, if execution of some pipeline k ≤ n fails, then I want to discard the rest of the pipeline list (i.e., pipelines k+1..n), AND make sure the next invocation of yyparse() starts brand new (i.e., receives new input from readline() -- see code below).
I tried the following, but it does not work:
pipeline:
    simple_command_list redirection_list background pipeline_terminator // ignore these
    {
        if (execute_pipeline() == -1)
        {
            // do some stuff
            // then call YYABORT, YYACCEPT, or YYERROR, but none of them works
        }
    }
int main()
{
    while (1)
    {
        char *buffer = readline("> ");
        if (buffer)
        {
            struct yy_buffer_state *bp;
            bp = yy_scan_string(buffer);
            yy_switch_to_buffer(bp);
            yyparse();
            yy_delete_buffer(bp);
            free(buffer);
        } // end if
    } // end while
    return 0;
} // end main()
You can use YYABORT; in an action to abort the current parse and immediately return from yyparse with a failure. You can use YYACCEPT; to immediately return success from yyparse.
Both of these macros can only be used directly in an action in the grammar -- they can't be used in some other function called by the action.
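Applied to the grammar from the question, that might look roughly like this (the rule and execute_pipeline() are taken from the question; the cleanup comment is only a placeholder):
pipeline:
    simple_command_list redirection_list background pipeline_terminator
    {
        if (execute_pipeline() == -1)
        {
            /* Clean up any per-pipeline state here, then stop parsing.      */
            /* YYABORT makes yyparse() return 1 immediately, so the rest of  */
            /* the pipeline list is never executed.                          */
            YYABORT;
        }
    }
    ;
Since the main loop in the question creates a fresh scanner buffer with yy_scan_string() on every iteration, the next call to yyparse() already starts from a clean slate after the abort.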

How to handle nonexistent variables passed to a proc

I would like to create a procedure like this simple example:
proc name {args} {
    foreach val $args {
        puts $val
    }
}
But I would like the procedure to handle variables that don't exist, something like the code shown below:
proc name {args} {
    foreach val $args {
        if [info exists $val] {
            puts $val
        }
    }
}
The problem is that the code is never executed: as soon as I call the procedure with a nonexistent variable it immediately fails, before entering the body, saying that there is a variable that doesn't exist. Is that because the procedure checks argument existence before entering the body?
I can make it work by replacing args with several optional variables with predefined values, but that limits the procedure and makes it look bad.
Can I make a proc able to handle nonexistent variables?
You can't pass a variable as an argument: arguments have to be values. You can pass a variable name as an argument and use that as a reference to the variable inside the procedure. Example:
proc name args {
    foreach varname $args {
        upvar 1 $varname var
        if {[info exists var]} {
            puts $var
        }
    }
}
(The call to upvar creates a link between the variable whose name is the value of the variable varname outside the procedure and the variable called var inside the procedure. This is one way to "pass a variable to a procedure".)
Then you can do this:
% set foo 1 ; set baz 3
% name foo bar baz
1
3
Note that if you try to invoke the procedure as
% name $bar
where bar is undefined, the interpreter tries (and fails) to evaluate it before calling the procedure. That might be what you are seeing.
Documentation:
upvar
If we look at the point where you are calling the command (procedures are commands; they're a subclass really) you'll see something like this in your code:
name $a $b $c
That's fine if all those variables exist, but if one doesn't, it will blow up even before name is called. In Tcl, $a means exactly “read the variable a and use its contents here”, unlike in some other languages where $ means “look out language, here comes a variable name!”
Because of this, we need to change the calling convention to be one that works with this:
name a b c
That's going to require the use of upvar. Like this:
proc name {args} {
    foreach varName $args {
        # Map the caller's named variable to the local name “v”
        upvar 1 $varName v
        # Now we can work with v in a simple way
        if {[info exists v]} {
            puts $v
        }
    }
}
You made a mistake here
if [info exists $val]
When info exists is used, it should be given the variable name, not the variable's value.
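A minimal illustration of the difference (the variables x and name here are purely for demonstration):
set x 42
set name x
info exists x       ;# 1: a variable named "x" exists
info exists $name   ;# 1: $name expands to "x", so this also checks for "x"
info exists $x      ;# 0: this checks for a variable named "42"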
Let's come to your actual question.
You can pass the arguments to the procedure as key-value pairs; then it is pretty simple.
proc user_info {args} {
    # Convert the arguments into an array
    if {[catch {array set aArgs $args}]} {
        puts "Please pass the arguments as key-value pairs"
        return 1
    }
    # Assume we need to ensure these 3 arguments are passed for sure
    set mandatoryArgs "-name -age -country"
    foreach mArg $mandatoryArgs {
        if {![info exists aArgs($mArg)]} {
            puts "Missing mandatory argument '$mArg'"
            return 1
        }
    }
}
user_info -name Dinesh

Environment not modified when loading a module inside a modulefile

I would like to load a module from within a modulefile (to resolve dependencies).
MyModule:
#%Module########################################
##
## Modulefile
#
proc ModulesHelp { } {
    puts stderr "Env for MyProg"
}
proc addPath {var val} {
    prepend-path $var $val
}
module load MyOtherModule
addPath PATH /opt/MyModule/bin
MyOtherModule:
#%Module########################################
##
## Modulefile
#
proc ModulesHelp { } {
    puts stderr "Env for MyOtherProg"
}
proc addPath {var val} {
    prepend-path $var $val
}
addPath PATH /opt/MyOtherModule/bin
When I run module load MyModule, both modules seem to be loaded, but the environment is not right:
$ module list
Currently Loaded Modulefiles:
  1) MyModule   2) MyOtherModule
$ echo $PATH
/opt/MyModule/bin:/usr/bin:/bin
If I add the line foreach p [array names env] { set tmp $env($p) }, or at least set tmp $env(PATH), in MyModule after the module load MyOtherModule line, the environment is correctly modified. It also works fine if I don't use my addPath function but call the prepend-path command directly, which is a bit annoying because I would of course like to do more things in the addPath function.
Does anyone have an idea of what is going on and what I am missing?
The prepend-path is probably doing some “clever” stuff to manage a variable; what exactly it is is something I don't know and don't need to know, because we can solve it all using generic Tcl. To make your wrapping of it work, use uplevel to evaluate the code in the proper scope, though you need to consider whether to use the global scope (name #0) or the caller's scope (1, which is the default); they're the same when your procedure addPath is called from the global level, but otherwise can be quite different, and I don't know what other oddness is going on with the modules system processing.
To demonstrate, try this addPath:
proc addPath {var val} {
    puts stderr "BEFORE..."
    uplevel 1 [list prepend-path $var $val]
    puts stderr "AFTER..."
}
We use list to construct the thing to evaluate in the caller's scope, as it is guaranteed to generate substitution-free single-command scripts. (And valid lists too.) This is the whole secret to doing code generation in Tcl: keep it simple, use list to do any quoting required, call a helper procedure (with suitable arguments) when things get complicated, and use uplevel to control evaluation scope.
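As a small standalone illustration of that quoting rule (showPath is a made-up helper, and puts stands in for prepend-path here):
proc showPath {var val} {
    # Risky: building the script as a string re-substitutes and re-splits $val
    #   uplevel 1 "puts {$var <- $val}"
    # Safe: list always yields a single well-formed two-word command
    uplevel 1 [list puts "$var <- $val"]
}
showPath PATH {/opt/My Module/bin}   ;# prints: PATH <- /opt/My Module/bin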
(NB: upvar can also be useful — it binds local variables to variables in another scope — but isn't what you're recommended to use here. I mention it because it's likely to be useful if you do anything more complex…)

How to exit a Lua script's execution?

I want to exit execution of a Lua script on some condition.
Example:
content = get_content()
if not content then
    -- ( Here I want some kind of exit function )
end
next_content = get_content()
-- example: there can be lots of further checks
Here I want that, if I am not getting content, my script should terminate; it should not go on to the next check.
Use os.exit() or just return from some "main" function if your script is embedded.
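Applied to the snippet from the question, that might look like this (the exit status 1 is just a conventional choice for failure):
content = get_content()
if not content then
    os.exit(1)   -- terminate the whole script with a non-zero status
end
next_content = get_content()
-- further checks only run when content was obtained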
os.exit(): terminates the process
do return end: stops execution of the current chunk
The two methods are not equivalent if you want to keep writing and executing Lua code in the interpreter after stopping execution, for example when launching your program with the -i flag:
th -i main.lua
An extract from the Lua documentation:
For syntactic reasons, a break or return can appear only as the last statement of a block (in other words, as the last statement in your chunk or just before an end, an else, or an until). For instance, in the next example, break is the last statement of the then block.
local i = 1
while a[i] do
    if a[i] == v then break end
    i = i + 1
end
Usually, these are the places where we use these statements, because any other statement following them is unreachable. Sometimes, however, it may be useful to write a return (or a break) in the middle of a block; for instance, if you are debugging a function and want to avoid its execution. In such cases, you can use an explicit do block around the statement:
function foo ()
    return            --<< SYNTAX ERROR
    -- `return' is the last statement in the next block
    do return end     -- OK
    ...               -- statements not reached
end
In Lua 5.2.0-beta-rc1+, you can add a label at the end of your code called ::exit:: or something of the like, and then whenever you need to exit the program, jump to it like this:
goto exit
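A short sketch of how that could be laid out (the label name exit is arbitrary, and the surrounding code follows the question's snippet):
content = get_content()
if not content then
    goto exit   -- skip the rest of the script
end
next_content = get_content()
-- ... further checks ...
::exit::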

How to add hooks to a PowerShell script

I have a PowerShell script called PostPro.ps1. I would like to provide hooks for this script so that, if there is a need, one can add functionality before and after execution of the PostPro.ps1 script.
Thanks in advance for your help!
Ramani
Another way, with parameters:
postpro.ps1:
[CmdletBinding()]
Param(
    [ScriptBlock]$before,
    [ScriptBlock]$after
)
if ($before -ne $null) {
    Invoke-Command $before
}
write-host "hello"
if ($after -ne $null) {
    Invoke-Command $after
}
Then one can provide scripts to execute:
$b={write-host "before"}
$a={write-host 'after' }
PS>.\postpro.ps1 -before $b -after $a
before
hello
after
One way to do this would be to use modules. Put all of your extension functions in modules in a certain folder with a certain name format, and give each module a runBefore and a runAfter function.
In your PostPro.ps1 script you can load the modules like this:
$modules = ls $(Join-Path $hookDir "postPro-extension-*.psm1") |
    % { Import-Module $_.FullName -AsCustomObject }
This will load all of the files in $hookDir that have a name like postPro-extension-doSomething.psm1. Each module is stored in an object that gives you access to that module's functions. To run the functions you can just call them on each object, as shown below.
You can run this before the main part of the script:
$modules | % { $_.runBefore() }
and this after the main part of the script:
$modules | % { $_.runAfter() }
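For illustration, one such module file might look like this (the file name and the Write-Host bodies are placeholders):
# postPro-extension-logging.psm1 -- hypothetical example extension
function runBefore {
    Write-Host "logging: before the main part of PostPro.ps1"
}
function runAfter {
    Write-Host "logging: after the main part of PostPro.ps1"
}
Export-ModuleMember -Function runBefore, runAfter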
