Determining if an Oz variable is bound? - declarative

Is there a safe way to ask whether a single-assignment variable in Oz is bound or not?
Using an unassigned dataflow variable in a way that requires its value will cause the program to wait until a value is assigned; in a sequential environment, this means the program hangs. Assigning a different value to an already-bound variable will cause the program to fail. So both ways "tell" me whether the variable was bound, but not in a safe way.
I'm looking for some function "Bound" where
local X Y=1 Xbound Ybound in
Xbound={Bound? X}
Ybound={Bound? Y}
end
gives false and true for Xbound and Ybound respectively.
My use case involves processing a list where values are added incrementally, with the last value always being unbound. I want to use the last bound item (the one before the unbound one). And I'm trying to work in the Oz paradigm with the fewest concepts added (so no mutable variables or exceptions).

You can check whether a variable is bound with the function IsDet.
See here: http://mozart.github.io/mozart-v1/doc-1.4.0/base/node4.html (also works in Mozart 1.3.0)
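For example (a quick sketch for the interactive environment; Browse is only used to display the results):
local X Y=1 in
   {Browse {IsDet X}}   % displays false
   {Browse {IsDet Y}}   % displays true
end
And for the use case in the question (a list that grows incrementally and always ends in an unbound tail), here is a sketch of a function that returns the last bound element; it assumes at least one element is already bound and that only the tail is unbound:
fun {LastBound Xs}
   case Xs of X|Xr then
      if {IsDet Xr} then {LastBound Xr} else X end
   end
end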
A word of caution: if you are using multiple threads, this opens the door for race conditions.

Related

How to stop summation index from constantly changing

I have some expressions containing nested sums, and I want to substitute one of the summation indices (such as i369) for something else.
The problem is that Maxima sometimes seems to re-define the whole sum every time expr is evaluated, so every time I call second(expr) I get new index names instead of i369.
When I then call subst, nothing happens, because the index it is told to replace no longer appears in the expression it sees.
How do I stop this from happening?
Thanks for the additional information. The changing indices are the result of sumexpand being true: test3 is a product of summations, and it gets resimplified according to sumexpand every time it is evaluated. The sumexpand transformation assigns new indices every time it processes a summation, so that is how you end up with different indices each time you evaluate test3.
I think the clearest way forward is to enable sumexpand only when you need to process a summation, and otherwise leave it disabled. Something like block([sumexpand: true], <whatever operations here>) enables sumexpand for the duration of the block and then restores the previous value. That pattern works for global flags in general.
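For example (a sketch only: process_sums is a placeholder name for whatever operation actually needs the expansion, and test3 is the expression from the question):
block ([sumexpand : true],
       process_sums (test3));
As soon as the block returns, sumexpand goes back to whatever value it had before, so nothing else in the session is affected.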
As an aside, bear in mind that function definitions are always global. When you write f(x) := block(g(y) := ..., ...), the function g is not local to f; instead f and g are both globally defined. You can supply a lambda expression to scanmap instead of a named function, or just pull the would-be local function out and define it outside of simplify_expr_for_extract.
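For instance, a minimal sketch of the lambda variant (the ratsimp call is only a placeholder body; the point is the structure):
simplify_expr_for_extract (e) := scanmap (lambda ([s], ratsimp (s)), e);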

Unable to verify UPPAAL properties

I am verifying a very small model, but I get a memory-exhausted message. I have changed the model several times but keep running into the same problem.
I thought the problem might be caused by using a user-defined function, or by using the select option to get a random number. So I changed the model so that it neither calls the function nor uses the select option, but the problem is still there.
I am wondering whether it is an UPPAAL issue or a problem in my model. There is no error other than the memory exhaustion. Once the values of "r1" and "r2" are changed (incremented), the CTL property can no longer be verified.
The CTL property is verified for all values of r1 and r2 before the increment.
The model increments several variables (r1, r2 and cntr): if there is no upper bound on them (and it seems there is not, but I cannot see all the functions), then the state space is going to be huge (all their possible values, multiplied by the number of locations, multiplied by the clock zones) and will thus exhaust all the memory.
Either make those variables bounded (do not allow increments past some value), or declare them as meta (don't do that if you do not understand the consequences).
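For example, in the global declarations this could look roughly like the following (the bound of 10 is only illustrative; pick whatever your model actually needs):
int[0,10] r1, r2;  // bounded integers: make sure guards keep the increments within the range
meta int cntr;     // meta variable: its value is not considered when states are compared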

Am I basically doing extra work by making my local functions global?

I read that it is faster and better to keep most of your functions local instead of global.
So I'm doing this:
input = require("input")
draw = require("draw")
And then in input.lua for example:
local tableOfFunctions = {isLetter = isLetter, numpadCheck = numpadCheck, isDigit = isDigit, toUpper = toUpper}
return tableOfFunctions
Where isLetter, numpadCheck etc are local functions for that file.
Then I call the functions like so:
input.isLetter(key)
Now, my question is: am I reinventing the wheel with this? Aren't global functions stored in a Lua table anyway? I do like the way it looks with the input. prefix before the function name; it keeps things nice and tidy, so I may keep it if it's not bad coding practice.
Reinventing wheels tailored to your personal needs is a centerpiece of Lua.
The method you describe is presented as a valid one by Lua's creator himself in his book, Programming in Lua.
Everything in Lua is stored inside a table. The "faster" local functions (as well as faster local variables) come from the way globals and upvalues are looked up.
What follows is a quote of the relevant part of a more detailed explanation of the speed question that came up on a game's forum.
Apart from that, locals are recommended for code cleanliness and error-proofing.
In Lua a table is created with {}; this operator reserves a certain amount of memory for the table. That reserved space stays constant and does not move; the exceptions are implementation details that a script writer should not concern himself with.
Any variable you assign a table to, as in
a={};
b={ c={a} }
is just a pointer to the table in memory. A pointer takes up either 32 or 64 bits, and that's it.
Whenever you pass a table around, only that pointer is copied.
Whenever you query a table in a table:
return b.c[1]
the computer follows the pointer stored in b, finds a table object in RAM, queries it for the key "c", takes the pointer to another table, queries that one for the index 1, and then returns the pointer to the table a. Just simple pointer hopping, with a workload on par with arithmetic.
Every function has an associated environment table, _ENV. A variable lookup such as
return a
is, for a non-local name, actually a query to that table:
return _ENV.a
If the variable is local, the lookup stops right there: a local is resolved directly, in a single step.
If there is no local with the given name, the global variables are queried; those reside in the top-level table, the _ENV of the root function of the script (the function that require or dofile loads and executes).
Usually, a link to the global table is stored in any other _ENV as _G. So the access to a global variable
return b
is actually something like
return _ENV.b or _ENV._G.b
Thus it is about 3 pointer jumps instead of 1.
Here is a convoluted example that should give you an idea of the amount of work that implies:
-- RUN THIS IN A STANDALONE LUA INTERPRETER
local depth=100--how many pointers will be in a chain
local q={};--a table
local a={};--a start of pointer chain
local b=a; -- intermediate variable
for i=1,depth do b.a={} b=b.a end; --setup chain
local t=os.clock();
print(q)
print(os.clock()-t);--time of previous line execution
t=os.clock(); --start of pointer chain traversal
b=a
while b.a do b=b.a end
print(b)
print(os.clock()-t)--time of pointer traversal
When the pointer chain is about 100 elements long, system load fluctuations may actually cause the second time to be smaller. Direct access gets notably faster only when you change depth to thousands of intermediate pointers or more.
Note that whenever you query an uninitialized variable, all 3 jumps are taken.
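A practical idiom that follows from this (shown as a sketch of my own, not taken from the quoted explanation): cache a global, or a field of a global table, in a local before a hot loop, so each use inside the loop is a single local access.
-- one local lookup per call instead of going through the environment each time
local floor = math.floor
local sum = 0
for i = 1, 1000000 do
    sum = sum + floor(i / 2)
end
print(sum)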
Globals are stored in the reserved table _G (the contents of which you can examine at any time), but it is good programming practice to avoid the use of globals.
Unless there is a very good reason not to, your table input should be local as well.
From Programming in Lua:
It is good programming style to use local variables whenever possible. Local variables help you avoid cluttering the global environment with unnecessary names. Moreover, the access to local variables is faster than to global ones.
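Applied to the snippet from the question, that advice simply means keeping the module tables local as well (this sketch assumes the input and draw modules from the question):
local input = require("input")
local draw  = require("draw")

local ok = input.isLetter("a")   -- usage stays exactly the same as before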

Erlang - How to construct reference(), tref(), socket()... values?

The problem is that sometimes I forget to assign the returned value to a variable.
A pid() value can be constructed with pid(X, Y, Z).
How can we do the same for references, timer references, sockets, ports, and so on?
You can create a reference only by using make_ref/0. The whole point of references is that "The reference is unique among connected nodes", so if you didn't assign it to anything, there is no way to recreate it. tref() is actually a reference, so the same applies.
But in the shell, you can use v(-1) to get the return value of the previous command (and v(-N) to get the value N commands back). Search http://erlang.org/doc/man/shell.html for "v(" to see examples.
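For example, an annotated, illustrative shell session (the printed reference value is made up):
1> make_ref().     %% oops, forgot to bind the result
#Ref<0.0.0.42>
2> R = v(-1).      %% v(-1) fetches the previous command's value
#Ref<0.0.0.42>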

How can we implement loop constructs in genetic programming?

I've been playing around with genetic programming for some time and started wondering how to implement looping constructs.
In the case of for loops I can think of 3 parameters:
start: starting value of counter
end: counter upper limit
expression: the expression to execute while counter < end
Now the tricky part is the expression because it generates the same value in every iteration unless counter is somehow injected into it. So I could allow the symbol for counter to be present in the expressions but then how do I prevent it from appearing outside of for loops?
Another problem is using the result of the expression. I could have a for loop which sums the results, another one that multiplies them together but that's limiting and doesn't seem right. I would like a general solution, not one for every operator.
So does anyone know a good method to implement loops in genetic programming?
Well, that's tricky. Genetic programming (the original Koza-style GP) is best suited to functional-style programming, i.e. there is no internal execution state and every node is a function that returns (and maybe takes) values, like Lisp. That is a problem when the node is some kind of loop - it is not clear what such a node should return.
You could also design your loop node as a binary node. One parameter is a boolean expression that is evaluated before every iteration; if it returns true, the loop body is executed. The second parameter is the loop body expression.
That still has the problem you already mentioned: there is no way for the loop expression to change between iterations. You can solve this by introducing a concept of internal state, or variables. But that leaves you with other problems, such as having to define the number of variables. A variable can be realized e.g. by a pair of functions - a setter (one argument, no return value, or it can return its argument) and a getter (no arguments, returns the value of the variable).
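As a tiny sketch of that idea (in Lua, purely for illustration; nothing here is prescribed by the answer), one evolved "variable" can be realized as a pair of closures over a shared value:
local function make_variable(initial)
    local value = initial
    -- setter node: one argument, stores it and passes it through
    local set = function(v) value = v; return v end
    -- getter node: a leaf that returns the current value
    local get = function() return value end
    return set, get
end

local setx, getx = make_variable(0)
setx(3)
print(getx())  -- 3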
Regarding the way the loop results are processed, you could step from GP to strongly typed GP, or STGP for short. It is essentially GP with types. Your loop could then effectively be a function that returns a list of values (e.g. numbers), and you could have other functions that take such lists and compute other values...
There is another GP algorithm (my favourite), called Grammatical Evolution (or GE), which uses a context-free grammar to generate the programs. It can be used to encode type information like in STGP. You could also define the grammar in such a way that classical C-like for and while loops can be generated. Some extensions to it, like Dynamically Defined Functions, could be used to implement variables dynamically.
If there is anything unclear, just comment on the answer and I'll try to clarify it.
The issue with zegkjan's answer is that there is more than one way to skin a cat.
There's actually a simpler, and at times better, way to build GP data structures than Koza trees: stacks.
This method is called Stack-Based Genetic Programming, and it is quite old (1993). It removes trees entirely: you have an instruction list and a data stack (the function and terminal sets remain the same). You iterate through the instruction list, pushing values onto the data stack, popping values to consume them, and returning a new value (or values) to the stack according to each instruction. For example, consider the following genetic program.
0: PUSH TERMINAL X
1: PUSH TERMINAL X
2: MULTIPLY (A,B)
Iterating through this program will give you:
step1: DATASTACK X
step2: DATASTACK X, X
step3: DATASTACK X^2
Once you have executed all the statements in the program list, you just take the number of elements you care about off the stack (you can get multiple values out of a GP program this way). This ends up being a fast and extremely flexible method (good memory locality; the number of parameters and the number of returned elements don't matter) that you can implement fairly quickly.
To loop with this method, you can create a separate stack, an execution stack, where new special operators push and pop multiple statements at once to be executed afterwards.
Alternatively, you can simply include a jump statement that moves backwards in the program list, add a loop statement, or keep a separate stack holding loop information. With this, a genetic program could theoretically develop its own for loop (a small interpreter sketch after the trace below shows one way the bookkeeping can work).
0: PUSH TERMINAL X
1: START_LOOP 2
2: PUSH TERMINAL X
3: MULTIPLY (A, B)
4: DECREMENT_LOOP_NOT_ZERO
step1: DATASTACK X
LOOPSTACK
step2: DATASTACK X
LOOPSTACK [1,2]
step3: DATASTACK X, X
LOOPSTACK [1,2]
step4: DATASTACK X^2
LOOPSTACK [1,2]
step5: DATASTACK X^2
LOOPSTACK [1,1]
step6: DATASTACK X^2, X
LOOPSTACK [1,1]
step7: DATASTACK X^3
LOOPSTACK [1,1]
step8: DATASTACK X^3
LOOPSTACK [1,0]
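To make the mechanics concrete, here is a small interpreter sketch (in Lua, only because no host language is fixed here; the opcode names and exact loop semantics are assumptions chosen to reproduce the trace above):
local function run(program, x)
    local data, loops = {}, {}            -- data stack and loop stack
    local pc = 1                          -- program counter (Lua tables are 1-based)
    while pc <= #program do
        local op, arg = program[pc][1], program[pc][2]
        if op == "PUSH_X" then
            data[#data + 1] = x
        elseif op == "MULTIPLY" then
            local b, a = table.remove(data), table.remove(data)
            data[#data + 1] = a * b
        elseif op == "START_LOOP" then
            -- remember where the loop starts and how many iterations remain
            loops[#loops + 1] = { start = pc, count = arg }
        elseif op == "DEC_LOOP_NOT_ZERO" then
            local top = loops[#loops]
            top.count = top.count - 1
            if top.count > 0 then
                pc = top.start            -- jump back; pc + 1 below lands on the loop body
            else
                table.remove(loops)       -- loop is done, pop it
            end
        end
        pc = pc + 1
    end
    return data[#data]                    -- whatever is left on the data stack is the output
end

-- the x^3 program from the trace above, evaluated at x = 5
print(run({ {"PUSH_X"}, {"START_LOOP", 2}, {"PUSH_X"},
            {"MULTIPLY"}, {"DEC_LOOP_NOT_ZERO"} }, 5))   -- prints 125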
Note, however, that with any method it may be difficult for a GP run to actually evolve an individual that contains a loop, and even if it does, such a mechanism is likely to produce a poor fitness evaluation at the start, which will probably remove it from the population anyway. To fix this type of problem (potentially good innovations dying early due to poor early fitness), you will need to include the concept of demes in your genetic program, to isolate genetically disparate subpopulations.
