How can we implement loop constructs in genetic programming?

I've been playing around with genetic programming for some time and started wondering how to implement looping constructs.
In the case of for loops I can think of 3 parameters:
start: starting value of counter
end: counter upper limit
expression: the expression to execute while counter < end
Now the tricky part is the expression, because it generates the same value in every iteration unless the counter is somehow injected into it. I could allow the counter symbol to appear in expressions, but then how do I prevent it from appearing outside of for loops?
Another problem is using the result of the expression. I could have a for loop which sums the results, another one that multiplies them together but that's limiting and doesn't seem right. I would like a general solution, not one for every operator.
So does anyone know a good method to implement loops in genetic programming?

Well, that's tricky. Genetic programming (the original Koza-style GP) is best suited to functional-style programming, i.e. there is no internal execution state and every node is a function that returns (and maybe takes) values, like Lisp. That is a problem when the node is a loop: it is not clear what the node should return.
You could also design your loop node as a binary node. One child is a boolean expression that is evaluated before every iteration; if it returns true, the loop body is executed. The second child is the loop expression itself.
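As a minimal sketch (in Python; the iteration cap is my own addition, a common GP safeguard rather than part of this proposal):
def while_node(cond, body, max_iter=100):
    """cond and body are zero-argument functions, i.e. evaluated subtrees."""
    result = 0.0
    for _ in range(max_iter):   # cap iterations so evolved programs always halt
        if not cond():
            break
        result = body()
    return result               # one choice: the node returns the last body value

# e.g. counting up via a closure over mutable state
n = {"i": 0}
def body():
    n["i"] += 1
    return n["i"]
print(while_node(lambda: n["i"] < 4, body))  # 4
Returning the last body value is just one option; as noted above, deciding what a loop node should return is exactly the awkward part.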
There is still the problem you already mentioned: the loop expression never changes between iterations. You can solve this by introducing a concept of internal state, i.e. variables. But that brings other problems, like the need to decide how many variables to provide. A variable can be realized, e.g., by a pair of functions: a setter (one argument, no return value, or it can return its argument) and a getter (no arguments, returns the current value of the variable).
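A sketch of that variable idea (names invented), where each variable contributes a setter node and a getter node to the function and terminal sets, closing over shared state:
def make_variable(state, name):
    def setter(value):        # one argument; returning it gives the node a value
        state[name] = value
        return value
    def getter():             # no arguments; reads the current value
        return state.get(name, 0.0)
    return setter, getter

state = {}
set_v0, get_v0 = make_variable(state, "v0")
set_v0(3.5)
print(get_v0())  # 3.5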
Regarding the handling of the loop result, you could step from GP to strongly typed GP, or STGP for short. It is essentially GP with types. Your loop could then effectively be a function that returns a list of values (e.g. numbers), and you could have other functions that take such lists and compute other values from them...
There is another GP algorithm (my favourite) called Grammatical Evolution (GE), which uses a context-free grammar to generate the programs. It can be used to encode type information, as in STGP. You could also define the grammar in such a way that classical C-like for and while loops can be generated, as sketched below. Some extensions to it, like Dynamically Defined Functions, could be used to implement variables dynamically.
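To make the grammar idea concrete, here is a minimal sketch in Python (the grammar and mapper are my own illustration, not from any particular GE implementation; real GE typically wraps around the genome instead of defaulting to the first production). Note how the counter symbol i is only reachable from inside the <loop> production, so it can never appear outside a loop, which addresses the original concern:
GRAMMAR = {
    "<code>":     [["<stmt>"], ["<stmt>", "<code>"]],
    "<stmt>":     [["x", "=", "<expr>", ";"], ["<loop>"]],
    "<loop>":     [["for(i=0; i<", "<const>", "; i++) {",
                    "x", "=", "<loopexpr>", ";", "}"]],
    "<expr>":     [["x"], ["<const>"], ["<expr>", "+", "<expr>"]],
    "<loopexpr>": [["i"], ["x"], ["<loopexpr>", "+", "<loopexpr>"]],
    "<const>":    [["1"], ["2"], ["3"]],
}

def derive(genome, grammar, start="<code>"):
    """Standard GE mapping: expand the leftmost non-terminal, using the next
    codon (mod the number of alternatives) to pick the production."""
    symbols, out, pos = [start], [], 0
    while symbols:
        sym = symbols.pop(0)
        if sym in grammar:
            choices = grammar[sym]
            if pos < len(genome):
                choice = choices[genome[pos] % len(choices)]
                pos += 1
            else:                    # out of codons: fall back to the first,
                choice = choices[0]  # non-recursive alternative
            symbols = list(choice) + symbols
        else:
            out.append(sym)
    return " ".join(out)

print(derive([1, 1, 2, 0, 0, 0], GRAMMAR))
# for(i=0; i< 1 ; i++) { x = i ; } x = x ;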
If there is anything unclear, just comment on the answer and I'll try to clarify it.

The issue with zegkjan's answer is that there is more than one way to skin a cat.
There's actually a simpler, and at times better, alternative to Koza trees for GP data structures: stacks.
This method is called stack-based genetic programming, and it is quite old (1993). It removes trees entirely: you have an instruction list and a data stack (your function and terminal sets remain the same). You iterate through the instruction list; each instruction pushes terminal values onto the data stack, or pops values to consume them and pushes its result(s) back. For example, consider the following genetic program.
0: PUSH TERMINAL X
1: PUSH TERMINAL X
2: MULTIPLY (A,B)
Iterating through this program will give you:
step1: DATASTACK X
step2: DATASTACK X, X
step3: DATASTACK X^2
Once you have executed all instructions in the program list, you just take the number of elements you care about off the stack (you can get multiple values out of a GP program this way). This ends up being a fast and extremely flexible method (good memory locality; the number of arguments consumed and values returned doesn't matter) that you can implement fairly quickly.
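Here is a minimal sketch, in Python, of the interpreter just described (the instruction encoding is invented for illustration):
def run(program, x):
    stack = []
    for op, arg in program:
        if op == "PUSH":                 # push a terminal onto the data stack
            stack.append(x if arg == "X" else arg)
        elif op == "MULTIPLY":           # consume two values, push the product
            stack.append(stack.pop() * stack.pop())
        # further functions (ADD, SIN, ...) follow the same pop/push pattern
    return stack                         # take as many results as you need

# the x * x program from the listing above
print(run([("PUSH", "X"), ("PUSH", "X"), ("MULTIPLY", None)], x=3))  # [9]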
To loop with this method, you can create a separate execution stack, with new special operators that push and pop multiple statements to and from it at once, to be executed afterwards.
Alternatively, you can simply include a jump statement that moves backwards in the program list, add a loop statement, or keep a separate stack holding loop information. With this, a genetic program could theoretically develop its own for loop, as in the trace below.
0: PUSH TERMINAL X
1: START_LOOP 2
2: PUSH TERMINAL X
3: MULTIPLY (A, B)
4: DECREMENT_LOOP_NOT_ZERO
step1: DATASTACK X          LOOPSTACK []
step2: DATASTACK X          LOOPSTACK [1,2]
step3: DATASTACK X, X       LOOPSTACK [1,2]
step4: DATASTACK X^2        LOOPSTACK [1,2]
step5: DATASTACK X^2        LOOPSTACK [1,1]
step6: DATASTACK X^2, X     LOOPSTACK [1,1]
step7: DATASTACK X^3        LOOPSTACK [1,1]
step8: DATASTACK X^3        LOOPSTACK [1,0]
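Extending the earlier Python sketch with a loop stack reproduces this behaviour. This is a minimal sketch; the exact frame contents (jump target versus instruction index, and when the frame is popped) are one possible choice, not the canonical one:
def run(program, x):
    data, loops = [], []
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "PUSH":
            data.append(x if arg == "X" else arg)
        elif op == "MULTIPLY":
            data.append(data.pop() * data.pop())
        elif op == "START_LOOP":                # remember where the body starts
            loops.append([pc + 1, arg])         # frame: [jump target, remaining]
        elif op == "DECREMENT_LOOP_NOT_ZERO":
            loops[-1][1] -= 1
            if loops[-1][1] > 0:
                pc = loops[-1][0]               # jump back to the loop body
                continue
            loops.pop()                         # counter hit zero: fall through
        pc += 1
    return data

prog = [("PUSH", "X"), ("START_LOOP", 2), ("PUSH", "X"),
        ("MULTIPLY", None), ("DECREMENT_LOOP_NOT_ZERO", None)]
print(run(prog, x=2))  # [8], i.e. x^3, matching the trace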
Note, however, that with any method it may be difficult for a GP run to actually evolve an individual that uses a loop, and even if it does, it's likely that such a mechanism would get a poor fitness evaluation at the start, removing it from the population anyway. To mitigate this kind of problem (potentially good innovations dying early due to poor early fitness), you'll need to include the concept of demes in your genetic program, to isolate genetically disparate subpopulations.


Why is splitting Rust's std::collections::LinkedList O(n)?

The .split_off method on std::collections::LinkedList is described as having O(n) time complexity. From the docs:
pub fn split_off(&mut self, at: usize) -> LinkedList<T>
Splits the list into two at the given index. Returns everything after the given index, including the index.
This operation should compute in O(n) time.
Why not O(1)?
I know that linked lists are not trivial in Rust. There are several resources going into the hows and whys, like this book and this article among several others, but I haven't had the chance to dive into those or the standard library's source code yet.
Is there a concise explanation about the extra work needed when splitting a linked list in (safe) Rust?
Is this the only way? And if not, why was this implementation chosen?
The method LinkedList::split_off(&mut self, at: usize) first has to traverse the list from the start (or the end) to position at, which takes O(min(at, n - at)) time. The actual split-off is a constant-time operation (as you said). And since the min() expression can be confusing, we may simply bound it by n, which is legal. Thus: O(n).
Why was the method designed like that? The problem goes deeper than this particular method: most of the LinkedList API in the standard library is not really useful.
Due to its cache unfriendliness, a linked list is often a bad choice to store sequential data. But linked lists have a few nice properties which make them the best data structure for a few, rare situations. These nice properties include:
Inserting an element in the middle in O(1), if you already have a pointer to that position
Removing an element from the middle in O(1), if you already have a pointer to that position
Splitting the list into two lists at an arbitrary position in O(1), if you already have a pointer to that position
Notice anything? The linked list is designed for situations where you already have a pointer to the position that you want to do stuff at.
Rust's LinkedList, like many others, just stores a pointer to the start and the end. To have a pointer to an element inside the linked list, you need something like an Iterator; in our case, that's IterMut. An iterator over a collection can function like a pointer to a specific element and can be advanced carefully (i.e. not with a for loop). And in fact, there is IterMut::insert_next, which allows you to insert an element in the middle of the list in O(1). Hurray!
But this method is unstable. And methods to remove the current element or to split the list off at that position are missing. Why? Because of the vicious circle that is:
LinkedList lacks almost all features that make linked lists useful at all
Thus (nearly) everyone recommends not to use it
Thus (nearly) no one uses LinkedList
Thus (nearly) no one cares about improving it
Goto 1
Please note that there are a few brave souls occasionally trying to improve the situation. There is the tracking issue about insert_next, where people argue that Iterator might be the wrong concept to perform these O(1) operations and that we want something like a "cursor" instead. And here someone suggested a bunch of methods to be added to IterMut (including cut!).
Now someone just has to write a nice RFC and someone needs to implement it. Maybe then LinkedList won't be nearly useless anymore.
Edit 2018-10-25: someone did write an RFC. Let's hope for the best!
Edit 2019-02-21: the RFC was accepted! Tracking issue.
Maybe I'm misunderstanding your question, but in a linked list, the links of each node have to be followed to proceed to the next node. If you want to get to the third node, you start at the first, follow its link to the second, then finally arrive at the third.
This traversal's complexity is proportional to the target node index n, because n nodes are traversed, so it's a linear O(n) operation, not a constant-time O(1) operation. The part where the list is "split off" is of course constant time, but the overall split operation is dominated by the O(n) cost of getting to the split-off node before the split can even be made.
One way in which it could be O(1) would be if a pointer already existed to the node at which the list is split off, but that is different from specifying a target node index. Alternatively, an index structure could be kept mapping node indices to node pointers, but that would cost extra space, plus processing overhead to keep it in sync with list operations.
pub fn split_off(&mut self, at: usize) -> LinkedList<T>
Splits the list into two at the given index. Returns everything after the given index, including the index.
This operation should compute in O(n) time.
The documentation is either:
unclear, if n is supposed to be the index,
pessimistic, if n is supposed to be the length of the list (the usual meaning).
The proper complexity, as can be seen in the implementation, is O(min(at, n - at)). Since at must be smaller than n, the documentation is correct that O(n) is a bound on the complexity (reached for at = n / 2); however, such a large bound is unhelpful.
That is, the fact that list.split_off(5) takes the same time if list.len() is 10 or 1,000,000 is quite important!
As to why this complexity: it is an inherent consequence of the structure of a doubly linked list. There is no O(1) indexing operation in a linked list, after all. The same operation implemented in C, C++, C#, D, F#, ... would have the exact same complexity.
Note: I encourage you to write a pseudo-code implementation of a linked list with the split_off operation; you'll realize this is the best you can get without altering the data structure into something else.
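For instance, here is such a pseudo-code-style sketch (written in Python, entirely my own, with edge cases kept deliberately simple). With only head/tail pointers and a length, any split-at-index must first walk to the split point; only the pointer rewiring afterwards is O(1):
class Node:
    def __init__(self, value):
        self.value, self.prev, self.next = value, None, None

class DoublyLinkedList:
    def __init__(self):
        self.head, self.tail, self.len = None, None, 0

    def append(self, value):
        node = Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next, node.prev = node, self.tail
            self.tail = node
        self.len += 1

    def split_off(self, at):
        assert 0 <= at <= self.len
        rest = DoublyLinkedList()
        if at == self.len:                      # nothing to split off
            return rest
        # O(min(at, len - at)): walk to the first node of the second half,
        # from whichever end is closer -- this is the whole cost.
        if at <= self.len - at:
            node = self.head
            for _ in range(at):
                node = node.next
        else:
            node = self.tail
            for _ in range(self.len - at - 1):
                node = node.prev
        # O(1): rewire a fixed number of pointers.
        rest.head, rest.tail, rest.len = node, self.tail, self.len - at
        self.tail, self.len = node.prev, at
        if node.prev is None:
            self.head = None
        else:
            node.prev.next = None
        node.prev = None
        return rest

lst = DoublyLinkedList()
for v in range(6):
    lst.append(v)
back = lst.split_off(4)   # walks from the tail: len - at = 2 steps, not 4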

Sequential, cumulative operations on Rascal trees

Given the immutability of data in Rascal, what is the preferred way to perform such operations when later ones depend on the results of earlier ones?
For example, assigning annotation values to every node in a tree, where values of higher nodes depend on values of lower ones. If you write a single visit statement with multiple cases, then insertion statements at lower levels don't change the tree, so higher level operations may have nothing on which to operate. On the other hand, surrounding each case statement with a visit statement -- and rebinding your tree variable to the new tree after every visit -- is cumbersome and, worse, seems to make the results depend on the order of the statements.
Subtle question :-) The visit statement has subtle semantics, especially if you look at it from an OO perspective. While it is actively traversing a value, it is also building a new one. Depending on the strategy (order) of the traversal, sequential matches in its case statements "see" different things.
It is sometimes insightful to imagine visit as a code generator that generates a set of mutually recursive functions, each taking the visited part as argument and returning a new value. The body of the visit (the cases) turns into a switch statement which is the core of these generated functions. Then, depending on the traversal order, the recursive steps are positioned either before the switch statement (bottom-up) or after it (top-down):
// bottom-up visit
value f(value x) {
  newChildren = map(f, children of x);
  x = newX(x, newChildren);
  switch (x) {
    case y: return whatever(y);
  }
}
Hence, in a bottom-up visit, the code in the switch cases sees the trees produced by the recursive calls (although no tree has actually been updated in place).
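Here is a minimal Python rendering of that model (the tuple encoding of the tree is my own; the Rascal example below uses the same tee/i shape). Children are rebuilt first, then the cases run on the node with its new children:
def visit_bottom_up(node, cases):
    children = tuple(visit_bottom_up(c, cases) if isinstance(c, tuple) else c
                     for c in node[1:])
    return cases((node[0],) + children)    # the cases see the rebuilt node

def cases(node):
    if node[0] == "i":                              # case i(x) => i(x+1)
        return ("i", node[1] + 1)
    if node[0] == "tee" and node[1][0] == "i":      # case tee(i(x),y) => tee(i(x+1),y)
        return ("tee", ("i", node[1][1] + 1), node[2])
    return node

t = ("tee", ("tee", ("i", 1), ("i", 2)), ("tee", ("i", 3), ("i", 4)))
print(visit_bottom_up(t, cases))
# ('tee', ('tee', ('i', 3), ('i', 3)), ('tee', ('i', 5), ('i', 5)))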
Here's an example:
rascal>data T = tee(T, T) | i(int i);
data T = tee(T, T) | i(int i);
ok
rascal>t = tee(tee(i(1),i(2)),tee(i(3),i(4)));
t = tee(tee(i(1),i(2)),tee(i(3),i(4)));
T: tee(
tee(
i(1),
i(2)),
tee(
i(3),
i(4)))
rascal>visit(t) {
>>>>>>> case i(x) => i(x+1)
>>>>>>> case tee(i(x),y) => tee(i(x+1),y)
>>>>>>>}
T: tee(
tee(
i(3), // here you see how 1 turned into 3 by incrementing twice
i(3)), // increment happened once here
tee(
i(5), // increment happened twice here too
i(5))) // increment happened once here
You can see that some nodes were incremented twice, because the second case matched on a tee after its child i node had already been visited and replaced by a different, already incremented i node.
Experimenting with the other strategies will give you other results; see http://tutor.rascal-mpl.org/Rascal/Rascal.html#/Rascal/Expressions/Visit/Visit.html. Note that the variables in the scope of the visit statement are shared by all case blocks at all levels of the recursion, which gives you the power to emulate zipper-like behavior (you can always store the previously visited node in a temporary).
As an aside, the language design tries to avoid the need for more involved "functional programming design patterns" such as zippers, because they complicate the type system and the way people interact with it. To make these things type-check in a visit that recurses over a heterogeneous data type, you'd need a PhD in polymorphism. Secretly, the visit statement simulates a set of built-in, type-safe, rank-2 higher-order polymorphic functions, but that's all under the hood.

working on sequences with lagged operators

A machine is turning on and off.
seqStartStop is a seq<DateTime*DateTime> that collects the start and end time of the execution of the machine tasks.
I would like to produce the sequence of periods where the machine is idle. In order to do so, I would like to build a sequence of tuples (beginIdle, endIdle).
beginIdle corresponds to the stopping time of the machine during the previous cycle.
endIdle corresponds to the start time of the current production cycle.
In practice, I have to build (beginIdle, endIdle) by taking the second element of tuple i-1 and the first element of the following tuple i.
I was wondering how I could get this task done without converting seqStartStop to an array and then looping through the array in an imperative fashion.
Another idea: create two copies of seqStartStop, one with the last element removed and one with the head removed (shifting the elements backwards), and then apply map2.
I could use skip and take as described here.
All these seem rather cumbersome. Is there something more straightforward?
In general, I was wondering how to execute calculations on elements with different lags in a sequence.
You can implement this pretty easily with Seq.pairwise and Seq.map:
let idleTimes (startStopTimes : seq<DateTime * DateTime>) =
    startStopTimes
    |> Seq.pairwise
    // each pairwise element is ((start1, stop1), (start2, stop2))
    |> Seq.map (fun ((_, stop), (start, _)) -> stop, start)
As for the more general question of executing on sequences with different lag periods, you could implement that by using Seq.skip and Seq.zip to produce a combined sequence with whatever lag period you require.
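The lag idea itself is language-agnostic; here is a minimal sketch (in Python for brevity; the F# version would combine Seq.skip and Seq.zip exactly as described above):
from itertools import islice, tee

def with_lag(xs, k):
    """Pair every element with the element k positions later."""
    a, b = tee(xs)
    return zip(a, islice(b, k, None))

# idle periods: stop of cycle i paired with start of cycle i + 1
cycles = [(1, 3), (5, 8), (9, 12)]        # (start, stop) times
idle = [(stop, start) for (_, stop), (start, _) in with_lag(cycles, 1)]
print(idle)                               # [(3, 5), (8, 9)]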
The idea of using map2 with two copies of the sequence, one slightly shifted by taking the tail of the original sequence, is quite a standard one in functional programming, so I would recommend that route.
The Seq.map2 function is fine with sequences of different lengths - it just stops when you reach the end of the shorter one - so you don't need to chop the last element off the original copy.
One thing to be careful of is how your original seq<DateTime*DateTime> is calculated. It will be recalculated each time it is enumerated, so with the map2 idea it will be calculated twice. If it's cheap to calculate and doesn't involve side-effects, this is fine. Otherwise, convert it to a list first with List.ofSeq.
You can still use Seq.map2 on lists, as a list is an IEnumerable (i.e. a seq). Don't use List.map2 unless the lists are the same length, though, as it is more picky than Seq.map2.

multi dimension dynamic array allocation

I want to use dynamic allocation for a multi-block CFD code, where the index (i,j,k) varies between blocks. I really don't know how to allocate arrays with arbitrary extents for n blocks and pass them to subroutines. I have given sample code, which produces the error message "Error: Expression at (1) must be scalar" on compilation with gfortran.
common/iteration/nb
integer, dimension (:),allocatable::nib,njb,nkb
real, dimension (:,:,:,:),allocatable::x,y,z
allocate (nib(nb),njb(nb),nkb(nb))
do l=1,nb
ni=nib(l)
nj=njb(l)
nk=nkb(l)
allocate (x(l,ni,nj,nk),y(l,ni,nj,nk),z(l,ni,nj,nk))
enddo
call gridatt (x,y,z,nib,njb,nkb)
deallocate(x,y,z,nib,njb,nkb)
end
subroutine gridatt (x,y,z,nib,njb,nkb)
common/iteration/nb
integer, dimension (nb)::nib,njb,nkb
real, dimension (nb,nib,njb,nkb)::x,y,z
do l=1,nb
read(7,*)nib(l),njb(l),nkb(l)
read(7,*)(((x(l,i,j,k),i=1,nib(l)),j=1,njb(l)),k=1,nkb(l)),
$ (((y(l,i,j,k),i=1,nib(l)),j=1,njb(l)),k=1,nkb(l)),
$ (((z(l,i,j,k),i=1,nib(l)),j=1,njb(l)),k=1,nkb(l))
enddo
return
end
The error message gfortran gives is about as good as they get. It points to nib in the line
real, dimension (nb,nib,njb,nkb)::x,y,z
nib is declared as an array. This is not allowed. (What would the size of x, y, and z be in this dimension?)
Apart from this, I don't really understand your description of what it is that you are trying to do, and the sample code you show doesn't make much sense to me.
common/iteration/nb
integer, dimension (:),allocatable::nib,njb,nkb
real, dimension (:,:,:,:),allocatable::x,y,z
allocate (nib(nb),njb(nb),nkb(nb))
When writing new code, using modules to communicate between program units is highly preferred. Old style common blocks are to be avoided.
You are trying to allocate nib, njb, and nkb with size nb. The problem is that nb hasn't been given a value yet (and won't be given one anywhere in the code).
do l=1,nb
ni=nib(l)
nj=njb(l)
nk=nkb(l)
allocate (x(l,ni,nj,nk),y(l,ni,nj,nk),z(l,ni,nj,nk))
enddo
Again, the problem is that nb has no value, so this loop runs an unknown number of times. You're also using the arrays nib, njb, and nkb, which don't contain any values yet.
In each iteration of the loop, x, y, and z get allocated. This will lead to a runtime error in the second iteration, because you cannot allocate an already allocated variable. Even if the allocations did work, the loop would be useless, because the three arrays would be reset in each iteration and would end up with the dimensions of the last allocation.
Now that I'm writing this, I'm starting to think that what you are trying to do is create so-called 'jagged arrays': you want the block in x(1,:,:,:) to differ in size in the second, third, and/or fourth dimension from the block in x(2,:,:,:), and so on. This is simply not possible in Fortran.
One way to achieve this would be to create a user defined type with an allocatable, three dimensional array component, and create an array of this type. You can then allocate the array component to the desired size for each element of the array of user defined type.
This would look something like the following (disclaimer: untested and just one possible way of achieving your goal).
type :: blocktype
    real, dimension(:, :, :), allocatable :: x, y, z
end type blocktype

type(blocktype), dimension(nb) :: myblocks
You can then run a loop to allocate x, y, and z to a different size for each array element. This is assuming nb has been set to the required value, and nib, njb, and nkb contain the desired sizes for the different blocks.
do block = 1, nb
    ni = nib(block)
    nj = njb(block)
    nk = nkb(block)
    allocate(myblocks(block)%x(ni, nj, nk))
    allocate(myblocks(block)%y(ni, nj, nk))
    allocate(myblocks(block)%z(ni, nj, nk))
enddo
If you want to do it like this, you will definitely want to put your procedures in modules, because that way you automatically get explicit interfaces, which are required for passing around such arrays of user defined type.
One afterthought: Don't use implicit typing, not even in sample code. Always use implicit none.

What are the advantages of the "apply" functions? When are they better to use than "for" loops, and when are they not? [duplicate]

Possible Duplicate:
Is R's apply family more than syntactic sugar
Just what the title says. Stupid question, perhaps, but my understanding has been that when using an "apply" function, the iteration is performed in compiled code rather than in the R parser. That would seem to imply that lapply, for instance, is only faster than a "for" loop if there are a great many iterations and each operation is relatively simple. For instance, if a single call to a function wrapped in lapply takes 10 seconds, and there are only, say, 12 iterations of it, I would imagine there's virtually no difference at all between using "for" and "lapply".
Now that I think of it, if the function inside lapply has to be parsed anyway, why should there be ANY performance benefit from using lapply instead of "for", unless you're doing something for which there are compiled functions (like summing or multiplying, etc.)?
Thanks in advance!
Josh
There are several reasons why one might prefer an apply family function over a for loop, or vice-versa.
Firstly, for(), apply(), and sapply() will generally be about as quick as each other if used correctly. lapply() does more of its operating in compiled code within the R internals than the others, so it can be faster than those functions. The speed advantage appears greatest when the act of "looping" over the data is a significant part of the compute time; in many general day-to-day uses you are unlikely to gain much from the inherently quicker lapply(). In the end, all of these will be calling R functions, which need to be interpreted and then run.
for() loops can often be easier to implement, especially if you come from a programming background where loops are prevalent. Working in a loop may be more natural than forcing the iterative computation into one of the apply family functions. However, to use for() loops properly, you need to do some extra work to set up storage and manage plugging the output of the loop back together again. The apply functions do this for you automagically. E.g.:
IN <- runif(10)
OUT <- logical(length = length(IN))
for(i in seq_along(IN)) {
    OUT[i] <- IN[i] > 0.5
}
That is a silly example, as > is a vectorised operator, but I wanted something to make a point: namely, that you have to manage the output. The main thing is that with for() loops you should always allocate sufficient storage to hold the outputs before you start the loop. If you don't know how much storage you will need, allocate a reasonable chunk, then check within the loop whether you have exhausted it, and bolt on another big chunk when you have.
The main reason, in my mind, for using one of the apply family of functions is for more elegant, readable code. Rather than managing the output storage and setting up the loop (as shown above) we can let R handle that and succinctly ask R to run a function on subsets of our data. Speed usually does not enter into the decision, for me at least. I use the function that suits the situation best and will result in simple, easy to understand code, because I'm far more likely to waste more time than I save by always choosing the fastest function if I can't remember what the code is doing a day or a week or more later!
The apply family lend themselves to scalar or vector operations. A for() loop will often lend itself to doing multiple iterated operations using the same index i. For example, I have written code that uses for() loops to do k-fold or bootstrap cross-validation on objects. I probably would never entertain doing that with one of the apply family as each CV iteration needs multiple operations, access to lots of objects in the current frame, and fills in several output objects that hold the output of the iterations.
As to the last point, about why lapply() can possibly be faster than for() or apply(): you need to realise that the "loop" can be performed in interpreted R code or in compiled code. Yes, both will still be calling R functions that need to be interpreted, but if the looping and calling are done directly from compiled C code (as in lapply()), that is where the performance gain can come from, compared to apply(), say, which boils down to a for() loop in actual R code. See the source for apply() to see that it is a wrapper around a for() loop, and then look at the code for lapply(), which is:
> lapply
function (X, FUN, ...)
{
    FUN <- match.fun(FUN)
    if (!is.vector(X) || is.object(X))
        X <- as.list(X)
    .Internal(lapply(X, FUN))
}
<environment: namespace:base>
and you should see why there can be a difference in speed between lapply() and for() and the other apply family functions. The .Internal() is one of R's ways of calling compiled C code used by R itself. Apart from some argument manipulation and a sanity check on FUN, the entire computation is done in C, calling the R function FUN. Compare that with the source for apply().
From Burns' R Inferno (pdf), p. 25:
Use an explicit for loop when each iteration is a non-trivial task. But a simple loop can be more clearly and compactly expressed using an apply function. There is at least one exception to this rule ... if the result will be a list and some of the components can be NULL, then a for loop is trouble (big trouble) and lapply gives the expected answer.
