The following fragment produces a compilation error:
arma::Mat<double> a(10,10,arma::fill::zeros);
arma::ucolvec w = whatever1;
whatever2 = a.rows(w).each_col() + another-col-vector;
The error is that arma::subview_elem2 has no member named each_col.
In a number of cases in Armadillo, the standard array functions are not available on expressions or on the results of other function calls. Clearly the rows() function does not return a Mat object but a subview_elem2 object, presumably for optimization. Another way to do this would be to declare all the array functions in interfaces/pure abstract classes that Mat and the other internal classes, such as subviews, implement. It seems it should be possible to make all Armadillo array expressions interchangeable with array objects, aside from write operations on expressions that only generate r-values.
So... I could wish for the following
a) An explanation of which methods are not available for which results.
b) Preferably, enabling all combinations of array methods that make sense.
Absent the above, how can I accomplish the desired result, which is to evaluate the expression:
a.rows(w).each_col()
??
Some prior information about Armadillo
The Armadillo library uses templates heavily, and most operations return expression templates. Only when you assign the result to a variable is the actual computation performed. This is why you should not store the result of an Armadillo computation using auto.
For instance, given some matrices A, B and C, something like
auto D = A * B + C;
will not perform the computation and only the expression template is stored in D. On the other hand, using
arma::mat D = A * B + C;
will force the computation to happen and the result is stored in D.
Solution to your problem
Particularly to your question, something like a.rows(w) returns an expression template of type subview_elem2 (this type is defined in the source file armadillo_bits/subview_elem2_bones.hpp). This "temporary type" does not have an .each_col method, which results in the error you got. One way around it is to store the result of a.rows(w) in a variable, but since you are not interested in that intermediate variable you can use the .eval() method instead. The .eval() method forces the expression template to perform the actual computation up to that point, and thus the subsequent call to .each_col will work. That is, replace
a.rows(w).each_col() + another-col-vector;
with
a.rows(w).eval().each_col() + another-col-vector;
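For completeness, here is a minimal compilable sketch of the fix; the matrix a, index vector w and column vector v below are stand-ins for the placeholders in the question:

#include <armadillo>
#include <iostream>

int main()
{
    arma::mat a(10, 10, arma::fill::randu);
    arma::ucolvec w = {1, 3, 5};          // stand-in for "whatever1": the rows to extract
    arma::colvec v(3, arma::fill::ones);  // stand-in for "another-col-vector"

    // .eval() materialises the subview_elem2 into a Mat, which does have .each_col()
    arma::mat b = a.rows(w).eval().each_col() + v;

    std::cout << b << std::endl;
    return 0;
}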
I'd like the example computation expression and values below to return 6. For some reason the numbers aren't yielding like I'd expect. What's the step I'm missing to get my result? Thanks!
type AddBuilder() =
    let mutable x = 0
    member _.Yield i = x <- x + i
    member _.Zero() = 0
    member _.Return() = x
let add = AddBuilder()
(* Compiler tells me that each of the numbers in add don't do anything
and suggests putting '|> ignore' in front of each *)
let result = add { 1; 2; 3 }
(* Currently the result is 0 *)
printfn "%i should be 6" result
Note: This is just for creating my own computation expression to expand my learning. Seq.sum would be a better approach. I'm open to the idea that this example completely misses the value of computation expressions and is no good for learning.
There is a lot wrong here.
First, let's start with mere mechanics.
In order for the Yield method to be called, the code inside the curly braces must use the yield keyword:
let result = add { yield 1; yield 2; yield 3 }
But now the compiler will complain that you also need a Combine method. See, the semantics of yield is that each of them produces a finished computation, a resulting value. And therefore, if you want to have more than one, you need some way to "glue" them together. This is what the Combine method does.
Since your computation builder doesn't actually produce any results, but instead mutates its internal variable, the ultimate result of the computation should be the value of that internal variable. So that's what Combine needs to return:
member _.Combine(a, b) = x
But now the compiler complains again: you need a Delay method. Delay is not conceptually necessary, but the compiler requires it in order to mitigate performance pitfalls. When the computation consists of many "parts" (as with multiple yields), it is often the case that some of them should be discarded. In these situations, it would be inefficient to evaluate all of them and then discard some. So the compiler inserts a call to Delay: it receives a function which, when called, would evaluate a "part" of the computation, and Delay has the opportunity to put this function in some sort of deferred container, so that later Combine can decide which of those containers to discard and which to evaluate.
In your case, however, since the result of the computation doesn't matter (remember: you're not returning any results, you're just mutating the internal variable), Delay can simply execute the function it receives so that it produces its side effect (mutating the variable):
member _.Delay(f) = f ()
And now the computation finally compiles, and behold: its result is 6. This result comes from whatever Combine is returning. Try modifying it like this:
member _.Combine(a, b) = "foo"
Now suddenly the result of your computation becomes "foo".
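Putting the mechanical fixes together (with Combine returning x again), the question's builder becomes the following sketch, which compiles and prints 6:

type AddBuilder() =
    let mutable x = 0
    member _.Yield i = x <- x + i
    member _.Zero() = 0
    member _.Return() = x
    member _.Combine(a, b) = x
    member _.Delay(f) = f ()

let add = AddBuilder()
let result = add { yield 1; yield 2; yield 3 }
printfn "%i should be 6" result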
And now, let's move on to semantics.
The above modifications will let your program compile and even produce the expected result. However, I think you have misunderstood the whole idea of computation expressions in the first place.
The builder isn't supposed to have any internal state. Instead, its methods are supposed to manipulate complex values of some sort, some methods creating new values, some modifying existing ones. For example, the seq builder¹ manipulates sequences. That's the type of values it handles. Different methods create new sequences (Yield) or transform them in some way (e.g. Combine), and the ultimate result is also a sequence.
In your case, it looks like the values that your builder needs to manipulate are numbers. And the ultimate result would also be a number.
So let's look at the methods' semantics.
The Yield method is supposed to create one of those values that you're manipulating. Since your values are numbers, that's what Yield should return:
member _.Yield x = x
The Combine method, as explained above, is supposed to combine two of such values that got created by different parts of the expression. In your case, since you want the ultimate result to be a sum, that's what Combine should do:
member _.Combine(a, b) = a + b
Finally, the Delay method should just execute the provided function. In your case, since your values are numbers, it doesn't make sense to discard any of them:
member _.Delay(f) = f()
And that's it! With these three methods, you can add numbers:
type AddBuilder() =
    member _.Yield x = x
    member _.Combine(a, b) = a + b
    member _.Delay(f) = f ()
let add = AddBuilder()
let result = add { yield 1; yield 2; yield 3 }
I think numbers are not a very good example for learning about computation expressions, because numbers lack the inner structure that computation expressions are supposed to handle. Try instead creating a maybe builder to manipulate Option<'a> values.
Added bonus - there are already implementations you can find online and use for reference.
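For reference, here is a minimal sketch of such a maybe builder; the member set and the tryDiv helper are just illustrative assumptions:

type MaybeBuilder() =
    member _.Bind(m, f) = Option.bind f m
    member _.Return(x) = Some x

let maybe = MaybeBuilder()

// A helper that can fail: returns None on division by zero.
let tryDiv a b = if b = 0 then None else Some (a / b)

let result =
    maybe {
        let! x = tryDiv 10 2
        let! y = tryDiv x 0   // this step yields None, so the whole expression is None
        return x + y
    }

printfn "%A" result   // None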
¹ seq is not actually a computation expression. It predates computation expressions and is treated in a special way by the compiler. But it is good enough for examples and comparisons.
Let's say I want to declare an elliptic integral as
K(k):=elliptic_kc (k^2);
k:=<something like tanh()*coth()...>
The problem is that Maxima will always substitute elliptic_kc(x^2) in place of K(x), and k's definition in place of k.
I want to prevent it, while still allowing numeric evaluation of K(), k, and simplifying expressions with these symbols.
...
A function can be declared as a "noun" to disable substitution, but this also disables its evaluation.
Well, I use various strategies. Sometimes one approach works better than another.
(1) Put a single quote ' on function names to nounify that specific function call. At a later time, ev(expr, nouns) verbifies any nouns, so the functions are called. E.g. foo: 'integrate(sin(x), x); yields a noun expression. Then ev(foo, nouns); (which can be abbreviated to foo, nouns; at the console input) actually calculates it. A sketch applying this to the question's K follows the list of strategies below.
(2) Don't define functions, but just let them be undefined symbols. Then substitute a lambda expression when you want to evaluate them. E.g. foo: f(2); then later subst(f = lambda([x], x + 1), foo);.
(3) Don't assign values, but let them be undefined, then substitute values later on. E.g. foo: a + b; then later subst([a = 123, b = y*z], foo);.
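For instance, strategy (1) applied to the question's K could look like this sketch (assuming elliptic_kc is available in your Maxima version):

K(k) := 'elliptic_kc(k^2);     /* the quote keeps elliptic_kc as a noun */
expr : K(x);                   /* => 'elliptic_kc(x^2), not evaluated further */
ev(expr, x = 0.5, nouns);      /* substitute a value and verbify to get a number */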
I'm learning F# and I understand you don't need to use parentheses when calling a function.
Ex
let addOne arg1 =
    arg1 + 1

addOne 1
vs
this.GetType()
Why do I have to use parentheses on the second function?
There is a bit of a mismatch between working with .NET libraries and working with F# libraries when it comes to parameters, but you can generally see () not as parentheses, but as a special value of type unit that means "no useful information".
This means that when you say:
addOne 1
You are calling addOne with a value - number 1 - as a parameter. Now, when you apply the same reading to the second example:
this.GetType()
You can read this as calling this.GetType with a value - the special () unit value as a parameter. If you wanted to be consistent, you could write this with space too:
this.GetType ()
In practice, most people will omit the space when calling .NET libraries. When you do not write the space, F# also supports method chaining so you can write e.g. foo().bar().
Many F# functions taking multiple parameters will use the "curried" form, which means that the parameters need to be separated by spaces. For example:
let add a b = a + b
let mul a b = a * b
add 10 (mul 20 3)
Here, you need parentheses around the second expression, so that the compiler knows how to parse the code. This is in contrast with typical .NET methods, which take parameters as a tuple. F# tuples are written as (10, "hello") and so you can see a method call as an ordinary call accepting a tuple:
some.Operation (10, "Hello")
Again, typically you wouldn't write the space here, because you know this is actually a .NET method call, rather than "passing tuple to a function", but conceptually, you can think of it in both ways.
This is the summary - there are a few corner cases where method calls do not really behave like tuples (e.g. when it comes to named parameters), but this way of thinking about it should give you an idea about how things work.
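To make the contrast concrete, here is a small sketch; the Calculator type is made up for illustration:

let addCurried a b = a + b           // curried F# function

type Calculator() =
    member _.Add(a, b) = a + b       // .NET-style method: takes its arguments as a tuple

let c = Calculator()
let r1 = addCurried 1 2              // curried call: arguments separated by spaces
let r2 = c.Add(1, 2)                 // tuple-style call, usually written without a space
let t  = c.GetType ()                // () here is the unit value passed as the only argument
printfn "%d %d %O" r1 r2 t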
Are the following two examples equivalent?
Example 1:
let x = String::new();
let y = &x[..];
Example 2:
let x = String::new();
let y = &*x;
Is one more efficient than the other or are they basically the same?
In the case of String and Vec, they do the same thing. In general, however, they aren't quite equivalent.
First, you have to understand Deref. This trait is implemented in cases where a type is logically "wrapping" some lower-level, simpler value. For example, all of the "smart pointer" types (Box, Rc, Arc) implement Deref to give you access to their contents.
It is also implemented for String and Vec: String "derefs" to the simpler str, Vec<T> derefs to the simpler [T].
Writing *s is just manually invoking Deref::deref to turn s into its "simpler form". It is almost always written &*s, however: although the Deref::deref signature says it returns a borrowed pointer (&Target), the compiler inserts a second automatic deref. This is so that, for example, { let x = Box::new(42i32); *x } results in an i32 rather than a &i32.
So &*s is really just shorthand for Deref::deref(&s).
s[..] is syntactic sugar for s.index(RangeFull), implemented by the Index trait. This means to slice the "whole range" of the thing being indexed; for both String and Vec, this gives you a slice of the entire contents. Again, the result is technically a borrowed pointer, but Rust auto-derefs this one as well, so it's also almost always written &s[..].
So what's the difference? Hold that thought; let's talk about Deref chaining.
To take a specific example, because you can view a String as a str, it would be really helpful to have all the methods available on strs automatically available on Strings as well. Rather than inheritance, Rust does this by Deref chaining.
The way it works is that when you ask for a particular method on a value, Rust first looks at the methods defined for that specific type. Let's say it doesn't find the method you asked for; before giving up, Rust will check for a Deref implementation. If it finds one, it invokes it and then tries again.
This means that when you call s.chars() where s is a String, what's actually happening is that you're calling s.deref().chars(), because String doesn't have a method called chars, but str does (scroll up to see that String only gets this method because it implements Deref<Target=str>).
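A quick sketch of that method lookup in action:

fn main() {
    let s = String::from("abc");
    // String has no `chars` method of its own; the call resolves through
    // Deref<Target = str>, i.e. it behaves like (&*s).chars().
    let n = s.chars().count();
    assert_eq!(n, 3);
}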
Getting back to the original question, the difference between &*s and &s[..] is in what happens when s is not just String or Vec<T>. Let's take a few examples:
s: String: &*s: &str, &s[..]: &str.
s: &String: &*s: &String, &s[..]: &str.
s: Box<String>: &*s: &String, &s[..]: &str.
s: Box<Rc<&String>>: &*s: &Rc<&String>, &s[..]: &str.
&*s only ever peels away one layer of indirection. &s[..] peels away all of them. This is because none of Box, Rc, &, etc. implement the Index trait, so Deref chaining causes the call to s.index(RangeFull) to chain through all those intermediate layers.
Which one should you use? Whichever you want. Use &*s (or &**s, or &***s) if you want to control exactly how many layers of indirection you want to strip off. Use &s[..] if you want to strip them all off and just get at the innermost representation of the value.
Or, you can do what I do and use &*s because it reads left-to-right, whereas &s[..] reads left-to-right-to-left-again and that annoys me. :)
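Here is a small sketch of the difference in practice; the takes_str helper is made up for illustration:

fn takes_str(_: &str) {}

fn main() {
    let s = String::from("hi");
    takes_str(&*s);     // one deref: String -> str
    takes_str(&s[..]);  // Index<RangeFull>: also gives &str

    let boxed: Box<String> = Box::new(String::from("hi"));
    let one_layer: &String = &*boxed;   // &* peels only the Box
    let all_layers: &str = &boxed[..];  // [..] chains through Box and String
    takes_str(one_layer);               // still works, via deref coercion at the call
    takes_str(all_layers);
}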
Addendum
There's the related concept of Deref coercions.
There's also DerefMut and IndexMut which do all of the above, but for &mut instead of &.
They are completely the same for String and Vec.
The [..] syntax results in a call to Index<RangeFull>::index() and it's not just sugar for [0..collection.len()]. The latter would introduce the cost of bounds checking. Happily, this is not the case in Rust, so both are equally fast.
Relevant code:
index of String
deref of String
index of Vec (just returns self, which triggers the deref coercion and thus executes exactly the same code as deref)
deref of Vec
I'm experimenting with enumeration sorts in Z3 as described in "How to use enumerated constants after calling of some tactic in Z3?" and I noticed that I might have some misunderstanding of how to use the C and C++ APIs properly. Let's consider the following example.
context z3_cont;
Z3_symbol e_names[2];
Z3_func_decl e_consts[2];
Z3_func_decl e_testers[2];
e_names[0] = Z3_mk_string_symbol(z3_cont, "x1");
e_names[1] = Z3_mk_string_symbol(z3_cont, "x2");
Z3_symbol e_name = Z3_mk_string_symbol(z3_cont, "enum_type");
Z3_sort new_enum_sort = Z3_mk_enumeration_sort(z3_cont, e_name, 2, e_names, e_consts, e_testers);
sort enum_sort = to_sort(z3_cont, new_enum_sort);
expr e_const0(z3_cont), e_const1(z3_cont);
/* WORKS!
func_decl a_decl = to_func_decl(z3_cont, e_consts[0]);
func_decl b_decl = to_func_decl(z3_cont, e_consts[1]);
e_const0 = a_decl(0, 0);
e_const1 = b_decl(0, 0);
*/
// SEGFAULT when doing cout
e_const0 = to_func_decl(z3_cont, e_consts[0])(0, 0);
e_const1 = to_func_decl(z3_cont, e_consts[1])(0, 0);
cout << e_const0 << " " << e_const1 << endl;
I expected both variants of the code to nicely wrap the C entity Z3_func_decl with a smart pointer so I can use it with the C++ API; however, only the first variant seems to be correct. So my questions are:
Is it correct behavior that the second way doesn't work? If so, how can I better understand the reasons why it shouldn't?
What happens with unwrapped C entities, for example Z3_symbol e_name? Here I don't wrap it and I don't increment references. Will the memory be managed properly? Is it safe to use? When will the object be destroyed?
A minor question: I didn't see a to_symbol() function in the C++ API. Is it just unnecessary?
Thank you.
Whenever we create a new Z3 AST, Z3 may garbage collect an AST n if the reference counter of n is 0. In the piece of code that works, we wrap e_consts[0] and e_consts[1] before we create any new AST. When we wrap them, the smart pointer will bump their reference counter.
This is why it works. In the piece of code that crashes, we wrap e_consts[0], and then create e_const0 before we wrap e_consts[1]. Thus, the AST referenced by e_consts[1] is deleted before we have the chance to create e_const1.
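In other words, the safe pattern is the one from the commented-out block in the question: wrap both declarations before constructing any new ASTs from them. A minimal sketch, reusing the names from the question:

func_decl a_decl = to_func_decl(z3_cont, e_consts[0]);
func_decl b_decl = to_func_decl(z3_cont, e_consts[1]);  // both reference counters are bumped here
expr e_const0 = a_decl(0, 0);   // only now do we create new ASTs
expr e_const1 = b_decl(0, 0);
cout << e_const0 << " " << e_const1 << endl;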
BTW, in the next official release, we will have support for creating enumeration types in the C++ API: http://z3.codeplex.com/SourceControl/changeset/b2810592e6bb
This change is already available in the nightly builds.
Z3_symbol is not a reference-counted object. Symbols are persistent; Z3 maintains a global table of all symbols created. We should view symbols as unique strings.
Note that we can use the class symbol and the constructor symbol::symbol(context & c, Z3_symbol s). The to_* functions are used to wrap objects created using the C API with smart pointers. We usually have a function to_A if there is a C API function that returns an A object and there is no equivalent function/method in the C++ API.