Can I use broadcasting in ArrayFire?

Can I do something like this:
auto minEl = min(a);
a -= minEl;
?
I get an unknown af::exception when I do that. For now, I'm doing this:
auto minEl = *min(a).host<float>();
a -= minEl;
But of course, it does an unnecessary download.
I borrow the term "broadcasting" from numpy, because there it works perfectly :)

ArrayFire does not currently support broadcasting. You would have to manually tile the array to match the required dimensions.
auto minEl = min(a);
a -= tile(minEl, a.dims(0));
This approach also avoids copying the scalar back to host memory.
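For reference, here is a rough numpy sketch (illustrative only, with a made-up 4x3 array a) contrasting the implicit broadcasting the question refers to with the explicit tiling used above:
import numpy as np

a = np.random.rand(4, 3)
col_min = a.min(axis=0)  # row of column minimums

# numpy broadcasts the row across all rows of a automatically
broadcasted = a - col_min

# explicit-tile equivalent, mirroring the ArrayFire code above
tiled = a - np.tile(col_min.reshape(1, -1), (a.shape[0], 1))

assert np.allclose(broadcasted, tiled)
The tile call in the ArrayFire code plays the role that numpy's implicit broadcast plays here.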

Related

Clean way to loop over a masked list in Julia

In Julia, I have a list of neighbors of a location stored in all_neighbors[loc]. This allows me to quickly loop over these neighbors conveniently with the syntax for neighbor in all_neighbors[loc]. This leads to readable code such as the following:
active_neighbors = 0
for neighbor in all_neighbors[loc]
    if cube[neighbor] == ACTIVE
        active_neighbors += 1
    end
end
Astute readers will see that this is nothing more than a reduction. Because I'm just counting active neighbors, I figured I could do this in a one-liner using the count function. However,
# This does not work
active_neighbors = count(x->x==ACTIVE, cube[all_neighbors[loc]])
does not work because the all_neighbors mask doesn't get interpreted correctly as simply a mask over the cube array. Does anyone know the cleanest way to write this reduction? An alternative solution I came up with is:
active_neighbors = count(x->x==ACTIVE, [cube[all_neighbors[loc][k]] for k = 1:length(all_neighbors[loc])])
but I really don't like this because it's even less readable than what I started with. Thanks for any advice!
This should work:
count(x -> cube[x] == ACTIVE, all_neighbors[loc])

Provide custom gradient to drake::MathematicalProgram

Drake has an interface where you can give it a generic function as a constraint and it can set up the nonlinearly-constrained mathematical program automatically (as long as it supports AutoDiff). I have a situation where my constraint does not support AutoDiff (the constraint function conducts a line search to approximate the maximum value of some function), but I have a closed-form expression for the gradient of the constraint. In my case, the math works out so that it's difficult to find a point on this function, but once you have that point it's easy to linearize around it.
I know many optimization libraries will allow you to provide your own analytical gradient when available; can you do this with Drake's MathematicalProgram as well? I could not find mention of it in the MathematicalProgram class documentation.
Any help is appreciated!
It's definitely possible, but I admit we haven't provided helper functions that make it pretty yet. Please let me know if/how this helps; I will plan to tidy it up and add it as an example or code snippet that we can reference in drake.
Consider the following code:
from pydrake.all import AutoDiffXd, MathematicalProgram, Solve

prog = MathematicalProgram()
x = prog.NewContinuousVariables(1, 'x')

def cost(x):
    return (x[0]-1.)*(x[0]-1.)

def constraint(x):
    if isinstance(x[0], AutoDiffXd):
        print(x[0].value())
        print(x[0].derivatives())
    return x

cost_binding = prog.AddCost(cost, vars=x)
constraint_binding = prog.AddConstraint(
    constraint, lb=[0.], ub=[2.], vars=x)
result = Solve(prog)
When we register the cost or constraint with MathematicalProgram in this way, we are allowing it to be called with x either as a float or as an AutoDiffXd -- which is simply a wrapping of Eigen's AutoDiffScalar (with dynamically allocated derivatives of type double). The snippet above shows roughly how it works -- every scalar value has a vector of (partial) derivatives associated with it. On entry to the function, you are passed x with the derivatives of x set to dx/dx (which will be 1 or 0).
Your job is to return a value, call it y, with the value set to the value of your cost/constraint, and the derivatives set to dy/dx. Normally, all of this happens magically for you. But it sounds like you get to do it yourself.
Here's a very simple code snippet that, I hope, gets you started:
from pydrake.all import AutoDiffXd, MathematicalProgram, Solve

prog = MathematicalProgram()
x = prog.NewContinuousVariables(1, 'x')

def cost(x):
    return (x[0]-1.)*(x[0]-1.)

def constraint(x):
    if isinstance(x[0], AutoDiffXd):
        # value is 2*x; derivatives are 2*dx/dx by the chain rule
        y = AutoDiffXd(2*x[0].value(), 2*x[0].derivatives())
        return [y]
    return 2*x

cost_binding = prog.AddCost(cost, vars=x)
constraint_binding = prog.AddConstraint(
    constraint, lb=[0.], ub=[2.], vars=x)
result = Solve(prog)
Let me know?
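For the specific case in the question (a constraint whose value comes from something like a line search but whose gradient is known in closed form), a minimal sketch along the same lines might look like this; expensive_value and analytic_gradient are hypothetical placeholders for your own computation and its known gradient:
from pydrake.all import AutoDiffXd, MathematicalProgram, Solve

def expensive_value(x_val):
    # stand-in for the non-differentiable computation (e.g. a line search)
    return x_val**2

def analytic_gradient(x_val):
    # closed-form gradient of expensive_value at x_val
    return 2.0*x_val

def cost(x):
    return (x[0]-1.)*(x[0]-1.)

def constraint(x):
    if isinstance(x[0], AutoDiffXd):
        x_val = x[0].value()
        # chain rule: dy/d(decision vars) = dg/dx * dx/d(decision vars)
        dydx = analytic_gradient(x_val)*x[0].derivatives()
        return [AutoDiffXd(expensive_value(x_val), dydx)]
    return [expensive_value(x[0])]

prog = MathematicalProgram()
x = prog.NewContinuousVariables(1, 'x')
prog.AddCost(cost, vars=x)
prog.AddConstraint(constraint, lb=[0.], ub=[2.], vars=x)
result = Solve(prog)
The only step that differs from the snippet above is that the returned AutoDiffXd is built from your own derivative vector instead of letting autodiff propagate through the expensive computation.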

Initialising hist function in Julia for use in a loop

When I use a loop, variables that I want to access outside of the loop need to be initialised before entering it. For example:
Y = Array{Int}()
for i = 1:10
    Y = i
end
Since I have initialised Y before entering the loop, I can access it later by typing
Y
If I had not initialised it before entering the loop, typing Y would have given an error because Y would be undefined.
I want to extend this approach to the output of the hist function, but I don't know how to set up an empty hist output before the loop. The only workaround I have found is below.
yHistData = [hist(DataSet[1], Bins)]
for j = 2:NumberOfLayers
    yHistData = [yHistData; hist(DataSet[j], Bins)]
end
Now when I access this later on by simply typing
yHistData
I get the correct values returned to me.
How can I initialise this hist data before entering the loop without defining it using the first value of the list I'm iterating over?
This can be done with a loop as follows:
yHistData = []
for j = 1:NumberOfLayers
push!(yHistData, hist(DataSet[j], Bins))
end
push! modifies the array by adding the specified element to the end. This increases code speed because we do not need to create copies of the array all the time. This code is nice and simple, and runs faster than yours. The return type, however, is now Array{Any, 1}, which can be improved.
Here I have typed the array so that the performance when using this array in the future is better. Without typing the array, the performance is sometimes better and sometimes worse than your code, depending on NumberOfLayers.
yHistData = Tuple{FloatRange{Float64},Array{Int64,1}}[]
for j = 1:NumberOfLayers
    push!(yHistData, hist(DataSet[j], Bins))
end
Assuming length(DataSet) == NumberOfLayers, we can use anonymous functions to simplify the code even further:
yHistData = map(data -> hist(data, Bins), DataSet)
This solution is short, easy to read, and very fast on Julia 0.5. However, 0.5 is not yet released; on 0.4, the currently released version, this approach will be slower.

Is there any way to resize a Surface by any natural number value in ROBLOX?

I've been working with the built-in Resize function in Roblox Studio and have been using it to expand the Top Surface of multiple Parts in order to form a wall-like structure.
The only problem that has arisen when using this method is that the surface of the wall created is not even: some Parts are higher than others.
I later discovered that this problem is due to the fact that the built-in Resize function only takes integers as its second parameter (or "expand-by" value). Ideally, I need the Parts to be able to expand by any real number.
Are there any alternatives to the built-in Resize function that allow one to resize a Surface by any real number?
Yes, this is possible, but it actually requires a custom function to do so. With some fairly basic math we can write a simple function to accomplish such a task:
local Resize
do
    local directions = {
        [Enum.NormalId.Top] = {Scale=Vector3.new(0,1,0), Position=Vector3.new(0,1,0)},
        [Enum.NormalId.Bottom] = {Scale=Vector3.new(0,1,0), Position=Vector3.new(0,-1,0)},
        [Enum.NormalId.Right] = {Scale=Vector3.new(1,0,0), Position=Vector3.new(1,0,0)},
        [Enum.NormalId.Left] = {Scale=Vector3.new(1,0,0), Position=Vector3.new(-1,0,0)},
        [Enum.NormalId.Front] = {Scale=Vector3.new(0,0,1), Position=Vector3.new(0,0,1)},
        [Enum.NormalId.Back] = {Scale=Vector3.new(0,0,1), Position=Vector3.new(0,0,-1)},
    }
    function Resize(p, d, n, c)
        local prop = c and 'Position' or 'CFrame'
        p.Size = p.Size + directions[d].Scale*n
        p[prop] = p[prop] + directions[d].Position*(n/2)
        return p.Size, p[prop]
    end
end
Resize(workspace.Part, Enum.NormalId.Bottom, 10, false) -- Resize workspace.Part downwards by 10 studs, ignoring collisions
If you're interested in more detail on how and why this code works the way it does, here's a link to a pastebin that's loaded with comments, which I felt would be rather ugly to include in the answer here: http://pastebin.com/LYKDWZnt

matlab indexing into nameless matrix [duplicate]

For example, if I want to read the middle value from magic(5), I can do so like this:
M = magic(5);
value = M(3,3);
to get value == 13. I'd like to be able to do something like one of these:
value = magic(5)(3,3);
value = (magic(5))(3,3);
to dispense with the intermediate variable. However, MATLAB complains about Unbalanced or unexpected parenthesis or bracket on the first parenthesis before the 3.
Is it possible to read values from an array/matrix without first assigning it to a variable?
It actually is possible to do what you want, but you have to use the functional form of the indexing operator. When you perform an indexing operation using (), you are actually making a call to the subsref function. So, even though you can't do this:
value = magic(5)(3, 3);
You can do this:
value = subsref(magic(5), struct('type', '()', 'subs', {{3, 3}}));
Ugly, but possible. ;)
In general, you just have to change the indexing step to a function call so you don't have two sets of parentheses immediately following one another. Another way to do this would be to define your own anonymous function to do the subscripted indexing. For example:
subindex = @(A, r, c) A(r, c); % An anonymous function for 2-D indexing
value = subindex(magic(5), 3, 3); % Use the function to index the matrix
However, when all is said and done the temporary local variable solution is much more readable, and definitely what I would suggest.
There was just good blog post on Loren on the Art of Matlab a couple days ago with a couple gems that might help. In particular, using helper functions like:
paren = @(x, varargin) x(varargin{:});
curly = @(x, varargin) x{varargin{:}};
where paren() can be used like
paren(magic(5), 3, 3);
would return
ans = 13
I would also surmise that this will be faster than gnovice's answer, but I haven't checked (Use the profiler!!!). That being said, you also have to include these function definitions somewhere. I personally have made them independent functions in my path, because they are super useful.
These functions and others are now available in the Functional Programming Constructs add-on which is available through the MATLAB Add-On Explorer or on the File Exchange.
How do you feel about using undocumented features:
>> builtin('_paren', magic(5), 3, 3) %# M(3,3)
ans =
13
or for cell arrays:
>> builtin('_brace', num2cell(magic(5)), 3, 3) %# C{3,3}
ans =
13
Just like magic :)
UPDATE:
Bad news, the above hack doesn't work anymore in R2015b! That's fine, it was undocumented functionality and we cannot rely on it as a supported feature :)
For those wondering where to find this type of thing, look in the folder fullfile(matlabroot,'bin','registry'). There's a bunch of XML files there that list all kinds of goodies. Be warned that calling some of these functions directly can easily crash your MATLAB session.
At least in MATLAB 2013a you can use getfield like:
a=rand(5);
getfield(a,{1,2}) % etc
to get the element at (1,2)
Unfortunately, syntax like magic(5)(3,3) is not supported by MATLAB. You need to use a temporary intermediate variable, though you can free up the memory after use, e.g.
tmp = magic(5);
myVar = tmp(3,3);
clear tmp
Note that if you compare running times with the standard way (assign the result and then access entries), they are exactly the same.
subs = @(M,i,j) M(i,j);
>> for nit=1:10;tic;subs(magic(100),1:10,1:10);tlap(nit)=toc;end;mean(tlap)
ans =
0.0103
>> for nit=1:10,tic;M=magic(100); M(1:10,1:10);tlap(nit)=toc;end;mean(tlap)
ans =
0.0101
In my opinion, the bottom line is: MATLAB does not have pointers, and you have to live with it.
It could be simpler if you make a new function:
function [ element ] = getElem( matrix, index1, index2 )
    element = matrix(index1, index2);
end
and then use it:
value = getElem(magic(5), 3, 3);
Your initial notation is the most concise way to do this:
M = magic(5); %create
value = M(3,3); % extract useful data
clear M; %free memory
If you are doing this in a loop you can just reassign M every time and ignore the clear statement as well.
To complement Amro's answer, you can use feval instead of builtin. There is no difference, really, unless you try to overload the operator function:
BUILTIN(...) is the same as FEVAL(...) except that it will call the original built-in version of the function even if an overloaded one exists (for this to work, you must never overload BUILTIN).
>> feval('_paren', magic(5), 3, 3) % M(3,3)
ans =
13
>> feval('_brace', num2cell(magic(5)), 3, 3) % C{3,3}
ans =
13
What's interesting is that feval seems to be just a tiny bit quicker than builtin (by ~3.5%), at least in MATLAB R2013b, which is weird given that feval needs to check whether the function is overloaded, unlike builtin:
>> tic; for i=1:1e6, feval('_paren', magic(5), 3, 3); end; toc;
Elapsed time is 49.904117 seconds.
>> tic; for i=1:1e6, builtin('_paren', magic(5), 3, 3); end; toc;
Elapsed time is 51.485339 seconds.
