how can I make cvxpy to support 3d variable - cvxpy

I have a mixed-integer problem that involves 3D variables. I would like to define variables like:
X = cvxpy.Variable((n, m, T))
and set constraints using slicing operations, as in numpy:
b*cvxpy.sum(X[:,j,:],axis=1) <= 1
as well as other, more complex expressions as constraints.
However, cvxpy does not support variables with more than two dimensions. My question is: is there an alternative package that supports this, or an efficient technique I can adopt to implement it easily? Thanks for any suggestions you may give me.
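One common workaround, sketched below, is to keep a plain Python list of 2-D variables, one per slice along the third axis, and rebuild the slicing by hand. This is only a minimal sketch assuming cvxpy 1.x; n, m, T, b, and j are made-up stand-ins for the question's symbols.

import cvxpy

n, m, T = 4, 3, 5
b = 2.0
j = 1

# One (n, m) variable per index along the third axis; marked integer
# to match the mixed-integer setting.
X = [cvxpy.Variable((n, m), integer=True) for _ in range(T)]

# Emulates b * cvxpy.sum(X[:, j, :], axis=1) <= 1: the slice X[:, j, :]
# would be the n x T matrix whose t-th column is X[t][:, j], so summing
# over axis 1 becomes a sum over t.
col_sum = sum(X[t][:, j] for t in range(T))
constraints = [b * col_sum <= 1]
constraints += [x >= 0 for x in X] + [x <= 1 for x in X]

prob = cvxpy.Problem(cvxpy.Maximize(cvxpy.sum(sum(X))), constraints)
# prob.solve() requires a mixed-integer-capable solver to be installed.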

Related

Is there a Halide::BoundaryConditions to mimic OpenCV default border type?

The documentation says this is similar to GL_MIRRORED_REPEAT. I tried to research this, but it doesn't seem as specific as the OpenCV border types.
BORDER_REFLECT_101 as gfedcb|abcdefgh|gfedcba, this is the default.
BORDER_REFLECT as fedcba|abcdefgh|hgfedcb
I guess the corners are not strictly defined by this, but I can clearly see what the edges are. The documentation for GL_MIRRORED_REPEAT seems to focus on corner behaviour. Overall, it does not matter for our application, as physical limitations on the targets of interest keep them within the bounds of the field of view. However, these specifics do matter when I am writing regression tests.
How can I replicate BORDER_REFLECT_101 in Halide? Is it possible with Halide::BoundaryConditions or do I need to implement my own clamping? I can relax the conditions after proving we have replicated behaviour and use Halide::BoundaryConditions::mirror_image.
Bonus: Is Halide::BoundaryConditions more performant than using clamp, or is it just syntactic sugar? Or is it the opposite, and it is better to use clamp?
The boundary conditions are just a convenience. They're implemented here. They should be no more or less performant than writing the same yourself since they're just metaprogramming Exprs (i.e. they aren't compiler intrinsics).
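As for replicating BORDER_REFLECT_101: to pin the semantics down in a regression test, a small pure-Python reference of the two OpenCV index mappings can serve as ground truth against whichever Halide boundary condition is chosen. This is my own illustrative sketch, not Halide or OpenCV code; and if I read the Halide docs correctly, BoundaryConditions::mirror_interior is the one that matches BORDER_REFLECT_101 (edge pixel not repeated), while mirror_image matches BORDER_REFLECT.

def reflect_101(i, n):
    # BORDER_REFLECT_101: gfedcb|abcdefgh|gfedcba (edge pixel not repeated)
    period = 2 * (n - 1)
    i = abs(i) % period
    return period - i if i >= n else i

def reflect(i, n):
    # BORDER_REFLECT: fedcba|abcdefgh|hgfedcb (edge pixel repeated)
    period = 2 * n
    i = i % period if i >= 0 else (-i - 1) % period
    return period - 1 - i if i >= n else i

row = "abcdefgh"
print("".join(row[reflect_101(i, 8)] for i in range(-3, 11)))  # dcbabcdefghgfe
print("".join(row[reflect(i, 8)] for i in range(-3, 11)))      # cbaabcdefghhgf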

Can you "teach" computers to do algebra using variable expressions (eg aX+bX=(a+b)X)

Let's say that in the examples below, lowercase letters are constants and uppercase letters are variables.
I'd like to have programs that can "intelligently" do specified tasks like algebra, where teaching the program new methods is easy using symbols understood by humans. For example, if the program is told these facts:
aX+bX=(a+b)X
if a=bX then X=a/b
Then it should be able to perform these operations:
2a+3a=5a
3x+3x=6x
3x=1 therefore x=1/3
4x+2x=1 -> 6x=1 therefore x = 1/6
I was trying to do similar things with Prolog, as it can easily "understand" variables, but I ran into too many complications, mainly because describing a relationship both ways results in a crash (not easy to sort out).
To summarise: I want to know whether a program can be taught algebra using mathematical symbols only. I'd like to know if other people have tried this and how complicated it is expected to be. The purpose of this is to make programming easier (runtime is not so important).
It depends on what you want the machine to do and how intelligent it should be.
Your question is mostly about AI, not ML. AI deals with the formalization of "human" tasks, while ML (though a subset of AI) is about building models from data.
The described program may be implemented like this:
Each fact forms a pattern. A program given an expression and some patterns can try to apply them to the expression and see what happens. If you want your program to be able to, for example, solve quadratic equations given a rule like ax² + bx + c = 0 → x = (-b ± sqrt(b²-4ac))/(2a), then it would be designed as follows:
Somebody provides a set of rules. A rule consists of a pattern and an outcome (a solution or an equivalent form). Think of the pattern as a kind of regular expression.
Then the program is asked to show some intelligence and prove its knowledge by doing something with a given expression. Here comes the major part:
you build a graph of expressions by applying the applicable rules (if a pattern matches an expression, you add a new vertex with the corresponding outcome);
then you run a path-search algorithm (A*, for example) to find a sequence of transformations leading to a form like x = ...
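A hypothetical, minimal sketch of that design in Python: expressions are nested tuples, a rule is a function that rewrites a matching expression or returns None, and a breadth-first search (standing in for A*) explores the graph of reachable forms. All encodings and names here are my own illustration, not an existing library.

from collections import deque

# Rule aX + bX -> (a+b)X; terms are encoded as ('+', ('*', a, 'X'), ('*', b, 'X')).
def combine_like_terms(e):
    if (isinstance(e, tuple) and e[0] == '+'
            and e[1][0] == '*' and e[2][0] == '*' and e[1][2] == e[2][2]):
        return ('*', e[1][1] + e[2][1], e[1][2])
    return None

# Rule a*X = c -> X = c/a; equations are encoded as ('=', lhs, rhs).
def isolate(e):
    if isinstance(e, tuple) and e[0] == '=' and e[1][0] == '*':
        return ('=', e[1][2], e[2] / e[1][1])
    return None

RULES = [combine_like_terms, isolate]

def neighbors(e):
    # every expression reachable from e by one rule application
    outs = [r(e) for r in RULES]
    if e[0] == '=':  # also allow rewriting inside the left-hand side
        outs += [('=', r(e[1]), e[2]) for r in RULES if r(e[1]) is not None]
    return [o for o in outs if o is not None]

def solve(expr, is_goal):
    # breadth-first search over the graph of expression forms
    queue, seen = deque([expr]), {expr}
    while queue:
        e = queue.popleft()
        if is_goal(e):
            return e
        for nxt in neighbors(e):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

# 4x + 2x = 1  ->  6x = 1  ->  x = 1/6
eq = ('=', ('+', ('*', 4, 'x'), ('*', 2, 'x')), 1)
print(solve(eq, lambda e: e[0] == '=' and e[1] == 'x'))  # ('=', 'x', 0.1666...)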
I think this is an interesting question, although it is off-topic on SO (tool recommendation).
Nevertheless, because it captured my imagination, I wrote a couple of functions in R that can solve this kind of thing quite easily.
First you'll have to install R; afterwards you'll need the package called stringr.
So in the R console run
install.packages("stringr")
library(stringr)
Then you can define the following functions that I wrote:
FirstFunc <- function(temp) {
  # Strip the uppercase variable letters, evaluate what remains as an
  # arithmetic expression (e.g. "3+6-2-3"), then paste the variable back on.
  paste0(eval(parse(text = gsub("[A-Z]", "", temp))),
         unique(str_extract_all(temp, "[A-Z]")[[1]]))
}
SecondFunc <- function(temp) {
  # Split on "=", evaluate the right-hand side, and divide it by the
  # left-hand side's net coefficient (letters stripped, e.g. "4-2").
  eval(parse(text = strsplit(temp, "=")[[1]][2])) /
    eval(parse(text = gsub("[[:alpha:]]", "", strsplit(temp, "=")[[1]][1])))
}
Now, the first function will solve equations like
aX+bX=(a+b)X
While the second will solve equations like
4x+2x=1
For example
FirstFunc("3X+6X-2X-3X")
will return
"4X"
Now, this function is pretty primitive (mostly for the purpose of illustration) and will only solve expressions that contain a single variable type; something like FirstFunc("3X-2X-2Y") won't give the correct result (but the function could easily be modified).
The second function will solve stuff like
SecondFunc("4x-2x=1")
will return
0.5
or
SecondFunc("4x+2x*3x=1")
will return
0.1
Note that this function also works only for one unknown variable (x), but it too could easily be modified.

Tuples without (declare-datatypes)?

I wrote a large-ish library in the Z3 dialect of SMT-LIB. Unfortunately, my use of (declare-datatypes) to create tuples means that I cannot set the logic to QF_AUFBV as I desire. This has the side effect of making my scripts slower (sometimes timing out) than when I manually create the formulas programmatically and solve using QF_ABV. Thus, I want to eliminate (declare-datatypes) from my script. Most of the data types can be encoded as bit vectors. However, the most important sort in the library is a tuple of a bitvector term and three arrays. Is there a solution where I can make a sort like this, while still using QF_AUFBV logic?
You can always concatenate the bitvectors of the tuple and extract the relevant half whenever needed.
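For the bitvector field, here is a small z3py illustration of the packing idea; the field widths are made up for the example, and the three array members would remain separate declarations (Concat and Extract stay inside QF_AUFBV):

from z3 import BitVec, Concat, Extract, Solver

# Two made-up tuple fields packed into one 48-bit vector.
lo = BitVec('lo', 16)
hi = BitVec('hi', 32)

packed = Concat(hi, lo)              # hi occupies the upper 32 bits
lo_again = Extract(15, 0, packed)    # recover the low field
hi_again = Extract(47, 16, packed)   # recover the high field

s = Solver()
s.add(lo_again != lo)                # the round trip must hold...
print(s.check())                     # ...so this prints: unsat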

Erlang: Compute data structure literal (constant) at compile time?

This may be a naive question, and I suspect the answer is "yes," but I had no luck searching here and elsewhere on terms like "erlang compiler optimization constants" etc.
At any rate, can (will) the Erlang compiler create a data structure that is constant or literal at compile time, and use that instead of generating code that rebuilds the data structure over and over again? I will provide a simple toy example.
test() -> sets:from_list([usd, eur, yen, nzd, peso]).
Can (will) the compiler simply stick the precomputed set into the output of the function instead of computing it every time?
The reason I ask is that I want a lookup table in a program I'm developing. The table contains only constants that can (at least theoretically) be calculated at compile time. I'd like to compute the table once, and not have to compute it every time. I know I could do this in other ways, such as computing the thing and storing it in the process dictionary (or perhaps in an ETS or Mnesia table). But I always start simple, and to me the simplest solution is to do it like the toy example above, if the compiler optimizes it.
If that doesn't work, is there some other way to achieve what I want? (I guess I could look into parse transforms if they would work for this, but that's getting more complicated than I would like.)
THIS JUST IN: I used compile:file/2 with the 'S' option to produce the following. I'm no Erlang assembly expert, but it looks like the optimization isn't performed:
{function, test, 0, 5}.
{label,4}.
{func_info,{atom,exchange},{atom,test},0}.
{label,5}.
{move,{literal,[usd,eur,yen,nzd,peso]},{x,0}}.
{call_ext_only,1,{extfunc,sets,from_list,1}}.
No, the Erlang compiler doesn't perform partial evaluation of calls to external modules, which sets is. You can use the ct_expand module of the well-known parse_trans library to achieve this effect.
Given that a set is not a native datatype in Erlang and is (as a matter of fact) just a library written in Erlang, I don't think it's feasible for the compiler to create sets at compile time.
As you can see, sets are not optimized by the Erlang compiler (nor is any other library written in Erlang).
The way to solve your problem is to compute the set once and pass it as a parameter to the functions, or to use ETS/Mnesia.

Z3 naming let bindings in API

I am using Z3 from the API and I'm looking for a way to debug my constraints. My code compiles and Z3 runs on my constraints, but something is wrong with them. I'm hoping to look at the constraints I gave Z3 to determine what is wrong or missing, but I'm not sure how to do this in a readable way. The problem is that facilities like SMTLIB_DUMP_ASSERTIONS do not produce meaningful names for let-bound variables. Since I reuse many of the same expressions, nearly everything is let-bound with a generated variable name.
Is there any way to dump a file of the input constraints, where let-bound variables have a name that I have assigned? I don't particularly care what the format is, but SMTLIB 1 or 2 would be nice.
No, you cannot provide names for the let variables automatically created by Z3's AST printers.
One possible solution is to write your own AST printer. The Z3 distribution includes an example application, examples/c/test_capi.c, which contains the function:
void display_ast(Z3_context c, FILE * out, Z3_ast v)
It shows how to implement a simple AST printer. This example is very simple, but it is a starting point.
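For a rough feel of what such a printer involves, here is a sketch in z3py (the C example in the distribution is the authoritative reference; this walker is my own illustration):

from z3 import Int

def dump(e, depth=0):
    # Print a Z3 expression tree one node per line, using the
    # declaration names rather than printer-generated let bindings.
    print('  ' * depth + e.decl().name())
    for child in e.children():
        dump(child, depth + 1)

x, y = Int('x'), Int('y')
dump((x + y) * x >= y)

Shared subterms are simply printed each time they occur, which sidesteps the generated let names at the cost of larger output.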
