Suppose we have the following code:
integrate(a*x/a,x);
How can we run this code without any simplification, so that we get the LaTeX for the unsimplified expression:
$\int\frac{ax}{a}dx$
?
You can disable the simplifier with:
simp:false;
before evaluating the integral. Don't forget to re-enable it afterwards.
The manual mentions this as an "extreme option". Presumably a lot of other functionality will be broken while it is off.
Another way (maybe you need both?): put a single quote before the expression. This prevents evaluation, but not simplification. See also: https://maxima.sourceforge.io/docs/manual/maxima_43.html
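Putting the two together, a minimal sketch (the exact TeX string may differ between Maxima versions):

/* turn the general simplifier off, emit the TeX, then turn it back on */
simp : false$
tex ('integrate (a*x/a, x))$    /* the quote keeps integrate itself from being evaluated */
simp : true$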
In a constraint solver like Gecode, we can control the exploration of the search space with the help of a branching function. For example, branch(home, x, INT_VAL_MIN) will start exploring the search space from the minimum possible value of variable x in its domain and try to find a solution. (There are many such alternatives.)
For z3, do we have this kind of flexibility built in? Is there any alternative?
SMT solvers usually do not allow these sorts of "hints" to be given; they act more like black boxes.
Having said that, each solver uses a ton of internal heuristics, and z3 itself has a number of settings that you can play with to give it hints. If you run:
z3 -pd
it will display all the options you can provide, and there are literally over 600 of them! Unfortunately, these options are not really well documented, and how they impact the solver is rather cryptic. The only reliable way to find out would be to study the source code and see what they do, which isn't for the faint of heart. But in any case, it will not be as obvious as the branch feature you cite for gecode.
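For illustration, here is a small z3py sketch that sets a few of them; the names used below ('verbose', 'smt.random_seed', 'timeout') exist in current releases, but check z3 -pd against your own version:

from z3 import Solver, Int, set_param

# global parameters, same names as listed by `z3 -pd`
set_param('verbose', 1)            # print progress/statistics while solving
set_param('smt.random_seed', 7)    # perturb the internal heuristics

s = Solver()
s.set('timeout', 5000)             # per-solver: give up after 5000 ms
x = Int('x')
s.add(x > 0, x < 10)
print(s.check())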
There are, however, other tricks one can use to speed up solving for SMT-solvers, unfortunately, these things are usually very problem-specific. If you post specific instances, you might get better suggestions.
I am trying to find an optimal solution using the Z3 API for Python. I have used set_option("verbose", 1) to print the statements that Z3 generates while checking for sat. Among the things it prints are pb.conflict statements, each followed by two numbers.
I want to know what exactly pb.conflict is. What do these statements signify? And what are the two numbers that get printed along with it?
pb stands for Pseudo-boolean. A pseudo-boolean function is a function from booleans to some other domain, usually Real. A conflict happens when the choice of a variable leads to an unsatisfiable clause set, at which point the solver has to backtrack. Keeping the backtracking to a minimum is essential for efficiency, and many of the SAT engines carefully track that number. While the details are entirely solver specific (i.e., those two numbers you're asking about), in general the higher the numbers, the more conflict cases the solver met, and hence might decide to reset the state completely or take some other action. Often, there are parameters that users can set to specify when such actions are taken and exactly what those are. But again, this is entirely solver and implementation specific.
A google search on pseudo-boolean optimization will result in a bunch of scholarly articles that you might want to peruse.
If you really want to find Z3's treatment of pseudo-booleans, then your best bet is probably to look at the implementation itself: https://github.com/Z3Prover/z3/blob/master/src/smt/theory_pb.cpp
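For a concrete idea of where the pb engine enters the picture, here is a small z3py sketch using the PbLe/PbGe helpers from the Python API; on larger instances of constraints like these, running with verbose output on is what produces the pb.* trace lines:

from z3 import Bools, Solver, PbLe, PbGe, set_option

set_option('verbose', 1)   # same setting as in the question

a, b, c, d = Bools('a b c d')
s = Solver()
# weighted "at most 4" / "at least 2" constraints over booleans:
# these are pseudo-boolean atoms, handled by theory_pb.cpp
s.add(PbLe([(a, 1), (b, 2), (c, 3), (d, 3)], 4))
s.add(PbGe([(a, 1), (b, 2), (c, 3), (d, 3)], 2))
print(s.check())
print(s.model())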
I am using the Python API of the Z3 solver to search for optimized schedules. It works pretty well, apart from the fact that it is sometimes very slow even for small graphs (sometimes it is very quick, though). The reason is probably that the constraints of my scheduling problem are quite complex.
I am trying to speed things up and stumbled on some articles about incremental solving.
As far as I understood, you can use incremental solving to prune some of the search space by only applying parts of the constraints.
So my original code was looking like that:
for constraint in constraint_set:
    self._opt_solver.add(constraint)
self._opt_solver.minimize(some_objective)
self._opt_solver.check()
model = self._opt_solver.model()
I changed it now to the following:
for constraint in constraint_set:
    self._opt_solver.push(constraint)
    self._opt_solver.check()
self._opt_solver.minimize(some_objective)
self._opt_solver.check()
model = self._opt_solver.model()
I basically substituted the "add" command with the "push" command and added a check() after each push.
So first of all: is my general approach correct?
Furthermore, I get an exception which I can't get rid of:
self._opt_solver.push(constraint)
TypeError: push() takes 1 positional argument but 2 were given
Can anyone give me a hint as to what I am doing wrong? Also, is there maybe a z3py tutorial that explains (with some examples) how to use incremental solving with the Python API?
My last question is: is this the right way to reduce the execution time of the solver at all, or is there a different/better way?
The function push doesn't take an argument. It creates a "backtracking" point that you can pop to later on. See here: http://z3prover.github.io/api/html/classz3py_1_1_solver.html#abc4ae989afee7ad164844640537107d9
So, it seems push isn't really what you want/need here at all. You should simply add your constraints one-by-one and call check. However, I very much doubt checking after each addition is going to speed anything up significantly. The optimizing solver (as opposed to the regular one), in particular, usually solves everything from scratch. (See the relevant discussion here: https://github.com/Z3Prover/z3/issues/1577)
Regarding incremental: the Python API is automatically "incremental." Incremental simply means the ability to call check() multiple times without the solver forgetting what it has seen before. (I.e., call check, assert more facts, call check again; the second check will take into account all the assertions from the very beginning.) You shouldn't assume this will be faster than calling check just once at the very end: it entirely depends on the heuristics and decision procedures involved, which in turn depend on the problem at hand.
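As a minimal sketch of both points (incremental add/check, and what push/pop are actually for):

from z3 import Solver, Int

x, y = Int('x'), Int('y')
s = Solver()

# incremental use: keep adding constraints and re-checking; the solver remembers them all
s.add(x > 0)
print(s.check())      # sat
s.add(y > x, y < 5)
print(s.check())      # sat, still taking x > 0 into account

# push()/pop() take no constraint argument: they only mark/undo a backtracking point
s.push()
s.add(x > 10)         # temporary extra constraint
print(s.check())      # unsat: x > 10 contradicts y > x, y < 5
s.pop()               # discard everything asserted since the push
print(s.check())      # sat again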
I have a project that doesn't have -spec or -type annotations in the code. Currently Dialyzer can find some warnings, most of them in machine-generated code.
Will adding type specs to code make dialyzer find more errors?
Off topic: is there any tool to check whether the specs have been violated or not?
Adding typespecs will dramatically improve the accuracy of Dialyzer.
Because Erlang is a dynamic language, Dialyzer must default to a rather broad interpretation of types unless you give it hints to narrow the "success" typing that it will go by. Think of it as giving Dialyzer a filter with which it can narrow the set of possible successes down to a subset of explicit types that should ever work.
This is not the same as Haskell, where the default assumption is failure and all code must be written with successful typing to be compiled at all -- Dialyzer must default to assume success unless it knows for sure that a type is going to fail.
Typespecs are the main part of this, but Dialyzer also checks guards, so a function like
increment(A) -> A + 1.
is not the same as
increment(A) when A > 100 -> A + 1.
though both may be typed as
-spec increment(integer()) -> integer().
Most of the time you only care about integer values being integer(), pos_integer(), neg_integer(), or non_neg_integer(), but occasionally you need an arbitrary range bounded only on one side -- and the type language has no way to represent this currently (though personally I would like to see a declaration of 100..infinity work as expected).
The unbounded range of when A > 100 requires a guard, but a bounded range like when A > 100 andalso A < 201 could be represented in the typespec alone:
-spec increment(101..200) -> pos_integer().
increment(A) -> %stuff.
Guards are fast, with the exception of calling length/1 (which you should probably never actually need in a guard), so don't worry about the performance overhead until you actually know, and can demonstrate, that you have a performance problem that comes from guards. Using guards and typespecs to constrain Dialyzer is extremely useful. They are also very useful as documentation for yourself, especially if you use edoc, as the typespec will be shown there, making APIs less mysterious and easier to toy with at a glance.
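As a small illustration of what that extra information buys you (a hypothetical module, not from the question's codebase): with the -spec and the guard in place, Dialyzer can flag the bad call below as breaking the contract; without them it has much less to go on.

-module(temperature).
-export([to_fahrenheit/1, broken/0]).

-spec to_fahrenheit(number()) -> float().
to_fahrenheit(Celsius) when is_number(Celsius) ->
    Celsius * 9 / 5 + 32.

broken() ->
    to_fahrenheit(an_atom).   %% Dialyzer: the call breaks the contract number()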
There is some interesting literature on the use of Dialyzer in existing codebases. A well-documented experience is here: Gradual Typing of Erlang Programs: A Wrangler Experience. (Unfortunately some of the other links I learned a lot from previously have disappeared or moved. (!.!) A careful read of the Wrangler paper, skimming over the User's Guide and man page, playing with Dialyzer, and some prior experience in a type system like Haskell's will more than prepare you for getting a lot of mileage out of Dialyzer, though.)
[On a side note, I've spoken with a few folks before about specifying "pure" functions that could be guaranteed as strongly typed either with a notation or by using a different definition syntax (maybe Prolog's :- instead of Erlang's ->... or something), but though that would be cool, and it is very possible even now to concentrate side-effects in a tiny part of the program and pass all results back in a tuple of {Results, SideEffectsTODO}, this is simply not a pressing need and Erlang works pretty darn well as-is. But Dialyzer is indeed very helpful for showing you where you've lost track of yourself!]
A pet peeve of mine is the use of double square brackets for Part rather than the single characters \[LeftDoubleBracket] and \[RightDoubleBracket]. I would like to have these automatically replaced when pasting plain-text code (from Stack Overflow, for example) into a Mathematica notebook. I have been unable to configure this.
Can it be done with ImportAutoReplacements or another automatic method (preferred), or will I need use a method like the "Paste Tabular Data Palette" referenced here?
Either way, I am not good with string parsing, and I want to learn the best way to handle bracket counting.
Sjoerd gave Defer and Simon gave Ctrl+Shift+N, both of which cause Mathematica to auto-format code. These are fine options.
I am still interested in a method that is automatic and/or preserves as much of the original code as possible. For example, maintaining prefix f@1, infix 1 ~f~ 2, and postfix 1 // f function applications in their original forms.
A subsection of this question was reposted as Matching brackets in a string and received several good answers.
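(For a flavor of what bracket counting amounts to, here is a minimal sketch, not taken from those answers: walk the string and track the [ ] nesting depth.)

(* running nesting depth of square brackets in a string *)
bracketDepths[s_String] :=
  FoldList[#1 + Switch[#2, "[", 1, "]", -1, _, 0] &, 0, Characters[s]]

bracketDepths["f[x[[1]]]"]   (* {0, 0, 1, 1, 2, 3, 3, 2, 1, 0} *)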
Not really an answer, but a thread on entering the double [[ ]] pair (with the cursor placed between both brackets) using a single keystroke came up a couple of weeks ago on MathGroup. It didn't help me, but apparently it was the solution for others.
EDIT
To make good on my slightly off-topic first response, here's a pattern replacement that seems to do the job (although I have difficulty understanding myself why it should be b and not b_; the latter doesn't work):
Defer[f[g[h[[i[[j[2], k[[1, m[[1, n[2]]]]]]]]]]]] /.
HoldPattern[Part[b, a_]] -> HoldPattern[b\[LeftDoubleBracket]a\[RightDoubleBracket]]
I leave the automation part to you.
EDIT 2
I discovered that if you add the above rule to ImportAutoReplacements, paste your SO code inside a Defer[] in a notebook, and evaluate it, you end up with a usable double-bracket form that can be used as input somewhere else.
EDIT 3
As remarked by Mr.Wizard invisibly below in the comments, the replacement rule isn't necessary. Defer does it on its own! Scientific progress goes "Boink", to cite Bill Watterson.
EDIT 4
The jury is still out on Defer. It has some peculiar side effects and doesn't work well on all expressions. Try the "Paste Tabular Data Palette" in the toolbag question, for instance. Pasting that block of code in Defer and executing gives me this:
It worked much better in another code snippet from the same thread:
The second part is how it looks after turning it into Input by editing the output of the first block (basically, I inserted a couple of returns to restore the formatting). Please notice that all double brackets turned into the correct corresponding symbol, but notice also the changed position of ReleaseHold.
Simon wrote in a comment, but declined to post as an answer, something fairly similar to what I requested, though it is not automatic on paste, and is not in isolation from other formatting.
(One can) select the text and press Ctrl+Shift+N to translate to StandardForm