Z3 Setting Random Seed using C-API

In the Z3 options we see:
Search heuristics:
-rd:num random case-split frequency (default: 2).
-rs:num random seed.
I am wondering if there is a C API to set the random seed?
I use the following API to set the timeout.
Is there anything similar for the random seed?
params = Z3_mk_params(ctx);
Z3_params_set_uint(ctx, params, Z3_mk_string_symbol(ctx, ":timeout"), timeout);
Z3_solver_set_params(ctx, solver, params);
Thanks!

The name of the parameter is :random-seed. The value is also an unsigned int.
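For example, a minimal sketch mirroring the timeout snippet from the question (seed here is a hypothetical unsigned int holding the value you want to use):
params = Z3_mk_params(ctx);
Z3_params_set_uint(ctx, params, Z3_mk_string_symbol(ctx, ":random-seed"), seed);
Z3_solver_set_params(ctx, solver, params);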
That being said, the next Z3 version (v4.3.2) will have much better support for setting parameters. The improvements are already available in the unstable (work-in-progress) branch at http://z3.codeplex.com.

Related

Can I get a solution using "timeout" when using Optimize.minimize()?

I'm trying to minimize a variable, but z3 takes too long to give me a solution.
I would like to know if it's possible to get a solution when the timeout is triggered.
If yes, how can I do that?
Thanks in advance!
If by "solution" you mean the latest approximation of the optimal value, then you may be able to retrieve it, provided that the optimization algorithm being used finds any intermediate solution along the way. (Some optimization algorithms --like, e.g., maxres-- don't find any intermediate solution).
Example:
import z3
o = z3.Optimize()
o.add(...very hard problem...)
cf = z3.Int('cf')
o.add(cf == ...)
obj = o.minimize(cf)
o.set(timeout=...)
res = o.check()
print(res)
print(obj.upper())
Even when res = unknown because of a timeout, the objective instance contains the latest approximation of the optimum value found by z3 before the timeout.
Unfortunately, I am not sure whether it is also possible to retrieve the corresponding sub-optimal model with o.model() (or any other method).
For OptiMathSAT, I show how to retrieve the latest approximation of the optimum value and the corresponding model in the unit-test timeout.py.

Extracting raw p-values from glm glht function (instead of Tukey adjusted p-values)

I was given the code below and asked to extract the raw p-values rather than the Tukey-adjusted values (as we will be adjusting for multiple comparisons using Holm-Bonferroni at a later stage), but I'm not sure what to replace "Tukey" with (I'm new to using R).
res=glht(x, linfct=mcp(Letter="Tukey"))
out=summary(res)
out
I found the answer. For anyone else who is interested...
The "Tukey" option for the glht function in the multcomp package does not actually use the Tukey correction, it just sets up all pairwise comparisons. It doesn't do p-values; for that you need summary.glht. To get the raw p values you use test=adjusted("none").
res=glht(x, linfct=mcp(Letter="Tukey"))
out=summary(res, test = adjusted("none"))
out

Z3 getting last valid model

I am using the Z3 C++ API to find a satisfying assignment of a formula that is minimal with respect to some boolean variables (let us call them b0,...,bn) being true.
That is, my formula includes boolean variables b0,...,bn and I want to find a satisfying assignment with the least number of b0,...,bn set to true.
I do this by initially finding a subset of b0,...,bn that can be assigned to true while satisfying my formula, and I incrementally ask the solver to find smaller subsets (i.e. where one of these boolean variables is flipped to false).
I find my local minimum when I cannot find a smaller subset, i.e. when I get an unsat result from Z3. At this point, I would like to access the last valid model.
Is that possible? Does Z3 modify the model when a call to "check" is unsat?
If so, how can I do this using the C++ api?
Many thanks in advance,
You can retrieve a model if the solver returns "sat". The model refers to the state of the solver, so if you add assertions, the state changes and models are no longer valid until you check satisfiability and it returns sat.
So you can retrieve a model every time the solver returns sat, and then discard all but the last model.
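A minimal C++ sketch of that pattern, using a toy integer-minimization loop instead of your boolean-subset search (the variable x and the x >= 3 constraint are just stand-ins for the real problem):
#include <z3++.h>
#include <iostream>

int main() {
    z3::context ctx;
    z3::solver s(ctx);
    z3::expr x = ctx.int_const("x");
    s.add(x >= 3);                        // stand-in for the real formula

    if (s.check() != z3::sat) return 1;   // nothing to minimize
    z3::model best = s.get_model();       // keep a copy; the object stays alive after new assertions

    while (true) {
        z3::expr cur = best.eval(x, true);
        s.add(x < cur);                   // ask for a strictly "smaller" solution
        if (s.check() != z3::sat) break;  // unsat: 'best' holds the last valid model
        best = s.get_model();             // overwrite with the newer model
    }
    std::cout << "last valid model:\n" << best << "\n";
    return 0;
}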
As Nikolaj mentioned, you need to keep track of models after each call that results in sat and return the last one when you get an unsat if you follow the strategy you outlined.
However, there might be another alternative that avoids repeated calls altogether. Instead of a satisfaction problem, you can cast your problem as an optimization one. You mentioned you have control variables b0, b1, .. bn such that you want to minimize the number of them getting set to true for a satisfying model. Create a metric that counts the number of ones in these variables. Something like:
metric = (if b0 then 1 else 0)
+ (if b1 then 1 else 0)
+ ...
+ (if bn then 1 else 0)
Then use Z3's optimization routines to minimize metric. I believe this will provide you with the solution you are looking for in one call only.
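A rough C++ sketch of that idea (the three boolean constants and the disjunction are placeholders for your actual b0,...,bn and formula):
#include <z3++.h>
#include <iostream>

int main() {
    z3::context ctx;
    z3::optimize opt(ctx);

    z3::expr b0 = ctx.bool_const("b0");
    z3::expr b1 = ctx.bool_const("b1");
    z3::expr b2 = ctx.bool_const("b2");
    opt.add(b0 || b1 || b2);              // placeholder for your real constraints

    z3::expr one = ctx.int_val(1), zero = ctx.int_val(0);
    z3::expr metric = z3::ite(b0, one, zero)
                    + z3::ite(b1, one, zero)
                    + z3::ite(b2, one, zero);

    z3::optimize::handle h = opt.minimize(metric);
    if (opt.check() == z3::sat) {
        std::cout << "minimum metric: " << opt.lower(h) << "\n";
        std::cout << opt.get_model() << "\n";
    }
    return 0;
}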
Some helpful references:
Here's the Z3 optimization tutorial: http://rise4fun.com/z3opt/tutorialcontent/guide.
This example, in particular, talks about soft constraints, and may be quite applicable in your case as well: http://rise4fun.com/z3opt/tutorialcontent/guide#h25.
Here's the C++ API reference for the optimizer: http://z3prover.github.io/api/html/classz3_1_1optimize.html.

How to keep track of the seed

In Lua it's common knowledge that you can use math.randomseed, but it's also apparent that math.random updates the seed as well (calling it twice does not return the same result). What does it set the seed to, and how can I keep track of it? If that's impossible, please explain why.
This is not really a Lua question, but a general question about how the underlying RNG algorithm works.
First, Lua doesn't have its own RNG; it just hands you a (slightly mangled) value from the RNG of the underlying C library. Most RNG implementations do not reveal their inner state, but sometimes you can calculate it yourself.
For example, when you use Lua on Windows, you'll be using the LCG-based RNG from the MS C library. The numbers you get are a slice of the seed, not the full value. There are two ways you can deal with that:
If you know how many times you called random, you can just take the initial seed value, feed it to your own copy of the same algorithm with the same constants that are hard-coded in the MS library, and get the exact value of the seed (see the sketch below).
If you don't, but you can be sure that nobody interferes between your two calls to random, you can take two generated numbers and reverse the LCG algorithm by shifting bits back into place. This will leave you with several missing bits (plus one more bit thanks to Lua's mangling) that you will simply need to brute-force: iterate over all possibilities for the missing bits until your copy of the algorithm produces exactly the same two "random" numbers you recorded before. That will be the current seed stored inside the library's RNG as well. A well-programmed solution in Lua can brute-force this in about 0.2-0.5 s on a somewhat dated PC; I have done it in the past. Here's an example on Crypto.SE that talks about this task in more detail: Predicting values from a Linear Congruential Generator.
The first approach can be used with any other RNG algorithm that doesn't use any real entropy; the second works with most RNGs that don't mask so many bits of the slice that brute-forcing becomes unreasonable.
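A minimal C++ sketch of the first approach, assuming the widely documented MSVC LCG constants (214013, 2531011) and that each math.random() call advances the underlying C RNG by exactly one step; both assumptions need checking against your actual platform:
#include <cstdint>
#include <iostream>

int main() {
    uint32_t state = 4;        // the value you passed to math.randomseed
    unsigned calls = 10;       // how many times math.random has been called since then
    for (unsigned i = 0; i < calls; ++i)
        state = state * 214013u + 2531011u;   // one LCG step; wraps mod 2^32 for free
    std::cout << "current internal seed: " << state << "\n";
    return 0;
}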
The real answer, though, is that you don't need to keep track of the seed at all. What you want is probably something else.
Whether or not you set a seed, all numbers math.random() generates are pseudo-random (if you don't set one explicitly, the system generates a seed by itself).
math.randomseed(4)
print(math.random())
print(math.random())
math.randomseed(4)
print(math.random())
Outputs
0.50827539156303
0.75454387490399
0.50827539156303
So if you reset the seed to the same value, you can predict all the values that are going to come up, up to the number of consecutive values you already generated using that seed.
What setting the seed does not do is keep the output of math.random() constant; the output would only repeat if you kept resetting the seed to the same value.
An analogy as an example
Imagine the random number is an integer between 0 and 9 (instead of a double between 0 and 1).
math.random() could traverse pi's decimals from an arbitrary starting position (default could be system time).
What you do when you use math.randomseed() is (not literally, this is an analogy as mentioned) set the starting position in pi from which you are going to retrieve your numbers.
If you now reset the seed to the same starting position, the numbers are going to be the same as the last time you started from that position.
You will know the numbers up to the last call you already made; after that you can't be certain anymore.

Will random() ever change?

I have been looking into a development issue that requires the use of pseudorandom number generation to allow the same set of random numbers to be generated for a given seed.
I have currently been looking at using long random(void) and void srandom(unsigned seed) for this (man page), and at the moment these generate the same sequence of random numbers in a Mac app, an iOS app, and a 64-bit iOS app, which is what I was hoping for. The iOS tests were only in the simulator, so I don't know whether this will affect the result.
My main concern is that this algorithm could change at some point, making the applications we're developing effectively useless with old data. What are the chances of these algorithms changing or being different on a future device?
I'd say it's extremely likely they will change as the sequence is not guaranteed by any standard.
Why not use your own random number sequence? Even a simple linear congruential generator satisfies most statistical properties of randomness. Here is the formula for such a generator:
next_number = (a * current_number + b) % c
with
a = 1103515245
b = 12345
c = 4294967296
These values of a, b, c give you good statistical properties and are quite well known for building quick and dirty generators.
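For illustration, a self-contained C++ version of exactly that recurrence, so the sequence depends only on your own code and cannot change between OS releases (the seed 42 is arbitrary):
#include <cstdint>
#include <iostream>

struct Lcg {
    uint32_t state;                        // c = 4294967296 = 2^32, so uint32_t wraps for free
    explicit Lcg(uint32_t seed) : state(seed) {}
    uint32_t next() {
        state = 1103515245u * state + 12345u;   // next = (a * current + b) % c
        return state;
    }
};

int main() {
    Lcg rng(42);                           // same seed, same sequence, on every platform
    for (int i = 0; i < 5; ++i)
        std::cout << rng.next() << "\n";
    return 0;
}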
I don't have the slightest idea about the answer to the question you ask.
If a related question is "How can I be absolutely sure to have the same pseudo-random sequences generated in 10 years' time?", the answer to that question is: don't rely on an external library, write the code explicitly.
Bathsheba proposed such a generator above. You can google for "pseudo random generator algorithm"; Wikipedia also has a list of such algorithms.
In fact, srandom did change since Mac OS X 10.7, according to this blog post. However, this was due to the way srandom was implemented: it tried to access an uninitialized local variable, which is undefined behavior in C. According to the post, the new compiler used since Mac OS X 10.7 optimized out the uninitialized memory access, changing its behavior in subtle ways.
