Number of variables in Z3 statistics

I've tried to find the number of variables in the Z3 statistics, but I couldn't find any indication. Can anyone tell me which of the following statistics is the number of variables?
Please note that I'm looking specifically for the number of variables, not the number of clauses.
(:added-eqs 9977
:binary-propagations 9922
:conflicts 367
:decisions 132793
:del-clause 244104
:final-checks 30
:lazy-quant-instantiations 334
:max-generation 11
:max-memory 15.36
:memory 4.29
:minimized-lits 2
:missed-quant-instantiations 49
:mk-clause 245835
:num-allocs 2987116.00
:propagations 108837
:quant-instantiations 124407
:restarts 17
:rlimit-count 13420765)

Z3 doesn't expose the number of Boolean variables created during search.
It could if you move the line:
st.update("mk bool var", m_stats.m_num_mk_bool_var);
in the file src/smt/smt_context_pp.cpp from under the comment to above the comment, recompile, etc.


Get a list of function results until result > x

I basically want the same thing as this OP:
Is there a J idiom for adding to a list until a certain condition is met?
But I can't get the answers to work with the OP's function or my own.
I will rephrase the question and write about the answers at the bottom.
I am trying to create a function that will return a list of Fibonacci numbers less than 2,000,000 (without writing "while" inside the function).
Here is what I have tried:
First, I picked a way to calculate Fibonacci numbers from this site:
https://code.jsoftware.com/wiki/Essays/Fibonacci_Sequence
fib =: (i. +/ .! i.@-)"0
echo fib i.10
0 1 1 2 3 5 8 13 21 34
Then I made an arbitrary list I knew was larger than what I needed:
fiblist =: (fib i.40) NB. THIS IS A BAD SOLUTION!
Finally, I removed the numbers that were greater than what I needed:
result =: (fiblist < 2e6) # fiblist
echo result
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765 10946 17711 28657 46368 75025 121393 196418 317811 514229 832040 1.34627e6
This gets the right result, but is there a way to avoid using some arbitrary number like 40 in "fib i.40"?
I would like to write a function such that "func 2e6" returns the list of Fibonacci numbers below 2,000,000 (without writing "while" inside the function).
echo func 2e6
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765 10946 17711 28657 46368 75025 121393 196418 317811 514229 832040 1.34627e6
Here are the answers from the other question:
First answer:
2 *^:(100&>@:])^:_"0 (1 3 5 7 9 11)
128 192 160 112 144 176
Second answer:
+:^:(100&>)^:(<_) ] 3
3 6 12 24 48 96 192
As I understand it, I just need to replace the functions used in the answers, but I don't see how that can work. For example, if I try:
echo (, [: +/ _2&{.)^:(100&>@:])^:_ i.2
I get an error.
I approached it this way. First, I wanted a way of generating the nth Fibonacci number, and I used f0b from your link to the Jsoftware Essays.
f0b=: (-&2 +&$: -&1) ^: (1&<) M.
Once I had that, I just wanted to put it into a verb that checks whether the result of f0b is less than a certain amount (I used 1000); if it is, the input is incremented and the process repeats. This is the ($:@:>:) part. $: is Self-Reference. The right argument 0 is the starting point for generating the sequence.
($:@:>: ^: (1000 > f0b)) 0
17
This tells me that the 17th Fibonacci number is the largest one less than my limit. I use that information to generate the Fibonacci numbers by applying f0b to each item in i. ($:@:>: ^: (1000 > f0b)) 0, using rank 0 (f0b"0).
f0b"0 i. ($:@:>: ^: (1000 > f0b)) 0
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987
In your case, you wanted the ones under 2000000:
f0b"0 i. ($:@:>: ^: (2000000 > f0b)) 0
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765 10946 17711 28657 46368 75025 121393 196418 317811 514229 832040 1346269
... and then I realized that you wanted a verb to answer your original question. I went with a dyadic verb where the left argument is the limit and the right argument generates the sequence. It's the same idea, but I was able to make use of some hooks in the tacit form: (> f0b) checks whether the result of f0b is under the limit, and ($: >:) increments the right argument while allowing the left argument to remain for $:.
2000000 (($: >:) ^: (> f0b)) 0
32
fnum=: (($: >:) ^: (> f0b))
2000000 fnum 0
32
f0b"0 i. 2000000 fnum 0
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765 10946 17711 28657 46368 75025 121393 196418 317811 514229 832040 1346269
I have little doubt that others will come up with better solutions, but this is what I cobbled together tonight.
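As a footnote for readers coming from other languages: the "keep extending until the condition fails" shape the question asks for is exactly what lazy lists give you. Here is the same computation sketched in Haskell (my own addition, not part of the original J answers):

-- The infinite Fibonacci list, defined in terms of itself.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (drop 1 fibs)

-- Everything below the limit: laziness means there is no arbitrary
-- "i.40"-style bound and no explicit while loop; takeWhile plays the
-- role that ^:_ plays in the J answers.
fibsBelow :: Integer -> [Integer]
fibsBelow limit = takeWhile (< limit) fibs

main :: IO ()
main = print (fibsBelow 2000000)
-- [0,1,1,2,3,5,8,13,21,34,...,832040,1346269]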

SPSS LAG Function

I have an SPSS dataset like this, where I would like to identify whether a subsequent date is a "duplicate" of a previous date for a given ID:
ID CorrDate
39 07/24/2017
39 07/25/2017
39 07/27/2017
39 07/27/2017
91 03/01/2017
99 07/04/2017
999 02/22/2017
999 02/22/2017
999 02/22/2017
999 02/22/2017
I tried the following LAG function in SPSS:
SORT CASES BY ID(A) CorrDate(A).
IF (ID=LAG(ID) AND CorrDate ne LAG(CorrDate)) Duplicate = 0.
EXECUTE.
IF (ID=LAG(ID) AND CorrDate eq LAG(CorrDate)) Duplicate = 1.
EXECUTE.
However, this did not appear to yield accurate results, so I tried the following commands to see if I could determine the source of the problem:
COMPUTE PreviousID=LAG(ID).
COMPUTE PreviousDate=LAG(CorrDate).
EXECUTE.
IF (ID=PreviousID) AND (CorrDate~=PreviousDate) Duplicate = 0.
EXECUTE.
IF (ID=PreviousID) AND (CorrDate=PreviousDate) Duplicate = 1.
EXECUTE.
Both yielded the following output, which does not seem to correctly identify duplicates for ID #39 and 999:
ID PreviousID CorrDate PreviousDate Duplicate
39 39 07/24/2017 07/23/2017 0
39 39 07/25/2017 07/24/2017 0
39 39 07/27/2017 07/25/2017 0
39 39 07/27/2017 07/27/2017 0
91 39 03/01/2017 07/27/2017 .
99 91 07/04/2017 03/01/2017 .
999 99 02/22/2017 07/04/2017 .
999 999 02/22/2017 02/22/2017 0
999 999 02/22/2017 02/22/2017 0
999 999 02/22/2017 02/22/2017 1
Am I sorting incorrectly? Or do I need to specify another lag option? Thanks for any assistance!
Both your methods for finding the duplicates are good and should work, but here are two more efficient ways:
aggregate out=* mode=add /break=ID CorrDate/occurrences=n.
This will create a new variable with the number of times that each combination of ID and CorrDate occurs in the data.
If you want more options (e.g. automatically selecting one of the duplicates to keep), use the menu Data > Identify Duplicate Cases and choose the options you need.
Re the cases that don't seem to work:
If SPSS says those two dates are not equal, they aren't...
As @horace_vr says, the dates probably contain a time component as well. You can easily see that in the data by changing the date format to include the time, or by changing the variable type to numeric; then the difference will be visible.
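For what it's worth, the lagged comparison in the OP's syntax is just "compare each case with the one before it within the same ID". Here is that logic sketched in Haskell (my own illustration, unrelated to SPSS itself), using some of the sample rows above; Just False, Just True, and Nothing correspond to the 0, 1, and missing values in the SPSS output:

-- Each case: an ID and a date (kept as a string here; in SPSS the date
-- is numeric and may hide a time-of-day component, as noted above).
type Case = (Int, String)

cases :: [Case]
cases =
  [ (39, "07/24/2017"), (39, "07/25/2017")
  , (39, "07/27/2017"), (39, "07/27/2017")
  , (91, "03/01/2017"), (99, "07/04/2017")
  , (999, "02/22/2017"), (999, "02/22/2017")
  ]

-- LAG-style duplicate flag: Nothing where SPSS would leave the value
-- missing (first case of each ID), Just True for a repeated date.
duplicateFlags :: [Case] -> [Maybe Bool]
duplicateFlags []     = []
duplicateFlags (c:cs) = Nothing : zipWith flag (c:cs) cs
  where
    flag (prevId, prevDate) (curId, curDate)
      | curId /= prevId = Nothing   -- previous case belongs to another ID
      | otherwise       = Just (curDate == prevDate)

main :: IO ()
main = mapM_ print (zip cases (duplicateFlags cases))

If SPSS output disagrees with the equivalent of this, then the stored values really do differ, which is exactly the hidden-time-component situation described above.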

Parsing complex files with Parsec

I would like to parse files with several sequences of data (same number of columns, same content, ...) with Haskell.
My data sequences will be delimited by keywords before and after.
BEGIN
1 882
2 809
3 435
4 197
5 229
6 425
...
END
BEGIN
1 235 623 684
2 871 699 557
3 918 686 49
4 53 564 906
5 246 344 501
6 929 138 474
...
END
My problem is that, after several tests with Parsec, I have the impression that Parsec is made to parse a file line by line rather than the file as a whole.
Is Parsec the right way to do what I want, or should I consider another tool like Happy or Alex?
Is there a website (or other resource) providing examples of parsing complex text files with Parsec?
Note: the example I give is a very simple one. Things would be trickier in my real files, with many more keywords and combinations.
The format as you've described it wouldn't be hard at all to handle in Parsec.
As for learning how to use it: your first step should be to avoid whatever guide gave you the impression that Parsec works line by line. I recommend Chapter 16 of Real World Haskell as a good place to get started, and once you're comfortable with the basics, the reference material at http://hackage.haskell.org/package/parsec is actually very clear.
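To give a feel for it, here is a minimal sketch of a parser for the BEGIN/END format above. It is my own illustration, not code from the original answer, and it makes simplifying assumptions (spaces or tabs between numbers, a newline after every line including the final END):

module Main where

import Text.Parsec
import Text.Parsec.String (Parser)

-- One integer, with any trailing spaces or tabs skipped.
number :: Parser Int
number = read <$> many1 digit <* skipMany (oneOf " \t")

-- One data row: one or more integers, then the end of the line.
row :: Parser [Int]
row = many1 number <* newline

-- One block: a BEGIN header, then rows until the END keyword.
block :: Parser [[Int]]
block = string "BEGIN" *> newline
     *> manyTill row (try (string "END" *> newline))

-- The whole file is just a sequence of blocks.
file :: Parser [[[Int]]]
file = many1 block <* eof

main :: IO ()
main = print . parse file "input" =<< readFile "input"

Nothing here is line-oriented except where the grammar itself says "end of line": file composes out of block, block out of row, and row out of number, so crossing lines takes no special effort.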

Logical Addresses & Page numbers

I just started learning memory management and have an idea of pages, frames, virtual memory, and so on, but I don't understand the procedure for converting logical addresses to their corresponding page numbers.
Here is the scenario:
Page size = 100 words (8000 bits?)
The process generates these logical addresses:
10 11 104 170 73 309 185 245 246 434 458 364
The process takes up two page frames, and none of its pages are resident (in page frames) when it begins execution.
Determine the page number corresponding to each logical address and fill them into a table with one row and 12 columns.
I know the answer is:
0 0 1 1 0 3 1 2 2 4 4 3
But can someone explain how this is done? Is there an equation or something? I remember seeing something involving a table, converting things to binary, and putting them in the page table, like 00100 in page 1, but I am not really sure. A graphical representation of how this works would be much appreciated. Thanks!
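There is indeed an equation, and it is simpler than the binary table you remember: page number = logical address div page size, and offset within the page = logical address mod page size. With a page size of 100 words, address 309 lands on page 3 at offset 9. A quick sketch in Haskell (my own illustration) that reproduces the answer row:

-- Page number = logical address `div` page size;
-- offset within the page = address `mod` page size.
pageSize :: Int
pageSize = 100

addresses :: [Int]
addresses = [10, 11, 104, 170, 73, 309, 185, 245, 246, 434, 458, 364]

main :: IO ()
main = do
  print (map (`div` pageSize) addresses)
  -- [0,0,1,1,0,3,1,2,2,4,4,3]  (the answer row from the question)
  print (map (`mod` pageSize) addresses)
  -- the word offsets within each page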

Xcode: retrieving one line of a CSV based on a search query

Here is a sample of my CSV:
10820 0 0 0 0
10900 2 4 4 4
11000 21 50 54 58
11100 23 54 59 63
11200 25 59 63 68
11300 27 63 68 73
11400 29 68 73 78
11500 31 72 78 83
11600 32 76 82 88
11700 34 81 87 93
I'm looking to use Xcode to retrieve one line from this very lengthy CSV based on the first column.
For example:
If the user enters "10900", the second line's columns will be returned.
If the user enters 11650, the 11600 line's columns will be returned... always taking the lower line when the input value falls short of the next line's value.
Any help would be appreciated. I've seen code to parse an entire CSV file, but I'm thinking this may be a big memory drain. Right now my CSV has 2000 lines of values, all in ascending order by the first column.
You have to load the file into memory anyway to find the correct value.
With such a big CSV file, I would recommend turning the CSV into a binary file (a plist, for example) and putting that binary into your application, instead of parsing the CSV at runtime each time. It performs much better and is easier to work with, since you deal directly with NSDictionary and NSArray objects.
If you don't want to do that for some reason, the next solution is to use something like CHCSVParser:
https://github.com/davedelong/CHCSVParser
It provides an optimization for loading only part of the file at a time, which is the optimization you might be looking for.
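Whichever storage you choose, the lookup itself is just a floor (lower-bound) search over the sorted first column. Here is that logic sketched in Haskell (purely illustrative: the table is truncated and the names are my own; the same idea translates directly to a loop or binary search in Objective-C):

-- A few of the sample rows: (key from the first column, other columns).
type Row = (Int, [Int])

table :: [Row]
table =
  [ (10820, [ 0,  0,  0,  0])
  , (10900, [ 2,  4,  4,  4])
  , (11000, [21, 50, 54, 58])
  , (11600, [32, 76, 82, 88])
  , (11700, [34, 81, 87, 93])
  ]

-- Last row whose key is <= the query; Nothing if the query is below
-- the first key. Linear here; a binary search gives O(log n) if the
-- 2000 rows ever become a bottleneck.
lookupFloor :: Int -> [Row] -> Maybe Row
lookupFloor q rows =
  case takeWhile ((<= q) . fst) rows of
    [] -> Nothing
    xs -> Just (last xs)

main :: IO ()
main = do
  print (lookupFloor 10900 table)  -- Just (10900,[2,4,4,4])
  print (lookupFloor 11650 table)  -- Just (11600,[32,76,82,88])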
