Z3 4.0 Push and Pop in Solver

I want to verify my problem using the solver under 2 different constraints. I wrote a sample program for this, with a variable x that I want to check, getting a model for x = 0 and then for x = 1.
I am trying to use Push and Pop in the Solver, but I am not sure how to do it exactly. I have written the following code. When I try to push the context and pop it back, I get a crash; I do not understand the reason for it, but it is a segmentation fault. Even if I comment out the push and pop instructions, as below, I still get the crash.
Could someone please give me some pointers to solve the problem?
Z3_config cfg;
Z3_context ctx;
Z3_solver solver;
Z3_ast x, zero, one, x_eq_zero, x_eq_one;
cfg = Z3_mk_config();
ctx = Z3_mk_context(cfg);
Z3_del_config(cfg);
solver = Z3_mk_solver((Z3_context)ctx);
x = mk_int_var(ctx, "x");
zero = mk_int(ctx, 0);
one = mk_int(ctx, 1);
x_eq_zero = Z3_mk_eq(ctx, x, zero);
x_eq_one = Z3_mk_eq(ctx, x, one);
//Z3_solver_push (ctx, solver);
Z3_solver_assert(ctx, solver, x_eq_zero);
printf("Scopes : %d\n", Z3_solver_get_num_scopes((Z3_context) ctx, (Z3_solver) solver));
printf("%s \n", Z3_ast_to_string(ctx, x_eq_zero));
int result = Z3_solver_check ((Z3_context) ctx, (Z3_solver) solver);
printf("Sat Result : %d\n", result);
printf("Model : %s\n", Z3_model_to_string ((Z3_context) ctx, Z3_solver_get_model ((Z3_context) ctx, (Z3_solver) solver)));
// Z3_solver_pop (ctx, solver, 1);
// printf("Scopes : %d\n", Z3_solver_get_num_scopes((Z3_context) ctx, (Z3_solver) solver));
Z3_solver_assert(ctx, solver, x_eq_one);
result = Z3_solver_check ((Z3_context) ctx, (Z3_solver) solver);
printf("Sat Result : %d\n", result);
printf("Model : %s\n", Z3_model_to_string ((Z3_context) ctx, Z3_solver_get_model ((Z3_context) ctx, (Z3_solver) solver)));
return 0;

The new API in Z3 4.0 has many new features. For example, it introduces several new objects: Solvers, Goals, Tactics, Probes, etc. Moreover, it also introduces a new memory management policy for objects such as ASTs and Models that existed in previous APIs. The new memory management policy is based on reference counting: every object has APIs of the form Z3_<object>_inc_ref and Z3_<object>_dec_ref.

We still support the old memory management policy for ASTs and Models. If the Z3_context is created using Z3_mk_context, then the old memory management policy is enabled for ASTs. If it is created using Z3_mk_context_rc, then Z3_inc_ref and Z3_dec_ref must be used to manage the reference counters. However, the new objects (Solvers, Goals, Tactics, etc.) only support reference counting.

We strongly encourage all users to move to the new reference counting memory management policy, so all new objects only support this policy. Moreover, all managed APIs (.NET, Python and OCaml) are based on the reference counting policy. Note that we also provide a thin C++ layer on top of the C API. It "hides" all reference counting calls using "smart pointers". The source code for the C++ layer is included in the Z3 distribution.
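For illustration, here is a minimal sketch (not part of the original question) of what the reference counting policy for ASTs looks like when the context is created with Z3_mk_context_rc; error handling and the ref counting of intermediate sorts are omitted for brevity:
Z3_config cfg = Z3_mk_config();
Z3_context ctx = Z3_mk_context_rc(cfg); /* ASTs must now be reference counted too */
Z3_del_config(cfg);
Z3_ast x = Z3_mk_const(ctx, Z3_mk_string_symbol(ctx, "x"), Z3_mk_int_sort(ctx));
Z3_inc_ref(ctx, x);  /* keep x alive */
/* ... use x ... */
Z3_dec_ref(ctx, x);  /* release it when done */
Z3_del_context(ctx);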
That being said, your program crashes because you did not increment the reference counter of the Z3_solver object. Here is the corrected version of your program; I essentially added the missing calls to Z3_solver_inc_ref and Z3_solver_dec_ref. The latter is needed to avoid a memory leak. After it, I have also included the same program written using the C++ API; it is much simpler. The C++ API is provided in the file include\z3++.h in the Z3 distribution, and examples are included at examples\c++.
Z3_config cfg;
Z3_context ctx;
Z3_solver solver;
Z3_ast x, zero, one, x_eq_zero, x_eq_one;
cfg = Z3_mk_config();
ctx = Z3_mk_context(cfg);
Z3_del_config(cfg);
solver = Z3_mk_solver((Z3_context)ctx);
Z3_solver_inc_ref(ctx, solver);
x = mk_int_var(ctx, "x");
zero = mk_int(ctx, 0);
one = mk_int(ctx, 1);
x_eq_zero = Z3_mk_eq(ctx, x, zero);
x_eq_one = Z3_mk_eq(ctx, x, one);
//Z3_solver_push (ctx, solver);
Z3_solver_assert(ctx, solver, x_eq_zero);
printf("Scopes : %d\n", Z3_solver_get_num_scopes((Z3_context) ctx, (Z3_solver) solver));
printf("%s \n", Z3_ast_to_string(ctx, x_eq_zero));
int result = Z3_solver_check ((Z3_context) ctx, (Z3_solver) solver);
printf("Sat Result : %d\n", result);
printf("Model : %s\n", Z3_model_to_string ((Z3_context) ctx, Z3_solver_get_model ((Z3_context) ctx, (Z3_solver) solver)));
// Z3_solver_pop (ctx, solver, 1);
// printf("Scopes : %d\n", Z3_solver_get_num_scopes((Z3_context) ctx, (Z3_solver) solver));
Z3_solver_assert(ctx, solver, x_eq_one);
result = Z3_solver_check ((Z3_context) ctx, (Z3_solver) solver);
printf("Sat Result : %d\n", result);
// printf("Model : %s\n", Z3_model_to_string ((Z3_context) ctx, Z3_solver_get_model ((Z3_context) ctx, (Z3_solver) solver)));
Z3_solver_dec_ref(ctx, solver);
return 0;
C++ version
context c;
solver s(c);
expr x = c.int_const("x");
expr x_eq_zero = x == 0;
expr x_eq_one = x == 1;
s.add(x_eq_zero);
std::cout << "Scopes : " << Z3_solver_get_num_scopes(c, s) << "\n";
std::cout << x_eq_zero << "\n";
std::cout << s.check() << "\n";
std::cout << s.get_model() << "\n";
s.add(x_eq_one);
std::cout << s.check() << "\n";
return 0;

Related

Quickly dumping large tables passed from Lua to C

In order to quickly save Lua tables containing large 1-dimensional arrays (the number of arrays is known, but the number of elements isn't fixed; approximately 800,000 elements in each array), I planned to use the Lua C API in the following way:
#include "lua.h"
#include "lauxlib.h"
#include <stdio.h>
#include <assert.h>
static int save_table(lua_State *L) {
assert(L && lua_type(L, -1) == LUA_TTABLE);
int len, r;
void *ptr;
FILE *f;
lua_pushstring(L, "p");
lua_gettable(L, -2);
len = lua_objlen(L, -1);
ptr = lua_topointer(L, -1);
f = fopen("p.bin", "wb");
assert(f);
r = fwrite(ptr, sizeof(int), len, f);
printf("[p] wrote %d elements out of %d requested\n", r, len);
fclose(f);
lua_pop(L, 1);
lua_pushstring(L, "q");
lua_gettable(L, -2);
len = lua_objlen(L, -1);
ptr = lua_topointer(L, -1);
f = fopen("q.bin", "wb");
assert(f);
r = fwrite(ptr, sizeof(float), len, f);
printf("[q] wrote %d elements out of %d requested\n", r, len);
fclose(f);
lua_pop(L, 1);
return 1;
}
int luaopen_savetable(lua_State *L) {
static const luaL_reg Map[] = {{"save_table", save_table}, {NULL, NULL}};
luaL_register(L, "mytask", Map);
return 1;
}
The Lua code is shown below:
-- sample table containing two 1-d arrays
my_table = {p = {11, 22, 33, 44}, q = {0.12, 0.23, 0.34, 0.45, 0.56}}
require "savetable"
mytask.save_table(my_table)
The above code produces two binary files with the wrong content. What is wrong here?
PS: I am using Lua 5.1. I am not sure if this is the fastest way of dumping large Lua tables. Suggestions are always welcome.
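An editorial note, not from the original thread: lua_topointer returns the address of the interpreter's internal table object (the Lua manual describes it as useful only for debugging); it is not a C array of the element values, and Lua 5.1 stores numbers as doubles inside tagged values, so the fwrite above dumps interpreter internals. Below is a hedged sketch of a well-defined alternative that copies the elements out one at a time; the helper name is hypothetical, the int conversion is an assumption about the data, and it needs <stdlib.h> in addition to the headers above:
static void dump_int_array(lua_State *L, int idx, const char *path) {
    int i, len = lua_objlen(L, idx);      /* array length, Lua 5.1 */
    int *buf = (int *)malloc(len * sizeof(int));
    FILE *f = fopen(path, "wb");
    for (i = 1; i <= len; i++) {
        lua_rawgeti(L, idx, i);           /* push t[i] onto the stack */
        buf[i - 1] = (int)lua_tonumber(L, -1);
        lua_pop(L, 1);
    }
    fwrite(buf, sizeof(int), len, f);
    fclose(f);
    free(buf);
}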

OpenCL "read_imageui " always returns zero 0

I have written a simple OpenCL program with the objective of making a copy of an input image using the OpenCL image2d struct. It seemed like a simple job, but I have been stuck on it.
The kernel's "read_imageui" always returns zero. The input image is an all-white JPEG image.
Image loading is done with OpenCV's imread.
Here is the kernel:
const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST;
__kernel void copy(__read_only image2d_t in, __write_only image2d_t out)
{
int idx = get_global_id(0);
int idy = get_global_id(1);
int2 pos = (int2)(idx,idy);
uint4 pix = read_imageui(in,smp,pos);
write_imageui(out,pos,pix);
}
Here is the host code:
int main(){
//get all platforms (drivers)
std::vector<cl::Platform> all_platforms;
cl::Platform::get(&all_platforms);
if(all_platforms.size()==0){
std::cout<<" No platforms found. Check OpenCL installation!\n";
exit(1);
}
cl::Platform default_platform=all_platforms[0];
std::cout << "Using platform: "<<default_platform.getInfo<CL_PLATFORM_NAME>()<<"\n";
std::cout <<" Platform Version: "<<default_platform.getInfo<CL_PLATFORM_VERSION>() <<"\n";
//cout << "Image 2D support : " << default_platform.getInfo<CL_DEVICE_IMAGE_SUPPORT>()<<"\n";
//get default device of the default platform
std::vector<cl::Device> all_devices;
default_platform.getDevices(CL_DEVICE_TYPE_ALL, &all_devices);
if(all_devices.size()==0){
std::cout<<" No devices found. Check OpenCL installation!\n";
exit(1);
}
cl::Device default_device=all_devices[0];
std::cout<< "Using device: "<<default_device.getInfo<CL_DEVICE_NAME>()<<"\n";
//creating a context
cl::Context context(default_device);
//cl::Program::Sources sources;
//sources.push_back(LoadKernel('kenel2.cl'));
//load kernel code
cl::Program program(context,LoadKernel("image_test.cl"));
//build kernel code
if(program.build(all_devices)!=CL_SUCCESS){
std::cout<<" Error building: "<<program.getBuildInfo<CL_PROGRAM_BUILD_LOG>(default_device)<<"\n";
exit(1);
}
/* IMAGE FORMATS */
// Determine and show image format support
vector<cl::ImageFormat > supportedFormats;
context.getSupportedImageFormats(CL_MEM_READ_ONLY,CL_MEM_OBJECT_IMAGE2D,&supportedFormats);
cout <<"No. of supported formats " <<supportedFormats.size()<<endl;
Mat white = imread("white_small.jpg");
cvtColor(white, white, CV_BGR2RGBA);
//white.convertTo(white,CV_8UC4);
Mat out = Mat(white);
out.setTo(Scalar(0));
char * inbuffer = reinterpret_cast<char *>(white.data);
char * outbuffer = reinterpret_cast<char *>(out.data);
//cout <<"Type of input : " <<white.type<<endl;
int sizeOfImage = white.cols * white.rows * white.channels();
int outImageSize = white.cols * white.rows * white.channels();
int w = white.cols;
int h = white.rows;
cout <<"Creating Images ... "<<endl;
cout <<"Dimensions ..." <<w << " x "<<h<<endl;
const cl::ImageFormat format(CL_RGBA, CL_UNSIGNED_INT8);
cl::Image2D imageSrc(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, format, white.cols, white.rows,0,inbuffer);
cl::Image2D imageDst(context, CL_MEM_WRITE_ONLY, format , white.cols, white.rows,0,NULL);
cout <<"Creating Kernel Program ... "<<endl;
cl::Kernel kernelCopy(program, "copy");
kernelCopy.setArg(0, imageSrc);
kernelCopy.setArg(1, imageDst);
cout <<"Creating Command Queue ... "<<endl;
cl::CommandQueue queue(context, default_device);
cout <<"Executing Kernel ... "<<endl;
int64 e = getTickCount();
for(int i = 0 ; i < 100 ; i ++)
{
queue.enqueueNDRangeKernel(kernelCopy, cl::NullRange, cl::NDRange(w, h), cl::NullRange);
queue.finish();
}
cout <<((getTickCount() - e) / getTickFrequency())/100 <<endl;
cl::size_t<3> origin;
cl::size_t<3> size;
origin[0] = 0;
origin[1] = 0;
origin[2] = 0;
size[0] = w;
size[1] = h;
size[2] = 1;
cout <<"Transfering Images ... "<<endl;
//unsigned char *tmp = new unsigned char (w * h * 4);
//CL_TRUE means that it waits for the entire image to be copied before continuing
queue.enqueueReadImage(imageDst, CL_TRUE, origin, size, 0, 0, outbuffer);
queue.finish();
imwrite("result.jpg",out);
/* OLD CODE ==================================================*/
return 0;
}
However, if I change the kernel to
uint4 pix2 = (uint4)(255,255,255,1);
write_imageui(out,pos,pix2);
it outputs a white image, which means there is something wrong with how I am using read_imageui.
It turned out to be something related to "reference counting" in the Mat copy constructor.
If, instead of using
Mat white = imread("white_small.jpg");
cvtColor(white, white, CV_BGR2RGBA);
//white.convertTo(white,CV_8UC4);
Mat out = Mat(white);
I initialize the output matrix "out" as
Mat out = Mat(white.size(), CV_8UC4);
then it works fine.
I couldn't completely work out what exactly caused it, but I know it is due to the "reference counting" of the Mat copy constructor when used in the first form.
When you write:
Mat out = Mat(white);
it makes a shallow copy of white into out. Both white.data and out.data will point to the same memory, and the reference count is incremented. So when you call out.setTo, the white Mat sees the same change. Declaring out as below might be a better idea:
Mat out = Mat(white.size(), CV_8UC(white.channels()));
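To make the sharing visible, here is a tiny sketch (hypothetical, reusing the file name from above): the Mat copy constructor copies only the header, while clone() copies the pixels:
cv::Mat white = cv::imread("white_small.jpg");
cv::Mat shallow(white);              // header copy: shares the pixel buffer
assert(shallow.data == white.data);  // same underlying memory
cv::Mat deep = white.clone();        // deep copy: fresh buffer
assert(deep.data != white.data);
shallow.setTo(cv::Scalar(0));        // this also zeroes white's pixels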

Manual CBC encryption handling with Crypto++

I am trying to play around with manual encryption in CBC mode while still using Crypto++, just to see whether I can do it manually.
The CBC algorithm is (AFAIK):
Presume we have n blocks K[1]...K[n]:
0. cipher = empty
1. xor(IV, K[1]) -> t1
2. encrypt(t1) -> r1
3. cipher += r1
4. xor(r1, K[2]) -> t2
5. encrypt(t2) -> r2
6. cipher += r2
7. xor(r2, K[3]) -> t3
8. ...
So I tried to implement it with Crypto++. I have a text file with alphanumeric characters only. Test 1 reads the file chunk by chunk (16 bytes) and encrypts each chunk in CBC mode manually, then concatenates the cipher. Test 2 uses Crypto++'s built-in CBC mode.
Test 1
char* key;
char* iv;
//Iterate in K[n] array of n blocks
BSIZE = 16;
std::string vectorToString(vector<char> v){
string s ="";
for (int i = 0; i < v.size(); i++){
s += v[i];
}
return s;
}
vector<char> xor( vector<char> s1, vector<char> s2, int len){
vector<char> r;
for (int i = 0; i < len; i++){
int u = s1[i] ^ s2[i];
r.push_back(u);
}
return r;
}
vector<char> byteToVector(byte *b, int len){
vector<char> v;
for (int i = 0; i < len; i++){
v.push_back( b[i]);
}
return v;
}
string cbc_manual(const char* fileName){
int i = 0;
//Open a file and read from it, buffer size = 16
// , equal to DEFAULT_BLOCK_SIZE
std::ifstream fin(fileName, std::ios::binary | std::ios::in);
const int BSIZE = 16;
vector<char> encryptBefore;
//This function will return cpc
string cpc ="";
while (!fin.eof()){
char buffer[BSIZE];
//Read a chunk of file
fin.read(buffer, BSIZE);
int sb = sizeof(buffer);
if (i == 0){
encryptBefore = byteToVector( iv, BSIZE);
}
//If i == 0, xor IV with current buffer
//else, xor encryptBefore with current buffer
vector<char> t1 = xor(encryptBefore, byteToVector((byte*) buffer, BSIZE), BSIZE);
//After xored, encrypt the xor result, it will be current step cipher
string r1= encrypt(t1, BSIZE).c_str();
cpc += r1;
const char* end = r1.c_str() ;
encryptBefore = stringToVector( r1);
i++;
}
return cpc;
}
This is my encrypt() function; because we have only one block per call, I use ECB (?) mode:
string encrypt(string s, int size){
ECB_Mode< AES >::Encryption e;
e.SetKey(key, size);
string cipher;
StringSource ss1(s, true,
new StreamTransformationFilter(e,
new StringSink(cipher)
) // StreamTransformationFilter
); // StringSource
return cipher;
}
And this is the 100% Crypto++-made solution:
Test 2
string encryptCBC(string plain){
CBC_Mode < AES >::Encryption encryption(key, sizeof(key), iv);
StreamTransformationFilter encryptor(encryption, NULL);
for (size_t j = 0; j < plain.size(); j++)
encryptor.Put((byte)plain[j]);
encryptor.MessageEnd();
size_t ready = encryptor.MaxRetrievable();
string cipher(ready, 0x00);
encryptor.Get((byte*)&cipher[0], cipher.size());
return cipher;
}
The results of Test 1 and Test 2 are different. In fact, the ciphertext from Test 1 contains the result of Test 2. For example:
Test 1's result: aaa[....]bbb[....]ccc[...]...
Test 2's result (Crypto++ built-in CBC): aaabbbccc...
I know the xor() function may cause a problem related to "sameChar ^ sameChar = 0", but is there any problem related to the algorithm in my code?
This is my Test 2.1, after jww's first solution below.
static string auto_cbc2(string plain, long size){
CBC_Mode< AES >::Encryption e;
e.SetKeyWithIV(key, sizeof(key), iv, sizeof(iv));
string cipherText;
CryptoPP::StringSource ss(plain, true,
new CryptoPP::StreamTransformationFilter(e,
new CryptoPP::StringSink(cipherText)
, BlockPaddingSchemeDef::NO_PADDING
) // StreamTransformationFilter
); // StringSource
return cipherText;
}
It throws an error:
Unhandled exception at 0x7407A6F2 in AES-CRPP.exe: Microsoft C++
exception: CryptoPP::InvalidDataFormat at memory location 0x00EFEA74
I only get this error when using BlockPaddingSchemeDef::NO_PADDING; if I remove the padding argument or use BlockPaddingSchemeDef::DEFAULT_PADDING, I get no error.
StringSource ss1(s, true,
new StreamTransformationFilter(e,
new StringSink(cipher)));
This uses PKCS padding by default. It takes a 16-byte input and produces a 32-byte output due to padding. You should do one of two things.
First, you can use StreamTransformationFilter::NO_PADDING (the BlockPaddingScheme enumerators are inherited by StreamTransformationFilter). Something like:
StringSource ss1(s, true,
new StreamTransformationFilter(e,
new StringSink(cipher),
StreamTransformationFilter::NO_PADDING));
Second, you can process blocks manually, 16 bytes at a time. Something like:
AES::Encryption encryptor(key, keySize);
byte ibuff[<some size>] = ...;
byte obuff[<some size>];
ASSERT(<some size> % AES::BLOCKSIZE == 0);
unsigned int BLOCKS = <some size>/AES::BLOCKSIZE;
for (unsigned int i=0; i<BLOCKS; i++)
{
encryptor.ProcessBlock(&ibuff[i*16], &obuff[i*16]);
// Do the CBC XOR thing...
}
You may be able to call ProcessAndXorBlock from the BlockCipher base class and do it in one shot.
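To make the second option concrete, here is a minimal sketch of the manual CBC loop (not from the original answer; the helper name is hypothetical, and it assumes the input length is already a multiple of AES::BLOCKSIZE and that key holds AES::DEFAULT_KEYLENGTH bytes):
#include <cryptopp/aes.h>
#include <cstring>
#include <string>
using namespace CryptoPP;
std::string cbc_encrypt_manual(const byte* data, size_t len, const byte* key, const byte* iv)
{
    AES::Encryption encryptor(key, AES::DEFAULT_KEYLENGTH);
    byte prev[AES::BLOCKSIZE];
    std::memcpy(prev, iv, AES::BLOCKSIZE);
    std::string cipher;
    for (size_t off = 0; off < len; off += AES::BLOCKSIZE) {
        byte block[AES::BLOCKSIZE];
        // CBC step: XOR the plaintext block with the previous ciphertext
        // block (the IV for the first block), then encrypt in place.
        for (int i = 0; i < AES::BLOCKSIZE; i++)
            block[i] = data[off + i] ^ prev[i];
        encryptor.ProcessBlock(block);   // one-block ECB encrypt, in place
        std::memcpy(prev, block, AES::BLOCKSIZE);
        cipher.append(reinterpret_cast<const char*>(block), AES::BLOCKSIZE);
    }
    return cipher;
}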

Can Montgomery multiplication be used to speed up the computation of (large number)! % (some prime)

This question originates in a comment I almost wrote below this question, where Zack is computing the factorial of a large number modulo a large number (that we will assume to be prime for the sake of this question). Zack is using the traditional computation of factorial, taking the remainder at each multiplication.
I almost commented that an alternative to consider was Montgomery multiplication, but thinking more about it, I have only seen this technique used to speed up several multiplications by the same multiplicand (in particular, to speed up the computation of a^n mod p).
My question is: can Montgomery multiplication be used to speed up the computation of n! mod p for large n and p?
Naively, no; you need to transform each of the n terms of the product into the "Montgomery space", so you have n full reductions mod m, the same as the "usual" algorithm.
However, a factorial isn't just an arbitrary product of n terms; it's much more structured. In particular, if you already have the "Montgomerized" kr mod m, then you can get (k+1)r mod m with a very cheap reduction: (k+1)r = kr + r, and since both kr mod m and r mod m lie in [0, m), at most one subtraction of m completes the reduction.
So this is perfectly feasible, though I haven't seen it done before. I went ahead and wrote a quick-and-dirty implementation (very untested, I wouldn't trust it very far at all):
// returns m^-1 mod 2**64 via clever 2-adic arithmetic (http://arxiv.org/pdf/1209.6626.pdf)
uint64_t inverse(uint64_t m) {
assert(m % 2 == 1);
uint64_t minv = 2 - m;
uint64_t m_1 = m - 1;
for (int i=1; i<6; i+=1) { m_1 *= m_1; minv *= (1 + m_1); }
return minv;
}
uint64_t montgomery_reduce(__uint128_t x, uint64_t minv, uint64_t m) {
return (x + (__uint128_t)((uint64_t)x*-minv)*m) >> 64;
}
uint64_t montgomery_multiply(uint64_t x, uint64_t y, uint64_t minv, uint64_t m) {
return montgomery_reduce(full_product(x, y), minv, m);
}
uint64_t montgomery_factorial(uint64_t x, uint64_t m) {
assert(x < m && m % 2 == 1);
uint64_t minv = inverse(m); // m^-1 mod 2**64
uint64_t r_mod_m = -m % m; // 2**64 mod m
uint64_t mont_term = r_mod_m;
uint64_t mont_result = r_mod_m;
for (uint64_t k=2; k<=x; k++) {
// Compute the montgomerized product term: kr mod m = (k-1)r + r mod m.
mont_term += r_mod_m;
if (mont_term >= m) mont_term -= m;
// Update the result by multiplying in the new term.
mont_result = montgomery_multiply(mont_result, mont_term, minv, m);
}
// Final reduction
return montgomery_reduce(mont_result, minv, m);
}
and benchmarked it against the usual implementation:
__uint128_t full_product(uint64_t x, uint64_t y) {
return (__uint128_t)x*y;
}
uint64_t naive_factorial(uint64_t x, uint64_t m) {
assert(x < m);
uint64_t result = x ? x : 1;
while (x --> 2) result = full_product(result,x) % m;
return result;
}
and against the usual implementation with some inline asm to fix a minor inefficiency:
uint64_t x86_asm_factorial(uint64_t x, uint64_t m) {
assert(x < m);
uint64_t result = x ? x : 1;
while (x --> 2) {
__asm__("mov %[result], %%rax; mul %[x]; div %[m]"
: [result] "+d" (result) : [x] "r" (x), [m] "r" (m) : "%rax", "flags");
}
return result;
}
Results were as follows on my Haswell laptop for reasonably large x:
implementation speedup
---------------------------
naive 1.00x
x86_asm 1.76x
montgomery 5.68x
So this really does seem to be a pretty nice win. The codegen for the Montgomery implementation is pretty decent, but could probably be improved somewhat further with hand-written assembly as well.
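As a quick sanity check, one might add something like the following (a hypothetical usage sketch, not part of the original benchmark), comparing the two implementations on a value small enough to verify directly:
uint64_t p = 1000000007;  // an odd prime
assert(montgomery_factorial(20, p) == naive_factorial(20, p));
assert(naive_factorial(20, p) == 2432902008176640000ULL % p);  // 20! fits in 64 bits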
This is an interesting approach for "modest" x and m. Once x gets large, the various approaches that have sub-linear complexity in x will necessarily win out; the factorial has a great deal of structure that this method does not take advantage of.

check; get_model; check causes segfault in Z3 C API

I'm trying to use Z3 via the C API and smtlib2 for incremental solving. Unfortunately, I got a segmentation violation when asserting some simple formula, checking it, obtaining its model, asserting something additional and then checking again. This also happens without asserting something new, i.e. when checking, retrieving a model, and checking again. Here is a minimal example to reproduce the error:
#include<z3.h>
int main()
{
Z3_config cfg = Z3_mk_config();
Z3_context ctx = Z3_mk_context(cfg);
Z3_ast fs = Z3_parse_smtlib2_string(ctx, "(declare-fun a () Int) (assert (= a 0))", 0, 0, 0, 0, 0, 0);
Z3_solver solver = Z3_mk_solver(ctx);
Z3_solver_assert(ctx, solver, fs);
Z3_solver_check(ctx, solver);
Z3_model m = Z3_solver_get_model(ctx, solver);
Z3_solver_check(ctx, solver);
Z3_del_config(cfg);
return 0;
}
I tried with two Z3 versions (4.3.1 on 64-bit Mac and 4.1 on 64-bit Ubuntu).
I appreciate any help, hints or workarounds; maybe I'm just using the API in the wrong way?
Many thanks,
Elisabeth
Here is a version of your code using reference counts. It crashes if I remove the reference counting calls.
int main() {
Z3_config cfg = Z3_mk_config();
Z3_context ctx = Z3_mk_context(cfg);
Z3_ast fs = Z3_parse_smtlib2_string(ctx, "(declare-fun a () Int) (assert (= a 0))", 0, 0, 0, 0, 0, 0);
Z3_inc_ref(ctx, fs);
Z3_solver solver = Z3_mk_solver(ctx);
Z3_solver_inc_ref(ctx, solver);
Z3_solver_assert(ctx, solver, fs);
Z3_solver_check(ctx, solver);
Z3_model m = Z3_solver_get_model(ctx, solver);
Z3_model_inc_ref(ctx, m);
Z3_solver_check(ctx, solver);
// work with model
Z3_solver_dec_ref(ctx, solver);
Z3_model_dec_ref(ctx, m);
Z3_dec_ref(ctx, fs);
Z3_del_config(cfg);
return 0;
}
BTW. The C++ API hides all the reference counting details. It is much more convenient to work with.
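For instance, the program above collapses to roughly the following with the C++ API (a sketch; it builds the assertion directly instead of parsing the SMT-LIB2 string):
#include <z3++.h>
#include <iostream>
using namespace z3;
int main() {
    context c;
    solver s(c);
    s.add(c.int_const("a") == 0);   // same constraint as the SMT-LIB2 string
    std::cout << s.check() << "\n";
    model m = s.get_model();        // the wrapper manages all reference counts
    std::cout << s.check() << "\n"; // safe to check again
    return 0;
}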
