Migrating from Z3 Version 3.2 to Version 4.0

I am migrating from Z3 version 3.2 to version 4.0.
However, code that worked earlier no longer works directly, and I am trying to find out why. I reduced the entire program to a very simple declaration and assertion, and it still fails. The code is:
long intSort = Z3_mk_int_sort (context);
long periodDeclStr = Z3_mk_string_symbol(context, "period");
long periodVar = Z3_mk_const(context, periodDeclStr, intSort);
long solver = Z3_mk_solver(context);
long zero = Z3_mk_int (context, 0, intSort);
long eqSt = Z3_mk_eq(context, periodVar, zero);
Z3_solver_assert (context, solver, eqSt);
The problem is with the second-to-last statement, Z3_mk_eq().
I receive the following error:
WARNING: invalid function application, sort mismatch on argument at position 2
WARNING: (define = arith arith Bool) applied to:
period of sort arith
0::Int of sort Int
My questions are as follows:
How do I debug this error? The same code still works with version 3.2 (without the solver, though).
Must I create the solver only after I have finished adding variables to the context? Can I add variables to the context after creating the solver, or do I have to re-create the solver?
Sorry for the trouble. I was mixing up the solver and the context when passing them to the API calls.
However, the problem still remains unsolved: I now get a crash in the Z3_ast_to_string() API.
I will try to resolve the problem and post an update.

Z3 4.0 now has an interaction log that records the API interaction precisely.
It should be possible to use this feature to debug the JNI layering and the bugs you find.
The log is opened using Z3_open_log().
You should open the log before creating any contexts.
You can also close the log at any point (Z3_close_log()) if you only want to capture a subset of the interaction. You can replay the log by giving it the suffix ".log" and running Z3 on it.
Alternatively, you can run Z3 with the option /log, that is, "Z3.exe /log ", to replay the interaction.
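A minimal C sketch of the logging pattern described above (the file name "z3.log" and the surrounding setup are illustrative, not part of the original question):

#include <z3.h>

int main(void)
{
    Z3_open_log("z3.log");               /* open the log before any context exists */

    Z3_config  cfg = Z3_mk_config();
    Z3_context ctx = Z3_mk_context(cfg);
    Z3_del_config(cfg);

    /* ... create sorts, constants and a solver; assert formulas; every
       API call made through ctx is recorded in z3.log ... */

    Z3_del_context(ctx);
    Z3_close_log();                      /* stop recording */
    return 0;
}

Replaying the captured z3.log is then done exactly as described above.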

Don't you want Z3_mk_eq(context, id, zero) instead of Z3_mk_eq(context, periodDecl, zero)?


How to use add_packet_field in a Wireshark Lua dissector?

I am stumbling my way through writing a dissector for our custom protocol in Lua. While I have basic field extraction working, many of our fields have scale factors associated with them. I'd like to present the scaled value in addition to the raw extracted value.
It seems to me tree_item:add_packet_field is tailor-made for this purpose, except I can't get it to work.
I found Mika's blog incredibly helpful, and followed his pattern for breaking my dissector into different files, etc. That's all working.
Given a packet type "my_packet", I have a 14-bit signed integer "AOA" that I can extract just fine:
local pref = "my_packet"
local m = {
    aoa = ProtoField.new("AOA", pref .. ".aoa", ftypes.INT16, nil, base.DEC, 0x3FFF, "angle of arrival measurement"),
}

local option = 2
local aoa_scale = 0.1

function m.parse(tree_arg, buffer)
    if option == 1 then
        -- Basic field extraction. This works just fine: the field is
        -- extracted and added to the tree.
        tree_arg:add(m.aoa, buffer)
    elseif option == 2 then
        -- This parses and runs. The item is decoded and added to the tree,
        -- but the value of 'v' is always nil.
        local c, v = tree_arg:add_packet_field(m.aoa, buffer, ENC_BIG_ENDIAN)
        -- This results in an error: doing arithmetic on 'nil'.
        c:append_text(" (scaled= " .. tostring(v * aoa_scale) .. ")")
    end
end
(I use ProtoField.new instead of any of the type-specific variants for consistency in declaring my fields)
The documentation for add_packet_field says that the encoding argument is mandatory.
There is a README in the source code that says ENC_BIG_ENDIAN should be specified for network byte-order data (mine is). I know that section is for proto_tree_add_item, but I traced the code far enough to see that add_packet_field ends up passing the encoding to proto_tree_add_item.
Basically, at this point, I'm lost. I did find this post from 2014 that suggested limited support for add_packet_field, but surely by now something as basic as an integer value is supported?
Also, I do know how to declare a Field and extract the value after tree:add does the parsing; worst case, I'll fall back to that. But surely there is a more expedient way to access the just-parsed value added to the tree?
Wireshark Version
3.2.4 (v3.2.4-0-g893b5a5e1e3e)
Compiled (64-bit) with Qt 5.12.8, with WinPcap SDK (WpdPack) 4.1.2, with GLib
2.52.3, with zlib 1.2.11, with SMI 0.4.8, with c-ares 1.15.0, with Lua 5.2.4,
with GnuTLS 3.6.3 and PKCS #11 support, with Gcrypt 1.8.3, with MIT Kerberos,
with MaxMind DB resolver, with nghttp2 1.39.2, with brotli, with LZ4, with
Zstandard, with Snappy, with libxml2 2.9.9, with QtMultimedia, with automatic
updates using WinSparkle 0.5.7, with AirPcap, with SpeexDSP (using bundled
resampler), with SBC, with SpanDSP, with bcg729.
Running on 64-bit Windows 10 (1803), build 17134, with Intel(R) Xeon(R) CPU
E3-1505M v6 @ 3.00GHz (with SSE4.2), with 32558 MB of physical memory, with
locale English_United States.1252, with light display mode, without HiDPI, with
Npcap version 0.9991, based on libpcap version 1.9.1, with GnuTLS 3.6.3, with
Gcrypt 1.8.3, with brotli 1.0.2, without AirPcap, binary plugins supported (19
loaded).
Built using Microsoft Visual Studio 2019 (VC++ 14.25, build 28614).
Looking at the try_add_packet_field() source code, only certain FT_ types are supported, namely:
FT_BYTES
FT_UINT_BYTES
FT_OID
FT_REL_OID
FT_SYSTEM_ID
FT_ABSOLUTE_TIME
FT_RELATIVE_TIME
None of the other FT_ types are supported [yet], including FT_INT16, which is the one you're interested in here, i.e., anything else just needs to be done the old-fashioned way.
If you'd like this to be implemented, I'd suggest filing a Wireshark enhancement bug request for this over at the Wireshark Bug Tracker.
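For reference, a hedged Lua sketch of that old-fashioned way for the AOA field above (it assumes the declarations from the question; Field.new must run at script load time, before dissection starts):

-- Read the just-parsed value back through a Field extractor after tree:add().
local aoa_field = Field.new("my_packet.aoa")

function m.parse(tree_arg, buffer)
    local c  = tree_arg:add(m.aoa, buffer)
    local fi = aoa_field()          -- FieldInfo for the value just added
    if fi then
        c:append_text(" (scaled= " .. tostring(fi.value * aoa_scale) .. ")")
    end
end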

How to verify with StepVerifier that a provided Mono did not complete?

With StepVerifier it is very easy to check whether a provided Mono has completed (just use the expectComplete() method in StepVerifier), but what should I do if I need to check the opposite case?
I tried this approach:
@Test
public void neverMonoTest() {
    Mono<String> neverMono = Mono.never();

    StepVerifier.create(neverMono)
            .expectSubscription()
            .expectNoEvent(Duration.ofSeconds(1))
            .thenCancel()
            .verify();
}
and such a test passes. But this is a false positive: when I replace Mono.never() with Mono.empty(), the test is still green.
Is there any better, reliable method to check the lack of a Mono's completion (within a given window of time, of course)?
It looks like you're hitting a bug in reactor-test, and unfortunately one that doesn't look likely to be solved any time soon:
Due to my memory that was a constant flaw in design of reactor-test. Most likely that will be fixed once reactor-test will be rewritten from scratch / significantly.
Downgrading to 3.1.2 seems to fix the problem, but that's quite a downgrade. The only other workaround I'm aware of was posted by PyvesB here, and involves waiting for the Mono to time out:
Mono<String> mono = Mono.never();
StepVerifier.create(mono.timeout(Duration.ofSeconds(1L)))
.verifyError(TimeoutException.class);
When the next release rolls out, you should be able to do:
Mono<String> mono = Mono.never();
StepVerifier.create(mono)
.expectTimeout(Duration.ofSeconds(1));
...as a more concise alternative.
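For completeness, here is the timeout workaround wrapped in a self-contained test (a sketch; the class name and JUnit 4 imports are my own, not from the original answer):

import java.time.Duration;
import java.util.concurrent.TimeoutException;

import org.junit.Test;
import reactor.core.publisher.Mono;
import reactor.test.StepVerifier;

public class NeverMonoTest {
    @Test
    public void monoDoesNotComplete() {
        Mono<String> neverMono = Mono.never();

        // Mono.never(): the timeout fires, TimeoutException arrives, test passes.
        // Mono.empty(): it completes before the timeout, so verifyError fails.
        StepVerifier.create(neverMono.timeout(Duration.ofSeconds(1L)))
                .verifyError(TimeoutException.class);
    }
}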

ZeroBrane remote debugging embedded script

I work on an image-processing application in Embarcadero C++ Builder XE10.2 that executes Lua scripts. I use LuaJIT with FFI to share image data. Everything works fine. I downloaded ZeroBrane Studio and tried to see if I can debug scripts executed from the "host" C++ application, so I've included
package.path = package.path .. ";C:/Portable_App/ZeroBraneStudio/lualibs/mobdebug/?.lua"
package.cpath = package.cpath .. ";C:/Portable_App/ZeroBraneStudio/bin/clibs/?.dll"
require("mobdebug").start()
before any function in the script is called. However, when the script is loaded and executed (on the C++ side):
FResult = lua_pcall(FLs, 0, 0, 0);
the host program crashes with a "floating point division by zero" exception. It crashes on
require("mobdebug").start()
Without this line the script works OK. Any clue?
It's not possible to tell what may be going wrong based on the provided information, but you can try to get the stack trace (using this SO answer), which should provide more information about what's leading to the error.
The only division that I'm aware of is in the serialization code, which uses tostring(1/0) to generate platform-independent NaN values. Could this lead to a "floating point division by zero" error in your Lua configuration?
(Update to include the solution mentioned in the comments.) The issue was related to BCC compiler settings on how to handle FPU exceptions. One way is to manipulate the FP control word: _clear87(); _control87(MCW_EM, MCW_EM); another is to set the arithmetic exception mask: SetExceptionMask(exAllArithmeticExceptions);
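A hedged C++ sketch of that fix on the host side (the wrapper function and its error handling are illustrative, not from the question):

#include <float.h>        // _clear87, _control87, MCW_EM

extern "C" {
#include "lua.h"
}

// Mask FPU exceptions before running Lua, so that the 1/0 used by the
// debugger's serializer yields inf/NaN instead of raising a hardware
// exception under BCC's default FPU control word.
int RunScript(lua_State *L)
{
    _clear87();                     // clear any pending FPU exception flags
    _control87(MCW_EM, MCW_EM);     // mask all floating-point exceptions

    return lua_pcall(L, 0, 0, 0);   // now safe to execute the script
}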

MetaEditor/MQL4 ExpertAdviser: Local Variable Declaration More Than Once?

I am looking at some old MetaEditor4 / MQL4 code, where a local variable was declared twice:
......
 1 int start()
 2 {
 3    if (1 == 2)
 4    {
 5       double myVar = 1;
 6    } else
 7    {
 8       double myVar = 2;
 9    }
10    return;
11 }
.......
Compilation in MetaEditor, version 5.00, build 1601, fails with:
'myVar' - variable already defined in line 8.
If I remove line 8, the compilation succeeds.
My questions are:
1. Is there any option in MetaEditor that tolerates multiple declarations of a local variable?
2. In previous versions of MetaTrader Terminal 4 / MetaEditor and .MQ4 code, was it possible to declare a local variable more than once in such a situation?
3. My MetaEditor has version 5.00, build 1601, but the extension of the code is .mq4 and it was installed together with the MetaTrader 4 Terminal software (from FXCM), so I assume I can still use .MQ4 code with it. Is there any chance to get a pure MQL4 installation from somewhere? Whenever I install MT4 (from e.g. an "mt4 download" search), I end up with the MT5 installer.
Prologue:
The worlds of MQL4 evolve. One may try to circumvent this fact, but finally, at one's own disappointment, attempts to avoid evolution will sooner or later be in vain.
Having been thrown into a need to re-engineer a code-base spanning a few man*decades in size, I can tell you many stories about what worked and what did not.
An "Old code" v/s a New-MQL4.56789
If just one thing ought to be taken from this: never try to "circumvent" the New-MQL4, but rather review the code and refactor the "Old code" - this is a way safer way to survive ( way longer ).
Yes, there are chances ( zero warranties, just a few chances left temporarily on the table ) that the new compiler version will remain able to generate an executable version of the code, but given that a new set of rules has already come to the city, the game will not last long.
Ad 1 + 2 ) The compiler still tolerates multiple declarations, but not in one scope
If the new version of the compiler has defined that any variable is declared only relative to its scope of validity, the serious programmer ought to take this as a general principle. The code above actually has another problem, nailed right to the scope-of-validity:
 2 ...
 3 if ( 1 == 2 ) {
 4    ...
 5    double myVar = 1;  // myVar declared & known |since HERE >
 6    ...                // masking any other,     |known HERE :
 7    ...                //                        |known HERE :
 8 } else                //                        |till  HERE . Undef further
 9 {
10    ...
11    double myVar = 2;  // myVar declared & known |since HERE >
12                       // masking any other,     |known HERE :
13    ...                //                        |known HERE :
14 }                     //                        |till  HERE . Undef further
So, if there were any _global_-scope variable with the same name myVar, it would not be "visible" during the existence of the locally declared variable wearing the same name.
Finally, once code execution escapes past line 8 or line 14, the locally declared double myVar simply ceases to exist, and this behaviour is principally correct ( the "older" compiler releases tolerated a dangerous habit of side-effects during years of scope-of-validity spillover(s), so it was high time to clean up the rules so as to meet a fair level of C/S standards ).
Ad 3 ) The language receives a lot from MQL5, even if not used in MQL4
Yes, MetaEditor will correctly compile MQL4 code from .mq4 source into the .EX4 code-execution format, no problem here. Even the auto-update process has started to run independently of the MT4 Terminal platform (auto-)updates ( so you will quite often see a new Help file arriving and an enforced re-compilation of all your localhost-visible .MQ4 assets into the updated .EX4 format, so "Do not panic." ).
Better never install a Broker-agnostic MT4; always go to your Broker's Support and get the installation package & help from your Broker. This is a business relationship you have signed in a contract, so keep these strings attached, as you are going to trade your money on a table they operate under the set Terms & Conditions. Some Brokers have means of platform customisation, so rather benefit from their custom settings that will match their Server-side automation.
It is more a question of economy of R&D efforts. ( One may read a lot about language components injected from the MQL5 domain in the IDE Editor's MQL4 Help. ) It is a natural product-design strategy not to double the efforts on a dual line. Without doubt, there are many details in which the Help file could be improved and better maintained; the common sense here is to live with the facts and re-learn which newly introduced features remain neutral for the MQL4 code base and which new things may actually help one a lot in aspects where the older compilers were short on powers.
If one objects that some compiler / platform re-design steps were bad, I would agree regarding the single-threaded, platform-critical, potentially blocking concentration of executing all the CustomIndicator-s in just one SPoF-thread.
But c'est la vie; until the system architects review this SPoF, the platform will remain susceptible to crashes from this feature. The ball is on the other side of the court, though, and a change will have to be implemented there.
The code might be run in 'strict' or non-strict mode.
Strict means that a variable must be declared within its scope; non-strict allows all the mess that you have now.
So put #property strict at the beginning of the file.
As for getting MT4: open a demo account somewhere and install MT4 there. A demo can be valid for 30 days only, with registration via the web-site of a broker, or unlimited, with the demo opened from within MT4 ( example - Alpari ).
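A minimal MQL4 sketch of the refactoring both answers point towards (illustrative, not from the question): declare the variable once in the enclosing scope and compile in strict mode.

#property strict

int start()
{
    double myVar;       // one declaration, visible in both branches
    if (1 == 2)
    {
        myVar = 1;
    }
    else
    {
        myVar = 2;
    }
    return(0);          // strict mode: int start() must return a value
}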

FAKE Fsc task is writing build products to wrong directory

I'm just learning F# and setting up a FAKE build harness for a hello-world-like application. (Though the phrase "Hell world" does occasionally come to mind... :-) I'm using a Mac and Emacs, generally preferring to avoid GUI IDEs.
After a bit of fiddling about with documentation, here's how I'm invoking the F# compiler via FAKE:
let buildDir = @"./build-app/"      // Where application build products go

Target "CompileApp" (fun _ ->       // Compile application source code
    !! @"src/app/**/*.fs"           // Look for F# source files
    |> Seq.toList                   // Convert FileIncludes to string list
    |> Fsc (fun p ->                // which is what the Fsc task wants
        { p with
            FscTarget = Exe
            Platform  = AnyCpu
            Output    = (buildDir + "hello-fsharp.exe") })  // *** Writing to . instead of buildDir?
)
That uses !! to make a FileIncludes of all the sources in the usual way, then uses Seq.toList to change that to a string list of filenames, which is then handed off to the Fsc task. Simple enough, and it even seems to work:
...
Starting Target: CompileApp (==> SetVersions)
FSC with args:[|"-o"; "./build-app/hello-fsharp.exe"; "--target:exe"; "--platform:anycpu";
"/Users/sgr/Documents/laboratory/hello-fsharp/src/app/hello-fsharp.fs"|]
Finished Target: CompileApp
...
However, despite what the console output above says, the actual build products go to the top-level directory, not the build directory. The message makes it look like the -o argument is passed to the compiler with an appropriate filename, but the executable gets put in . instead of ./build-app/.
So, 2 questions:
Is this a reasonable way to be invoking the F# compiler in a FAKE build harness?
What am I misunderstanding that is causing the build products to go to the wrong place?
This, or a very similar problem, was reported in FAKE issue #521 and seems to have been fixed in FAKE pull request #601, which see.
Explanation of the Problem
As is apparently well-known to everyone but me, the F# compiler as implemented in FSharp.Compiler.Service has a practice of skipping its first argument. See FSharp.Compiler.Service/tests/service/FscTests.fs around line 127, where we see the following nicely informative comment:
// fsc parser skips the first argument by default;
// perhaps this shouldn't happen in library code.
Whether it should or should not happen, it's what does happen. Since the -o came first in the arguments generated by FscHelper, it was dutifully ignored (along with its argument, apparently). Thus the assembly went to the default place, not the place specified.
Solutions
The temporary workaround was to specify --out:destinationFile in the OtherParams field of the FscParams setter in addition to the Output field; the latter is the sacrificial lamb to be ignored while the former gets the job done.
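A sketch of that workaround applied to the build target above (field names as in FAKE's FscHelper; the duplicated destination path is deliberate):

    |> Fsc (fun p ->
        { p with
            FscTarget   = Exe
            Platform    = AnyCpu
            Output      = (buildDir + "hello-fsharp.exe")                 // skipped as the first argument
            OtherParams = [ "--out:" + buildDir + "hello-fsharp.exe" ] }) // actually reaches the compiler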
The longer-term solution is to fix the arguments generated by FscHelper to have an extra throwaway argument at the front; then these 2 problems will annihilate in a puff of greasy black smoke. (It's kind of balletic in its beauty, when you think about it.) This is exactly what was just merged into master by @forki23:
// Always prepend "fsc.exe" since fsc compiler skips the first argument
let optsArr = Array.append [|"fsc.exe"|] optsArr
So that solution should be in the newest version of FAKE (3.11.0).
The answers to my 2 questions are thus:
Yes, this appears to be a reasonable way to invoke the F# compiler.
I didn't misunderstand anything; it was just a bug and a fix is in the pipeline.
More to the point: the actual misunderstanding was that I should have checked the FAKE issues and pull requests to see if anybody else had reported this sort of thing, and that's what I'll do next time.
