I have created a custom theory plugin which does nothing at the moment: the callbacks are all implemented and registered, but they simply return. I then read in a bunch of declare-consts, declare-funs, and asserts using Z3_parse_smtlib2_string, and pass the resulting AST to Z3_assert_cnstr. A subsequent call to Z3_check_and_get_model fails with the following error:
The mk_fresh_ext_data callback was not set for user theory, you must use Z3_theory_set_mk_fresh_ext_data_callback
As far as I can tell, Z3_theory_set_mk_fresh_ext_data_callback does not exist.
Using the same string, but without registering the theory plugin, Z3_check_and_get_model returns sat and gives a model as expected.
I am using version 4 and the Linux 64-bit libraries.
The full example is here: http://pastebin.com/hLJ8hFf1
The problem is the model-based quantifier instantiation module (MBQI). This module tries to create a copy of the main logical engine. To create the copy, Z3 must copy every theory plugin. It can do this for all built-in theories, but not for external theories.
The original theory plugin API did not have support for copying itself, because it was implemented before the MBQI module existed. The API Z3_theory_set_mk_fresh_ext_data_callback is meant for that, but it has not been exposed yet, for several reasons.
The main issue is that Z3 4.0 has a new API for solvers. The current theory plugin API is incompatible with the new solver API.
We are investigating ways of integrating them.
In Z3 4.0, the theory plugins only work with the old (deprecated) solver API.
To avoid the problem you described, you just have to disable the MBQI module. You can do that by setting MBQI=false when creating Z3_context.
In C, you can do that using the following code fragment.
Z3_config cfg;
Z3_context ctx;
cfg = Z3_mk_config();
/* Disable model-based quantifier instantiation. */
Z3_set_param_value(cfg, "MBQI", "false");
ctx = Z3_mk_context(cfg);
/* The configuration object is no longer needed once the context exists. */
Z3_del_config(cfg);
This also explains why your plugin works on quantifier-free formulas. The MBQI module is not used for this kind of formula.
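For reference, here is a minimal C sketch of the flow described in the question, with MBQI disabled; the SMT-LIB string is only a placeholder, and the theory-plugin registration is omitted.
#include <stdio.h>
#include <z3.h>

int main(void) {
    Z3_config cfg = Z3_mk_config();
    Z3_set_param_value(cfg, "MBQI", "false");  /* disable model-based quantifier instantiation */
    Z3_context ctx = Z3_mk_context(cfg);
    Z3_del_config(cfg);

    /* Placeholder input; the real code would parse the user's declare-consts,
       declare-funs, and asserts, after registering the custom theory plugin. */
    Z3_ast fml = Z3_parse_smtlib2_string(ctx,
        "(declare-const x Int) (assert (> x 0))",
        0, 0, 0, 0, 0, 0);
    Z3_assert_cnstr(ctx, fml);

    Z3_model m = 0;
    if (Z3_check_and_get_model(ctx, &m) == Z3_L_TRUE) {
        printf("sat\n%s\n", Z3_model_to_string(ctx, m));
        Z3_del_model(ctx, m);
    }
    Z3_del_context(ctx);
    return 0;
}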
Can't build GrainLib (only interfaces) with Microsoft.Orleans.CodeGenerator.MSBuild 3.0.0 package
error:
Exc level 0: System.NotSupportedException: Projects of type .fsproj are not supported.
Is there a workaround?
Update:
After Arshia001's explanation I went back to looking for errors in my F# silo configuration, and solved my problems by applying WithCodeGeneration instead of WithReference, and applying it to every assembly:
.ConfigureApplicationParts(fun parts ->
    parts.AddApplicationPart((typeof<IMyGrain>).Assembly)
         .WithCodeGeneration()
         .AddApplicationPart((typeof<MyGrain>).Assembly)
         .WithCodeGeneration() |> ignore)
It seems there are a lot of issues with the Orleans documentation and examples.
Unfortunately, no. I once started a discussion around adding first class F# support to Orleans, but it died down pretty quickly since nobody else seemed to be interested at the time.
You can always use runtime serializer generation. They do have an official F# sample too.
I have a question about save_main_session and best practices; please let me know if there is a doc somewhere that covers this. With save_main_session set to False, if my DoFn's process method uses, for example, the standard library copy module, Beam's FileSystems API, or my custom module, and I import those at the module level (top of the file) in which the DoFn is defined, this fails on the Dataflow service with an error saying that the copy (etc.) module was not found from the process method (which all makes sense). I could fix this by:
1. importing copy inside the process method
2. "saving" the copy reference/object as a field/provider/etc. in the DoFn instance
3. setting save_main_session to True
I don't want to set save_main_session to True because, as far as I understand, it captures the whole main session, and I have a bunch of objects in there that are not serializable; overall I find save_main_session smelly and hacky. The first option is somewhat smelly as well and doesn't always work: imports are cached, so performance-wise it should be OK-ish, but as far as I understand it would not work for my custom modules (unless I explicitly install/ship them to the workers). And lastly, the second option is somewhat hacky, working around the Beam framework.
I'm leaning mostly towards the second option, but it just doesn't feel right not to be able to use the global imports and to work around that by adding and using instance field(s).
What is the best practice for this problem? I know the examples suggest setting save_main_session to True, but that again has consequences and just smells. Are there better options?
According to the documentation, if you have objects in your global namespace that cannot be pickled, you will get a pickling error. If the error is regarding a module that should be available in the Python distribution, you can solve this by importing the module locally, where it is used.
The DoFn class comes with a setup method that is called once per DoFn instance. You can override this method, and perform your imports there.
As a note, this method is available as of Beam's Python SDK 2.13.0. If you're using an earlier version, you can override start_bundle in your DoFn and perform the import there instead.
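For illustration, here is a minimal sketch of the setup-based approach; my_custom_module and its transform function are hypothetical stand-ins for whatever your DoFn actually needs:
import apache_beam as beam


class MyDoFn(beam.DoFn):
    def setup(self):
        # Runs once per DoFn instance on the worker, after unpickling,
        # so nothing imported here has to survive serialization.
        import copy
        import my_custom_module  # hypothetical; still has to be shipped to the workers
        self._copy = copy
        self._helpers = my_custom_module

    def process(self, element):
        # Use the modules through the instance fields set in setup().
        yield self._helpers.transform(self._copy.deepcopy(element))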
I have a little Java program. I build a binary using Graal's native-image (i.e. GraalVM AOT aka SubstrateVM).
My program can be executed either with a Java runtime or from the native-image binary. What's the best way to tell which context I'm running in?
(This might be a bad practice in general but I believe it's inevitable/necessary in certain not-uncommon circumstances.)
Edit: There is now an API for that. See user7983712's answer.
The way it's done in GraalVM is by capturing the com.oracle.graalvm.isaot system property: it is set to true while AOT images are being built. If you combine that with the fact that static initializers run during image generation, you can use:
static final boolean IS_AOT = Boolean.getBoolean("com.oracle.graalvm.isaot");
This boolean will remain true when running the native image.
This is also useful for cutting off paths that you don't want in the final output: for example, if you have some code that uses a feature SVM doesn't support (e.g., dynamic class loading), you can guard it with !IS_AOT.
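For instance, a hypothetical guard around dynamic class loading (the plugin class name is made up for illustration):
public class PluginLoader {
    static final boolean IS_AOT = Boolean.getBoolean("com.oracle.graalvm.isaot");

    // Only attempt dynamic class loading on a regular JVM; under SubstrateVM
    // fall back to a statically known implementation.
    static Runnable loadPlugin() throws Exception {
        if (!IS_AOT) {
            return (Runnable) Class.forName("com.example.DynamicPlugin")
                    .getDeclaredConstructor().newInstance();
        }
        return () -> System.out.println("built-in fallback");
    }
}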
GraalVM now provides an API for checking the AOT context:
ImageInfo.inImageCode()
ImageInfo.inImageRuntimeCode()
ImageInfo.inImageBuildtimeCode()
ImageInfo.isExecutable()
ImageInfo.isSharedLibrary()
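For example, a small usage sketch (this assumes the GraalVM SDK artifact that ships ImageInfo is on the class path):
import org.graalvm.nativeimage.ImageInfo;

public class RuntimeContext {
    public static void main(String[] args) {
        if (ImageInfo.inImageRuntimeCode()) {
            System.out.println("running from a native image");
        } else if (ImageInfo.inImageBuildtimeCode()) {
            System.out.println("running during native-image generation");
        } else {
            System.out.println("running on a regular JVM");
        }
    }
}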
I'm leaning towards checking the presence/absence of some system properties. When I print out the system properties under Graal AOT I see:
{os.arch=x86_64, file.encoding=UTF-8, user.home=/Users/thom, path.separator=:, os.name=Mac OS X, user.dir=/Users/thom, line.separator=
, sun.jnu.encoding=UTF-8, file.separator=/, java.io.tmpdir=/var/folders/0x/rms5rjn526x33rm394xwmr8c0000gn/T/, user.name=thom}
As you may notice, it's fairly short and is missing all the usual java.* entries such as java.class.path. I'll omit the lengthy listing you get under a regular Java runtime and instead link to another SO question that lists the usual Java system properties:
What is the full list of standard keys recognized by the Java System.getProperty() method?
So one way to do it would seem to be to check whether one or more of the java.* properties are absent.
AFAIK there are no plans to set these in SubstrateVM. But System properties are mutable so one could possibly choose to fake them.
But anyway here's a way to do it:
def isGraalAOT = System.properties.getProperty("java.class.path") == null
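The snippet above appears to be Groovy; a plain-Java equivalent of the same heuristic would be something like:
public class AotCheck {
    // Heuristic: java.class.path is absent from the SubstrateVM property list shown above.
    static final boolean IS_GRAAL_AOT = System.getProperty("java.class.path") == null;
}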
My Delphi application runs scripts using JvInterpreter (from the JEDI project).
A feature I use is runtime evaluation of expressions.
Script Example:
[...]
ShowMessage(X_SomeName);
[...]
JvInterpreter doesn't know X_SomeName.
When X_SomeName's value is required the scripter calls its OnGetValue-callback.
This points to a function I handle. There I lookup X_SomeName's value and return it.
Then JvInterpreter calls ShowMessage with the value I provided.
Now I consider switching to DelphiWebScript since it has a proper debug-interface and should also be faster than JvInterpreter.
Problem: I didn't find any obvious way to implement what JvInterpreter does with its OnGetValue/OnSetValue functions, though.
X_SomeName should be considered (and actually is, most of the time) a variable which is handled by the host application.
Any ideas?
Thanks!
You can do that through the language extension mechanism, which has a FindUnknownName method that lets you register symbols on the spot.
It is used in the asm lib module demo, and you can also check the new "AutoExternalValues" test case in ULanguageExtensionTests, which should be closer to what you're after.
I want to write my application configuration as an F# file which will be compiled to a DLL (like XMonad's configuration with xmonad.hs in Haskell). I find it more interesting and simply a better way than using XML serialization.
So, is there any way I can compile a single file (or several) into a library containing configuration like this:
module RNExcel.Repository

open RNExcel.Model

type ExpenseReportRepository() =
    member x.GetAll() =
        seq {
            yield { Name = "User1"
                    Role = "someRole"
                    Password = "123321"
                    ExpenseLineItems =
                        [ { ExpenseType = "Item1"; ExpenseAmount = "50" }
                          { ExpenseType = "Item2"; ExpenseAmount = "50" } ] }
            yield { Name = "User2"
                    Role = "Estimator"
                    Password = "123123"
                    ExpenseLineItems =
                        [ { ExpenseType = "Item1"; ExpenseAmount = "50" }
                          { ExpenseType = "Item2"; ExpenseAmount = "125" } ] }
        }
My idea was to run a shell ... and msbuild the project, but I don't think that will work for every user with .NET 4.
Check out the F# Power Pack, specifically the FSharp.CodeDom library. You can use this library to compile F# code at run-time and, with a little Reflection thrown in, can likely achieve your goal of code-as-configuration with minimal fuss.
I think that using the CodeDOM provider from the PowerPack, as Ben suggested, is the way to go. I'd just like to add a few things (they didn't fit into the comment box).
To parse and compile the F# code with the configuration, you just need to compile the source file that the users write, using the F# PowerPack. The compilation part of the PowerPack is complete and works just fine. It invokes the F# compiler under the cover and gives you the compiled assembly back. The only problem is that the users of your application will need to have the F# compiler installed (and not just the redistributable).
The incomplete part of the F# CodeDOM provider is generating F# code from CodeDOM trees (because the trees were not designed to support F#), but that's not needed in this case.
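To make that concrete, here is a rough F# sketch of the compile-and-load step. It assumes the PowerPack's FSharpCodeProvider; the source file name, the referenced assembly name, and the reflection calls are illustrative guesses, with the repository type taken from the question.
open System
open System.CodeDom.Compiler
open Microsoft.FSharp.Compiler.CodeDom   // FSharp.Compiler.CodeDom.dll from the F# PowerPack

let compileConfig (sourceFile: string) =
    use provider = new FSharpCodeProvider()
    let options = CompilerParameters(GenerateInMemory = true)
    // Assumption: the record types used by the configuration live in this assembly.
    options.ReferencedAssemblies.Add("RNExcel.Model.dll") |> ignore
    let results = provider.CompileAssemblyFromFile(options, [| sourceFile |])
    if results.Errors.HasErrors then
        failwithf "Configuration did not compile: %A" results.Errors
    results.CompiledAssembly

// Usage sketch: locate the repository type and call GetAll via reflection.
let assembly = compileConfig "RNExcelConfig.fs"
let repoType = assembly.GetTypes() |> Array.find (fun t -> t.Name = "ExpenseReportRepository")
let repo = Activator.CreateInstance(repoType)
let expenses = repoType.GetMethod("GetAll").Invoke(repo, [||])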