I'm trying to solve a Max-SMT problem using Z3's Java API. Below is my attempt:
Optimize opt = ctx.mkOptimize();
opt.Add(hardConstraints);
for (BoolExpr c : C) {
    opt.AssertSoft(c, 1, "group");
}
However, there is a runtime error in the first line, where opt is created.
Caused by: java.lang.UnsatisfiedLinkError: com.microsoft.z3.Native.INTERNALmkOptimize(J)J
    at com.microsoft.z3.Native.INTERNALmkOptimize(Native Method)
    at com.microsoft.z3.Native.mkOptimize(Native.java:5237)
    at com.microsoft.z3.Optimize.<init>(Optimize.java:265)
    at com.microsoft.z3.Context.mkOptimize(Context.java:3036)
I'm using the latest version of Z3 from GitHub, downloaded on Sept 30th.
On OS X, make sure that System Integrity Protection (SIP) doesn't interfere with your work. SIP strips the DYLD_LIBRARY_PATH environment variable from the environment before it starts the JVM, with the effect that the *.dylib files can't be found.
For Z3-specific information see "Z3 Java API fails to detect libz3.dylib"; for general information about SIP see "About System Integrity Protection on your Mac". I haven't found a good way to tell OS X that Z3 is "safe" without disabling SIP altogether.
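One workaround that doesn't require DYLD_LIBRARY_PATH at all is to hand the library directory to the JVM via java.library.path, which SIP leaves alone. A sketch, assuming the Z3 release was unpacked to ~/z3 and your main class is MyMaxSmtApp (both names are placeholders — adjust to your setup):

```shell
# SIP strips DYLD_LIBRARY_PATH before the JVM starts, but it leaves
# JVM system properties alone, so pass the directory that contains
# libz3.dylib and libz3java.dylib directly.
java -Djava.library.path="$HOME/z3/bin" \
     -cp "$HOME/z3/bin/com.microsoft.z3.jar:." \
     MyMaxSmtApp
```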
Related
When I use DataStage to connect to an Informix database, I get this error:
main_program: PATH search failure:
main_program: Error loading "orchinformix": Could not load "orchinformix": libifasf.so: wrong ELF class: ELFCLASS32.
main_program: Could not locate operator definition, wrapper, or Unix command for "infxread"; please check that all needed libraries are preloaded, and check the PATH for the wrappers
What might cause this problem? Any help is appreciated. Thanks a lot!
The key part of the error messages is:
libifasf.so: wrong ELF class: ELFCLASS32.
You're running a 64-bit system (or at least 64-bit executables), but you have a 32-bit version of the Informix ClientSDK or Informix Connect libraries installed; your orchinformix code is trying to load the 32-bit libifasf.so library and failing.
To fix, you need to find out which libifasf.so your code is trying to use, and you need to find out whether there's a 64-bit version installed somewhere on the machine. If there's no 64-bit version, then you'll need to install it, of course.
You then need to adjust things so that the correct library is loaded rather than the incorrect one. It isn't clear what that'll take. Look carefully at the configuration and installation instructions.
Normally, libifasf.so and other Informix libraries are installed in either $INFORMIXDIR/lib or a sub-directory of that (e.g. $INFORMIXDIR/lib/esql or $INFORMIXDIR/lib/client). You may need to set the INFORMIXDIR environment variable to where the 64-bit software is installed, or you may have to play with other environment variables (LD_LIBRARY_PATH, DYLD_LIBRARY_PATH, SHLIB_PATH, etc), or you may have to tweak a configuration file (/etc/ld.so.conf or similar).
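Once a 64-bit ClientSDK is installed, pointing the environment at it usually looks something like this (the directory /opt/informix64 is an example — use wherever the 64-bit software actually lives on your machine):

```shell
# Point the environment at the 64-bit Informix installation
# (example path -- adjust to your install).
export INFORMIXDIR=/opt/informix64
# Make sure its library directories are searched before any 32-bit copies.
export LD_LIBRARY_PATH="$INFORMIXDIR/lib:$INFORMIXDIR/lib/esql:$INFORMIXDIR/lib/client${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```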
If this isn't sufficient help, please identify the platform (OS and version) you're using, where the Informix database server is running (the same machine or a different one), and the versions of the Informix database and connectivity products in use. In this context, note that 12.10.FC4 and 12.10.UC4 are subtly different: the F indicates 64-bit Unix, the U 32-bit Unix (and W would indicate 32-bit Windows). Please include all the version-number information for the products.
I am using the following code to list the clients connected to my ESP8266 access point.
wifi.setmode(wifi.SOFTAP)  -- set the mode before configuring the AP

cfg = {}
cfg.ssid = "ESP8266_"
cfg.pwd = "12345678"
wifi.ap.config(cfg)

cfg = {}
cfg.ip = "192.168.1.1"
cfg.netmask = "255.255.255.0"
cfg.gateway = "192.168.1.1"
wifi.ap.setip(cfg)

local clients = wifi.ap.getclient()  -- don't shadow the global 'table' library
for mac, ip in pairs(clients) do
    print(mac, ip)
end
But it returns an error:
attempt to call field 'getclient' (a nil value)
Based on your latest comment the solution is simple: you need an up-to-date firmware.
All the pre-built binaries you can download from GitHub are hopelessly outdated and no longer maintained or supported. Do NOT use them.
The current master branch is based on Espressif SDK 1.4 and the dev branch uses 1.5.1. However, the NodeMCU team no longer provides recent pre-built binaries; you need to build the firmware yourself. Fortunately, that is simple and well documented: http://nodemcu.readthedocs.org/en/dev/en/build/.
The easiest option is to use my NodeMCU custom build service in the cloud.
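If you'd rather build locally, the build documentation linked above describes a Docker-based toolchain; the workflow is roughly along these lines (image name and invocation as documented at the time of writing, so double-check the docs for current details):

```shell
# Fetch the firmware sources (dev branch, per the docs above).
git clone --branch dev https://github.com/nodemcu/nodemcu-firmware.git
cd nodemcu-firmware
# Build inside the maintained toolchain image; the flashable .bin
# files end up in the bin/ directory of the checkout.
docker run --rm -ti -v "$PWD":/opt/nodemcu-firmware marcelstoer/nodemcu-build
```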
I am basically just following the word count example to pull data from Datastore in Dataflow, like so:
DatastoreV1.Query.Builder q = DatastoreV1.Query.newBuilder();
q.addKindBuilder().setName([entity kind]);
DatastoreV1.Query query = q.build();

DatastoreIO.Source source = DatastoreIO.source()
    .withDataset([...])
    .withQuery(query)
    .withNamespace([...]);

PCollection<DatastoreV1.Entity> collection = pipeline.apply(Read.from(source));
But it keeps failing on:
java.lang.RuntimeException: Unable to find DEFAULT_INSTANCE in com.google.api.services.datastore.DatastoreV1$Query
    at com.google.protobuf.GeneratedMessageLite$SerializedForm.readResolve(GeneratedMessageLite.java:1065)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at ...
I couldn't find any relevant solution on the internet so far.
Could somebody suggest a general direction on what might be going wrong?
Protocol Buffers have certain restrictions. Among other things, you have to link in the protobuf Java runtime that matches the version of the protoc compiler the code was generated with, and you can (normally) have only one runtime present. This applies to all uses of Protocol Buffers; it isn't Dataflow-specific.
Dataflow SDK for Java, version 1.4.0 and older, depends on protobuf version 2.5 and links in a Datastore client library generated with the corresponding protoc compiler. The easiest solution is not to override any protobuf-java and google-api-services-datastore-protobuf dependencies and let them be brought into your project by the Dataflow SDK.
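To see which of your dependencies drags in a mismatched protobuf runtime, the Maven dependency tree is usually enough (use the equivalent Gradle `dependencies` task if that's your build tool):

```shell
# List every dependency path that pulls in protobuf-java; anything
# other than the 2.5.x that Dataflow SDK 1.4.0 expects is a suspect.
mvn dependency:tree -Dincludes=com.google.protobuf:protobuf-java
```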
If you really have to upgrade to protobuf version 3 for an unrelated reason, you should also upgrade google-api-services-datastore-protobuf to version v1beta2-rev1-4.0.0, because that one was generated with the corresponding protoc compiler. Please note that this is a workaround for Datastore only -- I would expect other dependencies that require protobuf version 2 to break, unless they are upgraded too.
Now, we are actively working on upgrading the Dataflow SDK to protobuf version 3. I'd expect this functionality in the next minor release, possibly 1.5.0. Since any version of the Dataflow SDK can support only one protobuf version at a time, support for version 2 will break at that point unless a few dependencies are manually rolled back.
I have read nearly all of the material on Microsoft's MSDN site, used Google (for the limited information that is out there), and also looked at the answers on here, but I'm still confused about how to develop an NDIS driver.
My aim is to create an NDIS driver so I can capture network packets and decide whether to drop them (and possibly inject packets) or allow them to pass.
From my research it would seem that I need to create an intermediate NDIS driver, and after installing the WDK (I'm using Visual Studio 2015 Enterprise) I don't know where to begin (do I need to start with a KMDF project?).
Also, when I did load a KMDF driver project, nearly all of the header files were highlighted by IntelliSense as having errors (expected an identifier; NTSTATUS is undefined).
Can anyone give me some assistance on how to start, please?
I have recently created a packet sniffer using the WinPcap library (and also used it to send packets) but there was a lot of information out there that helped me. Unfortunately, with NDIS it doesn't seem to be the same.
I can't seem to find the samples either.
Okay, so a simple clean install of Visual Studio 2015 and WDK 10 is all that is needed to set up the environment for creating a driver...
But then comes the deployment part.
I'm trying to build Premake5 from source on FreeBSD 10.1. I eventually got it to compile by removing the "-ldl" option and using gmake explicitly for the build. It built, but I can't get it to do anything but spit out the following error message, no matter how I invoke it. It crashes even on 'premake5 --help'.
Here's the message:
PANIC: unprotected error in call to Lua API (attempt to call a string value)
The code is buggy as all get-out. It starts by assuming POSIX means Linux, which is clearly not the case. They use Linuxisms all over the place, so converting to POSIX is going to be quite a task, and until that is done it will never work satisfactorily on non-Linux POSIX-based systems.
The -ldl was obviously the first stumbling block. The next is the function premake_locate_executable in premake.c: it uses the /proc filesystem, which is a Linuxism, and when this fails on BSD it falls back to some Lua methods. But the code seems to assume that lua_tostring pops the corresponding value, which it doesn't. Since the stack isn't balanced in this function, the following lua_call ends up calling the garbage left on the stack rather than the function intended.
Even after I fixed that issue, they use getconf _NPROCESSORS_ONLN to get the number of cores to parallelize the make build, but they don't actually check that the call succeeds (which it doesn't outside of Linux and Mac OS X).
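For comparison, a portable way to guard that core-count lookup is to fall back to the BSD sysctl name and finally to a safe default — a sketch, not what premake actually does:

```shell
# Prefer getconf where _NPROCESSORS_ONLN is supported (Linux, OS X),
# fall back to the BSD sysctl, and default to 1 if neither works.
NPROC=$(getconf _NPROCESSORS_ONLN 2>/dev/null \
        || sysctl -n hw.ncpu 2>/dev/null \
        || echo 1)
echo "$NPROC"
```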
After fixing that issue I then ran into the problem that their makefiles aren't plain make but GNU make, so I had to switch to gmake to try and build.
From that point it just came unravelled, because none of the premake files in the contrib directory are configured for BSD, despite it being one of the legal configuration targets (i.e. it doesn't default to linux), and so there is no configuration for those components.
TL;DR: BSD is not a supported platform.