Is z3_dbg.dll still part of the distribution? - z3

In the last release of Z3, I didn't get z3_dbg.dll. Is it still being released?
Alexandre.

We stopped including z3_dbg.dll in Z3 2.14. The main motivation was to reduce the distribution size: z3_dbg.dll is quite big, and most users do not need it. That being said, we realize this DLL is useful when developing applications on top of the Z3 API and/or writing Z3 theory plugins. We will either add it back to the Z3 distribution package or create a separate debug distribution package.

Related

Reduce DeepLearning4j dependency size of exported jar

In my application, I would like to use Deeplearning4j. Deeplearning4j has over 120 MB of dependencies, which is a lot considering my own code is only 0.5 MB.
Is it possible to reduce the dependencies required? Would loading an already-trained network allow me to ship my application with a smaller file size?
There are many ways to reduce the size of your jar, depending on your use case. We cover this in more detail in our recent docs, but I'll summarize some things to try here:
DL4J is heavily based on JavaCPP. You can add -Djavacpp.platform=$YOUR_PLATFORM (linux-x86_64, windows-x86_64, ...) to your build to reduce the number of native dependencies pulled in; see the example below.
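For instance, with a Maven build this is just a system property on the command line (the platform value is only an illustration; substitute your actual target):

mvn clean package -Djavacpp.platform=linux-x86_64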
If you are using deeplearning4j-core, it pulls in a lot of extra dependencies you may not need; in that case, deeplearning4j-nn may be enough for the configuration. Likewise, if you are only using SameDiff, you do not need the DL4J APIs. I don't know enough about your use case to confirm what you do and don't need, though.
If you are deploying on an embedded platform, we now also have the ability to reduce the number of supported operations and data types. This feature is mainly for advanced users right now (it involves building from source), but if you think it could be applicable on top of the first two suggestions, please confirm and I can clarify it a bit.

Is Electron a Reliable Framework for Enterprise Apps?

We can see good applications (such as Slack and Insomnia) moving to Electron, but is it safe and stable enough to build a big solution (such as an ERP) with it? Thanks.
As far as stability goes, Electron is very stable. In my experience I've had no stability issues or unanticipated behavior while developing some complex software on Electron.
However, a bigger concern for some is security. Allow me to explain.
How Electron Packages Applications
Electron packages applications by bundling all of their JavaScript components into an ASAR archive.
ASAR is a simple archive format: like tar, it concatenates all files together without compression, while still supporting random access.
Why This is a Security Concern
What this means is that all of your application's code is simply put into an archive. This archive can be explored and extracted quite trivially using the asar command:
npm install -g asar
asar extract my-app.asar
While this may not be an issue for open-source projects or for applications like Slack that rely on a paid backend service, license-based or paid products could easily be stolen, since there is none of the code security / obscurity that a traditional compiled application might offer. For some this may be acceptable; for others it may not be, especially if business logic lives in the application.
Can This Issue be Mitigated?
One potential solution would be the ability to encrypt the ASAR. This has been raised with the Electron devs, but they have stated that, while they are open to a pull request, they will likely not implement it themselves.
Another possible mitigation is code obfuscation using something such as UglifyJS. However, this is obviously not true protection, just a hiding technique; a minimal example is shown below.
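For illustration only (the entry file name is hypothetical), an obfuscation pass with UglifyJS could be as simple as:

npm install -g uglify-js
uglifyjs main.js --compress --mangle -o main.min.js

The minified and mangled output is still recoverable by a determined reverse engineer, just less convenient to read.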
A third solution, the one used by NW.js, is to compile your JS to a V8 snapshot. However, the Electron devs have indicated that this carries a significant (50%) performance cost, and they will likely not support such a capability.
All of this being said, it is possible to decompile / reverse engineer almost any application in any language. Electron just makes it a little easier to do so by "putting your code out there." However, they have strong reasons for doing so (performance gains), and unless you ship a paid, licensed product it probably doesn't make much difference to you anyway.
Further reading:
https://github.com/electron/electron/issues/3041
https://github.com/electron/electron/issues/2570

Code changes to build z3 on Solaris

I need to get z3 building on Solaris 8. I took a look at the file scoped_timer.cpp, which is the only place that uses -D_LINUX_, and figure I can get the right code in there for Solaris, guarding it with -D_SOLARIS_. Also, src/util/hwf.cpp would need to be changed to provide definitions of fma() and nearbyint(), which aren't defined on Solaris 8. That can be done too, by defining fma(x, y, z) to be x*y + z, but then there would be two roundings instead of the single rounding required by IEEE 754 (a sketch of such fallbacks is below). Would this pose a problem for the purposes of z3? I would also need to change mk_util.py to set up compile and link options for Solaris. This also seems rather feasible, as we are using g++ on Solaris, so the compile options would be similar; the link options would probably require additional libraries. I am willing to do some of the legwork, but I may need help along the way. Would anyone be willing to work with me, and would this be a welcome addition?
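To make the proposal concrete, the fallbacks in src/util/hwf.cpp could look roughly like the sketch below (the _SOLARIS_ guard name and placement are my own assumptions, and note the double-rounding caveat for fma):

#if defined(_SOLARIS_)
// Fallback fma: computes x*y + z with two roundings rather than the
// single rounding IEEE 754 requires of a fused multiply-add.
static double fma(double x, double y, double z) {
    return x * y + z;
}
// Fallback nearbyint: round to the nearest integer in the current
// rounding mode, assuming rint() is provided by the platform's libm.
static double nearbyint(double x) {
    return rint(x);
}
#endif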
would this be a welcome addition?
I guess we can take a pull request when it is ready (and it does not obscure other settings) and there are at least two users for this.
The usual conditions apply for taking pull requests https://github.com/Z3Prover/z3/wiki/Contribution-Guidelines.
Of course you can have your own forks without merging changes back.
The other issue may be that the endianness of your machines and the constraints on memory alignment expose further portability problems.
You should be able to find issues by running the regression tests in the z3test repository (as well as the unit tests). We recently fixed some endianness-related problems for ARM/PowerPC.

Recommended way to distribute Halide generated functions?

I am currently experimenting with Halide, the initial tests show quite promising performance improvements.
I am now wondering what the best strategy is for distributing Halide code. Requiring users to install Halide seems like a heavy barrier at this point in time (since there are no automated install options).
One option would be to use compile_to_c, add the generated C code to the repository, and distribute compilation scripts for that C code. scikit-learn uses a similar strategy for Cython-generated code. For Halide this seems like a no-go, since the generated C code loses all the optimizations, defeating the purpose of Halide.
My current idea is to use compile_to_bitcode and distribute the generated bitcode together with compilation scripts that call llc to generate the desired machine code. The only requirement for the user would be to have llc (i.e. LLVM) installed; a rough sketch of what I have in mind is below.
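On the generator side it would be something like this (the pipeline and all names are purely illustrative):

#include "Halide.h"
using namespace Halide;

int main() {
    // A toy pipeline: brighten an 8-bit image.
    ImageParam input(UInt(8), 2, "input");
    Var x("x"), y("y");
    Func brighten("brighten");
    brighten(x, y) = cast<uint8_t>(min(cast<int>(input(x, y)) + 10, 255));

    // Emit LLVM bitcode for ahead-of-time use; the user would later
    // lower it to an object file themselves.
    brighten.compile_to_bitcode("brighten.bc", {input}, "brighten",
                                get_target_from_environment());
    return 0;
}

and the user-side script would then only need something along the lines of:

llc -O3 -filetype=obj brighten.bc -o brighten.o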
Does anyone have experience on this issue?
What are the pros and cons of my idea of distributing bitcode?
What would you recommend?
Some details on the kind of software distribution would help. The question implies a source code distribution, but there is a big difference between a library where programmers may need to interact with Halide produced code at a fine-grained level, and an application where use of Halide is largely invisible to the end user and the goal is just to get it to build.
Distributing bitcode is doable but problematic. To be portable, you have to use something like the PNaCl backend. (PNaCl is fairly close to a generic LLVM bitcode representation.) If you target a specific architecture, there is no guarantee the bitcode will compile or run on any other one. (Halide can lower to architecture-specific intrinsics, for example.) The LLVM community discourages using bitcode as a distribution format, though if it is in source form (.ll, not .bc) it is likely fairly stable and seems not much worse than shipping assembly files in terms of long-term stability.
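(For reference, the textual .ll form can be produced from a bitcode file with llvm-dis; file names here are illustrative: llvm-dis my_func.bc -o my_func.ll.)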
Halide includes an OS-specific runtime in the generated output, so even with bitcode the result includes a number of target-specific dependencies.
Often one ends up with a design that chooses, at runtime, between a number of Halide outputs based on the actual type of processor being used, e.g. using Halide to compile the same algorithm with two different schedules for SSE2 and AVX2 processors (a sketch of such dispatch is below). In this model there are going to be a lot of object files anyway, and one can simply choose at build time which ones to include for a given architecture and OS. Distributing the objects as .ll files rather than .o files will likely work, but I'm not sure it buys much.
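A minimal sketch of that kind of dispatch, assuming two AOT-compiled variants of a hypothetical blur pipeline and the GCC/Clang __builtin_cpu_supports intrinsic:

#include "HalideRuntime.h"

// Two variants of the same pipeline, one scheduled for SSE2 and one for
// AVX2 (hypothetical names, as declared in the generated headers).
extern "C" int blur_sse2(halide_buffer_t *in, halide_buffer_t *out);
extern "C" int blur_avx2(halide_buffer_t *in, halide_buffer_t *out);

// Pick the better variant once, based on the CPU actually in use.
int blur(halide_buffer_t *in, halide_buffer_t *out) {
    static const auto impl =
        __builtin_cpu_supports("avx2") ? blur_avx2 : blur_sse2;
    return impl(in, out);
}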
I would strive to make the full source code available, requiring Halide if one is doing a compilation from the ground up, and look for ways to provide various levels of binary distribution. Certainly for end user software the emphasis should be on how to get the fully built package into the hands of users. For libraries, Halide may be used to surface a higher level programming model to users of the library, in which case the Halide compiler will need to be present anyway.
We strive to make Halide fairly easy to get onto a system and very stable, but have not absolutely nailed either yet. I'd likely try to provide some level of fallback and using the C backend to generate generic C code might be a decent way to do that without rewriting everything in C directly. (If building from source, one gets a choice between installing Halide or using the prebuilt C code.) This is one of the better use cases for the C backend. (Generating C code from Halide is generally a pretty marginal idea despite it seeming to be a good one at first.)
compile_to_c() is definitely not recommended, as the code it generates isn't very optimized; it's useful mostly as a debugging / development tool.
compile_to_bitcode() sounds like it could work, but I'm not aware of anyone using this as a distribution method.
(It would probably be useful to have an automated install available for Halide.)

Tools to manage semantic webs

I've seen a lot of frameworks for creating a semantic web (or rather the model beneath it). What tools are there to create a small semantic web or repository on the desktop, for example for personal information management?
Please include information on how easy these are to use for a casual user (in contrast to someone who has worked in this area for years). In particular, I'd like to hear which tools can create a repository without a lot of types up front, and let you type the nodes later, as you learn about your problem domain.
For personal semantic information management on the desktop there is NEPOMUK. There are two versions. One is embedded in KDE 4; it lets you tag, rate, and comment on things such as files, folders, pictures, MP3s, etc. on the desktop, across all applications.
The other version is written in Java and is OS-independent; it is more of a research prototype. It has more features, but is overall less stable.
For KDE-Nepomuk see http://nepomuk.kde.org/
For Java-Nepomuk see http://dev.nepomuk.semanticdesktop.org/ and http://dev.nepomuk.semanticdesktop.org/download/ for downloads (the DFKI version is better)
Extensive list of semantic web tools
Also check out Protege
If you need to create a small model, then I suggest you use TopBraid. I have used it to create much larger models, and I know people who have used it to create humongous models. It comes packaged with a set of reasoners and provides the ability to plug in a custom reasoner, and, should you decide to make your model larger, you can even integrate TopBraid with a triple store like AllegroGraph.
And since it's based on Eclipse, getting started with it is relatively easy.
For developers who are spoiled by working in more mature programming languages like Java (IDEA, anyone?), TopBraid is the closest tool to an actual IDE.
Chandler is "a notebook you can organize, back up and share!" It seems to be pretty simple to use.
OS: Windows, Mac, Linux
