Does JaVers work with GraalVM native images?

GraalVM compiles Java programs into native executables. Certain Java features, such as reflection, are not available or limited there. Since JaVers uses reflection: Does JaVers fully work in a GraalVM native executable?

I have never tested JaVers on GraalVM. If GraalVM does not fully support the java.lang.reflect package, which JaVers uses extensively -- well, how is JaVers supposed to work?
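For what it's worth, GraalVM native images do support reflection, but only for classes registered at image build time via reachability metadata. As a hedged sketch (the class name `com.example.Person` is purely illustrative), registering a domain class that JaVers would inspect could look like this `reflect-config.json` fragment:

```json
[
  {
    "name": "com.example.Person",
    "allDeclaredFields": true,
    "allDeclaredMethods": true,
    "allDeclaredConstructors": true
  }
]
```

In practice you would not write this by hand: running the app on a normal JVM with the tracing agent (`-agentlib:native-image-agent=config-output-dir=...`) records the reflection actually performed and emits the metadata for you. Whether JaVers works end to end on a native image would still need to be verified.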

Related

What is the difference between cc_toolchain_suite and register_toolchains?

In the context of C++ toolchain, I am trying to understand the difference of the concept between cc_toolchain_suite and register_toolchains, to me it seems they achieve the same purpose: select a toolchain based on command line parameters.
See https://docs.bazel.build/versions/master/toolchains.html for register_toolchains
See https://docs.bazel.build/versions/master/cc-toolchain-config-reference.html for cc_toolchain_suite
Can someone please help understand the subtlety behind these 2 concepts?
TL;DR: cc_toolchain_suite is part of the legacy toolchain configuration system. It still exists in Bazel because the migration to the new API is not complete. register_toolchains is part of the newer, unified toolchain API. When possible, use register_toolchains instead of cc_toolchain_suite/--crosstool_top.
Originally the concept of a 'toolchain' was not standardised in Bazel, so a Java toolchain would be implemented very differently from a cc toolchain.
Before the unified Starlark toolchain API, cc toolchains were specified in proto-text formatted 'CROSSTOOL' files.
With the introduction of the platforms API and the unified toolchains API, the concepts in the CROSSTOOL files were converted almost 1:1 to the new unified platforms/toolchains Starlark API. This was mostly to ensure compatibility between the old and new APIs.
One of the concepts in the older 'CROSSTOOL' configuration system was a 'toolchain suite', which allowed you to define a group of toolchains targeting different CPUs (this was before the platforms API was introduced).
As far as I understand, the only reason cc_toolchain_suite is still part of Bazel's Starlark API is that some of the Apple/Android toolchains have not yet been completely migrated across.
Here are a few examples where I've opted to use the newer register_toolchains approach. Note that these toolchains no longer use cc_toolchain_suite.
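As a sketch of the newer approach (all target and file names below are illustrative, not real Bazel targets), a C++ toolchain registered through the unified API looks roughly like this:

```starlark
# toolchains/BUILD -- names are hypothetical
toolchain(
    name = "linux_x86_64_cc_toolchain",
    toolchain = ":my_cc_toolchain",  # an underlying cc_toolchain target
    toolchain_type = "@bazel_tools//tools/cpp:toolchain_type",
    exec_compatible_with = ["@platforms//os:linux", "@platforms//cpu:x86_64"],
    target_compatible_with = ["@platforms//os:linux", "@platforms//cpu:x86_64"],
)

# WORKSPACE (or MODULE.bazel)
register_toolchains("//toolchains:linux_x86_64_cc_toolchain")
```

Bazel then selects the toolchain by matching the platform constraints, instead of relying on --crosstool_top/--cpu flags.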

Is the Dart VM still used?

I've been using dart/flutter for a few projects, and I'm really enjoying it.
I've read that when building a mobile app, dart builds a native app with native code. But I've also read that dart has its own VM for performance.
What I'm trying to understand is if that VM is what is used when you build a mobile app, or is it building other code that it compiles for the native app. And if its doing something else, what is the dart VM still used for?
Short answer: yes, the Dart VM is still used when you build your mobile app.
Now the longer answer: the Dart VM has two different operation modes, a JIT one and an AOT one.
In JIT mode the Dart VM is capable of dynamically loading Dart source, parsing it, and compiling it to native machine code on the fly to execute it. This mode is used when you develop your app and provides features such as debugging and hot reload.
In AOT mode the Dart VM does not support dynamic loading/parsing/compilation of Dart source code. It only supports loading and executing precompiled machine code. However, even precompiled machine code still needs the VM to execute, because the VM provides the runtime system, which contains the garbage collector, various native methods needed for dart:* libraries to function, runtime type information, dynamic method lookup, etc. This mode is used in your deployed app.
Where does the precompiled machine code for AOT mode come from? This code is generated by (a special mode of) the VM from your Flutter application when you build your app in release mode.
You can read more about how Dart VM executes Dart code here.
When the Dart VM is used in release mode, it is not really a VM (virtual machine) in the traditional sense of a virtual computer processor implemented in software, which has its own machine language that is different from the hardware's machine language.
This is what causes the confusion in the original question. In release mode, the Dart VM is basically a runtime library (not much different than runtime libraries required by all high level languages).
The Dart VM is perfectly good for server-side applications, particularly using dart:io to access local files, processes, and sockets.

What are the differences between the various OpenCL SDKs

I'd like to learn how to use the OpenCL API, however I am a bit confused about how to "install" OpenCL for development. The various articles on Google are conflicting and I suspect some are obsolete.
My understanding is that Khronos group provides the specification and then various companies provide an SDK that complies with that specification.
As I understand it you need:
The OpenCL headers, which can be downloaded from the Khronos site
The OpenCL library, which comes with the various SDKs
Is there a difference between the different SDKs? From what I can tell the options are Intel, AMD, or Nvidia. I've read conflicting information about whether it matters which SDK you use: some sources say the SDK is just for the developer and the binaries produced will work on any hardware that supports OpenCL, while other sources say that using a particular SDK locks your application into one vendor's hardware. Which is it? Does it matter which SDK I choose, and is there a non-vendor-specific OpenCL library that I can link to?
OpenCL SDKs are different. They provide tools to ease development, additional functions, samples, and documentation.
Every manufacturer includes what suits its hardware best, but they should all be compatible once the app is compiled.
The ".lib"/".a" OpenCL library that gets linked into the app (the one that comes in the SDK) is the same in all cases (except where they target different versions: 1.1, 1.2, 2.0).
This is because the library is not really a library, but only a stub that forwards to the real functions in the driver. This also explains why the headers are all the same: they all link against the same library.
All apps, no matter which SDK, should be the same after compiling.
In the case of NVIDIA, in addition to their OpenCL.lib, they provide some functions to ease platform acquisition (oclGetPlatformID() in oclUtils.h) and so on, which are not available in other drivers, and it is recommended NOT to use them unless you want to end up having to ship another NVIDIA proprietary library with your app.
But if you really want to be generic, then you should go for dynamic library loading (dlopen(), LoadLibrary()), which will make your app work even in situations where the system does not have OpenCL installed at all.
You are correct, all SDKs use (or should use) the Khronos headers. You could also just use the Khronos headers yourself.
On Windows, you'd link to OpenCL.lib, which is just a static wrapper around OpenCL.dll (installed by your graphics driver in \Windows\System32). Everyone's wrapper should be similar for the same version of OpenCL. It is also supplied by Khronos (the ICD is open source) but it is easier to use one from an SDK.
OpenCL.dll uses the ICD mechanism to load the runtime(s) installed by each driver. If you have multiple runtimes installed, they each show up as a cl_platform.
If you are targeting OpenCL 1.1 (due to NVIDIA) I suggest using the version 1.1 header to ensure you don't accidentally use newer API.
While OpenCL aims to abstract code from hardware, there are several different types of GPU architectures, and these differences force writing specific code for specific hardware. Hence it is not easy to write portable code. IMHO, you are better off selecting one piece of hardware and using the developer-friendly SDK for that platform.
What is the use case you are trying to solve?
The binaries can be compiled at runtime (at least in Java). Therefore an OpenCL C runtime compiler is needed, but the compiled kernels are mostly hardware-dependent.

LuaJIT and Rocks?

Just a small question from a "Lua newbie"... I have been using LuaJIT and it is awesome. Now the question: since LuaJIT is Lua 5.1 compatible, does that mean I can use all the "LuaRocks" that standard Lua uses in LuaJIT?
For instance if I wanted to install one of the SQLite libraries (e.g. http://luaforge.net/projects/luasqlite/) - how would I install that in LuaJIT?
Do all the available "LuaRocks" work out the box with LuaJIT?
LuaJIT is designed to be drop-in compatible with the Lua stand-alone. There is no reason why any purely Lua-based Rocks shouldn't work. DLL-based Rocks ought to work as well, since the LuaJIT stand-alone DLL is compatible with the original DLL.
Concretely:
"LuaJIT is fully upwards-compatible with Lua 5.1. It supports all standard Lua library functions and the full set of Lua/C API functions. LuaJIT is also fully ABI-compatible to Lua 5.1 at the linker/dynamic loader level. This means you can compile a C module against the standard Lua headers and load the same shared library from either Lua or LuaJIT."
I think that pretty much says it all.

When will a newer version of flex for windows be available?

I'm using flex (the lexical analyzer, not Adobe Flex) on a project. However, I want to be able to compile on Windows platforms as well, but the newest Windows version is only 2.5.4a, so it won't compile my file, which requires version 2.5.35. And no, I can't downgrade to the highest supported Windows version.
Anyone know about plans to upgrade the windows version, or have a suggestion to compile on windows anyway?
You can ask on the mailing list, or get involved in the Flex project yourself. I think the code-base for Flex has remained static for a while, but I don't know who maintains the Windows port. In the interim...
I would recommend including the produced source in your project.
Generate the lexer on a Linux system to produce your lex.c/lex.h files (or whatever)
Include those files in your Win32 C source before you build
If you don't have direct access to a Linux system, a virtual machine might be a good idea. The flex output should be compliant with some C standard that builds on Windows, and most of the POSIX differences can be altered to use the Win32 API fairly easily.
Maybe distribute as:
/src/source_files.c
/src/lex.l
/src/win32_lex/lex.c
This way systems with a modern flex can generate the source from the lex file, and Windows systems compiling the source can use the complementary pre-processed C files.
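A minimal GNU make sketch of that layout (the paths follow the example above; detecting flex via `command -v` is just one possible approach):

```make
# Use flex when it is installed, otherwise fall back to the
# pre-generated lexer checked in under src/win32_lex/.
FLEX := $(shell command -v flex 2>/dev/null)

src/lex.c: src/lex.l
ifdef FLEX
	$(FLEX) -o $@ $<
else
	cp src/win32_lex/lex.c $@
endif
```

Systems with flex regenerate the lexer from lex.l; systems without it (e.g. Windows) just compile the committed lex.c.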
That is, short of using a user-space POSIX layer (Cygwin or the like).
A little bit of tweaking required, but isn't that portability for you!
Windows builds of flex 2.5.35 do exist, but unfortunately they are not self-contained. You can download the MinGW build here, and the Cygwin build here; see also another Stack Overflow question. Each build requires that its respective (MinGW or Cygwin) runtime be installed.