What is the difference between cc_toolchain_suite and register_toolchains?

In the context of C++ toolchains, I am trying to understand the difference between the concepts of cc_toolchain_suite and register_toolchains; to me they seem to achieve the same purpose: selecting a toolchain based on command-line parameters.
See https://docs.bazel.build/versions/master/toolchains.html for register_toolchains
See https://docs.bazel.build/versions/master/cc-toolchain-config-reference.html for cc_toolchain_suite
Can someone please help me understand the subtlety behind these two concepts?

TL;DR: cc_toolchain_suite is part of the legacy toolchain-configuration system. It still exists in Bazel because the migration to the new API is not complete. register_toolchains is part of the newer, unified toolchain API. When possible, use register_toolchains instead of cc_toolchain_suite/--*crosstool_top.
Originally the concept of a 'toolchain' was not standardised in Bazel, so a Java toolchain would be implemented very differently from a cc toolchain.
Before the unified Starlark toolchain API, cc toolchains were specified in proto-text formatted 'CROSSTOOL' files.
With the introduction of the platforms API and the unified toolchain API, the concepts in the CROSSTOOL files were converted almost 1:1 to the new unified platforms/toolchains Starlark API. This was mostly to ensure compatibility between the old and new APIs.
One of the concepts in the older 'CROSSTOOL' configuration system was a 'toolchain suite', which allowed you to define a group of toolchains targeting different CPUs (this was before the platforms API was introduced).
As far as I understand, the only reason cc_toolchain_suite is still part of Bazel's Starlark API is that some of the Apple/Android toolchains have not yet been completely migrated across.
In several of my own projects I've opted to use the newer register_toolchains approach; note that those toolchains no longer use cc_toolchain_suite.
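For reference, a minimal sketch of the newer approach (the package and target names here are hypothetical; toolchain(), register_toolchains() and @bazel_tools//tools/cpp:toolchain_type are the real Bazel API):

    # BUILD file of a hypothetical //toolchains package
    toolchain(
        name = "linux_aarch64_cc",
        exec_compatible_with = [
            "@platforms//os:linux",
            "@platforms//cpu:x86_64",
        ],
        target_compatible_with = [
            "@platforms//os:linux",
            "@platforms//cpu:aarch64",
        ],
        # The actual cc_toolchain target with the compiler definitions:
        toolchain = ":linux_aarch64_cc_impl",
        toolchain_type = "@bazel_tools//tools/cpp:toolchain_type",
    )

    # WORKSPACE
    register_toolchains("//toolchains:linux_aarch64_cc")

Bazel then selects the toolchain by matching execution/target platform constraints (e.g. building with --platforms=//my/platforms:linux_aarch64 plus --incompatible_enable_cc_toolchain_resolution for C++) instead of the legacy --crosstool_top flag.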


Why can't a Yocto SDK build a Yocto SDK?

There is a set of related questions here, because I suspect I am asking the wrong question. The related questions may help someone discern what my fundamental misunderstanding is.
I have worked through:
https://www.yoctoproject.org/docs/2.6/ref-manual/ref-manual.html
https://www.yoctoproject.org/docs/2.6/dev-manual/dev-manual.html
https://www.yoctoproject.org/docs/2.6/sdk-manual/sdk-manual.html
I'm looking for a single build environment from which I can use bitbake and build a product for different target architectures.
This, after all, seems to be the Yocto/OE holy grail.
It seems like the most functional x86_64 environment is had from:
git clone git://git.yoctoproject.org/poky
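(For context, the usual workflow with such a clone is roughly the following; core-image-minimal is just an example image target:)

    cd poky
    source oe-init-build-env build    # creates conf/ and puts bitbake on PATH
    bitbake core-image-minimal        # cross-builds a bootable image for MACHINE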
It is more capable than the SDKs, but how do I cross-build this environment for another platform?
Is there an SDK that is as functional as this git clone'd environment? Meaning it has a working bitbake and I can cross-build bootable images for different targets?
Questions:
Why can't an SDK build an SDK? (e.g. http://downloads.yoctoproject.org/releases/yocto/yocto-2.6/buildtools/)
Why doesn't an SDK even include bitbake? (The extensible SDK does, but doesn't like to add it to the path.)
Why does an extensible SDK with a properly sourced env (and bitbake added to the path) seem to prefer the distro-installed build tools instead of the ones in the SDK? (when using bitbake directly instead of devtool)
Why is an SDK apparently tied to building for a particular machine or architecture, and apparently unable to cross-build for different architectures? The process for building an SDK even wants the final architecture specified in advance.
What I'm used to is a build-sysroot with the cross-toolchain running under some sort of pseudo/proot/chroot with my sources mounted into it.
I realise that Yocto/bitbake does this under the hood, all the recipe caching seems great, the git clone checkout seems powerful, the devtool workflow seems great, but then it all falls down when I try to standardise generation of this environment, or make it cross-compile.
(I’m expecting to source the environment file from a target directory containing some local conf files to specialise the build, and then use bitbake to make the build)
What have I missed? - thanks for reading this far ;-)
SDK is such a generic word that, in the context of Yocto, it can be misinterpreted, so your question is legitimate.
Yocto is a wonderful tool for building completely custom images; it can be adjusted at every level (bootloader, kernel, applications) based on sources fetched online.
The SDK you can generate with yocto is as quoted from the documentation:
The Standard SDK provides a cross-development toolchain and libraries
tailored to the contents of a specific image.
Based on my small experience with Yocto, you use meta layers to create and customize your environment. When your environment is set up, you can generate an SDK to easily cross-compile your application programs for your target machine.
The Yocto tool is far too powerful, heavy, and complicated for developers who just focus on the application part of a project. The SDK, on the other hand, is perfect for that use, but you can't change anything in the toolchain with it; you can only use it. If a bug fix or a patch needs to be applied to the runtime libraries, for example, you need to regenerate the SDK and give the new version to the developers.
With those short explanations:
It is more capable than the SDKs, but how do I cross-build this
environment for another platform?
You need to customize your Yocto meta layers to switch from one platform to another.
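In practice, switching target platform is often just a MACHINE change in conf/local.conf, assuming the corresponding BSP layer is already listed in conf/bblayers.conf (the machine names below are only examples):

    # conf/local.conf
    # MACHINE ?= "qemux86-64"
    MACHINE ?= "raspberrypi4-64"    # needs e.g. meta-raspberrypi in bblayers.conf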
Is there an SDK that is as functional as this git clone'd environment?
Meaning it has a working bitbake and I can cross-build bootable images
for different targets?
No, I don't think so.
Why can't an SDK build an SDK?
Because that's not the philosophy of the SDK; the SDK is a toolchain generated for a specific image to cross-compile your programs, no more.
Why doesn't an SDK even include bitbake?
BitBake is a tool that parses Yocto recipes (and thus meta layers), so there is no need for it in the SDK.
Why is an SDK apparently tied to building for a particular machine or
architecture, and apparently unable to cross-build for different
architectures? The process for building an SDK even wants the final
architecture specified in advance.
I think I already answered this above, but regarding the second part of your question: it is possible to be a little agile and work on both the BSP and the applications in parallel. Every week you release a new SDK with the new BSP changes, and the toolchain is always up to date for the developers (this is a very idealistic vision, I admit).
Reading from https://www.yoctoproject.org/docs/2.6.1/ref-manual/ref-manual.html#cross-development-toolchain
it seems that the SDK and eSDK are examples of a relocatable toolchain:
A relocatable toolchain used outside of BitBake by developers when
developing applications that will run on a targeted device.
This sentence particularly gives the game away:
You can also find more information on using the relocatable toolchain
in the Yocto Project Application Development and the Extensible
Software Development Kit (eSDK) manual.
So I guess the git-clone-poky checkout, which builds the SDK and eSDK, is:
A toolchain only used by and within BitBake when building an image for a target architecture
No doubt I am interested in:
toolchain concepts as they apply to the Yocto Project
and should:
see the "Cross-Development Toolchain Generation" section in the Yocto
Project Overview and Concepts Manual https://www.yoctoproject.org/docs/2.6.1/overview-manual/overview-manual.html#cross-development-toolchain-generation
Certainly the first image makes it clear that the SDK is for building apps, not the image. I want to build the image (which of course may contain apps).
And so I may wish to make an SDK for other app builders, and incorporate their app into my sources and do the final build for them.
It may also be that the toolchain used for building an image can be run within the SDK, so as to use the SDK's toolchain rather than the host Linux distro's toolchain. (No, you can't, as it turns out.)

What are the differences between the various OpenCL SDKs

I'd like to learn how to use the OpenCL API, however I am a bit confused about how to "install" OpenCL for development. The various articles on Google are conflicting and I suspect some are obsolete.
My understanding is that Khronos group provides the specification and then various companies provide an SDK that complies with that specification.
As I understand it you need:
The OpenCL headers, which can be downloaded from the Khronos site
The OpenCL library, which comes with the various SDKs
Is there a difference between the different SDKs? From what I can tell the options are Intel, AMD, or Nvidia. I've read conflicting information about whether it matters which SDK you use: some sources say the SDK is just for the developer and the binaries produced will work on any hardware that supports OpenCL, while other sources say that using a particular SDK locks your application into one vendor's hardware. Which is it? Does it matter which SDK I choose, and is there a non-vendor-specific OpenCL library that I can link to?
OpenCL SDKs are different. They provide tools to ease development, additional functions, samples, and documentation.
Every manufacturer will include whatever best suits their hardware, but they should all be compatible once the app is compiled.
The ".lib"/".a" OpenCL library that gets linked into the app (the one that comes with the SDK) is the same in all cases (except where they target different versions: 1.1, 1.2, 2.0).
This is because the library is not really a library, but only a stub for the real functions in the real driver. This also explains why the headers are all the same; they all link against the same library.
All apps, no matter which SDK, should be the same after compiling.
In the case of nVIDIA, in addition to their OpenCL.lib, they provide some functions to ease platform acquisition (oclGetPlatformID() in oclUtils.h) and so on, which are not available in other drivers, and it is recommended NOT to use them unless you want to end up having to ship another nVIDIA proprietary library with your app.
But if you really want to be generic, then you should go for dynamic library loading (dlopen(), LoadLibrary()), which will make your app work even on systems that do not have OpenCL installed at all.
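A minimal sketch of that approach, assuming a Linux system (on Windows you would use LoadLibrary()/GetProcAddress() instead); the clGetPlatformIDs prototype is written out by hand so that neither the OpenCL headers nor the stub library are needed at build time:

    /* build: cc probe.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    /* Hand-declared signature of clGetPlatformIDs (cl_int and cl_uint
       are 32-bit ints); we intentionally avoid the OpenCL headers. */
    typedef int (*clGetPlatformIDs_fn)(unsigned int, void *, unsigned int *);

    int main(void) {
        void *lib = dlopen("libOpenCL.so.1", RTLD_NOW);
        if (!lib) { printf("No OpenCL runtime installed\n"); return 0; }
        clGetPlatformIDs_fn pfn =
            (clGetPlatformIDs_fn)dlsym(lib, "clGetPlatformIDs");
        unsigned int n = 0;
        if (pfn && pfn(0, NULL, &n) == 0)   /* 0 == CL_SUCCESS */
            printf("%u OpenCL platform(s) available\n", n);
        dlclose(lib);
        return 0;
    }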
You are correct, all SDKs use (or should use) the Khronos headers. You could also just use the Khronos headers yourself.
On Windows, you'd link to OpenCL.lib, which is just a static wrapper around OpenCL.dll (installed by your graphics driver in \Windows\System32). Everyone's wrapper should be similar for the same version of OpenCL. It is also supplied by Khronos (the ICD is open source) but it is easier to use one from an SDK.
OpenCL.dll uses the ICD mechanism to load the runtime(s) installed by each driver. If you have multiple runtimes installed, they each show up as a cl_platform.
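To make the platform model concrete, here is a minimal sketch (link against the stub: OpenCL.lib on Windows, -lOpenCL elsewhere) that lists each installed runtime as its own cl_platform:

    #include <CL/cl.h>
    #include <stdio.h>

    int main(void) {
        cl_platform_id platforms[8];
        cl_uint count = 0;
        clGetPlatformIDs(8, platforms, &count);   /* one entry per installed ICD */
        for (cl_uint i = 0; i < count; ++i) {
            char name[256];
            clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                              sizeof name, name, NULL);
            printf("Platform %u: %s\n", i, name);
        }
        return 0;
    }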
If you are targeting OpenCL 1.1 (due to NVIDIA) I suggest using the version 1.1 header to ensure you don't accidentally use newer API.
While OpenCL aims to abstract code from hardware, there are several different types of GPU architectures, and these differences force you to write specific code for specific hardware. Hence it is not easy to write portable code. IMHO, you are better off selecting one piece of hardware and using the developer-friendly SDK for that platform.
What is the use case you are trying to solve?
Kernels can be compiled at runtime (at least in Java), so an OpenCL C runtime compiler (in the driver) is needed on the target machine, but the compiled kernels are mostly hardware-dependent.
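To illustrate that runtime compilation (shown here in C rather than Java; error handling is omitted and the kernel source is just an example):

    #include <CL/cl.h>
    #include <stdio.h>

    /* The driver's built-in compiler turns OpenCL C source into a
       device-specific binary at run time, on the user's machine. */
    static const char *src =
        "__kernel void scale(__global float *v) "
        "{ v[get_global_id(0)] *= 2.0f; }";

    int main(void) {
        cl_platform_id plat; cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        if (clBuildProgram(prog, 1, &dev, "", NULL, NULL) != CL_SUCCESS)
            printf("kernel build failed\n");   /* query the build log here */
        clReleaseProgram(prog);
        clReleaseContext(ctx);
        return 0;
    }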

AndroidAnnotations minimum API supported

I would like to know the minimum Android API level AndroidAnnotations supports. I could not find any info on their website.
regards,
Felix T
I think there is NO specific minimum Android API Level that AndroidAnnotations can work with.
Since it's a compile-time tool, it deals with the Java files rather than with the Android API.
If you build an Android project with AndroidAnnotations, some intermediate Java files are generated, and those are what the Java compiler ultimately sees; that's all. It's just for the convenience of your development. The generated APK file will contain no information about AndroidAnnotations; it's something like a conversion tool (a short expression expanded into a long, complicated one that stays invisible to you).
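Roughly, the idea looks like this (a hedged sketch: the annotated class is ordinary AndroidAnnotations usage, while R.layout.main/R.id.ok are this project's hypothetical resources and the generated class is paraphrased, not literal output):

    import android.app.Activity;
    import android.widget.TextView;
    import org.androidannotations.annotations.Click;
    import org.androidannotations.annotations.EActivity;
    import org.androidannotations.annotations.ViewById;

    // You write this...
    @EActivity(R.layout.main)
    public class MyActivity extends Activity {
        @ViewById
        TextView title;

        @Click(R.id.ok)
        void okClicked() {
            title.setText("clicked");
        }
    }

    // ...and the processor generates a plain-Java subclass MyActivity_,
    // which calls setContentView(R.layout.main), assigns 'title' via
    // findViewById(), and wires up the click listener in ordinary code.
    // The manifest then references MyActivity_ instead of MyActivity.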
I think you can use AndroidAnnotations from Android API Level 1.
Maybe some of the annotations cannot be used in an Android API 1 project, but even if such a case happens, removing just that specific annotation from your .java file will make it work.

MonoTouch Migration Analyser

Are there any migration analysers available for MonoTouch?
I have seen one for Mono, but not for MonoTouch.
Short answer: No, there is none at the moment.
Long answer
The situation is a bit different from Mono. In general you test a complete, compiled .NET application (built against a specific version of the framework) with MoMA, to get a report of what pieces are missing (or incomplete) in Mono that could affect the execution of your application on other platforms (e.g. OSX and Linux).
Testing a complete application this way for MonoTouch would report tons of issues, since the UI toolkit is totally different. E.g. anything involving System.Windows.Forms, WPF... would always be missing.
However, if your assemblies are already split into (something like) an MVC design, it would be possible to test some of them (the non-UI parts) against definitions based on the MonoTouch base class library.
Finally, if someone has an immediate need (or is looking for a nice project), MoMA is available as open source, and the evaluation versions of MonoTouch contain all the assemblies needed to build the definition files. A bit of extra filtering could make this into a very nice tool.
Alternative
You can see a list of the assemblies that are part of MonoTouch, and some platform restrictions (compared to .NET) you should be aware of.

Enable generics in BlackBerry JDE 4.5.0

When I compile my application for BlackBerry, it shows the following error:
generics are not supported in -source 1.3
(use -source 5 or higher to enable generics)
How do I solve this?
Java 1.3 is barbaric and no one should ever have to suffer its indignities. Fortunately, there is a solution!
Generics, enums, changing the return signature in overrides, and pretty much everything else that makes Java usable was introduced in Java 1.5 (see http://en.wikipedia.org/wiki/Java_version_history). Fortunately, most of Java 1.5 was designed to be backwards compatible and not require JVM/bytecode changes. (Or maybe this was unfortunate, as it led Java's implementation of generics to be much weaker than C#'s; just try creating a generic class with static methods/fields that use the generic parameter.)
This IBM article does a good job of explaining the background:
http://www.ibm.com/developerworks/java/library/j-jtp02277.html
But this JVM similarity allowed for creation of tools such as:
http://retrotranslator.sourceforge.net/
This is the section from my Ant buildfile that calls retrotranslator:
    <java jar="${transformer.jar.exe}"
        fork="true"
        classpath="${epic-framework.dir}/tools/retrotranslator-runtime13-1.2.9.jar:${epic-framework.dir}/tools/retrotranslator-runtime-1.2.9.jar"
        args="-srcjar ${build.dir}/classes5.jar -target 1.3 -destjar ${build.dir}/classes5to3.jar"
    />
Run the converted jar through preverify.exe and then give it to rapc.exe and you will have a working Blackberry app written with Java 1.5.
Edit: I missed a key detail in my original post. In addition to being Java 1.3, the BlackBerry class hierarchy is missing many classes that would normally be part of a Java SE 1.3 JDK. The one you will hit first is StringBuilder: javac transforms ("string" + "otherstr" + "blah blah") into StringBuilder.append("string").append("otherstr").append("blah blah"). That class doesn't exist on BB, so you break. However, BB has StringBuffer, so writing an adapter between the two is pretty easy. The one catch is that BB disallows apps from adding classes into java.*. This can be fixed very effectively in the build process:
1) Build your app against Java 1.5 with java.lang.StringBuilder on the classpath.
2) String-transform java.lang.StringBuilder (and everything else in your compat shim) to live in com.mycorp.java.lang.StringBuilder and build it into a JAR file.
3) Use that JAR file with retrotranslator, and retrotranslator will update all bytecode references to java.lang.StringBuilder so that they now point to com.mycorp.java.lang.StringBuilder.
Now you have Java-1.3-compatible bytecode that can run on a BlackBerry.
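A minimal sketch of such an adapter (the com.mycorp package name is the hypothetical one from above; only the append overloads that javac typically emits for string concatenation are shown):

    package com.mycorp.java.lang;

    /* Stand-in for java.lang.StringBuilder on BlackBerry's 1.3-era
       class library; delegates to StringBuffer, which does exist there. */
    public final class StringBuilder {
        private final StringBuffer buf = new StringBuffer();

        public StringBuilder append(String s)  { buf.append(s); return this; }
        public StringBuilder append(char c)    { buf.append(c); return this; }
        public StringBuilder append(int i)     { buf.append(i); return this; }
        public StringBuilder append(long l)    { buf.append(l); return this; }
        public StringBuilder append(Object o)  { buf.append(o); return this; }

        public String toString() { return buf.toString(); }
    }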
If anyone is interested in this stuff, contact me. I could look into open sourcing the compat library I have.
This is a limitation of J2ME, which uses a subset of J2SE (no collections, reflection, etc.) and a Java language level of 1.3. Any code written for J2SE will most likely need to be ported manually.
It seems JDK 5 is not yet supported.
The same question was asked on the BlackBerry forum, but about enum support:
Sadly, the BlackBerry api is very behind in terms of Java versioning. There's no Generics, no Maps, no Enums - it's based around JDK 1.3.
I believe there is no way of enabling this feature within your BlackBerry app. If you find one, I'd be very interested to hear about it.
