Why can't a Yocto SDK build a Yocto SDK? - sdk

There is a set of related questions here, because I suspect I am asking the wrong question. The related questions may help someone discern what my fundamental misunderstanding is.
I have worked through:
https://www.yoctoproject.org/docs/2.6/ref-manual/ref-manual.html
https://www.yoctoproject.org/docs/2.6/dev-manual/dev-manual.html
https://www.yoctoproject.org/docs/2.6/sdk-manual/sdk-manual.html
I'm looking for a single build environment from which I can use bitbake and build a product for different target architectures.
This after all seems to be what the Yocto/OE holy grail is.
It seems like the most functional x86_64 environment is obtained from:
git clone git://git.yoctoproject.org/poky
It is more capable than the SDKs, but how do I cross-build this environment for another platform?
Is there an SDK that is as functional as this git-cloned environment? Meaning one with a working bitbake, from which I can cross-build bootable images for different targets?
Questions:
Why can't an SDK build an SDK? (e.g. http://downloads.yoctoproject.org/releases/yocto/yocto-2.6/buildtools/)
Why doesn't an SDK even include bitbake? (The extensible SDK does, but doesn't add it to the path.)
Why does an extensible SDK with a properly sourced environment (and bitbake added to the path) seem to prefer the distro-installed build tools over the ones in the SDK (when using bitbake directly instead of devtool)?
Why is an SDK apparently tied to building for a particular machine or architecture, and apparently unable to cross-build for different architectures? The process for building an SDK even requires the final architecture to be specified in advance.
What I'm used to is a build-sysroot with the cross-toolchain running under some sort of pseudo/proot/chroot with my sources mounted into it.
I realise that Yocto/bitbake does this under the hood, all the recipe caching seems great, the git clone checkout seems powerful, the devtool workflow seems great, but then it all falls down when I try to standardise generation of this environment, or make it cross-compile.
(I’m expecting to source the environment file from a target directory containing some local conf files to specialise the build, and then use bitbake to make the build)
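Concretely, the workflow I am imagining looks roughly like this (the machine and image names are just placeholders):
. poky/oe-init-build-env my-build                  # sets up the build directory and conf/local.conf
echo 'MACHINE = "qemuarm64"' >> conf/local.conf    # specialise the build for one target
bitbake core-image-minimal                         # cross-build a bootable image for that MACHINE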
What have I missed? - thanks for reading this far ;-)

SDK is such a generic word that, in the context of Yocto, it can be misinterpreted, so your question is legitimate.
Yocto is a wonderful tool for building completely custom images; it can be adjusted at every level (bootloader, kernel, applications) based on sources fetched online.
The SDK you can generate with yocto is as quoted from the documentation:
The Standard SDK provides a cross-development toolchain and libraries
tailored to the contents of a specific image.
Based on my limited experience with Yocto, you use meta layers to create and customize your environment. When your environment is set up, you can generate an SDK to easily cross-compile your application programs for your target machine.
The Yocto tool is far too powerful, heavy, and complicated for developers who focus only on the application part of a project. The SDK, on the other hand, is perfect for that use, but you can't change anything in the toolchain with it; you can only use it. If a bug fix or a patch needs to be applied to the runtime libraries, for example, you need to regenerate the SDK and give the new version to the developers.
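For example, regenerating the SDK for distribution is a single BitBake invocation (the image name here is only an example):
bitbake core-image-minimal -c populate_sdk       # standard SDK, produces a self-extracting installer under tmp/deploy/sdk/
bitbake core-image-minimal -c populate_sdk_ext   # extensible SDK (eSDK) variant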
With those short explanations out of the way:
It is more capable than the SDK's, but how do I cross-build this
environment for another platform?
You need to customize your Yocto meta layers to change from one platform to another.
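In practice this usually means pointing MACHINE at another BSP in conf/local.conf (the machine names below are only examples) and making sure the matching layer is listed in bblayers.conf:
MACHINE = "qemux86-64"
# MACHINE = "qemuarm64"
# MACHINE = "raspberrypi4"   # would require the meta-raspberrypi layer in bblayers.conf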
Is there an SDK that is as functional as this git clone'd environment?
Meaning it has a working bitbake and I can cross-build bootable images
for different targets?
No, I don't think so.
Why can't an SDK build an SDK?
Because that's not the philosophy of the SDK. The SDK is a toolchain generated for a specific image so you can cross-compile your programs, no more.
Why doesn't an SDK even include bitbake?
BitBake is a tool to parse Yocto recipes (and thus meta layers), so there is no need for it in the SDK.
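What the SDK does ship is an environment-setup script that points CC, CXX, LD, and friends at the cross toolchain and target sysroot, which is all an application developer needs (the install path and script name below are examples and depend on your target):
. /opt/poky/2.6/environment-setup-cortexa8hf-neon-poky-linux-gnueabi
$CC hello.c -o hello    # cross-compiles against the SDK's target sysroot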
Why is an SDK apparently tied to build for a particular machine or
architecture, and apparently unable to cross-build for different
architectures? The process for building an SDK even wishes the final
architecture to be specified in advance
I think I already answered this, but regarding the second part of your question: it is possible to be a little agile and start both the BSP and the applications in parallel. Every week you release a new SDK with the latest BSP changes, so the toolchain is always up to date for developers (this is a very idealistic vision, I admit).

Reading from https://www.yoctoproject.org/docs/2.6.1/ref-manual/ref-manual.html#cross-development-toolchain
it seems that the SDK and eSDK are examples of a relocatable toolchain:
A relocatable toolchain used outside of BitBake by developers when
developing applications that will run on a targeted device.
This sentence particularly gives the game away:
You can also find more information on using the relocatable toolchain
in the Yocto Project Application Development and the Extensible
Software Development Kit (eSDK) manual.
So I guess the git-cloned Poky checkout, which builds the SDK and eSDK, is:
A toolchain only used by and within BitBake when building an image for a target architecture
No doubt I am interested in:
toolchain concepts as they apply to the Yocto Project
and should:
see the "Cross-Development Toolchain Generation" section in the Yocto
Project Overview and Concepts Manual https://www.yoctoproject.org/docs/2.6.1/overview-manual/overview-manual.html#cross-development-toolchain-generation
Certainly the first figure makes it clear that the SDK is for building apps, not the image. I want to build the image (which of course may contain apps).
And so I may wish to make an SDK for other app builders, and incorporate their app into my sources and do the final build for them.
It may also be that the toolchain used for building an image can be run within the SDK, so as to use the SDK's toolchain rather than the host Linux distro's toolchain. (No, you can't.)

Related

What is the difference between cc_toolchain_suite and register_toolchains?

In the context of C++ toolchains, I am trying to understand the conceptual difference between cc_toolchain_suite and register_toolchains; to me it seems they achieve the same purpose: selecting a toolchain based on command-line parameters.
See https://docs.bazel.build/versions/master/toolchains.html for register_toolchains
See https://docs.bazel.build/versions/master/cc-toolchain-config-reference.html for cc_toolchain_suite
Can someone please help understand the subtlety behind these 2 concepts?
TL;DR: cc_toolchain_suite is part of the legacy toolchain configuration system. It still exists in Bazel because the migration to the new API is not complete. register_toolchains is part of the newer, unified toolchain API. When possible, use register_toolchains instead of cc_toolchain_suite/--crosstool_top.
Originally the concept of a 'toolchain' was not standardised in Bazel, so a Java toolchain would be implemented very differently than a cc toolchain.
Before the unified starlark toolchain API, cc toolchains were specified in proto-text formatted 'CROSSTOOL' files.
With the introduction of the platforms API and the unified toolchains API, the concepts in the CROSSTOOL files were converted almost 1:1 to the new unified platforms/toolchains Starlark API. This was mostly to ensure compatibility between the old and new APIs.
One of the concepts in the older CROSSTOOL configuration system was a 'toolchain suite', which allowed you to define a group of toolchains targeting different CPUs (this was before the platforms API was introduced).
As far as I understand, the only reason cc_toolchain_suite is still part of Bazel's Starlark API is that some of the Apple/Android toolchains have not yet been completely migrated across.
Here are a few examples where I've opted to use the newer register_toolchains approach. Note that these toolchains do not use cc_toolchain_suite anymore.
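To make the contrast concrete, the way a cc toolchain gets selected on the command line also differs between the two systems (the labels here are placeholders):
# legacy CROSSTOOL-style selection
bazel build //app:game --crosstool_top=//toolchains:my_suite --cpu=armv7
# newer toolchain resolution: toolchains registered via register_toolchains are picked to match the platform
bazel build //app:game --platforms=//platforms:linux_armv7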

Handling complex and large dependencies

Problem
I've been developing a game in C++ in my spare time and I've opted to use Bazel as my build tool, since I have never had a ton of luck (or fun) working with make or cmake. I also have dependencies in other languages (Python for some of the high-level scripting). I'm using glfw for basic window handling and high-level graphics support, and that works well enough, but now comes the problem: I'm uncertain how I should handle dependencies like glfw in a Bazel world.
For some of my dependencies (like gtest and fruit) I can just reference them in my WORKSPACE file and Bazel handles them automagically, but glfw hasn't adopted Bazel. All of this leads me to ask: what should I do about dependencies that don't use Bazel inside a Bazel project?
Current approach
For many of the simpler dependencies I have, I simply created a new_git_repository entry in my WORKSPACE file and created a BUILD file for the library. This works great until you get to really complicated libraries like glfw that have a number of dependencies of their own.
When building glfw for a Linux machine running X11, you now have a dependency on X11, which would mean adding X11 to my Bazel setup. X11 comes with its own set of dependencies (the X11 libraries like Xcursor) and so on.
glfw also tries to provide basic joystick support, which is provided by default on Linux, which is great! Except that this is provided by the kernel, which means the kernel is also a dependency of my project. Now, I shouldn't need anything more than the kernel headers, but this still seems like a lot to bring in.
Alternative Options
The reason I took the approach I've taken so far is to make the dependencies required to spin up a machine that can successfully build my game very minimal. In theory they just need a C/C++ compiler, Java 8, and Bazel and they're off to the races. This is great since it also means I can create a Docker container that has Bazel installed and do CI/CD really easily.
I could sacrifice this ease and just say that you need to have libraries like glfw installed before attempting to compile the game, but that brings back the whole "which version is installed and how is it configured" problem that Bazel is supposed to help solve.
Surely there is a simpler solution and I'm overthinking this?
If the glfw project has no BUILD files, then you have the following options:
Build glfw inside a genrule.
If glfw supports some other build system like make, you could create a genrule that runs the tool. This approach has obvious drawbacks, like the not-to-be-underestimated impracticality of having to declare all inputs of that genrule, but it'd be the simplest way of Bazel'izing glfw.
Pre-build glfw.o and check it into your source tree.
You can create a cc_library rule for it, and put the .o file in the srcs. Even though this solution is the least flexible of all because you not only restrict the target platform to whatever the .o was built for, but also make it harder to reproduce the whole build, the benefits are sometimes worth the costs.
I view this approach as a last resort. Even in Bazel's own source code there's one cc_library.srcs that includes a raw object file, because it was worth it, as the commit message of 92caf38 explains.
Require that glfw be installed.
You already considered this option. Some people may prefer this to the other approaches.

Cocoapods vs Gradle - iOS

Currently I am trying to figure out how to use Cocoapods. Many blogs claim that Cocoapods is the best dependency management tool at present.
However, I am also using Gradle plugin for building my application.
Now the question is: can Gradle do the same dependency management for my private files & libraries (.a files) as Cocoapods?
Long story short (Jan. 2015):
Gradle: build system + dependency management.
Cocoapods: dependency management for Xcode internal builds.
There is probably more to this (for others that want to start commenting "but Cocoapods can also.."), but for a start that summarises it.
If you are new to iOS and/or Xcode, you should probably not use a mixture of Gradle and Xcode, as it adds extra complexity to an already complex build environment. If you are familiar with Gradle and you also (!) have some knowledge of Xcode, then I would recommend using Gradle. Advantage? You have full control over your builds, and it saves you from messing around in endless Xcode build-config dialogs. On top of that, you gain access to other repositories (maybe not that interesting for you) AND you can script your builds in a cross-platform environment. I also use a non-macOS build server (Linux + Jenkins) which is able to interpret Gradle build scripts, which is another plus.
If you want access to a versioning system other than git, you gain access to that too... as I type these lines, I wonder why I ever built an app without Gradle :-). Even more things come to mind now that I think about it: mixed programming-language builds (Java/C#/ObjC...), unit-test integration that does not require Xcode, easy reuse of build configurations from project to project...
Cocoapods is pretty much tied to Xcode, since it generates Xcode project files. The problem with Xcode is that it works best when it's building the app, not some external build system. I suppose it's possible to make a Gradle plugin that uses Cocoapods repositories with Gradle's Objective-C support, but I haven't seen anything like that.

Could Free Pascal benefit of something like Apache Maven?

Apache Maven is a very popular build and dependency management tool in the Java open source ecosphere. I did some tests to find out if it can handle compiled Free Pascal / Delphi units and found it easy to implement. So it would be possible to
release open source libraries precompiled for Free Pascal (or Delphi) in a public Maven repository
include metadata in this repository which contains dependency information
use Maven on the command line to download the open source library from the public repository, and automatically resolve all dependencies
local repositories, working as proxies, could be used to cache frequently used binaries
automatic checksum generation and verification (provided by Maven) would reduce the risk of downloading corrupted binaries
source code and even documentation files could be provided with the binaries
binaries can be provided with or without debug information
continuous integration servers like Hudson, TeamCity or CruiseControl can be used to build projects whenever changes have been submitted to the source control system and notify developers about build errors
This way of dependency management could be very beneficial for open source projects which use many third party libraries with complex dependencies. It would avoid typical conflicts caused by using wrong versions.
For the developer, the workflow for editing and building a project would be reduced to a minimum:
checkout the project source from internal version control system
edit source file(s)
run mvn package to automatically download all required third party libraries (precompiled units) if they are not yet in the workstation's local repository
compile and run
The only additional file required by Apache Maven in the project folder is the pom.xml file containing the project information.
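For illustration, the day-to-day commands would be the standard Maven goals:
mvn package   # resolves and downloads missing dependencies into the local repository (~/.m2), then builds
mvn install   # additionally installs the built artifact into the local repository
mvn deploy    # publishes the artifact to a shared remote repository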
Edit: while Maven is usable for some of the required tasks, implementing a solution like Maven in native Free Pascal would have some advantages: no Java SDK required, support for all development platforms where Free Pascal is available, maintenance and plugin development in Pascal.
Usage of a Maven-like tool would not be helpful for open source projects only - commercial projects could access and use the artifacts in public Maven repositories in the same way as well.
Maven features are listed at http://maven.apache.org/maven-features.html
Update:
One use case could be the build of Lazarus, where Maven would download all required libraries and invoke the compiler with the necessary build path arguments. Changes in the dependencies on lower levels would be propagated automatically up to the parent build.
Possible benefits:
less time needed to set up a new workstation, no manual installation of third-party libraries required
fewer errors caused by wrong library versions, detection of version conflicts (for example if two libraries depend on different versions of a third library)
artifacts which are created in-house can be added to the local Maven repository and shared between developers and projects, central storage of all artifacts with metadata
builds are reproducible, just by using the same source and project metadata file (pom.xml)
can reduce development time and increase project stability
Update #2: FPMake
the FPMake build system for Free Pascal seems to be a tool with much potential; in many details it is quite similar to Maven:
FPMake is a Pascal-based build system developed for and distributed with FPC
FPMake standardizes builds by defining some constraints, like standard directories
the command fppkg <packagename> will look in a database for the package, extract it, and then compile fpmake.pp and run it
it has standard build targets (clean, build, install, ...)
it can create a 'manifest' file suitable for import into a repository (like mvn deploy or mvn install); the manifest is an XML file which looks very similar to a pom.xml in Maven:
FPMake manifest file:
<packages>
  <package name="my-package">
    <version major="0" minor="7" micro="6" build="1"/>
    <filename>my-package-0.7.6-1.zip</filename>
    <author>my name</author>
    <license>GPL</license>
    <homepageurl>http://www.freepascal.org/</homepageurl>
    <email>myname@freepascal.org</email>
    <description>this is the package description</description>
    <dependencies>
      <dependency>
        <package packagename="rtl"/>
      </dependency>
    </dependencies>
  </package>
</packages>
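On the command line, the fppkg flow described above would look roughly like this (subcommand names from memory, so treat them as approximate):
fppkg update              # refresh the package database
fppkg install my-package  # fetch the package, then compile fpmake.pp and run it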
Free Pascal has been working on a package system of its own, in a style somewhere between apt-get and FreeBSD ports (download source, build, and install automatically), called fppkg.
However, work has stalled. People investing time are the bottleneck, not people wanting to choose tools.
As far as Maven goes, I don't like auxiliary tools that need the installation of huge external runtimes. It might be fine for a big major app (like OpenOffice), but not for a utility.
I also prefer a tool that is designed to the FPC reality and workflow.
Documentation tools, build tools, download systems, and testsuite systems are already all there; it just needs a person who dedicates a lot of time to it to make it happen.
Some typical problems when introducing a new technology in a project like FPC, and why it tends to make its own tools:
need to train 20+ committers who work part-time.
The only COMMON programming language you can assume is Free Pascal. Even Delphi inner workings can't be taken for granted to be known (many committers came directly to FPC, or via TP or a Mac Pascal).
Obviously that makes something with plugins in a different language annoying.
Bash script is a close second. (g)make third, but already a magnitude less.
All servers are *nix-like (FreeBSD, OS X, Linux), but not all run Apache. (e.g. my FreeBSD mirror runs XSHTTPD)
somebody very knowledgeable must be the dedicated maintainer for a long time: fix problems, do updates and migrations, etc. Preferably more than one person, for obvious reasons.
A major pain is Linux distributions (and FreeBSD to a lesser degree): most maintainers of *nix packages are not capable of more than "./configure;make;make install", and must be spoon-fed with a near-buildable repository and auxiliary files.
In-distribution packaging of FPC/Lazarus has always been important, and is still increasing
All distributions have their own special rules about metadata, dependencies, and how sources must be published. Debian/Ubuntu in particular is very bureaucratic and slow.
Most don't like third-party auto-installers on top of their systems (since that bypasses their dependency control).
This all leads to the effective practice that own tools in Pascal with minimal scripting work best. Some tools used:
Gmake is mainly used to parameterise the build process on a per-directory level; a successor, fpcmake (not really a make derivative despite the name), has been started, but the migration hasn't completed.
LaTeX and a LaTeX-to-HTML conversion (tex4ht, though Debian uses hevea) are used to build the documentation (the non-library documentation).
The community site (Netscape community server, which uses Tcl scripting, a heavy and complex application server) has been trouble ever since it started, but especially lately, since the maintainer became less active.
Mantis has been a problem (especially the email module, which would crash or cripple the server due to the volume), but it has been whipped into shape through successive updates and the hard work of several Lazarus devels. Currently it is a decent workhorse.
The lazarus.freepascal.org phpBB forum, on the other hand, is relatively painless, since a lot of younger people know how to deal with it.
The same goes for Subversion (though the more advanced usage needs some adjusting; not everybody is deep into the ins and outs of merge tracking).
If somebody was really serious about Maven, I usually would ask him:
to CRITICALLY investigate its use for the project, in a very concrete way, with a schedule and time estimates. Bird's-eye-level "everything's possible" overviews are essentially worthless.
Give some thought to future changes of the technologies used. Every technology is eventually replaced, even the in-house ones, in 18+-year projects. A new technology must not make migrations of other infrastructural components hard or involved. The new technology to end all new technologies doesn't exist.
Make a migration plan. Migration is often underrated and underestimated.
And in the end, there is always the 1000000 Euro question, who will do the daily maintenance?
Keep in mind that in a company you can just kick out the person responsible for the application server. But in an informal environment this is much harder, especially long term, since people's lives, occupations, and time spent on the project vary.
Sounds like an interesting plan, but the Delphi community (and FPC even more so, I'd imagine!) values libraries as source far more than precompiled libraries. The general consensus is that anyone who uses a binary-only library is a fool, for two reasons: You can't fix any bugs you find in it, and compiler changes will break compatibility.

Managing/Using libraries with Debug builds vs Release builds

I'm curious about everyone's practices when it comes to using or distributing libraries for an application that you write.
First of all, when developing your application do you link the debug or release version of the libraries? (For when you run your application in debug mode)
Then when you run your app in release mode just before deploying, which build of the libraries do you use?
How do you perform the switch between your debug and release version of the libraries? Do you do it manually, do you use macros, or whatever else is it that you do?
I would first determine what requirements are needed from the library:
Debug/Release
Unicode support
And so on..
With that determined you can then create configurations for each combination required by yourself or other library users.
When compiling and linking, it is very important that you keep the libraries and executable consistent with respect to the configurations used, i.e. don't mix release and debug when linking.
I know on the Windows/VS platform this can cause subtle memory issues if debug & release libs are mixed within an executable.
As Brian has mentioned, with Visual Studio it's best to use the Configuration Manager to set up how you want each required configuration to be built.
For example our projects require the following configurations to be available depending on the executable being built.
Debug+Unicode
Debug+ASCII
Release+Unicode
Release+ASCII
The users of this particular project use the Configuration Manager to match their executable requirements with the project's available configurations.
Regarding the use of macros, they are used extensively in implementing compile-time decisions, for example whether the debug or release version of a function is to be linked. If you're using VS, you can view the preprocessor definitions attribute to see how the various macros are defined (e.g. _DEBUG, _RELEASE); this is how the configuration controls what gets compiled.
What platform are you using to compile/link your projects?
EDIT: Expanding on your updated comment..
If the Configuration Manager option is not available to you then I recommend using the following properties from the project:
Linker->Additional Library Directories or Linker->Input
Use the macro $(ConfigurationName) to link with the appropriate library configuration e.g. Debug/Release.
$(ProjectDir)\..\third-party-prj\$(ConfigurationName)\third-party.lib
Build Events or Custom Build Step configuration property
Copy the required library file(s) from the dependent project before (or after) the build occurs.
xcopy $(ProjectDir)\..\third-party-prj\$(ConfigurationName)\third-party.dll $(IntDir)
The macro $(ProjectDir) will be substituted for the current project's location and causes the operation to occur relative to the current project.
The macro $(ConfigurationName) will be substituted for the currently selected configuration (default is Debug or Release) which allows the correct items to be copied depending on what configuration is being built currently.
If you use a regular naming convention for your project configurations it will help, as you can use the $(ConfigurationName) macro, otherwise you can simply use a fixed string.
I use VS. The way I do it is that I pull in the libraries I need through the project's references, which basically just says in which folder to look for a specific library at project load time. I develop my libraries to be as project-independent and reusable as possible, so they are all projects of their own. For the libraries that I need for a specific project, I create a "3rdParty" or "libs" folder at the same level as my "src" folder in my svn folder tree. I tend to only use release libraries, but when I get some unknown issues and want to switch to debug, I manually copy a debug version of the files into the "lib" folder and reload the project.
I am unsure whether I should be keeping both debug and release versions in my svn tree. Since they are projects of their own, keeping them in the svn tree of another project doesn't feel right. They can be rebuilt without a hitch at any moment.
And then I wanted to find a way of making the switch more... hmmm... well, basically automatic if you will, but that's not what I really mean. It just feels that switching the files manually between release and debug isn't right. Maybe I haven't found it yet, but what I would like is an option that would do something like:
For library "stack.dll" look in "......\3rdParty\" for release and "......\3rdPartyD\" for debug.
Anything that does something like that? I don't know. What do you suggest?
Remember, libraries are external projects; their built files are entirely elsewhere. In fact, think of it as: you have to check out another project, build it, and copy the built library if you want another copy. How would you set that up?
