Make meta-electron compatible with jethro Yocto version - clang

I'm trying to cross-compile Electron for DIGI's ConnectCore6. To do so, I'm using Yocto and the five layers provided by DIGI (all based on the jethro version of Yocto).
Now, to run Electron on the SBC, I have to use the meta-electron layer, which has four dependencies:
openembedded-core
meta-openembedded
meta-clang
meta-browser
Unfortunately, these dependencies have to be used at their master versions, not the jethro ones. Also, DIGI provides the poky layer instead of the openembedded-core layer.
So, to try to use Electron on the ConnectCore6, I downloaded meta-clang (master version), meta-browser (jethro version) and meta-electron (master version), and I added these layers to my bblayers.conf.
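For reference, the additions to conf/bblayers.conf look roughly like this (a sketch; the layer paths are illustrative and match the /usr/local/dey-2.0/sources checkout mentioned below):
BBLAYERS += " \
  /usr/local/dey-2.0/sources/meta-clang \
  /usr/local/dey-2.0/sources/meta-browser \
  /usr/local/dey-2.0/sources/meta-electron \
"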
But, because my poky layer is on the jethro branch, I get the following error from meta-clang, whose bbappend can't find a musl recipe in the poky layer:
ERROR: No recipes available for:
/usr/local/dey-2.0/sources/meta-clang/recipes-core/musl/musl_%.bbappend
musl is available in the master branch of poky, but not in the jethro branch. Of course, I tried to copy-paste the musl directory from the master branch of poky into the jethro branch, but this just brings more errors and more missing recipes (bsd-headers-devs, musl-dev, ...).
Do you know how to fix this last error and/or how to make musl compatible with the jethro version of poky? I really need help on this point. Thank you.

You can try adding the meta-musl layer into the mix (using its jethro branch); it adds musl support for oe-core jethro. These days almost all of it has been merged into the main oe-core repository, but for your particular case it might help.
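Something like this (a sketch; I'm assuming the layer is Khem Raj's meta-musl on GitHub and that it has a jethro branch):
git clone -b jethro https://github.com/kraj/meta-musl.git /usr/local/dey-2.0/sources/meta-musl
# then register it in conf/bblayers.conf:
BBLAYERS += " /usr/local/dey-2.0/sources/meta-musl "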

Related

How to make Bazel correctly cache dependencies built by itself?

I have a (relatively small) Bazel rule for some configure/make-based project, say xmlsec1 (you can take any other; the important thing seems to be the external tooling behind foreign_cc):
xmlsec1.BUILD:
load("#rules_foreign_cc//foreign_cc:defs.bzl", "configure_make")
filegroup(name="all_srcs", srcs=glob(["**"]))
configure_make(
name="xmlsec1",
lib_name="xmlsec1",
lib_source=":all_srcs",
configure_command="configure",
configure_in_place=True,
out_binaries=["xmlsec1"],
targets=["install"],
)
xmlsec1.bzl:
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
def xmlsec1():
http_archive(
name = "xmlsec1",
url = "https://www.aleksey.com/xmlsec/download/xmlsec1-1.2.37.tar.gz",
sha256 = "5f8dfbcb6d1e56bddd0b5ec2e00a3d0ca5342a9f57c24dffde5c796b2be2871c",
build_file = "#//:xmlsec1.BUILD",
)
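For completeness, this macro is invoked from my WORKSPACE file, roughly like this (a sketch, assuming xmlsec1.bzl sits at the repository root):
load("//:xmlsec1.bzl", "xmlsec1")
xmlsec1()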
All works fine for me until Bazel's remote cache gets activated and I'm building across different Linux distributions.
To avoid cache collisions I'm running bazel build with --action_env=SYSTEM_DIGEST="$(cat /etc/os-release)", resulting in different hashes on different distros.
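Spelled out, the invocation looks something like this (illustrative; the target is whatever I normally build):
bazel build --action_env=SYSTEM_DIGEST="$(cat /etc/os-release)" //...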
While this approach seems to work for the artifacts defined in xmlsec1 (I can see this from the execution logs and from observing the expected rebuilds), the foreign_cc part seems to be built without those action_env variables.
This is what I get when I try to build @xmlsec1//:xmlsec1 (line breaks added for readability):
+ /home/me/.cache/bazel/_bazel_me/8f6a55c898f3ec22f87d9cee5890b9e5/sandbox/processwrapper-sandbox/5/execroot/my_project_packages/bazel-out/k8-opt-exec-2B5CBBC6/bin/external/rules_foreign_cc/toolchains\
/make/bin/make install
/home/me/.cache/bazel/_bazel_me/8f6a55c898f3ec22f87d9cee5890b9e5/sandbox/processwrapper-sandbox/5/execroot/my_project_packages/bazel-out/k8-opt-exec-2B5CBBC6/bin/external/rules_foreign_cc/toolchains\
/make/bin/make: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /home/me/.cache/bazel/_bazel_me/8f6a55c898f3ec22f87d9cee5890b9e5/sandbox/processwrapper-sandbox/5/execroot/my_project_packages/bazel-out/k8-opt-exec-2B5CBBC6/bin/external/rules_foreign_cc/toolchains/make/bin/make)
/home/me/.cache/bazel/_bazel_me/8f6a55c898f3ec22f87d9cee5890b9e5/sandbox/processwrapper-sandbox/5/execroot/my_project_packages/bazel-out/k8-opt-exec-2B5CBBC6/bin/external/rules_foreign_cc/toolchains\
/make/bin/make: /lib64/libc.so.6: version `GLIBC_2.33' not found (required by /home/me/.cache/bazel/_bazel_me/8f6a55c898f3ec22f87d9cee5890b9e5/sandbox/processwrapper-sandbox/5/execroot/my_project_packages/bazel-out/k8-opt-exec-2B5CBBC6/bin/external/rules_foreign_cc/toolchains/make/bin/make)
I get this loader error only with the Bazel remote cache activated and (this is the interesting part) when xmlsec1 has been built on a recent distribution (say Ubuntu 22.04) and I then try to build it on CentOS 8.
So I guess this is what's going on:
make from foreign_cc gets built and linked against a recent version of GLIBC, ignoring the values provided with --action_env;
the foreign_cc artifacts are stored in the Bazel remote cache;
another build on an older distro (CentOS 8) also tries to build make and has no reason not to take the artifacts from the cache, since the --action_env values are ignored there as well, resulting in the same hash;
since the binaries are linked against a version of GLIBC that is not yet available on CentOS 8, they are incompatible and fail with the error shown above.
So my question(s) is(/are):
Is that intended behavior? Why is --action_env ignored for builds Bazel runs implicitly (and not for those defined explicitly)?
Is there a way to apply those flags to Bazel's own dependencies?
Is there a better way to define system properties that affect all builds?

What files or directories of a release are the bare minimum to run a release?

Let's say I have a completely new VPS server which I've just rolled out and haven't installed anything on yet.
And I've compiled and built a production release of a Phoenix application on my local machine, which is identical to the VPS server distribution- and version-wise.
In the directory _build/prod/rel/my_app123, four subdirectories have been generated:
bin
erts-12.3
lib
releases
Will copying the contents of rel/my_app123/, that is, these four subdirectories, over to the VPS be absolutely enough to run the application?
Or will I have to install something extra as well, such as Elixir and Erlang?
How about the production dependencies from mix.exs? Or have these been included and compiled into the release?
P.S. Assume that my web application has no "js", "css" or similar files, and doesn't use a database.
When you run mix release, it bundles into the release directory all of your Elixir/Erlang dependencies for the MIX_ENV in question, the Erlang BEAM runtime/VM that you were using in your build, and any files that you specify in your mix project in mix.exs.
Because the BEAM runtime and code that bootstraps loading your code are included in the release, you won't need to install Elixir or Erlang on the target machine.
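So in principle a minimal deployment is copying the release directory and invoking the start script; something like this (a sketch; the host and paths are hypothetical):
scp -r _build/prod/rel/my_app123 user@vps:/opt/
ssh user@vps '/opt/my_app123/bin/my_app123 start'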
Things that are not included:
any non-Elixir dependencies. For example, if you rely on openssl, you'll need to make sure you have a binary-compatible version of it installed on the machine you plan to run on (typically, the equivalent major version release).
Portable bytecode. The BEAM isn't like the Java VM. Compiled BEAM code needs to run on a substantially similar architecture: build on an Arm64 machine for deployment on an Arm64 virtual machine, or on x86 for Intel-compatible hardware, for instance. And it's probably best to use the same major OS distribution. There may be cases where "any Linux + same CPU architecture" is fine, but, for example, building on a Windows or macOS install of Elixir/OTP and deploying on Linux is a non-starter; you'd need a sufficiently similar OS.
As an example, one of my projects has its releases built on Alpine using Docker, so we only really have to worry about CPU compatibility. In our case we do need to make sure some external non-Elixir dependencies our app binds to are included in the Docker image.
RUN apk add --no-cache libstdc++ openssl ncurses-libs wkhtmltopdf xvfb \
fontconfig \
freetype \
ttf-dejavu
(Ignore the fact that wkhtmltopdf is kind of deprecated; we're working on it. But for now it's a non-Elixir dependency we rely on.)
If you're building for, say, an EC2 instance and not using Docker, you'd just need to make sure your release is built on an OS similar to what you're using in production, and that the production AMI (image) has those non-Elixir dependencies on it, or will at deployment time, perhaps via apt or another package manager. For a VPS, the solution for non-Elixir dependencies will depend on whether you have the option of customizing the base machine image (maybe with Packer or Ansible).
Since you seem to have been a bit confused about it in the comments: yes, MIX_ENV=prod mix release will build all of your production Elixir/Erlang dependencies and include them in the _build/prod folder.
I include the whole ./prod folder in our release, but it looks like the protocol-consolidation binaries and the lib folder's .beam files are all in the rel folder, so that's a bit unnecessary.
If you do a default build, the target will be inside your _build directory, with sub-directories for the config environment and your application, e.g. _build/dev/rel/your_app/. That directory should contain everything you need to run your app -- the prompt after running mix release provides some clues for this when it says something like:
Release created at _build/dev/rel/your_app!
I find it more useful, however, to zip up the app into a single portable file (and yes, I agree that the details of how to do this are not necessarily the first things you see when reading about Elixir releases). The trick is to customize your mix.exs by fleshing out the releases option; this is usually done via a dedicated private function, but how you organize the options is up to you.
What I often find useful is generating a single zipped .tar.gz file. This can be accomplished by specifying the include_executables_for option along with steps. It looks something like this:
# mix.exs
defmodule YourApp.MixProject do
  use Mix.Project

  def project do
    [
      # ...
      releases: releases()
      # ...
    ]
  end

  defp releases do
    [
      my_app: [
        include_executables_for: [:unix],
        steps: [:assemble, :tar]
      ]
    ]
  end
end
When you configure your application this way, running mix release will generate a nice portable file containing your app with everything it needs. Unzipping this file is educational for understanding everything your app needs. By default this file will be created at a location like _build/dev/yourapp-1.0.0.tar.gz. You can configure the build path by specifying a path for your app. See Mix.Release for more options.
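Deployment is then a matter of copying one file and unpacking it; a sketch (host and paths hypothetical, assuming a MIX_ENV=prod build):
scp _build/prod/my_app-1.0.0.tar.gz user@vps:/opt/my_app/
ssh user@vps 'cd /opt/my_app && tar -xzf my_app-1.0.0.tar.gz && bin/my_app start'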

How to deploy an Agda library on Travis CI?

I've read the .travis.yml in the agda-stdlib project, but it's very different from, and more complex than, what a simple library written purely in Agda (without Haskell code and shell scripts) needs.
I'm confused by the stdlib's .travis.yml. I've installed Agda via cabal install, but the stdlib clones and compiles Agda on Travis CI, and there are a lot of commands that seem irrelevant to building it.
Also, agda-stdlib seems to be available in Ubuntu's repositories. This could be a third approach to installing it.
Also, the stdlib doesn't have dependencies, but I do. I don't know how to add a dependency either.
To sum up my questions:
Of the three ways of installing Agda listed above, which one should I choose?
How do I add a dependency so that the Agda compiler knows I'm actually using it?
The standard library is a bit of a special case: it evolves in lock-step with the development version of Agda. As such it is often the case that it cannot be compiled with a version of Agda readily available in your distribution of choice (e.g. because it uses syntax that was not available beforehand!) and it is forced to pull the latest version from GitHub.
Installing Agda
If your library is compatible with a distributed version then it will be far simpler for you to simply pull it from the repositories via apt-get install agda.
Alternatively Scott Fleischman has a basic example on how to use a docker image to typecheck your development: https://github.com/scott-fleischman/agda-travis
Installing your dependencies
If your project relies on dependencies then you do need to install them. In practice this will probably mean fetching a bunch of tarballs via wget and having a ~/.agda/libraries file pointing at their library files.
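Concretely, a Travis build step might look something like this (a sketch; the dependency name and URL are hypothetical), after which your own .agda-lib file lists the dependency under its depend: field:
wget https://example.org/agda-foo/archive/v1.0.tar.gz
tar -xzf v1.0.tar.gz
echo "$PWD/agda-foo-1.0/agda-foo.agda-lib" >> ~/.agda/libraries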
Cf. the manual on library management

Delphi distributed building failure

I've created a tiny project [0] to reproduce an error in a controlled environment. The facts: I'm using Jenkins to build my project, a big one, and I'd like to run some parallel builds. Let me put it graphically:
[MyBasicPackage] -----> [MyPackageTester] ------> [MyBasicApp]
                                          .
                                          .
                                          +-----> [...]
                                          +-----> [...]
This is the organization I've set up in [0]: I have a class TMyUnit (in MyBasicPackage) registered in the Spring container to be tested. I build it and generate its .dcu, .bpl, and so on.
In the second stage I build MyPackageTester, which requires MyBasicPackage. Finally I build the app, which requires MyPackageTester. So far so good.
When I build MyBasicPackage on, say, PC-00, take the artifacts, and try to build MyPackageTester on PC-06 (same arch, same OS, same IDE, same Spring4D version), a nice error arises:
Unit TMyUnit was compiled with a different version of Spring.Container.Registration
So I update Spring4D on both machines (PC-00 and PC-06) and build them. Run... and the same error arises.
I check the library path options (C:\Program Files (x86)\Embarcadero\Studio\14.0\Componentes\spring4d\Library\DelphiXE6\Win32\Debug), delete the .dcu files and build once again on both machines; same error.
I copy the .dcu files from PC-00 to PC-06 to rule out any system-configuration differences, and the same error arises.
Probably I'm trying to do something that isn't possible so far. I've googled for a couple of days without luck.
Any ideas?
Please feel free to fork the example or send a pull request ;)
Regards
[0] https://github.com/graguirre/DelphiDepencyExample
In your case you need to build with the Spring.Core runtime package. Not only will that prevent this error, but your code will actually work.
If you do not, then every module will hold its own instance of the GlobalContainer you are using, and nothing will work.
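In the IDE that means enabling runtime packages and adding Spring.Core under Project > Options > Packages > Runtime Packages. If Jenkins drives the command-line compiler instead, the equivalent is something like this (a sketch; I'm assuming dcc32 and its -LU "use runtime package" switch):
dcc32 -LUSpring.Core MyPackageTester.dpk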
Maybe one solution is to put all your libraries in a centralized repository and pull them from there to compile your projects. That should resolve the different-version error.

Dealing with a large c++ library in a Rails deployment

I have a Rails project that is going to use OpenCV, and it depends on a specific version of it (2.4.6.1).
I'm looking for deployment advice. The Ubuntu opencv package is an earlier version and therefore not suitable.
I can see a number of possibilities, but I'm trying to think of what will work best.
Just write it up in a README and expect people to follow it: download this, apt-get that, etc...
Add opencv, tagged at the version we need, as a git subtree, and include a Rake task to build it.
Write a script to download and compile the needed code.
Something else ?
None of them seem all that great, to tell the truth.
Can your application be made to work with OpenCV 2.4.2? That version is available in Ubuntu 13.04, and you could request that it be backported to 12.04. If not, you could update the source package to 2.4.6.1 (which would require learning about Debian packaging, but might not be too difficult since you would be modifying an existing package instead of starting from scratch), upload it to a PPA, and instruct your users on Ubuntu to install OpenCV from there. You could also package your Rails application and put it in the PPA, which would make the overall installation even easier.
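The instructions for your users would then boil down to something like this (a sketch; the PPA name is hypothetical):
sudo add-apt-repository ppa:your-name/opencv
sudo apt-get update
sudo apt-get install libopencv-dev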
