How do I install a project built with Bazel?

I am working on a project that is transitioning from CMake to Bazel. One critical feature that we are apparently losing in the bargain is the ability to install the project, so that it can be used by other (not necessarily Bazel) projects.
AFAICT, there is currently no built-in support for installing a project?!
I need to create a portable (must work on at least Linux and MacOS) way to install the project. Specifically:
I need to be able to specify libraries, headers, executables, and other files (e.g. LICENSE) that need to be installed.
The user needs to be able to specify an absolute prefix where things should be installed.
I really, really should be able to execute the "install" step more than once, giving different prefixes each time, without Bazel getting confused (i.e. it must not try to "remember" what files it already installed, or if it does, must understand when the prefix is different from last time).
Libraries should be installed to the right place (e.g. lib64), or at least it should be possible for the user to specify the correct libdir.
The install step MUST NOT touch the time stamp on any file from a previous install that has not changed. (Ideally, Bazel itself would handle this; using the install command is tricky and has potential portability issues. Note platform requirements, above.)
What is the best way to go about doing this?

Unless you want to build a specific package format (e.g. deb or rpm), you probably want to create an executable rule that does the install for you.
You can create a rule that generates an executable (e.g. a shell script) that does the install for you (e.g. checksums each installed file to detect changes and only copies files that have actually changed). You would have to use the extension language to do that; it would look similar to what the Docker rules do to load an image with the incremental loader.
Addition: I forgot to say that the install itself would be run with the run command: bazel run install, if the rule is named install in the top-level BUILD file.
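For illustration, here is a minimal sketch of the kind of script such a rule could emit (the target name //:install, the flag names, and the file list are assumptions, not an established Bazel convention). It copies a file only when its content has changed, so repeated installs leave the timestamps of unchanged files alone and different prefixes can be used on each run:
#!/bin/sh
# Hypothetical installer generated by a custom rule; invoked as, e.g.:
#   bazel run //:install -- --prefix=/opt/myproject --libdir=lib64
set -eu
PREFIX=/usr/local
LIBDIR=lib
for arg in "$@"; do
  case "$arg" in
    --prefix=*) PREFIX=${arg#--prefix=} ;;
    --libdir=*) LIBDIR=${arg#--libdir=} ;;
  esac
done
# Copy only when the content differs, so unchanged files keep their timestamps.
install_if_changed() {
  mkdir -p "$(dirname "$2")"
  cmp -s "$1" "$2" 2>/dev/null || cp "$1" "$2"
}
# The rule would generate this list from its srcs/deps; the paths are examples.
install_if_changed bazel-bin/libfoo.so "$PREFIX/$LIBDIR/libfoo.so"
install_if_changed foo/foo.h           "$PREFIX/include/foo/foo.h"
install_if_changed bazel-bin/foo_tool  "$PREFIX/bin/foo_tool"
install_if_changed LICENSE             "$PREFIX/share/doc/foo/LICENSE"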

Related

What files or directories of a release are the bare minimum to run a release?

Let's say I have a completely new VPS server which I've just rolled out and haven't installed anything on yet.
And I've compiled and built a production release of a Phoenix application on my local machine, which is identical to the VPS server Linux distribution- and version-wise.
In the directory _build/prod/rel/my_app123 there have been generated 4 subdirectories:
bin
erts-12.3
lib
releases
Will copying the content of rel/my_app123/, that is, these 4 subdirectories, over to the VPS be enough to run the application?
Or will I have to install something extra as well? Elixir and Erlang?
How about production dependencies from mix.exs? Or have these been included and compiled into the release?
P.S. Assume that my web application has no "js", "css" and the like files, and doesn't use a database.
When you run mix release, it bundles all of your Elixir/Erlang dependencies for the MIX_ENV in question into the release directory, the Erlang BEAM runtime/VM that you were using in your build, and any files that you specify in your mix project in mix.exs.
Because the BEAM runtime and code that bootstraps loading your code are included in the release, you won't need to install Elixir or Erlang on the target machine.
Things that are not included:
Any non-Elixir dependencies. For example, if you rely on openssl, you'll need to make sure a binary-compatible version of it is installed on the machine you plan to run on (typically, the equivalent major version release).
Portable bytecode. The BEAM isn't like the Java VM. The compiled BEAM code needs to run on a substantially similar architecture: build on an Arm64 machine for deployment on an Arm64 virtual machine, or on x86 for Intel-compatible hardware, for instance. It's also probably best to use the same major OS distribution. There may be cases where "any Linux + same CPU architecture" is fine, but, for example, building on a Windows or macOS install of Elixir/OTP and deploying on Linux is a non-starter; you'd need to use a sufficiently similar OS.
As an example, one of my projects has its releases built on Alpine using Docker, so we only really have to worry about CPU compatibility. In our case we do need to make sure some external non-Elixir dependencies our app binds to are included on the docker image.
RUN apk add --no-cache libstdc++ openssl ncurses-libs wkhtmltopdf xvfb \
fontconfig \
freetype \
ttf-dejavu
(Ignore the fact that wkhtmltopdf is kind of deprecated; we're working on it, but for now it's a non-Elixir dependency we rely on.)
If you're building for, say, an EC2 instance and not using Docker, you'd just need to make sure your release is built on an OS similar to what you're using in production, and that the production AMI (image) has those non-Elixir dependencies on it, or will at deployment time, perhaps installed with apt or another package manager. For a VPS, the solution for non-Elixir dependencies will depend on whether you have the option of customizing the base machine image (maybe with Packer or Ansible).
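For example, on a Debian/Ubuntu-based server the non-Elixir dependencies could be pulled in with something like the following (package names are illustrative and depend on the distribution release and on what your app actually links against):
# Illustrative only; pick the packages your release actually needs
sudo apt-get update
sudo apt-get install -y openssl libncurses6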
Since you seem to have been a bit confused about it in the comments: yes, MIX_ENV=prod mix release will build all of your production Elixir/Erlang dependencies and include them in the _build/prod folder.
I include the whole ./prod folder in our release, but it looks like the protocol consolidation binaries and the lib folder's .beam files are all in the rel folder, so that's a bit unnecessary.
If you do a default build, the target will be inside your _build directory, with sub-directories for the config environment and your application, e.g. _build/dev/rel/your_app/. That directory should contain everything you need to run your app -- the prompt after running mix release provides some clues for this when it says something like:
Release created at _build/dev/rel/your_app!
I find it more useful, however, to zip up the app into a single portable file (and yes, I agree that the details about how to do this are not necessarily the first things you see when reading about Elixir releases). The trick is to customize your mix.exs by fleshing out the releases option -- this is usually done via a dedicated private function but the organization of how you supply the options is up to you.
What I often find useful is generating a single zipped .tar.gz file. This can be accomplished by specifying the include_executables_for option along with steps. It looks something like this:
# mix.exs
defmodule YourApp.MixProject do
  use Mix.Project

  def project do
    [
      # ...
      releases: releases()
      # ...
    ]
  end

  defp releases do
    [
      my_app: [
        include_executables_for: [:unix],
        steps: [:assemble, :tar]
      ]
    ]
  end
end
When you configure your application this way, running mix release will generate a nice portable file containing your app with everything it needs. Unzipping this file is educational for understanding everything your app needs. By default this file will be created at a location like _build/dev/yourapp-1.0.0.tar.gz. You can configure the build path by specifying a path for your app. See Mix.Release for more options.
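To make that concrete, deploying such a tarball to the VPS from the question might look like this (host names, paths, and the version number are illustrative; bin/my_app start and daemon are the standard release commands):
# On the build machine
MIX_ENV=prod mix release
scp _build/prod/my_app-1.0.0.tar.gz user@my-vps:/tmp/
# On the VPS
mkdir -p /opt/my_app
tar -xzf /tmp/my_app-1.0.0.tar.gz -C /opt/my_app
/opt/my_app/bin/my_app start    # or "daemon" to run detached, "stop" to shut down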

How to modify CONFIG_CPU_TYPE in .config

I'm building OpenWrt from source to run in QEMU for a research project. I want to use an x86-64 build since it will be a little easier for my purposes. When I go through the process of downloading and building, all is successful, except it won't run in QEMU because the default CPU type is "pentium4".
As I understand OpenWrt, the build configuration is stored in .config. The .config file is created/modified in one of three ways:
make menuconfig
make kernel_config
make defconfig
I've used the search option in menuconfig and kernel_config and I see no setting for CONFIG_CPU_TYPE. I manually changed the setting as well as the associated options and got a successful build with target_i386_i486_musl.
I'd rather not modify .config directly if I don't have to. What is the correct way to change this setting?
Manually changing/adding parameters in .config won't help, as those parameters are not defined in the framework.
If you want to build it from source, then maybe start by selecting
Target System (x86)
Subtarget (x86_64)
Target Profile (Generic)
in make menuconfig.
Or you can directly download the x86 image and use it.
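If you go the build-from-source route, the sequence might look roughly like this (a sketch; the exact CONFIG_TARGET_* symbols written to .config can vary between OpenWrt releases):
make menuconfig                      # Target System (x86) -> Subtarget (x86_64) -> Target Profile (Generic)
make defconfig                       # expand the selections into a consistent .config
grep '^CONFIG_TARGET_x86' .config    # expect lines like CONFIG_TARGET_x86=y and CONFIG_TARGET_x86_64=y
make -j"$(nproc)"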

trying to understand LD_PRELOAD and SUID/SGID with checkinstall or porg

I want to use porg in my LFS distro. It's similar to checkinstall; it uses LD_PRELOAD.
1. If you read the README:
CheckInstall currently is unable to track any file system changes made
by statically linked programs
I think it refers to commands like mkdir, mv, ln, etc. So I should not have any problems with this. Am I right?
2. Then, the main problem:
NOTE ON SUID/SGID PROGRAMS: CheckInstall can't track their actions
because of some limitations in the LD_PRELOAD system that
installwatch uses. This is good for security reasons, but it can
generate unexpected results when the installation process uses
SUID/SGID binaries.
What does it mean? I don't care if I lose track of some files. I do care if there will be unexpected results, or if I can't install the package correctly.
Also, how many packages have this problem?
If coreutils (mkdir, mv, etc.) on your system are statically linked (i.e. running file on them reports "statically linked"), porg will not be able to track their operations and thus some installed files may go untracked. Statically linked executables are second-class citizens on Linux and LD_PRELOAD does not support them.
Setuid executables indeed sanitize LD_PRELOAD before use: they ignore all entries which have slashes in their names (so that only files from standard system paths can be loaded) and also require that the shared library itself has the setuid bit set. So in your case you'll need to locate porg's preloaded library and set the setuid bit on it (via chmod a+s libxyz.so). BTW, it may make sense to ask the porg authors to make this change in their distribution. I don't think this will cause any problems in a typical package, as installers typically don't need to run setuid programs (like mount, passwd, sudo).
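In practice the check and the fix might look like this (the library path and name below are assumptions; locate the preload library your porg build actually installs):
file /bin/mkdir /bin/mv /bin/ln    # should report "dynamically linked"; statically linked tools can't be tracked
chmod a+s /usr/lib/libporg-log.so  # assumed name/path of porg's LD_PRELOAD library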

How to install waf?

I have cloned and built the waf script using:
./waf-light configure
Then to build my project (provided by Gomspace) I need to add waf and eclipse.py to my path. So far I haven't found anything better than this setenv script:
WAFROOT=~/git/waf/
export PYTHONPATH=$WAFROOT/waflib/extras/:$PYTHONPATH
export PATH=~/git/waf/:$PATH
Called with:
source setenv
This is a somewhat ugly solution. Is there a more elegant way to install waf?
You don't install waf. The command you found correctly builds waf: ./waf-light configure build. Then, for each project you create, you put the built waf script into that project's root directory. I can't find a reference, but this is the way in which waf's primary author, Thomas Nagy, wants the tool to be used. Projects that repackage waf to make the tool installable aren't "officially sanctioned."
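In practice that looks something like this (paths are just an example):
cd ~/git/waf && ./waf-light configure build   # build the waf script once
cp ~/git/waf/waf ~/my_project/                # vendor it into the project
cd ~/my_project && ./waf configure build      # each project runs its own copy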
There are advantages and disadvantages with non-installation:
Disadvantages:
You have to add the semi-binary, roughly 100 kB waf file to your repository.
Because the file contains binary code, people can have legal objections to distributing it.
Advantages:
It doesn't matter if new versions of waf break the old API.
Users don't need to install waf before compiling the project -- having Python on the system is enough.
Fedora (at least Fedora 22) has a yum package for waf, which shows that it's possible to do a system install of waf, albeit with a hack.
After you run something like python3 ./waf-light configure build, you'll get a file called waf that's actually a Python script with some binary data at the end. If you put it into /usr/bin and run it as non-root, you'll get an error because it fails to create a directory in /usr/bin. If you run it as root, you'll get the new directory and /usr/bin/waf runs normally.
Here's the trick that I learned from examining the find_lib() function in the waf Python script.
Copy the waf to /usr/bin/waf
As root, run /usr/bin/waf. Notice that it creates a directory. You'll see something like /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18
mv that directory to /usr/lib, dropping the . in the directory name, e.g. mv /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18 /usr/lib/waf-2.0.19-b2f63c807a4215294bf6005410c74c18
If you want to use waf with Python 3, repeat Steps 2-3 running the Python script /usr/bin/waf under Python 3. Under Python 3, the directory names will start with .waf3-/waf3- instead of .waf-/waf-.
(Optional) Remove the binary data at the end of /usr/bin/waf.
Now, non-root should be able to just use /usr/bin/waf.
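Put together, the steps above might look roughly like this (the version number and hash suffix will differ for your build):
sudo cp waf /usr/bin/waf
sudo /usr/bin/waf --version                    # first run as root unpacks /usr/bin/.waf-<version>-<hash>
hidden=$(ls -d /usr/bin/.waf-* | head -n 1)    # e.g. /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18
sudo mv "$hidden" "/usr/lib/$(basename "$hidden" | cut -c2-)"   # same name, minus the leading dot
waf --version                                  # now usable by non-root users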
That said, here's something to consider, as another answer said: I believe waf's author intended waf to be embedded in projects so that each project can use its own version of waf without fear that a project will fail to build when there are newer versions of waf. Thus, the one-global-version use case seems to be not officially supported.

Apache Karaf clean start

I'm trying to clean-start Karaf on Windows using the clean option.
It does delete the data directory with the bundle cache, but it copies new bundles into the data directory from the system directory, not the local Maven repository. The system directory, however, has stale jars compared to the local Maven repository, which causes Karaf to start with stale bundles.
Is this a 'feature' of the clean option? Am I missing something? How can I start Karaf with the latest code from the Maven repo without dealing with the file system?
You can't, as the system directory is by default the one that is used.
The clean option is meant to clear up bundles in an awkward state and is only rarely used. Sometimes this happens if you start and stop the Karaf container in quick succession: a bundle might be left in an incomplete state, and since that bundle state is kept, only a clean will help. Another way of cleaning is to delete the data folder.
So what you seem to be intending is to update certain bundles that are installed from the system folder. For that you need to install the newer version, because Karaf knows which versions are in the system folder; those bundles are defined in the framework feature, which is the basic feature used by Karaf itself.
If you have your own bundles in the system folder, there is no way of updating those as long as they are regarded as boot features. In case you want to update them, you'll need to make sure those features aren't boot features anymore, and after that just install the newer versions of your bundles and uninstall the older ones. This can be done with the command shell.
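From the Karaf command shell that could look like this (the Maven coordinates and bundle ID are hypothetical):
karaf@root()> bundle:install -s mvn:com.example/my-bundle/1.1.0
karaf@root()> bundle:list | grep my-bundle
karaf@root()> bundle:uninstall 123
The first command installs and starts the newer version, the second lists matching bundles so you can note the stale bundle's ID, and the third removes it by that ID.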
One side note: it's usually best to ask these questions on the Karaf users mailing list; you'll get more people answering your questions there.
