trying to understand LD_PRELOAD and SUID/SGID with checkinstall or porg - setuid

I want to use porg in my LFS distro. It's similar to checkinstall in that it also uses LD_PRELOAD.
1. If you read the README:
CheckInstall currently is unable to track any file system changes made
by statically linked programs
I think this refers to commands like mkdir, mv, ln, etc., so I should not have any problems with this. Am I right?
2. Then, the main problem:
NOTE ON SUID/SGID PROGRAMS: CheckInstall can't track their actions
because of some limitations in the LD_PRELOAD system that
installwatch uses. This is good for security reasons, but it can
generate unexpected results when the installation process uses
SUID/SGID binaries.
What does it mean? I don't care if I lose track of some files. I do care if there will be unexpected results, or if I can't install the package correctly.
Also, how many packages have this problem?

If the coreutils (mkdir, mv, etc.) on your system are statically linked (i.e. running file on them reports "statically linked"), porg will not be able to track their operations, and thus some installed files may go untracked. Statically linked executables are second-class citizens on Linux, and LD_PRELOAD does not support them.
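A quick way to check (the exact wording of the output varies a bit between file versions):

file $(which mkdir mv ln)
# "dynamically linked" means porg can interpose via LD_PRELOAD;
# "statically linked" means those operations would go untracked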
Setuid executables indeed sanitize LD_PRELOAD before use: the dynamic loader ignores all entries whose names contain slashes (so that only files from standard system paths can be loaded) and also requires that the shared library itself has the setuid bit set. So in your case you'll need to locate porg's preloaded library and set the setuid bit on it (via chmod a+s libxyz.so). BTW, it may make sense to ask the porg authors to make this change in their distribution. I don't think this will cause any problems in a typical package, as installers typically don't need to run setuid programs (like mount, passwd, sudo).
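A sketch of that fix (libporg-log.so is my guess at the library's name; check what your porg build actually installs):

# locate the preload library that porg installed
find /usr/lib /usr/local/lib -name 'libporg*' 2>/dev/null
# set the setuid bit so the loader accepts it for SUID/SGID programs
chmod a+s /usr/local/lib/libporg-log.so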

Related

What files or directories of a release are the bare minimum to run a release?

Let's say I have a completely new VPS server which I've just rolled out and haven't installed anything on yet.
And I've compiled and built a production release of a Phoenix application on my local machine, which is identical to the VPS server distribution- and version-wise.
In the directory _build/prod/rel/my_app123 there have been generated 4 subdirectories:
bin
erts-12.3
lib
releases
Will copying the content of rel/my_app123/, that is, these 4 subdirectories, over to the VPS be absolutely enough in order to run the application?
Or will I have to install something extra as well? Elixir and Erlang?
How about production dependencies from mix.exs? Or have these been included and compiled into the release?
P.S. Assume that my web application has no "js", "css" and the like files, and doesn't use a database.
When you run mix release, it bundles into the release directory all of your Elixir/Erlang dependencies for the MIX_ENV in question, the Erlang BEAM runtime/VM that you were using in your build, and any files that you specify in your mix project in mix.exs.
Because the BEAM runtime and code that bootstraps loading your code are included in the release, you won't need to install Elixir or Erlang on the target machine.
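So for the concrete question: copying _build/prod/rel/my_app123 to the VPS and starting it should be enough. A minimal sketch (host, user, and target directory are placeholders; bin/my_app123 assumes the release is named after the app):

scp -r _build/prod/rel/my_app123 user@vps:/opt/
ssh user@vps '/opt/my_app123/bin/my_app123 start'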
Things that are not included:
Any non-Elixir dependencies. For example, if you rely on openssl, you'll need to make sure you have a binary-compatible version of that installed on the machine you plan to run on (typically, the equivalent major version release).
Portable bytecode. BEAM isn't like the Java VM. The compiled BEAM code needs to run on a substantially similar architecture. Build on an Arm64 machine for deployment on an Arm64 virtual machine, or on x86 for Intel-compatible hardware, for instance. And it's probably best to use the same major OS distribution. There may be cases where "Any Linux * Same CPU architecture" is fine, but, for example, building on a Windows or macOS install of Elixir/OTP and deploying on Linux is a non-starter; you'd need to use a sufficiently similar OS.
As an example, one of my projects has its releases built on Alpine using Docker, so we only really have to worry about CPU compatibility. In our case we do need to make sure some external non-Elixir dependencies our app binds to are included on the docker image.
RUN apk add --no-cache libstdc++ openssl ncurses-libs wkhtmltopdf xvfb \
fontconfig \
freetype \
ttf-dejavu
(Ignore the fact that wkhtmltopdf is kind of deprecated; we're working on it. But for now it's a non-Elixir dependency we rely on.)
If you're building for, say, an EC2 instance and not using Docker, you'd just need to make sure your release is built on a similar OS to what you're using for production, and make sure the production AMI (image) has those non-Elixir dependencies on it, or will at the time of deployment, perhaps using apt or another package manager. For a VPS, the solution for non-Elixir dependencies will depend on whether they have the option for customizing the base machine image (maybe with Packer or Ansible).
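On a Debian/Ubuntu-based image that could be as simple as the following (package names are illustrative; match them to whatever your app actually links against):

sudo apt-get update
sudo apt-get install -y openssl libncurses6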
Since you seem to have been a bit confused about it in the comments: yes, MIX_ENV=prod mix release will build all of your production Elixir/Erlang dependencies and include them in the _build/prod folder.
I include the whole ./prod folder in our release, but it looks like the protocol consolidation binaries and the lib folder's .beam files are all in the rel folder, so that's a bit unnecessary.
If you do a default build, the target will be inside your _build directory, with sub-directories for the config environment and your application, e.g. _build/dev/rel/your_app/. That directory should contain everything you need to run your app -- the prompt after running mix release provides some clues for this when it says something like:
Release created at _build/dev/rel/your_app!
I find it more useful, however, to zip up the app into a single portable file (and yes, I agree that the details about how to do this are not necessarily the first things you see when reading about Elixir releases). The trick is to customize your mix.exs by fleshing out the releases option -- this is usually done via a dedicated private function but the organization of how you supply the options is up to you.
What I find is often useful is the generation of a single zipped .tar.gz file. This can be accomplished by specifying the include_executables_for option along with steps. It looks something like this:
# mix.exs
defmodule YourApp.MixProject do
  use Mix.Project

  def project do
    [
      # ...
      releases: releases()
      # ...
    ]
  end

  defp releases do
    [
      my_app: [
        include_executables_for: [:unix],
        steps: [:assemble, :tar]
      ]
    ]
  end
end
When you configure your application this way, running mix release will generate a nice portable file containing your app with everything it needs. Unzipping this file is educational for understanding everything your app needs. By default this file will be created at a location like _build/dev/yourapp-1.0.0.tar.gz. You can configure the build path by specifying a path for your app. See Mix.Release for more options.
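Deployment of that tarball then boils down to something like this (host and paths are placeholders, and I'm assuming a release named my_app built with MIX_ENV=prod):

scp _build/prod/my_app-1.0.0.tar.gz user@vps:/opt/my_app/
ssh user@vps 'cd /opt/my_app && tar -xzf my_app-1.0.0.tar.gz && bin/my_app start'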

How to force Nix to "install packages" by building them locally instead of downloading a pre-built binary?

By "install packages" I mean to evaluate Nix build expressions (using nix-env, nix-shell -p, etc.) to build from source instead of using a substitute.
Also cross-posted to Unix & Linux because, as Charles Duffy pointed out, it is more on topic there if it is about command-line tools or configuration. Still leaving this here because I assume forcing a package to always compile from source is possible by only using the Nix language, I just don't yet know how. (Or if it is in fact not possible, someone will point it out, and then this question does belong here.)
Either set the substitute option to false in nix.conf (the default is true) or use --option substitute false when invoking a Nix command.
nix-env --option substitute false -i hello
nix-shell --option substitute false -p hello
Might not be the droids you are looking for
As Robert Hensing (comment, chat), Henri Menke (comment), and Vladimír Čunát (comment) pointed out, this may not be the thing that you are really after.
To elaborate: I have been using the most basic Nix features confidently, but got to a point where I need to maintain and deploy a custom fork of a large application written in C, which is quite intimidating at the outset.
I tried to attack the problem the simplest way, by just fetching my fork and re-building it with the new source, so I boiled it down to this question. Although I suspect that the right direction for me is something along the lines of Nixpkgs/Create and debug packages in the NixOS Wiki.
Only re-build the package itself
Vladimír Čunát commented that "disabling substitutes makes you rebuild everything that's missing locally, even though I suspect that people asking such a question often only want to rebuild the specified package itself."
(This is probably achieved with nix-build or "just" overriding the original package, but I could be wrong. The latter is mentioned (maybe even demonstrated?) in the NixOS wiki article Development environment with nix-shell, but I haven't been able to read it thoroughly yet.)
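For instance, a minimal sketch of the override route (hello and the fork path are placeholders for the actual package and source):

nix-build -E 'with import <nixpkgs> {};
  hello.overrideAttrs (old: { src = /path/to/my/fork; })'

This rebuilds only the overridden package; its dependencies are still fetched as substitutes.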
Test for reproducibility
One might arrive at this same question when wanting to make sure that subsequent builds are deterministic. As Henri Menke comments, one should use nix-build --check for that.
The --check option is easy to miss; it's not documented in man nix-build or at nix-build in the Nix manual, but at nix-store --realise, because (as man nix-build explains it):
nix-build is essentially a wrapper around nix-instantiate (to
translate a high-level Nix expression to a low-level store derivation)
and nix-store --realise (to build the store derivation) [and so] all
options not listed here are passed to nix-store --realise, except
for --arg and --attr / -A which are passed to nix-instantiate.
See detailed examples in the Nix manual at 18.1. Spot-Checking Build Determinism and the next section right after it.
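For example (using the hello attribute as a stand-in for any package):

nix-build '<nixpkgs>' -A hello           # a regular build, first
nix-build '<nixpkgs>' -A hello --check   # rebuild and compare against the existing output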
The relevant parts for the substitute configuration option under the nix.conf section from the Nix manual:
Name
nix.conf — Nix configuration file
Description
Nix reads settings from two configuration files:
The system-wide configuration file sysconfdir/nix/nix.conf (i.e. /etc/nix/nix.conf on most systems), or $NIX_CONF_DIR/nix.conf if NIX_CONF_DIR is set.
The user configuration file $XDG_CONFIG_HOME/nix/nix.conf, or ~/.config/nix/nix.conf if XDG_CONFIG_HOME is not set.
You can override settings on the command line using the --option flag,
e.g. --option keep-outputs false.
The following settings are currently available:
[..]
substitute
If set to true (default), Nix will use binary substitutes if available. This option can be disabled to force building from source.
(Formerly known as use-binary-caches.)
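In other words, the permanent form of the setting is a single line in either of the configuration files listed above:

substitute = false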
Notes
Setting substitute to false (either with --option or in nix.conf) won't recompile the package if the command is issued multiple times. That is, hello above would be compiled from source the first time, and subsequent invocations would simply access the already present store path.
This is where it gets fuzzy: it is clear that no recompilation takes place because, as long as the package's Nix build expression doesn't change, the store output hash won't change either, making the next compilation output equivalent to the previous one, hence the action would be superfluous.
So if one does some light hacking on a package and just wants to try it out locally (e.g., with nix-shell), does one have to use -I nixpkgs=a/local/nixpkgs/dir to pick up those changes - and eventually do a recompilation? Or should one use nix-build?
See also question How to nix-build again a built store path?

Is there a tool for generating crosstool files for installed compilers?

Bazel uses CROSSTOOL files to figure out how to build things. This can be used to (for example) switch between GCC and Clang by setting --crosstool_top. The problem is that it's far from trivial to construct those files.
Does anyone know of any tools that can inspect a Linux installation and generate the needed crosstool files for any "common" compiler(s) that happen to be installed? Something that would be able to find and support any installed versions of Clang and GCC would be enough; any other compilers (icc, etc.) would be fantastic.
(Alternatively: are there any repo's with pre-constructed crosstool files for default installations of all the common compilers?)
Note
I've already found @bazel_tools//tools/cpp:cc_configure.bzl et al., but those seem to only generate configs for the default system compiler, and I'm specifically looking for support for the non-default compiler(s).
It's only a variation on cc_configure, but you can use environment variables to tweak the generation. Maybe using CC will be enough? If not, what else would you need (pull requests welcomed)?
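In other words, something along these lines may already do what you want (the target label is a placeholder, and I'm assuming cc_configure picks CC up from the environment as described):

CC=$(which clang) bazel build //your:target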
There is no repo with premade crosstools yet; there will eventually be (maybe in the form of Docker containers, we'll see), but currently there's not.

How do I install a project built with bazel?

I am working on a project that is transitioning from CMake to Bazel. One critical feature that we are apparently losing in the bargain is the ability to install the project, so that it can be used by other (not necessarily Bazel) projects.
AFAICT, there is currently no built-in support for installing a project?!
I need to create a portable (must work on at least Linux and MacOS) way to install the project. Specifically:
I need to be able to specify libraries, headers, executables, and other files (e.g. LICENSE) that need to be installed.
The user needs to be able to specify an absolute prefix where things should be installed.
I really, really should be able to execute the "install" step more than once, giving different prefixes each time, without Bazel getting confused (i.e. it must not try to "remember" what files it already installed, or if it does, must understand when the prefix is different from last time).
Libraries should be installed to the right place (e.g. lib64), or at least it should be possible for the user to specify the correct libdir.
The install step MUST NOT touch the time stamp on any file from a previous install that has not changed. (Ideally, Bazel itself would handle this; using the install command is tricky and has potential portability issues. Note platform requirements, above.)
What is the best way to go about doing this?
Unless you want to build a specific package format (e.g. deb or rpm), you probably want to create an executable rule that does the install for you.
You can create a rule that would create an executable (e.g. a shell script) that does the install for you (e.g. does checksums to check if there are changes to the installed files and does the actual copy only of the files that have changed); see the sketch below. You would have to use the extension language to do that; it would look similar to what the docker rules do to load an image with the incremental loader.
Addition: I forgot to say that the install itself would be run by using the run command: bazel run install if the rule is named install in the top-level BUILD file.
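A minimal sketch of what such a generated installer script could look like (the file names, and comparing with cmp instead of checksums, are illustrative choices; this is not what the docker rules actually emit):

#!/bin/sh
# Install built artifacts into a prefix, copying a file only when its
# content changed, so timestamps of unchanged files are left alone.
PREFIX="${1:-/usr/local}"
for f in lib/libfoo.so include/foo.h; do
  dest="$PREFIX/$f"
  if [ ! -e "$dest" ] || ! cmp -s "$f" "$dest"; then
    mkdir -p "$(dirname "$dest")"
    cp "$f" "$dest"
  fi
done

Run as bazel run //:install -- /opt/myproject; since the script keeps no state of its own, invoking it repeatedly with different prefixes satisfies the requirements above.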

Where does the Linux kernel look for executables

First some background. I recently installed some software (TeX Live actually), and the binaries were placed by the installer in a non-standard location (/usr/local/texlive/2011/bin/x86_64-linux). No problem, because I can change the $PATH to include this directory. However, I use latex inside Makefiles, and Make said it could not find 'latex'. I eventually realized that Make asks the kernel to run latex in a shell-independent way. Thus I moved all my latex executables to /usr/local/bin and everything works, presumably because the kernel can now see the executables.
If this is correct, my question is: where does the kernel look for executables when asked to run a program independently of a shell?
The kernel doesn't look for executables - it is always told the (absolute or relative) path. All program executions (I believe) basically come down to calling the execve() function, which needs to be told the path of the executable.
When you call programs using just their names, it's up to whatever is interpreting your commands (shell, make) to locate the program. Alternatively, library functions such as execlp() can be used, which do the path resolution themselves (see "Special semantics").
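A quick way to observe this from a shell (any command works in place of ls):

PATH=/nonexistent sh -c 'ls /'        # fails: the shell cannot resolve "ls" without a usable $PATH
PATH=/nonexistent sh -c '/bin/ls /'   # works: execve() receives a full path and never consults $PATH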
