Why doesn't -L automatically include -rpath when shared library is used? - environment-variables

I don't get why it is necessary to provide either -rpath or to set the LD_LIBRARY_PATH environment variable when -L already tells where the shared library is.
This answer says: -L tells ld where to look for libraries to link against when linking.
But why isn't the corresponding -rpath set automatically at the same time? Why do we need to do it manually again?
P.S.: I guess if you had that feature, the executable would be useless in some other environment. But if that is so, what does ld actually do during linking, and why is it necessary to give the -L path at all if the -rpath is different in some other environment?

It's definitely not a "misfeature of the linker".
-L (a much older flag) simply tells the linker to look in that directory for any library that matches any subsequent -l flag, including static libraries/archives (remember, in times of yore, this was the only kind of library). It intentionally remains a link-time only flag; it has no effect after the linker's business is complete.
-rpath is an entirely different beast. Its purpose is to embed in an executable or shared/dynamic library a list of one or more paths to search for dynamic libraries, when the necessary symbols are looked up in shared libraries at runtime (the r in -rpath).
There are many instances where one would want -L but not -rpath; particularly when one specifically wants to link a static library rather than defer to the dynamic linker. It is also relatively trivial on most platforms to override a built-in rpath, e.g., with the LD_PRELOAD environment variable, and some actually consider this a security issue.
Perhaps take a look at http://sta.li/faq for a compelling case against dynamic linkage.
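
To make the distinction concrete, here is a minimal sketch; the library name and install path are made up for illustration:

gcc main.c -L/opt/foo/lib -lfoo -o app         # -L is consulted only at link time
./app                                          # may fail: the dynamic linker knows nothing about -L
LD_LIBRARY_PATH=/opt/foo/lib ./app             # option 1: point the dynamic linker at the directory
gcc main.c -L/opt/foo/lib -lfoo \
    -Wl,-rpath,/opt/foo/lib -o app             # option 2: embed the search path in the binary
./app                                          # now the runtime lookup succeeds without any env var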

Related

iOS App has Application binary rpath set, and it's considered a vulnerability

I got the following ticket from my company's security testers:
Description: The binary has Runpath Search Path (@rpath) set. In certain cases an attacker can abuse this feature to run an arbitrary executable for code execution and privilege escalation. Remove the compiler option -rpath to remove @rpath. Section "macho"
But this is an issue I never heard of. We are using Carthage for part of our dependencies and others are bundled with the app and used directly.
Current rpath values are set as follows:

warnings when trying to statically link cross compiled Fortran 90 code to run on Aarch64-linux - one being "relocation truncated to fit"

I am able to cross compile some Fortran 90 code (large block written by someone else so do not want to convert it) using x86_64 GNU/Linux as the build system and aarch64-linux as the host system and using dynamic linking. However, I want to generate a statically linked binary so added -static to the mpif90 call. When I do this, I get this warning:
/home/me/CROSS-REPOS/glibc-2.35/math/../sysdeps/ieee754/dbl-64/e_log.c:106: warning: too many GOT entries for -fpic, please recompile with -fPIC
When I add this flag as in "mpif90 -static -fPIC", the same warning appears. I also tried the -mcmodel=large option, as in "mpif90 -static -mcmodel=large", to no avail.
Then I checked the options for "/home/me/CROSS-JUL2022/lib/gcc/aarch64-linux/12.1.0/../../../../aarch64-linux/bin/ld" and I see this one: --long-plt (to generate long PLT entries and to handle large .plt/.got displacements). But trying "mpif90 -static -Wl,--long-plt" says --long-plt is not an option. How do I invoke this --long-plt option then?
One other thing: I know static linking will make the binaries a fair amount bigger, but I do not want to carry libs over to the Android device. Furthermore, some reading indicates that dynamic linking on the Android device could lead to some security issues. Thanks for any suggestions.

How to link wasi-libc with shared memory flag?

I want to import shared memory in my WASM module and am trying to link my object files, all compiled with -matomics and -mbulk-memory, against the wasi-libc -lc, -lc++ and -lc++abi libraries. But I am getting an error:
wasm-ld: error: --shared-memory is disallowed by errno.o because it was not compiled with 'atomics' or 'bulk-memory' features.
As I understand it, wasm-ld links some libc object files that were compiled without the flags above, so they can't be linked. How can I provide these flags to the linker? Or do I need to build wasi-libc from source with these flags?
The problem has been solved by specifying the --no-check-features flag at link time.
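For reference, a sketch of how that flag might be passed through the clang driver to wasm-ld (the object file names and memory limit below are illustrative, not taken from the question):

clang --target=wasm32-wasi -matomics -mbulk-memory \
      main.o other.o \
      -Wl,--shared-memory,--max-memory=67108864 \
      -Wl,--no-check-features \
      -o module.wasm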
wasi-libc does not support shared memory or multi-threading today. If you want to try to make something work you would, at minimum, need to recompile the core libraries (libc, compiler-rt, libcxx, libcxxabi) with the -pthread compiler flag.
If you want to use multi-threading with WebAssembly as of today (2021), the only reasonable option is to use emscripten and its Web Worker-based multi-threading approach.
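A minimal emscripten sketch of that approach (the source file name and pool size are arbitrary):

emcc -pthread -s PTHREAD_POOL_SIZE=4 main.c -o main.html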

How can I find the flag dependency or conflict in LLVM?

As far as I know, GCC has a website for figuring out the relationships between the different flags used during optimization: GCC example website. For example, -fpartial-inlining can only be useful when -findirect-inlining is turned on.
I think the same thing would happen in Clang; in other words, I think the different passes may have this kind of dependency/conflict relationship in LLVM (Clang).
But after checking all the documentation provided by the developers, I find it only says something about the functionality of these passes. LLVM PASS DOC
So my question can be divided into 2 parts, I think:
Does this kind of dependency exist between LLVM passes, or is there no such dependency/conflict?
If there is, how can I find it?
You can find which passes clang uses at which optimization level by compiling any C or C++ code with clang, and then try to figure out the dependencies from that. For example:
clang -O2 --target=riscv32 -mllvm -debug-pass=Structure example.c
(You can also use -debug-pass=Arguments instead of -debug-pass=Structure; it depends on which you find more readable.)
This will show which passes clang uses at optimization level 2 for the riscv32 target. If you don't give a target, it defaults to your host machine's target, and keep in mind that some of the passes used change between targets even at the same optimization level.
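For instance, the Arguments variant (same made-up file name and target as above) prints the passes as flat argument lists instead of an indented structure:

clang -O2 --target=riscv32 -mllvm -debug-pass=Arguments example.c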

How to get bitcode llvm after linking?

I am trying to get LLVM IR for a file which is linked with some static libraries.
I tried to link using llvm-link. It just copies the .bc files into one file (not like native linking).
clang -L$(T_LIB_PATH) -lpthread -emit-llvm gives an error: -emit-llvm cannot be used with linking. When passing the -c option, it gives a warning that the linking options were not used.
My main goal is to get a .bc file with all resolved symbols and references. How can I achieve that with clang version 3.4?
You may have a look at wllvm. It is a wrapper around the compiler which enables you to build a project and extract the LLVM bitcode of the whole program.
You need to use wllvm and wllvm++ for C and C++, respectively (after setting some environment variables).
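A rough sketch of that workflow (the build command and binary name are placeholders):

export LLVM_COMPILER=clang      # tell wllvm which underlying compiler to wrap
CC=wllvm CXX=wllvm++ make       # build the project as usual
extract-bc ./myprogram          # writes ./myprogram.bc containing the whole-program bitcode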
Some symbols come from source code via LLVM IR. IR is short for intermediate representation. Those symbols are easy to handle, just stop in the middle of the build process.
Some others come from a library and probably were generated by some other compiler, one that never makes any IR, and in any case the compiler was run by some other people at some other location. You can't go back in time and make those people build IR for you, even if their compiler has the right options. All you can do is obtain the source code for the libraries and build your entire application from source.
