How to get the path where the library is installed

I am working on Linux and I have a third-party library written in Fortran 90 that reads from a file in the current working directory. I would like to be able to call the resulting library from other folders, so I need to know the path where the library is installed. How can I obtain the path to the compiled library from within the Fortran code?
I need to store the path in a variable within the code.
For those who know Python, I want to achieve the same as
import os
os.path.dirname(os.path.abspath(__file__))
but in f90 (see Get location of the .py source file)
Using the suggestions in the comments I have done the following:
export DATAPATH=`pwd`
make
in the Makefile
ifort -O3 -fpic -fpp -DDATAPATH -c mysource.f90
in mysource.f90
subroutine mysub
character(len=100)::dpath
#ifdef DATAPATH
dpath=DATAPATH
#endif
open(10,file=trim(dpath)//'initialise.dat')
....
....
the problem is that at compile time I get
mysource.f90(42): error #6054: A CHARACTER data type is required in this context. [1]
dpath=1
----------^
compilation aborted for mysource.f90 (code 1)

If you wish, you can fix the path at compile time. Something like:
gfortran -cpp mylib.f90 -DPREFIX=\"/usr/local/\"
open(newunit=u,file=PREFIX//'mylib/initialise.dat')
You must then make sure the library is indeed installed in that place, PREFIX/mylib/.
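A minimal sketch of how such a macro might be used on the Fortran side (the subroutine and file names here are only illustrative, assuming the file is compiled with the -cpp/-fpp and -DPREFIX options shown above):
subroutine read_init()
  implicit none
  integer :: u
  ! PREFIX expands to the quoted string given on the command line,
  ! e.g. "/usr/local/", so plain concatenation yields the full path.
  open(newunit=u, file=PREFIX//'mylib/initialise.dat', status='old', action='read')
  ! ... read the data here ...
  close(u)
end subroutine read_init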
You can create an environment variable containing the path to your data. This variable can be set by hand, in your .bashrc or .bash_profile, or in the system-wide /etc/profile.d/ or /etc/bash.bashrc; there are many ways, and the right one depends on whether the library is just for one user or for all users of some large computer.
For example
export MYLIB_PATH='/usr/local/mylib'
Then you can read the variable in Fortran as
CALL get_environment_variable("MYLIB_PATH", mylib_path, status=stat)
and the path is now in the variable mylib_path. You can check for success by testing stat==0.
This is not the only possible approach. You can also have a configuration file for your library in your home directory:
mkdir $HOME/.config/mylib/
echo "/usr/local/mylib" > $HOME/.config/mylib/path
and then you can try to read the path from this file if the environment variable was not set
if (stat /= 0) then
  CALL get_environment_variable("HOME", home_dir)
  open(newunit=path_unit, file=trim(home_dir)//'/.config/mylib/path', &
       status='old', action='read', iostat=stat)
  if (stat /= 0) complain
  read(path_unit, '(a)', iostat=stat) mylib_path
  if (stat /= 0) complain
end if
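Putting the two approaches together, here is a minimal sketch (the subroutine name and buffer sizes are made up for illustration):
subroutine get_mylib_path(mylib_path, stat)
  implicit none
  character(len=*), intent(out) :: mylib_path
  integer, intent(out) :: stat
  character(len=256) :: home_dir
  integer :: path_unit

  ! First try the environment variable.
  call get_environment_variable("MYLIB_PATH", mylib_path, status=stat)
  if (stat == 0) return

  ! Fall back to the per-user configuration file.
  call get_environment_variable("HOME", home_dir, status=stat)
  if (stat /= 0) return
  open(newunit=path_unit, file=trim(home_dir)//'/.config/mylib/path', &
       status='old', action='read', iostat=stat)
  if (stat /= 0) return
  read(path_unit, '(a)', iostat=stat) mylib_path
  close(path_unit)
end subroutine get_mylib_path
If stat is zero on return, the caller can then open trim(mylib_path)//'/initialise.dat' as usual.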

When you compiled with -DDATAPATH you did not pass the value of DATAPATH into your code; you only defined a preprocessor symbol called DATAPATH, so ifort substitutes it with 1. What you need to do is pass it as a value:
-DDATAPATH="$(DATAPATH)"
for the compilation to work.
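For the value to arrive as a Fortran character constant, it also has to be quoted for the preprocessor. A sketch of what the Makefile rule might look like, following the same escaped-quote pattern as the gfortran example above (the exact quoting may need adjusting for your shell and make):
DATAPATH ?= $(shell pwd)

mysource.o: mysource.f90
	ifort -O3 -fpic -fpp -DDATAPATH=\"$(DATAPATH)\" -c mysource.f90
With this, dpath=DATAPATH in mysource.f90 expands to a proper character literal such as dpath="/home/user/mylib".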

Related

Lua require does not work but file is in the trace

I'm trying to require files in Lua. In one case it works, but when I try to simplify the requires by updating the LUA_PATH, the file is not found, even though it appears in the trace!
To reproduce my require problem I did a test with the package.searchpath function, which takes the required key and the Lua path as arguments.
So the code:
print('MY LUA PATH')
local myLuaPath = "?;?.lua;D:\\Projets\\wow-addon\\HeyThere\\?;D:\\Projets\\wow-addon\\HeyThere\\src\\HeyThere\\?;D:\\Projets\\wow-addon\\HeyThere\\test\\HeyThere\\?"
print(myLuaPath)
print('package search core.add-test')
print(package.searchpath('core.add-test', myLuaPath))
print('package search test.HeyThere.core.add-test')
print(package.searchpath('test.HeyThere.core.add-test', myLuaPath))
The result:
MY LUA PATH
?;?.lua;D:\Projets\wow-addon\HeyThere\?;D:\Projets\wow-addon\HeyThere\src\HeyThere\?;D:\Projets\wow-addon\HeyThere\test\HeyThere\?
package search core.add-test
nil no file 'core\add-test'
no file 'core\add-test.lua'
no file 'D:\Projets\wow-addon\HeyThere\core\add-test'
no file 'D:\Projets\wow-addon\HeyThere\src\HeyThere\core\add-test'
no file 'D:\Projets\wow-addon\HeyThere\test\HeyThere\core\add-test'
package search test.HeyThere.core.add-test
test\HeyThere\core\add-test.lua
So the first try with 'core.add-test' should work with the 'D:\Projets\wow-addon\HeyThere\test\HeyThere\?' value in the path but fails...
In the trace, there is the file I want!
no file 'D:\Projets\wow-addon\HeyThere\test\HeyThere\core\add-test'
But with the same LUA_PATH, but starting from a parent folder, the file is found... The second test with 'test.HeyThere.core.add-test' is found from the 'D:\Projets\wow-addon\HeyThere\?' entry
-> test\HeyThere\core\add-test.lua
Can someone explain to me why it doesn't work the first time?
EDIT:
My current directory is D:\Projets\wow-addon\HeyThere
My lua.exe is in D:\Projets\wow-addon\HeyThere\bin\lua and it is added to my PATH variable (I'm on Windows)
I set the LUA_PATH environment variable and execute
lua "test\test-suite.lua" -v
The code inside test-suite.lua is the test code described above
As @EgorSkriptunoff suggested, adding the file extension in the path resolves the problem...
Ex:
Wrong path: D:\Projets\wow-addon\HeyThere\?
Good path: D:\Projets\wow-addon\HeyThere\?.lua
The extension has to be in the path variable because, inside require, the dot in the module name is replaced and used as a folder separator.
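For example, with the extension added to each template, the first lookup also resolves (paths taken from the question, run from D:\Projets\wow-addon\HeyThere):
local myLuaPath = "?.lua"
  .. ";D:\\Projets\\wow-addon\\HeyThere\\?.lua"
  .. ";D:\\Projets\\wow-addon\\HeyThere\\src\\HeyThere\\?.lua"
  .. ";D:\\Projets\\wow-addon\\HeyThere\\test\\HeyThere\\?.lua"
print(package.searchpath('core.add-test', myLuaPath))
-- now finds D:\Projets\wow-addon\HeyThere\test\HeyThere\core\add-test.lua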

If I provide CLANG_DATABASE_PATH in Doxyfile do I still have to provide INPUT and INCLUDE_PATH?

I am using Doxygen for documentation of my project. I have a compile_commands.json file describing the source code of my project inside the directory C:\dev\project_dir. I set the following variables:
CLANG_ASSISTED_PARSING = YES
CLANG_DATABASE_PATH = C:\dev\project_dir
So how does this work? Do I also have to set the variables INPUT and INCLUDE_PATH? It seems that all the files and the instructions to compile them, including where to get header files from, are written in the compilation database.
And if I do have to set the variables INPUT and INCLUDE_PATH as well, what should I set them to? The compilation database lists the source and header files of the project, which are scattered among multiple different directories. How should I proceed in this situation?
I found the answer.
So I set the following variables in the Doxyfile.
CLANG_ASSISTED_PARSING = YES
CLANG_DATABASE_PATH = C:\dev\project_dir
And I left the following variables blank:
INPUT =
INCLUDE_PATH =
INPUT specifies the paths to the source files (*.c, *.cpp) and/or directories to be processed. INCLUDE_PATH specifies the paths to the header files (*.h, *.hpp) and/or directories to be processed.
Setting CLANG_ASSISTED_PARSING = YES enables the clang compiler as the parser instead of the default doxygen parser. If INPUT and INCLUDE_PATH are not set, doxygen gets the source and header files from the compilation database itself. CLANG_DATABASE_PATH specifies the directory in which the compilation database is stored; doxygen looks for a file named compile_commands.json in that directory, so the name of the compilation database is fixed. If you name your compilation database anything other than compile_commands.json, doxygen won't be able to find it.
So when a clang compilation database is used, all the *.c and *.cpp files that are being compiled are effectively placed in INPUT, and all the header paths are placed in INCLUDE_PATH: the clang used by doxygen parses the JSON and, every time it encounters a -I compiler flag, adds that include path to INCLUDE_PATH. This means that setting INPUT and INCLUDE_PATH is not mandatory. So if the compilation database is properly formatted, and all the header paths are explicitly given with -I, setting only CLANG_DATABASE_PATH is sufficient.
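For reference, a compilation database is just a JSON array of entries like the following (the paths here are made up for illustration):
[
  {
    "directory": "C:/dev/project_dir",
    "command": "clang -I C:/dev/project_dir/include -c src/code.cpp",
    "file": "src/code.cpp"
  }
]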
But there is a situation in which INCLUDE_PATH also needs to be set explicitly: for example, when you have a source file that includes a header file, which in turn includes another header file.
first.h
int one(int);
second.h
#include "first.h"
int two(int);
code.cpp
#include "second.h"
int main(void) {}
And the command in the compilation database is such:
clang -I path/to/second.h -c code.cpp
So in this case doxygen would read that file, and it would internally set the following variables as such:
INPUT = code.cpp
INCLUDE_PATH = path/to/second.h
This means that although doxygen will index second.h, it will miss first.h, since that header's path isn't explicitly provided with a -I flag in the compilation database. That would be an error, so we need to list it explicitly in the Doxyfile as an additional include path.
INPUT =
INCLUDE_PATH = path\to\first.h
CLANG_ASSISTED_PARSING = YES
CLANG_DATABASE_PATH = C:\dev\project_dir

How to write Bazel rules that work with external repositories?

The Bazel Starlark API does strange things with files in external repositories. I have the following Starlark snippet:
print(ctx.genfiles_dir)
print(ctx.genfiles_dir.path)
print(output_filename)
ret = ctx.new_file(ctx.genfiles_dir, output_filename)
print(ret.path)
It is creating the following output:
DEBUG: build_defs.bzl:292:5: <derived root>
DEBUG: build_defs.bzl:293:5: bazel-out/k8-fastbuild/genfiles
DEBUG: build_defs.bzl:294:5: google/protobuf/descriptor.upb.c
DEBUG: build_defs.bzl:296:5: bazel-out/k8-fastbuild/genfiles/external/com_google_protobuf/google/protobuf/descriptor.upb.c
That extra external/com_google_protobuf comes seemingly out of nowhere, and it makes my rule fail:
I tell protoc to generate into ctx.genfiles_dir.path (which is bazel-out/k8-fastbuild/genfiles).
So protoc generates bazel-out/k8-fastbuild/genfiles/google/protobuf/descriptor.upb.c
Bazel fails because I didn't generate bazel-out/k8-fastbuild/genfiles/external/com_google_protobuf/google/protobuf/descriptor.upb.c
Likewise, when I try to call file.short_path on a source file from an external repository, I get a result like ../com_google_protobuf/google/protobuf/descriptor.proto. This seems quite unhelpful, so I just wrote some manual code to strip off the leading ../com_google_protobuf/.
Am I missing something? How can I write this rule in a way that doesn't feel like I'm fighting Bazel the whole time?
Am I missing something?
The basic problem, as you already realized, is that you have two path "namespaces": the one that protoc sees (i.e. import paths) and the one that Bazel sees (i.e. the path you pass to declare_file()).
Two things to note:
1) All paths declared with declare_file() get the path <bin dir>/<package path incl. workspace>/<path you passed to declare_file()>.
2) All actions are executed from <bin dir> (unless output_to_genfiles=True, in which case this switches to <gen dir>, as in your example).
Trying to solve the exact same problem you encountered, I resorted to stripping the known path from the output_file's path to determine which directory to pass to protoc:
# This code is run from the context of the external protobuf dependency
proto_path = "google/a/b.proto"
output_file = ctx.actions.declare_file(proto_path)
# output_file.path would be `<gen_dir>/external/protobuf/google/a/b.proto`
# Strip the known proto_path from output_file.path
protoc_prefix = output_file.path[:-len(proto_path)]
print(protoc_prefix) # Prints: <gen_dir>/external/protobuf
command = "{protoc} {proto_paths} {cpp_out} {plugin} {plugin_options} {proto_file}".format(
...
cpp_out = "--cpp_out=" + protoc_prefix,
...
)
Alternatives
You may also be able to construct the same path with ctx.bin_dir, ctx.label.workspace_name, ctx.label.package, and ctx.label.name.
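A sketch of that alternative, assuming it runs inside a rule implementation with access to ctx (the helper name is made up):
def _output_root(ctx):
    # Reconstruct "<gen/bin dir>/external/<repo>/<package>" without string-stripping.
    parts = [ctx.genfiles_dir.path]  # or ctx.bin_dir.path, depending on where outputs go
    if ctx.label.workspace_name:
        # Outputs of external repositories live under external/<repo name>.
        parts.append("external/" + ctx.label.workspace_name)
    if ctx.label.package:
        parts.append(ctx.label.package)
    return "/".join(parts)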
Misc.
proto_library recently gained an attribute strip_import_prefix. When used, the above is not correct, as all dependent files are symlinked into a new directory from which they have the relative paths declared with strip_import_prefix.
The path format is:
<bin dir>/<repo>/<package>/_virtual_imports/<label name>/<path `import`ed in .proto files>
i.e.
<bin dir>/external/protobuf/_virtual_imports/b_proto/google/a/b.proto
Assuming you are building an external repo called protobuf, which contains a BUILD file at its root with a target named b_proto, which in turn, relies on a proto_library wrapping google/a/b.proto AND uses the strip_import_prefix attribute.

Remove the path in objcopy symbol names

I need to include a binary program in my project. I use objcopy to create an object file from a binary file, which can then be linked into my program. objcopy creates appropriate symbols to access the binary data.
Example
objcopy -I binary -O elf32-littlearm --binary-architecture arm D:\Src\data.jpg data.o
The generated symbols are:
_binary_D__Src_data_jpg_end
_binary_D__Src_data_jpg_size
_binary_D__Src_data_jpg_start
The problem is that the symbols include the path to the binary file (D__Src_). This may help when binary files are included from different locations, but it bothers me that the symbols change when I take the file from a different location. Since this has to run on several build stations, the path can't simply be stripped with the --redefine-sym option.
How do I get rid of the path in the symbol name?
I solved this problem by using this switch in objcopy:
--prefix-sections=abc
This gives a way to uniquely identify the data in your binary object file (ex. binary.o)
In your linker script you can then define your own labels around where you include binary.o. Since you are no longer referencing anything in binary.o, the binary will be thrown out by the linker if you use the -gc-sections switch. The section in binary.o will now be abc.data; use KEEP in your linker script to tell the linker not to throw it out. Your linker script will contain the following:
__binary_start__ = .;
KEEP(*(abc.data))   /* or, to match only binary.o: KEEP(binary.o(abc.data)) */
. = ALIGN(4);
__binary_end__ = .;
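The labels defined in the linker script can then be referenced from C, for example like this (a sketch; the label names are the ones chosen in the script above):
#include <stddef.h>

/* Provided by the linker script. */
extern const unsigned char __binary_start__[];
extern const unsigned char __binary_end__[];

size_t binary_size(void)
{
    return (size_t)(__binary_end__ - __binary_start__);
}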
The switch --localize-symbols works for me.

Shared libraries with a path on names

I'm making a project which uses lots of its own shared libraries; my intention is to put the library names inside a directory, so, e.g., instead of -lfoo (to find /usr/lib/libfoo.so or /opt/lib/libfoo.so and so on), I would use -lfoo/bar (to find /usr/lib/libfoo/bar.so or /opt/lib/libfoo/bar.so and so on).
I made a real small code to test:
const char *mylib(void) {
return "it woooorks! =D";
};
And compiled it with: gcc -fPIC -shared -Wl,-rpath,libfoo/ lib.c -o /usr/lib/libfoo/bar.so.
Then, in the test program, I use gcc -lfoo/bar test.c, and it compiles (it finds the mylib() symbol from my library), but when I try to run the program (./a.out), the dynamic linker complains that it can't find the library. In my case, using Mac OS X Lion:
dyld: Library not loaded: bar.so
Referenced from: /Users/takanuva/tmp/lib/./a.out
Reason: image not found
Trace/BPT trap: 5
What am I doing wrong? Maybe the answer is "everything", so... how should I accomplish the desired effect, to look for libfoo/bar.so instead of libfoo.so on the library paths?
Thanks in advance. :)
If the library is not located in a standard path (and /opt is a path that is normally not searched), you must specify the paths with -L (link time) and -rpath (run time), as in gcc blah.c -L/opt/whatever -Wl,-rpath,/opt/whatever -lbar. Do NOT attempt to use path separators in the -l argument.
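A minimal sketch of that suggestion (directory names are made up; note that the file itself still has to follow the lib<name>.so convention for -lbar to find it):
# build the shared library into its own directory
gcc -fPIC -shared lib.c -o /opt/mylibs/foo/libbar.so

# link against it: -L for link time, -rpath for run time
gcc test.c -L/opt/mylibs/foo -Wl,-rpath,/opt/mylibs/foo -lbar
./a.out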
