Vala loading headers locally

My problem is that the generated .c file includes my headers with <> instead of "": it has <my_header.h> where it should have "my_header.h".
my_header.h is in the same directory as the Vala files.
I tried using --includedir=. but that did not help.
This happens only with valac 0.16.0; valac 0.16.1 does not have this bug. I have to use valac 0.16.0, so switching the compiler version is not an option.
I fixed this using this script:
#!/usr/bin/ruby
# Rewrite <my_header.h> to "my_header.h" in every generated .c file.
Dir.glob("*.c").each do |f|
  data = File.read(f)
  File.write(f, data.gsub("<my_header.h>", '"my_header.h"'))
end
But this might fail when packaging it into a .deb file, so my question still stands.

You can pass -X -I. to the Vala compiler, which will pass -I. directly to the C compiler.
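For example, assuming your headers sit next to the .vala sources (my_program.vala is a placeholder name):
valac -X -I. my_program.vala
Everything given via -X is forwarded verbatim to the C compiler, so -I. adds the current directory to its include search path and <my_header.h> resolves even in angle-bracket form.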

Related

If I provide CLANG_DATABASE_PATH in Doxyfile do I still have to provide INPUT and INCLUDE_PATH?

I am using Doxygen for documentation of my project. I have a compile_commands.json file describing the source code of my project inside the directory C:\dev\project_dir. I set the following variables:
CLANG_ASSISTED_PARSING = YES
CLANG_DATABASE_PATH = C:\dev\project_dir
So how does this work? Do I also have to set the variables INPUT and INCLUDE_PATH? It seems that all the files and the instructions to compile them, including where to get header files from, are written in the compilation database.
And if I do have to set the variables INPUT and INCLUDE_PATH as well, what should I set them to? The compilation database lists the source and header files of the project, which are scattered across multiple directories. How should I proceed in this situation?
I found the answer.
So I set the following variables in the Doxyfile.
CLANG_ASSISTED_PARSING = YES
CLANG_DATABASE_PATH = C:\dev\project_dir
And I set the following variables as blank:
INPUT =
INCLUDE_PATH =
INPUT specifies the paths to source files (*.c, *.cpp) and/or directories to be processed. INCLUDE_PATH specifies the paths to header files (*.h, *.hpp) and/or directories to be processed.
CLANG_ASSISTED_PARSING = YES enables the clang parser instead of the default doxygen parser. If INPUT and INCLUDE_PATH are not set, doxygen takes the source and header files from the compilation database itself. CLANG_DATABASE_PATH specifies the directory in which the compilation database is stored. Doxygen looks for a file named compile_commands.json in that directory; the name is fixed, so if you name your compilation database anything other than compile_commands.json doxygen won't be able to find it.
So when a clang compilation database is used, all the *.c and *.cpp files being compiled are effectively placed in INPUT, and the header search paths are placed in INCLUDE_PATH: whenever the clang used by doxygen encounters a -I flag in a compile command, it adds that directory to INCLUDE_PATH. This means setting INPUT and INCLUDE_PATH is not mandatory. If the compilation database is properly formed and every header directory is explicitly listed with -I, setting CLANG_DATABASE_PATH alone is sufficient.
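For reference, a compilation database is a JSON array of entries like the following (the paths here are purely illustrative):
[
  {
    "directory": "C:/dev/project_dir",
    "command": "clang -I C:/dev/project_dir/include -c src/code.cpp",
    "file": "src/code.cpp"
  }
]
Doxygen takes the "file" entries as its inputs and the -I directories in "command" as include paths.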
But there is one situation where INCLUDE_PATH also needs to be set explicitly: when a source file includes a header which in turn includes another header.
first.h
int one(int);
second.h
#include "first.h"
int two(int);
code.cpp
#include "second.h"
int main(void) {}
And the command in the compilation database is such:
clang -I path/to/second -c code.cpp
where path/to/second is the directory containing second.h (remember that -I takes a directory, not a file). In this case doxygen reads that entry and internally sets the variables as follows:
INPUT = code.cpp
INCLUDE_PATH = path/to/second
This means that although doxygen will index second.h, it will miss first.h, since the directory containing first.h is never given with -I in the compilation database. So we need to list that directory explicitly in the Doxyfile as an additional include path:
INPUT =
INCLUDE_PATH = path\to\first
CLANG_ASSISTED_PARSING = YES
CLANG_DATABASE_PATH = C:\dev\project_dir

How to get the path where the library is installed

I am working on Linux and I have a library written in Fortran 90 (written by a third party) that reads from a file in the current working directory. I would like to be able to call the resulting library from other folders, so I need to know the path where the library is installed. How can I get the path to the compiled library from within the Fortran code?
I need to store the path in a variable within the code.
For those who know Python, I want to achieve the same as
import os
os.path.dirname(os.path.abspath(__file__))
but in F90 (see Get location of the .py source file).
Using the suggestions in the comments, I have done the following:
export DATAPATH=`pwd`
make
in the Makefile
ifort -O3 -fpic -fpp -DDATAPATH -c mysource.f90
in mysource.f90
subroutine mysub
character(len=100)::dpath
#ifdef DATAPATH
dpath=DATAPATH
#endif
open(10,file=trim(dpath)//'initialise.dat')
....
....
the problem is that at compile time I get
mysource.f90(42): error #6054: A CHARACTER data type is required in this context. [1]
dpath=1
----------^
compilation aborted for mysource.f90 (code 1)
If you wish you can fix the path at compile time. Something like
gfortran -cpp mylib.f90 -DPREFIX=\"/usr/local/\"
open(newunit=u,file=PREFIX//'mylib/initialise.dat')
You must then make sure the library is indeed installed in that place, PREFIX/mylib/.
You can create an environment variable containing the path to your data. This variable can be set by hand, in your .bashrc or .bash_profile, or system-wide in /etc/profile.d/ or /etc/bash.bashrc; there are many ways, and the right one depends on whether the library is for one user or for all users of some large computer.
For example
export MYLIB_PATH='/usr/local/mylib'
Then you can read the variable in Fortran as
CALL get_environment_variable("MYLIB_PATH", mylib_path, status=stat)
and the path is now in the variable mylib_path. You can check for success by testing stat==0.
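A minimal self-contained sketch of this approach (the buffer size and the stop message are illustrative):
subroutine open_initialise(u)
   ! Open initialise.dat from the directory named in MYLIB_PATH.
   integer, intent(out) :: u
   character(len=256) :: mylib_path
   integer :: stat
   call get_environment_variable("MYLIB_PATH", mylib_path, status=stat)
   if (stat /= 0) error stop "MYLIB_PATH is not set"
   open(newunit=u, file=trim(mylib_path)//'/initialise.dat', &
        status='old', action='read')
end subroutine open_initialise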
This is not the only possible approach. You can also have a configuration file for your library in your home directory:
mkdir $HOME/.config/mylib/
echo "/usr/local/mylib" > $HOME/.config/mylib/path
and then you can try to read the path from this file if the environment variable was not set:
if (stat /= 0) then
   call get_environment_variable("HOME", home_dir)
   open(newunit=path_unit, file=trim(home_dir)//'/.config/mylib/path', &
        status='old', action='read', iostat=stat)
   if (stat /= 0) error stop "cannot open ~/.config/mylib/path"
   read(path_unit, '(a)', iostat=stat) mylib_path
   if (stat /= 0) error stop "cannot read ~/.config/mylib/path"
   close(path_unit)
end if
When you compiled with -DDATAPATH you did not pass the value of the variable DATAPATH into your code; you only defined a preprocessor symbol named DATAPATH, which ifort substitutes with 1. That is exactly the dpath=1 in your error message. What you need to do is pass it as a quoted string value for the compilation to work:
-DDATAPATH=\"$(DATAPATH)\"
The escaped quotes matter: they make the preprocessor substitute a Fortran string literal rather than a bare path.
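Putting the pieces together, a corrected version of the original setup might look like this (a sketch; it assumes DATAPATH was exported with export DATAPATH=`pwd` before running make, and it adds the '/' missing before the file name):
in the Makefile:
ifort -O3 -fpic -fpp -DDATAPATH=\"$(DATAPATH)\" -c mysource.f90
in mysource.f90:
subroutine mysub
  character(len=100) :: dpath
#ifdef DATAPATH
  dpath = DATAPATH
#endif
  open(10, file=trim(dpath)//'/initialise.dat')
end subroutine mysub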

GNU m4 macros auto generated file

I downloaded a LaTeX package that I want to make some changes to, but the package contains a file include.m4 and I don't know what it does or how it was generated. Here are its lines:
m4_changequote([[, ]])m4_dnl
m4_dnl
m4_define([[m4_FILE_INIT]], [[m4_dnl
%
% This is automaticaly generated file, do not edit it.
%
]])m4_dnl
m4_dnl
m4_define([[m4_FILE_ID]], [[m4_dnl
m4_patsubst([[$1]], [[\$Date::? \([0-9]+\)-\([0-9]+\)-\([0-9]+\).*]], [[\1/\2/\3]])m4_dnl
v[[]]m4_ESKDX_VERSION]])m4_dnl
m4_dnl
m4_define([[m4_FILE_DATE]], [[m4_dnl
m4_patsubst([[$1]], [[\$Date::? \([0-9]+\)-\([0-9]+\)-\([0-9]+\).*]], [[\1/\2/\3]])]])m4_dnl
m4_dnl
Can you explain which tool it was generated with?
Thanks. So this file is not autogenerated? And can you help me understand these lines from the Makefile:
M4FLAGS = -P -Dm4_ESKDX_INIT="m4_include($(TOP_DIR)/include.m4)" \
-Dm4_ESKDX_VERSION=$(VERSION) -Dm4_ESKDX_DATE=$(RELEASE_DATE)
And rule:
%.def: %.def.in $(M4DEPS)
	m4 $(M4FLAGS) $< > $@
%.sty: %.sty.in $(M4DEPS)
	m4 $(M4FLAGS) $< > $@
%.cls: %.cls.in $(M4DEPS)
	m4 $(M4FLAGS) $< > $@
As far as I can see, the GNU m4 option '-D' makes the macro m4_ESKDX_INIT in the .sty and .cls inputs expand to m4_include($(TOP_DIR)/include.m4), so m4 first pulls in include.m4 and then expands the macros that include.m4 defines ('-P' being the prefix-builtins mode the file is written for).
This is input for the GNU m4 macro processor, designed to be used with the -P (--prefix-builtins) command-line option. With that option, m4's built-in macros are only recognized with an m4_ prefix (m4_define, m4_dnl, and so on), which keeps them from colliding with ordinary text. This file doesn't do anything by itself; it just defines three macros (m4_FILE_INIT, m4_FILE_ID and m4_FILE_DATE) which presumably are used in another step. You might want to look in the other files for references to this one. The basic idea is to load this file before running another file through m4, so that those macros are expanded as it goes.
The message about the file being automatically generated is supposed to end up in the final file as a comment. As we can see from the rules in the Makefile, each of the .def, .sty and .cls files is generated from an equivalently named .in file (so result.cls is built from result.cls.in) by evaluating the macros in these files and replacing them with their expansions.
So, to modify these files, you will want to edit the .in files.
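To make the flow concrete, here is a hypothetical result.cls.in fragment (invented for illustration, not taken from the package):
m4_ESKDX_INIT
m4_FILE_INIT
\ProvidesClass{result}
Run through m4 -P with the Makefile's M4FLAGS, m4_ESKDX_INIT expands to m4_include($(TOP_DIR)/include.m4), which defines the m4_FILE_* macros; m4_FILE_INIT then expands to the "automatically generated file, do not edit" comment at the top of the generated result.cls.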

Remove the path in objcopy symbol names

I need to include a binary file in my project. I use objcopy to create an object file from the binary file; the object file can then be linked into my program, and objcopy creates appropriate symbols to access the binary data.
Example
objcopy -I binary -O elf32-littlearm --binary-architecture arm D:\Src\data.jpg data.o
The generated symbols are:
_binary_D__Src_data_jpg_end
_binary_D__Src_data_jpg_size
_binary_D__Src_data_jpg_start
The problem is that the symbols include the path to the binary file, D__Src_. This may help when binary files are included from different locations, but it bothers me that the symbols change when I take the file from a different location. Since this has to run on several build stations, each with its own path, stripping the path with the --redefine-sym option is impractical: the old symbol name would differ on every station.
How do I get rid of the path in the symbol name?
I solved this problem by using this switch in objcopy:
--prefix-sections=abc
This gives a way to uniquely identify the data in your binary object file (ex. binary.o)
In your linker script you can then define your own labels around the place where you include binary.o. The section in binary.o will now be abc.data. Since you are no longer referencing any symbol in binary.o, the linker would throw the binary out if you use the -gc-sections switch, so use KEEP to tell the linker to retain it. Your linker script will contain the following:
__binary_start__ = .;
KEEP(*(abc.data))
. = ALIGN(4);
__binary_end__ = .;
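With those labels in place you can reference the data from C through the linker-defined symbols rather than the path-dependent objcopy names (a sketch; the symbol names match the linker script above):
#include <stddef.h>

/* Labels defined by the linker script, not by any C file. */
extern const unsigned char __binary_start__[];
extern const unsigned char __binary_end__[];

size_t binary_size(void)
{
    return (size_t)(__binary_end__ - __binary_start__);
}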
The switch --localize-symbols (which takes a file listing the symbols to make local) works for me.

How to quell qmake's "WARNING: Failure to find:"?

I'm using PRE_TARGETDEPS to generate source files, and I'm adding the generated source files to SOURCES for compilation.
The output of my generator obviously doesn't exist at the time qmake is run, so qmake outputs WARNING: Failure to find: for each of the to-be-created source files.
How can I quell this warning, since I know my PRE_TARGETDEPS is going to produce those files?
Or, is there a better way to generate intermediate files using qmake?
Example
Here's a complete test.pro file that exhibits the problem:
TEMPLATE = lib
preprocess.commands += cat test.cc.in | sed 's/a/0/g' > test.0.cc ;
preprocess.commands += cat test.cc.in | sed 's/a/1/g' > test.1.cc ;
preprocess.depends = test.cc.in
QMAKE_EXTRA_TARGETS += preprocess
PRE_TARGETDEPS += preprocess
SOURCES = test.0.cc test.1.cc
Place this in an empty folder, and also create an empty test.cc.in file. Run qmake, and you'll see these warnings:
WARNING: Failure to find: test.0.cc
WARNING: Failure to find: test.1.cc
How can I quell this warning
From my reading of the qmake code, it looks like you can:
either have qmake ignore any filenames that don't exist - in which case they won't get built by your later steps
or have it write these warnings
I don't think either of these would be satisfactory for you.
Here's my reasoning.... I had a hunt for the text Failure to find in a Qt distribution I had to hand: qt4.8.1.
It appeared three times, all in qmake/generators/makefile.cpp. The two blocks of code look like this:
QStringList
MakefileGenerator::findFilesInVPATH(QStringList l, uchar flags, const QString &vpath_var)
{
....
debug_msg(1, "%s:%d Failure to find %s in vpath (%s)",
__FILE__, __LINE__,
val.toLatin1().constData(), vpath.join("::").toLatin1().constData());
if(flags & VPATH_RemoveMissingFiles)
remove_file = true;
else if(flags & VPATH_WarnMissingFiles)
warn_msg(WarnLogic, "Failure to find: %s", val.toLatin1().constData());
....
else if(flags & VPATH_WarnMissingFiles)
warn_msg(WarnLogic, "Failure to find: %s", val.toLatin1().constData());
and this is called with:
l = findFilesInVPATH(l, (comp.flags & Compiler::CompilerRemoveNoExist) ?
VPATH_RemoveMissingFiles : VPATH_WarnMissingFiles, "VPATH_" + comp.variable_in);
So the flags parameter passed in to the first block will be either RemoveMissingFiles or WarnMissingFiles, depending on comp.flags & Compiler::CompilerRemoveNoExist.
Or, is there a better way to generate intermediate files using qmake?
I'm not sure that it's better - it's certainly more complex - but this is what is done where I work...
In the .pro file, a system call is done, that:
generates the required files,
and then writes out their names to stdout.
Here's an example from the .pro, to show how it would be called:
SOURCES += $$system( python my_script_name.py )
You can of course pass arguments to the Python script, if you like.
Things to note/limitations:
This means that the python script gets run whenever you run qmake, but not during individual make invocations
Each invocation of python really slows down our qmake steps - taking roughly twice as long as running qmake without launching python - but you could always use a different scripting language
This would fix your problem, in that by the time qmake processes the SOURCES value, the files have been created by the script.
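For illustration, my_script_name.py could be as small as this (hypothetical; it mirrors the sed example from the question by generating test.0.cc and test.1.cc and printing their names for SOURCES):
#!/usr/bin/env python
# Generate the source files, then print their names for qmake's $$system().
with open("test.cc.in") as f:
    template = f.read()

names = []
for i in range(2):
    name = "test.%d.cc" % i
    with open(name, "w") as out:
        out.write(template.replace("a", str(i)))
    names.append(name)

print(" ".join(names))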
