My riscv_vector.h file seems incomplete, but code compiles successfully. Where are the missing intrinsic function definitions? - clang

I am using LLVM 16.0.0. In order to use the RISC-V Vector extension's C intrinsic functions, I add the following header in my C code:
#include <riscv_vector.h>
This file is supposed to have definitions of all the intrinsic functions. But when I open the riscv_vector.h file (located at install_directory/lib/clang/16.0.0/include/riscv_vector.h), I cannot find definitions of all the intrinsic functions there. Only the intrinsics for the configuration instructions (vset{i}vl{i}) are defined there. Where are the definitions of the other RVV C intrinsic functions located? Am I missing something?
However, the code compiles without any error and produces the expected output.
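For example, something like the following compiles and behaves correctly, even though none of the load/add/store intrinsics it uses appear anywhere in the header (a minimal sketch; the non-prefixed intrinsic names below are the ones clang 16 accepts, and it assumes an rv64gcv-style -march):
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

/* Element-wise add of two int32 arrays. Only vsetvl_e32m1 is visible in
   riscv_vector.h; the other intrinsics compile without any visible declaration. */
void vec_add(int32_t *dst, const int32_t *a, const int32_t *b, size_t n)
{
    for (size_t i = 0; i < n;) {
        size_t vl = vsetvl_e32m1(n - i);
        vint32m1_t va = vle32_v_i32m1(a + i, vl);
        vint32m1_t vb = vle32_v_i32m1(b + i, vl);
        vint32m1_t vc = vadd_vv_i32m1(va, vb, vl);
        vse32_v_i32m1(dst + i, vc, vl);
        i += vl;
    }
}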

Related

Parse c-clang index.h file with clang itself

I am trying to parse the c-clang Index.h file with ClangSharp (just to test the ClangSharp parser from C#), and I found that it fails to parse functions because of the CINDEX_LINKAGE macro in the function declarations.
If I remove it, the parser correctly finds the FunctionDecl and parses it without errors.
I cannot understand how this macro prevents functions from being parsed. Does someone know how to work around this?
The issue was in the #include line itself. By default, the clang headers are set up to be searched for in the directory one level up, but clang itself for some reason does not understand that include format.

Add #include's to the headers of a program using llvm clang

I need to add headers to an already existing program by transforming it with LLVM and Clang.
I have used clang's Rewriter to accomplish similar things, such as changing function names and arguments.
But the header files aren't present in clang's AST. I already know we need to use PPCallbacks (https://clang.llvm.org/doxygen/classclang_1_1PPCallbacks.html), but I am in dire need of some examples of how to make it work with the Rewriter, if that is possible at all.
Alternatively, adding a #include statement just before the first
using namespace <namespace>;
also works. I would like to see an example of this as well.
Any help would be appreciated.
There is a bit of confusion in your question. You need to understand in detail how the preprocessor works. Be aware that most of C++ compilation happens after the preprocessing phase (so most C++ static analyzers work after that phase).
In other words, the C++ specification (and also the C specification) first defines what preprocessing is, and then defines the syntax and semantics of the preprocessed form.
So when compiling foo.cc, your compiler sees the preprocessed form foo.ii, which you could obtain with clang++ -C -E foo.cc > foo.ii.
In the 1980s the preprocessor /lib/cpp was a separate program forked by the compiler (and some temporary foo.ii sat on the disk and was removed at the end of compilation). Today, for performance reasons, it is an initial processing stage done inside the compiler, but you can still reason as if it were separate.
Either you want to alter the Clang compiler itself, which (like every other C++ compiler or C++ static analyzer) deals mostly with the preprocessed form. Then you don't want to add new #include-s; you want to alter the stream of AST nodes given to the compiler (after preprocessing), and that is a different question: you want to add some AST between existing AST elements, independently of any preprocessor directives.
Or you want to automatically change the C++ source code. The hard part is determining what you want to change and where. I suppose you have used complex machinery to determine that a #include <vector> has to be inserted after line 34 of foo.cc. Once you've got that information (and getting it is the hard part), doing the insertion is pretty trivial. For example, you could read every C++ source line and insert your line when you have read enough lines.
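Since the question already mentions clang's Rewriter, here is a minimal sketch of just the mechanical insertion step with it (the helper name insertIncludeAtTop is made up for illustration, and it assumes you already have a Rewriter attached to the SourceManager inside your tool, e.g. in a FrontendAction or ASTConsumer):
#include "clang/Basic/SourceManager.h"
#include "clang/Rewrite/Core/Rewriter.h"
#include "llvm/ADT/StringRef.h"

// Hypothetical helper: prepend an #include to the main file being rewritten.
void insertIncludeAtTop(clang::Rewriter &R, llvm::StringRef Header) {
  clang::SourceManager &SM = R.getSourceMgr();
  // Location of the very first character of the main file.
  clang::SourceLocation Start = SM.getLocForStartOfFile(SM.getMainFileID());
  R.InsertText(Start, ("#include " + Header + "\n").str());
}

// Usage, once the rewriter is set up:
//   insertIncludeAtTop(TheRewriter, "<vector>");
//   TheRewriter.getEditBuffer(SM.getMainFileID()).write(llvm::outs());
Determining where the insertion should go (before the first using-directive, after existing includes, and so on) is still the hard part, as said above.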

Why does Eunit not require test functions to be exported?

I'm going through the EUnit chapter in Learn You Some Erlang, and one thing I notice in all the code samples is that the test functions are never declared in -export() attributes.
Why is EUnit able to pick these test functions up?
From the documentation:
The simplest way to use EUnit in an Erlang module is to add the following line at the beginning of the module (after the -module declaration, but before any function definitions):
-include_lib("eunit/include/eunit.hrl").
This will have the following effect:
Creates an exported function test() (unless testing is turned off, and the module does not already contain a test() function), that can be used to run all the unit tests defined in the module
Causes all functions whose names match ..._test() or ..._test_() to be automatically exported from the module (unless testing is turned off, or the EUNIT_NOAUTO macro is defined)
Glad I found this question, because it gives me a meaningful way to procrastinate, and I was wondering how functions get created and exported dynamically.
I started by looking at the latest commit affecting EUnit in the Erlang/OTP GitHub repo, which is 4273cbd. (The only reason for this was to find a relatively stable anchor instead of git branches.)
0. Include EUnit's header file
According to EUnit's User's Guide, the first step is to -include_lib("eunit/include/eunit.hrl"). in the tested module, so I assume this is where the magic happens.
1. otp/lib/eunit/include/eunit.hrl (lines 79 - 91)
%% Parse transforms for automatic exporting/stripping of test functions.
%% (Note that although automatic stripping is convenient, it will make
%% the code dependent on this header file and the eunit_striptests
%% module for compilation, even when testing is switched off! Using
%% -ifdef(EUNIT) around all test code makes the program more portable.)
-ifndef(EUNIT_NOAUTO).
-ifndef(NOTEST).
-compile({parse_transform, eunit_autoexport}).
-else.
-compile({parse_transform, eunit_striptests}).
-endif.
-endif.
1.1 What does -compile({parse_transform, eunit_autoexport}). mean?
From the Erlang Reference Manual's Module chapter (Pre-Defined Module Attributes):
-compile(Options).
Compiler options. Options is a single option or a list of options. This attribute is added to the option list when
compiling the module. See the compile(3) manual page in Compiler.
On to compile(3):
{parse_transform,Module}
Causes the parse transformation function
Module:parse_transform/2 to be applied to the parsed code before the
code is checked for errors.
From the erl_id_trans module:
This module performs an identity parse transformation of Erlang code.
It is included as an example for users who wants to write their own
parse transformers. If option {parse_transform,Module} is passed to
the compiler, a user-written function parse_transform/2 is called by
the compiler before the code is checked for errors.
Basically, if module M includes the {parse_transform, Module} compile option, then all of M's functions and attributes can be iterated through using your implementation of Module:parse_transform/2. Its first argument is Forms, which is M's module declaration described in Erlang's abstract format (documented in the Erlang Run-Time System Application (ERTS) User's Guide).
2. otp/lib/eunit/src/eunit_autoexport.erl
This module only exports parse_transform/2 to satisfy the {parse_transform, Module} compile option, and its first order of business is to figure out the configured suffixes for test case functions and generators. If they are not set manually, it uses _test and _test_ respectively (via lib/eunit/src/eunit_internal.hrl).
It then scans all the functions and attributes of your module using eunit_autoexport:form/5, and builds a list of functions to be exported whose names match the suffixes above (plus the original exports; I may be wrong on this one...).
Finally, eunit_autoexport:rewrite/2 builds a module declaration from the original Forms (given to eunit_autoexport:parse_transform/2 as the first argument) and the list of functions to be exported (supplied by form/5 above). On line 82 it injects the test/0 function mentioned in the EUnit documentation.

Defining preprocessor symbols for CLion analyzer

In my project there's a file enclosed in an #ifdef preprocessor directive:
#ifdef SOME_SYMBOL
... entire file ...
#endif
SOME_SYMBOL is defined by another file that's compiled before this one, and the code works as expected, but the static analyzer isn't aware of this symbol and so it treats SOME_SYMBOL as undefined. The entire file gets no syntax highlighting and some of the analysis is simply skipped (e.g. syntax error highlighting).
Is there a way to tell the analyzer to treat this symbol as defined without defining it in CMakeLists.txt?
I don't have the option of defining SOME_SYMBOL in CMakeLists.txt since the project depends on it being undefined in some compilation paths (changing this would be near impossible).
Update:
Seems like this is currently an open issue with JetBrains. See Issue CPP-2286
CLion now has a macro which you can use to detect the IDE:
https://youtrack.jetbrains.com/issue/CPP-1296#comment=27-1846360
#ifdef __JETBRAINS_IDE__
// Stuff that only clion will see goes here
#endif
This allows you to put in defines to make CLion render your code properly in cases where it can't be clever enough to figure it out.
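Applied to the question, a minimal sketch would be to define SOME_SYMBOL for the IDE only, somewhere the analyzer sees it before the file's own #ifdef (SOME_SYMBOL is just the placeholder name from the question):
#ifdef __JETBRAINS_IDE__
// Visible only to the JetBrains analyzer; real builds are unaffected.
#define SOME_SYMBOL
#endif

#ifdef SOME_SYMBOL
... entire file ...
#endif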
The __JETBRAINS_IDE__ macro's value is a version string for the IDE. Specific versions of the macro exist for different JetBrains IDEs: __CLION_IDE__, __STUDIO_IDE__ (for Android Studio), and __APPCODE_IDE__ (for AppCode).
Yay!
To get syntax highlighting:
Go to Settings ⇒ Editor ⇒ Colors&Fonts ⇒ C/C++ and remove all ticks for 'Conditionally non-compiled code'. This way all code will show up with the usual highlighting.
There is no solution for the general case.
But! You can switch to the target and the related resolve context where SOME_SYMBOL is defined.
...in the status bar you can find the Resolve Context chooser for switching between the Debug, Release, RelWithDebInfo and MinSizeRel contexts to resolve your code in the IDE with the desired definitions.

Unresolved external UrlCombineW

I have a file which looks something like this:
#include <Shlwapi.h>
...
void SomeFunction()
{
UrlCombineW(...)
}
This compiled just fine until I installed another Delphi component in the C++ Builder IDE, but now the linker reports an unresolved external for UrlCombineW. The above call was fine before installing this component.
It seems that the component is overriding this in some way, so I need to explicitly tell the compiler where to look for UrlCombineW. This is a function from Shlwapi.dll.
The compiler does not complain, but how do I explicitly tell the linker where to look for this function and avoid the unresolved external error?
Expanding my comment to an answer.
You need to link to Shlwapi.lib in order for the linker to find the functions. (This explanation glosses over a few things, but a .lib, a library file, can be either a static or an import library. A static library contains the functions themselves: it's basically a collection of .obj files bundled together; an import library says that functions X, Y and Z are found in a specific DLL.) Either way, if you link the .lib in, you will get the functions that you need.
There are a couple of ways to tell the linker to link in the file:
Use #pragma comment(lib, "Filename.lib") in a .cpp file somewhere. For your case, this is #pragma comment(lib, "Shlwapi.lib").
Add it to the project options, which in turn adds it to the linker command line. In C++ Builder you do this by actually adding the .lib file to the project, i.e. drag and drop it onto the project in the Project Manager, or use File > Add To Project.
Which you prefer is up to you. I tend to link localized things locally: in my code there's only one unit which uses Shlwapi.h, and the fact it does so is an implementation detail hidden from the outside; it's not shown in the interface. Therefore, in that file, I link using #pragma comment at the point I include the header. On the other hand, if you have something used far more widely (to pick the widest example, kernel32.lib) I would add that to the project itself. (Note this is an example; you don't actually need to explicitly link to kernel32, that will be done for you!)
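Putting the first option together with the snippet from the question, the unit would look something like this (a sketch; the UrlCombineW arguments and buffer size are just illustrative):
#include <windows.h>
#include <Shlwapi.h>
#pragma comment(lib, "Shlwapi.lib")  // tells the linker to pull in Shlwapi's import library

void SomeFunction()
{
    WCHAR combined[1024];
    DWORD cch = 1024;
    // UrlCombineW lives in Shlwapi.dll; the pragma above resolves the external.
    UrlCombineW(L"https://example.com/", L"page.html", combined, &cch, 0);
}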
