Can I control HSA config generated by AMDGPU backend? - clang

I am using LLVM clang to offline-compile my OpenCL code into assembly. My target is amdgpu--amdhsa. The assembly file generated by clang contains the config line "enable_sgpr_dispatch_ptr = 1". Can I do something to turn that off in the generated assembly file? Also, it seems that the order of kernel arguments is the reverse of the AMDCL2 convention, i.e. user arguments are placed first while hidden arguments like "HiddenGlobalOffsetX" are placed after them. Can I change the order of the arguments so that the hidden arguments come before the user arguments?
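For reference, the workflow being described looks roughly like this (file names are illustrative; the exact flags and directive names depend on the LLVM version):

clang -x cl -S -target amdgpu--amdhsa kernel.cl -o kernel.s

The generated kernel.s then contains, for each kernel, a config block along the lines of:

.amd_kernel_code_t
    ...
    enable_sgpr_dispatch_ptr = 1
    ...
.end_amd_kernel_code_t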

Related

How can I create a link to a random text in the same document? [duplicate]

I'm using :class: and getting a lot of warnings
WARNING: py:class reference target not found: mypkg.submodule.class.
I can't find anywhere in the documentation what exactly the requirements are for a correct cross-reference.
This is a currently incomplete list of the requirements I think there are:
The module of the object needs to be importable
The object needs to exist inside of the module
The object needs to be documented somewhere else in the build with a :py:class::, :py:func:: or similar directive
This directive can be generated by the autodoc extension, in which case the object needs to have a docstring associated to it.
For something to be cross-referenced it has to first be "declared".
There are 2 cases to consider:
domain directives (.. domain:directive_name::) and
roles (:domain:role_name:).
The Python domain (name py) provides directives such as .. py:class:: and .. py:function:: for these declarations.
The :class: you specify is actually the shortened syntax for the role :py:class:, not to be confused with the directive declaration .. py:class::.
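As a small sketch (the module and class names are made up), the directive declares the object and the role references it:

.. py:class:: mypkg.submodule.MyClass

   A class declared with the domain directive (this is also what autodoc generates from a docstring).

Somewhere else in the build you can then write :py:class:`mypkg.submodule.MyClass`, or the short form :class:`mypkg.submodule.MyClass`, and the cross-reference will resolve.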
Regarding "This directive can be generated by the autodoc extension, in which case the object needs to have a docstring associated to it": the directive declarations are indeed done implicitly by autodoc, but for objects without docstrings to be declared by autodoc you must use the :undoc-members: option with the autodoc directives.
Members without docstrings will be left out, unless you give the undoc-members flag option:
.. automodule:: noodle
:members:
:undoc-members:
One effect of declaring an object is that it is inserted in the index. So you can check the index to make sure it has been declared and inserted. (However note that labels used in referencing arbitrary locations are not inserted in the index.)

How to pass values for options arguments when running a Dataflow app in Eclipse

I am trying to test my first Dataflow app by running it in Eclipse.
When I pass 4 values for the arguments on the "Arguments" tab of the "Run Configuration" as follows:
projects/poc/subscriptions/poc-TestApp1 poc myDataSet my_logs
I get the error:
Argument 'projects/poc/subscriptions/poc-TestApp1' does not begin with '--'
Adding -- to all arguments produced a different error.
Based on your question, it seems that you have custom argument parsing code in your program (I suppose you're extracting your arguments as args[0], args[1] etc. in your main() function?), but still use PipelineOptionsFactory.fromArgs(args) to configure the options for Dataflow itself.
Dataflow does not support this mixed way of specifying command-line arguments - you need to define your own PipelineOptions to represent your configuration parameters, and specify them prefixed with --.
Please see the Dataflow documentation on specifying pipeline execution parameters for details, in particular the section on creating custom options.
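A sketch of what that could look like, assuming the (pre-Beam) Dataflow Java SDK; the interface name and option names below are made up to match the question's positional values:

import com.google.cloud.dataflow.sdk.options.Description;
import com.google.cloud.dataflow.sdk.options.PipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;

public interface MyAppOptions extends PipelineOptions {
    @Description("Pub/Sub subscription to read from")
    String getSubscription();
    void setSubscription(String value);

    @Description("BigQuery dataset to write to")
    String getDataset();
    void setDataset(String value);
}

// in main():
MyAppOptions options =
    PipelineOptionsFactory.fromArgs(args).withValidation().as(MyAppOptions.class);

With options defined this way, the Eclipse "Arguments" tab would contain something like --subscription=projects/poc/subscriptions/poc-TestApp1 --dataset=myDataSet instead of positional values.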

Why does Eunit not require test functions to be exported?

I'm going through the EUnit chapter in Learn You Some Erlang and one thing I am noticing from all the code samples is that the test functions are never declared in -export() attributes.
Why is EUnit able to pick these test functions up?
From the documentation:
The simplest way to use EUnit in an Erlang module is to add the following line at the beginning of the module (after the -module declaration, but before any function definitions):
-include_lib("eunit/include/eunit.hrl").
This will have the following effect:
Creates an exported function test() (unless testing is turned off, and the module does not already contain a test() function), that can be used to run all the unit tests defined in the module
Causes all functions whose names match ..._test() or ..._test_() to be automatically exported from the module (unless testing is turned off, or the EUNIT_NOAUTO macro is defined)
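For example, with a module like this (the module and function names are made up), add_test/0 is exported automatically even though the module has no -export attribute for it:

-module(demo).
-include_lib("eunit/include/eunit.hrl").

add(A, B) -> A + B.

%% Picked up and exported by EUnit because the name ends in _test.
add_test() ->
    ?assertEqual(4, add(2, 2)).

After compiling, both demo:test() and eunit:test(demo) will find and run add_test/0.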
Glad I found this question because it gives me a meaningful way to procrastinate and I was wondering how functions get created and exported dynamically.
I started by looking at the latest commit affecting EUnit in the Erlang/OTP GitHub repo, which is 4273cbd. (The only reason for this was to find a relatively stable anchor instead of git branches.)
0. Include EUnit's header file
According to EUnit's User's Guide, the first step is to -include_lib("eunit/include/eunit.hrl"). in the tested module, so I assume this is where the magic happens.
1. otp/lib/eunit/include/eunit.hrl (lines 79 - 91)
%% Parse transforms for automatic exporting/stripping of test functions.
%% (Note that although automatic stripping is convenient, it will make
%% the code dependent on this header file and the eunit_striptests
%% module for compilation, even when testing is switched off! Using
%% -ifdef(EUNIT) around all test code makes the program more portable.)
-ifndef(EUNIT_NOAUTO).
-ifndef(NOTEST).
-compile({parse_transform, eunit_autoexport}).
-else.
-compile({parse_transform, eunit_striptests}).
-endif.
-endif.
1.1 What does -compile({parse_transform, eunit_autoexport}). mean?
From the Erlang Reference Manual's Module chapter (Pre-Defined Module Attributes):
-compile(Options).
Compiler options. Options is a single option or a list of options. This attribute is added to the option list when
compiling the module. See the compile(3) manual page in Compiler.
On to compile(3):
{parse_transform,Module}
Causes the parse transformation function
Module:parse_transform/2 to be applied to the parsed code before the
code is checked for errors.
From the erl_id_trans module:
This module performs an identity parse transformation of Erlang code.
It is included as an example for users who wants to write their own
parse transformers. If option {parse_transform,Module} is passed to
the compiler, a user-written function parse_transform/2 is called by
the compiler before the code is checked for errors.
Basically, if module M includes the {parse_transform, Module} compile option, then all of M's functions and attributes can be iterated through using your implementation of Module:parse_transform/2. Its first argument is Forms, which is M's module declaration described in Erlang's abstract format (documented in the Erlang Run-Time System Application (ERTS) User's Guide).
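As a minimal sketch of the mechanism (the module name my_trace_transform is made up), a do-nothing parse transform that simply prints the forms it receives and hands them back looks like this:

-module(my_trace_transform).
-export([parse_transform/2]).

%% Called by the compiler with the module's forms in abstract format;
%% whatever list of forms is returned is what actually gets compiled.
parse_transform(Forms, _Options) ->
    io:format("~p~n", [Forms]),
    Forms.

A module compiled with -compile({parse_transform, my_trace_transform}). is run through this function before error checking, which is exactly the hook eunit.hrl uses for eunit_autoexport.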
2. otp/lib/eunit/src/eunit_autoexport.erl
This module only exports parse_transform/2 to satisfy the {parse_transform, Module} compile option, and its first order of business is to figure out the configured suffixes for test functions and test generators. If not set manually, these default to _test and _test_ respectively (via lib/eunit/src/eunit_internal.hrl).
It then scans all the functions and attributes of your module using eunit_autoexport:form/5, and builds a list of functions to be exported where the suffixes above match (plus the original functions; I may be wrong on this one...).
Finally, eunit_autoexport:rewrite/2 builds a module declaration from the original Forms (given to eunit_autoexport:parse_transform/2 as the first argument) and the list of functions to be exported (that was supplied by form/5 above). On line 82 it injects the test/0 function mentioned in the EUnit documentation.
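If you want to see the result on the demo module sketched earlier, compile it and look at its export list; the functions injected by eunit_autoexport show up alongside the usual module_info/0,1, roughly:

1> c(demo).
{ok,demo}
2> lists:sort(demo:module_info(exports)).
[{add_test,0},{module_info,0},{module_info,1},{test,0}]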

How to set ISPP defines based on default Inno Setup variables?

I was trying to:
#define CommonAppData {commonappdata}
but it yields:
Compiler Error
[ISPP] Expression expected but opening brace ("{") found.
How to achieve this with Inno Setup PreProcessor?
{commonappdata} cannot be expanded at compile time, i.e. when the pre-processor runs, because it is only known at runtime: it identifies the common application data directory on the machine where the compiled installer is run.
Maybe if you could clarify how you intend to use that define we might be able to help. If for example what you're really interested in is not the common app data directory on the target machine but the one on the developer machine, then you can probably use this:
#define CommonAppData GetEnv("COMMONAPPDATA")
If however you intend to use that define for populating Inno properties that are themselves capable of expanding the constant at runtime then you should use this:
#define CommonAppData "{commonappdata}"
Hope this helps.
#define is an Inno Setup pre-processor directive, handled in a pre-compile phase. It works much like the C pre-processor.
By defining a pre-processor variable, we make the compiler see the script only after the ISPP defines have been resolved:
Inno Setup Preprocessor (ISPP) is an add-on for Jordan Russell's Inno Setup compiler. More technically speaking, it is an additional layer between GUI (your Inno Setup script) and the compiler, which before passing the text intercepts and modifies it in a way it is told by using special directives in the script text.
That said, I can't find a source in the documentation, nor do I have time to dig into the source code, but I'm pretty sure Inno Setup variables are not available at this pre-compile time.
If you just want the defined variable to contain the string {commonappdata}, use it directly in your source... If you want the defined variable to hold the run-time value of commonappdata, that doesn't seem possible to me, because the value is determined at runtime: it depends on the target machine (Windows version, language, etc.).
If you think about it, it doesn't make sense to use that value at pre-compile or compile time... this is the whole reason Inno Setup constants like {commonappdata}, {destdir} and the like exist: they let you express in a standard way, at compile time, an unknown but meaningful value which will be known and evaluated at runtime.
You'll probably need to escape the brace. Something like:
#define CommonAppData {{commonappdata}

How can I tell if a module is being called dynamically or statically?

How can I tell if a module is being called dynamically or statically?
If you are operating on z/OS, you can accomplish this, but it is non-trivial.
First, you must trace up the save area chain and use CSVQUERY to find out which program owns each save area. Every other program will be a Cobol runtime module, like IGZCPAC. Under IMS, CICS, TSO, et al, those modules might be different. That is the easy part.
Once you know who owns all the relevant save areas, you can use the OS LOADER / BINDER / LINKER utilities to discover what artifacts are in the same modules. This is the non-easy part.
The ONLY way is to look at the output of the linkage editor (IEWL) or the load module itself. If the module is being called DYNAMICALLY then it will not exist in the main module; if it is being called STATICALLY then it will be seen in the load module. Calling a working storage variable containing a program name does not make a DYNAMIC call. This type of calling is known as IMPLICIT calling, as the name of the module is implied by the contents of the working storage variable. Calling a program name literal, by contrast, names the module explicitly.
Calling a working storage variable, containing a program name, does not make a DYNAMIC call.
Yes it does. Call variablename is always DYNAMIC.
Call 'literal' is dynamic or static according to the DYNAM/NODYNAM compiler option.
Caveat: This applies for IBM mainframe COBOL and I believe it is also part of the standard. It may not apply to other non-standard versions of COBOL.
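To illustrate (the program and variable names are made up):

* Under DYNAM: both calls below are dynamic
* Under NODYNAM: the literal call is static, the variable call is still dynamic
call "SUBPROG"
move "SUBPROG" to ws-subprog-name
call ws-subprog-name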
For Micro Focus COBOL, static linking is controlled via the call-convention on the call (bit 3) or via the compiler directive LITLINK.
When linking statically the case of the program-id/entry-point and the call itself is important, so you may want to ensure it is exact and use the CASE directive.
The reverse of the LITLINK directive is NOLITLINK, or a call-convention without bit 3 set!
On Windows you can see the exported symbols in your .dll by using the "dumpbin /exports" utility and on Unix via the 'nm' utility.
An import .lib for the .dll created via "cbllink" can be created by using the '-K' command-line option on cbllink.
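For example (the file names are illustrative; on Unix you may need the -D flag to list the dynamic symbol table):

dumpbin /exports myprog.dll
nm -D myprog.so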
Look at the call statement. If the called program is described in a literal then it's a static call. It's called a dynamic call if the called program is determined at runtime:
* Static call
call "THEPROGRAM"
* Dynamic call
call wsProgramName
