Relocation overflows with common section - memory

I have a project in which I have a main program main.f95 which uses a bunch of modules: each subroutine called by main.f95 is contained in its own module. I've done this to avoid interface blocks.
There are two additional modules: global.f95, which contains 8 scalar integers declared as parameters, and Param.f95, which contains 33 scalar reals (using NAG working precision, i.e. double). One of the subroutines mentioned above, Set_Param.f95, assigns values to the scalars declared in Param.f95; this happens right at the beginning of main.f95.
Finally, I'm using the NAG Fortran library (Mark 26) and ifort64-18.
I'm getting the following error at compilation (linking?):
Set_param_mod.o: In function `set_param_mod_mp_set_param_':
Set_param_mod.f95:(.text+0x47): relocation truncated to fit:
R_X86_64_PC32 against symbol `param_mp_rho_i_' defined in COMMON
section in Param.o
Set_param_mod.f95:(.text+0x73): relocation truncated to fit:
R_X86_64_PC32 against symbol `param_mp_theta_' defined in COMMON
section in Param.o
Set_param_mod.f95:(.text+0x84): relocation truncated to fit:
R_X86_64_PC32 against symbol `param_mp_delta_k_' defined in COMMON
section in Param.o
Set_param_mod.f95:(.text+0x95): relocation truncated to fit:
R_X86_64_PC32 against symbol `param_mp_delta_i_' defined in COMMON
section in Param.o
Set_param_mod.f95:(.text+0xa6): relocation truncated to fit:
R_X86_64_PC32 against symbol `param_mp_theta_i_' defined in COMMON
section in Param.o
Set_param_mod.f95:(.text+0xb7): relocation truncated to fit:
R_X86_64_PC32 against symbol `param_mp_theta_k_' defined in COMMON
section in Param.o
Set_param_mod.f95:(.text+0xc8): relocation truncated to fit:
R_X86_64_PC32 against symbol `param_mp_c_e_' defined in COMMON section
in Param.o
Set_param_mod.f95:(.text+0xd2): relocation truncated to fit:
R_X86_64_PC32 against symbol `param_mp_wage_' defined in COMMON
section in Param.o
Set_param_mod.f95:(.text+0xde): relocation truncated to fit:
R_X86_64_PC32 against symbol `param_mp_csi_' defined in COMMON section
in Param.o
Set_param_mod.f95:(.text+0xea): relocation truncated to fit:
R_X86_64_PC32 against symbol `param_mp_b1_' defined in COMMON section
in Param.o
Set_param_mod.f95:(.text+0xf6): additional relocation overflows
omitted from the output
If I'm reading this right, the overflow occurs when Set_Param.f95 tries to assign values to the corresponding scalar variables in Param.f95.
Reading up on other threads here and on the Intel developer forum, it would seem this should only occur if I had more than 2 GB of static data (possibly in COMMON blocks, of which I have none; and, as listed above, my variables come nowhere near 2 GB). Furthermore, the main prescriptions given in those threads are:
i) declare all your big arrays allocatable in your main program (which I've done; I have not, however, declared them allocatable in individual subroutines), and
ii) declare all your data in modules (which I've also done).
Some answers mention that the problem might arise with "global" variables, though I'm not sure what that means in Fortran, especially given prescription ii) above.
Given that I'm pretty lost at this point, I wonder whether the problem stems from having put every subroutine in its own module, making the temporary arrays inside them "global" in whatever sense those other threads mean.
Any other leads?

Related

Why are all of my symbols not showing up in my LaTeX editor?

I am trying to create a symbols list. I have been provided a LaTeX template from the university to write my thesis with. I am not well versed in LaTeX, and need some help to determine why my list of symbols is not showing up in the compiled document.
I have a "List of Symbols Glosseries.tex" page that contains a bunch of sample symbols. I have added one test symbol at the bottom labeled "ET." However, in the compiled document I only see 7 symbols, when in the "List of Symbols.tex" document there are 10.
This is the symbols code:
% This uses the glossaries package. With this package, you can include multiple types of lists, track page numbers if desired, and define new lists. More information can be found at the following sites: https://mirrors.mit.edu/CTAN/macros/latex/contrib/glossaries/glossariesbegin.pdf and https://mirrors.rit.edu/CTAN/macros/latex/contrib/glossaries/glossaries-user.pdf https://www.overleaf.com/learn/latex/Glossaries
%*****************************************
%Define your list of glossary items below. Remember that the entries that you enter in this file will not automatically appear in the List of Symbols. You also have to reference the symbol in the body of your thesis by using the \gls command.
%symbols
\newglossaryentry{deg}{name=$^\circ$, description={Degree}}
\newglossaryentry{grav}{name={1D}, description={Normal gravity environment}}
\newglossaryentry{wf}{name={\textit{f}}, description={Wear factor}}
\newglossaryentry{alp}{name={$\alpha$},description={Alpha}}
\newglossaryentry{theta}{name={$r_O$}, description={ecosystem respiration at reference temperature $T_a=0{^\circ}$C}}
\newglossaryentry{te}{name={$\tau_e$}, description={precision of the normal distribution of the likelihood}}
\newglossaryentry{q10}{name={$Q_{10}$}, description={multiplication factor to respiration with 10$^\circ$C increases in $T_a$}}
\newglossaryentry{phi}{name={$\phi$}, description={vapour pressure deficit response function}}
\newglossaryentry{del}{name=$\delta$, description={Transition coefficient constant for the design of linear-phase FIR filters which are used to take up space when testing the list of symbols}}
%this is the new symbol I am testing
\newglossaryentry{ET}{name=$\lambda$\text{E}, description={Latent Heat of Vaporization supplied to the atmosphere in watts per meter squared}}
Here is the code referenced in the "main.tex" which calls the list of symbols and generates a page with each symbol and description under the "list of symbols" title:
%**************************
%List of Symbols Section
%**************************
\singlespacing
\renewcommand*{\arraystretch}{2}
\printglossary[title=\centering List of Symbols, title=List of Symbols,style=mystyle,nonumberlist]
Here is the portion of the .pdf generated that is missing symbols:
Why is this happening? I appreciate any information provided.

Partial SSA in LLVM

I came across this concept of partial SSA in LLVM, where LLVM identifies two classes of variables:
(1) top-level variables are those that cannot be referenced indirectly via a pointer, i.e., those whose address is never exposed via the address-of operator or returned via a dynamic memory allocation;
(2) address-taken variables are those that have had their address exposed and therefore can be indirectly referenced via a pointer.
This definition is verbatim from This paper.
The paper further explains with an example that I can't seem to wrap my head around.
Is there an easier example for this or maybe any other resource I can look into?
Any help would be greatly appreciated.

YAML: encoding vs semantic differences

I would like to get a better understanding of which aspects of YAML refer to the encoding of data and which refer to its semantics.
A simple example:
test1: dGVzdDE=
test2: !!binary |
  dGVzdDE=
test3:
  - 116
  - 101
  - 115
  - 116
  - 49
test4: test1
Which of these values (if any) are equivalent?
I would argue that test1 encodes the literal string value dGVzdDE=, while test2 and test3 both encode the same array, just using a different encoding. I am unsure about test4: it contains the same bytes as test2 and test3, but does that make it an equivalent value, or is a string in YAML different from a byte array?
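For reference, here is a quick Python check (purely illustrative, not part of my actual setup) showing that dGVzdDE= decodes to exactly the bytes listed under test3:

import base64

decoded = base64.b64decode("dGVzdDE=")
print(list(decoded))             # [116, 101, 115, 116, 49]
print(decoded.decode("ascii"))   # test1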
Different tools seem to produce different answers:
https://onlineyamltools.com/convert-yaml-to-json suggests that test2 and test3 are equivalent, but different from test4
https://yaml-online-parser.appspot.com/ suggests that test2 and test4 are equivalent, but different from test1 and test3
to yq, all entries are different (yq < test.yml):
{
  "test1": "dGVzdDE=",
  "test2": "dGVzdDE=\n",
  "test3": [
    116,
    101,
    115,
    116,
    49
  ],
  "test4": "test1"
}
What does the YAML spec intend?
Equality
You're asking for equivalence but that's not a term in the spec and therefore cannot be discussed (at least not without definition). I'll go with discussing equality instead, which is defined by the spec as follows:
Two scalars are equal only when their tags and canonical forms are equal character-by-character. Equality of collections is defined recursively.
One node in your example has the tag !!binary but the others do not have tags. So we must check what the spec says about tags of nodes that don't have explicit tags:
Tags and Schemas
The YAML spec says that every node is to have a tag. Any node that does not have an explicit tag gets a non-specific tag assigned. Nodes are divided into scalars (created from textual content) and collections (sequences and mappings). Every non-plain scalar node (i.e. every scalar in quotes or given via | or >) that does not have an explicit tag gets the non-specific tag !; every other node without an explicit tag gets the non-specific tag ?.
During loading, the spec defines that non-specific tags are to be resolved to specific tags by means of a schema. The specification describes some schemas, but does not require an implementation to support any particular one.
The failsafe schema, which is designed to be the most basic schema, will resolve non-specific tags as follows:
on scalars to !!str
on sequences to !!seq
on mappings to !!map
and that's it.
A schema is allowed to derive a specific tag from a non-specific one by considering the kind of non-specific tag, the node's position in the document, and the node's content. For example, the JSON schema will give a scalar true the tag !!bool due to its content.
The spec says that the non-specific tag ! should only be resolved to !!str for scalars, !!seq for sequences, and !!map for mappings, but does not require this. This is what most implementations do, and it means that if you quote your scalar, you will get a string. This is important so that you can quote the scalar "true" to avoid getting a boolean value.
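To make this concrete with one implementation, PyYAML (used here purely as an example; other implementations may resolve differently): its safe loader resolves plain scalars by content roughly along the lines of the core schema, while its BaseLoader behaves much like the failsafe schema and leaves everything as strings.

import yaml  # PyYAML, used here purely as an illustration

print(yaml.safe_load('flag: true'))     # {'flag': True}   - plain scalar resolves to !!bool
print(yaml.safe_load('flag: "true"'))   # {'flag': 'true'} - quoted scalar stays !!str

# BaseLoader resolves roughly like the failsafe schema: everything stays a string
print(yaml.load('flag: true', Loader=yaml.BaseLoader))  # {'flag': 'true'}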
By the way, the spec does not say that every step defined there must be implemented exactly as described; it is more of a logical description. A lot of implementations do not actually transition from non-specific tags to specific tags, but instead directly choose native types for the YAML data they load according to the schema rules.
Applying Equality
Now that we know how tags are assigned to nodes, let's go over your example:
test1: dGVzdDE=
test2: !!binary |
  dGVzdDE=
The two values are immediately not equal because even without the tag, their content differs: literal block scalars (introduced with |) contain the final linebreak, so the value of test2 is "dGVzdDE=\n" and therefore not equal to the test1 value. You can introduce the literal scalar with |- instead to chop the final linebreak, which I suppose is your intent. In that case, the scalar content is identical.
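A quick check of that chomping behaviour, again using PyYAML purely as an example implementation:

import yaml

print(yaml.safe_load("a: |\n  dGVzdDE=\n"))    # {'a': 'dGVzdDE=\n'} - | keeps the final linebreak
print(yaml.safe_load("a: |-\n  dGVzdDE=\n"))   # {'a': 'dGVzdDE='}   - |- chops it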
Now for the tag: The value of test1 is a plain scalar, hence it has a non-specific tag ?. The question is now: Will this be resolved to !!binary? There could be a schema that does this, but the spec doesn't define one. But think about it: a schema that assigns every scalar the tag !!binary if it looks like base64-encoded data would be a very specific one.
As for the other values: The test3 value is a sequence, so obviously not equal to any other value. The test4 value contains content not present anywhere else, therefore also not equal.
But yaml-online-parser does things!
Yes. The YAML spec explicitly states that the target of loading YAML data is native data types. Tags are thought of as generic hints that can be mapped to native data types by a specific implementation. So an !!str for example would be resolved to the target language's string type.
How this mapping to native types is done is implementation-defined (and must be, since the spec cannot cater to every language out there). yaml-online-parser uses PyYAML and what it does is to load the YAML into Python's native data types, and then dump it again. In this process, the !!binary will get loaded into a Python binary string. However, during dumping, this binary string will get interpreted as UTF-8 string and then written as plain scalar. You can argue this is a bug, but it certainly doesn't violate the spec (as the spec doesn't know what a Python binary string is and therefore does not define how it is to be represented).
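The loading half is easy to reproduce. Here is a minimal sketch with PyYAML under Python 3 (where !!binary happens to load as a bytes object; other implementations and language versions map it to different native types):

import yaml  # PyYAML

doc = """
test1: dGVzdDE=
test2: !!binary |
  dGVzdDE=
test3:
  - 116
  - 101
  - 115
  - 116
  - 49
test4: test1
"""

data = yaml.safe_load(doc)
print(repr(data["test1"]))   # 'dGVzdDE='                - plain scalar -> str
print(repr(data["test2"]))   # b'test1'                  - !!binary -> bytes
print(repr(data["test3"]))   # [116, 101, 115, 116, 49]  - sequence of plain ints -> list
print(repr(data["test4"]))   # 'test1'                   - plain scalar -> str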
In any case, this shows that as soon as you transition to native types and back again, anything goes and nothing is certain, because native types are outside of the spec. Different implementations will give you different outputs because they are allowed to. !!binary is not a tag defined in the JSON schema, so even translating your input to JSON is not well-defined.
If you want an online tool that shows you canonical YAML representation without loading data into native types and back, you can use the NimYAML testing ground (my work).
Conclusion
The question of whether two YAML inputs are equal is an academic one. Since YAML allows for different schemas, the question can only be definitively answered in the context of a certain schema.
However, you will find very few formal schema definitions outside of the YAML spec. Most applications that use YAML document their input structure in a less formal way, and most of the time without discussing YAML tags. This is fine because, as discussed before, loading YAML does not need to directly implement the logical process described in the spec.
Your answer for practical purposes should come from the documentation of the application consuming the YAML data. If the documentation is very good, it will answer this, but a lot of YAML-consuming applications just use the default settings of the YAML implementation they use without telling you about this.
So the takeaway is: Know your application and know the YAML implementation it uses.

Inconsistently handled emoji sequences on iOS?

On both iOS and macOS, sequences of regional indicator symbols are rendered as national flag emoji, and if the sequence is invalid the actual symbols are presented instead:
However, if the sequence happens to contain a pair of regional indicator symbols that don't map to a flag emoji, the potential flags are rendered on a first-match basis:
iOS/macOS rendering the symbols: F F I S E S.
In Swift 3, consecutive regional indicator symbols were all lumped into one Character, meaning that one Character object could contain a theoretically limitless number of UnicodeScalar objects, as long as they were all regional indicator symbols. In essence, Swift 3 didn't break up regional indicator symbols at all.
In Swift 4, on the other hand, one Character object contains at most two regional indicator symbols in its Unicode scalar representation. Additionally, and understandably, the validity of the sequence isn't considered, so regional indicator symbol sequences are simply broken up at every two scalars and considered a Character. Now, iterating the same string as above and printing each character produces the following:
Swift 4 string containing the symbols: F F I S E S.
Which brings us to the actual question: is the issue with how iOS and macOS render the sequences, or with how Swift 4 constructs the Character representation in strings?
I'm curious as to which party would be the most appropriate to report this peculiarity to.
Here is a minimal reproducible snippet for the behaviour in Swift 4:
// Regional indicator symbols F F I S E S
let string = "\u{1f1eb}\u{1f1eb}\u{1f1ee}\u{1f1f8}\u{1f1ea}\u{1f1f8}"
for character in string {
    print(character)
}
After some investigation, it appears that neither is wrong, although the method implemented in Swift 4 is more true to recommendations.
As per the Unicode standard (emphasis mine):
The representative glyph for a single regional indicator symbol is just a dotted box containing a capital Latin letter. The Unicode Standard does not prescribe how the pairs of regional indicator symbols should be rendered. However, current industry practice widely interprets pairs of regional indicator symbols as representing a flag associated with the corresponding ISO 3166 region code.
– The Unicode Standard, Version 10.0 – Core Specification, page 836.
Then, on the following page:
Conformance to the Unicode Standard does not require conformance to UTS #51. However, the interpretation and display of pairs of regional indicator symbols as specified in UTS #51 is now widely deployed, so in practice it is not advisable to attempt to interpret pairs of regional indicator symbols as representing anything other than an emoji flag.
– The Unicode Standard, Version 10.0 – Core Specification, page 837.
From this I gather that while the standard doesn't set any rules for how the flags should be rendered, the chosen path for handling the rendering of invalid flag sequences in iOS and macOS is inadvisable. So, even if there exists a valid flag further in the sequence, the renderer should always consider two consecutive regional indicator symbols as a flag.
Finally, taking a look at UTS #51, or "the emoji specification":
Options for presenting an emoji_flag_sequence for which a system does not have a specific flag or other glyph include:
Displaying each REGIONAL INDICATOR symbol separately as a letter in a dotted square, as shown in the Unicode charts. This provides information about the specific region indicated, but may be mystifying to some users.
For all unsupported REGIONAL INDICATOR pairs, displaying the same “missing flag” glyph, such as the image shown below. This would indicate that the supported pair was intended to represent the flag of some region, without indicating which one.
– Unicode Technical Standard #51, revision 12, Annex B.
So, in conclusion, best practice would be representing invalid flag sequences as a pair of regional indicator symbols – exactly as is the case with Character objects in Swift 4 strings – or as a generic missing flag glyph.

Does a parser or a lexer generate a symbol table?

I'm taking a compilers course and I'm recapping the introduction. It's a general overview of how the compiler process works.
I'm a bit confused however.
In my course it states: "in addition a lexical analyzer will typically access the symbol table to store/get information on certain source language concepts". So this leads me to believe that a lexer will actually build a symbol table. The way I see it, it creates tokens, stores them in a table, and records what type of symbol each is, like "x -> VARIABLE", for example.
Then again, reading through Google hits, I can only seem to find vague information suggesting that the parser generates this. But the parsing phase comes after the lexer phase. So I'm a bit confused.
Symbol Table Population after parsing; Compiler building
(States that the parser populates the table)
http://www.cs.dartmouth.edu/~mckeeman/cs48/mxcom/doc/Symbols.html
Says "The symbol table is built by walking the syntax tree.". The syntax tree is generated by the parser, right? (Parse tree). So how can the lexer, which runs before the parser use this symbol table?
I understand that a lexer cannot know the scope of a variable and other information that is contained within a symbol table. Therefore I understand that the parser will add this information to the table. However, a lexer does know whether a word is a variable, a declaration keyword, etc. Thus it should be able to build up a partial (?) symbol table. So could it perhaps be that they each build part of the symbol table?
I think part of the confusion stems from the fact that "symbol table" means different things to different people, and potentially at different stages in the compilation process.
It is generally agreed that the lexer splits the input stream into tokens (sometimes referred to as lexemes or terminals). These, as you say, can be categorized into different types: numbers, keywords, identifiers, punctuation symbols, and so on.
The lexer may store the recognized identifier tokens in a symbol table, but since the lexer typically does not know what an identifier represents, and since the same identifier can potentially mean different things in different compilation scopes, it is often the parser - which has more contextual knowledge - that is responsible for building the symbol table.
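To make that division of labour concrete, here is a deliberately tiny sketch in Python for a hypothetical toy language (not taken from any real compiler): the lexer only classifies lexemes, and the parser, which knows it is looking at a declaration, records the identifier together with its type in the symbol table.

import re

# Token kinds for a toy language; order matters (keywords before identifiers).
TOKEN_SPEC = [
    ("KEYWORD", r"\b(?:int|float)\b"),
    ("IDENT",   r"[A-Za-z_]\w*"),
    ("NUMBER",  r"\d+"),
    ("OP",      r"[=;]"),
    ("SKIP",    r"\s+"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def lex(source):
    """The lexer: classify lexemes, but keep no symbol table of its own."""
    for match in TOKEN_RE.finditer(source):
        if match.lastgroup != "SKIP":
            yield (match.lastgroup, match.group())

def parse(tokens):
    """A minimal 'parser' for declarations of the form 'type ident = number;'.
    Only here is the declaration context known, so only here is the table filled."""
    symbol_table = {}
    toks = list(tokens)
    i = 0
    while i < len(toks):
        kind, text = toks[i]
        if kind == "KEYWORD" and i + 1 < len(toks) and toks[i + 1][0] == "IDENT":
            symbol_table[toks[i + 1][1]] = {"type": text}
            i += 2
        else:
            i += 1
    return symbol_table

print(parse(lex("int x = 42; float y = 7;")))
# {'x': {'type': 'int'}, 'y': {'type': 'float'}}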
However, in some compiler designs the lexer simply builds a list of tokens, which is passed on to the parser (or the parser requests tokens from the input stream on demand), and the parser in turn generates a parse tree (or sometimes an abstract syntax tree) as its output. The symbol table is then built only after parsing has completed for a given compilation unit, by traversing the parse tree.
Many different designs are possible.
