function 'Func/Arity' already imported from 'Module' - erlang

I defined both area/1 and perim/1 in the modules square and circle.
I want to import and use them in another module. Here are my import statements:
-import(square, [area/1, perim/1]).
-import(circle, [area/1, perim/1]).
I got these error messages.
~/test.erl:4: function area/1 already imported from square
~/test.erl:4: function perim/1 already imported from square
I know Erlang does not support namespaces. But since we can qualify a function call by specifying the module (i.e. square:area vs circle:area), I fail to see how the lack of namespaces is the source of the error here.
So, what exactly caused the above error and how can I fix it?

In Erlang, "importing" a function from another module means being able to call it as if it were a local function, without the module prefix. So with this directive:
-import(square, [area/1, perim/1]).
you could write area(42) and it would mean the same as square:area(42).
However, if you import area and perim from two modules, it would be ambiguous which one you'd actually be calling when writing area(42).
As you correctly note, you can always qualify the function call with the name of the module, i.e. square:area(42) and circle:area(42) - so I would suggest doing so consistently and removing both import directives. This is also recommended by rule 6.6 of the Erlang Programming Rules - "Don't use import".
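For concreteness, a minimal sketch of what the question's module might look like with both import directives removed (the module name test and the function areas/1 are just illustrative):

-module(test).
-export([areas/1]).

%% Every call is qualified with its module, so there is no ambiguity
%% between square:area/1 and circle:area/1.
areas(R) ->
    {square:area(R), circle:area(R)}.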

Related

Identifier shadows elements in Dart:core

If an identifier is defined in the current Dart file, then that identifier shadows elements with the same name that are defined in other files included via the import directive. When using such an identifier in the current file, no errors occur; the version from the current file is used.
If some identifier is defined in file 1 and file 2, but is not defined in the current file, then an error occurs when trying to use that identifier, which can be eliminated by using import prefixes or the show/hide combinators.
The logic described above is clear. But when working with the dart:core library, the following error may occur. Let's consider the cause of the error using the print function as an example.
If the print function is not defined in the current file but is defined in one of the imported files, the dart:core version is grayed out and silently shadowed. No compilation errors that would require an import prefix or a show/hide combinator are generated.
The described situation can lead to confusion. The same applies to some other identifiers from the dart: libraries. There is an example below.
// test2.dart
void print() {
}

// a useful helper, also defined in test2.dart
void print2() {
}

// test.dart
import 'test2.dart';

void main() {
  // calls the helper from test2.dart
  print2();
  // static error due to inappropriate argument types,
  // not due to a collision with the print function from dart:core
  print('work is done');
}
The error persists even when dart:core is imported explicitly.
Given the above, I have the following questions.
Will the Dart developers read my message, or where should it be forwarded so that they read it?
Is this really a bug, or was it intended? I propose correcting it so that the language is simpler and more understandable.
I expected that if an identifier in one of the files matches an identifier from the dart:core library, the behavior would be the same as with any other package or file.
The rules for name conflicts are special for names coming from platform (dart:) libraries.
Normally if you import different declarations with the same name through two different imports, it's a conflict. You get an error if you try to refer to the name, because the compiler cannot figure out which one you mean.
However, if you import different declarations with the same name, and some, but not all, come from dart: libraries, then the declarations from the dart: libraries are ignored, and the non-platform libraries take precedence. (And if there is only one declaration from a non-dart: library, it just works as normal.)
The reason for this exception is that Dart libraries typically come from packages, where you can control which version of the library you get.
You won't get new declarations without doing a dart pub upgrade or similar. That means a new name conflict is only introduced at times when you are actively developing, and you can handle it then.
Platform libraries come from the SDK, and your SDK can be updated independently of the packages you are working on. Even if you tested your application thoroughly and locked all your dependencies to specific versions that you know will work, it might end up being run on a newer SDK.
Because of that, adding new declarations to the platform libraries was considered more dangerous than adding them to normal packages. To avoid it being practically impossible to add anything to the platform libraries, it was instead decided to not make such new conflicts into errors.
If the platform libraries introduce a new declaration, it should not affect your imports. It won't introduce a new conflict.
(It can cause other errors, because no API change in Dart is ever entirely safe, but at least it won't cause import conflicts.)

Why does Eunit not require test functions to be exported?

I'm going through the EUnit chapter in Learn You Some Erlang and one thing I am noticing from all the code samples is the test functions are never declared in -export() clauses.
Why is EUnit able to pick these test functions up?
From the documentation:
The simplest way to use EUnit in an Erlang module is to add the following line at the beginning of the module (after the -module declaration, but before any function definitions):
-include_lib("eunit/include/eunit.hrl").
This will have the following effect:
Creates an exported function test() (unless testing is turned off, and the module does not already contain a test() function), that can be used to run all the unit tests defined in the module
Causes all functions whose names match ..._test() or ..._test_() to be automatically exported from the module (unless testing is turned off, or the EUNIT_NOAUTO macro is defined)
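Putting this together, a minimal sketch of a module using EUnit (module and function names are illustrative):

-module(math_utils).
-export([double/1]).
-include_lib("eunit/include/eunit.hrl").

double(X) -> 2 * X.

%% Not listed in -export, but exported automatically because of the
%% _test suffix; run it with math_utils:test() or eunit:test(math_utils).
double_test() ->
    ?assertEqual(4, double(2)).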
Glad I found this question because it gives me a meaningful way to procrastinate and I was wondering how functions get created and exported dynamically.
Started by looking at the latest commit affecting EUnit in the Erlang/OTP Github repo, which is 4273cbd. (The only reason for this was to find a relatively stable anchor instead of git branches.)
0. Include EUnit's header file
According to EUnit's User's Guide, the first step is to -include_lib("eunit/include/eunit.hrl"). in the tested module, so I assume this is where the magic happens.
1. otp/lib/eunit/include/eunit.hrl (lines 79 - 91)
%% Parse transforms for automatic exporting/stripping of test functions.
%% (Note that although automatic stripping is convenient, it will make
%% the code dependent on this header file and the eunit_striptests
%% module for compilation, even when testing is switched off! Using
%% -ifdef(EUNIT) around all test code makes the program more portable.)
-ifndef(EUNIT_NOAUTO).
-ifndef(NOTEST).
-compile({parse_transform, eunit_autoexport}).
-else.
-compile({parse_transform, eunit_striptests}).
-endif.
-endif.
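As an aside, here is a sketch of the more portable pattern the comment above recommends, wrapping test code in -ifdef(EUNIT) (as far as I can tell the header defines EUNIT unless NOTEST is set, so the test simply disappears when testing is turned off; the test function is illustrative):

-include_lib("eunit/include/eunit.hrl").

-ifdef(EUNIT).
reverse_test() ->
    ?assertEqual([3,2,1], lists:reverse([1,2,3])).
-endif.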
1.1 What does -compile({parse_transform, eunit_autoexport}). mean?
From the Erlang Reference Manual's Module chapter (Pre-Defined Module Attributes):
-compile(Options).
Compiler options. Options is a single option or a list of options. This attribute is added to the option list when
compiling the module. See the compile(3) manual page in Compiler.
On to compile(3):
{parse_transform,Module}
Causes the parse transformation function
Module:parse_transform/2 to be applied to the parsed code before the
code is checked for errors.
From the erl_id_trans module:
This module performs an identity parse transformation of Erlang code.
It is included as an example for users who wants to write their own
parse transformers. If option {parse_transform,Module} is passed to
the compiler, a user-written function parse_transform/2 is called by
the compiler before the code is checked for errors.
Basically, if module M includes the {parse_transform, Module} compile option, then all of M's functions and attributes can be iterated through using your implementation of Module:parse_transform/2. Its first argument is Forms, which is M's module declaration described in Erlang's abstract format (documented in the Erlang Run-Time System Application (ERTS) User's Guide).
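To make the hook concrete, this is roughly the smallest possible parse transform: an identity transform that returns the forms unchanged (a sketch; eunit_autoexport obviously does far more than this):

-module(my_identity_transform).
-export([parse_transform/2]).

%% Forms is the module's code in abstract format; Options are the
%% compile options. Returning Forms unchanged leaves the module as-is.
parse_transform(Forms, _Options) ->
    Forms.

A module compiled with -compile({parse_transform, my_identity_transform}). would come out exactly as written.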
2. otp/lib/eunit/src/eunit_autoexport.erl
This module only exports parse_transform/2 to satisfy the {parse_transform, Module} compile option, and its first order of business is to figure out the configured suffixes for test case functions and generators. If not set manually, these default to _test and _test_ respectively (via lib/eunit/src/eunit_internal.hrl).
It then scans all the functions and attributes of your module using eunit_autoexport:form/5, and builds a list of functions to be exported where the suffixes above match (plus the originally exported functions; I may be wrong on this one...).
Finally, eunit_autoexport:rewrite/2 builds a module declaration from the original Forms (given to eunit_autoexport:parse_transform/2 as the first argument) and the list of functions to be exported (that was supplied by form/5 above). On line 82 it injects the test/0 function mentioned in the EUnit documentation.
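Roughly speaking, and judging only from the documentation quoted earlier rather than from the exact generated forms, the injected wrapper behaves as if you had written:

test() ->
    eunit:test(?MODULE).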

Why does Erlang have arity in its imports?

I find Erlang's module arity import (/n, where n is the number of arguments) rather bizarre.
In Java and various other languages you can do something like:
import static com.stuff.Blah.myFunction;
Which will import all overloaded Blah.myFunction(..) regardless of parameters.
Besides, I guess, being explicit, why did the language designers decide this was a good idea (I'm not trying to criticize the language... just curious)?
Does it have to do with code swapping?
Or does it have to do with hiding guard methods for recursion? If so why not allow arity on export but no need for arity on import?
Why would I want to be that explicit? That is, import the two-argument function but not the three-argument version of myFunction?
You should be aware of what importing functions in Erlang really does. It is a pure textual transformation. If I do an -import(foo, [bar/1,baz/2]). it means that when I write a call like bar(5) or baz(a, 3) the compiler transforms these to foo:bar(5) and foo:baz(a, 3). That is all it does, nothing else. It doesn't check anything:
It doesn't check if the module foo contains the functions bar/1 or baz/2.
It doesn't even check if the module foo exists.
Really all it does is hide that you are calling a function in another module. That is why the recommendation from experienced Erlangers is "don't use it". It was a mistake. Unfortunately it is much easier to add stupid things than to get rid of them so we were never able to remove it.
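You can see this for yourself: the following compiles cleanly even if no module named foo exists anywhere, and the call only fails at run time with an undef error (foo and bar/1 are just placeholders):

-module(m).
-export([f/0]).
-import(foo, [bar/1]).

f() ->
    bar(5).    %% the compiler simply rewrites this to foo:bar(5)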
"Does it have to do with code swapping?"
Yes, sort of. The unit of all code handling in Erlang is the module. So you compile modules, load modules, purge and delete modules. This means that there are no inter-module dependencies at all in the system, and the compiler makes no assumptions about other modules when it is compiling a module. No assumptions are made that the environment in which a module is compiled will be the same in which it is run. That is why the system checks at run time whether the function you are trying to call in another module exists, or even whether the module itself exists. That is also why import is a purely textual transformation.
Erlang was originally developed in Prolog.
In Prolog, the arity adds meaning beyond what you would consider "the arguments of a function" in a procedural programming language. But that model does not apply here.
The so-called clauses 'married(X,Y).' and 'married(X,Y,Z).' imply different kinds of 'married' relationships, which can be declared as married/2 and married/3.
In procedural programming, 'add(a,b)' or 'add(a,b,c)' are intended to perform addition on a different number of arguments. That's not necessarily the case in Prolog, where it is possible to have the relation 'a and b, added' or 'a, b and c, added' mean something else. Needless to say, Prolog allows you to declare 'add' the way you would expect a function to work, but it allows for more. More available meaning means more need to control it.
And as in any module system, selecting what you want to expose to external clients makes sense: hence the declaration of arity.
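In Erlang terms the same point looks like this: add/2 and add/3 are entirely separate functions, and export (and import) pick each one out individually by arity (a sketch with illustrative names):

-module(adder).
-export([add/2]).            %% only the two-argument version is exposed

add(A, B) -> add(A, B, 0).   %% add/3 stays internal to the module
add(A, B, C) -> A + B + C.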
Does it have to do with code swapping?
Kind of. The modules in Erlang are compiled separately (which is part of what allows code swapping), unlike Java classes, so the compiler doesn't know how many versions of the imported function with different arities exist. It could assume that all calls of a function with the given name come from the same module, of course, but the designers likely decided it wasn't particularly useful.
In fact, you rarely want to use imports at all, at least in my experience, just as you rarely use static imports in Java. Just write module:function, like Class.staticMethod.
Or does it have to do with hiding guard methods for recursion?
No, since not importing functions doesn't hide them in any way.

Erlang: "extending" an existing module with new functions

I'm currently writing some functions related to lists that could possibly be reused.
My question is:
Are there any conventions or best practices for organizing such functions?
To frame this question, I would ideally like to "extend" the existing lists module such that I'm calling my new function the following way: lists:my_function(). At the moment I have lists_extensions:my_function(). Is there any way to do this?
I read about Erlang packages and that they are essentially namespaces in Erlang. Is it possible to define a new namespace for Lists with new Lists functions?
Note that I'm not looking to fork and change the standard lists module, but to find a way to define new functions in a new module also called Lists, and to avoid the consequent naming collisions by using some kind of namespacing scheme.
Any advice or references would be appreciated.
Cheers.
To frame this question, I would ideally like to "extend" the existing lists module such that I'm calling my new function the following way: lists:my_function(). At the moment I have lists_extensions:my_function(). Is there any way to do this?
No, so far as I know.
I read about Erlang packages and that they are essentially namespaces in Erlang. Is it possible to define a new namespace for Lists with new Lists functions?
They are experimental and not generally used. You could have a module called lists in a different namespace, but you would have trouble calling functions from the standard module in this namespace.
Here are some reasons not to use lists:your_function() and to use lists_extension:your_function() instead:
Generally, the Erlang/OTP Design Guidelines state that each "application" -- libraries are also applications -- contains modules. You can then ask the system which application introduced a specific module. This system would break if modules were fragmented across applications.
However, I do understand why you would want a lists:your_function/N:
It's easier to use for the author of your_function, because he needs your_function(...) a lot when working with lists. But when another Erlang programmer -- who knows the stdlib -- reads this code, he will not know what it does. This is confusing.
It looks more concise than lists_extension:your_function/N. That's a matter of taste.
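For what it's worth, a lists_extension module is just an ordinary module; a minimal sketch (the function is only an example):

-module(lists_extension).
-export([intersperse/2]).

%% Insert Sep between the elements of a list, e.g.
%% lists_extension:intersperse(0, [1,2,3]) returns [1,0,2,0,3].
intersperse(_Sep, [])      -> [];
intersperse(_Sep, [X])     -> [X];
intersperse(Sep, [X | Xs]) -> [X, Sep | intersperse(Sep, Xs)].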
I think this method would work on any distro:
You can make an application that automatically rewrites the core erlang modules of whichever distribution is running. Append your custom functions to the core modules and recompile them before compiling and running your own application that calls the custom functions. This doesn't require a custom distribution. Just some careful planning and use of the file tools and BIFs for compiling and loading.
* You want to make sure you don't append your functions every time. Once you rewrite the file, it will be permanent unless the user replaces the file later. You could use a check with module_info to confirm whether your custom functions already exist, to decide whether you need to run the extension writer (a sketch of such a check appears at the end of this answer).
Pseudo Example:
lists_funs() -> ["myFun() -> <<\"things to do\">>."].

extend_lists() ->
    {ok, Io} = file:open(?LISTS_MODULE_PATH, [append]),
    lists:foreach(fun(Fun) -> io:format(Io, "~s~n", [Fun]) end, lists_funs()),
    file:close(Io),
    %% c/1 is a shell command; from module code use c:c/1 (or compile:file/2)
    c:c(?LISTS_MODULE_PATH).
* You may want to keep copies of the original modules, to restore them if the compiler fails; that way you don't have to do anything heavy if you make a mistake in your list of functions, and you can also use them as the source any time you want to rewrite a module to extend it with more functions.
* You could use a lists_extension module to keep all of the logic for your functions and have the functions appended to lists simply delegate to it: funName(Args) -> lists_extension:funName(Args).
* You could also make an override system that searches for existing functions and rewrites them in a similar way, but that is more complicated.
I'm sure there are plenty of ways to improve and optimize this method. I use something similar to update some of my own modules at runtime, so I don't see any reason it wouldn't work on core modules also.
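A sketch of the module_info check mentioned above, assuming the appended function is the myFun/0 from the pseudo example:

%% Only run the extension writer if the custom function is not
%% already present in the (re)compiled lists module.
needs_extension() ->
    not lists:member({myFun, 0}, lists:module_info(functions)).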
I guess what you want to do is to have some of your functions accessible from the lists module. It is good that you want to turn commonly used code into a library.
One way to do this is to test your functions well, and if they are fine, copy the functions and paste them into the lists.erl module (WARNING: ensure you do not overwrite existing functions; just paste at the end of the file). This file can be found at $ERLANG_INSTALLATION_FOLDER/lib/stdlib-{$VERSION}/src/lists.erl. Make sure that you add your functions to those exported from the lists module (in the -export([your_function/1,.....])) to make them accessible from other modules. Save the file.
Once you have done this, you need to recompile the lists module. You could use an Emakefile. The contents of this file would be as follows:
{"src/*", [verbose,report,strict_record_tests,warn_obsolete_guard,{outdir, "ebin"}]}.
Copy that text into a file called Emakefile. Put this file in the path $ERLANG_INSTALLATION_FOLDER/lib/stdlib-{$VERSION}/Emakefile.
Once this is done, open an Erlang shell whose current working directory (pwd()) is the directory containing the Emakefile, i.e. $ERLANG_INSTALLATION_FOLDER/lib/stdlib-{$VERSION}/.
Call the function: make:all() in the shell and you will see that the module lists is recompiled. Close the shell.
Once you open a new Erlang shell, and assuming you exported your functions in the lists module, they will work the way you want, right in the lists module.
Erlang being open source allows us to add functionality and to recompile and reload the libraries. This should do what you want. Success!

How can I tell if a module is being called dynamically or statically?

How can I tell if a module is being called dynamically or statically?
If you are operating on z/OS, you can accomplish this, but it is non-trivial.
First, you must trace up the save area chain and use CSVQUERY to find out which program owns each save area. Every other program will be a Cobol runtime module, like IGZCPAC. Under IMS, CICS, TSO, et al, those modules might be different. That is the easy part.
Once you know who owns all the relevant save areas, you can use the OS LOADER / BINDER / LINKER utilities to discover what artifacts are in the same modules. This is the non-easy part.
The ONLY way is to look at the output of the linkage editor (IEWL) or at the load module itself. If the module is being called DYNAMICALLY then it will not exist in the main module; if it is being called STATICALLY then it will be seen in the load module. Calling a working storage variable containing a program name does not make a DYNAMIC call. This type of calling is known as IMPLICIT calling, as the name of the module is implied by the contents of the working storage variable, as opposed to calling a program name literal.
Calling a working storage variable,
containing a program name, does not
make a DYNAMIC call.
Yes it does. Call variablename is always DYNAMIC.
Call 'literal' is dynamic or static according to the DYNAM/NODYNAM compiler option.
Caveat: This applies for IBM mainframe COBOL and I believe it is also part of the standard. It may not apply to other non-standard versions of COBOL.
For Micro Focus COBOL, static linking is controlled via the call-convention on the call (bit 3) or via the compiler directive LITLINK.
When linking statically, the case of the program-id/entry-point and of the call itself is important, so you may want to ensure it is exact and use the CASE directive.
The reverse of the LITLINK directive is the NOLITLINK directive, or a call-convention without bit 3 set.
On Windows you can see the exported symbols in your .dll by using the "dumpbin /exports" utility and on Unix via the 'nm' utility.
An import .lib for the .dll created via "cbllink" can be created by using the '-K' command line option on cbllink.
Look at the call statement. If the called program is described in a literal then it's a static call. It's called a dynamic call if the called program is determined at runtime:
* Static call
call "THEPROGRAM"
* Dynamic call
call wsProgramName

Resources