I know one Dafny source file can be included in another, leading to textual concatenation of the files prior to verification. But I don't have a clear mental model of the relationships between includes, imports, and which files are verified when. Perhaps an expert can elaborate. Thanks.
This is at least partially documented in the Dafny Reference Manual in various sections.
Section 2.0 discusses include:
Included files also obey the Dafny grammar. Dafny parses and processes the transitive closure of the original source files and all the included files, but will not invoke the verifier on these unless they have been listed explicitly on the command line.
(emphasis mine)
Section 3.1 discusses import, which is a feature of Dafny's module system. You can only use import inside a module, to make the definitions of another module available there. Note that the Dafny module system is independent of how files are stored on disk, unlike in some other languages. So if the module you are importing is defined in another file, you may have to include that file if you haven't already.
import has no direct influence on whether the imported module is verified or not. Modules in other files will not be verified unless their file is listed on the command line. Modules in the current file are always verified (whether or not they are imported by anyone).
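To make the interplay concrete, here is a small sketch (file, module, and member names are my own invention): helpers.dfy defines a module, and main.dfy both includes the file and imports the module.

// helpers.dfy
module Helpers {
  function Square(x: int): int { x * x }
}

// main.dfy
include "helpers.dfy"   // textually pulls helpers.dfy into the parse

module Main {
  import Helpers        // makes Helpers' definitions available in this module

  method Test() {
    assert Helpers.Square(3) == 9;
  }
}

Verifying main.dfy alone (e.g. dafny verify main.dfy with a recent CLI) checks Main; Helpers is parsed and resolved because of the include, but it is only verified if helpers.dfy is also listed on the command line.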
I have a legacy Windows project using the legacy 32 Bit C++ compiler. For various reasons I need to use the Windows 8+ function PathCchCanonicalizeEx. C++Builder seems to provide the header and some module definition file for that, but I can't find any library to link against:
[ilink32 Error] Error: Unresolved external 'PathCchCanonicalizeEx' referenced from C:\[...]\WIN32\DEBUG\TMP\FILE.OBJ
How am I supposed to fix this? Do I need to add a Windows 8.1 SDK? Is the necessary lib simply named differently and I can't find it? Something completely different?
According to my tests, one has two options:
IMPLIB/MKEXP
I'm developing/testing on Windows 10 21H2, which already provides an implementation of PathCchCanonicalizeEx in some DLL. So if that source DLL is known, one can use IMPLIB or MKEXP to create an import library manually. I did that, and after adding the library created by IMPLIB to my project, the linker errors were instantly gone.
Though, it's not that easy to find out which DLL PathCchCanonicalizeEx is actually placed in. One pretty easily finds api-ms-win-core-path-l1-1-0.dll, but that thing is NOT an actual file on disk and therefore can't be used by IMPLIB or MKEXP. That name is only a virtual concept for the library loader, addressing the API set of the same name in modern Windows; the extension .dll doesn't mean it's a file at all.
You can use an API set name in the context of a loader operation such as LoadLibrary or P/Invoke instead of a DLL module name to ensure a correct route to the implementation no matter where the API is actually implemented on the current device. However, when you do this you must append the string .dll at the end of the contract name. This is a requirement of the loader to function properly, and is not considered actually a part of the contract name. Although contract names appear similar to DLL names in this context, they are fundamentally different from DLL module names and do not directly refer to a file on disk.
https://learn.microsoft.com/en-us/windows/win32/apiindex/windows-apisets#api-set-contract-names
What you really need to work with is KernelBase.dll, which is even documented by MS.
implib "KernelBase x86.lib" C:\Windows\SysWOW64\KernelBase.dll
implib "KernelBase x86-64.lib" C:\Windows\System32\KernelBase.dll
Module Definition File
The downside of manually creating LIB files is that one needs to maintain them with the project. Things depend on whether the target is 32 or 64 bit, DEBUG or RELEASE, so paths might become a bit complex; one might need to create relative paths for the libraries in the project settings using placeholders for the target and the like.
It seems that all of this can be avoided with Module Definition Files, whose purpose is to provide IMPORTS and EXPORTS statements, either to consume functions exported by other DLLs or to make one's own functions available to others. I've successfully resolved my linker problems by simply creating a file named after my app, with the extension .def, alongside my other project files. That file needs to be added to the project, though.
dbxml.cbproj
dbxml.cbproj.local
dbxml.cpp
dbxml.def
dbxml.res
[...]
The following content made the app use the correct function from the correct DLL:
IMPORTS
KernelBase.PathCchCanonicalizeEx
What did not work, though, was using the API set name instead, which only resulted in error messages from the linker:
IMPORTS
api-ms-win-core-path-l1-1-0.PathCchCanonicalizeEx
[ilink32 Error] Invalid command line switch for "ilink32". Parameter "ItemSpec" cannot be null.
[ilink32 Error] Fatal: Error processing .DEF file
The latter error appeared after restarting C++Builder, so I guess the format of the file is simply considered wrong because of the API set name.
Is there a way to share types across fsx files?
When using #load to load the same file containing a type from multiple .fsx files, the types seem to be prefixed with a different FS_00xx namespace each time, which means you can't pass them around.
Are there any ways around this behaviour without resorting to compiling into an assembly?
As per the documentation:
http://msdn.microsoft.com/en-us/library/dd233169.aspx
[.fsx files are] used to include informal testing code in F# without adding the test code to your application, and without creating a separate project for it. By default, script files are not included in the build of a project even when they are part of a project.
This means that if you have a project with enough structure to be having such dependency problems, you should not use .fsx files; instead, write modules/namespaces using .fs files. That is, you really should compile those types into an assembly.
The F# interactive interpreter generates an assembly for each loaded file. If you load a file twice, the bytecode is generated twice, and the types are different even if they have the same definition and the same name. This means that there is no way for you to share types between two .fsx files, unless one of them loads the other.
When you #load a file which has the same types as ones already present in your environment, the F# interactive interpreter can use two different strategies:
refuse to load the file if conflicts with existing names arise (complaining that some stuff is already defined)
put the names in an FS_00xx namespace (so that they are actually different types from the ones you already loaded), eventually opening the resulting namespace so that the names are available from the interactive session.
Since .fsx files are supposed to be used for informal tests, it is more user-friendly to use the second approach (there are also technical reasons for which the second approach is used, mainly related to the .NET VM type system and the fact that existing types cannot be changed at runtime).
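As a rough illustration of the behaviour in the question (all file and member names are made up), suppose Common.fsx defines a type and two other scripts each #load it:

// Common.fsx
type Item = { Name : string }

// A.fsx
#load "Common.fsx"
let make name : Common.Item = { Name = name }

// B.fsx
#load "Common.fsx"
let describe (item : Common.Item) = printfn "item: %s" item.Name

// in an fsi session
#load "A.fsx"
#load "B.fsx"
// B.describe (A.make "x")
// On affected F# interactive versions this fails with a type mismatch,
// because each #load chain received its own FS_00xx copy of Common.Item.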
[Note: This is a more specific answer to a more specific question that is a duplicate of this one.]
I don't think there is a nice and easy solution for this. The one solution I have been using in some projects (like the F# snippets web site) is to have only one top-level fsx file that loads a number of fs files. For example, see app.fsx.
So, you would have common.fs, intMapper.fs and stringMapper.fs that would be loaded from caller.fsx as follows:
#load "common.fs"
#load "stringMapper.fs"
#load "intMapper.fs"
open Common
Inside stringMapper.fs and intMapper.fs, you do not load common.fs. The common types will be loaded by caller.fsx before, so things will work.
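For concreteness, the loaded files might look roughly like this (the type and function names are invented for the example):

// common.fs
module Common
type Mapping = { Key : string; Value : int }

// stringMapper.fs - uses Common but does not #load common.fs itself
module StringMapper
open Common
let describe (m : Mapping) = sprintf "%s -> %d" m.Key m.Value

Because caller.fsx loads common.fs first, StringMapper compiles against that single copy of Mapping.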
The only issue with this is that intMapper.fs now isn't a standalone script file - and if you want to get autocomplete in an editor, you need to add an fsproj file that specifies the file order. In the F# snippets project, there is a project file which specifies the order in which the editor should see and load the files.
Have all the #load directives and open statements in the file you actually run from fsi.exe (C in the example below), and make sure the loaded files themselves do not #load their own dependencies:
Files A.fsx, B.fsx, C.fsx. B depends on A. C depends on B and A.
B contains
//adding the code below would cause the types defined in A to be loaded twice
//#load "A.fsx"
//open A
C contains
#load "A.fsx"
#open A
#load "B.fsx"
#open B
Unfortunately this then makes all the files hard to edit from Visual Studio - the editor doesn't know about their dependencies and shows all sorts of errors.
Therefore this is a bit of a hack, and the recommended way seems to be to have a single .fsx file and compile everything else into a .dll:
// file1.fsx
#r "MyAssembly.dll"
https://msdn.microsoft.com/en-us/library/dd233175.aspx
Beyond allowing one file to use another file's attributes, what actually happens behind the scenes? Does it just provide the location to access that file when its contents are later needed, or does it load the implementation's data into memory?
In short:
The header file defines the API for a module. It's a contract listing which methods a third party can call. The module can be considered a black box to third parties.
The implementation implements the module. It is the inside of the black box. As a developer of a module you have to write this, but as a user of a third party module you shouldn't need to know anything about the implementation. The header should contain all the information you need.
Some parts of a header file could be auto generated - the method declarations. This would require you to annotate the implementation as there are likely to be private methods in the implementation which don't form part of the API and don't belong in the header.
Header files sometimes have other information in them; type definitions, constant definitions etc. These belong in the header file, and not in the implementation.
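A minimal sketch of that split (the names are invented): the header carries declarations, a constant and an opaque type; the .c file carries the definitions plus a private helper; a consumer only #includes the header.

/* counter.h - the contract: declarations, constants, types */
#ifndef COUNTER_H
#define COUNTER_H

#define COUNTER_START 0

typedef struct Counter Counter;   /* opaque: users never see the fields */

Counter *counter_new(void);
void counter_increment(Counter *c);
int counter_value(const Counter *c);

#endif

/* counter.c - the inside of the black box */
#include <stdlib.h>
#include "counter.h"

struct Counter { int value; };

/* private helper: not declared in the header, not part of the API */
static void reset(Counter *c) { c->value = COUNTER_START; }

Counter *counter_new(void) {
    Counter *c = malloc(sizeof *c);
    if (c != NULL) reset(c);
    return c;
}

void counter_increment(Counter *c) { c->value++; }
int counter_value(const Counter *c) { return c->value; }

/* main.c - a third party: only the header is needed to compile this */
#include <stdio.h>
#include <stdlib.h>
#include "counter.h"

int main(void) {
    Counter *c = counter_new();
    counter_increment(c);
    printf("%d\n", counter_value(c));
    free(c);
    return 0;
}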
The main reason for a header is to be able to #include it in some other file, so you can use the functions in one file from that other file. The header includes (only) enough to be able to use the functions, not the functions themselves, so (we hope) compiling it is considerably faster.
Maintaining the two separately mostly results from nobody ever having written an editor that automates the process very well. There's not really a lot of reason they couldn't do so, and a few have even tried to, but the editors that have done so have never done very well in the market, and the more mainstream editors haven't adopted the idea.
Well, I will try:
Header files are only needed in the preprocessing phase. Once the preprocessor is done with them the compiler never even sees them. Obviously, the target system doesn't need them either for execution (the same way .c files aren't needed).
Libraries, on the other hand, come into play during the linking phase. If a program is dynamically linked and the target environment doesn't have the necessary libraries, in the right places, with the right versions, it won't run.
In C nothing like that is needed, since once you compile it you get native code. The header files are simply copy-pasted in when you #include them. It is very different from the bytecode you get from Java: there's no need for an interpreter (like the JVM), you just feed your binary to the CPU and it does its thing.
Why are there no compile-time errors or warnings when I call a function in another module that doesn't exist or has the wrong arity?
The compiler has all of the exports information in a module to make this possible. Is it just not implemented yet or is there a technical reason why it is not possible that I am not seeing?
I don't know why it's missing (probably because modules are completely separate and compilation of one doesn't depend on the other really - but that's just speculation). But I believe you can find problems like this with dialyzer static analysis. Have a look at http://www.erlang.org/doc/man/dialyzer.html
It's part of the system itself, so try including it in your workflow.
It is as others have said. Modules are compiled separately and there is absolutely no guarantee that the environment which exists at compile-time is the same as the one that will exist at run-time. This implies that doing checks at compile-time about the existence of a module, or of a function in it, is basically meaningless. At run-time that module may or may not be loaded, the function you call may or may not be defined in the module, or it may do something completely different from what you expected.
All this is due to the very dynamic nature of Erlang systems. There is no real way as such to define what is in system at run-time. Hot code-loading is a part of this and works properly because of the dynamic nature of the system. It means you can redefine the system at run-time, you can load in new versions of existing modules with a different interface and you can load in completely new modules and remove existing modules.
For this to work all checks about the existence of a module or function must be done at run-time.
Tools like dialyzer can help with this but they do assume that you don't do anything "funny" at run-time and the system you check is the same as the system you run. Which is of course all good, but very static. And against Erlang's nature which is to be dynamic, in everything.
Unfortunately, in this case, you can't both have your cake and eat it.
You may use the xref application to check the usage of deprecated, undefined and unused functions (and more!).
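For example, a test.erl along these lines reproduces the session below (the body of start/0 is my reconstruction from the xref report; erlang:foo/0 does not exist):

-module(test).
-export([start/0]).

%% The compiler does not check remote calls, so this compiles,
%% but xref will report erlang:foo/0 as undefined.
start() ->
    erlang:foo().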
Compile the module with debug_info:
Eshell V6.2 (abort with ^G)
1> c(test, debug_info).
{ok,test}
Check the module with xref:m/1:
2> xref:m(test).
[{deprecated,[]},
{undefined,[{{test,start,0},{erlang,foo,0}}]},
{unused,[]}]
You may want to check out more about xref here:
Erlang -- Xref - The Cross Reference Tool (Tools User's Guide)
Erlang -- xref (Tools Reference Manual)
It is due to hot code loading. Each module can be loaded at any particular time. So when your module A contains code which calls function B:F, you can't tell at compile time that it is wrong just because your source code of module B has no function B:F. Imagine this: you compile module A with a call to B:F. You load module B into memory without function B:F. Then you load module A, which contains the call to B:F, but you don't call it yet. Then you compile a new version of module B with B:F. Then you load this new module, and now you can call B:F and everything is perfectly right. Or imagine your module A generates module B on the fly and loads it. You can't tell at any particular time that it is wrong for module A to contain a call to a nonexistent function B:F.
In my opinion most, if not all, compilers do not verify that a function exists at compile time. What is required in general is a prototype declaration of the function: the type of the return value and the list and types of all arguments. This is done in C/C++ by including some_file.h in each module definition (not the .c or .cpp).
In Erlang this verification is done dynamically, while the program is running, so it is not necessary to include such definitions. It would even be totally useless, because Erlang allows you to upgrade the application while it runs, so a function's type may change, or the function may disappear, on purpose or by mistake, during the application's lifetime; that is why the Erlang designers have chosen to make this verification at run time and not at build time.
The error you speak about generally occurs during the link phase of code generation, when the "compiler" tries to gather individual pieces of object code together to build an executable file or a library; during this phase the linker resolves all the external addresses (for shared variables, static calls...). This phase does not exist in Erlang: a module is totally self-contained; it shares nothing with the rest of the application, neither variable nor function addresses.
Of course, it is mandatory to use some tools and run some tests before updating a running production program, but I consider that these verifications have exactly the same level of importance as the correctness of the algorithm itself.
When you compile e.g. module alpha which has a call to beta:some_function(...), the compiler cannot assume some specific version of beta to be in use at runtime. Maybe you will compile a newer version of beta after you compiled alpha and this will have the correct some_function exported. Maybe you will upload alpha to be used on a different host, which has all the other modules.
The compiler therefore just compiles the remote call and any errors (non-existent module or function) are resolved at run time, when some version of beta will be loaded.
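A tiny sketch of that (alpha and beta are the module names from the explanation above; the wrapper run/1 is made up):

-module(alpha).
-export([run/1]).

%% Compiles even if no beta module exists anywhere yet. Calling
%% alpha:run(X) before a beta exporting some_function/1 is loaded
%% raises an undef error at run time.
run(X) ->
    beta:some_function(X).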
J2ME lacks the java.util.Properties class. Although it is possible to put application settings in the JAD file, this is not recommended for many properties (since some platforms limit the size of the JAD file). I want to put a configuration file inside my JAR file and parse it. And I do not want to go with XML, because it would be overkill for my case.
The question is: is there an existing library for J2ME that can parse properties files or something similar, such as INI files? Or would you recommend another method to solve the initial problem?
The best solution probably depends on what is going to be generating the properties files.
If you've got other non-JavaME projects using the same properties files, then stick with them, and write or find a parser. (There is a simple one from GoBible available on Google Code)
However, you might find it just as easy to keep your configuration as static final String myproperty = "myvalue"; in a Configuration.java file which you compile and include in the jar instead, since you then do not need any special code to locate, open, read, and parse the properties.
You do then pick up a limitation on what you call them though, since you can no longer use the common dot separated namespacing idiom.
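For illustration, such a Configuration class could look like this (the field names and values are made up); note the underscores where a properties file would use dotted keys:

// Configuration.java - compiled and shipped inside the JAR
public final class Configuration {

    // would have been "db.url" / "db.user" keys in a properties file
    public static final String DB_URL = "socket://example.com:5000";
    public static final String DB_USER = "guest";
    public static final int MAX_RETRIES = 3;

    private Configuration() {
        // constants only; no instances
    }
}

Any class in the application can then read Configuration.DB_URL directly, with no file I/O or parsing at startup.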