Rascal slow at importing modules

I am running Rascal from the REPL and it seems to take quite a long time to import some modules. For example, import lang::java::\syntax::Java15; takes several seconds to run.
I've also noticed cases where modules that depend on other modules don't appear to be reloaded if they are changed. For example:
program 1:
module A::A
....
program 2:
module B::B
import A::A;
...
REPL:
import A::A;
import B::B;
Now I've made some changes to A and B and I import B again. I would imagine the changes to A would get propagated to the new version of B (since it is importing A) but this doesn't seem to happen.
Why is importing this slow and is there a way to speed this up?
How does importing packages with dependencies in the REPL work?
Thanks!

We recently changed quite a bit about this part of the implementation. So could you tell us which version you are using?
Importing is slow right now because, as far as I can remember, we have a bottleneck in the parsing infrastructure. You can speed it up by not running the console in Debug mode (i.e. use Run As...); giving Eclipse more memory also helps (I use a 1.8 GB heap and an 80 MB stack).
The REPL works in Eclipse by monitoring which modules have changed since the previous command was run. When a new command is entered, such as an import, all modules which have changed, together with the modules they depend on, are first purged. This produces an initial worklist for reloading, which is then processed in a fixpoint fashion to load the new modules (each module only once); finally, the command itself is executed.

Related

:erlang.load_nif/2 finds shared library file inside original project but can't find it if the project gets imported

I've built a small Elixir application that uses NIF functions to execute some C++ code.
The NIFs are loaded via:
def load_nifs do
  :erlang.load_nif('<relative_path_to_lib>/<lib_name>', 0)
  :ok
end
and this works fine.
Now I want to integrate this app into another project. The problem now is that load_nif throws:
Failed to load NIF library: '<relative_path_to_lib>/<lib_name>.so: cannot open shared object file: No such file or directory'
although nothing has changed. I checked the deps folder and the shared library files are exactly where they are supposed to be, so the dependency seems to be loaded correctly. I also tried putting the .so files into the same folder as the module that calls load_nif (and omit <relative_path_to_lib>/) as well as providing an absolute path, all to no avail.
Any help is appreciated, Cheers.
Relevant info regarding my system:
OS: Ubuntu 22.04
Elixir version: Elixir 1.13.0 (compiled with Erlang/OTP 24)
Update:
The issue does not seem to be that files are located at the wrong place, as it finds the files during the first test run after compilation.
However, the error occurs when I repeat the run. It seems that the error message is wrong, since no files are deleted during the test.
If I repeat the function call within one test multiple times there's no problem, so the issue is not that the NIF function is executed multiple times, but that the test containing it is run multiple times.
Solution:
I still have no idea what causes this behavior but after putting the .so files into a priv directory and accessing them via
:erlang.load_nif(:code.priv_dir(:<app_name>), 0)
the tests pass.
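For reference, here is a minimal sketch of that approach; the app name :my_app and library name 'my_nif' below are placeholders, not names from the original project:
def load_nifs do
  # Build an absolute path to priv/my_nif.so; load_nif takes the path without the .so extension.
  # :my_app and 'my_nif' are hypothetical names.
  path = :filename.join(:code.priv_dir(:my_app), 'my_nif')
  :ok = :erlang.load_nif(path, 0)
end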

Xcode 11 recompiles too much

Xcode 11 is recompiling (nearly?) my whole project, even if I just change a local private variable, or change the value of a constant in local scope, sometimes even in local private function scope. I can sometimes get 2 or 3 changes with quick builds as expected, but soon enough it decides to recompile everything again (which takes too long).
Any ideas what might be going on? Is Xcode not able to determine what has changed, and why does it recompile so much other code (even other modules)?
Any advice is highly appreciated, thanks!
We had the same problem and we fixed it. Twice.
Incremental build (same build machine):
before: ~10m
after: ~35s
HOW?
Let's start with our experience first. We had a massive Swift/Obj-C project and that was the main concern: build times were slow and you had to create a new project to implement a new feature (literally). Bonus points for never-working syntax highlighting.
Theory
To truly fix this you have to truly understand how the build system works.
For example, let's try this code snippet:
import FacebookSDK
import RxSwift
import PinLayout
and imagine you use all of these imports in your file. Also, this file depends on another file, which depends on other libraries, which in turn use other libraries, and so on.
So to compile your file, Xcode has to compile every library you mentioned and every file it depends on; if you change one of the "core" files, Xcode has to rebuild literally the whole project.
The Xcode build is multi-threaded, but it consists of many single-threaded trees.
So in the first step of every incremental build, Xcode decides which files have to be recompiled and builds an AST. If you change a file that other files depend on, every file which depends on it has to be recompiled.
So the first advice is to lower coupling. Your project's parts have to be independent of each other.
Obj-C/Swift bridge
The problem with those trees: if you're using an Obj-C/Swift bridge, Xcode has to go through more phases than usual:
Perfect world:
Build Obj-C code
Build Swift code
Obj-C/Swift bridge:
[REPEATABLE STEP] Build Swift code, which is needed to compile Obj-C code
[REPEATABLE STEP] Build Obj-C code, which is needed to compile Swift code
Repeat 1 & 2 until you have only non-dependable Swift & Obj-C code left
Build Obj-C code
Build Swift code
So if you change something from step 1 or 2, you're basically in trouble.
The best solution is to minimize Obj-C/Swift Bridge (and remove it from your project).
If you don't have an Obj-C/Swift Bridge, that's awesome and you're good to go to the next step:
Swift Package Manager
Time to move to SwiftPM (or at least configure your Cocoapods better).
Thing is, most frameworks with default Cocoapods configuration drag along with themselves a lot of stuff you don't need.
To test this, create an empty project with only one dependency (PinLayout, for example) and try to build this code with CocoaPods (default configuration) and with SwiftPM.
import PinLayout
final class TestViewController: UIViewController {
}
Spoiler: CocoaPods will compile this code, because CocoaPods imports EVERY IMPORT of PinLayout (including UIKit), while SwiftPM will not, because SwiftPM imports frameworks atomically.
Dirty hack
Do you remember Xcode build is multi-threaded?
Well, you can abuse it if you are able to split your project into many independent pieces and import all of them as independent frameworks into your project. It does lower the coupling, and that was actually the first solution we used, but it wasn't in fact very effective, because we could only reduce the incremental build time to ~4-5m, which is NOTHING compared to the first method.
There's no silver bullet here, but plenty of things to check:
Make sure you're actually using the Debug configuration in your scheme
See below for how to ensure you're using incremental builds versus whole module per matt's advice. Also make sure your Optimization Level for Debug builds is none.
If you're using type-inference-heavy frameworks like RxSwift, adding explicit type annotations can speed up build times (see the sketch below).
If the project is very big, you could consider refactoring logical groups of source files out into frameworks, but that may be a more drastic change than you'd prefer.
It might help if you provided some more specifics about the project: are you statically linking any libraries? Is it a framework or app target? How big is it, and what Swift version are you using? Do you have any custom Build Phases, like linters or code generation, that could be skipped sometimes?
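For the type-annotation point above, here is a small illustration; the RxSwift chain is a made-up example, not code from the question:
import RxSwift

// Slow to type-check: the compiler has to infer the type of the whole chain.
let implicit = Observable.just(1).map { $0 * 2 }.filter { $0 > 0 }

// Faster: the explicit annotation gives the type checker its answer up front.
let explicit: Observable<Int> = Observable.just(1).map { $0 * 2 }.filter { $0 > 0 }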

How may I import a Node module in the client code of my Electron app?

I'm building a board game from ES6 modules using Electron 2 (for Chromium 61+) and the esm shim on the server side of things. This is the first time I've written isomorphic JavaScript, let alone ES6 modules; I intend to be able to run game logic on the client in single-player mode, and on the server in networked play mode. So far so good, I'm happy to report! And it's satisfying to not rely on any heavy transpilers.
Now, though, I have a problem: I intend to use types from Immutable JS on the client as well as the server, and I only know how to import them into the server code. Until now, all the import statements in the isomorphic code referred to other JS modules in the app, not to dependencies from npm. A module like the one below causes an "Uncaught TypeError: Failed to resolve module specifier 'immutable'" runtime error in the client when the app loads:
import Immutable from "immutable";
Immutable.List.of([]);
export const foo = {};
In fact, I'm virtually certain that the import statement is failing because Chromium can't resolve "immutable" to a JS file. But how am I supposed to go about resolving it? And is there a way to resolve it that would work for any node module that is written to be isomorphic?
TL;DR: you can't, without the help of a bundler like webpack, as long as you're using npm modules.
Most of the Node.js package ecosystem is not ready for native ES modules yet. Roughly 99% of the packages published on npm currently use Node.js's CommonJS module system, while very few modules are written to support ESM (native ES module syntax) as well.
The esm shim is intended to help with the latter case: if a module is written in ESM but has to be imported in a Node.js version that doesn't support it, the shim helps resolve those modules. The opposite case doesn't work. Chromium can import your own code directly, since it is written in native syntax, but it then tries to resolve the dependency module you specified and fails, because 1. it doesn't know where to resolve it from (it doesn't follow Node.js's module resolution rules), and 2. even if it could resolve it, the actual import would fail because the module would be a CommonJS export instead of a native ES module.
Coming back to the TL;DR above: if the intention is to have isomorphic code that runs in both processes, use a bundler accordingly.
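A minimal sketch of what that could look like with webpack; the file names and entry point below are assumptions, not taken from the question:
// webpack.config.js -- bundle the client-side code so that bare specifiers
// like "immutable" are resolved from node_modules at build time.
const path = require('path');

module.exports = {
  mode: 'development',
  target: 'electron-renderer', // allow the bundled code to run in Electron's renderer process
  entry: './src/client/main.js', // hypothetical entry point of the client code
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'client.bundle.js'
  }
};
Point the renderer's index.html at dist/client.bundle.js instead of the original module, and the import of immutable is then resolved by webpack rather than by Chromium.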

How to reload all OTP code when developing an OTP application?

While I'm learning OTP I've been making a lot of changes to the .app and .erl files and re-running my application to see the effect of the changes.
I've tried the following sequence of commands to pick up all my new changes, but it doesn't seem to work:
Compile src files ...
erlc -o ebin src/*.erl
... followed by this in the Erlang shell:
application:stop(my_app).
application:unload(my_app).
application:load(my_app).
application:start(my_app).
However, this doesn't seem to work. The only way I have found that works is to exit the Erlang shell, recompile the app, and then run application:start(my_app).
Is there an easier way of picking up my changes?
Calling application:load(App) (after stopping and unloading) will reload the .app file but not the modules. As the documentation says: "Note that the function does not load the actual Erlang object code."
If you were to do an upgrade using releases, you would ship an .appup file that specifies which modules to reload on upgrade to the new version (no need to reload all of them if only one or two have changed), but if you're just developing and don't want to stop and restart everything, you'll have to set up your own helper functions for reloading code.
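For example, a minimal sketch of such a helper (the module name reloader and the module names in the usage line are hypothetical):
%% reloader.erl -- purge and reload an explicit list of modules
%% after they have been recompiled with erlc.
-module(reloader).
-export([reload/1]).

reload(Modules) ->
    [begin
         code:purge(M),                    %% drop the old version of the code
         {module, M} = code:load_file(M),  %% load the freshly compiled .beam
         M
     end || M <- Modules].
Usage in the shell, assuming ebin is in the code path:
1> reloader:reload([my_app_sup, my_app_worker]).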
Edit: Since OTP 20 (2017), the interactive Erlang shell now has the lm() function for loading all modules whose .beam files have changed, so there is no need to roll your own utility function for this anymore. See https://erlang.org/doc/man/c.html#lm-0
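For example, after recompiling with erlc as above, a shell session might look like this (the module name is hypothetical):
1> lm().
[{module,my_app_worker}]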

No Erlang compile time errors for missing functions

Why are there no compile-time errors or warnings when I call a function in another module that doesn't exist or has the wrong arity?
The compiler has all of the export information of a module available to make this possible. Is it just not implemented yet, or is there a technical reason why it is not possible that I am not seeing?
I don't know why it's missing (probably because modules are completely separate and compilation of one doesn't depend on the other really - but that's just speculation). But I believe you can find problems like this with dialyzer static analysis. Have a look at http://www.erlang.org/doc/man/dialyzer.html
It's part of the system itself, so try including it in your workflow.
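For example, a typical invocation from the command line (the src/ path is an assumption about your project layout) would be:
dialyzer --build_plt --apps erts kernel stdlib
dialyzer --src src/
The first command only has to be run once to build the PLT; the second analyzes your source files and reports, among other things, calls to functions that do not exist.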
It is as others have said. Modules are compiled separately and there is absolutely no guarantee that the environment which exists at compile time is the same as the one that will exist at run time. This implies that doing checks at compile time about the existence of a module, or of a function in it, is basically meaningless. At run time that module may or may not be loaded, the function you call may or may not be defined in the module, or it may do something completely different from what you expected.
All this is due to the very dynamic nature of Erlang systems. There is no real way as such to define what is in the system at run time. Hot code loading is a part of this and works properly because of the dynamic nature of the system. It means you can redefine the system at run time: you can load in new versions of existing modules with a different interface, and you can load in completely new modules and remove existing modules.
For this to work all checks about the existence of a module or function must be done at run-time.
Tools like dialyzer can help with this but they do assume that you don't do anything "funny" at run-time and the system you check is the same as the system you run. Which is of course all good, but very static. And against Erlang's nature which is to be dynamic, in everything.
Unfortunately, in this case, you can't both have your cake and eat it.
You may use the xref application to check the usage of deprecated, undefined and unused functions (and more!).
Compile the module with debug_info:
Eshell V6.2 (abort with ^G)
1> c(test, debug_info).
{ok,test}
Check the module with xref:m/1:
2> xref:m(test).
[{deprecated,[]},
{undefined,[{{test,start,0},{erlang,foo,0}}]},
{unused,[]}]
You may want to check out more about xref here:
Erlang -- Xref - The Cross Reference Tool (Tools User's Guide)
Erlang -- xref (Tools Reference Manual)
It is due to hot code loading. Each module can be loaded at any particular time. So when your module A contains code which calls the function B:F, you can't tell at compile time that it is wrong just because the source code of module B has no function B:F. Imagine this: you compile module A with a call to B:F. You load a version of module B into memory without the function B:F. Then you load module A, which contains the call to B:F, but you don't call it yet. Then you compile a new version of module B with B:F. Then you load this new module, and now you can call B:F and everything is perfectly right. Or imagine your module A generates module B on the fly and loads it. You can't tell at any particular time that it is wrong for module A to contain a call to a nonexistent function B:F.
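A tiny sketch of that scenario (the module and function names here are made up for illustration):
%% a.erl -- compiles and loads fine even if b:f/0 does not exist yet;
%% the call is only resolved when a:call_b() actually runs.
-module(a).
-export([call_b/0]).

call_b() -> b:f().

%% b.erl -- once this version is compiled and loaded, a:call_b() returns ok;
%% before that, calling it raises an undef error at run time.
-module(b).
-export([f/0]).

f() -> ok.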
In my opinion most, if not all, compilers do not verify that a function exists at compile time. What is required in general is a prototype declaration of the function: the type of the return value and the list and types of all arguments. In C/C++ this is done by including some_file.h in each module that uses the function (not the .c or .cpp that defines it).
In Erlang this verification is done dynamically, while the program is running, so it is not necessary to include these definitions. It would even be totally useless, because Erlang allows you to upgrade an application while it is running, so a function's type may change, or the function may disappear, on purpose or by mistake, during the application's lifetime; that is why the Erlang designers chose to do this verification at run time and not at build time.
The error you speak of generally occurs during the link phase of code generation, when the "compiler" tries to gather the individual pieces of object code together to build an executable file or a library; during this phase the linker resolves all the external addresses (for shared variables, static calls...). This phase does not exist in Erlang: a module is totally self-contained and shares nothing with the rest of the application, neither variable nor function addresses.
Of course it is mandatory to use some tools and run some tests before updating a running production program, but I consider that these verifications have exactly the same level of importance as the correctness of the algorithm itself.
When you compile e.g. module alpha which has a call to beta:some_function(...), the compiler cannot assume some specific version of beta to be in use at runtime. Maybe you will compile a newer version of beta after you compiled alpha and this will have the correct some_function exported. Maybe you will upload alpha to be used on a different host, which has all the other modules.
The compiler therefore just compiles the remote call and any errors (non-existent module or function) are resolved at run time, when some version of beta will be loaded.
