What is the workflow for compiling one file using rebar3?

rebar3 seems to recompile everything every time.
Often I am only modifying one file. That's the file I want to recompile, I know everything else is fine.
What is the workflow for doing this? Ideally I could do it from the Erlang shell. Rebar3 already knows my include paths and build directory with the beams in it, how can I take advantage of Rebar's knowledge so I don't have to type it all into the shell over again as arguments to c(File, Opts)?

Keep in mind that rebar3 avoids recompiling up-to-date modules (although it checks them).
That being said, I think the most popular option (and suitable for your needs) is using this plugin.
In my case, I have a set of scripts that set up inotifywait and bring the whole release down and up again. I also often create funs in the shell if I need to compile often:
4> C = fun() -> c('my_awesome_module', []) end.
#Fun<erl_eval.45.97283095>
5> C().
{error,non_existing}
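(The {error,non_existing} above just shows the call failing, presumably because the module's source wasn't found from that shell's working directory.) If you start the node with rebar3 shell, the build paths are already set up for you, and recent rebar3 versions also expose an r3 helper there (e.g. r3:do(compile)) that only rebuilds what changed. If you'd rather stay with plain c/2, here is a minimal sketch of wrapping it with the paths rebar3 would normally use; the app name my_app, the umbrella layout under apps/, and the default profile are assumptions, so adjust them to your project:
%% Assumes an umbrella layout with apps/my_app and the default rebar3 profile.
Recompile = fun(Mod) ->
    Src = "apps/my_app/src/" ++ atom_to_list(Mod) ++ ".erl",
    c(Src, [{i, "apps/my_app/include"},
            {outdir, "_build/default/lib/my_app/ebin"},
            debug_info])
end.
Recompile(my_awesome_module).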

Related

Handling complex and large dependencies

Problem
I've been developing a game in C++ in my spare time and I've opted to use Bazel as my build tool since I have never had a ton of luck (or fun) working with make or cmake. I also have dependencies in other languages (Python for some of the high-level scripting). I'm using glfw for basic window handling and high-level graphics support and that works well enough, but now comes the problem: I'm uncertain how I should handle dependencies like glfw in a Bazel world.
For some of my dependencies (like gtest and fruit) I can just reference them in my WORKSPACE file and Bazel handles them automagically but glfw hasn't adopted Bazel. So all of this leads me to ask, what should I do about dependencies that don't use Bazel inside a Bazel project?
Current approach
For many of the simpler dependencies I have, I simply created a new_git_repository entry in my WORKSPACE file and created a BUILD file for the library. This works great until you get to really complicated libraries like glfw that have a number of dependencies of their own.
When building glfw for a Linux machine running X11, you now have a dependency on X11, which would mean adding X11 to my Bazel setup. X11 comes with its own set of dependencies (the X11 libraries, like libXcursor) and so on.
glfw also tries to provide basic joystick support, which is provided by default on Linux, which is great! Except that this support comes from the kernel, which means the kernel is also a dependency of my project. Even though I shouldn't need anything more than the kernel headers, this still seems like a lot to bring in.
Alternative Options
The reason I took the approach I've taken so far is to make the dependencies required to spin up a machine that can successfully build my game very minimal. In theory they just need a C/C++ compiler, Java 8, and Bazel and they're off to the races. This is great since it also means I can create a Docker container that has Bazel installed and do CI/CD really easily.
I could sacrifice this ease and just say that you need to have libraries like glfw installed before attempting to compile the game, but that brings back the whole "which version is installed and how is it configured" problem that Bazel is supposed to help solve.
Surely there is a simpler solution and I'm overthinking this?
If the glfw project has no BUILD files, then you have the following options:
Build glfw inside a genrule.
If glfw supports some other build system like make, you could create a genrule that runs the tool. This approach has obvious drawbacks, like the not-to-be-underestimated impracticality of having to declare all inputs of that genrule, but it'd be the simplest way of Bazel'izing glfw.
Pre-build glfw.o and check it into your source tree.
You can create a cc_library rule for it, and put the .o file in the srcs. Even though this solution is the least flexible of all because you not only restrict the target platform to whatever the .o was built for, but also make it harder to reproduce the whole build, the benefits are sometimes worth the costs.
I view this approach as a last resort. Even in Bazel's own source code there's one cc_library.srcs that includes a raw object file, because it was worth it, as the commit message of 92caf38 explains.
Require that glfw be installed.
You already considered this option. Some people may prefer this to the other approaches.

Erlang Shell Working Directory

I am configuring IntelliJ IDEA to open an Erlang REPL by setting it up as an external tool; however, the working directory param is ignored. Is there a way, once the REPL is open, to switch the working directory?
Within the shell use the command cd("some/path") and it will work pretty much the same way you would expect from an ordinary shell.
This means you can move around your project directories and run c(module_name) and be in the local loading path as well -- which can be pretty convenient when hand-tweaking/testing things.
As an aside... most folks don't use an IDE with Erlang, because the shell has so much stuff already built into it, and your OS itself already has whatever other tools you usually want. I've yet to see someone start with an IDE and stick with it in Erlang (they usually wind up either becoming Emacs users or going the vim + coreutils route).
Also, pwd() and ls() work as you'd expect.
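For example, a typical hand-tweaking session might look like this (the path and module name are only illustrative):
cd("/home/me/myproject/src").    %% jump to wherever the sources live
pwd().                           %% confirm the current working directory
ls().                            %% list the files in it
c(my_module).                    %% compile my_module.erl from here and load it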
Regarding IDEs: I find the Erlang IntelliJ plugin (http://ignatov.github.io/intellij-erlang/) very usable, and when doing more than relatively short one-offs in vim (with the Erlang plugin), I find the code completion and Find Usages kinds of IDE functionality useful.
Give it a shot - YMMV.

Is it possible to run erlang without compilation?

Is there any VM for Erlang that allows you to do compilation on the fly instead of compiling before?
There is a possibility to compile from the shell, thanks Martin.
Are there any pros or cons to doing this?
Will you still be able to hot swap code?
Is this the regular way to handle code?
What benefits does the compiler give you in the end then?
From the Erlang shell, you can compile a module on the fly using c("path/to/module.erl"). You can also access this functionality through the compile module, specifically the compile:file/{1,2} functions.
For example, suppose we have a file mymod.erl:
-module(mymod).
-export([myfun/0]).
myfun() -> io:format("Hello Joe~n").
Now, from the Erlang shell (or some other module!):
1> compile:file("mymod.erl").
{ok,mymod}
2> mymod:myfun().
Hello Joe
See Erldocs on the compile module for more information.
You can do a great deal with the Erlang compiler at runtime. For example, you can dynamically generate code for a module (use erl_syntax!) and then compile it, without even writing it to a file, using compile:forms/{1,2}.
(Insert standard speech on great power and great responsibility.)
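As a minimal sketch of that idea, here is a module built, compiled and loaded entirely at runtime: the forms are parsed from strings with erl_scan/erl_parse (rather than built with erl_syntax, just to keep the example short), compiled in memory with compile:forms/1 and loaded with code:load_binary/3. The module name hello_gen and its contents are made up for illustration:
%% Build, compile and load a module at runtime, without touching disk.
Srcs = ["-module(hello_gen).",
        "-export([hi/0]).",
        "hi() -> io:format(\"Hello from generated code~n\")."],
Forms = [begin
             {ok, Toks, _} = erl_scan:string(S),
             {ok, Form} = erl_parse:parse_form(Toks),
             Form
         end || S <- Srcs],
{ok, hello_gen, Bin} = compile:forms(Forms),
{module, hello_gen} = code:load_binary(hello_gen, "hello_gen.erl", Bin),
hello_gen:hi().    %% prints "Hello from generated code"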
Will you still be able to hot swap code?
Yes.
Is this the regular way to handle code?
No. Normally Erlang code is compiled ahead of time into BEAM bytecode. Depending on whether Erlang was started in embedded or interactive mode, the modules are either loaded on startup, or dynamically as they are referenced. If you are building a release, you basically have to compile ahead of time.
What benefits does the compiler give you in the end then?
Well, for one thing, we can build compact releases without unnecessary components like the compiler. Of course, we also get all the traditional benefits of ahead-of-time compilation, particularly that of not having to waste time compiling all the time.
To sum it up, unless you fully understand the implications and have a very good reason not to compile your code ahead of time, please follow the standard practices.
The Erlang VM can only run compiled code! If you want to interpret Erlang code then you need an interpreter. The module erl_eval implements an Erlang interpreter and is part of the standard Erlang/OTP distribution. It is used by the Erlang shell to interpret the expressions entered.
All code handling in the Erlang VM, whether compiling, loading or updating, is done at the module level, so it is impossible to compile or load just one function. The Erlang compiler is written in Erlang, is always available, and can compile to either a file or a binary which can be immediately loaded into the system. As @MartinTörnwall has pointed out, compiling a module from the shell using c(module) is in essence compiling on the fly.
So there would be no problems in automatically compiling code on the fly when it is used, at the module level. It is just that the current system is not designed to work that way and by default when it tries to load a module it only looks for the pre-compiled object file, the .beam file.
Erlang also has an interpreter, escript. An entire Erlang program can be written as a script (even packaged as an archive), and almost all features are available.
By default, the script will be interpreted. You can force it to be compiled by including the -mode(compile). in the script.
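A minimal escript could look like the following; the file contents are made up, and dropping the -mode(compile). line gives you the default interpreted behaviour:
#!/usr/bin/env escript
%% Ask the VM to compile the script at startup instead of interpreting it.
-mode(compile).

%% escript calls main/1 with the command-line arguments.
main(Args) ->
    io:format("Hello from an escript, args: ~p~n", [Args]).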
Though it depends on the way you design your application, the regular practice is to have .erl files that are compiled and run, rather than escript files.
So now you have many options.
Option 1: Compile a .erl file to .beam using c(my_module); this automatically loads the .beam file, so the running VM can pick up the change on the fly. From code, you can do the same with compile:file/1,2 together with code:purge/1 and a load call (see the sketch after this list).
Option 2: Compile and keep the .erl files ahead of time using erlc, erl -make, rebar, etc. (Erlang has rich support here) and then run them. You can build archives, boot scripts, releases, etc. to manage running and releasing the Erlang software. This is usually the practice for production.
Option 3: Use escript and run everything in interpreted mode.
Option 4: Use escript with the -mode(compile). directive to tell the Erlang VM to compile the code when the escript starts and then run the compiled code (in memory).
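A sketch of option 1 done from code rather than the shell; the module name, paths and options here are assumptions:
%% Compile src/my_module.erl to ebin/my_module.beam (ebin must exist),
%% then swap the result into the running VM.
{ok, my_module} = compile:file("src/my_module.erl", [{outdir, "ebin"}, debug_info, report]),
code:purge(my_module),                                  %% drop any old version of the module
{module, my_module} = code:load_abs("ebin/my_module").  %% load the freshly built .beam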
Are there any pros or cons to doing this?
Compiled code is faster than interpreted code. I don't see any other difference right now in Erlang, as pretty much everything is supported in both. Erlang even supports a combination (calling compiled code from interpreted code).
Will you still be able to hot swap code?
Yes, in all cases. Your code also has to be written to handle it.
Is this the regular way to handle code?
Option 2 for production. Option 1 for learning and simple development. Options 3 and 4 on an as-needed basis for specific requirements (maybe one-time runs).
What benefits does the compiler give you in the end then?
To make it clear: the erlc program provides a common way to run all compilers in the Erlang system, and the compile module gives a programmatic interface to the Erlang compiler. The compiler produces an intermediate binary .beam file, which lets Erlang code run faster than its interpreted counterpart. It also catches syntax errors (compilation errors) up front.

Erlang EUnit test module that depends on a library application

I have a medium-sized release with a handful of applications. I recently refactored some common functionality out into a library application within the release. This made my EUnit tests fail with undef messages whenever testing anything that required the library application.
The set up is something like this:
% In apps/utils/src/utils.erl
-module(utils).
-export([foo/0]).
foo() -> "OH HAI".
Then
% In apps/some_app/src/some_app.erl
-module(some_app).
-export([bar/0]).
bar() -> io:format("foo: ~s~n", [utils:foo()]).
% unit tests for bar()
Then the unit tests for some_app:bar() fail. I'm running them with rebar eunit skip_deps=true. I'm using skip_deps=true because my release uses some 3rd party applications (SQL, etc).
I assume that the tests start failing because EUnit is invoking the app under test without its dependencies? Is there any way to fix this? I have configured the .app file to explicitly declare the dependency. It works fine in the release, and it's been deployed for about a day now with no problem, but I'll feel a lot better if I can get the tests to pass again :)
(I could use a mocking app to stub out utils:foo/0, and I can see where that would be ideal idiomatically, but that seems like overkill in this case because utils:foo/0 (read: its real-world counterpart) does some really simple stuff.)
I was able to get this to work by doing rebar compile eunit skip_deps=true.
The key is to have the compile in there and I have no idea why. I'm guessing that the compile step gets all of the modules into memory. I'd love to hear a good explanation.
I think you could have one of your applications load the utility by including it in the application portion of your .app file, as in:
{application, yourapp,
 [{description, "A description"},
  {vsn, "1.0.0"},
  {modules, [mod1, mod2, utils]},
  %% SNIP
 ]}.
or in some other way add it to the code path of the Erlang node, perhaps using the -pa flag when starting the node.
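If you want to check it from a running shell instead, the same idea looks roughly like this; the paths are assumptions and the .beam files must already have been built:
code:add_patha("apps/utils/ebin"),      %% make the utils application's modules visible
code:add_patha("apps/some_app/ebin"),   %% and the app under test
eunit:test(some_app).                   %% run the EUnit tests for some_app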

Erlang: Is there a way to reload changed modules into an already running node with rebar?

The rebar build tool seems pretty cool, but is there a way to recompile AND reload files into a currently running node with it?
I don't see a need to bring rebar into it.
I use reloader.erl from the Mochiweb project. Just build this module along with the rest of your program and call reloader:start() early in your app's initialization sequence.
After that call, any time a module in your program changes on disk, the reloader will replace the running copy automatically.
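For comparison, doing it by hand from the shell of the running node only takes a couple of commands (my_module is a placeholder):
1> c(my_module).   %% recompile my_module.erl from the current directory and reload it
{ok,my_module}
2> l(my_module).   %% or just reload an already-compiled .beam found on the code path
{module,my_module}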
