How can I make a release using rebar without including ERTS?
A release is a self-packaged Erlang runtime bundled together with your application, so I think you just can't.
But you can simply ship your application directory, with its ebin directory inside, as the release.
rebar is indeed a good tool for managing your Erlang projects. But there are also other options that come with Erlang/OTP out of the box. Here you can read a very good tutorial about those options: http://learnyousomeerlang.com/release-is-the-word
P.S. I think it is worth knowing how the same thing can be done in other ways.
Relx is very good at packaging Erlang releases with lots of options, one being what you want: {include_erts, false}.
https://github.com/erlware/relx
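For example, a minimal relx.config along these lines should produce a release without the runtime (the release name my_app and the version are placeholders, not something from the question):

{release, {my_app, "0.1.0"},
 [my_app]}.
{include_erts, false}.
{extended_start_script, true}.

With include_erts set to false, the generated release expects a compatible Erlang runtime to already be installed on the target machine.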
Suppose I have a CLI application with subcommands and arguments (like application foo --bar baz). How can I package it for distribution without requiring the user to install Erlang?
I know there's the mix escript.build Mix task, but it builds a binary that requires Erlang to be installed; also, the Mix reference states that escripts should be used only for development purposes.
mix release, however, produces redundant shell scripts that I don't want to see in dist.
So, is there a way to make a standalone distributable of an Elixir CLI application?
P.S. This is actually my first experience with Elixir (and the whole Erlang ecosystem).
Elixir applications can be packaged as Erlang releases, which can include the Erlang VM. Since version 1.9, Elixir supports building releases without extra tooling, using the built-in mix release task - please check the documentation for the gory details: https://hexdocs.pm/mix/Mix.Tasks.Release.html
You might benefit from a quick look at this blog post for inspiration and a conceptual overview, noting that for a simple CLI app things are much simpler: https://www.cogini.com/blog/best-practices-for-deploying-elixir-apps/
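As a rough sketch of the release configuration (the module and app name Foo/:foo are made up to match the example invocation in the question; the options shown are standard mix release options):

defmodule Foo.MixProject do
  use Mix.Project

  def project do
    [
      app: :foo,
      version: "0.1.0",
      elixir: "~> 1.9",
      deps: [],
      releases: [
        foo: [
          # Bundle ERTS so the target machine does not need Erlang installed.
          include_erts: true,
          include_executables_for: [:unix]
        ]
      ]
    ]
  end

  def application do
    [extra_applications: [:logger]]
  end
end

Running MIX_ENV=prod mix release foo then produces a self-contained tree under _build/prod/rel/foo, with bin/foo as the entry point.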
Bakeware generates a single executable file.
They have added a CLI app example here.
I am configuring IntelliJ IDEA to open an Erlang REPL by setting it up as an external tool; however, the working directory parameter is ignored. Is there a way, once the REPL is open, to switch the working directory?
Within the shell use the command cd("some/path") and it will work pretty much the same way you would expect from an ordinary shell.
This means you can move around your project directories and run c(module_name) and be in the local loading path as well -- which can be pretty convenient when hand-tweaking/testing things.
As an aside... most folks don't use an IDE with Erlang, because the shell has so much built into it already, and your OS itself already has whatever other tools you usually want. I've yet to see someone start with an IDE and stick with it in Erlang (they usually wind up either becoming Emacs users or going the vim + coreutils route).
Also, pwd() and ls() work as you'd expect.
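A quick illustration (the path and module name are made up):

1> cd("/home/andy/myproject/src").
/home/andy/myproject/src
ok
2> c(my_module).
{ok,my_module}
3> pwd().
/home/andy/myproject/src
ok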
Regarding IDEs: I find the Erlang IntelliJ plugin (http://ignatov.github.io/intellij-erlang/) very usable, and when doing more than relatively short one-offs in vim (with the Erlang plugin), I find IDE functionality like code completion and Find Usages useful.
Give it a shot - YMMV.
Is there any VM for Erlang that allows you to do compilation on the fly instead of compiling beforehand?
It is possible to compile from the shell, thanks Martin.
Now, from the Erlang shell (or some other module!):
1> compile:file("mymod.erl").
{ok,mymod}
2> mymod:myfun().
Hello Joe
Are there any pros or cons to doing this?
Will you still be able to hot swap code?
Is this the regular way to handle code?
What benefits does the compiler give you in the end then?
From the Erlang shell, you can compile a module on the fly using c("path/to/module.erl"). You can also access this functionality through the compile module, specifically the compile:file/{1,2} functions.
For example, suppose we have a file mymod.erl:
-module(mymod).
-export([myfun/0]).
myfun() -> io:format("Hello Joe~n").
Now, from the Erlang shell (or some other module!):
1> compile:file("mymod.erl").
{ok,mymod}
2> mymod:myfun().
Hello Joe
See Erldocs on the compile module for more information.
You can do a great deal with the Erlang compiler in runtime. For example, you can dynamically generate code for a module (use erl_syntax!) and then compile it without even writing it to a file using compile:forms/{1,2}.
(Insert standard speech on great power and great responsibility.)
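As a minimal sketch of the idea (using erl_scan/erl_parse rather than erl_syntax, purely for brevity; the module names dyncompile and dynmod are made up):

-module(dyncompile).
-export([demo/0]).

%% Build abstract forms from source strings, compile them in memory,
%% load the resulting binary and call the freshly created module.
demo() ->
    Strings = ["-module(dynmod).",
               "-export([hello/0]).",
               "hello() -> hello_joe."],
    Forms = [parse(S) || S <- Strings],
    {ok, dynmod, Bin} = compile:forms(Forms),
    {module, dynmod} = code:load_binary(dynmod, "dynmod.erl", Bin),
    dynmod:hello().    %% returns hello_joe

parse(Str) ->
    {ok, Tokens, _} = erl_scan:string(Str),
    {ok, Form} = erl_parse:parse_form(Tokens),
    Form.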
Will you still be able to hot swap code?
Yes.
Is this the regular way to handle code?
No. Normally Erlang code is compiled ahead of time into BEAM bytecode. Depending on whether Erlang was started in embedded or interactive mode, the modules are either loaded on startup, or dynamically as they are referenced. If you are building a release, you basically have to compile ahead of time.
What benefits does the compiler give you in the end then?
Well, for one thing, we can build compact releases without unnecessary components like the compiler. Of course, we also get all the traditional benefits of ahead-of-time compilation, particularly that of not having to waste time compiling all the time.
To sum it up, unless you fully understand the implications and have a very good reason not to compile your code ahead of time, please follow the standard practices.
The Erlang VM can only run compiled code! If you want to interpret Erlang code then you need an interpreter. The module erl_eval implements an Erlang interpreter and is part of the standard Erlang/OTP distribution. It is used by the Erlang shell to interpret the expressions entered.
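For example, this small sketch evaluates an arbitrary expression string much like the shell does:

Source = "1 + 2 * 3.",
{ok, Tokens, _} = erl_scan:string(Source),
{ok, Exprs} = erl_parse:parse_exprs(Tokens),
{value, 7, _Bindings} = erl_eval:exprs(Exprs, erl_eval:new_bindings()).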
All code handling in the Erlang VM, whether compiling, loading or updating, is done at the module level, so it is impossible to compile or load just one function. The Erlang compiler is written in Erlang, is always available, and can compile to either a file or a binary which can be immediately loaded into the system. As @MartinTörnwall has pointed out, compiling a module from the shell using c(module) is in essence compiling on the fly.
So there would be no problems in automatically compiling code on the fly when it is used, at the module level. It is just that the current system is not designed to work that way and by default when it tries to load a module it only looks for the pre-compiled object file, the .beam file.
Erlang has an interpreter, escript. An entire Erlang archive can be shipped as a script, and almost all language features are available.
By default, the script will be interpreted. You can force it to be compiled by including -mode(compile). in the script, as in the sketch below.
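A tiny sketch of such a script (the file name hello.escript and the message are just for illustration):

#!/usr/bin/env escript
%% Force compilation at startup instead of interpretation.
-mode(compile).

main(Args) ->
    io:format("Hello from escript, args: ~p~n", [Args]).

Make it executable (chmod +x hello.escript) and run ./hello.escript foo bar, or invoke it as escript hello.escript foo bar.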
Though it depends on the way you design your application, the regular practice is to have .erl files that are compiled and run, rather than escript files.
So now you have many options.
Compile the .erl file to .beam using c(my_module); this automatically loads the .beam file, so the running VM can use it on the fly. Or, in code, you can use compile:file/1 together with code:purge/1 and code:load_file/1 to compile, load and run it on the fly.
Compile the .erl files ahead of time using erlc, erl -make, rebar, etc. (Erlang has rich support here) and then run the resulting .beam files. You can build archives, boot scripts, releases, etc. to manage running and releasing the Erlang software. This is usually the practice for production.
Use escript and run everything in interpreted mode.
Use escript with the -mode(compile) option to tell the Erlang VM to compile the code at runtime (when the escript starts) and run the compiled code in memory.
Are there any pros or cons to doing this?
Compiled code is faster than interpreted code. I don't see any other differences right now, as pretty much everything is supported in both. Erlang even supports a combination (calling compiled code from interpreted code).
Will you still be able to hot swap code?
Yes, in all cases. Your code should also be written to handle this.
Is this the regular way to handle code?
Option 2 for production. Option 1 for learning / simple development. Options 3 and 4 on an as-needed basis for specific requirements (maybe one-time runs).
What benefits does the compiler give you in the end then?
To make it clear, the erlc program provides a common way to run all compilers in the Erlang system, and the compile module gives an interface to the Erlang compilers. The compiler produces an intermediate binary .beam file, which makes Erlang code run faster than its interpreted counterpart. It also catches syntax errors (compilation errors).
I have an Erlang application which has a dependency in its deps directory on another application.
From what I understand I can either:
a) start my dependent application from my including application by calling application:start(some_other_app) which starts the application and shows it running standalone within Observer.
b) include my dependent application in my .app file with {included_applications, [some_other_app]} so that the application is loaded and not started and then start the included application from my own top level supervisor. This again starts the included application and shows its running below my own supervision hierarchy in Observer.
My question is: when should I use each approach? If I use option “a” and my dependent application exits, will it be restarted, or should I be using approach “b” so that any dependencies I have are monitored accordingly?
On a side note I use Rebar to package and manage my dependencies.
Thanks,
Andy.
You probably shouldn't do either a) or b).
From Learn You Some Erlang
Look at the chapter: Included Applications
It is more and more recommended not to use included applications for a
simple reason: they seriously limit code reuse. Think of it this way.
We've spent a lot of time working on ppool's architecture to make it
so anybody can use it, get their own pool and be free to do whatever
they want with it. If we were to push it into an included application,
then it can no longer be included in any other application on this VM,
and if erlcount dies, then ppool will be taken down with it, ruining
the work of any third party application that wanted to use ppool.
For these reasons, included applications are usually excluded from
many Erlang programmers' toolbox. As we will see in the following
chapter, releases can basically help us do the same (and much more) in
a more generic manner.
In the chapter Release is the Word you can read about how several applications are bundled into a release and how they are started.
Having your dependencies declared in your application descriptor is the way to go, so you should use option B in most of the scenarios.
The application controller will ensure that all your dependencies are present and started (in order) before starting your application and will also make your app fail if those terminate with errors. Also, the application controller will shutdown everything when needed.
Other than that, if you choose option A, when starting an application with application:start/1, you will get a temporary application by default, so you should use application:start/2, passing the permanent atom as the second argument.
EDIT: Having your dependencies in the application descriptor also helps visibility, it's easy to know your deps without scanning the source code.
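For illustration, declaring the dependency in the descriptor could look roughly like this (my_app and my_app_app are placeholders; some_other_app is the dependency from the question):

{application, my_app,
 [{description, "My application"},
  {vsn, "0.1.0"},
  {registered, []},
  {applications, [kernel, stdlib, some_other_app]},
  {mod, {my_app_app, []}},
  {env, []}
 ]}.

With some_other_app listed under applications, the application controller will refuse to start my_app until some_other_app is running, and a release boot script will start them in the correct order.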
I had a similar question: how do you include dependencies in an Erlang project, and then how do you release them?
I had some help from various friends and the Erlang mailing list... and after re-reading some docs and more trial and error, I figured some stuff out. It's long, so check out the gist:
https://gist.github.com/3728780
-Todd
I want to use some libs in my application, like https://github.com/Panmind/erlang-ruby-marshal. That repo holds a src dir, but has no .app file (because it's not an application), so I can't use get-deps.
I tried another approach, adding a libs dir in sub_dirs and adding the repo as a git submodule, but rebar won't compile any of its files. I guess that rebar only compiles OTP applications, not plain .erl files that aren't tied to an application.
How do you manage those kinds of dependencies? I would like to avoid copying the files into my app dir, because I don't think they belong there, and I kind of like the git submodule approach, which allows me to keep track of the lib version I am using.
Recent rebar supports a raw option for dependencies. When this option is specified, rebar does not require the dependency to have a standard Erlang/OTP layout, which assumes the presence of either a "src/dependency_name.app.src" or "ebin/dependency_name.app" file (see more details here).
For example:
{deps, [
    {erlang_ruby_marshal, "",
     {git, "https://github.com/Panmind/erlang-ruby-marshal", {branch, master}},
     [raw]}
]}.
Note that rebar will now be able to fetch it, but it still won't compile it. As other commenters pointed out, there's no reason why this dependency should not have an .app file. I would fork the repository and add the .app file to it.
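For example, the fork could get a minimal src/erlang_ruby_marshal.app.src along these lines (the description and version are guesses):

{application, erlang_ruby_marshal,
 [{description, "Reader for the Ruby Marshal format"},
  {vsn, "0.1.0"},
  {registered, []},
  {applications, [kernel, stdlib]}
 ]}.

rebar then fills in the modules list when it generates ebin/erlang_ruby_marshal.app from this file, and the dependency can be compiled like any other.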
This article goes through the bigger process of creating applications and releases with rebar.
More specifically, I think this option in rebar.config might be what you're looking for. The only way I've found so far is to have one entry for each application:
{sub_dirs, ["libs/app1",
"libs/app2",
...]}.
This requires a bit more manual work. Unfortunately rebar is very much structured around the concept of a single app, and would need better support for handling a repository with a bunch of equally important applications instead of a single application.
If you are using Linux, you can add the required modules as hard links in the src directory of your application.
This is far from optimal but I have yet to find a better way to do this.
Ask the Agner guys to add it to their package management system. In the process they will create a fork and convert the project to make it rebar-compatible. Also, the original maintainer will quite possibly integrate the changes.