The problem
Most of the articles and books about Erlang I could find focus on creating long-running, server-like applications, leaving the creation of command-line tools largely uncovered.
I have a multi-app rebar3 project consisting of 3 applications:
myweb - a Cowboy-based web service;
mycli - a command line tool to prepare assets for myweb;
mylib - a library used by both myweb and mycli, depends on a NIF.
As a result of the build I want to get the following artifacts:
an executable for the web part that is going to serve HTTP requests;
an executable command line tool for the assets preparation;
a set of libraries used by the above.
Requirements
the CLI should behave like a sane non-interactive command-line tool: handle arguments, deal with stdin/stdout, return a non-zero exit code on error, etc.;
both the server and the CLI should be able to use NIFs;
it should be easy to package the artifacts as a set of deb/rpm packages, so both the server and the CLI should reuse common dependencies.
Things tried so far
Building an escript
One of the ways I've seen in the wild is to create a self-contained escript file. At least rebar3 and relx do so. So I gave it a try.
Pros:
has support for command line arguments;
in case of errors, it returns a non-zero exit code.
Cons:
embeds all the dependencies in a single file making it impossible to reuse mylib;
since *.so files get embedded into the resulting escript file, they cannot be loaded at runtime, thus NIFs don't work (see erlang rebar escriptize & nifs);
rebar3 escriptize doesn't handle dependencies well (see bug 1139).
Unknowns:
Should the CLI app become a proper OTP application?
Should it have a supervision tree?
Should it be started at all?
If so, how do I stop it when the assets have been processed?
Building a release
Another way to build a command line tool is described in the How I Start: Erlang article by Fred Hebert.
Pros:
Each dependency application ends up in its own directory, making it easy to share and package them.
Cons:
there's no defined entry point like escript's main/1;
as a consequence, both command-line arguments and the exit code must be handled manually.
Unknowns:
How to model the CLI OTP app in a non-interactive way?
How to stop the app when the assets have been processed?
Neither of the approaches above seems to work for me.
It would be nice to get the best of both worlds: the infrastructure provided by escript (the main/1 entry point, command-line parameters, and exit-code handling) while still having a directory structure that is easy to package and doesn't hinder the use of NIFs.
Regardless of whether you are starting a long-running, daemon-like application in Erlang or a CLI command, you always need the following:
erts application - the VM and kernel in a particular version
Erlang OTP applications
Your applications' dependencies
CLI entry point
In either case the CLI entry point has to start the Erlang VM and execute the code it is supposed to execute in the given situation. Then it will either exit or continue running - the latter for a long-running application.
The CLI entry point can be anything that starts an Erlang VM, e.g. an escript script, sh, bash, etc. The obvious advantage of escript over generic shell is that escript is already being executed in the context of an Erlang VM, so no need to handle starting/stopping the VM.
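For illustration, here is a minimal sketch of such an escript entry point; mycli_task:run/1 is a hypothetical module standing in for the real work, only the escript plumbing (main/1, stderr output, exit code) is the point:

#!/usr/bin/env escript
%% Minimal CLI entry point sketch. escript already runs inside an Erlang VM,
%% so main/1 only needs to dispatch and translate the result into an exit code.
main(Args) ->
    case mycli_task:run(Args) of          %% hypothetical worker module
        ok ->
            ok;                           %% normal exit, code 0
        {error, Reason} ->
            io:format(standard_error, "error: ~p~n", [Reason]),
            halt(1)                       %% non-zero exit code on failure
    end.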
You can start Erlang VM in two ways:
Use system-wide Erlang VM
Use an embedded Erlang release
In the first case you supply neither erts nor any OTP applications with your package; you only make a particular Erlang version a dependency of your application. In the second case you supply erts and all required OTP applications, along with your application's dependencies, in your package.
In the second case you also need to handle setting the code root properly when starting the VM. But this is quite easy; see the erl script that Erlang uses to start the system-wide VM:
# location: /usr/local/lib/erlang/bin/erl
ROOTDIR="/usr/local/lib/erlang"
BINDIR=$ROOTDIR/erts-7.2.1/bin
EMU=beam
PROGNAME=`echo $0 | sed 's/.*\///'`
export EMU
export ROOTDIR
export BINDIR
export PROGNAME
exec "$BINDIR/erlexec" ${1+"$#"}
This can be handled by scripts, for example the node_package tool that Basho uses to package their Riak database for all major operating systems. I am maintaining my own fork of it, which I am using with my own build tool called builderl. I only mention this so you know that if I managed to customize it, you will be able to do so as well :)
Once the Erlang VM is started, your application should be able to load and start any application, either supplied with Erlang or with your application (and that includes the mylib library that you mentioned). Here are some examples of how this could be achieved:
escript example
See this builderl.esh example of how I handle loading other Erlang applications from builderl. That escript script assumes that the Erlang installation is relative to the folder from which it's executed. When it's part of another application, like for example humbundee, the load_builderl.hrl include file compiles and loads bld_load, which in turn loads all remaining modules with bld_load:boot/3. Notice how I can use standard OTP applications without specifying where they are - builderl is being executed by escript, so all the applications are loaded from where they were installed (/usr/local/lib/erlang/lib/ on my system). If libraries used by your application, e.g. mylib, are installed somewhere else, all you need to do is add that location to the Erlang code path, e.g. with code:add_path, as sketched below. Erlang will automatically load modules used in the code from folders added to the code path list.
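A sketch of what that could look like near the top of the escript's main/1; the install location of mylib below is only a placeholder for wherever the library actually lives on your system:

%% Make mylib visible to the code server before calling into it;
%% the path is a placeholder for the real install location.
true = code:add_path("/usr/local/lib/myproject/mylib-1.0.0/ebin"),
{ok, _Started} = application:ensure_all_started(mylib),
%% modules of mylib (including any NIF stubs) are now loaded on demand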
embedded Erlang
However, the same would hold if the application was a proper OTP release installed independently from the system-wide Erlang installation. That's because in that case the script is executed by the escript belonging to that embedded Erlang release rather than the system-wide version (even if one is installed). So it knows the location of all applications belonging to that release (including your applications). For example, Riak does exactly that - in their package they supply an embedded Erlang release that contains its own erts and all dependent Erlang applications. That way Riak can be started without Erlang even being installed on the host operating system. This is an excerpt from the Riak package on FreeBSD:
% tar -tf riak2-2.1.1_1.txz
/usr/local/sbin/riak
/usr/local/lib/riak/releases/start_erl.data
/usr/local/lib/riak/releases/2.1.0/riak.rel
/usr/local/lib/riak/releases/RELEASES
/usr/local/lib/riak/erts-5.10.3/bin/erl
/usr/local/lib/riak/erts-5.10.3/bin/beam
/usr/local/lib/riak/erts-5.10.3/bin/erlc
/usr/local/lib/riak/lib/stdlib-1.19.3/ebin/re.beam
/usr/local/lib/riak/lib/ssl-5.3.1/ebin/tls_v1.beam
/usr/local/lib/riak/lib/crypto-3.1/ebin/crypto.beam
/usr/local/lib/riak/lib/inets-5.9.6/ebin/inets.beam
/usr/local/lib/riak/lib/bitcask-1.7.0/ebin/bitcask.app
/usr/local/lib/riak/lib/bitcask-1.7.0/ebin/bitcask.beam
(...)
sh/bash
This doesn't differ much in principle from the above apart from having to explicitly call the function that you want to execute when starting the Erlang VM (the entry point or the main function as you called it).
Consider this script that builderl generates to start an Erlang application just to execute a specified task (generate the RELEASES file), after which the node shuts down:
#!/bin/sh
START_ERL=`cat releases/start_erl.data`
APP_VSN=${START_ERL#* }
run_erl -daemon ../hbd/shell/ ../hbd/log "exec erl ../hbd releases releases/start_erl.data -config releases/$APP_VSN/hbd.config -args_file ../hbd/etc/vm.args -boot releases/$APP_VSN/humbundee -noshell -noinput -eval \"{ok, Cwd} = file:get_cwd(), release_handler:create_RELEASES(Cwd, \\\"releases\\\", \\\"releases/$APP_VSN/humbundee.rel\\\", []), init:stop()\""
This is a similar script but doesn't start any specific code or application. Instead, it starts a proper OTP release, so which applications are started and in what order depends on the release (specified by the -boot option).
#!/bin/sh
START_ERL=`cat releases/start_erl.data`
APP_VSN=${START_ERL#* }
run_erl -daemon ../hbd/shell/ ../hbd/log "exec erl ../hbd releases releases/start_erl.data -config releases/$APP_VSN/hbd.config -args_file ../hbd/etc/vm.args -boot releases/$APP_VSN/humbundee"
In the vm.args file you can provide additional paths to your applications if required, e.g.:
-pa lib/humbundee/ebin lib/yolf/ebin deps/goldrush/ebin deps/lager/ebin deps/yajler/ebin
In this example the paths are relative, but they could be absolute if your application is installed into a standard, well-known location. Also, this is only required if you are using the system-wide Erlang installation and need the additional paths to locate your Erlang applications, or if your Erlang applications are located in a non-standard location (e.g. not in the lib folder, as OTP expects). In a proper embedded Erlang release, where the applications are located in the lib folder under the code root, Erlang is able to load those applications without any additional paths.
Summing up and other considerations
The deployment of Erlang applications doesn't differ much from that of projects written in scripting languages, e.g. Ruby or Python. All those projects have to deal with similar issues, and I believe each operating system's package management deals with them one way or another:
Get to know how your operating system deals with packaging projects that have run-time dependencies.
See how other Erlang applications are packaged for your operating system; plenty of them are distributed by all major systems, RabbitMQ, ejabberd, and Riak among others. Just download the package and unpack it to a folder, then you will see where all the files are placed.
EDIT - reference the requirements
Coming back to your requirements, you have the following choices:
Install Erlang as an OTP release system-wide, as an embedded Erlang, or as a bag with applications in some random folders (sorry Rebar)
You can have multiple entry points in the form of sh or escript scripts executing a selection of applications from the installed release. Both will work as long as you configure the code root and the paths to those applications correctly (as outlined above).
Then each of your applications, myweb and mycli, would need to be executed in its own context, e.g. by starting a new VM instance and executing the required application (from the same Erlang release). In the case of myweb the entry point can be an sh script that starts a new node according to the release (similar to Riak). In the case of mycli the entry point can be an escript, which finishes executing once the task is completed.
But it's entirely possible to create a short-running task that exits the VM even if it's started from sh - see the example above. In that case mycli would require its own release files - the .script and .boot files used to boot the VM. And of course it's also possible to start a long-running Erlang VM from an escript.
I provided an example project that uses all these methods at once, humbundee. Once it's compiled it provides three access points:
The cmd release.
The humbundee release.
The builderl.esh escript.
The first one is used to start the node for installation and then shut it down. The second is used to start a long-running Erlang application. The third is a build tool to install/configure the node. This is what the project looks like once the release has been created:
$:~/work/humbundee/tmp/rel % ls | tr " " "\n"
bin
erts-7.3
etc
lib
releases
$:~/work/humbundee/tmp/rel % ls bin | tr " " "\n"
builderl.esh
cmd.boot
humbundee.boot
epmd
erl
escript
run_erl
to_erl
(...)
$:~/work/humbundee/tmp/rel % ls lib | tr " " "\n"
builderl-0.2.7
compiler-6.0.3
deploy-0.0.1
goldrush-0.1.7
humbundee-0.0.1
kernel-4.2
lager-3.0.1
mnesia-4.13.3
sasl-2.7
stdlib-2.8
syntax_tools-1.7
yajler-0.0.1
yolf-0.1.1
$:~/work/humbundee/tmp/rel % ls releases/hbd-0.0.1 | tr " " "\n"
builderl.config
cmd.boot
cmd.rel
cmd.script
humbundee.boot
humbundee.rel
humbundee.script
sys.config.src
The cmd entry point will use the applications deploy-0.0.1 and builderl-0.2.7, the release files cmd.boot and cmd.script, and some OTP applications. The standard humbundee entry point will use all applications apart from builderl and deploy. The builderl.esh escript will use the applications deploy-0.0.1 and builderl-0.2.7. All come from the same embedded Erlang OTP installation.
A small escript that then goes into code from 'conventional' modules could be a solution.
As an example, Concuerror is expected to be used as a command line tool and uses an escript as its entry point. It handles command-line arguments via getopt. All the main code is in regular Erlang modules, which are included in the path with simple arguments to the escript.
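A sketch of that pattern, assuming the getopt dependency's ebin is reachable via the escript's emulator args; the option names, the "mycli" program name, and run/2 are made up for the example:

#!/usr/bin/env escript
%%! -pa deps/getopt/ebin
%% Sketch: parse options with getopt, then dispatch into regular modules.
main(Args) ->
    OptSpec = [
        %% {Name, ShortOpt, LongOpt, ArgSpec, HelpText} - these options are made up
        {verbose, $v, "verbose", undefined, "Print progress"},
        {outdir,  $o, "outdir",  string,    "Output directory"}
    ],
    case getopt:parse(OptSpec, Args) of
        {ok, {Opts, Files}} ->
            run(Opts, Files);
        {error, _Reason} ->
            getopt:usage(OptSpec, "mycli"),
            halt(1)
    end.

run(_Opts, Files) ->
    %% The real work would live in ordinary library modules.
    io:format("would process: ~p~n", [Files]).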
As far as I understand, NIFs can then be loaded with regular -on_load attributes (Concuerror does not use NIFs).
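For illustration, a minimal sketch of a NIF-backed module; the module name, library name, and priv-dir layout are assumptions, only the -on_load mechanism itself is standard:

-module(mylib_nif).
-export([process/1]).
-on_load(init/0).

%% Runs automatically when the module is loaded; picks the shared library
%% up from mylib's priv directory ("mylib_nif" is an assumed .so name).
init() ->
    SoName = filename:join(code:priv_dir(mylib), "mylib_nif"),
    erlang:load_nif(SoName, 0).

%% Stub that is replaced by the native implementation once the NIF loads.
process(_Data) ->
    erlang:nif_error(nif_library_not_loaded).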
Related
I'm using rebar3 to produce a release, but how do I initialize mnesia in production?
If I write an "install" escript that does mnesia:create_schema([node()]), it will use a completely different node name from the one that the release uses.
So I end up creating a schema for "nonode@nohost", while in production, when I start my app with my-app-1.0.0 start, mnesia tries to access the "myapp@localhost" node.
Also, it is kind of a chicken-and-egg problem:
I can't start up my app without mnesia tables
I can't install my mnesia tables without my app running (under the same node() name that the app will use).
Just wondering if there is a good way to handle this?
Here is my install escript that I run independently:
#!/usr/bin/env escript
%% -*- erlang -*-
%%! -smp enable -mnesia debug verbose
-include("../include/rr.hrl").
main(_) ->
    application:set_env(mnesia, dir, "/usr/local/src/db/mnesia"),
    application:stop(mnesia),
    install([node()|nodes()]).

install(Nodes) ->
    case mnesia:create_schema(Nodes) of
        ok ->
            rpc:multicall(Nodes, application, start, [mnesia]),
            read_store_create_tables(Nodes),
            event_store_create_tables(Nodes),
            rpc:multicall(Nodes, application, stop, [mnesia]);
        Err ->
            error_logger:warning_msg("Could not create schema: ~p~n", [Err]),
            Err
    end.

event_store_create_tables(Nodes) ->
    {_, ok} = mnesia:create_table(rr_events,
        [{attributes, record_info(fields, rr_events)},
         {disc_copies, Nodes},
         {type, bag}]).

read_store_create_tables(Nodes) ->
    % Initialize the actual data-tables for the projections
    {_, ok} = mnesia:create_table(rr_competencies,
        [{attributes, record_info(fields, rr_competencies)},
         {disc_copies, Nodes}]).
P.S.: I am using rebar3 which uses relx to build releases.
I am using my own build system, which was written mainly because of that exact requirement - the ability to install and initialize a node before starting it. The idea is quite simple: there are two releases, in this particular example called cmd and humbundee. The cmd release doesn't start the main applications, it only loads them. Then a special function is executed to initialize the node. The function is configured in the reltool.config file; in this case it's hbd_setup from the deploy application. That function reads the configuration file and either creates and initializes the mnesia database from a backup, or creates a new database if the backup doesn't exist. Once the node is installed, it's started using the proper release. The same steps are executed in development (directly from the source code) and in production (from an OTP release).
With that setup the problem you described doesn't exist because both releases are started from the same location, using almost the same command and configuration files (builderl generates them from the configuration in reltool.config).
You could employ the same idea with any build tool, including rebar3 and relx, by executing these steps manually or with some sort of a script.
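For example, a manual variant of the install step could be a small escript started under the same node name as the release, so that mnesia:create_schema([node()]) refers to the right node. The node name and the mnesia directory below are placeholders; use -name instead of -sname if your release runs with long names:

#!/usr/bin/env escript
%%! -sname myapp
%% "myapp" must match the node name the release runs under; the directory
%% below is a placeholder for the production mnesia dir.
main(_) ->
    application:set_env(mnesia, dir, "/var/lib/myapp/mnesia"),
    ok = mnesia:create_schema([node()]),
    ok = application:start(mnesia),
    %% create the application's tables here, then stop mnesia cleanly
    ok = application:stop(mnesia).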
What builderl does is execute these steps automatically and provide an environment in which they run the same way in development and production; e.g. see the bottom of the humbundee project's README file:
Install the node. This will start the cmd release and execute the hbd_setup:install/2 function to initialize the node:
./bin/init.esh
Start the node. This will start the humbundee release, which starts all applications with their corresponding supervisor trees:
./bin/start.esh
builderl actually uses rebar to pull and compile dependencies that depend on other projects; however, it uses only OTP to create the release. It can also download dependencies itself using a project-wide dependency file; those are then compiled with make (compiling without make is work in progress). I hope that helps.
I have created a small test application, and when running it on a Linux machine I would like the messages that appear in the Erlang shell to also appear in the Linux shell. I am not sure how it could be done, but is it possible?
I am running the application in detached mode (Erlang shell).
EDIT:
I meant passing the messages to the Linux shell.
Example:
When I start my application using
test application <start/stop>
I would like to get a message on the Linux shell: Test Application has successfully started.
And if I were, say, copying files, I would like the progress of the copy to be displayed on the Linux shell, like:
Copy successful. Number of files copied : 1
Copy successful. Number of files copied : 2
....
So is it possible to do the above by passing messages from Erlang to the Linux shell?
I believe you can use a logger for this purpose. For example, lager can redirect logs to several facilities at the same time, e.g. a file and the Erlang console.
Another option is to use run_erl, a utility which comes with the Erlang distribution and allows you to "redirect the standard input and standard output streams so that all output can be logged".
Try erl -man run_erl, or you can generate a "release" with rebar; it will generate startup scripts which use run_erl.
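For the lager option, a sketch of the relevant sys.config entry, assuming a lager 2.x-style handler configuration, that writes the same messages both to the Erlang console and to a file (the file path is a placeholder):

[
 {lager, [
   {handlers, [
     {lager_console_backend, info},
     {lager_file_backend, [{file, "log/console.log"}, {level, info}]}
   ]}
 ]}
].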
Is there a way to specify the path to the schema in a cowboy app? Maybe it's possible to set it in my_app.app.src or in some config file?
The path to the mnesia directory has to be provided to the Erlang VM, through application configuration parameters, before the mnesia application is started. In the Mnesia tutorial, this is done with the -Application par val VM argument syntax.
What you call a cowboy application is probably an Erlang OTP release (built by relx, as per the cowboy tutorial). The solutions, quickly described in Cowboy issue #595, are as follows.
The choice between solutions really depends on style as well as some constraints. Any sufficiently complex release would use a configuration file, so it would be a good choice. vm.args seems easier to deal with. Eventually, you might need to alter the start script (for example to run several nodes from a single deployment), and include some logic to define the mnesia directory.
Provide relx with a configuration file (sys_config option)
To do so, add the following term to relx.config as documented.
{sys_config, "./config/sys.config"}.
sys.config actually is a standard Erlang configuration file, also documented. Specifying the mnesia dir is done by adding a section for the mnesia application. If no other configuration is required, the file would be:
[{mnesia, [{dir, "/path/to/dir"}]}].
Get relx to pass arguments to the vm (vm_args option)
The vm.args file is actually passed to the VM through the -args_file option. This is a simple text file with arguments.
You would add the following term to relx.config as documented.
{vm_args, "./config/vm.args"}.
And put the following content in the vm.args file:
-mnesia dir foo
Write your own start script
relx actually creates a start script, which passes -config sys.config and -args_file vm.args to the VM as required. You could modify this script or roll your own to actually pass the -mnesia dir argument to the VM.
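Sketching only the relevant addition, and following the quoting the Mnesia documentation uses on the erl command line (the path is a placeholder; the rest of the generated arguments stay as they are):

# added to the erl invocation inside the start script; the path is a placeholder
erl ... -mnesia dir '"/var/lib/myapp/mnesia"'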
In the case of vm.args, the plain text -mnesia dir foo is invalid: mnesia expects the directory as a string, so the double quotes around the path have to reach the VM, e.g.:
-mnesia dir '"/path/to/dir"'
I wanted to integrate Elixir into our project, but our good old code doesn't use rebar, so I thought writing rules for building .ex files into the Emakefile might be a good idea, yet the man page here doesn't mention anything relevant.
Edit:
Our team works mainly in a Windows environment, but the deployment will be done on Linux servers, so I need a cross-platform solution. Since Erlang itself is cross-platform, I wanted to do it with the erl -make command.
Of course I could write a Makefile, but then I'd need a build.bat or something similar to build the code in our development environments, since we don't have the make command on our dev machines.
Anyone have a better idea?
Update:
In case anyone wants to know, I'm doing it this way:
Copy the lib/elixir directory in the Elixir source tree to our source dir, say some_project/src/tools/elixir.
Add some_project/src/tools/elixir/src/elixir_transform.erl and some_project/src/tools/elixir/src/* to the Emakefile, in that order. Set the output dir to some_project/ebin (all the other .beam files are located there).
Copy src/elixir.app.src in the Elixir source tree to some_project/ebin/elixir.app, and edit it to fix the version code.
Build the Erlang code by running erl -pa ebin -make, in some_project dir.
Build the Elixir compiler by running erl -pa ebin -s elixir_compiler core -s erlang halt
Now we have a working Elixir environment in our code, and I use the following escript to build our custom .ex files:
#!/usr/bin/env escript
%%! -pa ./ebin
main(_) ->
    ExList = [
        <<"source_1.ex">>,
        <<"source_2.ex">>,
        <<"source_3.ex">>],
    application:start(elixir),
    gen_server:call(elixir_code_server, {compiler_options, [{docs, true}, {debug_info, true}]}),
    [elixir_compiler:file_to_path(F, <<"./ebin">>) || F <- ExList],
    erlang:halt(0).
If you want to explicitly compile Elixir, I would go with the Makefile approach, since it will always be supported by Elixir. However, I would recommend the precompiled binaries, or even assuming Elixir is installed on each developer's machine. You can even add a task to your Emakefile to guarantee everyone is using the proper Elixir version.
Finally, for compiling your own Elixir code, I would recommend simply using mix. mix is a binary that ships with Elixir, and you can run ./src/tools/elixir/bin/mix compile from your Emakefile.
In case using mix is not possible, you should use the parallel compiler, since it will compile files using all cores available on your machine and it will automatically detect and resolve dependencies between files. Here is an example of calling the parallel compiler from Erlang:
https://github.com/basho/rebar/pull/347/files#L1R62
The API is very simple. It expects a list of file names to compile, as binaries, and the directory to output the files to, as another binary.
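A rough sketch of such a call from Erlang, assuming an Elixir version of that era where Kernel.ParallelCompiler.files_to_path/2 is available; check the exact module and function names against the Elixir version you vendor, and treat the file names as placeholders:

#!/usr/bin/env escript
%%! -pa ./ebin
%% Rough sketch; the parallel compiler's module/function names depend on the Elixir version.
main(_) ->
    application:start(elixir),
    Files = [<<"source_1.ex">>, <<"source_2.ex">>],
    Modules = 'Elixir.Kernel.ParallelCompiler':files_to_path(Files, <<"./ebin">>),
    io:format("compiled: ~p~n", [Modules]),
    erlang:halt(0).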
I am in the process of learning Erlang/OTP and rebar, and I have put together a small example app using a couple of dependencies: cowboy and lager.
I have issued
rebar get-deps
rebar compile
And things went smoothly. Now I want to fire up a console to test things out, but it is not obvious to me how to start the dependency applications.
I tried issuing a
rebar generate
in order to get all the orchestration of firing up the apps, even though it's overkill for development tests, but I failed miserably, getting the following dump:
Crash dump was written to: erl_crash.dump
eheap_alloc: Cannot allocate 1459620480 bytes of memory (of type "old_heap").
Aborted
The ebin dir only has beam files for the app I wrote, not the dependencies. I see the dependencies have their own ebin directories inside the main app's deps directory. How would I go about having them available in a console so I can start them up?
I would appreciate it if someone could shed some light on what the common practice is for a dev environment with multiple OTP apps.
I have read a couple of tutorials but they are mostly targeted at the rebar release cycle and not the development process.
TIA
In your case, the modules you pull into the deps directory should typically be called from within your application code, and your application can typically be invoked from the Erlang shell using the application:start/1 function. If you haven't yet, I strongly suggest that you read Chapter 12, "OTP Behaviors", of Francesco Cesarini's excellent book Erlang Programming - it's a great practical introduction to what you're attempting.
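For the immediate question of a development console, a sketch of a common approach, assuming rebar's default layout and using my_app as a placeholder for your application's name, is to put your ebin and the deps' ebin directories on the code path:

erl -pa ebin -pa deps/*/ebin

and then start the applications from the Erlang shell:

%% application:ensure_all_started/1 exists from Erlang R16B02 onwards;
%% on older releases start each dependency with application:start/1 in order.
application:ensure_all_started(lager).
application:ensure_all_started(cowboy).
application:start(my_app).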