How do you initialize mnesia on production? - erlang

I'm using rebar3 to produce a release, but how do I initialize mnesia on production?
If I write an "install" escript that does the mnesia:create_schema([node()]) - it will use a completely different node name from the one that release uses.
So I end up creating a schema for "nonode@nohost", while in production, when I start my app with my-app-1.0.0 start, mnesia tries to access the "myapp@localhost" node.
Also, it is kind of a chicken-and-egg problem:
I can't start up my app without mnesia tables
I can't install my mnesia tables without my app running (as the same node() name as the app will use).
Just wondering if there is a good way to handle this?
Here is my install escript that I run independently:
#!/usr/bin/env escript
%% -*- erlang -*-
%%! -smp enable ls-mnesia debug verbose
-include("../include/rr.hrl").

main(_) ->
    application:set_env(mnesia, dir, "/usr/local/src/db/mnesia"),
    application:stop(mnesia),
    install([node()|nodes()]).

install(Nodes) ->
    case mnesia:create_schema(Nodes) of
        ok ->
            rpc:multicall(Nodes, application, start, [mnesia]),
            read_store_create_tables(Nodes),
            event_store_create_tables(Nodes),
            rpc:multicall(Nodes, application, stop, [mnesia]);
        Err ->
            error_logger:warning_msg("Could not create schema: ~p~n", [Err]),
            Err
    end.

event_store_create_tables(Nodes) ->
    {_, ok} = mnesia:create_table(rr_events,
        [{attributes, record_info(fields, rr_events)},
         {disc_copies, Nodes},
         {type, bag}]).

read_store_create_tables(Nodes) ->
    % Initialize the actual data-tables for the projections
    {_, ok} = mnesia:create_table(rr_competencies,
        [{attributes, record_info(fields, rr_competencies)},
         {disc_copies, Nodes}]).
P.S.: I am using rebar3 which uses relx to build releases.

I am using my own build system which was written mainly because of that exact requirement - ability to install and initialize a node before starting it. The idea is quite simple: there are two releases, in this particular example called cmd and humbundee. The cmd release doesn't start the main applications, only loads them. Then a special function is executed to initialize the node. The function is configured in the reltool.config file. In this case it's hbd_setup from the deploy application. That function reads the configuration file and either creates and initializes the mnesia database from a backup or creates a new database if the backup doesn't exist. Once the node is installed it's started using the proper release. The same steps are executed in development (directly from the source code) and in production (from an OTP release).
With that setup the problem you described doesn't exist because both releases are started from the same location, using almost the same command and configuration files (builderl generates them from the configuration in reltool.config).
You could employ the same idea with any build tool, including rebar3 and relx, by executing these steps manually or with some sort of a script.
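To illustrate the idea (this is only a sketch with hypothetical module and table names, not the actual hbd_setup code), an initialization function run from the loading-only release could look like this:

```erlang
%% Sketch of a node-initialization function, executed from a "cmd"-style
%% release in which the main applications are loaded but not started.
%% Because it runs inside the real release, node() is the production
%% node name, so the schema is created for the right node.
-module(my_setup).
-export([install/0]).

install() ->
    Node = node(),                     % the release's proper node name
    ok = ensure_schema(Node),
    ok = mnesia:start(),
    ok = ensure_tables(),
    ok = mnesia:wait_for_tables([my_table], 30000).

ensure_schema(Node) ->
    case mnesia:create_schema([Node]) of
        ok -> ok;
        {error, {Node, {already_exists, Node}}} -> ok  % idempotent re-run
    end.

ensure_tables() ->
    case mnesia:create_table(my_table,
                             [{attributes, [key, value]},
                              {disc_copies, [node()]}]) of
        {atomic, ok} -> ok;
        {aborted, {already_exists, my_table}} -> ok
    end.
```

Once this function completes, the node is shut down and started again with the proper release, which finds the schema and tables already in place.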
What builderl does is execute these steps automatically, providing an environment to execute them in the same way in development and production; e.g. see the bottom of the humbundee project's README file:
Install the node. This will start the cmd release and execute the hbd_setup:install/2 function to initialize the node:
./bin/init.esh
Start the node. This will start the humbundee release, which starts all applications with their corresponding supervisor trees:
./bin/start.esh
builderl actually uses rebar to pull and compile dependencies that depend on other projects; however, it uses only OTP to create the release. It can also download dependencies itself using a project-wide dependency file; these are then compiled with make (compiling without make is work in progress). I hope that helps.


Idiomatic way to ship command line tools written in Erlang

The problem
Most of the articles and books about Erlang I could find focus on creating long-running server-like applications, leaving the process of creating command line tools uncovered.
I have a multi-app rebar3 project consisting of 3 applications:
myweb - a Cowboy-based web service;
mycli - a command line tool to prepare assets for myweb;
mylib - a library used by both myweb and mycli, depends on a NIF.
As a result of the build I want to get such artifacts:
an executable for the web part that is going to serve HTTP requests;
an executable command line tool for the assets preparation;
a set of libraries used by the above.
Requirements
CLI should behave like a sane non-interactive command line tool: handle arguments, deal with stdin/stdout, return non-zero exit code on error, etc;
both server and CLI should be able to use NIFs;
it should be easy to package the artifacts as a set of deb/rpm packages, so both server and CLI should reuse common dependencies.
Things tried so far
Building an escript
One of the ways I've seen in the wild is to create a self-contained escript file. At least rebar3 and relx do so. So I gave it a try.
Pros:
has support for command line arguments;
in case of errors, it returns non-zero exit code.
Cons:
embeds all the dependencies in a single file making it impossible to reuse mylib;
since *.so files get embedded into the resulting escript file, they cannot be loaded at runtime, thus NIFs don't work (see erlang rebar escriptize & nifs);
rebar3 escriptize doesn't handle dependencies well (see bug 1139).
Unknowns:
Should the CLI app become a proper OTP application?
Should it have a supervision tree?
Should it be started at all?
If so, how do I stop it when the assets have been processed?
Building a release
Another way to build a command line tool was described in a How I start: Erlang article by Fred Hebert.
Pros:
Each of the dependency applications gets its own directory, making it easy to share and package them.
Cons:
there's no defined entry point like escript's main/1;
as a consequence both command line arguments and exit code must be handled manually.
Unknowns:
How to model the CLI OTP app in a non-interactive way?
How to stop the app when the assets have been processed?
Neither of the approaches above seem to work for me.
It would be nice to get the best of both worlds: the infrastructure provided by escript, such as the main/1 entry point, command line parameters, and exit code handling, while still having a nice directory structure that is easy to package and that doesn't hinder the use of NIFs.
Regardless if you are starting a long-running daemon-like application in Erlang, or a CLI command, you always need the following:
erts application - the VM and kernel in a particular version
Erlang OTP applications
Your applications' dependencies
CLI entry point
Then in either case the CLI entry point has to start the Erlang VM and execute the code it is supposed to execute in the given situation. Then it will either exit or continue running - the latter for a long-running application.
The CLI entry point can be anything that starts an Erlang VM, e.g. an escript script, sh, bash, etc. The obvious advantage of escript over generic shell is that escript is already being executed in the context of an Erlang VM, so no need to handle starting/stopping the VM.
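For example, a minimal escript entry point along those lines (all names here, such as mylib:process_assets/2, are hypothetical) could look like:

```erlang
#!/usr/bin/env escript
%% Minimal CLI entry point sketch. The escript already runs inside an
%% Erlang VM, so there is nothing to start or stop explicitly; we only
%% add our applications' ebin directories to the code path, run the
%% task, and exit with a meaningful status code.
main([InFile, OutFile]) ->
    %% hypothetical location of the shared library application
    true = code:add_patha("lib/mylib/ebin"),
    case mylib:process_assets(InFile, OutFile) of  % hypothetical API
        ok ->
            halt(0);
        {error, Reason} ->
            io:format(standard_error, "error: ~p~n", [Reason]),
            halt(1)
    end;
main(_) ->
    io:format(standard_error, "usage: mycli <in> <out>~n", []),
    halt(2).
```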
You can start Erlang VM in two ways:
Use system-wide Erlang VM
Use an embedded Erlang release
In the first case you don't supply erts nor any OTP application with your package, you only make a particular Erlang version a dependency for your application. In the second case you supply erts and all required OTP applications along with your application's dependencies in your package.
In the second case you also need to handle setting the code root properly when starting the VM. But this is quite easy, see the erl script that Erlang uses to start the system-wide VM:
# location: /usr/local/lib/erlang/bin/erl
ROOTDIR="/usr/local/lib/erlang"
BINDIR=$ROOTDIR/erts-7.2.1/bin
EMU=beam
PROGNAME=`echo $0 | sed 's/.*\///'`
export EMU
export ROOTDIR
export BINDIR
export PROGNAME
exec "$BINDIR/erlexec" ${1+"$@"}
This can be handled by scripts, for example the node_package tool that Basho uses to package their Riak database for all major operating systems. I am maintaining my own fork of it, which I am using with my own build tool called builderl. I mention that so you know that if I managed to customize it, you will be able to do so as well :)
Once the Erlang VM is started, your application should be able to load and start any application, either supplied with Erlang or with your application (and that includes the mylib library that you mentioned). Here are some examples how this could be achieved:
escript example
See this builderl.esh example how I handle loading other Erlang applications from builderl. That escript script assumes that the Erlang installation is relative to the folder from which it's executed. When it's a part of another application, like for example humbundee, the load_builderl.hrl include file compiles and loads bld_load, which in turn loads all remaining modules with bld_load:boot/3. Notice how I can use standard OTP applications without specifying where they are - builderl is being executed by escript and so all the applications are loaded from where they were installed (/usr/local/lib/erlang/lib/ on my system). If libraries used by your application, e.g. mylib, are installed somewhere else, all you need to do is add that location to the Erlang path, e.g. with code:add_path. Erlang will automatically load modules used in the code from folders added to the code path list.
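For instance, assuming mylib was installed under a hypothetical /opt/mylib prefix, adding it to the path could look like:

```erlang
%% Add a non-standard install location to the code path; after this,
%% modules from mylib are loaded on demand like any other application.
%% The path below is purely illustrative.
true = code:add_pathz("/opt/mylib/lib/mylib-1.0.0/ebin"),
{module, mylib} = code:ensure_loaded(mylib).
```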
embedded Erlang
However, the same would hold if the application was a proper OTP release installed independently of the system-wide Erlang installation. That's because in that case the script is executed by the escript belonging to that embedded Erlang release rather than the system-wide version (even if one is installed). So it knows the location of all applications belonging to that release (including your applications). For example, riak does exactly that - in their package they supply an embedded Erlang release that contains its own erts and all dependent Erlang applications. That way riak can be started without Erlang even being installed on the host operating system. This is an excerpt from a riak package on FreeBSD:
% tar -tf riak2-2.1.1_1.txz
/usr/local/sbin/riak
/usr/local/lib/riak/releases/start_erl.data
/usr/local/lib/riak/releases/2.1.0/riak.rel
/usr/local/lib/riak/releases/RELEASES
/usr/local/lib/riak/erts-5.10.3/bin/erl
/usr/local/lib/riak/erts-5.10.3/bin/beam
/usr/local/lib/riak/erts-5.10.3/bin/erlc
/usr/local/lib/riak/lib/stdlib-1.19.3/ebin/re.beam
/usr/local/lib/riak/lib/ssl-5.3.1/ebin/tls_v1.beam
/usr/local/lib/riak/lib/crypto-3.1/ebin/crypto.beam
/usr/local/lib/riak/lib/inets-5.9.6/ebin/inets.beam
/usr/local/lib/riak/lib/bitcask-1.7.0/ebin/bitcask.app
/usr/local/lib/riak/lib/bitcask-1.7.0/ebin/bitcask.beam
(...)
sh/bash
This doesn't differ much in principle from the above apart from having to explicitly call the function that you want to execute when starting the Erlang VM (the entry point or the main function as you called it).
Consider this script that builderl generates to start an Erlang application just to execute a specified task (generate the RELEASES file), after which the node shuts down:
#!/bin/sh
START_ERL=`cat releases/start_erl.data`
APP_VSN=${START_ERL#* }
run_erl -daemon ../hbd/shell/ ../hbd/log "exec erl ../hbd releases releases/start_erl.data -config releases/$APP_VSN/hbd.config -args_file ../hbd/etc/vm.args -boot releases/$APP_VSN/humbundee -noshell -noinput -eval \"{ok, Cwd} = file:get_cwd(), release_handler:create_RELEASES(Cwd, \\\"releases\\\", \\\"releases/$APP_VSN/humbundee.rel\\\", []), init:stop()\""
This is a similar script but doesn't start any specific code or application. Instead, it starts a proper OTP release, so which applications are started and in what order depends on the release (specified by the -boot option).
#!/bin/sh
START_ERL=`cat releases/start_erl.data`
APP_VSN=${START_ERL#* }
run_erl -daemon ../hbd/shell/ ../hbd/log "exec erl ../hbd releases releases/start_erl.data -config releases/$APP_VSN/hbd.config -args_file ../hbd/etc/vm.args -boot releases/$APP_VSN/humbundee"
In the vm.args file you can provide additional paths to your applications if required, e.g.:
-pa lib/humbundee/ebin lib/yolf/ebin deps/goldrush/ebin deps/lager/ebin deps/yajler/ebin
In this example these are relative, but they could be absolute if your application is installed into a standard well-known location. Also, this would only be required if you are using the system-wide Erlang installation and need to add the additional paths to locate your Erlang applications, or if your Erlang applications are located in a non-standard location (e.g. not in the lib folder, as Erlang OTP requires). In a proper embedded Erlang release, where the applications are located in the code root/lib folder, Erlang is able to load those applications without specifying any additional paths.
Summing up and other considerations
The deployment of Erlang applications doesn't differ much from other projects written in scripting languages, e.g. ruby or python projects. All those projects have to deal with similar issues and I believe each operating system's package management deals with them one way or another:
Get to know how your operating system deals with packaging projects that have run-time dependencies.
See how other Erlang applications are packaged for your operating system, there are plenty of them that are usually distributed by all major systems: RabbitMQ, Ejabberd, Riak among others. Just download the package and unpack it to a folder, then you will see where all the files are placed.
EDIT - reference the requirements
Coming back to your requirements, you have the following choices:
Install Erlang as an OTP release system-wide, as an embedded Erlang, or as a bag with applications in some random folders (sorry Rebar)
You can have multiple entry points in the form of sh or escript scripts executing a selection of applications from the installed release. Both will work as long as you configured the code root and paths to those applications correctly (as outlined above).
Then each of your applications, myweb and mycli, would need to be executed in its own new context, e.g. start a new VM instance and execute the required application (from the same Erlang release). In case of myweb the entry point can be an sh script that starts a new node according to the release (similar to Riak). In case of mycli the entry point can be an escript, which finishes executing once the task is completed.
But it's entirely possible to create a short-running task that exits the VM even if it's started from sh - see the example above. In that case mycli would require separate release files - the script and boot files to boot the VM. And of course it's also possible to start a long-running Erlang VM from escript.
I provided an example project that uses all these methods at once, humbundee. Once it's compiled it provides three access points:
The cmd release.
The humbundee release.
The builder.esh escript.
The first one is used to start the node for installation and then shut it down. The second is used to start a long-running Erlang application. The third is a build tool to install/configure the node. This is how the project looks once the release has been created:
$:~/work/humbundee/tmp/rel % ls | tr " " "\n"
bin
erts-7.3
etc
lib
releases
$:~/work/humbundee/tmp/rel % ls bin | tr " " "\n"
builderl.esh
cmd.boot
humbundee.boot
epmd
erl
escript
run_erl
to_erl
(...)
$:~/work/humbundee/tmp/rel % ls lib | tr " " "\n"
builderl-0.2.7
compiler-6.0.3
deploy-0.0.1
goldrush-0.1.7
humbundee-0.0.1
kernel-4.2
lager-3.0.1
mnesia-4.13.3
sasl-2.7
stdlib-2.8
syntax_tools-1.7
yajler-0.0.1
yolf-0.1.1
$:~/work/humbundee/tmp/rel % ls releases/hbd-0.0.1 | tr " " "\n"
builderl.config
cmd.boot
cmd.rel
cmd.script
humbundee.boot
humbundee.rel
humbundee.script
sys.config.src
The cmd entry point will use application deploy-0.0.1 and builderl-0.2.7 as well as release files cmd.boot, cmd.script, and some OTP applications. The standard humbundee entry point will use all applications apart from builderl and deploy. Then the builderl.esh escript will use application deploy-0.0.1 and builderl-0.2.7. All from the same embedded Erlang OTP installation.
A small escript that then goes into code from 'conventional' modules could be a solution.
As an example, Concuerror is expected to be used as a command line tool and uses an escript as its entry point. It handles command-line arguments via getopt. All the main code is in regular Erlang modules, which are included in the path with simple arguments to the escript.
As far as I understand, NIFs can then be loaded with a regular -on_load attribute (Concuerror does not use NIFs).
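For completeness, the usual -on_load NIF-loading pattern (hypothetical module and library names; as noted, Concuerror itself does not use NIFs) looks roughly like this:

```erlang
-module(mylib_nif).
-export([fast_sum/1]).
-on_load(init/0).

%% Loads the NIF when the module is first loaded. The .so is looked up
%% in the application's priv directory, which works both in development
%% and inside a release, as long as the application layout is standard.
init() ->
    SoName = filename:join(code:priv_dir(mylib), "mylib_nif"),
    erlang:load_nif(SoName, 0).

%% Fallback that runs only if the NIF failed to load.
fast_sum(_List) ->
    erlang:nif_error(nif_not_loaded).
```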

Elixir: how can I leverage release_handler?

I know there are projects such as exrm which do that for you, and they seem to do it extremely well. However, I would like to manually play with release_handler for educational purposes, and I find little documentation or articles on the subject.
Have you read the Learn You Some Erlang article about release handling?
Doing upgrades with release handler is both easy and difficult. Easy because once you know all the nitty gritty details it's pretty much automatic. But difficult, because getting all the details right in the first place is quite a task.
I spent quite a lot of time trying to understand how to get the process right and the effect is builderl, a build system, something like rebar, but doing everything from start to finish the OTP way.
It's probably not of much use if you are using Elixir, and I don't know much about how the process differs for Elixir. But I can give you a few points that were relevant for Erlang, which you can try to adapt to your case (in the context of builderl, as that will allow me to share some code examples).
The bird's view
In general the process is as follows:
Create the initial release. This is a compiled version of your project processed with systools and reltool that contains all the folders and files as release handler expects them to be. This is what builderl creates in tmp/rel for gmake rel. It's a self-contained folder also called an embedded Erlang installation. More about it later.
Copy the release to the production system, install and start the node.
In your development environment make some updates to one or more of the applications running in the release.
Create a new version of each application that has changed. Create appropriate appup files for those applications. Increase the release version.
Create the relup file with systools. The easiest way is to have two release versions unpacked alongside each other and the newer version installed and running. Then in the Erlang shell of the new version call make_relup referencing the old version of applications from the old version of the release (see snippets at the bottom of the answer).
Create the upgrade archive. This is simply a tarball of all applications that have been upgraded, the releases folder, and the release definition file (again snippets at the bottom).
Copy the upgrade archive to the production system and unpack with release_handler:unpack_release/1. This will unpack the new versions of your applications alongside the old versions in the lib folder of the release folder.
Then finally install the new version with release_handler:install_release/1.
A release
In order to use the release handler the code must be structured in a specific way. The Releases document specifies how a release folder should look. The humbundee application is an example; here are its folders once builderl compiles it and creates the release using systools (in tmp/rel):
> tmp/rel/
bin/
erts-7.2.1/
etc/
lib/
releases/
The folders are as follows:
lib
Main folder with applications. Either all application folders carry the version suffix (-Vsn) or none do; you can't mix application folders where some of them contain the version suffix and some don't.
But when doing upgrades with release handler all folders must have a proper -Vsn suffix, otherwise upgrades don't make sense - upgrades are between versions of applications and Erlang distinguishes the versions by the suffix in the application's folder name.
> l tmp/rel/lib/
builderl-0.2.7/
compiler-6.0.2/
deploy-0.0.1/
goldrush-0.1.7/
humbundee-0.0.1/
kernel-4.1.1/
lager-3.0.1/
mnesia-4.13.2/
sasl-2.6.1/
stdlib-2.7/
syntax_tools-1.7/
yajler-0.0.1/
yolf-0.1.1/
erts-Vsn
Usually this is a copy of the erts folder from where Erlang is installed (/usr/local/lib/erlang/erts-7.2.1 in my case).
releases
Contains files that define the release. Each file is created slightly differently.
> l tmp/rel/releases/
RELEASES
hbd-0.0.1/
start_erl.data
humbundee.rel
> cat releases/RELEASES
[{release,"humbundee","hbd-0.0.1","7.2.1",
[{kernel,"4.1.1","/usr/home/g/work/humbundee/lib/kernel-4.1.1"},
{stdlib,"2.7","/usr/home/g/work/humbundee/lib/stdlib-2.7"},
{sasl,"2.6.1","/usr/home/g/work/humbundee/lib/sasl-2.6.1"},
{yolf,"0.1.1","/usr/home/g/work/humbundee/lib/yolf-0.1.1"},
{yajler,"0.0.1","/usr/home/g/work/humbundee/lib/yajler-0.0.1"},
{mnesia,"4.13.2","/usr/home/g/work/humbundee/lib/mnesia-4.13.2"},
{syntax_tools,"1.7",
"/usr/home/g/work/humbundee/lib/syntax_tools-1.7"},
{compiler,"6.0.2","/usr/home/g/work/humbundee/lib/compiler-6.0.2"},
{goldrush,"0.1.7","/usr/home/g/work/humbundee/lib/goldrush-0.1.7"},
{lager,"3.0.1","/usr/home/g/work/humbundee/lib/lager-3.0.1"},
{humbundee,"0.0.1",
"/usr/home/g/work/humbundee/lib/humbundee-0.0.1"}],
permanent}].
> cat releases/start_erl.data
7.2.1 hbd-0.0.1
> cat tmp/rel/releases/humbundee.rel
%% rel generated at {2016,3,24} {11,9,39}
{release,{"humbundee","hbd-0.0.1"},
{erts,"7.2.1"},
[{kernel,"4.1.1"},
{stdlib,"2.7"},
{sasl,"2.6.1"},
{yolf,"0.1.1"},
{yajler,"0.0.1"},
{mnesia,"4.13.2"},
{syntax_tools,"1.7"},
{compiler,"6.0.2"},
{goldrush,"0.1.7"},
{lager,"3.0.1"},
{humbundee,"0.0.1"}]}.
> l tmp/rel/releases/hbd-0.0.1/
builderl.config
cmd.boot
cmd.data
cmd.rel
cmd.script
hbd.config
humbundee.boot
humbundee.data
humbundee.rel
humbundee.script
start.boot -> humbundee.boot
sys.config -> hbd.config
sys.config.src -> ../../etc/sys.config.src
Those files are created automatically by builderl, which in turn generates them with the help of Release Handler:
RELEASES
Contains the definitions of current and past releases currently available on the system. Can be created with release_handler:create_RELEASES/4. builderl creates it by starting a node with this command (which runs the command and shuts down the node afterwards):
run_erl -daemon ../hbd/shell/ ../hbd/log "exec erl ../hbd releases releases/start_erl.data -config releases/$APP_VSN/hbd.config -args_file ../hbd/etc/vm.args -boot releases/$APP_VSN/humbundee -noshell -noinput -eval \"{ok, Cwd} = file:get_cwd(), release_handler:create_RELEASES(Cwd, \\\"releases\\\", \\\"releases/$APP_VSN/humbundee.rel\\\", []), init:stop()\""
start_erl.data
Can be created manually.
humbundee.rel
Contains a definition of a release - which applications should be included in the release in which versions. It can be created manually but it's difficult to keep track of the application versions. So, very often it's generated from a more generic file. In case of builderl it generates the .rel file from a reltool.config file based on these reltool examples.
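For reference, given a humbundee.rel file the corresponding .script and .boot files can be generated with systools, e.g. (option values are illustrative):

```erlang
%% Generates humbundee.script and humbundee.boot from humbundee.rel.
%% The path option tells systools where to find the applications listed
%% in the .rel file; the wildcard expands to all matching ebin folders.
ok = systools:make_script("humbundee", [{path, ["lib/*/ebin"]}, local]).
```

The local option puts absolute paths into the boot script, which is convenient in development; for an installable release you would drop it so paths stay relative to the code root.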
bin
Contains links to Erlang executables, script and boot files for releases. In case of builderl they are simply copied from the Erlang installation.
> l tmp/rel/bin/
builderl -> ../lib/builderl-0.2.7
config.esh -> builderl.esh
configure.esh -> builderl.esh
deps.esh -> builderl.esh
init.esh -> builderl.esh
migresia.esh -> builderl.esh
mk_dev.esh -> builderl.esh
mk_rel.esh -> builderl.esh
start.esh -> builderl.esh
stop.esh -> builderl.esh
update_root_dir.esh -> builderl.esh
builderl.esh
ct_run
epmd
erl
escript
run_erl
to_erl
start
cmd.boot
humbundee.boot
start_clean.boot
start_sasl.boot
start.boot
start.script
The .esh files belong to builderl so you can skip them. The executables are copied from /usr/local/lib/erlang/bin/ (which is system dependent - I am using FreeBSD). The start* boot and script files are also copied from /usr/local/lib/erlang/bin/ and cmd.boot and humbundee.boot are simply copied from tmp/rel/releases/hbd-0.0.1/.
etc
Used by builderl to store configuration files, not required by OTP release handler.
Snippets
Finally some code snippets to help you work with the release tools:
Create a relup file that upgrades release version 1.1.1 to 9.9.9 (run in the version having the old version of the release alongside, see The bird's view at the top of the answer):
systools:make_relup("releases/rel-9.9.9/my_release", ["../1.1.1/releases/rel-1.1.1/my_release"], ["../1.1.1/releases/rel-1.1.1/my_release"], [{path, ["../1.1.1/lib/lager-2.6/ebin", "../1.1.1/lib/backup_app-1.1.2/ebin", "../1.1.1/lib/goldrush-2.5/ebin"]}, {outdir, "releases/rel-9.9.9"}]).
Create a relup file that upgrades two release versions, 1.1.1 and 2.2.2 to 9.9.9
systools:make_relup("releases/rel-9.9.9/my_release", ["../1.1.1/releases/rel-1.1.1/my_release", "../2.2.2/releases/rel-2.2.2/my_release"], ["../1.1.1/releases/rel-1.1.1/my_release", "../2.2.2/releases/rel-2.2.2/my_release"], [{path, ["../1.1.1/lib/lager-2.6/ebin", "../2.2.2/lib/lager-2.4/ebin", "../1.1.1/lib/backup_app-1.1.2/ebin", "../2.2.2/lib/other_app-1.1/ebin"]}, {outdir, "releases/rel-9.9.9"}]).
Manually create the release upgrade package
tar -czf update_to_rel-9.9.9.tar.gz releases/my_release.rel releases/rel-9.9.9 lib/lager-2.6.1 lib/builderl-0.2.6 lib/deploy-0.4.2 lib/backup_app-2.0.0 lib/other_app-2.5.3
Unpack the release upgrade package (after it has been copied to the releases folder in the release on the production node):
release_handler:unpack_release("rel-9.9.9").
Verify the release has been unpacked - the new version should be marked as unpacked:
release_handler:which_releases().
Install the new release:
release_handler:install_release("rel-9.9.9").
Make it permanent once it's working as expected:
release_handler:make_permanent("rel-9.9.9").
Rollback the release:
release_handler:install_release("rel-1.1.1").
release_handler:remove_release("rel-1.1.1").

mnesia doesn't restart with supervisor

I have a problem working with gen_server, supervisor and mnesia. I have a supervisor: http://pastebin.com/8rkfrq7D
which starts a server module that starts mnesia. My problem is that when I write
erl
c(superv).
superv:start_link().
//it opens fine
^C
erl
c(superv).
superv:start_link().
** exception exit: shutdown
//if I try start_link() again, it works
I got this. I deleted the part responsible for starting mnesia and it worked fine, so I suspect that the instant quit (by Ctrl+C) doesn't close mnesia properly. Unfortunately, even if I invoke mnesia:stop() before quitting and then invoke start_link() again, it returns the exception exit. Please help me resolve this matter.
The quick (but not entirely right) way to start mnesia is as an application:
application:start(mnesia).
before you start your application. It can be used when you are developing your system. For a real deployment, you want to generate a release with a boot-script. A release is a self-contained Erlang system you can start up on a foreign machine. You will write your own application, write a my_application_name.app file which contains a dependency on mnesia. Then you want to generate a release, typically with reltool and this release will then initialize by starting up mnesia before starting my_application_name. At least this is the real way to do it.
A tool like rebar can help you with maintaining your application and a reltool.config file for building your release.
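As a rough sketch (application and release names are hypothetical), a minimal reltool.config declaring the mnesia dependency could look like:

```erlang
%% Minimal reltool.config sketch: the release includes mnesia, so the
%% generated boot script starts it before my_application_name.
{sys, [
    {rel, "my_release", "1.0.0",
        [kernel, stdlib, sasl, mnesia, my_application_name]},
    {boot_rel, "my_release"},
    {profile, embedded},
    {app, my_application_name, [{incl_cond, include}]}
]}.
```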
Note that Mnesia needs a schema before it can start. A common trick is to have your release contain a default empty database which gets installed such that mnesia's dir parameter points to it. Thus, if you start a newly generated system, it has a database to start from. And you can restart from scratch by re-installing the empty database. Check out FALLBACK.BUP in mnesia for hints on how to do this.
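A sketch of that trick (the file name is hypothetical): on a node where the empty schema and tables already exist, take a backup; on a fresh node, install it as a fallback before starting mnesia:

```erlang
%% On a node where the empty schema and tables have been created:
ok = mnesia:backup("empty_db.bup").

%% On a fresh node, before mnesia is started, install the backup as a
%% fallback; mnesia restores from it during the next start:
ok = mnesia:install_fallback("empty_db.bup").
ok = mnesia:start().
```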
As for your errors: you can't start your server twice. The first time around, it registers itself under the atom server, so a subsequent restart while it is already running will crash it. You can sometimes get a hint if you boot Erlang with the SASL application enabled. Either execute application:start(sasl) or run Erlang like so:
erl -boot start_sasl
which substitutes the normal boot script with a variant that also starts SASL.

Why does EUnit compile .beam files into .eunit/ebin?

Disclaimer: The author of this question has mostly theoretical knowledge of Erlang/OTP.
I have a small OTP application which calls some non-Erlang executable inside the myapp/ebin directory via open_port(). When I run the application itself everything goes well and the port to the executable is successfully opened.
But when I try to run unit tests for the application, the ones that depend on the open_port() fail, because when started with EUnit the application tries to find the executable under myapp/.eunit/ebin.
How can I change that behaviour without changing the code of the application itself? How can I run EUnit tests with the same current directory as when running the application itself? (I mean it would not be a good idea to change the code which provides the path to the executable just to be able to run EUnit).
Edit: I followed the advice in the Erlang mailing list, but code:priv_dir(myapp_name) returns {error, bad_name}.
Edit: I can see that .eunit/ contains modulename.beam files and ebin/ contains both modulename.beam and modulename_tests.beam files. Now I am completely lost. When I run make test, rebar runs the eunit command, which calls each modulename_tests.beam file in the ebin/ directory, which in turn calls the corresponding modulename.beam file in the .eunit/ directory (filename:absname("") clearly shows that modulename.beam files are executed from .eunit/ during tests). Why is that? Why do we need to run modulename.beam files from the .eunit/ directory instead of ebin/?
Why do we actually need to have the very same .beam files in myapp/ebin and myapp/.eunit/ebin?
P.S. I have read the official documentation and did not find the solution.
EUnit does not do this by itself - the .eunit directory is a convention used by Rebar.
You can use the Erlang start script, .erlang, to solve your problem.
In the .erlang file, use code:add_pathz/1 to add the necessary paths.
Reading the couchdb source code, there is an example of how to use the priv directory. Maybe that solution is helpful to you. It wraps open_port with a start_port function, and resolves the directory in start_port.
In file couch_os_daemon.erl
start_port(Command) ->
    PrivDir = couch_util:priv_dir(),
    Spawnkiller = filename:join(PrivDir, "couchspawnkillable"),
    Port = open_port({spawn, Spawnkiller ++ " " ++ Command}, ?PORT_OPTIONS),
    {ok, Port}.

stop_port(#daemon{port=Port, kill=undefined}=D) ->
    ?LOG_ERROR("Stopping daemon without a kill command: ~p", [D#daemon.name]),
    catch port_close(Port);
stop_port(#daemon{port=Port}=D) ->
    ?LOG_DEBUG("Stopping daemon: ~p", [D#daemon.name]),
    os:cmd(D#daemon.kill),
    catch port_close(Port).
In file couch_util.erl
priv_dir() ->
    case code:priv_dir(couch) of
        {error, bad_name} ->
            % small hack, in dev mode "app" is couchdb. Fixing requires
            % renaming src/couch to src/couchdb. Not really worth the hassle.
            % -Damien
            code:priv_dir(couchdb);
        Dir -> Dir
    end.

start_driver(LibDir) ->
    case erl_ddll:load_driver(LibDir, "couch_icu_driver") of
        ok ->
            ok;
        {error, already_loaded} ->
            ok = erl_ddll:reload_driver(LibDir, "couch_icu_driver");
        {error, Error} ->
            exit(erl_ddll:format_error(Error))
    end.
You can grep for priv; many examples can be found.

Doing an "offline" erlang OTP release upgrade

I'm working on a project that deals heavily with OTP releases and their upgrade process.
I managed to perform hot upgrades from a release upgrade archive (tar.gz) using release_handler functions unpack_release, install_release and make_permanent. They are all invoked in the very node that is being upgraded.
Now I have to deal with a situation where the Erlang node is down and I have to do an "offline" upgrade. Essentially what I want to achieve is to unpack the release, and update certain files like RELEASES and start_erl.data (maybe some more?) so they are in the same state as they would be in after a hot upgrade. The result would be that when the node is started, the newly installed Erlang release is booted. Also, an important thing is that I want to avoid running the old release.
Any ideas how to do this as simple and cleanly as possible?
Start an Erlang node to get a shell. There is no need for a node name; just be sure you are running the same ~/bin/erl that the target node does. Then place your release package in ~/lib/erlang/releases and unpack it as you would normally:
1> application:start(sasl),
1> release_handler:unpack_release("my_release-1.0").
{ok, "1.0"}.
Now quit, shutting down the shell:
2> q().
[Don't try and cheat by using another window here! You must quit.]
Now you need to edit the ~/lib/erlang/releases/RELEASES file and change the status of the new release from unpacked to current:
[{release,"My Release Package","1.0","5.9.1",
[{kernel,"2.15.1","/Users/otpuser/lib/erlang/lib/kernel-2.15.1"},
{stdlib,"1.18.1","/Users/otpuser/lib/erlang/lib/stdlib-1.18.1"},
{sasl,"2.2.1","/Users/otpuser/lib/erlang/lib/sasl-2.2.1"}, ...],
- unpacked}].
+ current}].
Start a shell again and make it permanent:
1> application:start(sasl),
1> release_handler:make_permanent("1.0").
ok
[Note: all that make_permanent/1 does is put the release version ("1.0") in ~/lib/erlang/releases/start_erl.data so you could cheat here.]
Be sure to place your system configuration in ~/lib/erlang/releases/1.0/sys.config.
Now when you run ~/bin/start the release name will be read from start_erl.data and init will use the boot script in ~/lib/erlang/releases/1.0/start.boot.
