mnesia doesn't restart with supervisor - erlang

I have a problem working with gen_server, supervisor and mnesia. I have a supervisor (http://pastebin.com/8rkfrq7D)
which starts a server module that in turn starts Mnesia. My problem is that when I run
erl
c(superv).
superv:start_link().
% it starts fine
^C
erl
c(superv).
superv:start_link().
** exception exit: shutdown
% if I call start_link() again, it works
That's what I get. When I deleted the part responsible for starting Mnesia, everything worked fine, so I suspect that the abrupt quit (Ctrl+C) doesn't close Mnesia properly. Unfortunately, even if I invoke mnesia:stop() before quitting, invoking start_link() again still returns the exception exit. Please help me resolve this.

The not-entirely-right way to start Mnesia is as an application:
application:start(mnesia).
before you start your application. It can be used while you are developing your system. For a real deployment, you want to generate a release with a boot script. A release is a self-contained Erlang system that you can start up on a foreign machine. You write your own application and give it a my_application_name.app file which declares a dependency on mnesia. Then you generate a release, typically with reltool, and this release will initialize itself by starting mnesia before starting my_application_name. At least, this is the real way to do it.
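For reference, a minimal .app file with such a dependency might look like this (a sketch; the application and module names are placeholders based on the question):

%% ebin/my_application_name.app -- hypothetical application resource file
{application, my_application_name,
 [{description, "My application"},
  {vsn, "1.0.0"},
  {modules, [my_application_name_app, superv, server]},
  {registered, [superv, server]},
  {applications, [kernel, stdlib, mnesia]},
  {mod, {my_application_name_app, []}}]}.

With mnesia listed under applications, the boot script generated from the release starts it before your application.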
A tool like rebar can help you maintain your application and a reltool.config file for building your release.
Note that Mnesia needs a schema before it can start. A common trick is to have your release contain a default empty database, installed so that Mnesia's dir parameter points to it. That way a newly generated system has a database to start from, and you can restart from scratch by re-installing the empty database. Check out FALLBACK.BUP in the Mnesia documentation for hints on how to do this.
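A sketch of that trick, using mnesia:backup/1 and mnesia:install_fallback/1 (the path and node list are assumptions):

%% Once, at build time: create the empty schema and tables, then
%% back them up.
ok = mnesia:create_schema([node()]),
ok = mnesia:start(),
%% ... create the empty tables here ...
ok = mnesia:backup("FALLBACK.BUP").

%% On a fresh target system, with Mnesia running: register the backup
%% as a fallback; Mnesia restores it the next time it starts.
ok = mnesia:install_fallback("FALLBACK.BUP").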
As for your errors: you can't start your server twice. The first time around it registers itself under the atom server, so a subsequent start while it is already running will crash. You can sometimes get a hint if you boot Erlang with the SASL application enabled. Either execute application:start(sasl) or run Erlang like so:
erl -boot start_sasl
which substitutes the normal boot script with a variant that also starts SASL.
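To illustrate (a sketch; the pid will differ): with the child gen_server registered locally as server, a second start attempt fails with {error, {already_started, Pid}}, the supervisor gives up, and the shell process linked to it exits with shutdown:

1> superv:start_link().
{ok,<0.35.0>}
2> superv:start_link().
** exception exit: shutdown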

Related

How do you initialize mnesia on production?

I'm using rebar3 to produce a release, but how do I initialize mnesia on production?
If I write an "install" escript that does the mnesia:create_schema([node()]) - it will use a completely different node name from the one the release uses.
So I end up creating a schema for "nonode@nohost", while in production Mnesia, when I start my app with my-app-1.0.0 start, tries to access the "myapp@localhost" node.
Also, it is kind of a chicken-and-egg problem:
I can't start up my app without mnesia tables
I can't install my mnesia tables without my app running (under the same node() name that the app will use).
Just wondering if there is a good way to handle this?
Here is my install escript that I run independently:
#!/usr/bin/env escript
%% -*- erlang -*-
%%! -smp enable ls-mnesia debug verbose
-include("../include/rr.hrl").
main(_) ->
    application:set_env(mnesia, dir, "/usr/local/src/db/mnesia"),
    application:stop(mnesia),
    install([node()|nodes()]).

install(Nodes) ->
    case mnesia:create_schema(Nodes) of
        ok ->
            rpc:multicall(Nodes, application, start, [mnesia]),
            read_store_create_tables(Nodes),
            event_store_create_tables(Nodes),
            rpc:multicall(Nodes, application, stop, [mnesia]);
        Err ->
            error_logger:warning_msg("Could not create schema: ~p~n", [Err]),
            Err
    end.

event_store_create_tables(Nodes) ->
    {_, ok} = mnesia:create_table(rr_events,
                                  [{attributes, record_info(fields, rr_events)},
                                   {disc_copies, Nodes},
                                   {type, bag}]).

read_store_create_tables(Nodes) ->
    %% Initialize the actual data-tables for the projections
    {_, ok} = mnesia:create_table(rr_competencies,
                                  [{attributes, record_info(fields, rr_competencies)},
                                   {disc_copies, Nodes}]).
P.S.: I am using rebar3 which uses relx to build releases.
I am using my own build system, which was written mainly because of this exact requirement - the ability to install and initialize a node before starting it. The idea is quite simple: there are two releases, in this particular example called cmd and humbundee. The cmd release doesn't start the main applications; it only loads them. Then a special function is executed to initialize the node. The function is configured in the reltool.config file; in this case it's hbd_setup from the deploy application. That function reads the configuration file and either creates and initializes the Mnesia database from a backup, or creates a new database if the backup doesn't exist. Once the node is installed, it's started using the proper release. The same steps are executed in development (directly from the source code) and in production (from an OTP release).
With that setup the problem you described doesn't exist because both releases are started from the same location, using almost the same command and configuration files (builderl generates them from the configuration in reltool.config).
You could employ the same idea with any build tool, including rebar3 and relx, by executing these steps manually or with some sort of a script.
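For instance, the schema could be created under the release's node name by giving an install escript the same name via emulator arguments (a sketch; myapp@localhost and the Mnesia dir are taken from the question's setup):

#!/usr/bin/env escript
%% -*- erlang -*-
%%! -name myapp@localhost
%% Run this while the release is stopped, so the schema is created for
%% the same node name the release will later use.
main(_) ->
    application:set_env(mnesia, dir, "/usr/local/src/db/mnesia"),
    ok = mnesia:create_schema([node()]),
    ok = mnesia:start(),
    %% ... create the tables here, as in the question's escript ...
    stopped = mnesia:stop().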
What builderl does is execute these steps automatically, providing an environment to run them the same way in development and in production; see, for example, the bottom of the humbundee project's README file:
Install the node. This will start the cmd release and execute the hbd_setup:install/2 function to initialize the node:
./bin/init.esh
Start the node. This will start the humbundee release, which starts all applications with their corresponding supervisor trees:
./bin/start.esh
builderl actually uses rebar to pull and compile dependencies that depend on other projects; however, it uses only OTP to create the release. It can also download dependencies itself using a project-wide dependency file; these are then compiled with make (compiling without make is work in progress). I hope that helps.

Idiomatic way to ship command line tools written in Erlang

The problem
Most of the articles and books about Erlang that I could find focus on creating long-running server-like applications, leaving the creation of command line tools uncovered.
I have a multi-app rebar3 project consisting of 3 applications:
myweb - a Cowboy-based web service;
mycli - a command line tool to prepare assets for myweb;
mylib - a library used by both myweb and mycli, depends on a NIF.
As a result of the build I want to get such artifacts:
an executable for the web part that is going to serve HTTP requests;
an executable command line tool for the assets preparation;
a set of libraries used by the above.
Requirements
CLI should behave like a sane non-interactive command line tool: handle arguments, deal with stdin/stdout, return non-zero exit code on error, etc;
both server and CLI should be able to use NIFs;
it should be easy to package the artifacts as a set of deb/rpm packages, so both server and CLI should reuse common dependencies.
Things tried so far
Building an escript
One of the ways I've seen in the wild is to create a self-contained escript file. At least rebar3 and relx do so. So I gave it a try.
Pros:
has support for command line arguments;
in case of errors, it returns non-zero exit code.
Cons:
embeds all the dependencies in a single file making it impossible to reuse mylib;
since *.so files get embedded into the resulting escript file, they cannot be loaded at runtime, thus NIFs don't work (see erlang rebar escriptize & nifs);
rebar3 escriptize doesn't handle dependencies well (see bug 1139).
Unknowns:
Should the CLI app become a proper OTP application?
Should it have a supervision tree?
Should it be started at all?
If so, how do I stop it when the assets have been processed?
Building a release
Another way to build a command line tool is described in Fred Hebert's article How I Start: Erlang.
Pros:
Each dependency application gets its own directory, making it easy to share and package them.
Cons:
there's no defined entry point like escript's main/1;
as a consequence both command line arguments and exit code must be handled manually.
Unknowns:
How to model the CLI OTP app in a non-interactive way?
How to stop the app when the assets have been processed?
Neither of the approaches above seems to work for me.
It would be nice to get the best of both worlds: get the infrastructure that is provided by escript such as main/1 entry point, command line parameters, and exit code handling while still having a nice directory structure that is easy to package and which doesn't hinder the use of NIFs.
Regardless if you are starting a long-running daemon-like application in Erlang, or a CLI command, you always need the following:
erts application - the VM and kernel in a particular version
Erlang OTP applications
Your applications' dependencies
CLI entry point
Then, in either case, the CLI entry point has to start the Erlang VM and execute the code it is supposed to execute in the given situation. Afterwards it will either exit or continue running - the latter for a long-running application.
The CLI entry point can be anything that starts an Erlang VM, e.g. an escript script, sh, bash, etc. The obvious advantage of escript over generic shell is that escript is already being executed in the context of an Erlang VM, so no need to handle starting/stopping the VM.
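For illustration, a minimal escript entry point that handles arguments, stderr and exit codes could look like this (a sketch; mycli_assets:prepare/1 stands in for whatever your mycli application exposes):

#!/usr/bin/env escript
%% -*- erlang -*-
main([InputDir]) ->
    %% Hypothetical call into the mycli application code.
    case mycli_assets:prepare(InputDir) of
        ok ->
            io:format("assets prepared~n"),
            halt(0);
        {error, Reason} ->
            io:format(standard_error, "error: ~p~n", [Reason]),
            halt(1)
    end;
main(_) ->
    io:format(standard_error, "usage: mycli <input-dir>~n", []),
    halt(2).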
You can start Erlang VM in two ways:
Use system-wide Erlang VM
Use an embedded Erlang release
In the first case you don't supply erts nor any OTP application with your package, you only make a particular Erlang version a dependency for your application. In the second case you supply erts and all required OTP applications along with your application's dependencies in your package.
In the second case you also need to handle setting the code root properly when starting the VM. But this is quite easy, see the erl script that Erlang uses to start the system-wide VM:
# location: /usr/local/lib/erlang/bin/erl
ROOTDIR="/usr/local/lib/erlang"
BINDIR=$ROOTDIR/erts-7.2.1/bin
EMU=beam
PROGNAME=`echo $0 | sed 's/.*\///'`
export EMU
export ROOTDIR
export BINDIR
export PROGNAME
exec "$BINDIR/erlexec" ${1+"$#"}
This can be handled by scripts, for example the node_package tool that Basho uses to package their Riak database for all major operating systems. I maintain my own fork of it, which I use with my own build tool called builderl. I only mention that so you know that if I managed to customize it, you will be able to do so as well :)
Once the Erlang VM is started, your application should be able to load and start any application, either supplied with Erlang or with your application (and that includes the mylib library that you mentioned). Here are some examples how this could be achieved:
escript example
See this builderl.esh example of how I handle loading other Erlang applications from builderl. That escript assumes that the Erlang installation is relative to the folder from which it's executed. When it's part of another application, like for example humbundee, the load_builderl.hrl include file compiles and loads bld_load, which in turn loads all remaining modules with bld_load:boot/3. Notice how I can use standard OTP applications without specifying where they are - builderl is executed by escript, so all the applications are loaded from where they were installed (/usr/local/lib/erlang/lib/ on my system). If libraries used by your application, e.g. mylib, are installed somewhere else, all you need to do is add that location to the Erlang code path, e.g. with code:add_path. Erlang will automatically load modules used by the code from folders added to the code path.
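For example, if mylib were installed outside the standard lib folder, something like this would make it loadable (the path is made up):

%% Add mylib's ebin folder to the code path; from then on modules from
%% mylib are found and loaded automatically.
true = code:add_path("/opt/myproject/lib/mylib-1.0.0/ebin"),
{module, mylib} = code:ensure_loaded(mylib).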
embedded Erlang
However, the same would hold if the application were a proper OTP release installed independently of the system-wide Erlang installation. That's because in that case the script is executed by the escript belonging to that embedded Erlang release, rather than by the system-wide version (even if one is installed). So it knows the location of all applications belonging to that release (including your applications). For example, Riak does exactly that - its package ships an embedded Erlang release that contains its own erts and all dependent Erlang applications. That way Riak can be started without Erlang even being installed on the host operating system. This is an excerpt from a Riak package on FreeBSD:
% tar -tf riak2-2.1.1_1.txz
/usr/local/sbin/riak
/usr/local/lib/riak/releases/start_erl.data
/usr/local/lib/riak/releases/2.1.0/riak.rel
/usr/local/lib/riak/releases/RELEASES
/usr/local/lib/riak/erts-5.10.3/bin/erl
/usr/local/lib/riak/erts-5.10.3/bin/beam
/usr/local/lib/riak/erts-5.10.3/bin/erlc
/usr/local/lib/riak/lib/stdlib-1.19.3/ebin/re.beam
/usr/local/lib/riak/lib/ssl-5.3.1/ebin/tls_v1.beam
/usr/local/lib/riak/lib/crypto-3.1/ebin/crypto.beam
/usr/local/lib/riak/lib/inets-5.9.6/ebin/inets.beam
/usr/local/lib/riak/lib/bitcask-1.7.0/ebin/bitcask.app
/usr/local/lib/riak/lib/bitcask-1.7.0/ebin/bitcask.beam
(...)
sh/bash
This doesn't differ much in principle from the above, apart from having to explicitly call the function that you want to execute when starting the Erlang VM (the entry point, or the main function as you called it).
Consider this script that builderl generates to start an Erlang application just to execute a specified task (generate the RELEASES file), after which the node shuts down:
#!/bin/sh
START_ERL=`cat releases/start_erl.data`
APP_VSN=${START_ERL#* }
run_erl -daemon ../hbd/shell/ ../hbd/log "exec erl ../hbd releases releases/start_erl.data -config releases/$APP_VSN/hbd.config -args_file ../hbd/etc/vm.args -boot releases/$APP_VSN/humbundee -noshell -noinput -eval \"{ok, Cwd} = file:get_cwd(), release_handler:create_RELEASES(Cwd, \\\"releases\\\", \\\"releases/$APP_VSN/humbundee.rel\\\", []), init:stop()\""
This is a similar script but doesn't start any specific code or application. Instead, it starts a proper OTP release, so which applications are started and in what order depends on the release (specified by the -boot option).
#!/bin/sh
START_ERL=`cat releases/start_erl.data`
APP_VSN=${START_ERL#* }
run_erl -daemon ../hbd/shell/ ../hbd/log "exec erl ../hbd releases releases/start_erl.data -config releases/$APP_VSN/hbd.config -args_file ../hbd/etc/vm.args -boot releases/$APP_VSN/humbundee"
In the vm.args file you can provide additional paths to your applications if required, e.g.:
-pa lib/humbundee/ebin lib/yolf/ebin deps/goldrush/ebin deps/lager/ebin deps/yajler/ebin
In this example the paths are relative, but they could be absolute if your application is installed into a standard, well-known location. Also, this is only required if you are using the system-wide Erlang installation and need to add paths to locate your Erlang applications, or if your applications are located in a non-standard place (i.e. not in the lib folder, as OTP requires). In a proper embedded Erlang release, where the applications are located in the code root's lib folder, Erlang is able to load those applications without any additional paths.
Summing up and other considerations
The deployment of Erlang applications doesn't differ much from that of projects written in scripting languages, e.g. Ruby or Python. All those projects have to deal with similar issues, and I believe each operating system's package management deals with them one way or another:
Get to know how your operating system deals with packaging projects that have run-time dependencies.
See how other Erlang applications are packaged for your operating system, there are plenty of them that are usually distributed by all major systems: RabbitMQ, Ejabberd, Riak among others. Just download the package and unpack it to a folder, then you will see where all the files are placed.
EDIT - reference the requirements
Coming back to your requirements, you have the following choices:
Install Erlang as an OTP release system-wide, as an embedded Erlang, or as a bag with applications in some random folders (sorry Rebar)
You can have multiple entry points in the form of sh or escript scripts executing a selection of applications from the installed release. Both will work as long as you configured the code root and paths to those applications correctly (as outlined above).
Then each of your applications, myweb and mycli, would need to be executed in its own context, e.g. by starting a new VM instance and executing the required application (from the same Erlang release). In the case of myweb the entry point can be an sh script that starts a new node according to the release (similar to Riak). In the case of mycli the entry point can be an escript, which finishes executing once the task is completed.
But it's entirely possible to create a short-running task that exits the VM even if it's started from sh - see the example above. In that case mycli would require its own release files - the .script and .boot files used to boot the VM. And of course it's also possible to start a long-running Erlang VM from an escript.
I provided an example project that uses all these methods at once, humbundee. Once it's compiled it provides three access points:
The cmd release.
The humbundee release.
The builder.esh escript.
The first is used to start the node for installation and then shut it down. The second is used to start the long-running Erlang application. The third is a build tool used to install/configure the node. This is how the project looks once the release has been created:
$:~/work/humbundee/tmp/rel % ls | tr " " "\n"
bin
erts-7.3
etc
lib
releases
$:~/work/humbundee/tmp/rel % ls bin | tr " " "\n"
builderl.esh
cmd.boot
humbundee.boot
epmd
erl
escript
run_erl
to_erl
(...)
$:~/work/humbundee/tmp/rel % ls lib | tr " " "\n"
builderl-0.2.7
compiler-6.0.3
deploy-0.0.1
goldrush-0.1.7
humbundee-0.0.1
kernel-4.2
lager-3.0.1
mnesia-4.13.3
sasl-2.7
stdlib-2.8
syntax_tools-1.7
yajler-0.0.1
yolf-0.1.1
$:~/work/humbundee/tmp/rel % ls releases/hbd-0.0.1 | tr " " "\n"
builderl.config
cmd.boot
cmd.rel
cmd.script
humbundee.boot
humbundee.rel
humbundee.script
sys.config.src
The cmd entry point will use the applications deploy-0.0.1 and builderl-0.2.7, the release files cmd.boot and cmd.script, and some OTP applications. The standard humbundee entry point will use all applications apart from builderl and deploy. The builderl.esh escript will use the applications deploy-0.0.1 and builderl-0.2.7. All come from the same embedded Erlang OTP installation.
A small escript that then calls into code in "conventional" modules could be a solution.
As an example, Concuerror is expected to be used as a command line tool and uses an escript as its entry point. It handles command-line arguments via getopt. All the main code is in regular Erlang modules, which are included in the path with simple arguments to the escript.
As far as I understand, NIFs can then be loaded with regular -on_load attributes (Concuerror does not use NIFs).
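A minimal sketch of that pattern (assuming a compiled NIF library mylib.so sitting in the mylib application's priv folder):

-module(mylib_nif).
-export([process/1]).
-on_load(init/0).

%% Loads priv/mylib.so when this module is first loaded. This works
%% from a release or an escript alike, as long as the .so file exists
%% on disk rather than being embedded inside the escript.
init() ->
    SoName = filename:join(code:priv_dir(mylib), "mylib"),
    erlang:load_nif(SoName, 0).

%% Stub; replaced by the NIF implementation on successful load.
process(_Data) ->
    erlang:nif_error(nif_not_loaded).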

Erlang - Is it possible to make the messages that appear on erlang shell appear on linux shell too?

I have created a small test application, and when it runs on a Linux machine I would like the messages that appear on the Erlang shell to appear on the Linux shell as well. I am not sure how it could be done, but is it possible?
I am running the application in detached mode (Erlang shell).
EDIT:
I meant getting the messages through to the Linux shell.
Example:
When I start my application using,
test application <start/stop>
I would get a message on the Linux shell: "Test application has successfully started".
But if I were, say, copying files, I would like the progress of the copy to be displayed on the Linux shell, like:
Copy successful. Number of files copied : 1
Copy successful. Number of files copied : 2
....
So is it possible to pass messages like the above from Erlang to the Linux shell?
I believe you can use a logger for this purpose. For example, lager can redirect logs to several facilities at the same time, e.g. a file and the Erlang console.
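For example, a lager section in sys.config along these lines (a sketch; handler options differ between lager versions) writes the same log messages both to the Erlang console and to a file:

{lager, [
  {handlers, [
    {lager_console_backend, info},
    {lager_file_backend, [{file, "log/console.log"}, {level, info}]}
  ]}
]}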
Another option is to use run_erl, a utility which comes with the Erlang distribution and allows you to "redirect the standard input and standard output streams so that all output can be logged".
Try erl -man run_erl, or generate a release with rebar: the startup scripts it generates use run_erl.

Doing an "offline" erlang OTP release upgrade

I'm working on a project that deals heavily with OTP releases and their upgrade process.
I managed to perform hot upgrades from a release upgrade archive (tar.gz) using release_handler functions unpack_release, install_release and make_permanent. They are all invoked in the very node that is being upgraded.
Now I have to deal with a situation where the Erlang node is down and I have to do an "offline" upgrade. Essentially what I want to achieve is to unpack the release and update certain files like RELEASES and start_erl.data (maybe some more?) so that they are in the same state as they would be after a hot upgrade. The result would be that when the node is started, the newly installed release is booted. An important constraint is that I want to avoid running the old release.
Any ideas how to do this as simple and cleanly as possible?
Start an Erlang node to get a shell. There is no need for a node name; just be sure you are running the same ~/bin/erl that the target node does. Then place your release package in ~/lib/erlang/releases and unpack it as you normally would:
1> application:start(sasl),
1> release_handler:unpack_release("my_release-1.0").
{ok, "1.0"}.
Now quit, shutting down the shell:
2> q().
[Don't try and cheat by using another window here! You must quit.]
Now you need to edit the ~/lib/erlang/releases/RELEASES file and change the status of the new release from unpacked to current:
[{release,"My Release Package","1.0","5.9.1",
[{kernel,"2.15.1","/Users/otpuser/lib/erlang/lib/kernel-2.15.1"},
{stdlib,"1.18.1","/Users/otpuser/lib/erlang/lib/stdlib-1.18.1"},
{sasl,"2.2.1","/Users/otpuser/lib/erlang/lib/sasl-2.2.1"}, ...],
- unpacked}].
+ current}].
Start a shell again and make it permanent:
1> application:start(sasl),
1> release_handler:make_permanent("1.0").
ok
[Note: all that make_permanent/1 does is put the release version ("1.0") in ~/lib/erlang/releases/start_erl.data so you could cheat here.]
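[For this example, start_erl.data would contain just the ERTS version followed by the release version on one line: 5.9.1 1.0]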
Be sure to place your system configuration in ~/lib/erlang/releases/1.0/sys.config.
Now when you run ~/bin/start the release name will be read from start_erl.data and init will use the boot script in ~/lib/erlang/releases/1.0/start.boot.

erlang using 100% of one of the CPUs

I have a situation where my project freezes.
I see 100% usage on one of the CPUs (the rest are at 0%, even though I'm using SMP).
And my admin console says:
=ERROR REPORT==== 11-Feb-2011::00:45:00 ===
** Node 'node@example.com' not responding **
** Removing (timedout) connection **
After that I cannot connect to the node.
If I were programming in C++, I could make a debug build, and when such a situation happened I could attach to the process and see where the code is looping.
But how can I do this in Erlang? How can I get the call stack, or anything else that would help me understand what is really wrong?
Thank you for any help.
You can compile your module with debug information from the command line:
$ erlc +debug_info module.erl
Or in Erlang shell:
1> c(module, debug_info).
ok
If your process is already started, you have the option of attaching to it. Start the debugger in your Erlang shell:
2> debugger:start().
The debugger window will appear. From the menu select Module -> Interpret and, in the Interpret Dialog window, select the module to which you want to attach. Once you have done so, you will see your module in the right-hand pane of the debugger.
Now select Process -> Attach; an Attach Process window should appear.
If Erlang distribution has shut down, you cannot use any remote debugging because you cannot connect to the node. You would have to connect to the pipes created by the VM at startup, if it was configured to create them. See http://www.erlang.org/doc/man/run_erl.html for how to do that.
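If you do manage to get a shell on the node (e.g. through the run_erl pipes with to_erl), a snippet like this can point at the spinning process; current_stacktrace shows what it is executing (a sketch; it assumes the node can still evaluate code):

%% Top five processes by reductions (roughly: total work done),
%% together with their current stack traces.
1> Info = [{Reds, P} || P <- erlang:processes(),
           {reductions, Reds} <- [erlang:process_info(P, reductions)]],
   Top = lists:sublist(lists:reverse(lists:sort(Info)), 5),
   [{P, erlang:process_info(P, current_stacktrace)} || {_, P} <- Top].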
You also might want to check out this thread for a possible explanation of why your system is behaving the way it is: http://www.erlang.org/cgi-bin/ezmlm-cgi?4:mss:55859:201101:jconogbffcaogeijbdkl
