How can I start Mnesia from a supervisor - Erlang

I have been trying to find information about this everywhere and failed. Is it possible to start Mnesia as a worker under a supervisor? I have tried it myself the same way I start other workers, but I always get an error.

Mnesia is an application and comes with its own supervisor. You should only need to run application:start(mnesia). But if you really want to add your own supervision, you should be able to start it as a worker. However, you need to start it from your supervisor in much the same way that it is specified by the mod key in the mnesia.app file (but note that the details there have changed between OTP 19 and the upcoming OTP 20). You'll also need to ensure that its settings are loaded before it starts. You could call application:load(mnesia) yourself, or you could specify Mnesia as an included application of your own app - see http://erlang.org/doc/design_principles/included_applications.html for more info (you can probably ignore the stuff about start phases).
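For illustration, here is a minimal sketch of such a supervisor, assuming OTP 19-style Mnesia internals; the child's start MFA mirrors the {mod, {Mod, Args}} entry in mnesia.app, so copy the actual value from the mnesia.app shipped with your OTP release rather than trusting the one used here (it changed in OTP 20), and treat my_sup as a placeholder name:

-module(my_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% Make sure Mnesia's application environment (dir, schema settings,
    %% etc.) is loaded before its processes start.
    ok = ensure_loaded(mnesia),
    %% The start MFA mirrors {mod, {Mod, Args}} in mnesia.app; it is
    %% assumed here to be {mnesia_sup, []} - verify it for your release.
    MnesiaChild = {mnesia_sup,
                   {mnesia_sup, start, [normal, []]},
                   permanent, infinity, supervisor, [mnesia_sup]},
    {ok, {{one_for_one, 5, 10}, [MnesiaChild]}}.

ensure_loaded(App) ->
    case application:load(App) of
        ok -> ok;
        {error, {already_loaded, App}} -> ok
    end.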

Related

Why is there no "start_monitor" for gen_server?

Is there any particular reason why there is no start_monitor as an equivalent of spawn_monitor?
Is this simply not needed since gen_servers are usually started by supervisors?
I would like to get a notification when my temporary workers crash. What is the recommended way to do this in an OTP application?
My first idea was to have a gen_server that would monitor workers started by a dynamic supervisor.
More Info:
As far as I know, supervisors provide controlled start, controlled shutdown and controlled restart in case of a crash (to get back to a well-defined state).
In addition to that, I would like to run a function when a worker process crashes.
For example, I have C nodes which connect to the Erlang node. Since a C node can't monitor processes (AFAIK) and is also limited in other ways in how it can interact with Erlang, I have a "proxy" process for each connected C node in order to keep the C node as simple as possible.
The C nodes make RPC calls to Erlang using ei_rpc_to and process messages from the connected Erlang node. Messages are either results of RPC calls or "out-of-band" data/info for the C node.
The Erlang "proxy" process monitors its C node using monitor_node to detect if it vanishes, but I also need a mechanism for informing the C node that its proxy process crashed. One way of detecting this would be when it makes its next RPC call, since that would obviously fail, but since I already have the "out-of-band" message processing in place, I wanted to use that.
Another use case is clients that make REST requests to the Erlang cluster. These in turn start workers that perform some tasks (which may take a long time). After a while the external client may want to get the status of a task. The worker can, for example, update the status in a Mnesia table, but if it crashes, who will update the table with the failure status?
I know there are many ways of achieving this, but I would like to know what is the Erlang way of doing this.
2nd Edit:
After reading the docs I saw that in a gen_server, terminate will get called (if it is defined with a matching clause). Would this be a viable alternative to a separate monitoring process? It looks a bit messy, since terminate does not get called when receiving 'EXIT' from other processes, so I would also need to trap exits.
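For what it's worth, here is a minimal sketch of the "first idea" above: a gen_server that asks a simple_one_for_one supervisor to start a worker, monitors it, and reacts when it goes down. All of the names (worker_watcher, worker_sup, on_worker_down) are hypothetical, and the reaction is left as a stub:

-module(worker_watcher).
-behaviour(gen_server).
-export([start_link/0, start_worker/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

start_worker(Args) ->
    gen_server:call(?MODULE, {start_worker, Args}).

init([]) ->
    {ok, #{}}.   %% state maps MonitorRef => {Pid, Args}

handle_call({start_worker, Args}, _From, Refs) ->
    %% worker_sup is assumed to be a simple_one_for_one supervisor.
    {ok, Pid} = supervisor:start_child(worker_sup, [Args]),
    Ref = erlang:monitor(process, Pid),
    {reply, {ok, Pid}, Refs#{Ref => {Pid, Args}}}.

handle_cast(_Msg, Refs) ->
    {noreply, Refs}.

handle_info({'DOWN', Ref, process, Pid, Reason}, Refs) ->
    case maps:take(Ref, Refs) of
        {{_Pid, Args}, Rest} ->
            %% e.g. notify the C node via its proxy process, or update a
            %% status row in a Mnesia table
            on_worker_down(Pid, Args, Reason),
            {noreply, Rest};
        error ->
            {noreply, Refs}
    end;
handle_info(_Other, Refs) ->
    {noreply, Refs}.

on_worker_down(_Pid, _Args, _Reason) ->
    ok.  %% placeholder for the application-specific reaction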

Erlang: Who supervises the supervisor?

In all the Erlang supervisor examples I have seen so far, there is usually a "master" supervisor that supervises the whole tree (or at least is the root node of the supervision tree). What if the "master" supervisor breaks? How should the "master" supervisor itself be supervised? Is there any typical pattern?
The top supervisor is started in your application's start/2 callback using start_link, which means that it is linked to the application process (see the sketch below). If the application process receives an exit signal from the top supervisor dying, it does one of two things:
If the application is started as a permanent application, the entire node is terminated (and possibly restarted using HEART).
If the application is started as temporary, the application stops running and no restart attempts are made.
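As a sketch of that callback, with my_app and my_sup as placeholder names:

-module(my_app).
-behaviour(application).
-export([start/2, stop/1]).

%% Called by the application controller; the returned pid is the top
%% supervisor, which ends up linked to the application master.
start(_StartType, _StartArgs) ->
    my_sup:start_link().

stop(_State) ->
    ok.

Calling application:start(my_app, permanent) gives the first behaviour described above; application:start(my_app) defaults to temporary and gives the second.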
Typically a supervisor is set up to only supervise other processes, which means there is no user-written code executed by the supervisor, so it is very unlikely to crash.
Of course, this cannot be enforced, so the typical pattern is to not have any application-specific logic in a supervisor: it should only supervise and do nothing else.
Good question. I have to concur that all of the examples and tutorials mostly ignore the issue, even if occasionally someone mentions it (without providing an example solution):
If you want reliability, use at least two computers, and then make them supervise each other. How to actually implement that with OTP, however, appears (given the current state of documentation and tutorials) to be somewhere between well hidden and secret.
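For completeness: the closest thing OTP itself offers is the "distributed applications" support in the kernel application, where the same application is declared on two or more nodes and another node takes it over if the node currently running it goes down. A minimal, illustrative sys.config (node and application names are made up, and the application still has to be loaded and started on all listed nodes):

%% sys.config for node a@host1; node b@host2 gets an equivalent file.
[{kernel,
  [%% my_app runs on a@host1 when it is up; if a@host1 goes down and does
   %% not come back within 5000 ms, the application is restarted on b@host2.
   {distributed, [{my_app, 5000, ['a@host1', 'b@host2']}]},
   {sync_nodes_optional, ['a@host1', 'b@host2']},
   {sync_nodes_timeout, 10000}]}].

The Distributed Applications chapter of the OTP design principles covers the details (failover, takeover and the corresponding start/2 start types).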

Auto update a service

I have written several services in Delphi now, but I want to add the facility of auto-updating the service, either from a LAN UNC path or from an HTTP server. I have been pondering this and I am interested to hear people's ideas. I can create a thread that will check for the update periodically, but how do I go about stopping the service, then uninstalling and installing automatically? My initial thought was to write a console app to do this and start it using CreateProcess, then let the service stop and the console app do the work, starting the new version of the service before it exits. Is this a good strategy or should I consider something else? Thanks in advance.
I do as you suggest. A thread checks occasionally for an update. If it is present, it downloads it and puts it into an appropriate place. It then verifies that it is wholesome (don't want it to be broken!). Finally, the thread then launches another app with parameters to tell it what to do, specifically, the name of the service, the location of the file to replace, and the file to replace it with. Then the service just waits.
When the updater app starts, it pauses a moment to make sure that the service is all stable, and then it uses the service control API to stop the service. It then monitors it until it is gone. Finally, it pauses a little to ensure that Windows has really finished with the file. Then it starts the process of renaming the old file to move it out of the way (if still in use, it retries a few times), and then copying the new file into place. And finally, it starts the service up again. Then the updater quits.
This has worked quite reliably for my services, and also standalone apps too (with different parameters for the updater app to know which mode). And if you are careful, you can update the updater using the exact same system, which is nice to watch.
I would have the service be a shell that only updates another executable or DLL file where the real code lives.
Have some communication method between the shell and the child process to force a shutdown, and then have the shell perform the upgrade and relaunch the child.
As a side note, this makes debugging the service much easier as well, since you'll be able to run the child process directly without having to worry about the extra effort required to debug Windows services.
Your idea seems very good to me; however, take this into consideration as well:
- add a module (the main core) to the service that will be unloaded and will load the updated module (a *.dll file) when an update is available; during this time the service should put its "tasks" in a queue or something similar.
Additionally, you can use plugins and/or scripting engines such as Pascal Script or DWScript.
Recent versions of Windows (I think since Windows 10) do not allow a service to start other programs, so you will need another program to run the update. It could be another service.
Windows Services cannot start additional applications because they are not running in the context of any particular user. Unlike regular Windows applications, services are now run in an isolated session and are prohibited from interacting with a user or the desktop.

How do you go about setting up monitoring for a non-web frontend process?

I have a worker process running on a server with no web frontend. What is the best way to set up monitoring for it? It recently was down for 3 days, and I did not know about it.
There are several ways to do this. One simple option is to run a cron job that checks timestamps on the process's logs (and, of course, make sure the process logs something routinely).
Roll your own reincarnation job. Have your background process get its PID and write it to a specific pre-determined location when it starts. Have another process (or perhaps cron) read that PID, then check the symbolic link /proc/{pid}/exe. If that link does not exist or does not point at your process, it needs to be restarted.
With PHP, you can use posix_getpid() to obtain the PID, then fopen()/fwrite() to write it to a file. Use readlink() to read the symbolic link (take care to check for FALSE as a return value).
Here's a simple bash-ified example of how the symlink works:
tpost@tpost-desktop:~$ echo $$
13737
tpost@tpost-desktop:~$ readlink /proc/13737/exe
/bin/bash
So, once you know the PID that the process started with, you can check to see if it's still alive and that Linux has not recycled the PID (you only see PID recycling on systems that have been running for a very long time, but it does happen).
This is a very cheap operation, so feel free to run it every minute, every 30 seconds, or at even shorter intervals.

mochiweb and gen_server

[This will only make sense if you've seen Kevin Smith's 'Erlang in Practice' screencasts]
I'm an Erlang noob trying to build a simple Erlang/OTP system with an embedded webserver [mochiweb].
I've walked through the EIP screencasts, and I've toyed with simple mochiweb examples created using the new_mochiweb.erl script.
I'm trying to figure out how the webserver should relate to the gen_server modules. In the EIP examples [Ch7], the author creates a web_server.erl gen_server process and links the mochiweb_http process to it. However in a mochiweb project, the mochiweb_http process seems to be 'standalone'; it doesn't seem to be embedded in a separate gen_server process.
My question is: should one of these patterns be preferred over the other? If so, why? Or doesn't it matter?
Thanks in advance.
You link processes to the supervisor hierarchy of your application for two reasons: 1) to be able to restart your worker processes if they crash, and 2) to be able to kill all your processes when you stop the application.
As the previous answer says, 1) is not the case for HTTP request-handling processes. However, 2) is valid: if you leave your processes alone, you can't guarantee that all of them will be cleared from the VM after stopping your application (think of processes stuck in endless loops, waiting in receives, etc.).
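As an illustration of embedding the listener in a supervision tree, here is a sketch assuming mochiweb's option-list API (mochiweb_http:start/1 with name, port and loop options); the module names, port and loop fun are placeholders, and depending on your mochiweb version you may want a start_link variant so the listener is actually linked to the supervisor:

-module(my_web_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% my_web:loop/1 is a hypothetical request handler.
    Loop = fun(Req) -> my_web:loop(Req) end,
    WebConfig = [{name, my_web}, {port, 8080}, {loop, Loop}],
    Web = {my_web,
           {mochiweb_http, start, [WebConfig]},
           permanent, 5000, worker, [mochiweb_http]},
    {ok, {{one_for_one, 5, 10}, [Web]}}.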
The reason to embed a process in a supervision tree is so that you can restart it if it fails.
A process that handles an HTTP request is responding to an event generated externally, in a browser. It is not possible to restart it (that is the prerogative of the person running the browser), therefore it is not necessary to run it under OTP: you can just spawn it without supervision.
