What happens if I erase the process dictionary of a gen_server module? - erlang

I was playing with the process dictionary inside a gen_server module; I called the get() function and got something like this:
[{'$ancestors',[main_server,<0.30.0>]},
 {'$initial_call',{child_server,init,1}}]
What would go wrong if I erased the process dictionary? I erased it, and everything worked fine; even after calling a function that generates an exception in child_server, the main_server could still get the exit signal.

$ancestors is used only in the initialization stage, to get the parent's PID, which is used to catch the EXIT message coming from the parent so that the terminate callback can run. Erasing this key once the server is up and running makes no difference.
$initial_call, on the other hand, is used by proc_lib to dump the MFA info in the crash report.
A quick grep in the OTP source tree can certainly help.

Some debug functions may also use the process dictionary; erlang:process_info/2, for example, can retrieve it.
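For example, a minimal sketch of inspecting those keys from the shell (assuming Pid is a process spawned through proc_lib, as gen_server:start_link does):
%% Fetch the whole process dictionary of another process.
{dictionary, Dict} = erlang:process_info(Pid, dictionary),
proplists:get_value('$ancestors', Dict),      %% e.g. [main_server,<0.30.0>]
proplists:get_value('$initial_call', Dict).   %% e.g. {child_server,init,1}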

Related

Mainframe CEE3DD abend - CEE3501S - Module not found in COBOL Dynamic Call

I have encountered an issue recently while processing a CICS transaction. My CICS transaction calls a chain of dynamically linked COBOL modules. The transaction runs fine the first time after the PGM-A load module is new-copied into the region. When I try to process the transaction a second time, I keep getting a CEE3DD abend saying the module was not found for PGM-B, which is called from PGM-A. If I do a NEWCOPY for PGM-A in CICS, the transaction runs fine again.
Something is wrong with the CICS setup or memory, but I am not able to figure it out. PGM-A works fine in batch processing. PGM-B has no issues when it is called from any program other than PGM-A.
Can someone share some thoughts on what may be wrong here?
To invoke your program via CICS, it must be compiled with the NODYNAM option.
It admittedly seems counter-intuitive, but using the DYNAM option will cause CICS stubs to be loaded, instead of your intended programs, and result in the CEE3501S condition.
So, compile your programs with the NODYNAM option to avoid this error condition.
See the following links for additional info:
https://www.ibm.com/support/knowledgecenter/en/SSGMCP_5.3.0/com.ibm.cics.ts.applicationprogramming.doc/topics/dfhp3_cobol_subprog_rules.html
http://www-01.ibm.com/support/docview.wss?uid=swg21054079
Does PGM-A use "CALL VARIABLE" to invoke PGM-B? If so, check the contents of VARIABLE on the second run (the contents of that variable will probably be reported in the error message). The contents of the variable may be overwritten by a bug in PGM-A. That might explain why the program always fails after the (seemingly) successful run and after a NEWCOPY.
Converting this from dynamic to static calls worked, but the question remains why it was not working with dynamic linking.

Can I have a custom function stop a Lua script in Delphi without exiting the application?

I have an application that periodically runs a Lua script. Within the script, on occasion, I call a custom registered Lua function to check some parameters and decide whether the Lua script should continue or exit. Ideally this logic should not live in the script itself; I can think of ways to work around it inside the Lua script, but I'm wondering whether it is possible to stop the execution of a Lua script without ending the application.
I have a custom function written in Delphi and exposed to Lua scripts using Lua 5.1. The Lua script looks something like the one shown below, and it is started using luaL_loadbuffer.
io.write("Script starting\n");
--Custom Function
ExitIfFound();
io.write("Script continuing\n");
My custom function looks something like this; below is one of my attempts, where I tried to use lua_error to stop the script:
function ExitIfFound(LuaState: TLuaState): Integer;
var
  s: AnsiString;
begin
  s := 'ExitIfFound ending script, next Lua script line not called';
  lua_pushstring(LuaState, PAnsiString(s));
  lua_error(LuaState);
end;
When my custom function is called, I'm unsure how to exit the Lua script without any further evaluation. I have seen posts referring to Lua and the use of setjmp and longjmp in C, but I'm curious how these might translate to Delphi.
In the example above, when I use lua_error, the entire program crashes, with Windows showing its typical [luarun.exe has stopped working] ...
With all of this, I am still pretty new to integrating Lua with Delphi and hoping that I can find some cleaner options to explore.
There is no clean way to entirely abort a Lua script. The lua_error function is the correct way to signal an error. It is the caller's responsibility to catch the error and propagate it to the next caller.
If you cannot rely on the caller to cooperate, then you can try to exert more control by installing debug hooks. Then the host program will be consulted before continuing to run the script. However, the script can still avoid exiting by using pcall to catch any errors.
The crash in your program is probably not simply from setting an error. Rather, it's likely from using the wrong calling convention on your ExitIfFound function. It needs to be cdecl (that is, declared as function ExitIfFound(LuaState: TLuaState): Integer; cdecl;), but Delphi's default, if you don't specify anything else, is register. Using the wrong calling convention will give you unpredictable parameter values and can lead to a corrupted stack. If you type-cast the function or used the @ operator when you called lua_register, you might have hidden the calling-convention mismatch from the compiler's type checker, which would otherwise have alerted you to the problem at compile time.
When compiled as C++, lua_error will use an exception instead of longjmp, but either way, the caller always catches the error. The distinction matters when your Delphi code uses compiler-managed types like string, or exception-sensitive constructs like try-finally blocks. In C mode, lua_error calls longjmp to jump directly to the point set by a previous call to setjmp, and that jump skips over any exception handlers, like the ones the Delphi compiler sets up to ensure a finally block runs and a string gets cleaned up.
A further headache is that since the compiler cleans up the string while exiting the function, the pointer you put on the Lua stack might not be valid by the time it's used; that depends on whether lua_pushstring makes a copy of its argument.

Erlang: Is it ok to write application without a supervisor?

I don't need a supervisor for a specific application I'm developing. Is it OK not to use one?
The doc says about start/2 that it
"should return {ok,Pid} or {ok,Pid,State} where Pid is the pid of the
top supervisor"
so I'm not sure whether it is OK not to start a supervisor and to return some invalid pid (I tried, and nothing bad happened).
Returning {ok, self()} or something similar works fine until you start doing release upgrades. At that point, you'll need a supervisor with an empty child list. (The application and supervisor behaviours don't have colliding callback functions, so you can put both in the same module, as sketched below.)
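A minimal sketch of that combination (the module name is hypothetical):
%% One module serving as both the application callback and a
%% supervisor with an empty child list.
-module(myapp_app).
-behaviour(application).
-behaviour(supervisor).

-export([start/2, stop/1]).   %% application callbacks
-export([init/1]).            %% supervisor callback

start(_Type, _Args) ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

stop(_State) ->
    ok.

init([]) ->
    {ok, {{one_for_one, 1, 5}, []}}.   %% no children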
Just to make sure: you are doing some kind of initialisation in your application module's start callback function, right? If not, you can just remove the mod directive from the .app file and the callback won't even be called, and thus there will be no supervisor, real or fake.

Is it bad to send a message to self() in init?

In this example, the author avoids a deadlock situation by doing:
self() ! {start_worker_supervisor, Sup, MFA}
in his gen_server's init function. I did something similar in one of my projects and was told this method was frowned upon, and that it was better to cause an immediate timeout instead. What is the accepted pattern?
Update for Erlang 19+
Consider using the new gen_statem behaviour. This behaviour supports generating events internal to the FSM:
The state function can insert events using the action() next_event and such an event is inserted as the next to present to the state function. That is, as if it is the oldest incoming event. A dedicated event_type() internal can be used for such events making them impossible to mistake for external events.
Inserting an event replaces the trick of calling your own state handling functions that you often would have to resort to in, for example, gen_fsm to force processing an inserted event before others.
Using the action functionality in that module, you can ensure your event is generated in init and always handled before any external events, specifically by creating a next_event action in your init function.
Example:
...
callback_mode() -> state_functions.

init(_Args) ->
    {ok, my_state, #data{}, [{next_event, internal, do_the_thing}]}.

my_state(internal, do_the_thing, Data) ->
    the_thing(),
    {keep_state, Data};
my_state({call, From}, Call, Data) ->
    ...
...
Old answer
When designing a gen_server you generally have the choice to perform actions in three different states:
When starting up, in init/1
When running, in any handle_* function
When stopping, in terminate/2
A good rule of thumb is to execute things in the handling functions when acting upon an event (call, cast, message, etc.). The stuff that gets executed in init should not wait for events; that's what the handle callbacks are for.
So, in this particular case, a kind of "fake" event is generated. It seems the gen_server always wants to initiate the starting of the supervisor, so why not just do it directly in init/1? Is there really a requirement to be able to handle another message in between (the effect of doing it in handle_info/2 instead)? That window (the time between the start of the gen_server and the sending of the message to self()) is so incredibly small that it is highly unlikely to ever matter.
As for the deadlock, I would really advise against calling your own supervisor in your init function; that's just bad practice. A good design pattern for starting worker processes is one top-level supervisor, with a manager and a worker supervisor beneath. The manager starts workers by calling the worker supervisor:
[top_sup]
| \
| \
| \
man [work_sup]
/ | \
/ | \
/ | \
w1 ... wN
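In that layout the manager never calls its own supervisor; it asks the worker supervisor to do the starting. A sketch, assuming work_sup is a simple_one_for_one supervisor registered under that name:
start_worker(Args) ->
    %% The child spec lives in work_sup's init/1; Args is appended to
    %% the child's start-function arguments.
    supervisor:start_child(work_sup, [Args]).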
Just to complement what has already been said about splitting a server's initialisation into two parts, the first in the init/1 function and the second in either handle_cast/2 or handle_info/2: there is really only one reason to do this, and that is if the initialisation is expected to take a long time. Splitting it up allows gen_server:start_link to return faster, which can be important for servers started by supervisors, as supervisors "hang" while starting their children and one slow-starting child can delay the whole supervisor startup.
In this case I don't think it is bad style to split the server initialisation.
It is important to be careful with errors: an error in init/1 will cause the supervisor to terminate, while an error in the second part will cause the supervisor to try to restart that child.
I personally think it is better style for the server to send a message to itself, either with an explicit ! or with gen_server:cast; with a good descriptive message, for example init_phase_2, it is easier to see what is going on than with a more anonymous timeout, especially if timeouts are used elsewhere as well.
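A sketch of that style (the state record and the slow second phase are hypothetical):
init(Args) ->
    self() ! init_phase_2,
    {ok, #state{args = Args}}.

handle_info(init_phase_2, State) ->
    %% The slow part runs here, after gen_server:start_link has
    %% already returned to the supervisor.
    {noreply, do_slow_initialisation(State)}.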
Calling your own supervisor sure does seem like a bad idea, but I do something similar all the time.
init(...) ->
    gen_server:cast(self(), startup),
    {ok, ...}.

handle_cast(startup, State) ->
    slow_initialisation_task_reading_from_disk_fetching_data_from_network_etc(),
    {noreply, State}.
I think this is clearer than using timeout and handle_info, it's pretty much guaranteed that no message can get ahead of the startup message (no one else has our pid until after we've sent that message), and it doesn't get in the way if I need to use timeouts for something else.
This may be a very efficient and simple solution, but I think it is not good Erlang style. I am using timer:apply_after, which is better and does not give the impression of interacting with an external module/gen_*.
I think the best way would be to use state machines (gen_fsm). Most of our gen_servers are really state machines; however, because of the initial work effort to set up gen_fsm, we tend to end up with gen_server.
To conclude, I would use timer:apply_after to make the code clear and efficient, or gen_fsm to be pure Erlang style (even faster).
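For reference, a sketch of the timer:apply_after variant (zero delay; the init_phase_2 atom and state record are hypothetical):
init(Args) ->
    {ok, _TRef} = timer:apply_after(0, gen_server, cast,
                                    [self(), init_phase_2]),
    {ok, #state{args = Args}}.

handle_cast(init_phase_2, State) ->
    {noreply, do_slow_initialisation(State)}.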
I have just read the code snippets, but the example itself seems somehow broken: I do not understand this construct of a gen_server manipulating its supervisor. Even if it is the manager of some pool of future children, that is all the more reason to do it explicitly, without counting on mailbox magic. Debugging this in a bigger system would also be hell.
Frankly, I don't see the point in splitting the initialization. Doing heavy lifting in init does hang the supervisor, but using timeout/handle_info, sending a message to self(), or adding an init check to every handler (another possibility, though not very convenient) will effectively hang the calling processes. So why do I need a "working" supervisor with a "not quite working" gen_server? A clean implementation should probably include a not_ready reply for any message received during initialization (why not spawn the full initialization from init and send a message back to self() when it completes, clearing the not_ready status?), but then the not_ready reply has to be properly processed by the caller, and that adds a lot of complexity. Just suspending the reply is not a good idea either.

erlang io:format, and a hanging web application

While I'm learning a new language, I typically put in lots of silly printlns to see what values are where at specific times. That usually suffices because most languages have a toString equivalent available. Trying the same approach with Erlang, my webapp just "hangs" when it attempts to print a value that is not a list; this happens when the variable being printed is a tuple instead of a list. There's no error, no exception, nothing; it just doesn't respond. For now I'm muddling through by being careful about what I write out, and as I learn more, things are getting better. But I wonder: is there a more reliable way to [blindly] print a value to stdout?
Thanks,
--tim
In Erlang, as in other languages, you can print your variables, no matter whether they are a list, a tuple, or anything else.
My feeling is that, for printing, you're doing something like this (just a guess):
io:format("The value is: ~p.", A).
This is wrong, because you're supposed to pass a list of arguments:
io:format("The value is: ~p.", [A]).
Where A can be anything.
I usually find it convenient to use erlang:display/1 to print variables.
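For example:
erlang:display({request, {error, timeout}, [1,2,3]}).
%% Writes the term straight to standard output, bypassing the normal
%% Erlang io server, which helps when ordinary io calls seem to hang.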
Also, tracing functions is usually a better way to debug an application, rather than using printouts. Please see:
http://aloiroberto.wordpress.com/2009/02/23/tracing-erlang-functions/
When developing webapps I use the error_logger module.
I usually define some macros like this:
-ifdef(debug).
-define(idbg(FmtStr, Err),
        error_logger:info_msg("~p (line ~p): " FmtStr "~n",
                              [?MODULE, ?LINE | Err])).
-define(rdbg(Term), error_logger:info_report(Term)).
-else.
-define(idbg(_FmtStr, _Err), void).
-define(rdbg(_Term), void).
-endif.
You call the macros with something like:
code...
?rdbg(ErlangTerm),
other code...
During development you compile your modules with:
erlc -Ddebug *.erl
and so you get info messages in your Erlang console.
Also make sure there is no process terminating without a link, which could then cause another process to wait on something and never time out; hence the strange hanging.
