Spawn remote process w/o common file system - erlang

(nodeA@foo.hyd.com)8> spawn('nodeA@bar.del.com', tut, test, [hello, 5]).
I want to spawn a process on bar.del.com, which has no file system access to foo.hyd.com (from where I am spawning the process), running the function "test" of module "tut".
Is there a way to do so without providing nodeA@bar.del.com with the compiled "tut" module file?

You can use the following function to load a module on a remote node without providing the file itself:
load_module(Node, Module) ->
    {_Module, Bin, Filename} = code:get_object_code(Module),
    rpc:call(Node, code, load_binary, [Module, Filename, Bin]).
As noted in the code:load_binary/3 documentation, the Filename argument is only used to track the module's path; the file it points to is not read by the local code server.
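For completeness, a hedged usage sketch (the node and module names are taken from the question; it assumes the two nodes are already connected and share a cookie):
%% rpc:call/4 returns the result of code:load_binary/3, i.e. {module, tut} on success.
{module, tut} = load_module('nodeA@bar.del.com', tut),
Pid = spawn('nodeA@bar.del.com', tut, test, [hello, 5]).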

I'm interpreting your question as a desire to not copy the *.beams from your filesystem to the remote file system.
If you are just testing things you can use the nl(Mod) call in the erl shell to load the module on all (currently) known nodes. That is, those that show up in nodes().
This will send the code over and load it from the in-memory copy; it will not store it in the remote filesystem.
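A quick shell sketch (node and module names are from the question; the return values shown are what these calls normally print):
(nodeA@foo.hyd.com)1> net_adm:ping('nodeA@bar.del.com').
pong
(nodeA@foo.hyd.com)2> nl(tut).
abcast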
You can also start the remote node using the slave module. A slave accesses its master's filesystem and code server, so ordinary auto-loading will make sure the module exists on the slave when you call your tut:test/2 function.
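A hedged sketch of the slave approach (the host name is from the question, the slave's node name is illustrative; it assumes passwordless ssh/rsh access to the host plus the same Erlang version and cookie on both sides):
%% The slave reuses this node's code server, so tut is auto-loaded on first use.
{ok, Slave} = slave:start('bar.del.com', tut_slave),
Pid = spawn(Slave, tut, test, [hello, 5]).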

You can send the local code to the remote node:
> {Mod, Bin, File} = code:get_object_code(Module).
> rpc:call(RemoteNode, code, load_binary, [Mod, File, Bin]).


Does Erlang's ssh_sftp library provide for a way to listen for directory changes?

I need to watch or listen to a folder on an SFTP server for any changes. At some given time in the future (I don't know when), the folder will be updated with a file. Instead of polling every minute, how would I set up a listener or watcher on that folder so I know when it has that file? Does Erlang's ssh_sftp module provide a function for this?
Neither the SFTP nor the FTP protocol has any mechanism to notify a client about changes in a remote folder. The only way to detect changes is to periodically enumerate the remote directory tree and find the differences.
Ref: https://winscp.net/eng/docs/library_example_watch_for_changes
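If you do end up polling from Erlang, a minimal sketch using the stock ssh_sftp client might look like this (the host, credentials, directory and interval are all illustrative, and error handling is omitted):
-module(sftp_poll).
-export([watch/5]).

%% Periodically list Dir on the SFTP server and report file names that were
%% not present in the previous round.
watch(Host, User, Password, Dir, IntervalMs) ->
    {ok, _} = application:ensure_all_started(ssh),
    {ok, Chan, _Conn} = ssh_sftp:start_channel(Host,
                            [{user, User}, {password, Password},
                             {silently_accept_hosts, true}]),
    loop(Chan, Dir, IntervalMs, ordsets:new()).

loop(Chan, Dir, IntervalMs, Seen) ->
    {ok, Names} = ssh_sftp:list_dir(Chan, Dir),
    Current = ordsets:from_list(Names),
    [io:format("new file: ~s~n", [N]) || N <- ordsets:subtract(Current, Seen)],
    timer:sleep(IntervalMs),
    loop(Chan, Dir, IntervalMs, Current).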
I don't know whether you still need it or not, but I have already implemented what you need. I won't be able to share the code due to legal agreements; however, I can hint at the way to implement it.
Requirement: run a function whenever a file lands in a directory, instead of polling the directory at a fixed interval, to make processing efficient.
Step 1: Start the ssh daemon process with sftp as a subsystem
{ok, Pid} = ssh:daemon(Port, [{user_passwords, [{User, Pass}]},
                              {system_dir, SystemDir},
                              {user_dir, UserDir},
                              {key_cb, cac_auth},
                              {shell, {cacsftpd_server, display_info, []}},
                              {subsystems, [ssh_sftpd:subsystem_spec(
                                  [{cwd, Home},
                                   {file_handler, sftpd_file_handler}])]}]).
Step 2: In sftpd_file_handler you can add a trigger that calls your desired handler once the file has been completely transferred.
While not in Erlang, but rather Elixir, take a look at: https://github.com/Codenaut/exsftpd/blob/master/lib/sftpd_file_handler.ex
The essential bit is the write function:
defmodule Exsftpd.SftpFileHandler do
  ...
  def write(io_device, data, state) do
    result = :file.write(io_device, data)
    my_custom_function(state)
    {result, state}
  end
  ...
end
Here you can call whatever function you desire after (or instead of) writing the content to the filesystem.
Initialise the server like so (see https://github.com/Codenaut/exsftpd/blob/master/lib/server.ex for a more elaborate example):
:ssh.daemon(2222,
  system_dir: '/tmp/ssh',
  subsystems: [
    Exsftpd.SftpdChannel.subsystem_spec(
      file_handler: {Exsftpd.SftpFileHandler, []}
    )
  ],
  user_dir: '/tmp/ssh/users'
)

How do I modify the code search path for a remote Erlang node?

I'm connected to an Erlang node with -remsh. How do I modify the code path, in order to load a library that wasn't packaged into my release?
All the functions you need to manipulate code loading and the code path are in the code module (see the OTP documentation for the code module).
You could add the system paths to the list by doing the following:
[code:add_pathz(P) || P <- filelib:wildcard("/usr/lib/erlang/lib/*/ebin")].
After compiling some test code and connecting to a running node I was able to make it work with this:
(app@127.0.0.1)1> code:add_pathz("/path/to/my/compiled/beam").
(app@127.0.0.1)2> tester:hi().
hi!
ok
(app@127.0.0.1)3>

Unable to spawn process on another node in Erlang

When I try to create a new process on separate node using
Pid = spawn(mynode, mymodule, myfunction, [self()])
(myfunction/1 is exported), I get this error:
Error in process <0.10.0> on node 'no@de1' with exit value:
{undef,[{mymodule, myfunction, [<33.64.0>], []}]}
I tried setting the -compile(export_all) flag, but judging by the additional brackets in the error log, that does not seem to be the problem.
I don't know what causes the error and I have no clue what to do.
The error you get is saying “There is no module ‘mymodule’ and/or no function ‘mymodule:myfunction/1’”.
This means mymodule is not loaded in the code server of your separate node.
To load mymodule's code there, you need something like the code:get_object_code/1 + rpc:call/4 snippet or the load_module/2 function shown in the first question above.
Did you check that the module mymodule is in the code path of no@de1?
When you spawn a process using spawn(mynode, mymodule, myfunction, [self()]), the VM needs to load the code before executing it.
If you use a higher-order function (a fun), as in spawn(Node, Fun), then it is no longer necessary to have the code in the path (but beware that any function called inside the fun must still be resolvable on the remote node).
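A hedged illustration of the fun-based variant (node name from the question; the fun only uses built-in functions, so nothing has to be loaded on the remote node beforehand):
Parent = self(),
Pid = spawn('no@de1', fun() ->
          %% Runs on no@de1; only sends a message back, no module code needed.
          Parent ! {self(), node()}
      end),
receive
    {Pid, RemoteNode} -> io:format("spawned ~p on ~p~n", [Pid, RemoteNode])
end.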
Go to no@de1 and run m(mymodule). It should clarify whether the module is loadable and which functions it exports.
Also: check that the other node is reachable. Do a net_adm:ping/1 on it.
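For example (the local node's prompt is illustrative; pang instead of pong means the node is unreachable):
(no@de1)1> m(mymodule).
(mynode@host)1> net_adm:ping('no@de1').
pong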

result from /proc/self/exe is unfriendly in a clearcase view

If I execute a binary in a clearcase view, and look at /proc/self/exe for that on Linux, I see something like the following:
$ cd /proc/19220
$ ls -l exe
lrwxrwxrwx 1 peeterj pdxdb2 0 2012-11-30 13:04 exe -> /home/peeterj/views/peeterj_clang-7.vws/.s/00024/8000028250b8f1d1llvm-config
The clang llvm-config program, not unreasonably, uses this output to try to figure out the absolute fully qualified path that it is located in (I assume in case argv[0] isn't fully qualified).
Is there a way to find the location within the view that this corresponds to? For example, in this case, the llvm-config exe is actually in:
/vbs/bldsupp/linuxamd64/clang/debug/bin
(I'm wondering if it's feasible to modify clang's GetExecutablePath() function to figure this out.)
No trivial solution here (for an old version of ClearCase though):
The technote "PK27447: WITHIN A CLEARCASE DYNAMIC VIEW, THE READLINK() CALL ON LINUX RETURNS THE WRONG PATH FOR THE EXECUTABLE'S /PROC/SELF/EXE VALUE" suggests:
Local fix
Use getcwd(), get_current_dir_name(), getwd() in applications slated for a VOB/View context
Create an interposer library to intercept the readlink() call, and modify to use any of the above calls to return the proper data
The cause:
/proc/self/exe returns the improper path while getcwd succeeds.
Unfortunately, for /proc/self/exe to return the proper value [from within a VOB/View context] would require a change within the Linux kernel to allow MVFS to "override" the present setting.
IBM LTC has been working on having the Linux community adopt this change so that we can then incorporate the new features within MVFS.
Related: Bug Sun 6189256.

Default process flags

Is there a way to instruct the Erlang VM to apply a set of process flags to every new process that is spawned in the system?
For example in testing environment I would like every process to have save_calls flag set.
One way of doing this is to combine the Erlang tracing functionality with a .erlang file.
Specifically, you could either use the low-level tracing capabilities provided by erlang:trace/3 or you could simply exploit the dbg:tracer/2 function to create a new tracing process which executes your custom handler function every time a tracing message is received.
To automate things a bit, you could then create an Erlang Start Up File in the directory where you're running your code or in your home directory. The Erlang Start Up File is a special file, called .erlang, which gets executed every time you start the run-time system.
Something like the following should do the job:
% -*- Erlang -*-
erlang:display("This is automatically executed.").
dbg:tracer(process, {fun ({trace, _Pid, spawn, Pid2, {_M, _F, _Args}}, Data) ->
                             process_flag(Pid2, save_calls, Data),
                             Data;
                         (_Trace, Data) ->
                             Data
                     end, 100}).
dbg:p(new, [procs, sos]).
Basically, I'm creating a new tracing process which will trace processes (first argument). I'm specifying a handler function to get executed and some initial data. In the handler function, I'm setting the save_calls flag for newly spawned processes, whilst ignoring all other tracing messages. I set the save_calls option to 100, using the initial data parameter. In the last call, I'm telling dbg that I'm interested only in newly created processes. I'm also setting the sos (set_on_spawn) option to ensure inheritance of the tracing flags.
Finally, note how you need to use a variant of the process_flag function, which takes an extra argument (the Pid of the process you want to set the flag for).
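A quick way to check that the flag is being applied (a sketch; note that the tracer handles spawn messages asynchronously, so the flag may be set a moment after the spawn returns):
%% With save_calls set, process_info/2 returns {last_calls, [...]};
%% without it, it returns {last_calls, false}.
Pid = spawn(fun() -> timer:sleep(infinity) end),
process_info(Pid, last_calls).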
