Erlang Port Data Transfer Length

I am trying to evaluate PHP code from Erlang using Erlang ports. The problem is that when the data to be evaluated is larger, I get a parse error from PHP, but when the data is smaller, I get the correct output. I think that when the data is longer, Erlang truncates it before it is sent to PHP for evaluation. Is there any limit on the length of data which can be sent or received on an Erlang port, or is this error due to some other reason?
I am using open_port(PortName, PortSettings) to open a new port and in PortSettings I am setting [{packet,4},exit_status] as my port options.

The {packet, 4} tuple says the program launched to handle the other end of the port expects data in a 4-byte length-prefixed form. I don't see anything in the docs for the php(1) program that says it knows how to deal with such data. Probably the only reason it works for short inputs is that the length prefix looks kinda like ASCII if you squint, as long as the data you're sending is under 127 bytes. As soon as you go over that, PHP is probably running into a UTF-8 decoding error.
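For illustration, here is a minimal sketch of what {packet, 4} framing means on the wire: each payload is preceded by a 4-byte big-endian length prefix (the function name frame/1 is purely illustrative):

%% {packet, 4} framing: a 4-byte big-endian length, then the payload.
frame(Data) when is_binary(Data) ->
    <<(byte_size(Data)):32/big-unsigned-integer, Data/binary>>.

So a 200-byte payload arrives prefixed with the bytes 0, 0, 0, 200, which a stock php(1) reading stdin has no reason to strip off.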
I'm pretty sure you want to say stream here instead of {packet, 4}. This gets you standard Unix-like pipe interaction: data sent down the port goes to stdin on the launched process, and anything it writes to stdout comes back to your Erlang process.
The only problem with doing it this way is that it re-launches php(1) on each transaction. This may seem expensive, but it's not too bad on any Unix-type system, due to the relative efficiency of the fork(2) system call. If you're on Windows, or you've benchmarked this and found that you really do need to build a FastCGI-like system, you may be out of luck. There seems to be no libphp to embed PHP into a program you write to deal with packetized input, and no way to run php(1) so that it stays active on the other end of a port. You might be better off switching to a native Erlang templating system.
Also, note that the exit_status atom passed to open_port() does nothing unless you use spawn.
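As a minimal sketch of the stream-mode approach, assuming a hypothetical eval.php script that reads the code from stdin and prints its result (the names and the end-of-input convention are up to you):

%% Hedged sketch: talk to php(1) over a plain stream-mode port.
%% "eval.php" is hypothetical; it must decide for itself when its
%% input is complete (e.g. a newline-terminated payload).
eval_php(Code) ->
    Port = open_port({spawn, "php eval.php"},
                     [stream, binary, exit_status]),
    port_command(Port, Code),
    collect_output(Port, <<>>).

collect_output(Port, Acc) ->
    receive
        {Port, {data, Bin}} ->
            collect_output(Port, <<Acc/binary, Bin/binary>>);
        {Port, {exit_status, _Status}} ->
            Acc
    end.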

Related

Gnuradio streaming between two computers?

Is there a simple way to implement communication between two computers running GNUradio using the standard blocks set?
What I have now is this:
On a Linux computer, GNU Radio is running and receiving input from a radio peripheral. On that computer I can see the received waveform on a WX scope. I can also use sliders and input boxes to change things like the receiver frequency.
What I'd like to do is this:
On a Windows computer, I have the WX scope and sliders. When I move a slider or change an input box, that data gets sent to the Linux machine, which is still running the radio receiver in GNU Radio. The received signal goes through a stream back to the Windows machine and gets displayed on the WX scope there.
Someone elsewhere suggested using the ZMQ blocks; however, when I tried setting up a PUSH/PULL pair to transmit a sine wave from the Linux machine to the Windows machine, nothing went through. The person who recommended that approach tried the same thing and also could not get it working, so I think those blocks might be broken.
So are there any alternative blocks that can do what I'm trying to do? Preferably something well documented and available in GNU Radio Companion.
Depending on the data rate from the receiver, it's possible to encounter performance issues attempting to send raw waveform data using e.g. the UDP blocks, where the sender may print an error similar to the following:
gr::log :WARN: udp_source0 - Too much data; dropping packet.
Because the scope widgets usually only display a portion of the input data, a better way of remotely visualizing the waveform might be to send only the rendered scope widget (e.g. using a remote desktop tool such as VNC or X2Go). Although this solution reaches beyond your original problem, it is probably easier to use in the long run for cases involving two-way GUI interaction.
For the scope widget data, the UDP sink and source blocks seem to be native to GNU Radio, and are either sufficiently documented or simple enough for this problem, again taking firewall configuration into consideration, as @Zephyr mentioned.
From GRC, specify in the UDP blocks:
the hostname or IP address of the display computer, and
a port number that isn't already in use (and, if you're on Linux, OS X, or anything UNIX-like, not a port below 1024).
For setting variables over the network, you might try the XMLRPC blocks, as described in another answer. These were recently deprecated, however.
See my other answer for a discussion of alternatives if performance issues arise.
Both Linux and Windows should have firewalls which might be blocking the connections.
You need to post the error messages displayed in gnuradio-companion.

Erlang pid comparison guarantees

This might be a trivial question for some Erlang veterans, but it would be nice to know, since it wasn't clear in the documentation. Many distributed systems algorithms make use of the comparability of unique pids to make decisions. Erlang is kind enough to offer built-in comparison of pids. However, I was wondering whether comparisons stay consistent across multiple machines, for both local and external pids. My guess is that there are no comparison guarantees, but I might be wrong. Am I?
Erlang stores more than just a simple process ID in its PID structures; the data includes a unique identifier for the remote node (whether it be another local or a remote VM).
See Can someone explain the structure of a Pid in Erlang? for details.
Thus, you're guaranteed to not send a message to the wrong PID on the wrong VM (or misinterpret the source of a received message), at least not without making an error somewhere in your code.
Update: It occurs to me that I may well have been answering the wrong question. If you're asking how the comparisons would work (e.g., if Pid1 < Pid2, whether Pid1 is local or remote), all I can state with some confidence is that the ordering will be constant, based on http://learnyousomeerlang.com/starting-out-for-real#bool-and-compare.
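For what it's worth, a minimal sketch of the kind of decision such algorithms make, relying only on that built-in term ordering (the function name is illustrative):

%% Pick a "leader" as the smallest pid; lists:sort/1 uses the same
%% term ordering as the < and > operators.
elect_leader(Pids) when is_list(Pids), Pids =/= [] ->
    hd(lists:sort(Pids)).

Whether that choice is meaningful when the pids come from several nodes is exactly the guarantee hedged on above.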

Compressing messages sent between Erlang nodes

I am writing a distributed Erlang application where several nodes are connected via a network with limited bandwidth. Thus, I would like to be able to minimize the size of the packets sent across the network when processes on different nodes send each other messages.
From http://www.erlang.org/doc/apps/erts/erl_ext_dist.html, I understand that the Erlang distribution mechanism uses erlang:term_to_binary/1,2 internally to convert Erlang messages to the external binary format that is sent over the network. Now, term_to_binary/2 supports several options that are useful for reducing the size of the binaries (http://www.erlang.org/doc/man/erlang.html#term_to_binary-1), including a compression option as well as the ability to choose a minor version with more efficient encoding of floats.
I would like to be able to tell the distribution mechanism to use both of these options every time it sends a message over the network. In other words, I would like to be able to specify the Options list that the distribution mechanism calls term_to_binary with. However, I have not been able to find any documentation on this subject. Is this possible to do?
Thanks for your help! :)
If I understand the code correctly, message encoding is hardcoded around line 1565 of dist.c, in dsig_send(), so you can't change the way messages are encoded without patching and recompiling the emulator.
However, you can change the carrier used for message distribution, as described here; there is an example of using SSL for Erlang distribution. So you could create a carrier that compresses everything it transmits (it may even be possible by tweaking the SSL example).
There are a few examples of standard distribution modules:
inet_tcp_dist.erl
inet6_tcp_dist.erl
inet_ssl_dist.erl
uds_dist example
Are you using rpc from node to node, or OTP behaviours? If so, try compressing the binary with zlib before it is sent.
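If patching the emulator is off the table, here is a hedged sketch of doing the compression at the application level, before the term ever crosses the distribution link (the compressed_term message tag is illustrative):

%% Compress on the sending side with term_to_binary/2 options and
%% unpack with binary_to_term/1 on the receiving side.
send_compressed(Dest, Term) ->
    Bin = erlang:term_to_binary(Term, [compressed, {minor_version, 1}]),
    Dest ! {compressed_term, Bin}.

decode_compressed({compressed_term, Bin}) ->
    erlang:binary_to_term(Bin).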

Threaded Erlang C-Node(cnode) Interoperability howto?

I am at a point in my Erlang development where I need to create a C-Node (see link for C-Node docs). The basic implementation is simple enough; however, there is a huge hole in the docs.
The code implements a single-threaded client and server. Ignoring the client for the moment... the C code that implements the server is single-threaded and can only connect to one Erlang client at a time.
Launch EPMD ('epmd -daemon')
Launch the server application ('cserver 1234')
Launch the erlang client application ('erl -sname e1 -setcookie secretcookie') [in a different window from #2]
execute a server command ('complex3:foo(3).') from the erlang shell in #3
Now that the server is running and a current Erlang shell has connected to it, try it again from another window.
open a new window.
launch an erlang client ('erl -sname e2 -setcookie secretcookie').
execute a new server command ('complex3:foo(3).').
Notice that the system seems hung, when it should have executed the command. The reason it hangs is that the other Erlang node is connected and there are no other threads listening for connections.
NOTE: there seems to be a bug in the connection handling. I added a timeout in the receive block and caught some errant behavior, but I did not catch all of it. Also, I was able to get the cserver to crash without warnings or errors if I forced the first Erlang node to terminate after the indicated steps were performed.
So the question... What is the best way to implement a threaded C-Node? What is a reasonable number of connections?
The cnode implementation example in the cnode tutorial is not meant to handle more than one connected node, so the first symptom you're experiencing is normal.
The erl_accept call is what accepts incoming connections.
/* Accept a single incoming connection from an Erlang node. */
if ((fd = erl_accept(listen, &conn)) == ERL_ERROR)
    erl_err_quit("erl_accept");
fprintf(stderr, "Connected to %s\n\r", conn.nodename);

/* Read/write loop bound to that one descriptor. */
while (loop) {
    got = erl_receive_msg(fd, buf, BUFSIZE, &emsg);
Note that, written this way, the cnode will accept only one connection and then pass the descriptor to the read/write loop. That's why when the erlang node closes, the cnode ends with an error, since erl_receive_msg will fail because fd will point to a closed socket.
If you want to accept more than one inbound connection, you'll have to loop accepting connections and implement a way to handle more than one file descriptor. You don't need a multithreaded program to do so; it would probably be easier (and maybe more efficient) to use the poll or select syscall, if your OS supports them.
As for the optimum number of connections, I don't think there is a rule for that; you'd need to benchmark your application if you want to support high concurrency in the cnode. But in that case it would probably be better to re-engineer the system so that Erlang copes with the concurrency, relieving the cnode of that burden.
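For completeness, a sketch of the Erlang side of the tutorial's protocol: the shell call complex3:foo(3). boils down to sending a message to the C node and waiting for its reply. The node name below is illustrative; use whatever name your cserver instance registers under.

%% Sketch of the complex3-style client: send {call, self(), Msg} to
%% the C node and wait for the {cnode, Result} reply.
foo(X) ->
    call_cnode({foo, X}).

call_cnode(Msg) ->
    {any, 'c1@localhost'} ! {call, self(), Msg},
    receive
        {cnode, Result} ->
            Result
    end.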

Calling Lisp from Ruby/Rails?

How might you call a Lisp program from a Rails application?... For example, allow the end user to enter a block of text in the Rails web app, have the text processed by the Lisp program and return results to the Rails app?
There are a couple ways that come to mind:
Execute the lisp program with Process. Communicate with the Lisp program via standard in, and have the Lisp program output its result over stdout.
Do the same thing as above, but communicate via named pipes instead. Have your Ruby code write data into a named pipe, then have the Lisp program read from that pipe, and write data out over another named pipe which you then read with your Ruby app. The Lisp program can either run in the background as a daemon that checks for data on its incoming pipe, or you can fire it up as-needed using Ruby's command-line utilities (as above).
Find a Ruby-Lisp bridge. I have no experience with such a bridge (nor do I know off-hand if one even exists) and I think the above 2 mechanisms are easier, but your mileage may vary.
Another simple way is to have Lisp running an HTTP server and contact Lisp from the outside via HTTP requests.
CL-JSON supports JSON-RPC. It's very easy to set up with a web server such as Hunchentoot to have a Lisp-based web service that anything that speaks JSON-RPC (e.g. this) can use.
It would depend on how often it's going to happen.
If it's once in a blue moon, then just run a backquote command that starts the lisp interpreter, or popen it and write to it.
If it happens all the time, you will need to have Lisp already running, so the question then is how to communicate. Any of the interprocess communication mechanisms will work, but I would suggest a TCP socket for development, testing, and production flexibility.
If it happens a million times a day, but a toy Lisp would be good enough, it is a simple matter to implement Lisp with Ruby classes. This was done as chapter 8 of Practical Ruby Projects.
