I found the documentation for Erlang's 'distribution protocol', which outlines the handshake process between nodes, here: http://erlang.org/doc/apps/erts/erl_dist_protocol.html
I'm writing my own toy platform and I'd like to make it compatible with others if possible. Is there a common protocol that is widely implemented or do most platforms take Erlang's approach and force you to implement their specific protocol?
I'm not familiar with any common protocols for distributed systems. I know Erlang only supports one 'distribution protocol' out of the box, but of course you can implement your own protocol in Erlang if necessary.
Related
I have a project that involves a backend communicating with numerous clients. I am searching for the optimal protocol to use. Is MQTT appropriate for my project?
MQTT is well suited to projects involving numerous users. It is designed as a lightweight protocol built around a publish/subscribe model, so a broker can fan information out across your user base more efficiently than most point-to-point alternatives. That makes it a good fit for your project.
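To give a sense of how little code that model needs, here is a minimal publish/subscribe sketch using the paho-mqtt Python client; the broker host and topic name are hypothetical placeholders:

```python
# Minimal publish/subscribe sketch using the paho-mqtt client library.
# Broker host and topic names are hypothetical placeholders.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Subscribe once the connection is established, so the
    # subscription is re-established on reconnects.
    client.subscribe("sensors/temperature")

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883)

# Publish a message; the broker delivers it to every subscriber
# of the topic, without the backend contacting clients directly.
client.publish("sensors/temperature", "21.5")
client.loop_forever()
```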
I'm figuring that CORBA is considered a legacy technology that just refuses to die. That being said, I'm curious if there are any known standards out there that are preferred (and are just as platform-independent).
Thoughts? TIA!
Many organizations are moving to web services and the open standards relating to them (HTTP, WS-*) as alternatives to CORBA.
This article provides a comparison of the two technologies and offers some recommendations on when to use which.
If you really care about platform independence and protocol standardization, then the WS-* standards are something to look into.
There is now a modern, state-of-the-art CORBA implementation using C++11: TAOX11. It uses the new IDL-to-C++11 language mapping; for details, see the TAOX11 website. TAOX11 is supported on a wide range of platforms and compilers.
I have recently tried Google Protocol Buffers, and they seem rather similar to CORBA by design (a kind of IDL with a compiler, compact binary messages, etc.). They are probably one of the many possible successors.
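To illustrate the resemblance, here is a rough sketch of the IDL-like definition and the generated serialization code in Python, assuming a hypothetical sensor.proto compiled with protoc:

```python
# Sketch assuming a hypothetical sensor.proto compiled with
# `protoc --python_out=.`:
#
#   message Reading {
#     required string sensor = 1;
#     required double value  = 2;
#   }
#
import sensor_pb2  # hypothetical module generated by protoc

msg = sensor_pb2.Reading()
msg.sensor = "boiler-1"
msg.value = 21.5

# SerializeToString() yields a compact binary encoding, much
# smaller than the equivalent XML or JSON text.
wire = msg.SerializeToString()

# Any language with generated stubs can parse the same bytes.
decoded = sensor_pb2.Reading()
decoded.ParseFromString(wire)
```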
Web services are good for the right tasks, but creating and parsing messages takes more time, and text-based messages are bulkier than binary ones. A REST API with JSON looks like a good solution where binary protocols do not fit well.
ICE from ZeroC aims to be a "better CORBA".
Unfortunately their licensing terms are crap (at least the last time I checked with them), as they do not sell developer licenses, only (roughly) per-installation terms.
It is offered under the GPL too, if you can live with that.
Of all the NMSes (network management solutions) I've looked into, only Zenoss has a daemon to process AMQP messages (meaning my preferred one, Zabbix, is oblivious to it).
Why is that?
Is AMQP that far away from production ready?
At a glance, RabbitMQ 2.0 (or even ØMQ) seems to have solved most of the problems still standing from the Reddit May '10 test.
AMQP's scalability and generic design make it stand out to me as an obvious choice for an efficient, agnostic NMS feeder.
Is being agnostic its main flaw?
Is it being ignored by existing NMS solutions because having a proprietary communication protocol makes it harder for enterprises to switch from one NMS to another?
So far, AMQP has been "unrealized potential" for a simple reason: there are several non-interoperable versions of the protocol, which makes it very difficult for an ecosystem to emerge.
For instance, RabbitMQ supports versions 0.8 and 0.9 of the protocol, while Qpid C++ implements 0.10, so you've got no way to connect them. Hopefully the situation will evolve positively in 2011, because the working group is close to releasing version 1.0 of the protocol and implementers are working together to make sure that interoperability is achieved (it's a condition for marking the current version 1.0 proposal as "final"). When this happens, it should make a lot more sense for third-party products to support AMQP.
Also, you should note that having an open messaging protocol doesn't solve all the problems. In the case of a monitoring solution, it would allow various applications to communicate, but it wouldn't say what information is expected in each message or where messages should be sent. That's why Qpid has developed its own monitoring and management protocol on top of AMQP (see the Qpid Management Framework).
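To illustrate the gap, here is a sketch of publishing a monitoring event over AMQP with the pika Python client; the exchange name, routing-key scheme, and message fields are hypothetical, and they are exactly the kind of conventions AMQP itself leaves unspecified:

```python
# Sketch of publishing a monitoring event over AMQP with pika.
# The exchange, routing-key scheme, and event fields below are
# hypothetical conventions the applications must agree on,
# because AMQP itself does not define them.
import json
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.exchange_declare(exchange="monitoring", exchange_type="topic")

event = {"host": "web-01", "metric": "load", "value": 0.42}
channel.basic_publish(
    exchange="monitoring",
    routing_key="host.web-01.load",  # consumers bind with wildcards
    body=json.dumps(event),
)
conn.close()
```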
What is an example of a situation where CORBA would be used? Is it just a matter of using an interface language (e.g. Java) to 'talk' to all applications?
CORBA might be used to build a language-independent, OS-independent distributed system. For example, developers writing C++ on Linux could build a common distributed system with developers writing Java on Windows. IDL describes the interfaces that bind the two implementations over a common substrate (CORBA).
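To make that concrete, here is a rough sketch of the client side in Python using omniORBpy, assuming a hypothetical Thermometer interface defined in IDL with stubs generated by omniidl; the server could just as well be written in C++:

```python
# Rough client-side sketch using omniORBpy, assuming a hypothetical
# IDL interface (interface Thermometer { double read(); };) whose
# Python stubs were generated with `omniidl -bpython`.
import sys
from omniORB import CORBA
import Sensors  # hypothetical module generated from the IDL

orb = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)

# Object references usually come from a naming service or a corbaloc
# URL; the host and object key here are placeholders.
obj = orb.string_to_object("corbaloc::sensor-host:2809/Thermometer")
therm = obj._narrow(Sensors.Thermometer)

# The call looks local, but executes in the (possibly C++) server.
print(therm.read())
```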
CORBA is also useful when building a plain old distributed object system - it has a rich set of services defined and is generally very well thought out. However, these days - depending on the language - many folks have opted for either simpler approaches (e.g., RMI, protocol buffers) or message-based protocols (e.g., HTTP) for building distributed systems, so it's not as common. CORBA suffered from design-by-committee (especially on things like security).
More info:
http://en.wikipedia.org/wiki/Common_Object_Request_Broker_Architecture
You will find a list of real-life examples of CORBA projects at the website below.
http://www.cs.wustl.edu/~schmidt/TAO-users.html
TAO is one of the most popular C++ CORBA implementations available today. The project is quite active.
CORBA technology vendors killed each other through incompatible and bureaucratic implementations. Today, you can safely consider CORBA to be a legacy technology; that is, use it if you have to deal with components that already expose themselves through CORBA. Otherwise, stick to modern RPC/distribution standards like SOAP or, better yet, REST/JSON.
Sorry. To answer your question: CORBA was intended to be what SOAP, REST, and others are today. Real-life examples of applications of the latter are examples of things attempted with the former.
I have some data that I need to share between multiple services on multiple machines. Stuffing the data into a database or shuffling it over HTTP won't work in this situation, and ideally the different pieces of software will need to communicate with each other directly (or through one central coordinator that can send and receive).
Is it recommended to create and implement a network protocol or use some tool to do the communication?
If I did go the route of creating a protocol myself, it wouldn't have to be very complex: under 10 different message types, though it would have to be re-implemented in a few different languages for this project, and it would need to support Unicode. I have read plenty about (and done some) socket handling, but don't have much knowledge of handling a protocol I create. Are there any good resources on this?
There are also things like ICE and RPC that look interesting. The limit of my experience is using ICE and XMLRPC for a few days each. Is this the better route to go? If so, what tools are out there?
Recently I've been using Google Protocol Buffers for encoding and shipping data between different machines running software written in different languages. It is quite easy to do, and takes away a lot of the hassle of designing a custom protocol.
Without knowing what technologies and platforms you are dealing with, it's difficult to give you a very specific answer - so I'll try to give you some general feedback.
If the system(s) you are wishing to connect span more than a single platform and/or technology, you are probably better off using an existing transport mechanism and protocol, to maximize the chance your base platform will already have a library (or several) to interact over it. Also, integrating security and other features into a stack with known behaviors is more likely to be documented (with examples floating around). RPC (and ICE, though I have less familiarity with it) has some useful capabilities, but it also requires a lot of control over the environment, and security can be convoluted (particularly if you are passing objects between different languages).
With regards to avoiding polling, this is a performance-related issue; there are design patterns which can help you handle such things, if you understand how you need the system to work (e.g. the observer pattern - kind of a don't-call-us-we'll-call-you approach; see the sketch below). The network environment you are playing in will dictate which options are actually viable (e.g. a local LAN will have different considerations from something which runs over a WAN or the internet). Factors like firewall tunneling, VPN traversal, etc. should play a part in your final selected technology profile.
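Here is a minimal sketch of that observer idea in Python (the names and event shape are illustrative):

```python
# Minimal observer-pattern sketch: subscribers register a callback
# and are notified on change, instead of polling for updates.
class Subject:
    def __init__(self):
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def publish(self, event):
        # "Don't call us, we'll call you": push to every subscriber.
        for callback in self._observers:
            callback(event)

status = Subject()
status.subscribe(lambda e: print("service A saw:", e))
status.subscribe(lambda e: print("service B saw:", e))
status.publish({"node": "db-01", "state": "up"})
```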
The only other major consideration (that I can think of just now... ;-)) would be the type of data you need to pass about. Is it just text, or do you need to stream binary objects? Would an encoding format (like XML or JSON or bJSON) do the trick? You mention "under 10 different message types" as part of the question, but is that the only information which would ever need to be communicated by the system?
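If a simple text encoding does fit, a hand-rolled protocol doesn't need much more than framing. Here is a minimal sketch of length-prefixed JSON over a socket (the message shape is hypothetical), which also handles Unicode for free since JSON text is encoded as UTF-8:

```python
# Minimal length-prefixed JSON framing over a TCP socket: a 4-byte
# big-endian length header followed by a UTF-8 JSON body. Easy to
# re-implement in other languages; message shapes are hypothetical.
import json
import socket
import struct

def send_msg(sock: socket.socket, msg: dict) -> None:
    body = json.dumps(msg).encode("utf-8")
    sock.sendall(struct.pack(">I", len(body)) + body)

def recv_msg(sock: socket.socket) -> dict:
    header = _recv_exact(sock, 4)
    (length,) = struct.unpack(">I", header)
    return json.loads(_recv_exact(sock, length).decode("utf-8"))

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    # recv() may return partial data, so loop until n bytes arrive.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-frame")
        buf += chunk
    return buf
```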
Either way, unless the overhead of existing protocols is unacceptable, you're better off leveraging established work 99% of the time. Creativity is great, but commercial projects usually benefit from well-known behaviors, even if they're not the coolest or slickest (kind of the "as long as it works..." approach).
hth!