I have a project that involves a backend communicating with numerous clients. I am searching for the optimal protocol to use. Is MQTT appropriate for my project?
MQTT is well suited to projects where a backend serves numerous clients. It is a lightweight protocol built around a publish/subscribe model: clients subscribe to topics on a broker, and the backend publishes updates that the broker pushes to every subscriber. For a backend that needs to fan information out across a large user base, that push model is usually more efficient than having every client poll, which makes MQTT a good fit for your project.
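For concreteness, here is a minimal sketch (Python with the paho-mqtt client, 1.x-style callback API; the broker address and topic name are made up) of that publish/subscribe flow - the backend publishes once and every subscribed client receives the update, instead of each client polling:

```python
# Client side: subscribe to a topic and react when the backend publishes.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Called by the client loop whenever a message arrives on a subscribed topic.
    print(f"update on {msg.topic}: {msg.payload.decode('utf-8')}")

client = mqtt.Client()                         # paho-mqtt 1.x-style constructor
client.on_message = on_message
client.connect("broker.example.com", 1883)     # hypothetical broker
client.subscribe("backend/updates")            # hypothetical topic
# Backend side would simply do: client.publish("backend/updates", "new data available")
client.loop_forever()
```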
I am studying the IoT protocols CoAP, MQTT, and LwM2M.
I have learned a little about CoAP and MQTT,
but I do not know what LwM2M is,
or how it differs from CoAP.
My impression is that LwM2M is not a protocol with its own format, but rather a system architecture built on CoAP.
Is that correct?
What is LwM2M, and how can I learn more about it?
Could someone please explain?
LwM2M (specified by OMA) is a protocol group largely built on top of CoAP (specified by the IETF).
LwM2M uses a subset of CoAP's capabilities that fits into an architecture where many small devices register with a large LwM2M server that manages them. It prescribes particular path structures (numbers are used in the paths, and their meanings are defined) that represent the LwM2M object model, to allow that unified management.
Compared to "plain CoAP", this limits the scope of what devices can do. Devices can still provide other CoAP functionality on the same server that is not covered by LwM2M. Those limitations allow different vendors to build devices that can interoperate with a different management servers, and LwM2M provides additional specifications for easy deployment (e.g. based on smart cards) that are out of scope for CoAP.
The direct answer can be obtained from the official sites:
CoAP "is a specialized web transfer protocol for use with constrained nodes and constrained networks in the Internet of Things.
The protocol is designed for machine-to-machine (M2M) applications such as smart energy and building automation."
LwM2M "is a device management protocol designed for sensor networks and the demands of a machine-to-machine (M2M) environment. With LwM2M, OMA SpecWorks has responded to demand in the market for a common standard for managing lightweight and low power devices on a variety of networks necessary to realize the potential of IoT."
Basically, we can simplify by saying that CoAP was designed for communication between constrained IoT devices and is very similar to the HTTP protocol, which makes developers' work easier, while LwM2M was designed mainly to manage constrained devices remotely, providing service enablement, for instance. Both protocols are commonly used together.
You can find more information at the following links:
- What is LwM2M? A device management solution for low power M2M
- CoAP functionality expected in a LwM2M system
This is a basic question. I want to apply for an entry-level Java developer position with the following requirement:
Familiarity with the Sailpoint Identity IQ standard adapters/connectors
By standard connectors, do they basically mean how SailPoint exchanges data with third-party tools? And by adapter, do they mean that the adapter pattern would be used? Thanks
This will probably appear well after your interview - but to answer the question:
1) Standard adapters/connectors:
SailPoint ships with a "standard" set of connectors which are included in the purchase price; there are others, e.g. EPIC, which do not ship as part of the standard product and must be enabled separately. To give you a deeper view into connectors:
Connectivity Methods:
Direct Connectivity - This is where a connector communicates directly with a system using APIs or data sources. Some advantages of using direct connect are that you don't have to generate or transmit files, and you can be more efficient by processing only things that have changed. Some disadvantages are that they are subject to availability and downtime concerns like any connected system. They are also typically subject to whatever advantages and disadvantages the APIs themselves impose.
Some people also refer to this as an 'online' method of connectivity.
File-Based Connectivity - This is where a connector reads from a snapshot of data presented in a file, rather than connecting directly to the system. Some advantages of using a file are that files are portable, easily inspected for data issues, and not typically subject to availability concerns. Some disadvantages are that files are usually processed in their entirety, and may require processing or transformation in order to work effectively.
Some people also refer to this as a 'decoupled' or 'offline' method of connectivity.
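As a purely conceptual illustration (this is not SailPoint code; the endpoint and file layout are made up), the difference between the two connectivity methods boils down to something like this:

```python
# Conceptual contrast only: pulling account data live from an API (direct/online)
# versus reading a snapshot file exported by the source system (file-based/offline).
import csv
import json
from urllib.request import urlopen

def read_accounts_direct(base_url: str) -> list[dict]:
    """Direct connectivity: query the managed system's API on every aggregation run."""
    with urlopen(f"{base_url}/api/accounts") as resp:   # hypothetical endpoint
        return json.load(resp)

def read_accounts_from_file(path: str) -> list[dict]:
    """File-based connectivity: process a CSV snapshot the system exported earlier."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))
```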
Connector Implementations
Source-Specific Implementation - These are connectors built with a specific target system in mind. They typically use specific APIs targeted at the system they integrate with. Because the systems and APIs are known, these typically require less configuration to get working.
Examples of these are Active Directory, Workday, Salesforce, SAP, etc.
General Implementation - These are general-purpose connectors which can be used to connect to a variety of sources or systems. These tend to be more flexible in general, but typically do require a bit more setup and configuration to meet needs.
Examples of these are Web Services, SCIM, JDBC, Delimited Files, etc.
Custom Implementation - These are completely custom connectors and tailored to the system and API of your choice. This approach offers the most flexibility of all connector options, however making custom connectors is definitely a development-level activity, and is not to be taken lightly. The code written for custom connectors is maintained and supported by the customer who owns the connector.
Examples of these are custom in-house applications, etc.
Understanding these connector implementations is important, because if a source-specific implementation isn't available, another general or custom connector implementation may be used instead.
I understand that Apache Thrift and ZeroMQ are pieces of software belonging to different categories, and it is not easy to compare them since it is an apples-to-oranges comparison. But I don't know why they belong to different categories. Aren't they both used to pass data between different services, which may or may not be written in different languages?
When should I use Thrift and when should I use a message queue?
They belong to different categories primarily because they are targeted at different audiences with different concerns. Therefore they are better at different things.
Apache Thrift, similar to Google Protocol Buffers, is intended to be a high-level, reasonably well-abstracted means of sending data between processes on different machines, possibly written in different languages. They purposefully provide an IDL-ish layer to describe the message, perhaps with automatic or semi-automatic versioning and optional sections.
ZeroMQ specifically, on the other hand (not message queues in general, which would be an entirely separate question), is all about speed. It efficiently moves bytes to the other end, with as few stops along the way as possible. As such, you are responsible for serialization, versioning, or whatever else is important to you, the developer. Of course, this can mean complexity, particularly if you are communicating between different platforms and languages, but that's part of the penalty for the lack of abstraction.
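To illustrate the trade-off, here is a minimal pyzmq request/reply sketch (Python; the endpoint is made up) - note that the application, not ZeroMQ, decides on JSON and on how to version the messages:

```python
# Minimal ZeroMQ REP server: ZeroMQ moves the bytes, the app owns the format.
import json
import zmq

def run_server(endpoint: str = "tcp://*:5555") -> None:
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind(endpoint)
    while True:
        request = json.loads(sock.recv())          # we chose JSON; ZeroMQ doesn't care
        reply = {"echo": request, "version": 1}    # versioning is our problem too
        sock.send(json.dumps(reply).encode("utf-8"))
```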
Which to choose? It depends on your project. If you don't need absolute raw performance, a higher-level toolkit will likely serve your purpose just fine. If you are building a high-speed, low-latency application, you're going to end up closer to the metal anyway.
Good Luck
Thrift defines how to represent complex data so that it can be written and read by different languages (hence it has an IDL to define the types that will be transported). It also defines simple means to transport such formatted messages between two endpoints (aka the Thrift transport).
ZeroMQ, on the other hand, shines in the ways you can transport messages between endpoints to achieve different behaviors - one-to-one, one-to-many, many-to-many - with different expectations of speed and reliability for those transfers. As for the message itself, it is just a blob to ZeroMQ, and apps must find their own way to encode and decode it.
So if you have complex data structures but simple messaging patterns, you might lean towards the Thrift side. If you have simple data but complex messaging patterns, you might lean towards ZeroMQ or something like that (AMQP).
And if you need both, you could use Thrift and ZeroMQ as a pair: Thrift to format the message, and ZeroMQ to transport it.
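For what that pairing might look like, here is a rough sketch in Python (the Telemetry class is hypothetical - it would come from code generated by the Thrift compiler from your IDL; the endpoint is made up):

```python
# Thrift handles the message format, ZeroMQ handles the transport.
import zmq
from thrift.TSerialization import serialize, deserialize
from telemetry.ttypes import Telemetry  # hypothetical thrift-generated module

def publish(sample: Telemetry, endpoint: str = "tcp://*:5556") -> None:
    ctx = zmq.Context()
    sock = ctx.socket(zmq.PUB)
    sock.bind(endpoint)
    sock.send(serialize(sample))            # Thrift binary payload, ZeroMQ delivery

def decode(raw: bytes) -> Telemetry:
    return deserialize(Telemetry(), raw)    # any language with the same IDL can read this
```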
Davorin mentioned using Thrift and ZeroMQ as a pair; if you're interested in that approach, check out the Thrift codebase and look under thrift/contrib/zeromq for a demo of Thrift using ZeroMQ.
Of all the NMS (network management solutions) I've looked into,
only Zenoss has a daemon to process AMQP messages (meaning my preferred one, Zabbix, is oblivious to it).
Why is that?
Is AMQP that far away from production ready?
At a glance, RabbitMQ 2.0 (or even ØMQ) seems to have solved most of the problems still standing from the Reddit May '10 test.
AMQP's scalability and generic design make it stand out to me as an obvious choice for an efficient and agnostic NMS feeder.
Is being agnostic its main flaw?
Is it being ignored by existing NMS solutions because having a proprietary communication protocol makes it harder for enterprises to switch from one NMS to another?
So far, AMQP is "unrealized potential" for a simple reason: there are several non-interoperable versions of the protocol, which makes it very difficult for an ecosystem to emerge.
For instance, RabbitMQ supports versions 0.8 and 0.9 of the protocol, while Qpid C++ implements 0.10, so you've got no way to connect them. Hopefully, the situation should evolve positively in 2011, because the working group is close to releasing version 1.0 of the protocol and implementers are working together to make sure that interoperability is achieved (it's a condition for marking the current 1.0 proposal as "final"). When this happens, it should make a lot more sense for third-party products to support AMQP.
Also, you should note that having an open messaging protocol doesn't solve all the problems. In the case of a monitoring solution, it would allow various applications to communicate, but it wouldn't say what information is expected in each message or where it should be sent. That's why Qpid has developed its own monitoring and management protocol on top of AMQP (see the Qpid Management Framework).
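As a small illustration of that gap, here is a sketch using pika (a Python client for AMQP 0-9-1 / RabbitMQ): AMQP routes the message happily, but the exchange name, routing key convention, and JSON body are all things a hypothetical monitoring application has to define for itself.

```python
# AMQP delivers the bytes; the monitoring "protocol" on top is our own convention.
import json
import pika

def publish_metric(host: str, device: str, metric: str, value: float) -> None:
    conn = pika.BlockingConnection(pika.ConnectionParameters(host))
    channel = conn.channel()
    channel.exchange_declare(exchange="monitoring", exchange_type="topic")
    channel.basic_publish(
        exchange="monitoring",
        routing_key=f"{device}.{metric}",   # our own routing-key convention
        body=json.dumps({"device": device, "metric": metric, "value": value}),
    )
    conn.close()
```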
I have some data that I need to share between multiple services on multiple machines. Stuffing the data into a database or shuffling it over HTTP won't work in this situation, and ideally the different pieces of software will need to communicate with each other directly (or through one central coordinator that can send and receive).
Is it recommended to create and implement a network protocol or use some tool to do the communication?
If I did go the route of creating a protocol myself, it wouldn't have to be very complex: under 10 different message types, but it would have to be re-implemented in a few different languages for this project, and it would have to support Unicode. I have read plenty (and done some work) with handling sockets, but don't have much knowledge of handling a protocol I create myself. Are there any good resources on this?
There are also things like ICE and RPC that look interesting. The limit of my experience is using ICE and XMLRPC for a few days each. Is this the better route to go? If so, what tools are out there?
Recently I've been using Google Protocol Buffers for encoding and shipping data between different machines running software written in different languages. It is quite easy to do, and takes away a lot of the hassle of designing a custom protocol.
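For a feel of what that looks like, here is a rough sketch (Python; the reading_pb2 module is hypothetical - it would be generated by protoc from a small .proto file describing your message types):

```python
# Encoding/decoding a message with a protobuf-generated class.
from reading_pb2 import SensorReading  # hypothetical module generated by protoc

def encode(device_id: str, value: float) -> bytes:
    msg = SensorReading(device_id=device_id, value=value)
    return msg.SerializeToString()      # compact, language-neutral bytes

def decode(raw: bytes) -> "SensorReading":
    msg = SensorReading()
    msg.ParseFromString(raw)            # any language with the same .proto can do this
    return msg
```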
Without knowing what technologies and platforms you are dealing with, it's difficult to give you a very specific answer - so I'll try to give you some general feedback.
If the system(s) you wish to connect span more than a single platform and/or technology, you are probably better off using an existing transport mechanism and protocol, to maximize the chance your base platform already has a library (or several) to interact over it. Also, integrating security and other features in a stack with known behaviors is more likely to be documented (with examples floating around). RPC (and ICE, though I have less familiarity with it) has some useful capabilities, but it also requires a lot of control over the environment, and security can be convoluted (particularly if you are passing objects between different languages).
With regards to avoiding polling, this is a performance-related issue; there are design patterns which can help you handle such things if you understand how you need the system to work (e.g. the observer pattern - kind of a don't-call-us-we'll-call-you approach; see the sketch below). The network environment you are playing in will dictate which options are actually viable (e.g. a local LAN will have different considerations from something which runs over a WAN or the internet). Factors like firewall tunneling, VPN traversal, etc. should play a part in your final selected technology profile.
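A bare-bones sketch of that observer idea (plain Python, no networking - just to show the shape of "register a callback instead of polling"):

```python
# Subscribers register once; the publisher pushes updates to them.
from typing import Callable

class DataFeed:
    def __init__(self) -> None:
        self._subscribers: list[Callable[[dict], None]] = []

    def subscribe(self, callback: Callable[[dict], None]) -> None:
        self._subscribers.append(callback)

    def publish(self, update: dict) -> None:
        for callback in self._subscribers:   # push to everyone who registered
            callback(update)

feed = DataFeed()
feed.subscribe(lambda update: print("got", update))
feed.publish({"status": "ok"})
```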
The only other major consideration (that I can think of just now... ;-)) would be the type of data you need to pass around. Is it just text, or do you need to stream binary objects? Would an encoding format (like XML, JSON, or BSON) do the trick? You mention "less than ten message types" as part of the question, but is that the only information which would ever need to be communicated by the system?
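If you did end up rolling the simple protocol described in the question, it might be no more than a length-prefixed frame around a JSON body; a minimal sketch (Python, made-up message type names) to show how little is involved:

```python
# Each message: 4-byte big-endian length prefix, then a UTF-8 JSON body with a "type" field.
import json
import socket
import struct

def send_message(sock: socket.socket, msg_type: str, payload: dict) -> None:
    body = json.dumps({"type": msg_type, **payload}).encode("utf-8")
    sock.sendall(struct.pack(">I", len(body)) + body)

def recv_message(sock: socket.socket) -> dict:
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return json.loads(_recv_exact(sock, length).decode("utf-8"))

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf
```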
Either way, unless the overhead of existing protocols is unacceptable, you're better off leveraging established work 99% of the time. Creativity is great, but commercial projects usually benefit from well-known behaviors, even if they aren't the coolest or slickest (kind of the "as long as it works..." approach).
hth!