As far as I understand, LoRaWAN is intentionally designed as a non-IP stack.
Given the requirements LoRaWAN has to meet, I can understand the design decisions behind the standard.
But IMHO, there are many other use cases for LoRa (just the physical-layer protocol) which, for example, do
not need to communicate with many gateways at the same time or
don't have low energy consumption requirements.
For these use cases, it would be nice to have other MAC implementations, where one could have either
an IP-based stack on top of LoRa or
a lightweight protocol between a LoRa-based sensor and ONE gateway, which handles message transport & security.
Sigfox has an architecture similar to LoRaWAN's, where the device/sensor sends messages directly to a backend network to which the application needs to connect.
To me, this kind of architecture seems pretty odd, since I lose many advantages of the internet and am tightly coupled to a backend network provider (imagine if, when using LTE, you had to explicitly register your application with the mobile provider's backend).
I would like to have a local network (it would be okay if it were not IP-based) where the devices are connected to a gateway, and at the gateway I have full flexibility in what to do with the sensor data. Using LoRaWAN, this could be achieved by running a network server on the gateway, but that would be more of a workaround than the solution I am looking for.
The only reason I can see right now that makes this network architecture really necessary is that a device can connect to multiple gateways, so use cases such as asset tracking can easily be realized.
Are there any LoRa based solutions where I do not have to deal with setting up network servers? If not, why is that the case?
Edit:
For Linux, I found this project here:
https://de.slideshare.net/chienhungpan/lets-have-an-ieee-802154-over-lora-linux-device-driver-for-iot
And also the LoRa Mesh Project:
https://github.com/meshtastic/Meshtastic-device
LoRaWAN is a Low Power Wide Area Network (LPWAN) technology. This means it allows us to build a scalable wireless IoT network in which all devices (things) can be connected even if their transmission power is limited. A LoRaWAN network can easily scale to the size of a country, and the low-power communication makes it possible to operate the network in an ISM band, where both transmission power and bandwidth are limited anyway. Low transmission power also ensures a long battery life for battery-powered devices.
Beyond supporting geo-localisation, gateway diversity (meaning that the same radio frame can be received by multiple gateways) significantly increases the resiliency of the network, improves the link budget and lowers the packet error rate.
Traditional IP-based protocols would require a much higher average data rate than what LoRa was designed for.
Although you are not obliged to use LoRaWAN's MAC layer with the LoRa modulation and may develop your own proprietary protocols, if low transmission power, long range, and long battery life are not important for your use case, it is probably better to use another technology.
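If you do go the proprietary route, the MAC can stay very small. Below is a minimal sketch in Python of a frame format for a hypothetical single-sensor-to-single-gateway link; all field names and sizes are illustrative assumptions, not part of any standard:

```python
import struct

# Hypothetical frame layout: 2-byte device ID, 1-byte message type,
# 4-byte monotonic counter (crude replay protection), then the payload.
HEADER_FMT = ">HBI"  # big-endian: uint16, uint8, uint32
HEADER_LEN = struct.calcsize(HEADER_FMT)

def pack_frame(device_id: int, msg_type: int, counter: int, payload: bytes) -> bytes:
    """Build a frame ready to hand to a raw LoRa radio driver."""
    return struct.pack(HEADER_FMT, device_id, msg_type, counter) + payload

def unpack_frame(frame: bytes):
    """Split a received frame back into header fields and payload."""
    device_id, msg_type, counter = struct.unpack(HEADER_FMT, frame[:HEADER_LEN])
    return device_id, msg_type, counter, frame[HEADER_LEN:]

# Example round trip:
frame = pack_frame(device_id=7, msg_type=1, counter=42, payload=b"23.5C")
print(unpack_frame(frame))
```

In a real deployment you would still need to add encryption and authentication (e.g. AES with a pre-shared key per device), which is a large part of what LoRaWAN's MAC gives you for free.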
The Reticulum Network Stack supports many different physical mediums, including raw LoRa. Mediums like LoRa can be used exclusively, or mixed with any number of other mediums to build networks as simple or as complex as you need, from two devices to billions.
Reticulum is purposefully designed to handle very low data rates and very high latency, while still supporting transport across much faster network segments, and it is very efficient in terms of per-packet and overall protocol overhead.
The source code for the reference implementation, along with releases, can be found here: https://github.com/markqvist/reticulum
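For a flavor of the API, here is a minimal Python sketch loosely based on the project's own minimal example; the application name and aspect are arbitrary, so check the repository for the current API:

```python
import RNS

# Bring up the stack from the local Reticulum configuration,
# which may include a raw-LoRa interface.
reticulum = RNS.Reticulum()

# Create an identity and a destination that other nodes can address.
identity = RNS.Identity()
destination = RNS.Destination(
    identity,
    RNS.Destination.IN,
    RNS.Destination.SINGLE,
    "example_utilities",  # arbitrary application name
    "minimalsample",      # arbitrary aspect
)

# Announce the destination so the rest of the network can learn a path to it.
destination.announce()
```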
Related
I am studying the IoT protocols CoAP, MQTT, and LwM2M.
I have been able to learn a little about CoAP and MQTT,
but I do not know what LwM2M is,
or how it differs from CoAP.
My impression is that LwM2M is not a protocol with its own message format but rather a system architecture that uses CoAP.
Is that correct?
What is LwM2M, and where can I find more information about it?
Could someone please explain?
LwM2M (specified by OMA) is a protocol group largely built on top of CoAP (specified by the IETF).
LwM2M uses a subset of CoAP's capabilities that fits into an architecture of many small devices registering with a large LwM2M server that manages them. It prescribes particular path structures (which numbers are used in paths, and what they mean) that represent the LwM2M object model, to allow that unified management.
Compared to "plain CoAP", this limits the scope of what devices can do. Devices can still provide other CoAP functionality on the same server that is not covered by LwM2M. Those limitations allow different vendors to build devices that can interoperate with a different management servers, and LwM2M provides additional specifications for easy deployment (e.g. based on smart cards) that are out of scope for CoAP.
The direct answer can be obtained from the official sites:
CoAP "is a specialized web transfer protocol for use with constrained nodes and constrained networks in the Internet of Things.
The protocol is designed for machine-to-machine (M2M) applications such as smart energy and building automation."
LwM2M "is a device management protocol designed for sensor networks and the demands of a machine-to-machine (M2M) environment. With LwM2M, OMA SpecWorks has responded to demand in the market for a common standard for managing lightweight and low power devices on a variety of networks necessary to realize the potential of IoT."
Basically, we can simplify by saying that CoAP was designed for communication between constrained IoT devices, and it is very similar to the HTTP protocol, which makes developers' work easier, while LwM2M was designed mainly for managing constrained devices remotely, providing service enablement, for instance. The two protocols are commonly used together.
You can find more information at the following links:
- What is LwM2M? A device management solution for low power M2M
- CoAP functionality expected in a LwM2M system
I wish to create a platform as a service in the financial markets using Erlang/Elixir. I will provide AWS lambda-style functions in financial markets, but rather than being accessible via web/rest/http, I plan to distribute my own ARM-based hardware terminals to clients (Nvidia Jetson TX2-based or similar, so decent hardware). They will access the functions from these terminals. I want said terminals to be full nodes in the system. So they will use the actor model to message pass to my central servers, and indeed, the terminals might message pass amongst each other if terminal users decide to put their own functions online.
Is this a viable model? Could I run 1000 terminals like this? 100 000? What kinds of limitations might I start bumping into? Is Erlang message routing scalable enough to imagine such a network still being performant if we had soft-real time financial markets streaming data flowing around? (mostly from central servers to terminals, but a good proportion possible moving directly around from terminal to terminal). We could have a system where up to 100k or more different "subscription" data channel processes were available, many of them taking input and producing output every second.
Basically, I'd like a canonical guide to the scalability capabilities of an Erlang system like the one above. Ideally, I'd also like some guidance on the security implications of such a system, i.e., could global routing tables or any other part of the system be compromised by a rogue terminal user, or can edge nodes be partly "sealed off" from sensitive parts of the rest of the Erlang network?
Note that I'd want to make heavy use of ports/NIFs for high-compute processes.
I would not pursue this avenue for various reasons, all of which hark back to the sort of systems that Erlang's distribution mechanism was developed for - a set of boards on a passive backplane: "free" local bandwidth and the whole machine sits in the same security domain. The Erlang distribution protocol is probably too chatty to work well on widely spread and large networks, and it is certainly too insecure. Unless you want nodes to be able to execute :os.cmd("rm -rf /") on each other, of course.
Use the Erlang distribution protocol in your central system to your heart's content, and have these terminals talk something that's data-only-over-SSL to that system and each other. On top of that, you can quite simply build a sort of overlay network to do whatever you want.
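As a rough illustration of the "data-only-over-SSL" idea: the terminals never join the Erlang distribution at all and instead speak, say, newline-delimited JSON over TLS to the central system. A minimal Python sketch of the terminal side; the host, port, and message shape are made up for the example:

```python
import json
import socket
import ssl

# Open a TLS connection to the (hypothetical) central system and
# exchange plain data messages -- no code, no distribution protocol.
context = ssl.create_default_context()
with socket.create_connection(("central.example.com", 8443)) as sock:
    with context.wrap_socket(sock, server_hostname="central.example.com") as tls:
        msg = {"type": "subscribe", "channel": "quotes.EURUSD"}
        tls.sendall((json.dumps(msg) + "\n").encode("utf-8"))
        print(tls.recv(4096))  # first reply from the server
```

Because the terminal can only submit data, a rogue terminal is limited to sending messages your servers must validate, rather than gaining the ability to execute code on your nodes.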
I recommend reading this carefully, and I also recommend dividing your service into small microservices.
Another benchmark is Investigating the Scalability Limits of Distributed Erlang.
In Joe Armstrong's book Programming Erlang, he says:
"A few years ago, when I had my research hat on, I was working with PlanetLab. I had access to the PlanetLab network, so I installed empty Erlang servers on all the PlanetLab machines (about 450 of them).
I didn't really know what I would do with the machines, so I just set up the server infrastructure to do something later."
Do not use external ports; use linked-in drivers written in C or C++ instead.
You will find a lot of information regarding Erlang architectures in this answer: How scalable is distributed Erlang?
The short answer is that there is a practical limit on the number of nodes in a cluster, but this limit can be overcome fairly easily with federation.
EDIT 1: Furthermore, I would recommend reading this book: Designing for Scalability with Erlang/OTP.
My startup is building an online/mobile labor marketplace, with a business interface for businesses posting jobs and a mobile interface through which we distribute these jobs to users. We use Rails, REST, Amazon RDS & EC2, and MySQL.
My question is: from an architecture standpoint, on the server side, does it make more sense to:
a) have two applications, one serving the web interface and one acting as the server side (API) for the mobile interface, both communicating via the DB and running on two different EC2 instances, or
b) build one comprehensive application serving both interfaces?
Any opinions and perspectives on the pros and cons would be much appreciated.
Thanks
Amazon Web Services gives you several tools to make your architecture simpler, more robust, and more scalable if you break your system down into smaller, simpler, and independent components.
Since instances come in a range of sizes from micro to extra large (and a few sizes of extra large), you have the flexibility to match each service to the appropriate size, configuration, software dependencies, update cycle, etc. This will make life much easier for the developers, testers, and administrators.
You also have the ability to scale, and even auto-scale, each layer and service independently. If one interface or the other gains more users, or the data size fed through one of them increases, you can scale only the relevant service. This saves you the complexity and cost of scaling the full system as a whole.
Another characteristic of AWS is the ability to scale up, down, out, and in based on your needs. For example, if one of the interfaces has more users on working weekdays and fewer on weekends, you can scale that interface down for the weekend or overnight, saving 50% of your computation costs for those instances. Along the same lines, you can switch the more static interfaces from the on-demand model to reserved instances, again saving around 50% of your costs.
To allow communication between the different interfaces you can use your DB, but you also have a few more options depending on your use case. One option is a queueing system such as SQS. The queue serves as a buffer between the interfaces, reducing the risk that the failure of one component (software bug, hardware failure...) affects the whole system. Another option for inter-interface communication, one more tuned for performance, is an in-memory cache; AWS offers ElastiCache as such a service. For transient data that lives only a short time and is accessed under high load, it can be more efficient than the DB.
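For illustration, here is a sketch of the SQS option using boto3; the queue URL and message body are placeholders:

```python
import boto3

# Both services only need to agree on the queue and message format,
# not on each other's uptime or location.
sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"

# Producer (web interface) enqueues a job posting:
sqs.send_message(QueueUrl=queue_url, MessageBody='{"job_id": 42}')

# Consumer (mobile API) polls for work and acknowledges it:
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for m in resp.get("Messages", []):
    print("processing", m["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=m["ReceiptHandle"])
```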
I see no long-term cons, other than forgoing the quick (and dirty) implementation of a single service on a single machine.
The overall goal should be to share as much code as possible between the mobile and web applications. Otherwise you are looking at maintenance headaches, where you end up fixing bugs or adding features in at least two places.
Ideally, everything below the front end should be common. The web UI should itself call the same server-side APIs that you are going to need for the mobile application, and most of the logic should live in those APIs, leaving only presentation details to the UI. This is what many patterns, such as MVC, prescribe.
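To make that concrete, here is a small sketch of "one logic layer, two presentations", written in Python/Flask for brevity rather than the Rails stack from the question; the routes and data are made up:

```python
from flask import Flask, jsonify, render_template_string

app = Flask(__name__)

def list_jobs():
    # Single source of truth for both interfaces (stubbed data).
    return [{"id": 1, "title": "Warehouse shift"}]

@app.route("/api/jobs")  # consumed by the mobile app
def jobs_api():
    return jsonify(list_jobs())

@app.route("/jobs")      # web interface: same logic, HTML on top
def jobs_page():
    rows = "".join(f"<li>{j['title']}</li>" for j in list_jobs())
    return render_template_string(f"<ul>{rows}</ul>")
```

The same split works in Rails: keep the logic in one place (models/services), and let the HTML views and the JSON API both call into it.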
I run a mobile site and a desktop site myself. The codebase is exactly the same at the PHP level; only the Smarty templates differ between the two. Granted, a mobile application is different from a mobile site, but the same basic principles apply.
How scalable is ZeroMQ? I'm especially interested in understanding its potential for running on a large number (10,000 - 15,000) of cores.
We've tried to make it as scalable as possible, but I have personally tested it only on boxes with up to 16 cores. Up to that limit we've seen almost linear scaling.
You don't mention whether your 10k or 15k cores are on the same box or not.
Let's assume they are. Every two years the number of cores on a box can, theoretically, double. So if we have 16-core boxes today, it'll be 16K cores in 20 years.
So now, your question is maybe, "will ZeroMQ help my application scale to such huge numbers of cores, so that it will scale over the next 20+ years?" The answer is "yes, but only if you use it properly". This means designing your application using inproc sockets and patterns that properly divide the work, and flow of data. You will need to adjust the architecture over time.
If your question is, "can I profitably use that many cores between multiple applications", the answer lies with your O/S more than ZeroMQ. Can your I/O layer handle the load? Probably, yes.
And if your question is, "can I use ZeroMQ across a cloud of 10K-16K boxes", then the answer is "yes, this has already been proven in practice".
Note that although ZeroMQ is multithreaded internally, it may not be wise to rely solely on that to scale it up to large numbers of cores. However, because ZeroMQ uses the same API for inter-machine, inter-process, and inter-thread communication, it is easy to write applications using ZeroMQ that can move seamlessly from a one-process-per-core scenario into a grid fabric of many, many machines.
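To illustrate the "same API across transports" point, here is a minimal PUSH/PULL sketch with the pyzmq bindings; switching the endpoint string from inproc to tcp is essentially the whole migration from threads to processes or machines (endpoint and payload are arbitrary):

```python
import threading
import zmq

# Swap for "tcp://127.0.0.1:5555" to run the same code across processes.
ENDPOINT = "inproc://work"

ctx = zmq.Context()

def worker():
    pull = ctx.socket(zmq.PULL)
    pull.connect(ENDPOINT)
    print("worker got:", pull.recv())

# Bind before the worker connects (required for inproc transports).
push = ctx.socket(zmq.PUSH)
push.bind(ENDPOINT)

t = threading.Thread(target=worker)
t.start()
push.send(b"task-1")
t.join()
```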
ZeroMQ already has a reputation for being the fastest structured messaging protocol around so if you were going to do benchmarks to choose a technology, ZeroMQ should definitely be one of them.
The two big reasons for using ZeroMQ are its easy-to-use cross-language API (see all the examples on the ZeroMQ Guide site) and its low overhead both in terms of bytes on the wire, and in terms of latency. For instance ZeroMQ can leverage UDP multicast to run faster than any TCP protocol, but the application programmer doesn't need to learn a new API. It is all included.
I have some data that I need to share between multiple services on multiple machines. Stuffing the data into a database or shuffling it over HTTP won't work in this situation, and ideally the different pieces of software will need to communicate with each other directly (or through one central coordinator that can send and receive).
Is it recommended to create and implement a network protocol or use some tool to do the communication?
If I did go the route of creating a protocol myself, it wouldn't have to be very complex: under 10 different message types, but it would have to be re-implemented in a few different languages for this project and support Unicode. I have read plenty about (and done some) socket handling, but I don't have much experience designing a protocol of my own. Are there any good resources on this?
There are also things like ICE and RPC that look interesting. The limit of my experience is using ICE and XML-RPC for a few days each. Is this the better route to go? If so, what tools are out there?
Recently I've been using Google Protocol Buffers for encoding and shipping data between different machines running software written in different languages. It is quite easy to do, and takes away a lot of the hassle of designing a custom protocol.
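One caveat worth knowing: protobuf messages are not self-delimiting on a stream, so over a plain socket you typically add a length prefix yourself. A Python sketch, assuming a message type `SensorReading` compiled by protoc into a module `sensor_pb2` (both names are illustrative):

```python
import struct
from sensor_pb2 import SensorReading  # hypothetical protoc-generated module

def write_frame(sock, msg: SensorReading) -> None:
    # Prefix each serialized message with a 4-byte big-endian length,
    # since protobuf itself doesn't mark message boundaries on a stream.
    data = msg.SerializeToString()
    sock.sendall(struct.pack(">I", len(data)) + data)

def read_frame(sock) -> SensorReading:
    (length,) = struct.unpack(">I", _read_exact(sock, 4))
    msg = SensorReading()
    msg.ParseFromString(_read_exact(sock, length))
    return msg

def _read_exact(sock, n: int) -> bytes:
    # Loop until exactly n bytes arrive; recv() may return partial reads.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-frame")
        buf += chunk
    return buf
```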
Without knowing what technologies and platforms you are dealing with, it's difficult to give you a very specific answer - so I'll try to give you some general feedback.
If the system(s) you wish to connect span more than a single platform and/or technology, you are probably better off using an existing transport mechanism and protocol, to maximize the chance that your base platform will already have a library (or several) for it. Also, integrating security and other features into a stack with known behaviors is more likely to be documented (with examples floating around). RPC (and ICE, though I have less familiarity with it) has some useful capabilities, but it also requires a lot of control over the environment, and security can be convoluted (particularly if you are passing objects between different languages).
With regards to avoiding polling, this is a performance-related issue; there are design patterns which can help you handle such things if you understand how you need the system to work (e.g. the observer pattern, a kind of don't-call-us-we'll-call-you approach; see the sketch below). The network environment you are playing in will dictate which options are actually viable (e.g. a local LAN has different considerations from something which runs over a WAN or the internet). Factors like firewall tunneling, VPN traversal, etc. should play a part in your final technology selection.
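A minimal Python sketch of that observer idea, with illustrative names: subscribers register callbacks and are notified on change, instead of polling for updates.

```python
class Observable:
    """Holds subscriber callbacks and pushes events to them."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, event):
        for cb in self._subscribers:
            cb(event)

feed = Observable()
feed.subscribe(lambda e: print("service A saw:", e))
feed.subscribe(lambda e: print("service B saw:", e))
feed.publish({"status": "updated"})  # both callbacks fire immediately
```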
The only other major consideration (that I can think of just now... ;-)) would be the type of data you need to pass around. Is it just text, or do you need to stream binary objects? Would an encoding format (like XML, JSON, or bJSON) do the trick? You mention "less than ten message types" as part of the question, but is that the only information which would ever need to be communicated by the system?
Either way, unless the overhead of existing protocols is unacceptable, you're better off leveraging established work 99% of the time. Creativity is great, but commercial projects usually benefit from well-known behaviors, even if they're not the coolest or slickest (kind of the "as long as it works..." approach).
hth!