Many papers say that TCP performs poorly in wireless networks, but I didn't notice this when using an LTE network. So how was this solved in LTE networks?
There are different versions of TCP with minor differences.
Take a look at this paper http://ijarcet.org/wp-content/uploads/IJARCET-VOL-5-ISSUE-7-2240-2242.pdf
In this paper, we review various versions of the Transmission Control Protocol (TCP) in LTE networks. There are various versions of TCP available with minor modifications, of which we have tried to list many. Each version improves on the others in one way or another. A lot of research has been performed so far on implementing TCP in 4G for congestion control. The future scope of this paper is to implement TCP in a better form, such that it gives better performance in 4G. Various versions of TCP can be combined to perform this task, or research can be done to implement an altogether new version of TCP.
As far as I understand, LoRaWAN is intentionally designed as a Non-IP Stack.
Given all the requirements on LoRaWAN, I can understand the design decisions behind the standard.
But IMHO, there are many other use cases for LoRa (just the physical protocol) which, for example, do not need to communicate with many gateways at the same time, or do not have low-energy-consumption requirements.
For these use cases, it would be nice to have other MAC implementations, where one could have either an IP-based stack on top of LoRa, or a lightweight protocol between a LoRa-based sensor and ONE gateway which handles message transport & security.
Sigfox has a similar architecture to LoRaWAN, where the device/sensor sends messages directly to a backend network to which the application needs to connect.
To me, this kind of architecture seems pretty odd, since I lose many advantages of the internet and am tightly coupled to a backend-network provider (imagine that with LTE you had to explicitly add your application to the mobile provider's backend).
I would like to have a local network (it would be okay if it is not IP-based) where the devices are connected to a gateway, and at the gateway I have full flexibility in what to do with the sensor data. Using LoRaWAN, this could be achieved by running a network server on the gateway, but that would be more of a workaround than the solution I am looking for.
The only reason I can see right now that makes this network architecture really necessary is that a device can connect to multiple gateways, so use cases such as asset tracking can easily be realized.
Are there any LoRa based solutions where I do not have to deal with setting up network servers? If not, why is that the case?
Edit:
For Linux, I found this project here:
https://de.slideshare.net/chienhungpan/lets-have-an-ieee-802154-over-lora-linux-device-driver-for-iot
And also the LoRa Mesh Project:
https://github.com/meshtastic/Meshtastic-device
LoRaWAN is a Low Power Wide Area Network (LPWAN) technology. This means it allows us to build a scalable wireless IoT network where all devices (things) can be connected even if their transmission power is limited. A LoRaWAN network can easily scale to the size of a country, and the low-power communication makes it possible to operate the network in an ISM band, where both transmission power and bandwidth are limited anyway. Low transmission power also ensures a long battery lifetime for battery-powered devices.
Beyond supporting geo-localisation, gateway diversity (meaning that the same radio frame can be received by multiple gateways) significantly increases the resiliency of the network, improves the link budget and lowers the packet error rate.
Traditional IP-based protocols would require a much higher average data rate than what LoRa was designed for.
Although you are not obliged to use LoRaWAN's MAC layer with the LoRa modulation and you may develop your own proprietary protocols, if low transmission power, long range, and long battery life are not important for your use case, it is probably better to use another technology.
The Reticulum Network Stack supports many different physical mediums, including raw LoRa. Mediums like LoRa can be used exclusively, or mixed with any number of other mediums to build networks as simple or as complex as you need, from two devices to billions.
Reticulum is purposefully designed to handle very low data rates and very high latency, while still supporting transport across much faster network segments, and it is very efficient in terms of per-packet and overall protocol overhead.
The source code for the reference implementation and its releases can be found here: https://github.com/markqvist/reticulum
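To give a feel for the API, here is a minimal sketch of bringing up Reticulum and announcing a destination, modeled on the examples shipped with the reference implementation; the app name and aspect are made-up placeholders, and details may vary between Reticulum versions.

```python
# Minimal sketch: bring up Reticulum and announce a destination.
# Modeled on the examples in the reference implementation;
# "example_sensor"/"readings" are hypothetical placeholders.
import RNS

# Reads interface configuration (e.g. a LoRa RNode) from ~/.reticulum
reticulum = RNS.Reticulum()

identity = RNS.Identity()
destination = RNS.Destination(
    identity,
    RNS.Destination.IN,      # accept inbound traffic
    RNS.Destination.SINGLE,  # encrypted destination tied to one identity
    "example_sensor",        # application name (placeholder)
    "readings",              # aspect (placeholder)
)

# Make the destination reachable by announcing it to the network
destination.announce()
```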
I am using Rails ActionCable.
I mainly have a choice between two options. One is to use multiple channels for different functionalities; the other is to use the same channel with multiple conditions to achieve the same functionality.
Which one is better while scaling up? What are the disadvantages of relying too much on WebSockets (ActionCable) while building applications?
Can someone refer me to a good article that explains WebSockets, Redis caching, and their effects as an application scales up?
Although I think the question is a duplicate of "Multiple websocket channels, single ws object?", I will add a few specific ActionCable considerations just to clarify.
Which one is better while scaling up?
A single WebSocket connection is (usually) better when scaling up.
Servers have a limit on the number of connections they can handle, which means that adding WebSocket connections per client will consume a limited server resource.
For example, if each client requires 2 WebSocket connections instead of 1, the server's capacity is cut by half (drops from 100% to 50%).
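To illustrate what a single multiplexed connection looks like outside of Rails, here is a minimal Python sketch using the third-party websockets package; the "channel"/"event" message fields are invented for illustration and are not ActionCable's actual wire format.

```python
# Minimal sketch: multiplex several logical channels over ONE WebSocket.
# Uses the third-party "websockets" package; the message schema
# ("channel"/"event" keys) is invented, not ActionCable's protocol.
import asyncio
import json
import websockets

async def client(uri):
    async with websockets.connect(uri) as ws:
        # Subscribe to two logical channels over the same connection
        for channel in ("chat", "notifications"):
            await ws.send(json.dumps({"channel": channel, "event": "subscribe"}))

        # Route incoming messages by their channel tag
        async for raw in ws:
            msg = json.loads(raw)
            if msg.get("channel") == "chat":
                print("chat:", msg.get("data"))
            elif msg.get("channel") == "notifications":
                print("notify:", msg.get("data"))

asyncio.run(client("ws://localhost:8080/cable"))
```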
What are the disadvantages of relying too much on WebSockets (ActionCable) while building applications?
Some machines run older browsers that don't support WebSockets. Also, WebSocket applications and clients are often harder to code, which translates to higher maintenance costs.
Having said that, WebSockets are a wonderful solution to issues that plagued web applications for ages and are superior to polling techniques.
All in all, I would argue that the disadvantages should be ignored since the advantages far outweigh the costs.
However, note that the current ActionCable implementation is quite slow.
In fact, one might argue that the implementation is so slow that polling would be better.
Comparing ActionCable to AnyCable or the server-side Iodine WebSocket + Pub/Sub solution would immediately highlight the fact that ActionCable should be replaced by other solutions until such time as it's fixed.
Further reading:
I just started reading this article about Ruby WebSockets, Push and Pub/Sub, which seems very well written.
I also wrote an article about the main issues concerning Ruby implementations for WebSockets and how a server-side WebSocket solution could solve these issues. You can read it here.
How scalable is ZeroMQ? I'm especially interested in understanding its potential for running on a large number (10,000 - 15,000) of cores.
We've tried to make it as scalable as possible, but I personally tested only on boxes with up to 16 cores. Up to that limit we've seen almost linear scaling.
You don't mention whether your 10k or 15k cores are on the same box or not.
Let's assume they are. Every two years the number of cores on a box can, theoretically, double. So if we have 16-core boxes today, it'll be 16K cores in 20 years.
So now, your question is maybe, "will ZeroMQ help my application scale to such huge numbers of cores, so that it will scale over the next 20+ years?" The answer is "yes, but only if you use it properly". This means designing your application using inproc sockets and patterns that properly divide the work and the flow of data. You will need to adjust the architecture over time.
If your question is, "can I profitably use that many cores between multiple applications", the answer lies with your O/S more than ZeroMQ. Can your I/O layer handle the load? Probably, yes.
And if your question is, "can I use ZeroMQ across a cloud of 10K-16K boxes", then the answer is "yes, this has already been proven in practice".
Note that although ZeroMQ is multithreaded internally, it may not be wise to rely solely on that to scale up to large numbers of cores. However, because ZeroMQ uses the same API for inter-machine, inter-process, and inter-thread communication, it is easy to write applications with ZeroMQ that can move seamlessly into a one-process-per-core scenario or into a grid fabric of many, many machines.
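As a concrete illustration of that portability, here is a minimal pyzmq sketch of a PUSH/PULL pipeline between two threads over inproc; changing only the endpoint string moves the same code between processes or machines.

```python
# Minimal pyzmq sketch: a PUSH/PULL pipeline between two threads.
# Swapping "inproc://work" for "tcp://127.0.0.1:5557" or "ipc:///tmp/work"
# moves the same code to inter-process or inter-machine communication.
import threading
import zmq

ENDPOINT = "inproc://work"
context = zmq.Context()  # inproc sockets must share a single context

def worker():
    pull = context.socket(zmq.PULL)
    pull.connect(ENDPOINT)
    for _ in range(3):
        print("got:", pull.recv_string())
    pull.close()

push = context.socket(zmq.PUSH)
push.bind(ENDPOINT)  # bind before the worker connects

t = threading.Thread(target=worker)
t.start()

for i in range(3):
    push.send_string(f"task {i}")

t.join()
push.close()
context.term()
```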
ZeroMQ already has a reputation for being the fastest structured messaging protocol around, so if you were going to do benchmarks to choose a technology, ZeroMQ should definitely be one of them.
The two big reasons for using ZeroMQ are its easy-to-use cross-language API (see all the examples on the ZeroMQ Guide site) and its low overhead, both in terms of bytes on the wire and in terms of latency. For instance, ZeroMQ can leverage UDP multicast to run faster than any TCP protocol, but the application programmer doesn't need to learn a new API. It is all included.
Of all the NMSes (network management solutions) I've looked into, only Zenoss has a daemon to process AMQP messages (meaning my preferred one, Zabbix, is oblivious to it).
Why is that?
Is AMQP that far away from production ready?
At a glance, RabbitMQ 2.0 (or even ØMQ) seems to have solved most of the problems still standing from the Reddit May '10 test.
AMQP's scalability and generic design make it, to me, an obvious choice for an efficient and agnostic NMS feeder.
Is being agnostic its main flaw?
Is it being ignored by existing NMS solutions because having a proprietary communication protocol makes it harder for enterprises to switch from one NMS to another?
So far, AMQP is an "unrealized potential" for a simple reason: there are several non-interoperable versions of the protocol, which makes it very difficult for an ecosystem to emerge.
For instance, RabbitMQ supports versions 0.8 and 0.9 of the protocol while Qpid C++ implements 0.10, so you've got no way to connect them. Hopefully the situation will evolve positively in 2011, because the working group is close to releasing version 1.0 of the protocol and implementers are working together to make sure that interoperability is achieved (it's a condition for marking the current version 1.0 proposal as "final"). When this happens, it should make a lot more sense for third-party products to support AMQP.
Also, you should note that having an open messaging protocol doesn't solve all the problems. In the case of a monitoring solution, it would allow various applications to communicate, but it wouldn't say what information is expected in each message or where messages should be sent. That's why Qpid has developed its own monitoring and management protocol on top of AMQP (see the Qpid Management Framework).
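To make the "NMS feeder" idea concrete, here is a minimal Python sketch using pika (which speaks AMQP 0-9-1, so the version caveat above applies); the exchange name, routing key, and message schema are invented for illustration, since AMQP itself doesn't define them.

```python
# Minimal sketch: publish a monitoring event over AMQP 0-9-1 with pika.
# The exchange name, routing key, and JSON schema are hypothetical; AMQP
# defines the transport, not what a "monitoring message" must look like.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# A topic exchange lets an NMS subscribe with patterns like "host.*.cpu"
channel.exchange_declare(exchange="monitoring", exchange_type="topic")

event = {"host": "web01", "metric": "cpu_load", "value": 0.73}
channel.basic_publish(
    exchange="monitoring",
    routing_key="host.web01.cpu",
    body=json.dumps(event),
)

connection.close()
```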
I was recently approached by a network-engineer co-worker who would like to offload his minor network-admin duties to a junior-level helpdesk tech. The specific location in need of management acts as an ISP for tenants on its single-site property, so there are a lot of small adjustments being made on a daily basis.
I am thinking it would be helpful to write him a WinForms app to manage the 32 Cisco devices on-site. I'd like to initially provide functionality to modify access control lists, port VLAN assignments, and per-VLAN bandwidth limits... adding more to the list as it's deemed valuable.
My initial thought was to emulate a telnet session with the network device, utilizing my network engineer's familiarity with the command-line/IOS interaction. Minimal time would be required for me to learn Cisco IOS conventions myself.
Though while searching for solutions, it appears that most people favor SNMP. That, or their specific circumstances pushed them in the direction of SNMP.
I wanted to know if I've overlooked an obvious benefit of SNMP. Should I be using SNMP? Why or why not?
SNMP is great for getting information out of a Cisco device, but it is not very useful for controlling the device. (Although technically you can push a new config to a Cisco IOS device using a combination of SNMP and TFTP, sending a whole new config is a pretty blunt instrument for controlling your router or switch.)
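For reference, the SNMP+TFTP trick looks roughly like this. This is a sketch using pysnmp with OIDs from CISCO-CONFIG-COPY-MIB as I recall them, so verify the column numbers and enum values against the MIB, and treat all addresses and file names as placeholders.

```python
# Rough sketch: trigger "copy tftp running-config" via SNMP, using
# pysnmp and CISCO-CONFIG-COPY-MIB (ccCopyEntry, 1.3.6.1.4.1.9.9.96.1.1.1.1).
# OID suffixes and enum values are from that MIB as I recall them;
# verify against your device before use. Hosts/files are placeholders.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, setCmd, Integer32, IpAddress, OctetString,
)

CC_COPY_ENTRY = "1.3.6.1.4.1.9.9.96.1.1.1.1"
ROW = "42"  # arbitrary row index for this copy operation

bindings = [
    ObjectType(ObjectIdentity(f"{CC_COPY_ENTRY}.2.{ROW}"), Integer32(1)),           # protocol: tftp(1)
    ObjectType(ObjectIdentity(f"{CC_COPY_ENTRY}.3.{ROW}"), Integer32(1)),           # source: networkFile(1)
    ObjectType(ObjectIdentity(f"{CC_COPY_ENTRY}.4.{ROW}"), Integer32(4)),           # dest: runningConfig(4)
    ObjectType(ObjectIdentity(f"{CC_COPY_ENTRY}.5.{ROW}"), IpAddress("10.0.0.5")),  # TFTP server
    ObjectType(ObjectIdentity(f"{CC_COPY_ENTRY}.6.{ROW}"), OctetString("new.cfg")), # file name
    ObjectType(ObjectIdentity(f"{CC_COPY_ENTRY}.14.{ROW}"), Integer32(4)),          # rowStatus: createAndGo(4)
]

errorIndication, errorStatus, errorIndex, varBinds = next(
    setCmd(SnmpEngine(), CommunityData("private"),
           UdpTransportTarget(("192.0.2.1", 161)), ContextData(), *bindings)
)
print(errorIndication or errorStatus or "copy requested")
```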
One of the other commenters mentioned the Cisco IOS XR XML API. It's important to note that the IOS XR XML API is only available on devices that run IOS XR. IOS XR is only used on a few of Cisco's high end carrier class devices, so for 99% of all Cisco routers and switches the IOS XR XML API is not an option.
Other possibilities are SSH or HTTP (many Cisco routers, switches, AP, etc. have an optional web interface). But I'd recommend against either of those. To my knowledge, the web interface isn't very consistent across devices, and a rather surprising number of Cisco devices don't support SSH, or at least don't support it in the base license.
Telnet is really the only way to go, unless you're only targeting a small range of device models. To give you something to compare against, Cisco's own CiscoWorks network management software uses Telnet to connect to managed devices.
I wouldn't use SNMP; instead, look at a little language called 'expect'. It makes for a very nice expect/response processor for these routers.
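If you end up in Python, the same approach is available through pexpect. A minimal sketch follows, where the host, credentials, prompt patterns, and command are placeholders you'd adapt to your devices.

```python
# Minimal pexpect sketch: script a Cisco IOS session over telnet.
# Host, credentials, prompts, and the command are placeholders.
import pexpect

child = pexpect.spawn("telnet 192.0.2.1", timeout=10)
child.expect("Username:")
child.sendline("admin")
child.expect("Password:")
child.sendline("secret")
child.expect(">")             # user EXEC prompt

child.sendline("enable")
child.expect("Password:")
child.sendline("enablesecret")
child.expect("#")             # privileged EXEC prompt

child.sendline("show vlan brief")
child.expect("#")
print(child.before.decode())  # output captured between the two prompts

child.sendline("exit")
child.close()
```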
I have done a reasonable amount of real world SNMP programming with Cisco switches and find Python on top of Net-SNMP to be quite reasonable. Here is an example, via Google books, of uploading a new Cisco configuration via Net-SNMP and Python: Cisco Switch Upload via Net-SNMP and Python. I should disclose I was the co-author of the book referenced in the link.
Everyone's mileage may vary, but I personally do not like using expect and prefer to use SNMP, because it was actually designed to be a "Simple Network Management Protocol". In a pinch expect is OK, but it would not be my first choice. One of the reasons some companies use expect is that a developer just gets used to using it. I wouldn't necessarily write off SNMP just because there is an example of someone automating telnet or ssh. Try it out for yourself first.
There can also be some truly horrible things that happen with expect that may not be obvious. Because expect waits for input, under the right conditions there can be very subtle problems that are difficult to debug. This doesn't mean a very experienced developer can't write reliable code with expect, but it is something to be aware of.
One of the other things you may want to look at is an example of using the multiprocessing module to write non-blocking SNMP code. Because this is my first post to Stack Overflow I cannot post more than one link, but if you google for it you can find it, or another one on using IPython and Net-SNMP.
One thing to keep in mind when writing SNMP code is that it involves reading a lot of documentation and doing trial and error. In the case of Cisco, the documentation is quite good though.
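For a flavor of what the official Net-SNMP Python bindings look like, here is a minimal read of sysDescr.0; the host and community string are placeholders.

```python
# Minimal sketch with the Net-SNMP Python bindings: read sysDescr.0.
# Host and community string are placeholders.
import netsnmp

session = netsnmp.Session(DestHost="192.0.2.1", Version=2, Community="public")
varlist = netsnmp.VarList(netsnmp.Varbind("sysDescr", "0"))

# get() fills in the VarList and returns the retrieved values as a tuple
result = session.get(varlist)
print(result)
```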
SNMP isn't bad, but it may not be able to do everything you need it to do. Depending on the library you use and how it hides the details of interacting with SNMP, you may have a hard time finding the correct parts of the MIB to change, and even knowing what or how to change them to do what you want.
One reason not to use SNMP is that you can do all the configuration you need using the IOS XR XML API. It could be a lot easier to bundle up the commands you want to send to the devices using that than to interact with SNMP.
I've found SNMP to be a pain for management. If you just need to grab a little data it's great; if you need to change things or use it heavily, it can be very time-consuming. In my case I'm comfortable with the CLI, so a telnet approach works well. I've written some Python scripts to perform administrative tasks on various pieces of network gear using telnetlib.
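A stripped-down version of that approach looks like this (note that telnetlib ships with the standard library up to Python 3.12 and was removed in 3.13; host, credentials, and prompts are placeholders).

```python
# Minimal telnetlib sketch: run one command on a Cisco device.
# telnetlib is in the stdlib up to Python 3.12 (removed in 3.13);
# host, credentials, and prompt strings are placeholders.
import telnetlib

tn = telnetlib.Telnet("192.0.2.1", 23, timeout=10)
tn.read_until(b"Username: ")
tn.write(b"admin\n")
tn.read_until(b"Password: ")
tn.write(b"secret\n")
tn.read_until(b">")

tn.write(b"terminal length 0\n")  # disable paging so output isn't chunked
tn.read_until(b">")
tn.write(b"show version\n")
print(tn.read_until(b">").decode("ascii", errors="replace"))

tn.write(b"exit\n")
tn.close()
```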
SNMP has quite a significant CPU hit on the devices in question compared to telnet; I'd recommend telnet wherever possible. (As stated in a previous answer, the IOS XR XML API would be nice, but as far as I know IOS XR is only deployed on high-end carrier grade routers).
In terms of existing configuration management systems, two commercial players are HP Opsware, and EMC Voyence. Both will probably do what you need. I'm not aware of many open source solutions that actually support deploying changes. (RANCID, for example, only does configuration monitoring, not pre-staging and deploying config changes).
If you are going to roll your own solution, one thing I would recommend is sitting down with your network admin and coming up with a best-practice deployment model for the service he's providing (e.g. standardised ACL, QoS queue, and VLAN names; similar entries in ACLs that have the same function for different customers, etc.). Ensure that all the existing deployed config complies with this BP before you start your design, it will make the problem much more manageable. Best of luck.
Side note: before you reinvent the wheel by writing another service-provisioning/network-management system, try looking for existing ones. I know of quite a lot of commercial solutions of varying degrees of flexibility/functionality, but I am sure there are quite a few open-source ones as well.
Cisco has included menu options for helpdesk applications. Basically you telnet to the box and it presents a nice clean menu (press 1, 2, 3). For more info check this link:
http://www.cisco.com/en/US/docs/ios/12_2/configfun/command/reference/frf001.html#wp1050026
Another vote for expect.
Also, you don't want to allow configuration of your firewalls via either telnet or SNMP - ssh is the only way to go. The reason is that ssh encrypts its payload, and will not expose the privileged management credentials to potential interception.
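For completeness, here is a minimal paramiko sketch of the ssh route; the host, credentials, and command are placeholders, and note that some IOS images behave better with an interactive shell (invoke_shell) than with a one-shot exec_command.

```python
# Minimal paramiko sketch: run one command on a device over SSH,
# so management credentials never cross the wire in cleartext.
# Host, credentials, and command are placeholders.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # pin host keys properly in production
client.connect("192.0.2.1", username="admin", password="secret", look_for_keys=False)

stdin, stdout, stderr = client.exec_command("show ip interface brief")
print(stdout.read().decode())

client.close()
```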
If for some reason you cannot use ssh directly, consider connecting up an ssh-enabled serial console server to the firewall's console port and configuring it that way.