Apple Secure Transport API is deprecated, what alternatives are there - ios

Hi, according to this, the API is deprecated and considered legacy, though it is not specific about when it will be removed. Apple recommends using the Network framework instead, but that doesn't offer a low-level API for alternative transports/physical layers.
I plan on using it for securing Bluetooth communications (like a BLE UART link), which means the API should not depend on a network transport. Secure Transport fits the bill.
Is it okay to use this for future-proof development?
Is there an alternative that will support something with an obscure transport layer (like BLE)?
I've looked at SwiftNIO and its sub-projects, and it looks like it may work when overriding Channel etc., though it is geared more toward network transports.

Related

LoRa-based network stack alternatives to LoRaWAN

As far as I understand, LoRaWAN is intentionally designed as a non-IP stack.
Based on all the requirements on LoRaWAN, I can understand the design decisions behind the standard.
But IMHO, there are many other use cases for LoRa (just the physical protocol) which, for example, do not need to communicate with many gateways at the same time, or don't have low energy consumption requirements.
For these use cases, it would be nice to have other MAC implementations, where one could have either an IP-based stack on top of LoRa, or a lightweight protocol between a LoRa-based sensor and ONE gateway, which handles message transport & security.
Sigfox has a similar architecture to LoRaWAN, where the device/sensor sends messages directly to a backend network to which the application needs to connect.
To me, this kind of architecture seems pretty odd, since I lose many advantages of the internet and I am tightly coupled to a backend-network provider (imagine using LTE: you would need to explicitly add your application to the mobile provider's backend).
I would like to have a local network (it would be okay if it is not IP-based) where the devices are connected to a gateway, and there I have full flexibility in what to do with the sensor data. Using LoRaWAN, this could be achieved by running a network server on the gateway, but this would be more of a workaround than the solution I am looking for.
The only reason I can see now that makes this network architecture really necessary is that a device can connect to multiple gateways, so use cases such as asset tracking can easily be realized.
Are there any LoRa-based solutions where I do not have to deal with setting up network servers? If not, why is that the case?
Edit:
For Linux, I found this project here:
https://de.slideshare.net/chienhungpan/lets-have-an-ieee-802154-over-lora-linux-device-driver-for-iot
And also the LoRa Mesh Project:
https://github.com/meshtastic/Meshtastic-device
LoRaWAN is a Low Power Wide Area Network (LPWAN) technology. This means it allows us to build a scalable wireless IoT network where all devices (things) can be connected even if their transmission power is limited. A LoRaWAN network can easily scale to the size of a country, and the low-power communication makes it possible to operate the network in an ISM band where both transmission power and bandwidth are limited anyway. Low transmission power also ensures a long battery lifetime for battery-powered devices.
Beyond supporting geo-localisation, gateway diversity (meaning that the same radio frame can be received by multiple gateways) significantly increases the resiliency of the network, improves the link budget and lowers the packet error rate.
Traditional IP-based protocols would require a much higher average data rate than what LoRa was designed for.
Although you are not obliged to use LoRaWAN's MAC layer with the LoRa modulation and you may develop your own proprietary protocols, if low transmission power, long range and long battery life are not important for your use case, it is probably better to use another technology.
The Reticulum Network Stack supports many different physical mediums, including raw LoRa. Mediums like LoRa can be used exclusively, or mixed with any number of other mediums to build as simple or as complex networks as you need, from two devices to billions.
Reticulum is purposefully designed to handle very low data rates and very high latency, while still supporting transport across much faster network segments, and is very efficient in terms of per-packet and overall protocol overhead.
The source code for the reference implementation, and its releases, can be found here: https://github.com/markqvist/reticulum

What is the best practice to deploy CoAP-DTLS server that can support multiple PSK identity/secret sets?

We're evaluating the practicability of replacing our conventional HTTPS/RESTful API over a cellular network (4G LTE) with CoAP/DTLS over NB-IoT, to prolong the battery life of remote devices. The IoT application we've deployed only takes a tiny proportion of the 4G LTE data bandwidth, and UDP over NB-IoT is good enough, so transmission performance is not our main concern.
But the problem is, we're now using mutual authentication in the SSL/TLS layer and we assign different client certificates to different sub-groups, and I'm not sure how to do that in CoAP/DTLS.
I've learned that the default credential model of CoAP/DTLS is Pre-Shared Key (PSK), and I also learned from RFC 4279 that I may use the PSK identity/shared-key pair as an easy alternative to a username, which could just fit my needs. But when I tried to figure out how to implement this, I found that internet resources are very limited. So far I've surveyed node-coap.js and libcoap, but I can't find any hints in the documentation. Both seem to support only one credential at a time.
What is the best practice to deploy a CoAP-DTLS server that can support multiple PSK identity/shared-key sets? Or do I need to implement the whole authentication mechanism in the application layer?
One option for server/cloud-side CoAP is Eclipse Californium. I am involved in that project and may thus be biased. That said, we have actually built Californium for exactly this purpose.
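For illustration, here is a minimal sketch of how multiple PSK identity/secret pairs can be registered in Californium's DTLS layer (Scandium). The class and builder names follow a 2.x-era API and differ between Californium versions (newer releases use an AdvancedMultiPskStore), and the identities, secrets, and port below are invented for the example, so treat this as a starting point rather than a drop-in server:

    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    import org.eclipse.californium.core.CoapServer;
    import org.eclipse.californium.core.network.CoapEndpoint;
    import org.eclipse.californium.scandium.DTLSConnector;
    import org.eclipse.californium.scandium.config.DtlsConnectorConfig;
    import org.eclipse.californium.scandium.dtls.pskstore.InMemoryPskStore;

    public class MultiPskCoapServer {
        public static void main(String[] args) {
            // One PSK store can hold many identity/secret pairs; during the DTLS
            // handshake the secret is looked up by the identity the client presents.
            InMemoryPskStore pskStore = new InMemoryPskStore();
            pskStore.setKey("sensor-group-a", "secretA".getBytes(StandardCharsets.UTF_8));
            pskStore.setKey("sensor-group-b", "secretB".getBytes(StandardCharsets.UTF_8));

            DtlsConnectorConfig dtlsConfig = new DtlsConnectorConfig.Builder()
                    .setAddress(new InetSocketAddress(5684)) // default coaps port
                    .setPskStore(pskStore)
                    .build();

            // Attach the DTLS connector to a CoAP endpoint and start the server.
            CoapEndpoint endpoint = new CoapEndpoint.Builder()
                    .setConnector(new DTLSConnector(dtlsConfig))
                    .build();

            CoapServer server = new CoapServer();
            server.addEndpoint(endpoint);
            server.start();
        }
    }

The PSK identity that authenticated the handshake is generally available to resource handlers as the peer's principal, so it can play the "username" role from RFC 4279 without re-implementing authentication in the application layer.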

How do CoAP and LwM2M differ?

I am studying the IoT protocols CoAP, MQTT, and LwM2M.
I was able to learn a little about CoAP and MQTT.
But I do not know what LwM2M is.
I do not know how it differs from CoAP.
I just thought that LwM2M is not a protocol with its own format but a system structure using CoAP.
Is that correct?
What is LwM2M, and how can I learn more about it?
Please, someone teach me.
LwM2M (specified by OMA) is a protocol group largely built on top of CoAP (specified by the IETF).
LwM2M uses a subset of CoAP's capabilities that fits into an architecture of many small devices registering at a large LwM2M server that manages the devices. It prescribes particular path structures (which numbers are used in paths, and what they mean) that represent the LwM2M object model, to allow that unified management.
Compared to "plain CoAP", this limits the scope of what devices can do. Devices can still provide other CoAP functionality on the same server that is not covered by LwM2M. These limitations allow different vendors to build devices that can interoperate with different management servers, and LwM2M provides additional specifications for easy deployment (e.g. based on smart cards) that are out of scope for CoAP.
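As a concrete, hedged illustration of that architecture, the sketch below uses Eclipse Leshan, a Java LwM2M server built on Californium. Package and builder names vary between Leshan versions (this assumes a 1.x-style layout), and the endpoint name "my-device" is hypothetical; the point is only to show that the server manages registered devices through the numeric object model, e.g. /3/0/0 = Device object, instance 0, Manufacturer resource:

    import org.eclipse.leshan.core.request.ReadRequest;
    import org.eclipse.leshan.core.response.ReadResponse;
    import org.eclipse.leshan.server.californium.LeshanServer;
    import org.eclipse.leshan.server.californium.LeshanServerBuilder;
    import org.eclipse.leshan.server.registration.Registration;

    public class MinimalLwm2mServer {
        public static void main(String[] args) throws Exception {
            // Start an LwM2M server that devices can register against
            // (CoAP/CoAPs on the default ports).
            LeshanServer server = new LeshanServerBuilder().build();
            server.start();

            // After a client named "my-device" (hypothetical) has registered,
            // it can be managed through the LwM2M object model paths.
            Registration reg = server.getRegistrationService().getByEndpoint("my-device");
            if (reg != null) {
                // Read /3/0/0: Device object, instance 0, Manufacturer resource.
                ReadResponse response = server.send(reg, new ReadRequest(3, 0, 0), 5000);
                System.out.println(reg.getEndpoint() + " manufacturer: " + response.getContent());
            }
        }
    }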
The direct answer can be obtained from the official sites:
CoAP "is a specialized web transfer protocol for use with constrained nodes and constrained networks in the Internet of Things.
The protocol is designed for machine-to-machine (M2M) applications such as smart energy and building automation."
LwM2M "is a device management protocol designed for sensor networks and the demands of a machine-to-machine (M2M) environment. With LwM2M, OMA SpecWorks has responded to demand in the market for a common standard for managing lightweight and low power devices on a variety of networks necessary to realize the potential of IoT."
Basically, we can simplify by saying that CoAP was designed for communication between constrained IoT devices and is very similar to the HTTP protocol, which facilitates developers' work, while LwM2M was designed mainly to manage constrained devices remotely, providing service enablement, for instance. Both protocols are commonly used together.
You can find more information in the following links:
- What is LwM2M? A device management solution for low power M2M
- CoAP functionality expected in a LwM2M system

Ways to do network programming in Cocoa

It seems like there are a bunch of ways to do networking in Cocoa: WebKit, NSURL, CFNetwork, BSD sockets. Are there any other APIs/frameworks that are commonly used for networking? I'm trying to understand all the ways to do networking in Cocoa and learn each one's strengths and weaknesses.
As a related question, why would anyone use CFSocket? It seems that most things can be done with NSURL or BSD sockets. Is CFSocket commonly used in practice?
You can watch the WWDC videos Network Apps for iPhone (Part 1, Part 2) and Networking Best Practices, where they suggest using NSURLConnection for HTTP and HTTPS, the CFSocket/CFStream/NSStream family for other TCP networking, and of course WebKit if you just intend to render web content. They advise against using the low-level BSD sockets unless you're writing a server. The higher-level the frameworks you use, the more is taken care of for you (from DNS resolution to cellular network management, authentication, encryption, run loop integration...) and the better they are integrated with the rest of the Cocoa frameworks.
For iOS, the best networking suite is AFNetworking. It is being actively developed and has everything you should need to work with any network for your project.

Messaging Protocols - feed a middleware monitoring solution

Of all the NMS (network management solutions) I've looked into,
only Zenoss has a daemon to process AMQP messages (meaning my preferred one, Zabbix, is oblivious to it).
Why is that?
Is AMQP that far away from production-ready?
At a glance, RabbitMQ 2.0 (or even ØMQ) seems to have solved most problems still standing from the Reddit May '10 test.
AMQP's scalability and generic design make it, to me, an obvious choice for an efficient and agnostic NMS feeder.
Is being agnostic its main flaw?
Is it being ignored by existing NMS solutions because having a proprietary communication protocol makes it harder for enterprises to switch from one NMS to another?
So far, AMQP is "unrealized potential" for a simple reason: there are several non-interoperable versions of the protocol, which makes it very difficult for an ecosystem to emerge.
For instance, RabbitMQ supports versions 0.8 and 0.9 of the protocol, while Qpid C++ implements 0.10, so you've got no way to connect them. Hopefully, the situation should evolve positively in 2011, because the working group is close to releasing version 1.0 of the protocol and implementers are working together to make sure that interoperability is achieved (it's a condition for marking the current version 1.0 proposal as "final"). When this happens, it should make a lot more sense for third-party products to support AMQP.
Also, you should note that having an open messaging protocol doesn't solve all the problems. In the case of a monitoring solution, it would allow various applications to communicate, but it wouldn't say what information is expected in each message or where it should be sent. That's why Qpid has developed its own monitoring and management protocol on top of AMQP (see the Qpid Management Framework).
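To make the "AMQP as an NMS feeder" idea concrete, here is a hedged sketch of a consumer that a monitoring daemon could run against a RabbitMQ broker. It uses the current Java client API rather than the 2010-era one, and the broker address, exchange name ("monitoring") and routing-key scheme ("host.#") are invented for the example; as noted above, AMQP itself does not define what the message payloads should contain:

    import java.nio.charset.StandardCharsets;

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    public class MonitoringFeedConsumer {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // hypothetical broker address

            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();

            // Hypothetical topic exchange that agents publish monitoring events to,
            // with routing keys such as "host.web01.cpu" or "host.db02.disk".
            channel.exchangeDeclare("monitoring", "topic", true);
            String queue = channel.queueDeclare().getQueue();
            channel.queueBind(queue, "monitoring", "host.#");

            DeliverCallback onEvent = (consumerTag, delivery) -> {
                String body = new String(delivery.getBody(), StandardCharsets.UTF_8);
                // A real NMS daemon would parse the event and store it or raise alerts.
                System.out.println(delivery.getEnvelope().getRoutingKey() + ": " + body);
            };
            channel.basicConsume(queue, true, onEvent, consumerTag -> { });
        }
    }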
