IoT Edge offline capabilities (not extended) - azure-iot-edge

A question on IoT Edge offline capabilities (not extended).
After IoT Edge was connected at least once to IoT Hub, will it continue to function with no connection between restarts (module to module communication)?
If yes, for how long (certificate lifetime etc.)?

Yes, it can work offline indefinitely, until (as you correctly point out) the certificate lifetime expires.
Another thing to be aware of (as of release 1.0.8): while offline, either the cached module container instances must be present, so a "docker start modulex" operation is possible, or, if an image pull is needed, the device must still have access to the container registry.

Related

What are the "remote writes" which you can await with CU_STREAM_WAIT_VALUE_FLUSH?

When you perform a wait-on-value operation using the CUDA driver API call cuStreamWaitValue32(), you can specify the flag CU_STREAM_WAIT_VALUE_FLUSH. Here's what the documentation says it does:
Follow the wait operation with a flush of outstanding remote writes. This
means that, if a remote write operation is guaranteed to have reached the
device before the wait can be satisfied, that write is guaranteed to be
visible to downstream device work.
My question is: What counts as a "remote write" in this context? Is it only calls to cuStreamWriteValue32() / cuStreamWriteValue64()? Is it any kind of write involving a different device or the host? Including cudaMemcpy() and friends?
Remote writes are writes issued by a third-party device targeting the GPU's device memory. This is related to GPUDirect RDMA.
By extension, that also includes writes issued by the CPU via GDRCopy mappings.

How Long Will a Device Hold Event Messages While Offline

If my device is offline, and sending event messages that are destined for $upstream -- how long will the local $edgeHub hold on to those events? In short, what's the max amount of time that the device can be offline before events start rotting away?
It is important to note that Edge hub provides at-least-once guarantees, which means that messages are stored locally in case a route cannot deliver the message to its sink, for example, when the Edge hub cannot connect to IoT Hub, or the target module is not connected. Edge hub stores the messages up to the time specified in the storeAndForwardConfiguration.timeToLiveSecs property of the Edge hub desired properties.
You can set this property in the Azure portal when you configure the Edge hub module's desired properties.
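For illustration, the relevant fragment of the Edge hub module's desired properties might look like this (the route and the two-hour TTL value are hypothetical examples):

```json
{
  "properties.desired": {
    "schemaVersion": "1.0",
    "routes": {
      "upstream": "FROM /messages/* INTO $upstream"
    },
    "storeAndForwardConfiguration": {
      "timeToLiveSecs": 7200
    }
  }
}
```

With timeToLiveSecs set to 7200, undeliverable messages are kept in the local store for up to two hours before being discarded.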

Azure IoT Edge Identity Translation Gateway: Understanding

I'm trying to create an Identity Translation Gateway as described here.
I have also read a lot of Microsoft documentation about their IoT solution.
I have :
leaf devices (A) connected to my gateway by a custom protocol, without a direct connection to the hub.
gateway devices (B) acting as an IoT Edge device, connected to A and to the IoT Hub.
my IoT Hub.
As far as I understand:
my IoT Edge has to register each device on the IoT Hub; this way, each A device will be represented on my hub and we will be able to send it messages via its ID directly.
I can send messages and listen for them on my gateway via the route
/devices/{deviceId}/messages
For the example let's say I have:
an IoT Edge device with ID "Edge1"
an IoT Edge device with ID "Edge2"
a device with ID "DeviceA" connected to "Edge1"
a device with ID "DeviceB" connected to "Edge2"
What I don't understand, since there is no connection between the hub and the leaf devices, is how the hub will know which gateway to address when I send a message to "DeviceA" via "/devices/DeviceA/messages", and how I can listen for it from my gateway. In short, the (de)multiplexing process.
Is there a way to handle this automatically with IoT Hub that I don't see? Does the GatewayHostName inside the connection string do the trick?
Must I handle it manually, sending all my messages to my gateway's ID instead (i.e. devices/Edge1/messages) and putting the final targeted device ID inside my message body? If so, I don't understand the benefit of registering each device on the hub.
Must I listen on each connected device's route inside my gateway (i.e. /devices/DeviceA/messages for Edge1)?
Thanks for helping.
Based on what I understand about your scenario, you are trying to send a message from the cloud to a module running on the Edge device and then have the module forward the message to the downstream device. C2D (cloud-to-device) messages are not supported for Edge devices and modules. Instead, you can use the direct-method APIs provided by the ServiceClient in the following package, https://www.nuget.org/packages/Microsoft.Azure.Devices/1.16.0-preview-001, and call a method on the module. The module can then pass on the relevant data to its downstream device.
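To make the shape of that call concrete: invoking a direct method on a module boils down to a single authenticated REST call against IoT Hub, which the service SDKs wrap for you. A minimal sketch using only the standard library; the hub hostname, module name, method name, and payload are hypothetical placeholders, and a real call would also need an Authorization header with a valid SAS token:

```python
import json

def build_module_method_request(hub_hostname, device_id, module_id,
                                method_name, payload,
                                api_version="2018-06-30"):
    """Build the URL and body for IoT Hub's 'invoke module method' REST call.

    The caller must POST this body to the URL with an Authorization header
    carrying a valid service SAS token.
    """
    url = (f"https://{hub_hostname}/twins/{device_id}"
           f"/modules/{module_id}/methods?api-version={api_version}")
    body = json.dumps({
        "methodName": method_name,
        "responseTimeoutInSeconds": 30,
        "payload": payload,  # forwarded to the module's method handler
    })
    return url, body

# Example: ask a (hypothetical) "translator" module on Edge1 to forward
# data to its downstream leaf device DeviceA.
url, body = build_module_method_request(
    "myhub.azure-devices.net", "Edge1", "translator",
    "forwardToLeaf", {"targetDevice": "DeviceA", "data": "hello"})
```

The module's method handler receives the payload and can then push the data to the leaf device over the custom protocol, which is the identity-translation step the hub does not do for you.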

Job and device/shadow running at same time on the device

As we know, a device can only hold one connection per client ID. A physical device opens a connection with the device implementation to subscribe to the AWS IoT Shadow. I was wondering how that will work, given that the jobs implementation seems to create another new MQTT client.
You use the same connection you already have for your device to use jobs. Jobs are used by publishing and subscribing to specific reserved topics. You can read more about the jobs MQTT API here: https://docs.aws.amazon.com/iot/latest/developerguide/jobs-api.html#mqtt-describejobexecution
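To illustrate, the jobs API is just a set of reserved MQTT topics under the thing's `$aws/things/...` prefix, all usable over the single connection the device already holds for its shadow. A small sketch building those topic names; the thing name is a placeholder, and `$next` is the documented wildcard for the next queued job:

```python
def jobs_topics(thing_name, job_id="$next"):
    """Reserved MQTT topics for the AWS IoT jobs API for one thing.

    The device publishes to the request topics below and subscribes to the
    corresponding .../accepted and .../rejected response topics, plus
    notify-next for new-job notifications, all on the same MQTT connection
    already used for the device shadow.
    """
    base = f"$aws/things/{thing_name}/jobs"
    return {
        "notify_next": f"{base}/notify-next",   # pushed when a new job is queued
        "get_pending": f"{base}/get",           # GetPendingJobExecutions
        "describe": f"{base}/{job_id}/get",     # DescribeJobExecution
        "update": f"{base}/{job_id}/update",    # UpdateJobExecution (report status)
    }

topics = jobs_topics("my-thing")
```

Since these are ordinary publish/subscribe operations, no second MQTT client (and therefore no second client ID) is needed.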

VPN - NEPacketTunnelProvider - background mode

I'm building a simple VPN app.
I got networking entitlements, and I created the app extension.
I've configured the VPN to be "on demand" and active while sleeping.
My question is: what happens when the app is in background mode?
Should I add more app capabilities, or is this enough?
(And a follow-up question: while in background mode, app extension functions like startTunnelWithOptions(...) still get called, am I right?)
The application which starts the Packet Tunnel Provider is called the container app; here, your application is the container app.
The container app and the packet tunnel provider run in separate processes, and they communicate through IPC.
Even when your application goes into the background, your packet tunnel provider keeps running; handle your application according to the packet tunnel provider's (NEVPNManager) status when it moves from background to foreground. You need not add any other capabilities.
