I am trying to set up TTN-based LoRaWAN monitoring of my gateways and devices inside a FIWARE environment. For this it would be essential to access data that is not in the payload_fields of the TTN MQTT broker.
I wonder if it is possible to access fields like counter, port, app_id and the metadata.
I have not found a way to do this yet. Has anyone faced the same problem and found a solution?
I use the following relevant FIWARE components in a Docker environment:
fiware/orion:2.2.0
fiware/iotagent-lorawan:1.2.3
mongo:3.6.8
If you need to receive metadata directly from LoRaWAN, you will have to customize the code within the LoRaWAN IoT Agent - by default it just passes on measures, but the IoT Agent node lib interface is capable of receiving metadata as well.
Alternatively, a recent PR for the IoT Agent node lib allows additional static metadata to be added at the provisioning stage and sent as part of the requests to the context broker. You would need to use the latest development code base, as the library change hasn't been ported to the LoRaWAN IoT Agent yet - amend the iotagent-node-lib dependency in the package.json as shown:
"dependencies": {
...
"iotagent-node-lib": "git://github.com/telefonicaid/iotagent-node-lib.git#master",
...
},
... etc
The documentation can be found here
Attributes with metadata are provisioned with an additional parameter as shown:
"attributes": [
{"object_id": "s", "name": "state", "type":"Text"},
{"object_id": "l", "name": "luminosity", "type":"Integer",
"metadata":{
"unitCode":{"type": "Text", "value" :"CAL"}
}
}
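For context, a complete device-provisioning request to the IoT Agent's north port (by default POST /iot/devices on port 4041, with fiware-service and fiware-servicepath headers) might carry a body like the following sketch; the device id and entity name are placeholders, and any LoRaWAN-specific internal_attributes are omitted:

```json
{
  "devices": [
    {
      "device_id": "lora-sensor-01",
      "entity_name": "urn:ngsi-ld:Device:001",
      "entity_type": "Device",
      "attributes": [
        {"object_id": "s", "name": "state", "type": "Text"},
        {"object_id": "l", "name": "luminosity", "type": "Integer",
          "metadata": {
            "unitCode": {"type": "Text", "value": "CAL"}
          }
        }
      ]
    }
  ]
}
```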
I'm creating a Dialogflow agent in which the client identifies themselves with a clientId. This uses Twilio for WhatsApp chatbot integration.
DIALOG
- Hi, tell me your clientId
- abcde1234
At this point I need to get the client name from an external service...
GET Authentication: Basic xxx:yyy http://xxx/clients/id/abcde1234
-> {"id":"abcde1234", "name": "John", ...}
... and answer with it:
DIALOG
- Hi, John, how can I help you?
Is this possible with Dialogflow?
So, in order to fetch the value of the user's input, we can create something called a session parameter. Basically, this will be a JSON object in the API request sent to your webhook API which will be present throughout the lifespan of your conversation (due to the long lifespan configured for it). You can read more in depth about contexts here.
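To make the mechanics concrete, here is a minimal sketch of pulling a parameter out of a named context in a Dialogflow ES v2 webhook request body. The field names follow the ES v2 webhook format; the context name `global-parameters` matches the one used in the code below:

```javascript
// Find a named context in the webhook request and return one of its
// parameters; returns undefined if the context or parameter is absent.
function getContextParameter(requestBody, contextName, paramName) {
  const contexts = (requestBody.queryResult &&
                    requestBody.queryResult.outputContexts) || [];
  // Context names arrive fully qualified:
  // projects/<project>/agent/sessions/<session>/contexts/<name>
  const ctx = contexts.find((c) => c.name.endsWith('/contexts/' + contextName));
  return ctx ? ctx.parameters[paramName] : undefined;
}
```

For example, `getContextParameter(req.body, 'global-parameters', 'clientId')` would return the clientId the user typed earlier in the conversation.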
We can then set up a simple NodeJS codebase on a Cloud Function (used here only due to its simplicity of deployment, though you are free to use any cloud provider/platform of your choice).
I made some minor modifications to the boilerplate codebase present in every Dialogflow ES agent.
So for example, here are the changes made in the index.js file:
...
function welcome(agent) {
  // Read the long-lived context that carries values across conversation turns
  const globalParameters = agent.getContext('global-parameters');
  const questionNumber = globalParameters.parameters.number;
  // Placeholder for the name that the external GET call would return
  const sampleNameFromGetCall = 'John';
  agent.add(`Welcome to my agent! ${sampleNameFromGetCall}`);
}
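The hard-coded `'John'` above stands in for the external lookup. A sketch of that call, using the placeholder endpoint and Basic-auth credentials from the question (not a real service; the global `fetch` requires Node 18+):

```javascript
// Look up the client's name from the external service described in the
// question: GET <baseUrl>/clients/id/<clientId> -> {"id": ..., "name": ...}
async function fetchClientName(baseUrl, clientId, basicAuth) {
  const res = await fetch(`${baseUrl}/clients/id/${clientId}`, {
    headers: { Authorization: `Basic ${basicAuth}` },
  });
  const client = await res.json(); // e.g. {"id":"abcde1234","name":"John"}
  return client.name;
}

// Build the reply the agent would send back.
function formatGreeting(name) {
  return `Hi, ${name}, how can I help you?`;
}
```

In the fulfillment, you would await `fetchClientName(...)` with the clientId taken from the context, then pass the result to `agent.add(formatGreeting(name))`.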
and here's the package.json
{
"name": "dialogflowfirebasefulfillment",
"description": "This is the default fulfillment for a Dialogflow agents using Cloud Functions for Firebase",
"version": "0.0.1",
"private": true,
"license": "MIT",
"author": "Google Inc.",
"engines": {
"node": "16"
},
"dependencies": {
"actions-on-google": "^2.2.0",
"dialogflow": "^1.2.0",
"dialogflow-fulfillment": "^0.5.0",
"firebase-admin": "^11.4.1",
"firebase-functions": "^4.1.1"
},
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
}
}
Here's the library we used, which was built by Google for this purpose:
https://github.com/googleapis/nodejs-dialogflow
Once I enabled webhook fulfillment on my agent, I quickly tested it and the dynamic response came through.
There's a major caveat: this repo has been archived by Google and no longer receives updates, so you may have to figure out how to parse the incoming request in your webhook API yourself, or use this library with some major changes to its codebase.
You would also need to make sure the overall latency of your request isn't too high, since Dialogflow imposes a timeout on webhook calls.
So, in a nutshell, yes, we can definitely fetch a value from your Dialogflow Agent, use it to call an API, parse that response and use that as a part of our dynamic response. The value would be stored in a JSON object called context, which will be a part of any incoming request to your webhook API.
I have a gateway from the Khomp manufacturer which delivers packets in the following format (SenML):
message: [
{
"bn": "000D6FFFFE642E09",
"bt": 1611339204
},
{
"n": "model",
"vs": "nir21z"
},
{
"n": "Geladeira Temp",
"u": "Cel",
"v": 4.0
  }
]
When I connect to the Thingsboard platform, the internal gateway parser breaks the array apart before the input of the Root Rule Chain and treats the entries as individual packets. But since the first position in this array contains the device ID (MAC), I need the whole message available to a single parsing script. Does anyone know a way to get the information before the gateway parses the message?
If you're using Thingsboard CE then I think you will need to first forward the data to a middleware service to restructure the payload. If you are familiar with AWS Lambda you can do it there.
It would just be a simple script that takes an input payload, restructures, and then forwards to your Thingsboard deployment.
If you're using Thingsboard PE then you can use Integration/Data Converters to do this.
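The restructuring step itself is small. A sketch of what such a middleware function could do, turning the SenML array from the question into a payload keyed by the device MAC from the first record's "bn" field (the output shape here assumes the Thingsboard Gateway API telemetry format, `{ deviceName: [ { ts, values } ] }`):

```javascript
// Convert a SenML array into a Thingsboard gateway-style telemetry payload.
// The first record carries the base name (device MAC) and base time.
function senmlToThingsboard(records) {
  const base = records[0];              // {"bn": mac, "bt": unix seconds}
  const values = {};
  for (const r of records.slice(1)) {
    // SenML carries strings in "vs" and numbers in "v"
    values[r.n] = r.vs !== undefined ? r.vs : r.v;
  }
  return {
    [base.bn]: [{ ts: base.bt * 1000, values }], // SenML "bt" is in seconds
  };
}
```

The Lambda (or any small HTTP service) would run this on the incoming body and forward the result to your Thingsboard deployment.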
I have an energy monitor that can only output XML data via HTTP POST. I am looking to send this data to an Azure IoT hub for processing and storage. What is the best way to send XML data from several of these devices to the hub? I have looked at various gateways but haven't found a simple, scalable, cost-effective way to do this. I am open to having some sort of intermediary, but they all introduce a layer of complexity to simply sending the data to the hub.
Your energy monitor can publish telemetry data directly to the Azure IoT Hub using the HTTPS protocol.
For example, after POSTing an XML body to the hub (the request, its body, and the Device Explorer output were shown as screenshots), the message that ends up in blob storage looks like this:
{
"EnqueuedTimeUtc": "2019-09-25T15:58:25.0900000Z",
"Properties": {
"abcd": "abcd1234"
},
"SystemProperties": {
"connectionDeviceId": "device2",
"connectionAuthMethod": "{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}",
"connectionDeviceGenerationId": "636842109368955167",
"contentType": "application/xml",
"contentEncoding": "",
"enqueuedTime": "2019-09-25T15:58:25.0900000Z"
},
"Body": "PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4gDQo8UGFyYUluZm8gPg0KICA8TmFtZT5Wb2x0YWdlPC9OYW1lPg0KICA8Q29kZT5VczwvQ29kZT4NCiAgPFVuaXQ+VjwvVW5pdCA+DQogIDxGcmVxPjQwPC9GcmVxID4NCiAgPFN0YXJ0PjA8L1N0YXJ0Pg0KICA8RW5kPjI4OS41PC9FbmQ+DQo8L1BhcmFJbmZvPg0K"
}
Note that the Body field is the Base64-encoded XML text.
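A sketch of the device-to-cloud POST itself, assuming a SAS token has already been generated for the device (e.g. with `az iot hub generate-sas-token`); the hub name and device id are placeholders:

```javascript
// Build the options for a device-to-cloud HTTPS POST to Azure IoT Hub.
// The SAS token is passed in ready-made ("SharedAccessSignature sr=...").
function buildD2CRequest(hubName, deviceId, sasToken) {
  return {
    hostname: `${hubName}.azure-devices.net`,
    path: `/devices/${encodeURIComponent(deviceId)}` +
          '/messages/events?api-version=2018-06-30',
    method: 'POST',
    headers: {
      Authorization: sasToken,
      'Content-Type': 'application/xml',
    },
  };
}

// Usage sketch:
// const req = require('https').request(buildD2CRequest('myhub', 'device2', token));
// req.end('<ParaInfo><Name>Voltage</Name>...</ParaInfo>');
```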
I'm using confluentinc/cp-kafka-connect docker image.
I'm trying to send a JSON document to Kafka, with an Elasticsearch id.
{"_id":10000725, "_source": {"createdByIdentity":"tu_adminn","createdBy":"Admin Testuser"}}
Here is my connector:
{
"name": "test-connector",
"config": {
"connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
"tasks.max": "1",
"topics": "andrii",
"key.ignore": "false",
"schema.ignore": "true",
"connection.url": "http://elasticsearch:9200",
"type.name": "test-type",
"name": "elasticsearch-sink"
}
}
When I use key.ignore=true, it generates some weird id.
How can I pass exactly my id and source?
Per the docs:
If you specify key.ignore=true then Kafka Connect will use a composite key of your message's kafka topic, partition, and offset -- this is the "weird id" that you're seeing.
If you want to use your own ID for the created Elasticsearch document, you can set key.ignore=false and Kafka Connect will use the key of the Kafka message as the ID.
If your Kafka message does not have the appropriate key for what you want to do, you will need to set it. One option is to use something like KSQL:
CREATE STREAM target AS SELECT * FROM source PARTITION BY _id
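Another option is to set the key when producing. A sketch of splitting the document from the question into a Kafka key (which the sink uses as the Elasticsearch `_id` when `key.ignore=false`) and a value; the resulting record could be passed to a client such as kafkajs via `producer.send({ topic: 'andrii', messages: [record] })`:

```javascript
// Split the envelope into a Kafka message key (_id) and value (_source).
function toKeyedRecord(doc) {
  return {
    key: String(doc._id),
    value: JSON.stringify(doc._source),
  };
}
```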
Disclaimer: I work for Confluent, the company behind the open-source KSQL project
I am trying to send a DM firmware update command from a Node-RED flow.
Function node:
msg.payload = {"MgmtInitiationRequest": {
"action":"firmware/update",
"devices": [{
"typeId": "myType",
"deviceId": "myDevice"
}]
}}
msg.headers={"Content-Type":"application/json"}
return msg;
I send it to a http request node with a POST to
https://orgid.internetofthings.ibmcloud.com/api/v0002/mgmt/requests
Basic Authentication with API keys. I based it off the Initiate a device management request documentation.
I get back a 403 which the docs have as:
One or more of the devices does not support the requested action
Anyone see what I'm missing? It works fine from the IoT Platform UI to the same devicetype/deviceid.
EDIT: Same 403 if I use a Rest client like Postman.
The swagger API documentation is a little bit misleading in that the 'body' parameter is given a name.
But, like the other POST APIs, that name isn't actually included anywhere as part of the payload.
The payload should just look like this:
{
"action": "firmware/update",
"devices": [
{
"typeId": "string",
"deviceId": "string"
}
]
}
This page in the documentation provides more detail:
https://console.ng.bluemix.net/docs/services/IoT/devices/device_mgmt/requests.html#firmware-actions-update
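Applied to the flow in the question, the function node would drop the MgmtInitiationRequest wrapper. A sketch using the question's type and device ids (`msg` is declared here only so the snippet is self-contained; Node-RED provides it):

```javascript
// In Node-RED the runtime supplies `msg`; declared here for a standalone run.
const msg = {};
msg.payload = {
  action: 'firmware/update',
  devices: [
    { typeId: 'myType', deviceId: 'myDevice' },
  ],
};
msg.headers = { 'Content-Type': 'application/json' };
// In the actual function node, finish with:  return msg;
```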
Has your appliance published the set of actions it supports when it announced itself as a managed device?
A device connects to the Watson IoT Platform and uses the managed devices operation to become a managed device.
Which looks something like this
Topic: iotdevice-1/mgmt/manage
{
  "d": {
    ...
    "supports": {
      "deviceActions": true,
      "firmwareActions": boolean
    },
    ...
  },
  ...
}
https://console.ng.bluemix.net/docs/services/IoT/devices/device_mgmt/index.html