The docs say that to route messages within Edge based on the content of the message body, you have to set the content-type and content-encoding system properties, as shown in the sample below that uses the SDK:
// Encode message body using UTF-8
var messageBytes = Buffer.from(messageBody, "utf8");
var message = new Message(messageBytes);
// Set message body type and content encoding
message.contentEncoding = "utf-8";
message.contentType = "application/json";
However, for a client that does not use the SDK and instead uses straight MQTT, such as the Paho client (as in this example), how do you specify those system properties? The IoT Hub docs (not Edge) say that you can add a property bag of properties to the topic, but that doesn't seem to work for Edge.
So how can I route messages within Edge, based on the body of the messages, from non-SDK leaf devices?
Looking at this code:
https://github.com/Azure/azure-iot-sdk-c/blob/master/iothub_client/src/iothubtransport_mqtt_common.c#L655
you can use $.ce and $.ct to specify the contentEncoding and contentType system properties respectively (I could not find any documentation about this, so it may well be a hack ;-)). Here is a JavaScript example:
const deviceId = '<your device id>';
const querystring = require('querystring');
const propertyBag = querystring.stringify({
'$.ce': 'utf-8',
'$.ct': 'application/json'
});
const topic = `devices/${deviceId}/messages/events/${propertyBag}`;
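To show how the topic is then used, here is a minimal publish sketch (assuming client is an MQTT client, e.g. from the mqtt npm package, that is already connected to the IoT Edge hub with the leaf device's credentials; the payload is just an example):
// Assumption: 'client' is an already-connected MQTT client authenticated as the leaf device.
const payload = JSON.stringify({ source: 'sensor-1', temperature: 21.5 });
// Publish the UTF-8 encoded JSON body to the events topic with the property bag appended.
client.publish(topic, payload, { qos: 1 }, err => {
  if (err) console.error('publish failed:', err);
});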
Another way to go could be to extract a value you want to filter on and put it in the property bag as an application property. Here we get the value of 'source' from our data and use it to filter the data:
const deviceId = '<your device id>';
const querystring = require('querystring');
let data = ...;
const propertyBag = querystring.stringify({
source: data.source
});
const topic = `devices/${deviceId}/messages/events/${propertyBag}`;
Then when defining the 'message route', you would for instance define the 'routing query' as:
source = 'my_source'
Any data containing a 'source' field with value 'my_source' would then be routed to this message route by IoT Hub.
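For reference, here is a sketch of what the corresponding route declarations could look like in an IoT Edge deployment manifest (the route names are made up, your FROM source may differ, and the $body form only works once contentType and contentEncoding are set as shown at the top):
"routes": {
  "bySourceProperty": "FROM /messages/* WHERE source = 'my_source' INTO $upstream",
  "bySourceInBody": "FROM /messages/* WHERE $body.source = 'my_source' INTO $upstream"
}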
I have created a UI5 application to read a file and send it to a custom OData service in the backend.
onUploadFile: function() {
var oFileUpload =
this.getView().byId("fileUploaderFS");
var domRef = oFileUpload.getFocusDomRef();
var file = domRef.files[0];
var that = this;
var reader = new FileReader();
var ftype = file.type;
reader.readAsArrayBuffer(file);
reader.onload = function(evt) {
var vContent = evt.currentTarget.result
console.log(vContent);
var hex = that.buf2hex(vContent);
that.updateFile(hex, ftype);
}
},
buf2hex: function(buffer) {
return [...new Uint8Array(buffer)]
.map(x => x.toString(16).padStart(2, '0'))
.join('');
}
When I print the content of hex on the console before sending it to the backend, the data starts with 89504e470d0a1a0a0000000d49484 ....
Even right before sending the data in the payload to the OData service, it shows the correct data.
Here is the OData service:
Inside create_stream, the received data gets converted into something else. As a result, the saved image does not open.
I tried changing the data type of Content in SEGW to Binary, and it did not work. I also tried converting the data in create_stream, but in vain. Finally, I tried reading the data in UI5 in different formats, but to no avail.
This whole OData service works perfectly fine when I send the data through Postman.
Please help me resolve this issue. Thanks in advance.
The sap.ui.unified.FileUploader has everything built in. No need for conversions from Buffer to hex.
Make sure that your FileUploader knows where to upload the file
<unified:FileUploader xmlns:unified="sap.ui.unified"
id="fileUploaderFS"
uploadUrl="/sap/opu/odata/sap/Z_TEST_SRV/FileSet"
/>
The attribute uploadUrl points to the media entity for which you implemented the create_stream method.
Then when the upload is triggered via button press, simply get the FileUploader, set the token (for security reasons when doing a POST request), and fire the upload method.
onUploadFile: function () {
const oFileUpload = this.getView().byId("fileUploaderFS");
const sToken = this.getModel("nameOfTheModel").getSecurityToken();
const oTokenParam = new FileUploaderParameter({
name: "x-csrf-token",
value: sToken
});
oFileUpload.removeAllHeaderParameters();
oFileUpload.addHeaderParameter(oTokenParam);
oFileUpload.upload();
}
To use FileUploaderParameter, make sure to import it at the beginning:
sap.ui.define([
// ...,
"sap/ui/unified/FileUploaderParameter"
], function (/*..., */FileUploaderParameter) {
// ...
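Optionally, if the backend also needs the original file name (in create_stream it typically arrives via the slug header), here is a sketch of passing it along before firing the upload, following the same pattern as the token parameter above:
// Sketch: pass the selected file's name as a "slug" header so the
// backend's create_stream can pick it up (commonly exposed as iv_slug).
const oSlugParam = new FileUploaderParameter({
    name: "slug",
    value: oFileUpload.getValue() // selected file name; alternatively read it from the focus DOM ref as in the question
});
oFileUpload.addHeaderParameter(oSlugParam);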
Now about your File entity: when working with it via create_stream or read_stream, you don't use the entity structure but is_media_resource. This means your entity doesn't need a content property, nor most of the other properties (except a unique ID and the MIME type). All other properties would only be used if you want to do one of the CRUD methods (which almost never happens when dealing with streams).
How can I find available Twilio numbers based on the US state?
When we are in the Twilio console, there is a search criterion where we can enter a number and select "match to = first part of number", which I guess works with regex-like conditions/patterns.
I am aware that we can extract the state prefixes and use the prefix of the state we want to search as a pattern.
In addition, I need to check for the voice capabilities.
I am referring to this API: https://www.twilio.com/docs/phone-numbers/api but I have a hard time understanding the proper fields that need to be used. Also, I found many StackOverflow questions pointing to links to Twilio docs that are no longer valid.
const accountSid = process.env.TWILIO_ACCOUNT_SID;
const authToken = process.env.TWILIO_AUTH_TOKEN;
const client = require('twilio')(accountSid, authToken);
const statePrefix = '823';
const voiceCapability = true;
client.availablePhoneNumbers('US')
.then(/* do something */);
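A sketch of how those criteria could map onto the Node helper library (my understanding of the AvailablePhoneNumber Local resource: inRegion filters by the two-letter US state code and voiceEnabled restricts results to voice-capable numbers; 'CA' and the limit are just example values):
// Sketch: list local US numbers in a given state that support voice calls.
client.availablePhoneNumbers('US')
  .local
  .list({
    inRegion: 'CA',       // US state to search in (example value)
    voiceEnabled: true,   // only numbers with voice capability
    limit: 20
  })
  .then(numbers => numbers.forEach(n => console.log(n.phoneNumber)));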
Hello there, I have successfully set up an inbound webhook with StrongGrid in .NET Core 3.1.
The endpoint gets called, and I want to parse the values inside the attachment, which is a CSV file.
The code I am using is the following:
var parser = new WebhookParser();
var inboundEmail = await parser.ParseInboundEmailWebhookAsync(Request.Body).ConfigureAwait(false);
await _emailSender.SendEmailAsyncWithSendGrid("info#mydomain.com", "ParseWebhook1", inboundEmail.Attachments.First().Data.ToString());
Please note I am sending an email because I don't know how to debug a webhook with SendGrid, as I am not aware of any CLI.
But this line apparently is not what I am looking for:
inboundEmail.Attachments.First().Data.ToString()
I am getting this in my email:
Id = a3e6a543-2aee-4ffe-a36a-a53k95921998, Tag = HttpMultipartParser.MultipartFormDataParser.ParseStreamAsync, Length = 530 bytes
The CSV I need to parse has 3 fields: Sku, ProductName, and Quantity. I'd like to get the Sku values.
Any help would be appreciated.
The .Data property contains a Stream and invoking ToString on a stream object does not return its content. The proper way to read the content of a stream in C# is something like this:
var streamReader = new StreamReader(inboundEmail.Attachments.First().Data);
var attachmentContent = await streamReader.ReadToEndAsync().ConfigureAwait(false);
As far as parsing the CSV, there are literally thousands of projects on GitHub and hundreds on NuGet with the keyword 'CSV'. I'm sure one of them will fit your needs.
While integrating Chromecast into an iOS app, I have faced a problem accessing media content that requires authentication. In this particular case, the authentication token must be added to the request as an HTTP header, not as a token in the URL. There does not seem to be a way to do this with the Cast SDK directly. So I have played with a custom CAF receiver app, hoping that I could pass this data through customData to the receiver app, and the receiver app would then form the request with the proper HTTP header using playerManager.setMessageInterceptor. But again, how do I add a custom HTTP header to the final request in the CAF receiver app?
This is how I did it:
const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();
const playbackConfig = new cast.framework.PlaybackConfig();
playbackConfig.manifestRequestHandler = requestInfo => {
requestInfo.headers = {SomeHeader: "SomeValue", Hello: "World"};
};
playerManager.setMessageInterceptor(cast.framework.messages.MessageType.LOAD, requestData => {
console.log("loaded " + requestData.media.contentId);
playerManager.setPlaybackConfig(playbackConfig);
return requestData;
}
);
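Note that manifestRequestHandler only covers the manifest request; if the segment or license requests also need the header, PlaybackConfig exposes analogous hooks (sketch, untested):
// Sketch (untested): apply the same headers to segment and license requests too,
// only needed if those endpoints also require the auth token.
playbackConfig.segmentRequestHandler = requestInfo => {
  requestInfo.headers = {SomeHeader: "SomeValue"};
};
playbackConfig.licenseRequestHandler = requestInfo => {
  requestInfo.headers = {SomeHeader: "SomeValue"};
};
// context.start() is assumed to be called elsewhere once the receiver is configured.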
I'm trying to retrieve a list of Slack reminders, which works fine using Slack API's reminders.list method. However, reminders that are set using SlackBot (i.e. by asking Slackbot to remind me of a message) return the respective permalink of that message as text:
{
"ok": true,
"reminders": [
{
"id": "Rm012C299C1E",
"creator": "UV09YANLX",
"text": "https:\/\/team.slack.com\/archives\/DUNB811AM\/p1583441290000300",
"user": "UV09YANLX",
"recurring": false,
"time": 1586789303,
"complete_ts": 0
},
Instead of showing the permalink, I'd naturally like to show the message I wanted to be reminded of. However, I couldn't find any hints in the Slack API docs on how to retrieve a message identified by a permalink. The link is presumably generated by chat.getPermalink, but there seems to be no obvious chat.getMessageByPermalink or so.
I tried to interpret the path elements as channel and timestamp, but the timestamp (transformed from the example above: 1583441290.000300) doesn't seem to really match. At least I don't end up with the message I expected to retrieve when passing this as latest to conversations.history and limiting to 1.
After fiddling a while longer, here's how I finally managed it in JS:
async function downloadSlackMsgByPermalink(permalink) {
const pathElements = permalink.substring(8).split('/');
const channel = pathElements[2];
var url;
if (permalink.includes('thread_ts')) {
// Threaded message, use conversations.replies endpoint
var ts = pathElements[3].substring(1, pathElements[3].indexOf('?')); // strip the leading 'p'
ts = ts.substring(0, ts.length-6) + '.' + ts.substring(ts.length-6);
var latest = pathElements[3].substring(pathElements[3].indexOf('thread_ts=')+10);
if (latest.indexOf('&') != -1) latest = latest.substring(0, latest.indexOf('&'));
url = `https://slack.com/api/conversations.replies?token=${encodeURIComponent(slackAccessToken)}&channel=${channel}&ts=${ts}&latest=${latest}&inclusive=true&limit=1`;
} else {
// Non-threaded message, use conversations.history endpoint
var latest = pathElements[3].substring(1);
if (latest.indexOf('?') != -1) latest = latest.substring(0, latest.indexOf('?'));
latest = latest.substring(0, latest.length-6) + '.' + latest.substring(latest.length-6);
url = `https://slack.com/api/conversations.history?token=${encodeURIComponent(slackAccessToken)}&channel=${channel}&latest=${latest}&inclusive=true&limit=1`;
}
const response = await fetch(url);
const result = await response.json();
if (result.ok === true) {
return result.messages[0];
}
}
It hasn't been tested to the fullest extent, but first results look alright:
The trick with the conversations.history endpoint was to include the inclusive=true parameter
Messages might be threaded - the separate endpoint conversations.replies is required to fetch those
As the Slack API docs state, ts and thread_ts look like timestamps, but they aren't. Gladly, treating them a bit like timestamps (i.e. cutting off the last six characters and inserting a dot) seems to work anyway.
Naturally, the slackAccessToken variable needs to be set beforehand
I'm aware the way to extract & transform the URL components in the code above might not be the most elegant solution, but it proves the concept :-)
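For example, feeding it the permalink from the reminder above (a sketch; slackAccessToken must already hold a token with the required scopes, and in Node a fetch implementation such as node-fetch may be needed):
// Hypothetical usage with the permalink from the reminders.list example above.
downloadSlackMsgByPermalink('https://team.slack.com/archives/DUNB811AM/p1583441290000300')
  .then(message => {
    // 'message' is the matching conversations.history/replies entry, or undefined on failure
    console.log(message ? message.text : 'message not found');
  });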