Opening Task module from Adaptive Card using Action.OpenUrl - microsoft-graph-api

We are posting an Adaptive Card in a channel using the following Graph API: https://learn.microsoft.com/en-us/graph/api/channel-post-chatmessage?view=graph-rest-beta&tabs=cs
Our payload is:
{
"body":{
"contentType":"html",
"content":"<attachment id="attachmentId"></attachment>" // for attachmentId see attachments section below
},
"attachments":[
{
"id":"attachmentId",
"content":cardData // this is JSON.stringify of actual Adaptive card data
}
]
}
Our Adaptive Card looks like this:
{
"type":"AdaptiveCard",
"$schema":"http://adaptivecards.io/schemas/adaptive-card.json",
"body":"some body",
"actions":[
{
"type":"Action.OpenUrl",
"title":"Suggest Item",
"url":"https://teams.microsoft.com/l/task/2a05d07c-d194-400e-8122-cad64cfe1cef?url=https%3A%2F%2Flocalhost%3A44349%2Fteams%2F%23%2Fsuggest%2Fee31b3aa-f60f-4594-a964-a01fcc461ceb%3Ffrom%3Dcard&height=540&width=800&title=*Suggest%20item"
}
]
}
If I post the task module URL directly in chat and click it, it works,
but it does not work from the Adaptive Card Action.OpenUrl.
It used to work previously.

The task module deep link works fine from an Adaptive Card Action.OpenUrl. Could you please try the following JSON payload?
{
"body": {
"contentType": "html",
"content": "<attachment id=\"fa74618d23064677a1af25d0ae973532\"></attachment>"
},
"attachments": [
{
"id": "fa74618d23064677a1af25d0ae973532",
"contentType": "application/vnd.microsoft.card.adaptive",
"content": "{\r\n \"type\": \"AdaptiveCard\",\r\n \"actions\": [\r\n {\r\n \"type\": \"Action.OpenUrl\",\r\n \"title\": \"View\",\r\n \"url\": \"https:\/\/teams.microsoft.com\/l\/task\/f195eed2-4336-4c33-a11b-a417dcaa8680?url=https:\/\/taskmoduletest.azurewebsites.net\/customform&height=430&width=510&title=Custom%20Form&fallbackUrl=https:\/\/taskmoduletest.azurewebsites.net\/customform\"\r\n }\r\n ],\r\n \"$schema\": \"http:\/\/adaptivecards.io\/schemas\/adaptive-card.json\",\r\n \"version\": \"1.0\"\r\n}"
}
]
}
Please let us know if you are facing any issues.

There was a * in my URL for the title; after removing it, the deep link works again.
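For anyone hitting the same thing: a safer approach is to URL-encode each deep-link parameter when you build the link, so stray characters can't break it. A minimal Java sketch (the app ID, task module URL, and title below are just the values from my card; adjust for your own):

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class TaskModuleDeepLink {
    public static void main(String[] args) {
        // Values from the card above -- substitute your own app ID, task module URL and title.
        String appId = "2a05d07c-d194-400e-8122-cad64cfe1cef";
        String taskUrl = "https://localhost:44349/teams/#/suggest/ee31b3aa-f60f-4594-a964-a01fcc461ceb?from=card";
        String title = "Suggest item"; // a literal * in here is what broke the link for me

        String deepLink = "https://teams.microsoft.com/l/task/" + appId
                + "?url=" + encode(taskUrl)
                + "&height=540&width=800"
                + "&title=" + encode(title);

        System.out.println(deepLink); // use this as the Action.OpenUrl "url" value
    }

    // URLEncoder targets form encoding, so swap + for %20 to match the Teams deep-link examples.
    private static String encode(String value) {
        return URLEncoder.encode(value, StandardCharsets.UTF_8).replace("+", "%20");
    }
}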

Related

Springdoc sends Multipart file as application/x-www-form-urlencoded and not multipart/form-data

I am using the latest version of openapi-ui 1.6.7 and I can't make a file upload endpoint work.
This is my configuration of the parameter:
@PostMapping(
    consumes = MediaType.MULTIPART_FORM_DATA_VALUE,
    produces = MediaType.APPLICATION_JSON_VALUE
)
@Operation(
    summary = "Create a new FileResource",
    requestBody = @RequestBody(description = "File to upload")
)
public ResponseEntity<FileResourceIdPublicApiDto> create(
    @Parameter(
        description = "File to upload",
        required = true
    )
    @RequestPart
    MultipartFile file
) { ... }
When I use the "Try out" button in the generated Swagger UI, I get a 415 Unsupported Media Type error.
The request headers have Content-Type: application/x-www-form-urlencoded.
I think this is where the error comes from. The JSON generated by OpenAPI looks like this:
{
"operationId": "create_4",
"parameters": [
...
],
"requestBody": {
"content": {
"multipart/form-data": {
"schema": {
"required": [
"file"
],
"type": "object",
"properties": {
"file": {
"type": "string",
"format": "binary",
"description": "File to upload"
}
}
}
}
},
"description": "File to upload"
},
"responses": {
"200": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FileResourceId"
}
}
},
"description": "OK"
}
},
"summary": "Create a new FileResource",
"tags": [
"File Resource"
]
}
What am I missing to send a correct request with the multipart/form-data content type?
For me, replacing @RequestPart with @RequestParam did the job! By the way, I was using openapi-ui 1.6.4.
It's a combination of two things:
defining consumes = multipart/form-data and using @RequestParam instead of @RequestPart.
This wasn't required when using springfox with Swagger 2.0.
It's really irritating that there is no good migration guide for 2.0 -> 3.0.
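For reference, here is a sketch of what that combination looks like in a controller. The DTO is the one from the question, the request path is hypothetical, and the method body is elided, so treat it as illustrative rather than the exact fix:

import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.Parameter;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class FileResourceController {

    // consumes = multipart/form-data on the mapping plus @RequestParam (not @RequestPart)
    // is the combination that makes springdoc emit multipart/form-data in Swagger UI.
    @PostMapping(
        value = "/file-resources", // hypothetical path, not from the question
        consumes = MediaType.MULTIPART_FORM_DATA_VALUE,
        produces = MediaType.APPLICATION_JSON_VALUE
    )
    @Operation(summary = "Create a new FileResource")
    public ResponseEntity<FileResourceIdPublicApiDto> create(
            @Parameter(description = "File to upload", required = true)
            @RequestParam("file") MultipartFile file) {
        // persisting the file and building the id DTO is omitted in this sketch
        return ResponseEntity.ok().build();
    }
}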

graph-api: Adaptive card with mention doesn't render date properly

It seems that the msteams object doesn't allow the date to render when the card is pushed through the Graph API into a private Teams channel.
The request looks like this:
POST https://graph.microsoft.com/v1.0/teams/{team_id}/channels/{channel_id}/messages
{
"body": {
"contentType": "html",
"content": "<attachment id=\"1\"/>"
},
"attachments":[
{
"contentType":"application/vnd.microsoft.card.adaptive",
"id":"1",
"content":"{\"type\":\"AdaptiveCard\",\"body\":[{\"text\":\"<at>Lev</at> hello {{ DATE(2021-04-28T00:00:00Z, SHORT) }}\",\"wrap\":true,\"type\":\"TextBlock\"}],\"version\":\"1.3\",\"msteams\":{\"entities\":[{\"type\":\"mention\",\"text\":\"<at>Lev</at>\",\"mentioned\":{\"id\":\"29:131...Rg\",\"name\":\"Lev\"}}]}}"
}
]
}
Example of how it renders in Teams:
Is there some workaround?
The msteams object is not supported through the Graph API; only bots have access to the msteams object. This is by design.
When you send the at-mention from a bot, the user gets notified in the Teams activity feed and the card renders as shown in the image below.
When you send the at-mention in an Adaptive Card via a Graph API POST request, the user does not get notified in the Teams activity feed and the card renders as shown in the image below.
Please check the request body below:
{
"body": {
"contentType": "html",
"content": "<attachment id=\"1\"/>"
},
"attachments": [
{
"contentType": "application/vnd.microsoft.card.adaptive",
"id": "1",
"content": "{ \"$schema\": \"http://adaptivecards.io/schemas/adaptive-card.json\", \"type\": \"AdaptiveCard\", \"version\": \"1.0\", \"body\": [ { \"type\": \"TextBlock\", \"text\": \"Your package will arrive on {{DATE(2017-02-14T06:00:00Z, SHORT)}} at {{TIME(2017-02-14T06:00:00Z)}}\", \"wrap\": true } ]}"
}
]
}
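If it helps, here is a minimal sketch of sending that request body with Java's built-in HttpClient. The team ID, channel ID, and access token are placeholders you would supply yourself; note that the attachment's content property has to carry the Adaptive Card as an escaped JSON string:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PostAdaptiveCard {
    public static void main(String[] args) throws Exception {
        String teamId = "your-team-id";           // placeholder
        String channelId = "your-channel-id";     // placeholder
        String accessToken = "your-access-token"; // placeholder: a Graph token allowed to post channel messages

        // The Adaptive Card itself, as plain JSON (same card as in the request body above).
        String card = "{ \"$schema\": \"http://adaptivecards.io/schemas/adaptive-card.json\", "
                + "\"type\": \"AdaptiveCard\", \"version\": \"1.0\", \"body\": [ { \"type\": \"TextBlock\", "
                + "\"text\": \"Your package will arrive on {{DATE(2017-02-14T06:00:00Z, SHORT)}}\", \"wrap\": true } ] }";

        // The card goes into the attachment's "content" property as an escaped JSON string.
        String body = "{ \"body\": { \"contentType\": \"html\", \"content\": \"<attachment id=\\\"1\\\"/>\" }, "
                + "\"attachments\": [ { \"contentType\": \"application/vnd.microsoft.card.adaptive\", \"id\": \"1\", "
                + "\"content\": \"" + card.replace("\\", "\\\\").replace("\"", "\\\"") + "\" } ] }";

        HttpRequest request = HttpRequest.newBuilder(URI.create(
                        "https://graph.microsoft.com/v1.0/teams/" + teamId + "/channels/" + channelId + "/messages"))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}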

Jira API: Add Comment Using Edit Endpoint

Jira has an /edit endpoint which can be used to add a comment. There is an example in their documentation that suggests this input body to accomplish this:
{
"update": {
"comment": [
{
"add": {
"body": "It is time to finish this task"
}
}
]
}
}
I create the exact same input in my Java code:
private String createEditBody() {
JsonNodeFactory jsonNodeFactory = JsonNodeFactory.instance;
ObjectNode payload = jsonNodeFactory.objectNode();
ObjectNode update = payload.putObject("update");
ArrayNode comments = update.putArray("comment");
ObjectNode add = comments.addObject();
ObjectNode commentBody = add.putObject("add");
commentBody.put("body", "this is a test");
return payload.toString();
}
But when I send this PUT request, I get an error saying "Operation value must be of type Atlassian Document Format"!
Checking the ADF format, it says that "version", "type" and "content" are required. So although their documentation example doesn't seem to be in ADF format, I tried to guess the format and change it. Here's what I came up with after modifying my code:
{
"update": {
"comment": [
{
"add": {
"version": 1,
"type": "paragraph",
"content": [
{
"body": "this is a test"
}
]
}
}
]
}
}
The add operation now seems to be ADF, but I get a 500 (Internal Server Error). Can you help me find the issue?
Note that the example above from the Atlassian documentation is for the Jira Server platform, while the instance I'm working with is Jira Cloud, although I think the behaviour should be the same for this endpoint.
After tinkering with the input body, I was able to form the right request body. This works:
{
"update": {
"comment": [
{
"add": {
"body": {
"version": 1,
"type": "doc",
"content": [
{
"type": "paragraph",
"content": [
{
"type": "text",
"text": "this is a test"
}
]
}
]
}
}
}
]
}
}
The annoying things that I learned along the way:
Jira's documentation is wrong here: sending the request from their example will fail.
After making a few changes, I was able to get a 204 from the endpoint while the comment was still not being posted. I guessed that the format was not correct and kept digging, but I don't know why Jira returns 204 when it effectively fails.
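For completeness, here is a sketch of how the Jackson builder from the question could be adjusted to produce that ADF body (same API as the original code; only the structure changes):

import com.fasterxml.jackson.databind.node.JsonNodeFactory;
import com.fasterxml.jackson.databind.node.ObjectNode;

private String createEditBody() {
    JsonNodeFactory jsonNodeFactory = JsonNodeFactory.instance;
    ObjectNode payload = jsonNodeFactory.objectNode();

    // update.comment[0].add.body must be a full ADF document, not a plain string
    ObjectNode add = payload.putObject("update")
            .putArray("comment")
            .addObject()
            .putObject("add");

    ObjectNode body = add.putObject("body");
    body.put("version", 1);
    body.put("type", "doc");

    ObjectNode paragraph = body.putArray("content").addObject();
    paragraph.put("type", "paragraph");

    ObjectNode text = paragraph.putArray("content").addObject();
    text.put("type", "text");
    text.put("text", "this is a test");

    return payload.toString();
}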

Azure Logic App: Read telemetry data as dynamic content from IoT hub message

I'm routing telemetry messages via IoT Hub events and Event Grid to a Logic App using a webhook. The Logic App lets you input a sample JSON message and then use dynamic content to add information to an email alert I'm sending (O365: Send an email (V2)).
I can include system properties like "iothub-connection-device-id", but when I try to pick telemetry data I get the following error:
InvalidTemplate. Unable to process template language expressions in action 'Send_an_email_(V2)' inputs at line '1' and column '1680': 'The template language expression 'items('For_each')?['data']?['body']?['windingTemp1']' cannot be evaluated because property 'windingTemp1' cannot be selected. Property selection is not supported on values of type 'String'. Please see https://aka.ms/logicexpressions for usage details.'.
When I look at the raw output of the webhook connector, it shows the following message, but the telemetry points are clearly not there. I'd expect to see them in the "body" property, but instead there is just the string: "eyJ3aW5kaW5nVGVtcDEiOjg2LjYzOTYxNzk4MjYxODMzLCJ3aW5kaW5nVGVtcDIiOjc4LjQ1MDc4NTgwMjQyMTUyLCJ3aW5kaW5nVGVtcDMiOjg1LjUzMDYxMDY5OTQ1MzY1LCJMb2FkQSI6MjAyOS44NDgyMTg4ODYxMTEsIkxvYWRCIjoyMDQwLjgxMDk4OTg0MDMzMzgsIkxvYWRWIjoyMDA0LjYxMTkzMjMyNTQ2MTgsIk9pbFRlbXAiOjk5LjA2MjMyNjU2MTY4ODU4fQ=="
I'm looking for help to determine what could be causing this and how to get the telemetry data passed through correctly so that I can include it dynamically in the email alert.
Thanks!
{
"headers": {
"Connection": "Keep-Alive",
"Accept-Encoding": "gzip,deflate",
"Host": "prod-24.northeurope.logic.azure.com",
"aeg-subscription-name": "TEMPALERT",
"aeg-delivery-count": "1",
"aeg-data-version": "",
"aeg-metadata-version": "1",
"aeg-event-type": "Notification",
"Content-Length": "1017",
"Content-Type": "application/json; charset=utf-8"
},
"body": [
{
"id": "c767fb91-3806-324c-ec3c-XXXXXXXXXX",
"topic": "/SUBSCRIPTIONS/XXXXXXXXXXXX",
"subject": "devices/Device-001",
"eventType": "Microsoft.Devices.DeviceTelemetry",
"data": {
"properties": {
"TempAlarm": "true"
},
"systemProperties": {
"iothub-connection-device-id": "Device-001",
"iothub-connection-auth-method": "{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}",
"iothub-connection-auth-generation-id": "637264713410XXXX",
"iothub-enqueuedtime": "2020-06-01T23:05:58.3130000Z",
"iothub-message-source": "Telemetry"
},
"body": "eyJ3aW5kaW5nVGVtcDEiOjg2LjYzOTYxNzk4MjYxODMzLCJ3aW5kaW5nVGVtcDIiOjc4LjQ1MDc4NTgwMjQyMTUyLCJ3aW5kaW5nVGVtcDMiOjg1LjUzMDYxMDY5OTQ1MzY1LCJMb2FkQSI6MjAyOS44NDgyMTg4ODYxMTEsIkxvYWRCIjoyMDQwLjgxMDk4OTg0MDMzMzgsIkxvYWRWIjoyMDA0LjYxMTkzMjMyNTQ2MTgsIk9pbFRlbXAiOjk5LjA2MjMyNjU2MTY4ODU4fQ=="
},
"dataVersion": "",
"metadataVersion": "1",
"eventTime": "2020-06-01T23:05:58.313Z"
}
]
}
Here is the sample input I am using with the trigger:
[{
"id": "9af86784-8d40-fe2g-8b2a-bab65e106785",
"topic": "/SUBSCRIPTIONS/<subscription ID>/RESOURCEGROUPS/<resource group name>/PROVIDERS/MICROSOFT.DEVICES/IOTHUBS/<hub name>",
"subject": "devices/LogicAppTestDevice",
"eventType": "Microsoft.Devices.DeviceTelemetry",
"eventTime": "2019-01-07T20:58:30.48Z",
"data": {
"body": {
"windingTemp1": 95.62818310718433
},
"properties": {
"Status": "Active"
},
"systemProperties": {
"iothub-content-type": "application/json",
"iothub-content-encoding": "utf-8",
"iothub-connection-device-id": "d1",
"iothub-connection-auth-method": "{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}",
"iothub-connection-auth-generation-id": "123455432199234570",
"iothub-enqueuedtime": "2019-01-07T20:58:30.48Z",
"iothub-message-source": "Telemetry"
}
},
"dataVersion": "",
"metadataVersion": "1"
}]
Summarizing the comments as an answer, to help others who have the same problem.
The body you provided is Base64 encoded; you can decode it with the Convert.FromBase64String(String) method:
byte[] newBytes = Convert.FromBase64String(body);
For more details, you could refer to this issue.
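If you are not on .NET, the same decoding works with the Java standard library; a minimal sketch using the body string from the event payload above:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DecodeTelemetryBody {
    public static void main(String[] args) {
        // The data.body value from the Event Grid payload above.
        String body = "eyJ3aW5kaW5nVGVtcDEiOjg2LjYzOTYxNzk4MjYxODMzLCJ3aW5kaW5nVGVtcDIiOjc4LjQ1MDc4NTgwMjQyMTUyLCJ3aW5kaW5nVGVtcDMiOjg1LjUzMDYxMDY5OTQ1MzY1LCJMb2FkQSI6MjAyOS44NDgyMTg4ODYxMTEsIkxvYWRCIjoyMDQwLjgxMDk4OTg0MDMzMzgsIkxvYWRWIjoyMDA0LjYxMTkzMjMyNTQ2MTgsIk9pbFRlbXAiOjk5LjA2MjMyNjU2MTY4ODU4fQ==";

        byte[] decoded = Base64.getDecoder().decode(body);
        String json = new String(decoded, StandardCharsets.UTF_8);

        // json now contains the plain telemetry object (windingTemp1, OilTemp, ...)
        System.out.println(json);
    }
}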
Update:
Adding the following code in my application solved the problem:
message.ContentEncoding = "utf-8";
message.ContentType = "application/json";

Autopilot redirect to new task not working correctly

I've been working with Autopilot tasks for a little while now, since the update where you no longer need to build the model manually, and I've noticed that when I get to the second redirect to another task, and that task listens, it fails to listen and goes back to the fallback task.
I've tried not using a function between the redirects, I've tried posting directly to my Twilio Function, and none of that works. I have a questionnaire of two questions, the on_complete action is a redirect, and that is where my tasks start to fail.
"actions": [
{
"say": {
"speech": "I just have a few questions"
}
},
{
"collect": {
"name": "questions",
"questions": [
{
"question": "Is the weather nice today",
"name": "q_1",
"type": "Twilio.YES_NO",
},
{
"question": "Do you like ice cream?",
"name": "q_2",
"type": "Twilio.YES_NO",
}
],
"on_complete": {
"redirect": "MY FUNCTION LINK"
}
}
}
]
}
Then the function returns this JSON:
responseObject = {
"actions": [
{
"redirect": "task://MY TASK"
}
]
};
Then the redirected task looks like this:
{
"actions": [
{
"say": "Would you like to be transfered over, or be called later?"
},
{
"listen": {
"tasks": [
"transfer",
"calllater"
]
}
}
]
}
But the tasks that are being listened for never get triggered, and from my logs it seems like the task that called them does not exist.
The flow should go to the correct tasks that are being listened for, but it just crashes and goes back to the fallback task. I have no idea why this does not work; please let me know.
Twilio developer evangelist here. 👋
I just took the code you posted, adjusted it, and it works fine. Let me tell you what I did.
I created a welcome task:
// welcome task
{
"actions": [
{
"say": {
"speech": "I just have a few questions"
}
},
{
"collect": {
"name": "questions",
"questions": [
{
"question": "Do you like ice cream?",
"name": "q_2",
"type": "Twilio.YES_NO"
}
],
"on_complete": {
"redirect": "https://picayune-snout.glitch.me/api/collect"
}
}
}
]
}
This task, like your example, defines an on_complete endpoint, which I hosted on Glitch. The endpoint responds with JSON that looks like this:
module.exports = (req, res) => {
res.setHeader('Content-Type', 'application/json');
res.send(JSON.stringify(
{
"actions": [
{
"say": {
"speech": "Thanks for you information"
}
},
{
"redirect": "task://continue"
}
]
}
));
}
Then, I defined the continue task similar to yours:
{
"actions": [
{
"say": "Would you like to be transfered over, or be called later?"
},
{
"listen": {
"tasks": [
"transfer",
"calllater"
]
}
}
]
}
calllater and transfer only use say, and it works fine. The important piece is that you define samples for these two tasks so that the system can recognize them. You also have to rebuild the model for the Natural Language Router.
It's hard to tell what you did wrong. :/

Resources