Actions on Google - Custom action in German using the Actions SDK

I am developing speech recognition for a custom device using the Google Assistant SDK, and I am using the Actions SDK to create custom actions.
In my example, the Google Assistant does not recognize actions in German when they are marked with "locale": "de" and the Assistant's language is set to German. I noticed that the query patterns are transcribed correctly, but the event is not triggered. If everything is set to English, the events are triggered.
action.json
{
  "locale": "de",
  "manifest": {
    "displayName": "Blink Licht",
    "invocationName": "Blink Licht",
    "category": "PRODUCTIVITY"
  },
  "actions": [
    {
      "name": "com.acme.actions.blink_light",
      "availability": {
        "deviceClasses": [
          {
            "assistantSdkDevice": {}
          }
        ]
      },
      "intent": {
        "name": "com.acme.intents.blink_light",
        "parameters": [
          {
            "name": "number",
            "type": "SchemaOrg_Number"
          },
          {
            "name": "light_target",
            "type": "LightType"
          }
        ],
        "trigger": {
          "queryPatterns": [
            "lasse das $LightType:light_target $SchemaOrg_Number:number mal blinken"
          ]
        }
      },
      "fulfillment": {
        "staticFulfillment": {
          "templatedResponse": {
            "items": [
              {
                "simpleResponse": {
                  "textToSpeech": "Das Licht $light_target.raw blinkt $number mal"
                }
              },
              {
                "deviceExecution": {
                  "command": "com.acme.commands.blink_light",
                  "params": {
                    "lightKey": "$light_target",
                    "number": "$number"
                  }
                }
              }
            ]
          }
        }
      }
    }
  ],
  "types": [
    {
      "name": "$LightType",
      "entities": [
        {
          "key": "LIGHT",
          "synonyms": [
            "Licht",
            "LED",
            "Glühbirne"
          ]
        }
      ]
    }
  ]
}
hotword.py - snippet of the event processing
from google.assistant.library.event import EventType


def process_event(event, device_id):
    """Pretty prints events.

    Prints all events that occur with two spaces between each new
    conversation and a single space between turns of a conversation.

    Args:
        event(event.Event): The current event to process.
        device_id(str): The device ID of the new instance.
    """
    if event.type == EventType.ON_CONVERSATION_TURN_STARTED:
        print()

    print(event)

    if (event.type == EventType.ON_CONVERSATION_TURN_FINISHED and
            event.args and not event.args['with_follow_on_turn']):
        print()

    if event.type == EventType.ON_DEVICE_ACTION:
        # process_device_actions() is the helper from the hotword.py sample.
        for command, params in process_device_actions(event, device_id):
            print('Do command', command, 'with params', str(params))
            if command == "com.acme.commands.blink_light":
                number = int(params['number'])
                for i in range(number):
                    print('Device is blinking.')
The project language in the Actions console is set to German (screenshot omitted).
To upload the action package and make the action available for testing I used the gactions CLI (gactions update and gactions test).
The question: why is the event/command "com.acme.commands.blink_light" in hotword.py not triggered when German is used?
Thanks in advance!

Here's how I solved this problem:
1. Go to your Actions on Google console and pick the project you're having this trouble with.
2. In the 'Overview' section you'll see a window with the languages of your action on top, and to their right a blue 'Modify languages' link. Click it and then delete the language you're not using, English in this case.
At least that worked for me.

Related

Renaming type for FSharp.Data JsonProvider

I have a JSON that looks something like this:
{
  ...
  "names": [
    {
      "value": "Name",
      "language": "en"
    }
  ],
  "descriptions": [
    {
      "value": "Sample description",
      "language": "en"
    }
  ],
  ...
}
When using JsonProvider from the FSharp.Data library, it maps both fields to the same type, MyJsonProvider.Name. This is a little confusing when working with the code. Is there any way to rename the type to MyJsonProvider.NameOrDescription? I have read that this is possible for the CsvProvider, but typing
JsonProvider<"./Resources/sample.json", Schema="Name->NameOrDescription">
results in an error.
Also, is it possible to define that the Description field is actually an Option<MyJsonProvider.NameOrDescription>? Or do I just have to define the JSON twice, once with all possible values and the second time just with mandatory values?
[
  {
    ...
    "names": [
      {
        "value": "Name",
        "language": "en"
      }
    ],
    "descriptions": [
      {
        "value": "Sample description",
        "language": "en"
      }
    ],
    ...
  },
  {
    ...
    "names": [
      {
        "value": "Name",
        "language": "en"
      }
    ],
    ...
  }
]
To answer your first question, I do not think there is a way of specifying such a renaming. It would be quite a reasonable option, but the JSON provider could also be more clever when generating names here (it knows that the type can represent Name or Description, so it could generate a name with Or based on those).
As a hack, you could add an unused field with the right name:
type A = JsonProvider<"""{
  "do not use": { "value_with_language": {"value":"A", "language":"A"} },
  "names": [ {"value":"A", "language":"A"} ],
  "descriptions": [ {"value":"A", "language":"A"} ]
}""">
To answer your second question - your names and descriptions fields are already arrays, i.e. ValueWithLanguage[]. For this, you do not need an optional value. If they are not present, the provider will simply give you an empty array.

Jira API: Add Comment Using Edit Endpoint

Jira has an edit endpoint which can be used to add a comment. There is an example in their documentation that suggests this input body to accomplish this:
{
  "update": {
    "comment": [
      {
        "add": {
          "body": "It is time to finish this task"
        }
      }
    ]
  }
}
I create the exact same input in my Java code:
private String createEditBody() {
    JsonNodeFactory jsonNodeFactory = JsonNodeFactory.instance;
    ObjectNode payload = jsonNodeFactory.objectNode();
    ObjectNode update = payload.putObject("update");
    ArrayNode comments = update.putArray("comment");
    ObjectNode add = comments.addObject();
    ObjectNode commentBody = add.putObject("add");
    commentBody.put("body", "this is a test");
    return payload.toString();
}
but when I send this PUT request I get an error saying "Operation value must be of type Atlassian Document Format"!
Checking the ADF format, it says that "version", "type" and "content" are required for this format. So although their documentation example doesn't seem to be in ADF format, I tried to guess the format and change it. Here's what I ended up with after modifying my code:
{
  "update": {
    "comment": [
      {
        "add": {
          "version": 1,
          "type": "paragraph",
          "content": [
            {
              "body": "this is a test"
            }
          ]
        }
      }
    ]
  }
}
The add operation now seems to be ADF, but I get a 500 (internal server error). Can you help me find the issue?
Note that the above example from the Atlassian documentation is for the "Jira Server Platform", but the instance I'm working with is the "Jira Cloud Platform", although I think the behaviour should be the same for this endpoint.
After tinkering with the input body, I was able to form the right request body! This will work:
{
  "update": {
    "comment": [
      {
        "add": {
          "body": {
            "version": 1,
            "type": "doc",
            "content": [
              {
                "type": "paragraph",
                "content": [
                  {
                    "type": "text",
                    "text": "this is a test"
                  }
                ]
              }
            ]
          }
        }
      }
    ]
  }
}
The annoying things that I learned along the way:
Jira's documentation is wrong! Sending the request from their example will fail.
After making a few changes, I was able to get a 204 from the endpoint while the comment was still not being posted, so I guessed the format was not correct and kept digging. I don't know why Jira returns 204 when it actually fails.
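For completeness, here is a minimal sketch of sending this working body to the Jira Cloud edit-issue endpoint (PUT /rest/api/3/issue/{issueIdOrKey}) with Python's requests library; the site URL, issue key, and credentials below are placeholders:
import requests

JIRA_SITE = "https://your-domain.atlassian.net"  # placeholder
ISSUE_KEY = "PROJ-123"                           # placeholder

# The working body from above: the comment body is a full ADF document.
payload = {
    "update": {
        "comment": [
            {
                "add": {
                    "body": {
                        "version": 1,
                        "type": "doc",
                        "content": [
                            {
                                "type": "paragraph",
                                "content": [
                                    {"type": "text", "text": "this is a test"}
                                ]
                            }
                        ]
                    }
                }
            }
        ]
    }
}

resp = requests.put(
    f"{JIRA_SITE}/rest/api/3/issue/{ISSUE_KEY}",
    json=payload,
    auth=("me@example.com", "api-token"),  # basic auth with an API token
)
print(resp.status_code)  # 204 on success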

Autopilot redirect to new task not working correctly

So, I've been working with Autopilot tasks for a little while now, since the update after which you no longer need to build the model manually, and I've noticed that when I get to the second redirect to another task, and that task listens, it fails to listen and goes back to the fallback task.
I've tried not using a function between the redirects, and I've used a direct POST to my Twilio function, but none of that works. I do have a questionnaire of two questions, the on_complete is a redirect, and that is where my tasks start to fail.
"actions": [
{
"say": {
"speech": "I just have a few questions"
}
},
{
"collect": {
"name": "questions",
"questions": [
{
"question": "Is the weather nice today",
"name": "q_1",
"type": "Twilio.YES_NO",
},
{
"question": "Do you like ice cream?",
"name": "q_2",
"type": "Twilio.YES_NO",
}
],
"on_complete": {
"redirect": "MY FUNCTION LINK"
}
}
}
]
}
Then the function will return this as a JSON:
responseObject = {
  "actions": [
    {
      "redirect": "task://MY TASK"
    }
  ]
};
Then the task goes like this:
{
  "actions": [
    {
      "say": "Would you like to be transferred over, or be called later?"
    },
    {
      "listen": {
        "tasks": [
          "transfer",
          "calllater"
        ]
      }
    }
  ]
}
But the task that is being listened for never completes, and my logs make it look like the task that called it does not exist.
The flow should go to the correct tasks that are being listened for, but it just crashes and goes back to the fallback task. I have no idea why this does not work, please let me know.
Twilio developer evangelist here. 👋
I just took the code you posted, adjusted it, and it works fine. Let me tell you what I did.
I created a welcome task:
// welcome task
{
  "actions": [
    {
      "say": {
        "speech": "I just have a few questions"
      }
    },
    {
      "collect": {
        "name": "questions",
        "questions": [
          {
            "question": "Do you like ice cream?",
            "name": "q_2",
            "type": "Twilio.YES_NO"
          }
        ],
        "on_complete": {
          "redirect": "https://picayune-snout.glitch.me/api/collect"
        }
      }
    }
  ]
}
This task, similar to your example, defines an on_complete endpoint which I hosted on Glitch. The endpoint responds with JSON that looks like this:
module.exports = (req, res) => {
  res.setHeader('Content-Type', 'application/json');
  res.send(JSON.stringify(
    {
      "actions": [
        {
          "say": {
            "speech": "Thanks for your information"
          }
        },
        {
          "redirect": "task://continue"
        }
      ]
    }
  ));
}
Then, I defined the continue task similar to yours:
{
  "actions": [
    {
      "say": "Would you like to be transferred over, or be called later?"
    },
    {
      "listen": {
        "tasks": [
          "transfer",
          "calllater"
        ]
      }
    }
  ]
}
calllater and transfer only use say, and they work fine. The important piece is that you define samples for these two tasks so that the system can recognize them. You also have to rebuild the model for the Natural Language Router.
Hard to tell what you did wrong. :/
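In case it helps, here is a minimal sketch of adding samples and rebuilding the model through the REST API with the twilio-python helper library; the account credentials and the assistant/task SIDs are placeholders, and the same can of course be done in the Autopilot console:
from twilio.rest import Client

# Placeholders: account SID, auth token, assistant and task SIDs.
client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")
assistant_sid = "UAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
task_sids = {
    "transfer": "UDXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX1",
    "calllater": "UDXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX2",
}

# Each task needs sample phrases so the router can recognize it.
phrases = {
    "transfer": ["transfer me", "I want to be transferred"],
    "calllater": ["call me later", "I would like a call back"],
}

for task_name, task_phrases in phrases.items():
    for phrase in task_phrases:
        client.autopilot \
            .assistants(assistant_sid) \
            .tasks(task_sids[task_name]) \
            .samples \
            .create(language="en-US", tagged_text=phrase)

# Rebuild the model so the new samples take effect.
client.autopilot.assistants(assistant_sid).model_builds.create()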

Create OnlineMeeting in MS Graph with Call-in Info

I am building some utilities to automate aspects of Microsoft Teams at my company. One thing we are trying is automating scheduling/creation of Online Meetings under various circumstances. Overall this is working fine, but I can't figure out how to get / attach telephone call-in information for the calls we're creating.
Here's an example POST /app/onlineMeetings:
{
  "meetingType": "meetNow",
  "participants": {
    "organizer": {
      "identity": {
        "user": {
          "id": "<user-id>"
        }
      }
    }
  },
  "subject": "Personal Room"
}
And here's what a typical response looks like:
{
  "@odata.context": "https://graph.microsoft.com/beta/$metadata#app/onlineMeetings/$entity",
  "joinUrl": "<join-url>",
  "subject": "Personal Room",
  "isCancelled": false,
  "meetingType": "MeetNow",
  "accessLevel": "SameEnterprise",
  "id": "<meeting-id>",
  "audioConferencing": null,
  "meetingInfo": null,
  "participants": {
    "organizer": {
      "upn": "<user-name>",
      "sipProxyAddress": "<user-name>",
      "identity": {
      }
    },
    "attendees": []
  },
  "chatInfo": {}
}
As you can see, the audioConferencing key is null. If a user accesses the joinUrl, they can join the call, and the audio conferencing information is displayed at that time -- but I can't figure out how to get it in advance (e.g. to send in an email).
Also note that since this is not a VTC-enabled meeting, the id can't be used to issue a new GET request for additional information, as discussed here.
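For context, this is roughly how the request above can be issued with Python's requests library; the access token and user id are placeholders, and acquiring the token (e.g. via MSAL) is out of scope here. The beta endpoint is taken from the odata.context in the response:
import requests

ACCESS_TOKEN = "<access-token>"  # placeholder
USER_ID = "<user-id>"            # placeholder

body = {
    "meetingType": "meetNow",
    "participants": {
        "organizer": {
            "identity": {
                "user": {"id": USER_ID}
            }
        }
    },
    "subject": "Personal Room",
}

resp = requests.post(
    "https://graph.microsoft.com/beta/app/onlineMeetings",
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    json=body,
)
meeting = resp.json()
print(meeting.get("joinUrl"))
print(meeting.get("audioConferencing"))  # null in the scenario above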

Google assistant service, how to filter multiple audio responses

It's hard to explain, but in fact I am trying to write my own library on top of the Google Assistant Service.
me > "set a timer"
GA > "sure, how long"
me > "10 min"
GA > "ok, timer is set" (1st response)
GA > "Sorry I can't help you" (2nd response)
The reaction is normal, because the service doesn't support timers. I want to code my own timer, but there is no way to keep the first response and block the second. dialog_state_out.supplemental_display_text contains only the first one, but the audio playback plays all the data in audio_out.audio_data.
How can I separate the two responses? I don't see a disconnection in the data flow, and only one request is made.
The right way to do it is using custom device actions. You can create your own action that will trigger on a query like "set a timer", allowing you to handle custom logic and even support parameters within the query itself.
This page in the documentation explains how to set them up. You define an action package with your actions. Here's an action for "blinking":
"actions": [
{
"name": "com.example.actions.BlinkLight",
"availability": {
"deviceClasses": [
{
"assistantSdkDevice": {}
}
]
},
"intent": {
"name": "com.example.intents.BlinkLight",
"parameters": [
{
"name": "number",
"type": "SchemaOrg_Number"
},
{
"name": "speed",
"type": "Speed"
}
],
"trigger": {
"queryPatterns": [
"blink ($Speed:speed)? $SchemaOrg_Number:number times",
"blink $SchemaOrg_Number:number times ($Speed:speed)?"
]
}
},
"fulfillment": {
"staticFulfillment": {
"templatedResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "Blinking $number times"
}
},
{
"deviceExecution": {
"command": "com.example.commands.BlinkLight",
"params": {
"speed": "$speed",
"number": "$number"
}
}
}
]
}
}
}
}
],
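On the device side, the resulting command is then handled by the client, much like the hotword.py snippet in the first question on this page; a minimal sketch (process_device_actions is the helper from the hotword.py sample):
from google.assistant.library.event import EventType

def process_event(event, device_id):
    # React only to device action events produced by the custom action.
    if event.type == EventType.ON_DEVICE_ACTION:
        for command, params in process_device_actions(event, device_id):
            if command == "com.example.commands.BlinkLight":
                number = int(params['number'])
                for _ in range(number):
                    print('Device is blinking.')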
