I was trying to use the Google Assistant Service to embed a Google Assistant on my device, and it was working very well. However, despite my changing nothing, the Assistant seems to have lost most of its capabilities.
I tried following the instructions for the Google Assistant Library (the Python one) as well, using a brand-new project, and was able to set up the Assistant successfully. However, even this unmodified Google library has the same issue: anything beyond a simple question like "who am I" or "who are you" results in "I'm sorry, I don't know how to deal with that".
It's important to note here (and this is why it's so weird) that it is actually recognizing what I say. Here's an example of the terminal output from running the Google Assistant Library:
ON_CONVERSATION_TURN_STARTED
ON_END_OF_UTTERANCE
ON_RECOGNIZING_SPEECH_FINISHED:
{"text": "who won the World Cup"}
ON_RESPONDING_STARTED:
{"is_error_response": false}
ON_RESPONDING_FINISHED
ON_CONVERSATION_TURN_FINISHED:
{"with_follow_on_turn": false}
ON_CONVERSATION_TURN_STARTED
ON_END_OF_UTTERANCE
ON_RECOGNIZING_SPEECH_FINISHED:
{"text": "who am I"}
ON_RENDER_RESPONSE:
{
"text": "To get you that information, I'll need your permission. You can give it to me in the Google Assistant settings on your phone. Once that's done, ask me again!",
"type": 0
}
ON_RESPONDING_STARTED:
{"is_error_response": false}
ON_RESPONDING_FINISHED
ON_CONVERSATION_TURN_FINISHED:
{"with_follow_on_turn": false}
As you can see, it recognizes what I say but doesn't actually answer basic questions.
Questions similar to "who won the world cup" that worked before but no longer do include "who is elon musk" and "tell me who won the stanley cup". Again, this still happens with a fresh Google Assistant Library install, after creating a new project and refollowing the installation instructions.
I guess it could be a Google permissions-related thing? Does anyone have any guesses?
It turns out Activity Controls were disabled. After re-enabling them it still did not work, so I created a new Google account, and with that it worked fine.
Related
I want to create a new Action that can understand my voice commands via Google Home in a foreign language and control my lights. Basically, if I say a phrase in a foreign language, it should understand whether to turn the light on or off. I'm starting from scratch and I'm lost.
The platform overview is the best place to start for information on how voice commands to the Google Assistant can be translated into commands for your device.
There are several codelabs to walk you through what's happening.
I'm looking for a way to integrate the Google Assistant into my chatbot and get answers to general questions like "whats the weather?", "how tall is X?", "what does X mean?", etc. (just how Google Home works). Ideally this would be over a REST API, and I'd get the response back in a JSON payload.
I looked through the Google Assistant SDK docs, but it wasn't clear how I could host/build an API that does this. Any ideas on whether something like this already exists?
Yes, you use the Google Assistant SDK.
There isn't a REST API, since the SDK's other requirements are poorly suited to REST. Instead it uses gRPC, which lets Google publish a standard interface and lets you compile that interface into local language bindings.
If you are using Python, C++, or Node.js, there are already libraries available that let you skip doing the gRPC setup yourself.
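As a rough sketch of how you might put a REST/JSON layer of your own in front of the SDK (everything here is hypothetical: `ask_assistant` is a stand-in for a real call through the gRPC bindings, and the endpoint shape is made up):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def ask_assistant(query: str) -> str:
    # Placeholder: a real implementation would stream the query through
    # the gRPC bindings (e.g. the google-assistant-grpc package) and
    # collect the Assistant's display text from the responses.
    return f"(assistant answer to: {query})"


class AssistantHandler(BaseHTTPRequestHandler):
    """Accepts POST {"query": "..."} and returns {"answer": "..."}."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        answer = ask_assistant(payload.get("query", ""))
        body = json.dumps({"answer": answer}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


# To run the wrapper:
#   HTTPServer(("localhost", 8080), AssistantHandler).serve_forever()
```

The HTTP layer is trivial; all the real work (auth, audio/text config, streaming) would live inside `ask_assistant`, which is exactly the part the Python SDK library already handles for you.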
I am working on an Arduino project, and I was curious whether it is possible to add a "direct command" to the Google Assistant on Android.
I've searched a bit, and all I could find is having the Assistant do things like "Hey Google, let's talk to Application Name", but I find that a little annoying to use. What I want to know is whether it's possible to add something like "Hey Google, do this" that would, say, open a specific website.
Is this possible, or am I out of luck?
Thank you!
You can use explicit invocation to trigger a Google Assistant action.
The user can include an invocation phrase at the end of their invocation that will take them directly to the function they're requesting, like so:
"Hey Google do this" would be an example of invocation that is currently only available to partners. Since Spotify has a relationship with Google, for example, users can say "Hey Google, play Despacito on Spotify." If you would like to create Actions using parnter solutions, you will need to contact support to request access and become a partner.
As a third party developer, the closest you can get to mimicking the feature you're requesting is, "Hey Google, talk to My App Name about visiting www.example.com", which could trigger an intent that would respond with a browsing carousel of links to www.example.com and any other websites you would like to suggest.
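As a sketch of the kind of webhook payload that intent could return, here is a hedged helper that builds a browsing carousel. The field names follow the legacy Actions on Google v2 rich-response JSON as I recall it (`richResponse`, `carouselBrowse`, `openUrlAction`) — verify them against the docs for whatever Actions version you target; the helper function itself is made up:

```python
def browsing_carousel(links):
    """Build a Dialogflow webhook fragment with a browse carousel.

    `links` is a list of (title, url) pairs; a browse carousel
    requires at least two items.
    """
    if len(links) < 2:
        raise ValueError("a browse carousel requires at least two items")
    return {
        "payload": {
            "google": {
                "richResponse": {
                    "items": [
                        # A carousel must be accompanied by a simple response.
                        {"simpleResponse": {"textToSpeech": "Here are some links."}},
                        {
                            "carouselBrowse": {
                                "items": [
                                    {"title": title, "openUrlAction": {"url": url}}
                                    for title, url in links
                                ]
                            }
                        },
                    ]
                }
            }
        }
    }
```

Your fulfillment would return this dict as the JSON body of the webhook response when the "visit website" intent fires.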
I am currently experimenting with the Rental Car Reservation email schema and also Go-To Action email markup. About a year ago, I played around with the code and used the Apps Script Quickstart Guide to test the markup. The tests worked as expected, and I was getting all sorts of great results in email. Specifically, I was seeing action buttons and, in Google Inbox, all sorts of cool treatments of the reservations. I also saw my tests come through on "my car rental" queries in Google.
Today I redid my tests, and some of the results were not the same. First off, I am unable to replicate the action buttons; my theory is that these were perhaps discontinued with the new version of Gmail. As for Google Inbox, those are still coming through. However, with Google Inbox slated to be discontinued in March 2019, this matters very little to us now. Lastly, my "my car rental" queries are not pulling in any results.
Is anyone able to confirm that email markup is still relevant? Google's documentation hasn't been updated since 2017, so it would be great to know before going through the effort of implementing the markup.
Thanks!
I am currently following the hotword example and creating custom commands like "turn screen on/off". How do I disable the voice response "sorry, I can't help you"?
There are multiple ways to do it. Follow this link for details: google assistant
1 - If you're using this method of project creation, run it; you can then parse the recognized query in event.args['text'] and, based on that, perform the activity locally without sending it to the Google Assistant. Problem: Google will respond with some voice message in parallel.
2 - Use IFTTT, which is pretty simple to work with. Basic use with webhooks takes a little time, though. This link is useful, and use ngrok for a local webhook URL.
3 - Use API.AI. This is for advanced projects where you depend on Google to assist with question recognition and then respond with your own answers from webhooks. It's not straightforward to work with: the details and tutorials provided use Google Cloud Functions, which as of now works only with Node.js. If you're a Python (or other language) programmer, Google has examples on GitHub, though these are again not straightforward, I guess.
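For option 1, here is a minimal sketch of the local-dispatch idea, shaped after the hotword sample's event loop. The command table and the screen-control functions are made-up placeholders; `assistant.stop_conversation()` is the Google Assistant Library call that cancels the current turn, which is what keeps the Assistant from also speaking "sorry, I can't help you":

```python
def screen_on():
    print("screen on")    # stand-in for real display/GPIO control


def screen_off():
    print("screen off")   # stand-in for real display/GPIO control


# Hypothetical table mapping recognized phrases to local handlers.
LOCAL_COMMANDS = {
    "turn screen on": screen_on,
    "turn screen off": screen_off,
}


def handle_recognized_text(assistant, text):
    """Call this from the sample's process_event() when the event type is
    ON_RECOGNIZING_SPEECH_FINISHED, passing event.args['text'].

    Returns True if the phrase was handled locally.
    """
    handler = LOCAL_COMMANDS.get(text.strip().lower())
    if handler is None:
        return False          # let the Assistant answer normally
    assistant.stop_conversation()  # cancel the turn: no spoken reply
    handler()
    return True
```

Because `stop_conversation()` is called before the Assistant starts responding, locally handled commands stay silent while everything else still goes through to Google.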