Google Voice Actions, Home, Actions, Assistant - google-assistant-sdk

Can somebody please give a short summary of the differences between Google Voice Actions, Home, Actions, and Assistant (Dialogflow)? I would like to control my Android Java application via Google Assistant/Home, but I'm having a hard time finding something I can start with.

If you are building something for the Google Assistant, you will use Actions on Google. This is a cloud-based platform for AI-powered interactions. An action can take a user's intent, either through text or a transcription of what they said, and return a useful result.
Most developers are not experts in natural language understanding, so they can use Dialogflow instead. This service gives developers the ability to easily match the user's intent and identify entities in what they said.
Once you build an action, you can publish it for the Google Assistant. This is a platform that is available on a variety of surfaces. Surfaces include Google Home as well as your phone.
If you are interested in building a device that embeds the Google Assistant or want to add the Google Assistant to your application, you can use the Google Assistant SDK. This allows you to make requests to the Assistant and present the response to the user.
Depending on what you are building, the tools you will use can vary.
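To make the Dialogflow piece concrete: fulfillment is just an HTTP webhook that receives the matched intent and returns a response. The sketch below uses plain Python with no framework; the intent name (`get-score`) and parameter (`team`) are hypothetical examples, while the request/response JSON shapes follow the Dialogflow v2 webhook format (`queryResult` in, `fulfillmentText` out), which you should verify against the current docs.

```python
# Minimal sketch of a Dialogflow v2 fulfillment webhook handler.
# The intent name ("get-score") and parameter ("team") are hypothetical;
# the JSON shapes mirror the Dialogflow v2 webhook format.

def handle_webhook(request_json):
    """Map a matched intent to a fulfillment response."""
    query = request_json.get("queryResult", {})
    intent = query.get("intent", {}).get("displayName", "")
    params = query.get("parameters", {})

    if intent == "get-score":
        team = params.get("team", "that team")
        # A real action would look the score up in a backend here.
        return {"fulfillmentText": f"Looking up the latest score for {team}."}

    return {"fulfillmentText": "Sorry, I can't help with that yet."}


if __name__ == "__main__":
    sample = {
        "queryResult": {
            "intent": {"displayName": "get-score"},
            "parameters": {"team": "Lakers"},
        }
    }
    print(handle_webhook(sample)["fulfillmentText"])
```

In production this function would sit behind an HTTPS endpoint that Dialogflow calls for each matched intent.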

Related

Difference between "work with Google Assistant" and "built-in"?

Is there any documentation, or can anyone explain, the difference between "work with Google Assistant" and "built-in"? If a new smart device is going to add Google Assistant support, what should we do for each of these two approaches?
Building IoT devices for the Google Assistant falls into two categories.
You can have the Google Assistant built into your device. This would let you talk directly to your device and invoke custom device actions. It would not really integrate with a consumer's existing devices.
Alternatively, you can work with the Google Assistant. This would be for a device that you control using an existing Google Assistant surface like a Google Home or a phone. This simplifies the integration work, as you don't need to worry about all the voice management, and may be better for integrating into other voice ecosystems.
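To make the "work with" path concrete: such an integration implements the smart home intents (SYNC, QUERY, EXECUTE), starting with a SYNC response that describes your devices. The sketch below shows that response as a plain Python dict; the device id and name are made up, while the field names (`requestId`, `payload.devices`, `action.devices.types/traits`) follow Google's documented smart home SYNC format and should be checked against the current reference.

```python
# Sketch of a smart home SYNC response, the first step of a
# "work with Google Assistant" integration. The device id/name are
# hypothetical; field names mirror Google's smart home SYNC format.

def build_sync_response(request_id, agent_user_id):
    """Describe the devices this integration controls."""
    return {
        "requestId": request_id,
        "payload": {
            "agentUserId": agent_user_id,
            "devices": [
                {
                    "id": "light-1",
                    "type": "action.devices.types.LIGHT",
                    "traits": ["action.devices.traits.OnOff"],
                    "name": {"name": "TV room light"},
                    "willReportState": False,
                }
            ],
        },
    }


if __name__ == "__main__":
    resp = build_sync_response("req-123", "user-42")
    print(resp["payload"]["devices"][0]["name"]["name"])
```

The Assistant then sends EXECUTE intents against the device ids you returned here, so you never handle voice yourself.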

Cannot run Google Home assistant routines by conversation relay

I'm using Google Assistant Relay (https://github.com/greghesp/assistant-relay/) on a Raspberry Pi. My objective is to let my home-automation (Jeedom) server run Google actions. Basic instructions work fine, such as switching on the light in a room. But when I try to run a Google Home routine (for example, "lunch time", which should turn on the living-room lights and turn off the TV-room lights), the Assistant doesn't run the routine; it answers as if the routine didn't exist. For example, it tries to search for a restaurant named "lunch time".
I registered a device (obtaining a model ID and device ID) and referenced this device in the relay, but it doesn't work any better; same result.
In the Google Home app, I gave this device permission to execute with personal data, but got the same result.
I expect to launch routines via my relay so that my Jeedom server can delegate more advanced tasks to Google.
Developer of Assistant Relay here.
I'm not sure whether triggering Routines is an SDK limitation or not. Some people seem to have had success with Hass.io's integration; however, I've not managed to get it to work (I haven't looked into it too much).
I'm working on V3, so I'll see if I can get it working.
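For reference, the working single-command path the question describes is just an HTTP POST to the relay. The sketch below uses only the standard library; the endpoint path (`/assistant`), default port (3000), and JSON fields (`command`, `user`) are taken from the Assistant Relay project's README as I recall it, so treat them as assumptions to verify against the project's docs.

```python
# Sketch of calling Assistant Relay's HTTP endpoint from an automation
# server. Endpoint path, port, and field names are assumptions based on
# the project's README; verify against the current docs.
import json
import urllib.request


def build_relay_request(host, command, user):
    """Build (but do not send) the POST request for one spoken command."""
    payload = json.dumps({"command": command, "user": user}).encode("utf-8")
    return urllib.request.Request(
        f"http://{host}:3000/assistant",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def send_relay_command(host, command, user):
    """Perform the actual HTTP call; requires a running relay."""
    with urllib.request.urlopen(build_relay_request(host, command, user)) as r:
        return json.load(r)


if __name__ == "__main__":
    req = build_relay_request("192.168.1.10", "lunch time", "johndoe")
    print(req.full_url)
```

Whether the Assistant treats the command text as a routine trigger or as a search query is decided server-side, which is why the payload above is identical in both the working and failing cases.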

Managing Google Assistant for the enterprise

We are exploring the use of Google Home and LG Google Assistant TVs for an enterprise application. This would require managing hundreds of devices in a single building. Amazon Alexa has "Amazon for Business" for doing this for Alexa devices.
Is there anything similar for Google Assistant devices? Is there an efficient way to manage hundreds of devices? Can device management be done remotely?
We would like to use Google Assistant devices because of their superior AI technology, but feel we might have to use Alexa because of the need to manage so many devices.
Please advise.
An action built for the Google Assistant runs entirely in the cloud, so you shouldn't need to worry about device management. The same action can be accessed by multiple devices at the same time, via HTTP requests.
For more information, check out Google's codelab here.

Is it possible to integrate Google Assistant with my custom app?

I would like to integrate the Google Assistant inside my app. The idea is that I have an app which provides various press services, like giving the latest news. I would like to integrate the Google Assistant to handle some particular requests. For example, the user may ask, "What did the Lakers do yesterday?" If I search this on Google or ask the Assistant, I get a card with the score of yesterday's game. I would like to replicate this interaction from inside my app, that is, send the request to the Google Assistant and show the answer Google returns to the user (or at least open the Google Assistant with the answer).
Is such a thing possible?
I was looking at Google Assistant service sdk (https://developers.google.com/assistant/sdk/guides/service/python/) and it says:
The Google Assistant Service gives you full control over the integration with the Assistant by providing a streaming endpoint. Stream a user audio query to this endpoint to receive a Google Assistant audio response.
Is this possible only with audio interaction? I'm not quite certain this is the solution I should look into
The Google Assistant SDK Service allows you to send either audio or text to the Assistant, and you'll get back responses including audio, display text, and rich HTML visual content.
For mobile apps, there's less support compared to Python, but it's still doable. For example, there's a version of the SDK for Android Things, which targets IoT devices like the Raspberry Pi. You could go through that project and remove all the IoT references, but that's something you'd need to do yourself.
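To show what a text request looks like: the service accepts an `AssistConfig` that carries a `text_query` instead of streamed audio. The sketch below shows that request shape as a plain dict for illustration only; a real client builds the equivalent protobuf from the `google-assistant-grpc` package and streams it over gRPC. Field names here mirror the published `AssistConfig` message as I recall it, so verify them against the SDK reference.

```python
# Shape of a text query to the Google Assistant Service, shown as a
# plain dict. A real client builds the equivalent AssistConfig protobuf
# (google-assistant-grpc) and streams it over gRPC with OAuth
# credentials; field names are assumptions to verify against the docs.

def build_text_assist_config(text, device_id, device_model_id):
    return {
        "text_query": text,                   # text instead of streamed audio
        "audio_out_config": {                 # response audio settings
            "encoding": "MP3",
            "sample_rate_hertz": 16000,
        },
        "screen_out_config": {"screen_mode": "PLAYING"},  # request HTML output
        "dialog_state_in": {"language_code": "en-US"},
        "device_config": {
            "device_id": device_id,
            "device_model_id": device_model_id,
        },
    }


if __name__ == "__main__":
    cfg = build_text_assist_config("what did the Lakers do yesterday",
                                   "my-device-id", "my-device-model-id")
    print(cfg["text_query"])
```

With `screen_out_config` set, the response can include the rendered HTML card mentioned above, which an app could display in a WebView.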

Sirikit supported services and its extended usage

I have a roadside assistance service application. It has some functionality similar to ride-booking apps (e.g. Uber). How far can I leverage iOS 10 SiriKit? Apple may reject it, but I need to know the technical feasibility.
My application's functionality: I am stuck in the middle of the road with a flat tire. I need tow assistance for my car. I give my current location and ask my app to tow the car to my preferred dealer location. I pay for the service and wait for the provider to respond. I receive continuous updates from the service provider about the driver.
First step tried: I am trying to open my app with the statement "Siri, get roadside assistance for my flat tire". I need to open my app and capture FLAT TIRE as a parameter, but I couldn't.
I tried using AppIntentVocabulary.plist, but it did not work. I am missing something, and there are no complete tutorials on the internet. Any help is much appreciated.
Sample Project:
Github Link for my simple Siri Integration:
https://github.com/vivinjeganathan/SiriExample
SiriKit is a way for you to make your content available through Siri. It also lets you add support for your services to the Maps app. To support SiriKit, you use the Intents framework and Intents UI framework to implement one or more extensions that you then include inside your iOS app. When the user requests specific types of services through Siri or Maps, the system uses your extensions to provide those services.
Add SiriKit support only if your app implements one of the following types of services:
Audio or video calling
Messaging
Payments
Searching photos
Workouts
Ride booking
Check this Link: http://airflypan.com/foundation-course/233
You really can't do this in any sensible way for the user. Although your app's use case maps to the ride-hailing intent, the vocabulary will not. Siri gives you almost no options to influence the vocabulary she uses to communicate with your users. If only you could teach your users that requesting a tow is actually requesting a ride... :)
Apple has analyzed the domains it supports and the vocabulary used in those domains, and "taught" them to Siri in every language/culture that Siri supports. This makes total sense from Apple's point of view, because you'd be hard-pressed to do this yourself.
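For reference, the AppIntentVocabulary.plist mechanism the asker tried registers app-specific phrases globally, and it only applies to the narrow set of parameter types Apple enumerates, which is consistent with it having no effect here. A rough sketch of the file's structure follows; the key names are taken from Apple's "Registering Vocabulary with SiriKit" guide as I recall it, and the phrases are hypothetical, so check everything against the current documentation.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Key names follow Apple's "Registering Vocabulary with SiriKit"
         guide; the phrases are hypothetical examples. -->
    <key>ParameterVocabularies</key>
    <array>
        <dict>
            <key>ParameterNames</key>
            <array>
                <string>INRequestRideIntent.rideOptionName</string>
            </array>
            <key>ParameterVocabulary</key>
            <array>
                <dict>
                    <key>VocabularyItemIdentifier</key>
                    <string>tow</string>
                    <key>VocabularyItemSynonyms</key>
                    <array>
                        <dict>
                            <key>VocabularyItemPhrase</key>
                            <string>tow assistance</string>
                        </dict>
                    </array>
                </dict>
            </array>
        </dict>
    </array>
</dict>
</plist>
```

Even with this in place, Siri will still frame the interaction in ride-booking terms, which is the core limitation described above.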
