Running on Ubuntu 16.04 on an x86_64 architecture.
I am following the Google Assistant SDK for Python guide:
https://developers.google.com/assistant/sdk/guides/library/python/embed/run-sample
Everything seems to work well except that I get no sound when I run the test. Here is the output:
ON_CONVERSATION_TURN_STARTED
ON_END_OF_UTTERANCE
ON_RECOGNIZING_SPEECH_FINISHED:
{'text': 'TF1 21h'}
ON_RESPONDING_STARTED:
{'is_error_response': False}
ON_RESPONDING_FINISHED
"TF1 à 21h" is the phrase I gave but I don't get any spoken answer.
Does not seem to be a sound system issue since
when I go to dialogflow console training/history there is no trace of the call, so I assume the call din't reach dialogflow.
One more thing my dialgflow app is in French.
Any idea how I could find where is the issue ?
The Google Assistant SDK gives you the ability to programmatically make audio requests to the Google Assistant. In ordinary usage you won't be using Dialogflow at all.
All requests to the Assistant can be seen in your Google activity, so you can check there whether an audio response was generated.
The audio requirements for the Google Assistant SDK library include ALSA, which may or may not be available on your computer.
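If you want to rule out an ALSA playback problem independently of the Assistant, a quick check like the following can help. This is a minimal sketch that simply shells out to the standard ALSA utilities (it assumes the alsa-utils package is installed); it just lists playback devices and plays the built-in test sound:

```python
# check_alsa.py -- rough sanity check for ALSA playback (assumes alsa-utils is installed)
import subprocess

# List the playback devices ALSA can see; if nothing useful is listed here,
# the Assistant library will not be able to play its audio response either.
subprocess.run(["aplay", "-l"], check=True)

# Play the built-in test sound once on the default device.
# If you hear "Front Left / Front Right", ALSA playback itself is fine and the
# problem is more likely in the Assistant or device configuration.
subprocess.run(["speaker-test", "-t", "wav", "-c", "2", "-l", "1"], check=True)
```

If that test produces sound, the next place to look is the language configuration of the Assistant device itself, since the query you gave is in French.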
I want to enable a user on a video networking web platform to grant camera permission only once and then have separate video chats with multiple users. Part of the "event" will consist of multiple one-to-one video chats: there is a one-to-one chat with one user, it ends, then another one-to-one chat with a different user, it ends, and so on. As it is, this permission has to be granted for each separate video chat. I am having this issue primarily with iOS on Safari. I am having someone else build this web platform, and they have not been able to solve this issue with the video plug-in they are using; they claim it is an issue with Apple devices, which cannot grant permission to particular websites. But I know this issue has been solved on other networking platforms. Can I accomplish this with TokBox (Vonage)? Or please tell me which video platform to use and the specific way to accomplish this. I am not a developer, but I will pass on exactly what you give me to my developer team. I am considering having the website rebuilt with TokBox, but first I want to be sure that I can accomplish this. The website is being built with PHP, but this issue is so significant that I might have it rebuilt from scratch in whatever way is needed. Thank you very much! I know this issue is solvable, as I've seen it handled on other platforms such as Zoom and other video networking platforms like Remo. Thanks!
The Agora Web SDK asks the browser for camera and microphone permissions. Remembering those permissions is handled by the browser itself, not by the JavaScript SDK.
As for how other platforms offer this: they are native apps rather than mobile websites, so they don't have to play by the browser's rules. If you are interested in that approach, you can take a look at our iOS SDK.
https://docs.agora.io/en/Video/start_call_ios?platform=iOS
I'm using Google Assistant Relay (https://github.com/greghesp/assistant-relay/) on a Raspberry Pi. My objective is to allow my home automation (Jeedom) server to run Google actions. For basic instructions everything is OK, such as switching on the light in a room. But when I try to run a Google Home routine command (for example, "lunch time", which should turn on the living room lights and turn off the TV room lights), the Assistant doesn't run the routine and answers as if the routine did not exist; for example, it tries to search for a restaurant named "lunch time".
I registered a device (obtaining a model ID and device ID) and referenced this device in the relay configuration, but it does not work any better: same result.
In the Google Home app, I set this device's permission to execute with personal data, but got the same result.
I expect to launch routines through my relay so that my Jeedom server can hand advanced tasks to Google.
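For reference, this is roughly how my Jeedom scenario calls the relay. The host, port, endpoint and payload fields below are simplified placeholders based on a typical assistant-relay setup, so they may differ from your installation; check the relay's README for the exact API of the version you run:

```python
# send_routine.py -- hypothetical example of asking assistant-relay to run a routine phrase.
# The URL and payload fields are assumptions; adjust them to match your relay version.
import requests

RELAY_URL = "http://raspberrypi.local:3000/assistant"  # assumed relay address and endpoint

payload = {
    "command": "lunch time",   # the same phrase you would say to trigger the routine
    "user": "myUser",          # the user name configured in the relay
    "broadcast": False,
}

response = requests.post(RELAY_URL, json=payload, timeout=10)
response.raise_for_status()
print(response.json())  # the relay normally echoes back the Assistant's text response
```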
Developer of Assistant Relay here.
I'm not sure whether triggering Routines is an SDK limitation or not. Some people seem to have had success with Hass.io's integration; however, I've not managed to get it to work (I haven't looked into it too much).
I'm working on V3, so I will see if I can get it working.
I would like to integrate Google Assistant into my app. The idea is that I have an app which provides various press services, like delivering the latest news. I would like to integrate Google Assistant to handle some particular requests. For example, the user may ask, "What did the Lakers do yesterday?" If I search this on Google or ask the Assistant, I get a card with the score of yesterday's game. I would like, from inside my app, to replicate this interaction, that is, send the request to Google Assistant and show the answer that Google returns to the user (or at least open Google Assistant with the answer).
Is such a thing possible?
I was looking at the Google Assistant Service SDK (https://developers.google.com/assistant/sdk/guides/service/python/) and it says:
The Google Assistant Service gives you full control over the integration with the Assistant by providing a streaming endpoint. Stream a user audio query to this endpoint to receive a Google Assistant audio response.
Is this possible only with audio interaction? I'm not quite certain this is the solution I should look into.
The Google Assistant SDK Service allows you to send both audio or text to the Assistant and you'll get back responses including audio, display text, and rich HTML visual content.
For mobile apps, there's less support compared to Python, but it's still doable. For example, there's a version of the SDK for Android Things, which means for IoT devices like a Raspberry Pi. You can go through this project and remove all the IoT references, but it's something you'd need to do yourself.
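As a concrete starting point, below is a condensed sketch of a text query using the Python gRPC bindings, adapted from the SDK's textinput.py sample. The credentials path and device IDs are placeholders you would replace with the OAuth credentials file and registered device model/instance from the SDK setup; audio is also returned in the same stream (resp.audio_out.audio_data) if you want to play it back.

```python
# text_query.py -- condensed sketch of a text request to the Google Assistant Service,
# based on the official textinput.py sample. Assumes you already completed the SDK
# OAuth setup and registered a device model and instance.
import json

import google.auth.transport.grpc
import google.auth.transport.requests
import google.oauth2.credentials
from google.assistant.embedded.v1alpha2 import (
    embedded_assistant_pb2,
    embedded_assistant_pb2_grpc,
)

ASSISTANT_API_ENDPOINT = 'embeddedassistant.googleapis.com'

# Load the OAuth2 credentials produced by google-oauthlib-tool during SDK setup.
with open('/path/to/credentials.json') as f:   # placeholder path
    credentials = google.oauth2.credentials.Credentials(token=None, **json.load(f))
http_request = google.auth.transport.requests.Request()
credentials.refresh(http_request)

channel = google.auth.transport.grpc.secure_authorized_channel(
    credentials, http_request, ASSISTANT_API_ENDPOINT)
assistant = embedded_assistant_pb2_grpc.EmbeddedAssistantStub(channel)

config = embedded_assistant_pb2.AssistConfig(
    audio_out_config=embedded_assistant_pb2.AudioOutConfig(
        encoding='LINEAR16', sample_rate_hertz=16000, volume_percentage=0),
    dialog_state_in=embedded_assistant_pb2.DialogStateIn(
        language_code='en-US', conversation_state=b'', is_new_conversation=True),
    device_config=embedded_assistant_pb2.DeviceConfig(
        device_id='my-device-id',               # placeholder: your registered instance
        device_model_id='my-device-model-id'),  # placeholder: your registered model
    text_query='what did the Lakers do yesterday?',
)

# Stream the single request and print the Assistant's display text from the responses.
request = embedded_assistant_pb2.AssistRequest(config=config)
for resp in assistant.Assist(iter([request]), 60):
    if resp.dialog_state_out.supplemental_display_text:
        print(resp.dialog_state_out.supplemental_display_text)
```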
I am looking for offline translation on iOS, but I have not found any API for Azure offline translation on iOS on the official website.
It is not currently available for iOS. It seems like it was a feature that was available previously, but it is now limited; their website doesn't include a section for iOS translation anymore.
I wish I could provide a valid source online but my information comes from several hours on customer service calls with Microsoft Representatives who either told me that (1) the feature doesn't exist, (2) the feature used to exist but they can't find the pages anymore, (3) that the feature never existed and that it will be coming soon to iOS.
I apologise if the title of my question is confusing; I will explain my purpose in detail.
We are currently developing our own WiFi speaker, which is built on MIPS. The speaker comes with an app that will be used to manage it. One of the features we would like to include in the app is accessing content from Spotify and being able to play it on the speaker.
Unfortunately, after going through the iOS SDK documentation and doing some tests on the Web API console provided by Spotify, I noticed that Spotify does not allow developers to directly get the URL of a song, except for preview purposes. I also wasn't able to find any way to get the raw audio bytes of the music streamed from the server. Every piece of content comes with a corresponding URI that is used for requests.
For the device (WiFi speaker) part, we recently contacted Spotify to ask for an SDK that can be used for development. However, Spotify told us that they only have SDKs for x86 and ARM architectures; they don't have one for MIPS.
Now, here are my questions:
Is there any way for me to push music from the app to the WiFi speaker without having to use an SDK on the device side?
If Spotify can provide an SDK for our device, then how can we integrate the SDK with our platform?
To clarify my second question: Android and iOS are popular platforms widely used on mobile devices, so when Spotify provides SDKs for those two operating systems, the SDKs can use the default system frameworks to play the content (on iOS, that's the AVFoundation framework). However, if Spotify were able to provide the SDK that we need, how would we be able to integrate it with our own platform?
I will answer your question no. 1:
You should be able to push music from an app using a buffer that you can read from with Core Audio and then forward to a device of your choice. I think what you are looking for can be found in CocoaLibSpotify.