I found the speech recognition API but not speech synthesis in the Dart SDK. Is it not exposed in Dartium? I need some sort of TTS. I could use Google translate_tts, but unfortunately Dartium does not play MP3.
Any ideas on how to get some kind of TTS in a Dart web app would be highly appreciated.
Are you referring to http://developer.chrome.com/extensions/tts.html ?
If so, Dart doesn't yet have all of Chrome's extension and apps APIs natively wired up. In the meantime, you can use the JS-interop library (https://github.com/dart-lang/js-interop) to access those APIs.
I haven't tried the tts API in Dartium, though.
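Purely for reference (the answer above names the chrome.tts extension API), here is a minimal sketch, in plain TypeScript rather than Dart, of the two browser-side entry points you would be reaching through js-interop: chrome.tts for extensions/apps, and the standard window.speechSynthesis Web Speech API. Whether either is actually exposed inside Dartium is exactly the open question above, and the speak helper is just an illustrative wrapper, not part of any library.

// Sketch of the browser-side TTS entry points a Dart app would reach via js-interop.
// chrome.tts is the Chrome extension/app API (requires the "tts" permission in the manifest);
// window.speechSynthesis is the standard Web Speech API.

declare const chrome: any; // only defined inside a Chrome extension/app context

function speak(text: string, lang = "en-US"): void {
  if (typeof chrome !== "undefined" && chrome.tts) {
    // Chrome extension TTS API
    chrome.tts.speak(text, { lang, rate: 1.0 });
  } else if ("speechSynthesis" in window) {
    // Standard Web Speech API, where the browser exposes it
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.lang = lang;
    window.speechSynthesis.speak(utterance);
  }
}

speak("Hello from the browser");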
I'm making a desktop app using Electron and want to build voice chat into the app. I understand that WebRTC is the principal solution for this; in my research I read about signaling, STUN, and TURN servers. But my real question is: if I have a client using Electron, can I make the app send streaming voice from the microphone to a server (Elixir, Python, or Node) and have the server just broadcast it to everyone in the room? Or do I need signaling, STUN, and TURN anyway?
If someone has a tip on material where I can learn about the solution, I'd be really grateful.
Thanks.
For this, you can use WebRTC just as you would in the browser. I tried to develop a voice chat using PeerJS and Socket.IO, but it seems like there is a problem with Electron that prevents voice connections outside of your network. For one-to-many broadcasting you can use mediasoup, for example. More information can be found here.
If you manage to get WebRTC media streams working somehow, I'd be happy if you let me know.
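To make the trade-off concrete: even when a server-side SFU such as mediasoup does the broadcasting, the Electron client still has to capture the microphone and negotiate a connection with that server, which is where signaling (and usually STUN, plus TURN for strict NATs) comes in. Below is a minimal sketch of the renderer side in TypeScript; sendOfferToServer is a hypothetical signaling helper (e.g. a socket.io emit), not part of any library.

// Minimal sketch of the Electron renderer side. The renderer is Chromium, so the
// standard browser WebRTC APIs are available.

async function startVoice(sendOfferToServer: (sdp: string) => void): Promise<RTCPeerConnection> {
  // 1. Capture the microphone.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false });

  // 2. Create a peer connection. With an SFU, the "other peer" is the server itself.
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // 3. Hand the audio track to the connection; the SFU forwards it to everyone in the room.
  stream.getAudioTracks().forEach((track) => pc.addTrack(track, stream));

  // 4. Signaling: the offer still has to reach the server somehow (WebSocket, socket.io,
  //    plain HTTP...). This is the part you cannot skip, even with a broadcasting server.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendOfferToServer(offer.sdp ?? "");

  return pc;
}

Note that mediasoup specifically exchanges its own transport parameters instead of raw SDP offers, so the handshake step looks different there, but the capture-and-addTrack part is the same.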
I want to develop an audio/video calling application, and I have decided to use Google WebRTC. Is Google WebRTC good for calling functionality? Does Google WebRTC support conference calling? If not, what are the limitations of Google WebRTC? Please also suggest other calling SDKs for iOS Swift.
Running on Ubuntu 16.04 on an x86_64 architecture.
I am following the Google Assistant SDK for Python guide:
https://developers.google.com/assistant/sdk/guides/library/python/embed/run-sample
Everything seems to work well except that I get no sound when I run the test. Here is the output:
ON_CONVERSATION_TURN_STARTED
ON_END_OF_UTTERANCE
ON_RECOGNIZING_SPEECH_FINISHED:
{'text': 'TF1 21h'}
ON_RESPONDING_STARTED:
{'is_error_response': False}
ON_RESPONDING_FINISHED
"TF1 à 21h" is the phrase I gave but I don't get any spoken answer.
It does not seem to be a sound system issue, since when I go to the Dialogflow console training/history there is no trace of the call, so I assume the call didn't reach Dialogflow.
One more thing: my Dialogflow app is in French.
Any idea how I could find where the issue is?
The Google Assistant SDK gives you the ability to programmatically make audio requests to the Google Assistant. In ordinary usage you won't be using Dialogflow at all.
All requests to the Assistant can be seen by looking at your activity, so you can check whether there was an audio response.
The audio requirements for the Google Assistant SDK library include ALSA, which may or may not be available on your computer.
I have searched the whole web but did not find any documentation on passing XMPP IQ stanzas to WebRTC. I also see the XMPP Jingle class, but cannot find any documentation on integrating it.
Can someone help me set up a two-way video call using XMPP and WebRTC, ideally by providing a working sample of Objective-C code?
I have tried:
Checking https://github.com/YK-Unit/AppRTCDemo and many other GitHub projects.
Do you want to support different platforms for live video streaming?
I have had very good experience with the https://www.nanocosmos.de/ library.
It supports iOS, Android, and Web, and has demo applications and a trial period.
About documentation:
XMPP Framework: https://github.com/robbiehanson/XMPPFramework/wiki/IntroToFramework
It is very well documented.
WebRTC Native Code: https://webrtc.org/native-code/ios/
It contains all the information about WebRTC.
I can't use an app on iOS; it has to be in-browser JavaScript, and it has to be video chat.
Can I support this with QuickBlox? I know that WebRTC is not currently available on iOS.
I am sure this used to be a supported case before the move to WebRTC?
Does the deprecated API offer this? Do we know when we can expect WebRTC to be supported in the browser on iOS?
The WebRTC JavaScript API should work in Opera/Chrome browsers on iOS, but unfortunately not in Safari.
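Whatever the browser, it is safer to feature-detect WebRTC at runtime than to assume support from the user agent. A small sketch in TypeScript; the fallback branch is just a placeholder for whatever non-WebRTC mechanism (e.g. a provider's older API) you choose.

// Detect in-browser WebRTC support before starting a video chat.
function hasWebRtcSupport(): boolean {
  const hasGetUserMedia =
    typeof navigator !== "undefined" &&
    !!navigator.mediaDevices &&
    typeof navigator.mediaDevices.getUserMedia === "function";
  const hasPeerConnection = typeof RTCPeerConnection !== "undefined";
  return hasGetUserMedia && hasPeerConnection;
}

if (hasWebRtcSupport()) {
  navigator.mediaDevices
    .getUserMedia({ audio: true, video: true })
    .then((stream) => {
      // Hand the local stream to the video-chat layer (QuickBlox, plain RTCPeerConnection, ...).
      console.log("Got local stream with", stream.getTracks().length, "tracks");
    })
    .catch((err) => console.error("getUserMedia failed:", err));
} else {
  console.warn("No in-browser WebRTC support; fall back to another mechanism.");
}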