When using the Google Cloud Speech API via the streaming API (Performing Streaming Speech Recognition on an Audio Stream) with sox, how do I configure it to work with FLAC?
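One way to wire this up is to have sox write FLAC to stdout and read it in chunks for the streaming request. This is only a sketch: the sox flags shown (`-d` default input device, `-t flac -` for FLAC on stdout) are standard sox options, but the chunk size and function names are assumptions, and the actual StreamingRecognize plumbing is omitted.

```python
# Sketch: capture microphone audio with sox as a FLAC stream on stdout,
# then chunk it for a StreamingRecognize generator. CHUNK is an assumed size.
import subprocess

SOX_CMD = ["sox", "-d", "-r", "16000", "-c", "1", "-b", "16", "-t", "flac", "-"]
CHUNK = 4096

def flac_chunks():
    """Yield raw FLAC bytes from sox, suitable for streaming requests."""
    proc = subprocess.Popen(SOX_CMD, stdout=subprocess.PIPE)
    while True:
        data = proc.stdout.read(CHUNK)
        if not data:
            break
        yield data

print(" ".join(SOX_CMD))
```

The generator can then be wrapped into StreamingRecognize requests with the FLAC encoding set in the recognition config.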
We are implementing an Avaya Google Speech integration in which the Avaya IVR platform captures the user's voice and internally uses the StreamingRecognize API call to send the audio stream to Google Cloud Speech (GCS).
We want to know whether GCS offers any feature to store the audio (voice input) for every request and retrieve it on demand.
I suggest you save the audio to Google Cloud Storage yourself; Google Cloud Speech itself only supports data logging.
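Since the Speech API won't store per-request audio for you, the application has to upload each utterance itself. A minimal sketch of that idea, assuming the `google-cloud-storage` client library and hypothetical bucket and call-ID names (the key-naming helper is pure and runnable; the upload itself needs credentials):

```python
# Sketch: persist each caller's audio to Google Cloud Storage so it can be
# retrieved on demand. Bucket/object names here are hypothetical.
from datetime import datetime, timezone

def audio_object_name(call_id: str, when: datetime) -> str:
    """Build a per-request object key, partitioned by date for easy lookup."""
    return f"ivr-audio/{when:%Y/%m/%d}/{call_id}.flac"

def upload_audio(bucket_name: str, call_id: str, payload: bytes) -> str:
    """Upload one utterance; requires `pip install google-cloud-storage`
    and Application Default Credentials to be configured."""
    from google.cloud import storage
    name = audio_object_name(call_id, datetime.now(timezone.utc))
    storage.Client().bucket(bucket_name).blob(name).upload_from_string(
        payload, content_type="audio/flac")
    return name

print(audio_object_name("call-123", datetime(2023, 5, 1, tzinfo=timezone.utc)))
```

Date-partitioned object keys make it cheap to list or expire a day's worth of recordings later.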
How can I create an event of type Quick (using Google Hangouts On Air) with the YouTube Live Streaming API? I couldn't find this in the YouTube Live Streaming API documentation. Please help.
I tried using the Google Speech API to convert audio speech input to text with a Raspberry Pi Zero and a Python script. I added my billing account and enabled the Google Speech API service in the Google console. I generated my own API key for the Google Speech API and used it in the script, but when I run it I get error [32] broken pipe, connection failed. Why am I unable to access the API with this key? Does the Google Speech API require an additional authentication process, or is there a problem with the code? My Python code is below:
#!/usr/bin/env python3
import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:
    print("Say something!")
    audio = r.listen(source)

try:
    print(r.recognize_google(audio, key="my_api_key"))
except sr.UnknownValueError:
    print("Google Speech Recognition could not understand audio")
except sr.RequestError as e:
    print("Could not request results from Google Speech Recognition service; {0}".format(e))
Error is:
say something!
could not request results from google speech recognition services
recognition connection failed:[ERROR 32] broken pipe
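For reference, the official Google Cloud Speech-to-Text client libraries authenticate with a service-account JSON key (pointed to by the GOOGLE_APPLICATION_CREDENTIALS environment variable) rather than a bare API key string, so that setup is worth checking first. A sketch, assuming the `google-cloud-speech` library; the `transcribe` body needs credentials to actually run, while the path check is runnable as-is:

```python
# Sketch: sanity-check credentials setup, then call Cloud Speech-to-Text.
# Requires `pip install google-cloud-speech` and a service-account key file.
import os

def credentials_usable(path: str) -> bool:
    """Cheap check before opening any connection: does the key file exist?"""
    return bool(path) and os.path.isfile(path)

def transcribe(wav_path: str) -> str:
    """Transcribe a short 16 kHz mono LINEAR16 WAV file (untested sketch)."""
    from google.cloud import speech
    client = speech.SpeechClient()  # reads GOOGLE_APPLICATION_CREDENTIALS
    with open(wav_path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)

print(credentials_usable(""))
```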
I want to use the Google voice service with a video file instead of a microphone.
For example, a video file is playing on my computer and a Google speech recognition program recognizes the video's audio stream.
An example would be YouTube's automatic caption feature.
How can I use Google Speech Recognition for this?
This is a great question. Google does provide a way of doing that through the Web Speech API. Here's a link to an example usage, and a demo site from Google here.
However, you would have to extract the audio from the video first and then feed the audio to the API.
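The extraction step is typically done with a tool like ffmpeg. A sketch that builds the command (the file names are hypothetical, and actually running it requires ffmpeg on the PATH):

```python
# Sketch: build an ffmpeg command that strips the audio track out of a video
# so it can be fed to a speech API. File names below are placeholders.
import subprocess

def extract_audio_cmd(video: str, audio_out: str) -> list:
    # 16 kHz mono LINEAR16 WAV is a common input format for speech APIs:
    # -vn drops video, -ac 1 downmixes to mono, -ar 16000 resamples.
    return ["ffmpeg", "-y", "-i", video,
            "-vn", "-ac", "1", "-ar", "16000", "-f", "wav", audio_out]

cmd = extract_audio_cmd("talk.mp4", "talk.wav")
print(" ".join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```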
There's also the Cloud Speech API, which is free up to a certain point. It can be found here.
I started learning about WebRTC and wondered whether the API could be used for peer-to-peer streaming of a YouTube video, for example. I could not find any articles on this. Would it be possible to use the API to stream a YouTube video to two people and keep it synchronized in real time?
No, you will not be able to use YouTube as a WebRTC peer. The media stream from YouTube cannot perform the STUN and DTLS exchanges or set up the required SRTP stream.
What you could do is write a custom application that acts as an intermediary between YouTube and the WebRTC peers. The custom application would need to pull the stream down from YouTube and then forward it to any WebRTC peers that connect to it.
You need an intermediary gateway to do that.
I have read on this page that you can convert a WebRTC stream to RTMP (H.264+AAC) for YouTube Live. They use Flashphoner.