Google Speech API error [32] broken pipe when using own API key - google-cloud-speech

I tried using the Google Speech API to convert audio speech input to text with a Raspberry Pi Zero and a Python script. I added my billing account and enabled the Google Speech API service in the Google Cloud Console. I generated my own API key for the Google Speech API and used it in the Python script, but when I run the script I get error [32] broken pipe, connection failed. Why am I not able to use the API key? Does the Google Speech API require some additional authentication process, or is there a problem with my code? My Python code is below:
#!/usr/bin/env python3
import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:
    print("Say something!")
    audio = r.listen(source)

try:
    print(r.recognize_google(audio, key="my_api_key"))
except sr.UnknownValueError:
    print("Google Speech Recognition could not understand audio")
except sr.RequestError as e:
    print("Could not request results from Google Speech Recognition service; {0}".format(e))
Error is:
say something!
could not request results from google speech recognition services
recognition connection failed:[ERROR 32] broken pipe
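One likely culprit, for what it's worth: `recognize_google()` talks to the legacy Google Web Speech endpoint, whose keys are not the API keys issued by the Google Cloud Console, so passing a Cloud key there can make the request fail outright. A minimal sketch of switching to `recognize_google_cloud()`, which authenticates with service-account JSON credentials instead (the file name below is hypothetical):

```python
def transcribe_with_cloud(credentials_json="service-account.json"):
    """Sketch: transcribe mic input via Cloud Speech-to-Text credentials."""
    import speech_recognition as sr  # pip install SpeechRecognition

    r = sr.Recognizer()
    with sr.Microphone() as source:
        audio = r.listen(source)
    # recognize_google_cloud() expects a service-account JSON file, which is
    # what a Cloud Console project issues; recognize_google() keys come from
    # a different, legacy programme, so a Cloud API key will not work there.
    return r.recognize_google_cloud(audio, credentials_json=credentials_json)
```

The `credentials_json` path would point at a key file downloaded from the Cloud Console's service-account page.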

Related

Is audio available on Google cloud platform to Speech-To-Text?

We are implementing Avaya Google Speech integration, where Avaya IVR platform is capturing user's voice and internally using StreamingRecognize API call to send audio stream to Google Cloud Speech (GCS).
We want to know if there is any feature available in GCS to store the audio (voice input) for every request and access it as per demand.
I suggest you use Google Cloud Storage to save the audio. Google Cloud Speech itself only supports data logging.

Google Cloud Speech Streaming API with FLAC

For using the Google Cloud Speech API via the streaming API (Performing Streaming Speech Recognition on an Audio Stream) with sox, how do I configure it to run with FLAC?

Google Speech API for iOS: out-of-vocabulary training set

I am working on an iOS project that uses the Google Speech API. The project involves voice input to recognize many terms that are basically jargon. The Google Speech API gracefully fails to recognize voice input for this jargon.
Is there a way to train the Google Speech API to learn this jargon and recognize it easily when giving voice input in a mobile iOS app?
I believe you're referring to (recently rebranded) Google Cloud Speech-to-Text API. If so, there is no way to train it right now.

Google Speech Recognition on video file

I want to use the Google voice service with a video file rather than a microphone.
For example, a video file is playing on my computer, and a Google speech recognition program recognizes the video's audio stream, like YouTube's auto-caption feature.
How can I use Google Speech Recognition for this?
This is a great question. Google does provide a way of doing that through the Web Speech API, and publishes an example usage and a demo site for it.
However, you would have to extract the audio from the video first and then feed the audio to the API.
There's also the Cloud Speech API, which is free up to a certain point. It can be found here.
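The extraction step mentioned above can be scripted, for example with ffmpeg; a minimal sketch (the file names are hypothetical, and 16 kHz mono WAV is just a commonly accepted input format for speech APIs, not a requirement of any particular one):

```python
import subprocess

def ffmpeg_extract_cmd(video_path, wav_path):
    """Build the ffmpeg command line that pulls the audio track out of a
    video as 16 kHz mono WAV."""
    return [
        "ffmpeg", "-i", video_path,  # input video file
        "-vn",                       # drop the video stream
        "-ac", "1",                  # downmix to mono
        "-ar", "16000",              # resample to 16 kHz
        wav_path,                    # output WAV file
    ]

def extract_audio(video_path, wav_path):
    """Run ffmpeg and raise if the extraction fails."""
    subprocess.run(ffmpeg_extract_cmd(video_path, wav_path), check=True)
```

The resulting WAV file could then be fed to whichever speech API you choose.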

Using the Watson Speech to Text service in an iOS app with Bluemix

We are creating an iOS application on Bluemix and we are trying to link the Speech to Text service. We've bound the service to the application, but now we don't know how to utilize the service within our app.
How do we use the Speech to Text API in our iOS app with our back end hosted on Bluemix?
You have two options:
You make the call to the Watson Speech to Text service directly from your iOS application. You can either invoke the REST API directly from your iOS app using something like RestKit, or you can use the Watson Speech iOS SDK to make that invocation easier.
You can send all the received audio to your app on Bluemix (serving as a mobile back end) and invoke the Speech to Text REST API from there. This will offload computation from the mobile device, but will most likely increase the latency of getting back the audio transcription to your mobile phone.
Additionally, there is now a Watson iOS SDK which includes the Speech to Text service. This seems like an ideal solution over using the REST API directly if you plan to do a lot of work with Watson.
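The direct REST call from option 1 can be sketched with nothing but the standard library. Note the assumptions: the endpoint URL is a placeholder for your own service instance's `/v1/recognize` URL, and HTTP Basic auth with an `apikey` username reflects the current IBM Cloud credential scheme rather than the original Bluemix username/password pairs:

```python
import base64
import urllib.request

def build_watson_stt_request(audio_bytes, url, apikey):
    """Build (but do not send) a Watson Speech to Text recognize request.

    `url` is your service instance's /v1/recognize endpoint (placeholder),
    and `apikey` is the service credential from the IBM Cloud dashboard.
    """
    # IBM Cloud API keys are sent as Basic auth with the literal user "apikey"
    token = base64.b64encode("apikey:{}".format(apikey).encode()).decode()
    return urllib.request.Request(
        url,
        data=audio_bytes,                       # raw audio as the POST body
        headers={
            "Content-Type": "audio/wav",        # must match the audio format
            "Authorization": "Basic " + token,
        },
        method="POST",
    )
```

Sending the request with `urllib.request.urlopen()` would return a JSON transcription result; the same request could equally be issued from the Bluemix back end in option 2.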
