Google Speech-to-Text API returns "|" in results using C# - google-cloud-speech

I'm using Google Speech-to-Text to recognize speech from a microphone on a Windows PC.
My target language is "ja-JP" and I've already written an app that can recognize some Japanese sentences. But I sometimes get a strange result like "2010|ニセンジュー,ニーゼロイチゼロ 年代|ネンダイ".
I'm confused by results that include "|". Does anybody know how to get a correct result? Please help me.
I created my application using the Google NuGet library for Windows C#, "Google.Cloud.Speech.V1" version 1.2.0. The language is "ja-JP".
Here is my config information.
var recogConfig = new RecognitionConfig()
{
    Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
    SampleRateHertz = 16000,
    LanguageCode = "ja-JP",
    Model = "command_and_search",
    ProfanityFilter = false,
};
StreamingConfig = new StreamingRecognitionConfig()
{
    Config = recogConfig,
    InterimResults = true,
    SingleUtterance = true,
};
I expect the output of "transcript" to be "2010年代の",
but the actual output is "2010|ニセンジュー,ニーゼロイチゼロ 年代|ネンダイ の|ノ".
The StreamingRecognizeResponse result is below.
{[ { "alternatives": [ { "transcript": "2010|ニセンジュー,ニーゼロイチゼロ 年代|ネンダイ の|ノ" } ], "isFinal": true, "resultEndTime": "2.820s" } ]}

Related

Column_limit for yapf and pylsp on neovim

I'm running nvim 0.9 with a config I took from kickstart.nvim, so nvim-lspconfig, mason, plus other stuff.
I configured yapf based on how I understand the LSP docs and kickstart.nvim, yet it is not respecting the custom column_limit; it seems to be stuck at a 79-character line length, if yapf is actually the one doing the formatting.
Here is the Format command:
vim.api.nvim_buf_create_user_command(bufnr, 'Format', function(_)
  vim.lsp.buf.format()
end, { desc = 'Format current buffer with LSP' })
And the config for pylsp (autopep8 switched off, as the docs say):
pylsp = {
  plugins = {
    autopep8 = {
      enabled = false
    },
    yapf = {
      enabled = true,
      args = '--style={based_on_style: google column_limit: 120}'
    },
    pylint = {
      enabled = true,
      maxLineLength = 120
    },
  }
}
I'm new to Lua; I'm missing something but can't figure out where, or get a good search hit on it.

Trouble reading user-accessible files

I am using nativescript-mediafilepicker as a means of choosing a file, and it can read external storage successfully (I have downloaded a PDF to the 'Downloads' folder on iOS and I am able to pick it). I then try to load the file using the file-system module from the NativeScript library, and this fails with: NativeScript encountered a fatal error: Uncaught Error: You can't save the file "com.xxxxxx" because the volume is read only. This doesn't make sense, as I am only trying to read; I don't understand where the saving part comes from. The error comes from the fileSystemModule.File.fromPath() line.
Something to note: file['file'] is file:///Users/adair/Library/Developer/CoreSimulator/Devices/82F397CE-B0B3-4ADD-AD52-805265C7AC49/data/Containers/Data/Application/7B47A8BD-6DBA-42CF-8792-38A8C5E61174/tmp/com.xxxxxx/test.pdf
Is the file automatically being pulled to an application-specific directory by this media picker?
getFiles() {
  let extensions = [];
  if (app.ios) {
    extensions = [kUTTypePDF]; // you can get more types from here: https://developer.apple.com/documentation/mobilecoreservices/uttype
  } else {
    extensions = ["pdf"];
  }
  const mediaFilePicker = new Mediafilepicker();
  const filePickerOptions = {
    android: {
      extensions,
      maxNumberFiles: 1,
    },
    ios: {
      extensions,
      maxNumberFiles: 1,
    },
  };
  masterPermissions
    .requestPermissions([masterPermissions.PERMISSIONS.READ_EXTERNAL_STORAGE, masterPermissions.PERMISSIONS.WRITE_EXTERNAL_STORAGE])
    .then((fulfilled) => {
      console.log(fulfilled);
      mediaFilePicker.openFilePicker(filePickerOptions);
      mediaFilePicker.on("getFiles", function (res) {
        let results = res.object.get("results");
        let file = results[0];
        console.dir(file);
        let fileObject = fileSystemModule.File.fromPath(file["file"]);
        console.log(fileObject);
      });
      mediaFilePicker.on("error", function (res) {
        let msg = res.object.get("msg");
        console.log(msg);
      });
      mediaFilePicker.on("cancel", function (res) {
        let msg = res.object.get("msg");
        console.log(msg);
      });
    })
    .catch((e) => {
      console.log(e);
    });
},
The issue I experienced results from a mismatch between what File.fromPath expects and what the file picker returns. The file picker returns a "file://path" URI, while File.fromPath expects a string of just "path".
Simply using the following instead is enough:
let fileObject = fileSystemModule.File.fromPath(file["file"].replace("file://", ""));

Using Google Assistant to Change a Firebase Database Value

I created an Android app in which pressing a button changes a value (0/1) in a Firebase database. I want to do this using Google Assistant. Please help me out; I searched but didn't find any relevant guide.
The code to do this is fairly straightforward. In your webhook fulfillment you'll need a Firebase database object, which I call fbdb below. In your intent handler, you'll get a reference to the location you want to change and make the change.
In JavaScript, this might look something like this:
app.intent('value.update', conv => {
  var newValue = conv.parameters.value;
  var ref = fbdb.ref('path/to/value');
  return ref.set(newValue)
    .then(result => {
      return conv.ask(`Ok, I've set it to ${newValue}, what do you want to do now?`);
    })
    .catch(err => {
      console.error(err);
      return conv.close('I had a problem with the database. Try again later.');
    });
});
The real problem you have is deciding what user you want to use to do the update. You can do this with an admin-level connection, which gives you broad access beyond what your security rules allow. Consult the authentication guides and be careful.
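For reference, getting that fbdb object with an admin-level connection in a Cloud Functions webhook might look roughly like this (a minimal sketch, assuming the firebase-admin and firebase-functions packages; initialization details vary by setup):
const functions = require('firebase-functions');
const admin = require('firebase-admin');

// Initialize the Admin SDK with the project's default credentials.
// An admin connection bypasses your security rules, so handle it carefully.
admin.initializeApp(functions.config().firebase);

// The database handle used as fbdb in the intent handler above.
const fbdb = admin.database();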
I am actually working on a project using a Dialogflow webhook with an integrated Firebase database. To make this possible you have to use the fulfillment in JSON format (you can't call the Firebase database the way you are doing).
Here is an example that calls the Firebase database and displays a simple text response from a function.
First you have to take the variable from the JSON. It's something like this (it depends on your entity name; in my case it was "tema"):
var concepto = request.body.queryResult.parameters.tema;
and then in your function:
'Sample': () => {
  db.child(concepto).child("DESCRIP").once('value', snap => {
    var descript = snap.val(); // firebase data
    let responseToUser = {
      "fulfillmentMessages": [
        { // RESPONSE FOR WEB PLATFORM ===================================
          'platform': 'PLATFORM_UNSPECIFIED',
          "text": {
            "text": [
              "Esta es una respuesta por escritura de PLATFORM_UNSPECIFIED" + descript
            ]
          },
        }
      ]
    };
    sendResponse(responseToUser); // Send simple response to user
  });
},
These are links for formatting your JSON:
A) https://cloud.google.com/dialogflow-enterprise/docs/reference/rest/Shared.Types/Platform
B) https://cloud.google.com/dialogflow-enterprise/docs/reference/rest/Shared.Types/Message#Text
And finally, this is a sample that helped a lot:
https://www.youtube.com/watch?v=FuKPQJoHJ_g
Have a nice day!
After searching, I found a guide which can help with this:
First, we need to create a chatbot on Dialogflow / API.AI.
Then we need to train our bot and use a webhook as fulfillment in the response.
Next, we need to set up firebase-tools for sending the reply and making changes in the Firebase database.
At last, we need to integrate Dialogflow with Google Assistant using google-actions.
Here is the sample code I used:
var admin = require('firebase-admin');
const functions = require('firebase-functions');
admin.initializeApp(functions.config().firebase);
var database = admin.database();

// Create and Deploy Your First Cloud Functions
// https://firebase.google.com/docs/functions/write-firebase-functions

exports.hello = functions.https.onRequest((request, response) => {
  let params = request.body.result.parameters;
  database.ref().set(params);
  response.send({
    speech: "Light controlled successfully"
  });
});

createAnswer format mediaConstraints - convertDartToNative_Dictionary

I have a working WebRTC app written in JS and am now trying to port it to Dart. I am stuck on the format when setting media constraints for createAnswer.
https://api.dartlang.org/stable/1.24.3/dart-html/RtcPeerConnection/createAnswer.html
In JS the working media constraint is:
var mediaConstraints = {
  optional: [{RtpDataChannels: true}],
  mandatory: {
    OfferToReceiveAudio: false,
    OfferToReceiveVideo: false
  }
};
With Dart, the createAnswer mediaConstraints are converted with convertDartToNative_Dictionary. I have tried various formats, but I'm stuck with Error: OperationError: Malformed constraints object. from js_helper.dart:1772.
In Dart I have tried, e.g.:
Map mediaConstraints = {
  "optional": [{"RtpDataChannels": true}],
  "mandatory": {
    "OfferToReceiveAudio": false,
    "OfferToReceiveVideo": false
  }
};
Basically, what is needed is something that, when converted with convertDartToNative_Dictionary, matches the JS object above. It's easy to test, so even loose tips are welcome.

How to upload a file using LrHttp.postMultipart in Lua

I need to send an image file using a multipart request from Lightroom to my local web service, using the Lua language.
I have also tested sending headers, but it is not working.
I have created a function:
function testupload(filepath) -- created inside LrTasks
  local url = "http://localhosturl"
  local mycontent = {
    {
      name = "lightroom_message",
      value = "sent from lightroom plugin multiparta"
    },
    {
      name = 'file',
      filePath = filepath,
      fileName = LrPathUtils.leafName(filepath),
      contentType = 'image/jpeg'
      --contentType = 'multipart/form-data'
    }
  }
  local response, headers = LrHttp.postMultipart(url, mycontent)
end
But my web service is not getting called properly, and I am using the LrHttp.postMultipart() method to do so.
If I send just this param to the web service, it works fine:
{
  name = "lightroom_message",
  value = "sent from lightroom plugin multiparta"
}
But when I include my file payload, it does not work using a pure Lua implementation.
Everything was correct except for one technical mistake: I was trying to call the testupload() function from inside LrTasks, but we don't need to call it in a separate task, and then the function works perfectly.
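For illustration, the difference is roughly this (a minimal sketch, assuming the same testupload() function and filepath as above):
local LrTasks = import 'LrTasks'

-- What I was doing before: wrapping the call in a separate async task.
-- LrTasks.startAsyncTask(function()
--     testupload(filepath)
-- end)

-- What works: calling the function directly.
testupload(filepath)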
