Xamarin couchbase server and sync gateway not syncing - xamarin.android

In Xamarin, I am trying to sync the local database with Sync Gateway, but it is not syncing.
I am trying it with this sample: https://github.com/couchbaselabs/mini-hacks/tree/master/kitchen-sync
The local database is getting updated, but it is not syncing with Sync Gateway.
I have provided the sync URL: http://localhost:4984/sync_gateway/
I have also updated the config file:
{
  "interface": ":4984",
  "adminInterface": ":4985",
  "log": ["CRUD+", "REST+", "Changes+", "Attach+"],
  "databases": {
    "sync_gateway": {
      "server": "walrus:data",
      "bucket": "sync_gateway",
      "sync": `function(doc) {
        channel(doc.channels);
      }`,
      "users": {
        "GUEST": {
          "disabled": true,
          "admin_channels": [ "*" ]
        }
      }
    }
  }
}
When I check the data bucket, the item count does not increase. Can anyone help me out?
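For reference, here is a minimal sketch of how replication is typically set up in the kitchen-sync sample with Couchbase Lite 1.x for .NET (the database name and host here are assumptions; note that on an Android emulator, localhost refers to the emulated device itself, so Sync Gateway running on the host machine is usually reached via 10.0.2.2):
using System;
using Couchbase.Lite;

// Open (or create) the local database used by the sample.
var db = Manager.SharedInstance.GetDatabase("kitchen-sync");

// On an Android emulator, "localhost" is the device itself;
// 10.0.2.2 reaches the host machine where Sync Gateway runs.
var syncUrl = new Uri("http://10.0.2.2:4984/sync_gateway");

// Continuous push and pull replications keep the local
// database and Sync Gateway in sync.
var push = db.CreatePushReplication(syncUrl);
var pull = db.CreatePullReplication(syncUrl);
push.Continuous = true;
pull.Continuous = true;
push.Start();
pull.Start();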

Related

Thingsboard Upload Converter with multiple timestamps

My device takes measurements more often than it communicates with the MQTT broker, so there can be more than one timestamp in each message, like this:
my/device/telemetry 1651396728000:22,13;1651400328000:25,10;...so on
I want to use the built-in ThingsBoard MQTT integration with my custom Upload Converter, but I can't find the proper format for the result object with multiple timestamps in it (as it was in the Gateway Telemetry API).
The output of your data converter should be an array like this:
var result = [
  {
    "deviceName": "88888888",
    "deviceType": "tracker",
    "attributes": {
      "att1": "val1"
    },
    "telemetry": {
      "ts": 1652738915000,
      "values": {
        "blah": "blooo",
        "External Voltage": 12812
      }
    }
  },
  {same},
  {similar}
]
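If each message carries several timestamps, "telemetry" can also be an array of {ts, values} objects, one per measurement. A sketch of parsing the payload format above, assuming payloadStr already holds the decoded message string (the field names temp and hum are made up for illustration):
// Hypothetical parsing of "1651396728000:22,13;1651400328000:25,10;..."
// into one telemetry entry per timestamp.
var telemetry = payloadStr.split(';')
  .filter(function (entry) { return entry.indexOf(':') > -1; })
  .map(function (entry) {
    var parts = entry.split(':');
    var values = parts[1].split(',');
    return {
      "ts": parseInt(parts[0]),
      "values": { "temp": parseInt(values[0]), "hum": parseInt(values[1]) }
    };
  });

var result = [
  {
    "deviceName": "88888888",
    "deviceType": "tracker",
    "telemetry": telemetry
  }
];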

Firebase Storage Emulator rules not working

My app is working just fine but when I use the Firebase emulators, I have a problem with the Storage rules not being obeyed.
I have downloaded the storage.rules file and it is in the same directory as the firebase.json file. The Emulator Suite launches just fine and I can see that the Storage emulator is working.
However, when I try to upload an image (as I do in the live app) I get an error.
Error while uploading file: Error Domain=FIRStorageErrorDomain Code=-13021 "User does not have permission to access gs://my-stuff-7796d.appspot.com/Profiles/0ye7psTQA4xR6DfjZRXjjtCWKyPw.jpg." UserInfo={object=Profiles/0ye7psTQA4xR6DfjZRXjjtCWKyPw.jpg, ResponseBody={"error":{"code":403,"message":"Permission denied. No WRITE permission."}}, bucket=my-stuff-7796d.appspot.com, data={length = 74, bytes = 0x7b226572 726f7222 3a7b2263 6f646522 ... 73696f6e 2e227d7d }, data_content_type=application/json; charset=utf-8, NSLocalizedDescription=User does not have permission to access gs://my-stuff-7796d.appspot.com/Profiles/0ye7psTQA4xR6DfjZRXjjtCWKyPw.jpg., ResponseErrorDomain=com.google.HTTPStatus, ResponseErrorCode=403}
The storage.rules are:
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if request.auth != null;
    }
  }
}
Again, running against the live Firebase works just fine and the rules are obeyed.
Here is my firebase.json file
"firestore": {
"rules": "firestore.rules",
"indexes": "firestore.indexes.json"
},
"storage": {
"rules": "storage.rules"
},
"emulators": {
"auth": {
"port": 9099
},
"firestore": {
"port": 8080
},
"storage": {
"port": 9199
},
"ui": {
"enabled": true
}
}
}
When I launch my app, this is the code I run after calling FirebaseApp.configure():
Auth.auth().useEmulator(withHost:"localhost", port:9099)
Storage.storage().useEmulator(withHost:"localhost", port:9199)
let settings = Firestore.firestore().settings
settings.host = "localhost:8080"
settings.isPersistenceEnabled = false
settings.isSSLEnabled = false
Firestore.firestore().settings = settings
What am I missing, or is this a bug?
I also had this issue; it seems to have been resolved for me in version 11.8.0. As a temporary workaround, I allowed all reads/writes so that I didn't have to use my production environment and pay for usage. Not an ideal solution, but it unblocked me.
But for those who might be having similar issues, try updating to the latest firebase-tools:
npm install -g firebase-tools
Be sure to address any issues with:
npm audit fix
Or make the following change to the storage.rules file:
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write;
    }
  }
}

With Google Cloud Speech-to-Text, why do I get different results for the same audio file, depending on which bucket I put it into?

I am trying to use Google Cloud Speech-to-Text with the client libraries from a Node.js environment, and I see something I don't understand: I get a different result for the same example audio file, with the same configuration, depending on whether I use it from the original sample bucket or from my own bucket.
Here are the requests and responses:
The baseline is Google's own test data file, available here: https://storage.googleapis.com/cloud-samples-tests/speech/brooklyn.flac
Request:
{
  "config": {
    "encoding": "FLAC",
    "languageCode": "en-US",
    "sampleRateHertz": 16000,
    "enableAutomaticPunctuation": true
  },
  "audio": {
    "uri": "gs://cloud-samples-tests/speech/brooklyn.flac"
  }
}
Response:
{
  "results": [
    {
      "alternatives": [
        {
          "transcript": "How old is the Brooklyn Bridge?",
          "confidence": 0.9831430315971375
        }
      ]
    }
  ]
}
So far, so good. But, if I download this audio file, re-upload it to my own bucket, and do the same, then:
Request:
{
  "config": {
    "encoding": "FLAC",
    "languageCode": "en-US",
    "sampleRateHertz": 16000,
    "enableAutomaticPunctuation": true
  },
  "audio": {
    "uri": "gs://goe-transcript-creation/brooklyn.flac"
  }
}
Response:
{
  "results": [
    {
      "alternatives": [
        {
          "transcript": "how old is",
          "confidence": 0.8902621865272522
        }
      ]
    }
  ]
}
As you can see this is the same request. The re-uploaded audio data is here: https://storage.googleapis.com/goe-transcript-creation/brooklyn.flac
This is the exact same file as in the first example; not a bit of difference.
Still, the results are different; I only get half of the sentence.
What am I missing here? Thanks.
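For reference, this is roughly how the request above is issued from Node.js with the @google-cloud/speech client library (a minimal sketch; the URI is the sample bucket from the question):
// Minimal sketch using the @google-cloud/speech client library.
const speech = require('@google-cloud/speech');
const client = new speech.SpeechClient();

async function transcribe(uri) {
  const [response] = await client.recognize({
    config: {
      encoding: 'FLAC',
      languageCode: 'en-US',
      sampleRateHertz: 16000,
      enableAutomaticPunctuation: true,
    },
    audio: { uri: uri },
  });
  // Join the top alternative of each result into one transcript.
  return response.results
    .map((r) => r.alternatives[0].transcript)
    .join('\n');
}

transcribe('gs://cloud-samples-tests/speech/brooklyn.flac')
  .then(console.log)
  .catch(console.error);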
Update 1:
The same thing happens with the CLI tool, too:
$ gcloud ml speech recognize gs://cloud-samples-tests/speech/brooklyn.flac --language-code=en-US
{
  "results": [
    {
      "alternatives": [
        {
          "confidence": 0.98314303,
          "transcript": "how old is the Brooklyn Bridge"
        }
      ]
    }
  ]
}
$ gcloud ml speech recognize gs://goe-transcript-creation/brooklyn.flac --language-code=en-US
ERROR: (gcloud.ml.speech.recognize) INVALID_ARGUMENT: Invalid recognition 'config': bad encoding..
$ gcloud ml speech recognize gs://goe-transcript-creation/brooklyn.flac --language-code=en-US --encoding=FLAC
ERROR: (gcloud.ml.speech.recognize) INVALID_ARGUMENT: Invalid recognition 'config': bad sample rate hertz.
$ gcloud ml speech recognize gs://goe-transcript-creation/brooklyn.flac --language-code=en-US --encoding=FLAC --sample-rate=16000
{
  "results": [
    {
      "alternatives": [
        {
          "confidence": 0.8902483,
          "transcript": "how old is"
        }
      ]
    }
  ]
}
It's also interesting that when pulling the audio from the other bucket, I need to specify the encoding and sample rate explicitly, otherwise it doesn't work; that isn't necessary with the original test bucket.
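The fact that encoding and sample rate have to be spelled out for one bucket but not the other suggests the two objects may differ in metadata rather than content; gsutil stat can show each object's Content-Type and other metadata for comparison:
$ gsutil stat gs://cloud-samples-tests/speech/brooklyn.flac
$ gsutil stat gs://goe-transcript-creation/brooklyn.flac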
Update 2:
If I don't use Google Cloud Storage, but upload the data directly in the speech-to-text request, it works as intended:
$ gcloud ml speech recognize brooklyn.flac --language-code=en-US
{
  "results": [
    {
      "alternatives": [
        {
          "confidence": 0.98314303,
          "transcript": "how old is the Brooklyn Bridge"
        }
      ]
    }
  ]
}
So the problem doesn't seem to be with the recognition itself, but with accessing the audio data. The obvious guess would be that the upload is at fault, and the data is somehow corrupted along the way.
We can verify that by pulling the data back from the cloud and comparing it with the original. It doesn't seem to be broken.
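(For example, a byte-for-byte comparison along these lines:)
$ gsutil cp gs://goe-transcript-creation/brooklyn.flac reuploaded.flac
$ cmp brooklyn.flac reuploaded.flac && echo "files are identical"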
So maybe it's a problem when the Speech-to-Text service accesses the storage service? But why with one bucket only? Or is it some kind of file metadata problem?

install plugin for Open Distro

Amazon Elasticsearch Service offers k-Nearest Neighbor (k-NN) search, which can enhance similarity-search use cases.
https://aws.amazon.com/about-aws/whats-new/2020/03/build-k-nearest-neighbor-similarity-search-engine-with-amazon-elasticsearch-service/
I tried this official code that I found here...
https://github.com/opendistro-for-elasticsearch/k-NN
PUT /myindex
{
  "settings": {
    "index": {
      "knn": true
    }
  },
  "mappings": {
    "properties": {
      "my_vector1": {
        "type": "knn_vector",
        "dimension": 2
      },
      "my_vector2": {
        "type": "knn_vector",
        "dimension": 4
      },
      "my_vector3": {
        "type": "knn_vector",
        "dimension": 8
      }
    }
  }
}
Getting this error:
"unknown setting [index.knn] please check that any required plugins
are installed, or check the breaking changes documentation for removed
settings"
How do I check if my Elastic installation supports this feature?
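One way to check is the cat plugins API, which lists the plugins installed on each node (if the k-NN plugin is available on your cluster, it should appear in the output):
GET /_cat/plugins?v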
t2.small and t2.medium instance types are not supported (this is not mentioned anywhere in the documentation). It worked as expected when the r5.large instance type was selected.

Sensu /checks API call appears empty

I am running Sensu as a series of Docker containers (sensu-server, sensu-api, n sensu-clients, rabbitmq, and redis). The clients successfully register themselves and run checks requested by the server, and their results are reported via handlers and via /clients, but API calls to /checks return nothing.
Server config:
{
  "rabbitmq": {
    "host": "rabbitmq"
  },
  "redis": {
    "host": "redis"
  },
  "api": {
    "host": "api",
    "port": 4567
  },
  "handlers": { ... },
  "checks": { ... }
}
API config:
{
  "rabbitmq": {
    "host": "rabbitmq"
  },
  "redis": {
    "host": "redis"
  },
  "api": {
    "host": "api",
    "port": 4567
  }
}
Client config:
{
  "client": {
    "name": "openshift-{{ .Env.AVAILABILITY_ZONE }}",
    "address": "{{ .Env.HOSTNAME }}",
    "subscriptions": [
      "{{ .Env.AVAILABILITY_ZONE }}",
      "any-client"
    ]
  },
  "rabbitmq": {
    "host": "rabbitmq"
  }
}
I solved this in a similar scenario: our configuration didn't give the API and server containers (which ran as separate Docker containers) a copy of the check definitions.
Here's the GitHub issue that led me to it: https://github.com/sensu/uchiwa/issues/83#issuecomment-51917336
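For example, a minimal check definition (the name, command, and interval here are illustrative) has to be present in the config directory of the server and API containers as well, not only on the clients:
{
  "checks": {
    "example_check": {
      "command": "check-http.rb -u http://localhost:8080/health",
      "subscribers": ["any-client"],
      "interval": 60
    }
  }
}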
