Firebase Realtime Database rules for specific property - firebase-realtime-database

I use Firebase Realtime Database in my project. The JSON structure is:
"records" : {
  "FwdxA8vqUBW79nT9o9447RT4Yqa2" : {
    "-Lzqsastr4Ywnljvsn-b" : {
      "creationDate" : 1.580395032274364E9,
      "filename" : "4822DF69-8170-4BEE-92CC-18D3435A4E52",
      "length" : "00:07",
      "likeCount" : 0,
      "link" : "",
      "playingCount" : 15,
      "recordId" : "-Lzqsastr4Ywnljvsn-b",
      "recordUrl" : "https://firebasestorage.googleapis.com.....",
      "source" : "",
      "title" : "Ftbf",
      "userUid" : "FwdxA8vqUBW79nT9o9447RT4Yqa2"
    }
  }
}
Here "FwdxA8vqUBW79nT9o9447RT4Yqa2" is the userUid and "-Lzqsastr4Ywnljvsn-b" is the recordId.
How can I set security rules just for "likeCount" and "playingCount":
".write": "auth != null"
while keeping this rule for the whole "records" node:
"$uid" : {
  ".write": "auth != null && auth.uid == $uid"
}
Update: I figured it out. The correct rules for me are:
{
  "rules": {
    ".read": "auth != null",
    "records": {
      ".read": "auth != null",
      "$uid": {
        ".write": "auth != null && auth.uid == $uid",
        "$recordId": {
          "playingCount": {
            ".write": "auth != null"
          },
          "likeCount": {
            ".write": "auth != null"
          }
        }
      }
    }
  }
}

Related

Why does this .indexOn: 'g' no longer work as it should?

I can no longer place activeDrivers in the correct position to make it work. "activeDrivers" is a direct branch of the root, so the way I placed it treats it as if it were a child of drivers, which seems wrong to me, but I can't find the solution.
The rules at the beginning were like this (practically no security):
{
  "rules": {
    ".read": true,
    ".write": true,
    "activeDrivers": {
      ".indexOn": ["g"]
    }
  }
}
The new rules are:
{
  "rules": {
    "drivers": {
      "$uid": {
        ".read": "auth !== null && auth.uid === $uid",
        ".write": "auth !== null && auth.uid === $uid",
        "activeDrivers": {
          ".indexOn": ["g"]
        }
      }
    }
  }
}
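Since activeDrivers is a direct child of the root in the database JSON below, one likely fix (a sketch, not verified against this project) is to keep the .indexOn on a root-level activeDrivers node instead of nesting it under drivers/$uid:

```json
{
  "rules": {
    "activeDrivers": {
      ".indexOn": ["g"]
    },
    "drivers": {
      "$uid": {
        ".read": "auth !== null && auth.uid === $uid",
        ".write": "auth !== null && auth.uid === $uid"
      }
    }
  }
}
```

Geofire runs its queries against the path passed to Geofire.initialize("activeDrivers"), so the index generally has to be defined at that same path in the rules for the query to use it.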
This is the JSON of the Database:
{
"ALL Ride Requests": {
"-NJlIOrp49pNBSuuEawT": {
"destination": {
"latitude": "41.929612",
"longitude": "12.4858561"
},
"destinationAddress": "Duke's",
"driverId": "waiting",
"origin": {
"latitude": "41.919225",
"longitude": "12.5266267"
},
"originAddress": "Circonvallazione Nomentana, 270, 00162 Roma RM, Italy",
"time": "2022-12-20 21:50:48.437521",
"userName": "Peter Parker",
"userPhone": "090012321354"
}
},
"activeDrivers": {
"31uMBJKvl6PF7BVKQnqHORTrWRP2": {
".priority": "sr2yt23wk8",
"g": "sr2yt23wl8",
"l": [
41.926275,
12.5376567
]
},
"G3OJ79KiLeMmxqoUpJ1NggIKbiF2": {
".priority": "sr2yt23wk8",
"g": "sr2yt23rk8",
"l": [
41.926275,
12.5376567
]
},
"filTJu3xjiQYTzehD71kzY1Oybz1": {
".priority": "sr2yt23wk8",
"g": "sr2yy23wk8",
"l": [
41.926275,
12.5376567
]
}
},
"drivers": {
"G3OJ79KiLeMmxqoUpJ1NggIKbiF2": {
"earnings": "0.0",
"email": "simo#hotmail.it",
"id": "G3OJ79KiLeMmxqoUpJ1NggIKbiF2",
"name": "Simone Demichele Test",
"newRideStatus": "idle",
"password": "testpassword",
"phone": "3452362585",
"token": "fFIwv1IHRv2ylXyW-qRVGQ:APA91bH7IfcNBBi7Y53wRjQKN12-nBUgFHHpf7F0LeWCstG_MIqt-mkobRN6nvUZxqbMnMlXU2yMHdE-efYykUdtcXl-91wW5rGyQcpMl6Dij6bxC8snQRkAMGBhQUmyqYW6sBhY6Ul3"
},
"Gzs0qxNiE6VZrwNGPYpFPRB7J8p1": {
"email": "giacomo#gmail.com",
"id": "Gzs0qxNiE6VZrwNGPYpFPRB7J8p1",
"name": "giacomo",
"newRideStatus": "idle",
"phone": "134634564352",
"token": "cdG8W0lFRZ2F0jAeG2WbnO:APA91bHTtbtSg18sPNxcH92TIdM3UuX-qcim2-h0I_1YPcBgucIGO0I3ACIQEGVpmsg-EKcRZXFIMIXlxGk1iBw-V0hmDZBVWDyFh44yH2iM8fZgrhRsQzyVBdXdLxGMI0Vz67hQFUQU"
},
"OLhsoowb9FXTBtWQmGysGLLBgNz2": {
"email": "gino#gmail.com",
"id": "OLhsoowb9FXTBtWQmGysGLLBgNz2",
"name": "Gino",
"phone": "4568965432",
"token": "ewktnxHKRSCBWwilm_SDfd:APA91bFchfCs_33zZNGNtbUHPf6TBC3Vrua7U4QT0bX_e4cr6Z62XSPr5TwZyL-BQ12fmDa3XKnAqeLDQy7NMUKp9m8bq676jS_i5n1vjNMFYJ4tHfIYjntEwYKvOcsTp0yh4ZSvjuxb"
},
"ZYqM7SbG3IhRBAGCfXRZHHs7XdG3": {
"email": "laura#gmail.com",
"id": "ZYqM7SbG3IhRBAGCfXRZHHs7XdG3",
"name": "Laura",
"newRideStatus": "idle",
"phone": "1212432554325",
"token": "cdG8W0lFRZ2F0jAeG2WbnO:APA91bHTtbtSg18sPNxcH92TIdM3UuX-qcim2-h0I_1YPcBgucIGO0I3ACIQEGVpmsg-EKcRZXFIMIXlxGk1iBw-V0hmDZBVWDyFh44yH2iM8fZgrhRsQzyVBdXdLxGMI0Vz67hQFUQU"
},
"filTJu3xjiQYTzehD71kzY1Oybz1": {
"email": "maria#gmail.com",
"id": "filTJu3xjiQYTzehD71kzY1Oybz1",
"name": "Maria",
"newRideStatus": "idle",
"phone": "3726761919",
"token": "fFIwv1IHRv2ylXyW-qRVGQ:APA91bH7IfcNBBi7Y53wRjQKN12-nBUgFHHpf7F0LeWCstG_MIqt-mkobRN6nvUZxqbMnMlXU2yMHdE-efYykUdtcXl-91wW5rGyQcpMl6Dij6bxC8snQRkAMGBhQUmyqYW6sBhY6Ul3"
}
},
"users": {
"yHc7xseCT0QePnpolGHZvdr8ARV2": {
"email": "peter#gmail.com",
"id": "yHc7xseCT0QePnpolGHZvdr8ARV2",
"name": "Peter Parker",
"phone": "090012321354"
}
}
}
This is the query:
Position pos = await Geolocator.getCurrentPosition(
desiredAccuracy: LocationAccuracy.high,
);
driverCurrentPosition = pos;
Geofire.initialize("activeDrivers");
Geofire.setLocation(
currentFirebaseUser!.uid,
driverCurrentPosition!.latitude,
driverCurrentPosition!.longitude
);

How to merge two aggregation results in Elasticsearch

I am facing issues with Elasticsearch aggregation grouping inside top_hits; I need a unique student count in the top_hits.
Elasticsearch mapping:
{
"board" : {
"properties" : {
"notApplied" : {
"type" : "date"
}
}
}
}
Query :
{
"size": 0,
"query": {},
"aggs": {
"notApplied": {
"filter": {
"exists": {
"field": "board.notApplied"
}
},
"aggs": {
"top_student_hits": {
"top_hits": {
"sort": [
{
"board.notApplied": {
"order": "desc"
}
}
],
"script_fields": {
"dues": {
"script": {
"source": "if (doc.containsKey('board.notApplied') && doc['board.notApplied'].size() != 0) { (doc['board.notApplied'].value.toInstant().toEpochMilli()-params.date)/86400000 } else { 0; }",
"params": {
"date": 1669939199059 // --> < 1 day
}
}
}
},
"_source": {
"includes": [
"id",
"studentName",
"usercode",
"board.notApplied",
"userId"
]
},
"size": 5
}
}
}
}
}
}
Output for the above query :
{
"took" : 11,
...
"aggregations" : {
"notApplied" : {
"doc_count" : 42,
"top_student_hits" : {
"hits" : {
"total" : {
"value" : 42,
"relation" : "eq"
},
"max_score" : null,
"hits" : [
{
"_index" : "applications",
"_type" : "_doc",
"_id" : "4b85533822f91e9b99392f16dedaae1f",
"_score" : null,
"_source" : {
"board" : {
"notApplied" : "2022-10-25T00:00:00.000Z"
},
"studentName" : "Joe",
"id" : "4b85533822f91e9b99392f16dedaae1f",
"userId" : "45a47d1314041ab287a277679ff19922"
},
"fields" : {
"dues" : [
-37
]
},
"sort" : [
1666656000000
]
},
{
"_index" : "applications",
"_type" : "_doc",
"_id" : "1897f32d2d7f691e42c3fe6ebe631c7d",
"_score" : null,
"_source" : {
"board" : {
"notApplied" : "2022-10-25T00:00:00.000Z"
},
"studentName" : "Joe",
"id" : "1897f32d2d7f691e42c3fe6ebe631c7d",
"userId" : "45a47d1314041ab287a277679ff19922"
},
"fields" : {
"dues" : [
-37
]
},
"sort" : [
1666656000000
]
},
{
"_index" : "applications",
"_type" : "_doc",
"_id" : "f0b25dc9a911782ace5af36db7bfbc1f",
"_score" : null,
"_source" : {
"board" : {
"notApplied" : "2022-10-25T00:00:00.000Z"
},
"studentName" : "Sam",
"id" : "f0b25dc9a911782ace5af36db7bfbc1f",
"userId" : "d84f9e5231daa902c37921de9126cad7"
},
"fields" : {
"dues" : [
-37
]
},
"sort" : [
1666656000000
]
},
{
"_index" : "applications",
"_type" : "_doc",
"_id" : "e7f84fa978a553e77716ab479d3d6ce5",
"_score" : null,
"_source" : {
"board" : {
"notApplied" : "2022-10-13T00:00:00.000Z"
},
"id" : "e7f84fa978a553e77716ab479d3d6ce5",
"studentName" : "Sam",
"userId" : "d84f9e5231daa902c37921de9126cad7"
},
"fields" : {
"dues" : [
-49
]
},
"sort" : [
1665619200000
]
},
{
"_index" : "applications",
"_type" : "_doc",
"_id" : "9cba9f6b0d7a28ef739b321291d00170",
"_score" : null,
"_source" : {
"board" : {
"notApplied" : "2022-09-20T00:00:00.000Z"
},
"studentName" : "Ctest17 ",
"id" : "9cba9f6b0d7a28ef739b321291d00170",
"userId" : "ddaf6d6162c8317fd90fec0b870132ce"
},
"fields" : {
"dues" : [
-72
]
},
"sort" : [
1663632000000
]
}
]
}
}
}
}
}
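For reference, the dues values in this output can be reproduced from the sort timestamps with the same arithmetic the Painless script uses. A quick Python check (the division truncates toward zero, as Java long division does):

```python
MS_PER_DAY = 86_400_000

def dues(not_applied_ms: int, ref_ms: int) -> int:
    # Days between board.notApplied and params.date, truncated toward
    # zero to mirror Painless/Java integer division.
    return int((not_applied_ms - ref_ms) / MS_PER_DAY)

# Timestamps taken from the "sort" values in the sample output,
# checked against the query's params.date of 1669939199059.
print(dues(1666656000000, 1669939199059))  # -37
print(dues(1665619200000, 1669939199059))  # -49
print(dues(1663632000000, 1669939199059))  # -72
```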
I am getting the expected results, but they are duplicated by userId.
I need the top_hits result without duplicates, or the buckets grouped by userId. The result should also be sorted descending by the dues (or notApplied) field.
Can anyone help me resolve this?
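One common way to get one hit per user is to group on userId with a terms aggregation and take a single top_hits per bucket, ordering buckets by a max sub-aggregation on board.notApplied. A sketch, assuming userId is indexed as a keyword field:

```json
{
  "size": 0,
  "aggs": {
    "notApplied": {
      "filter": { "exists": { "field": "board.notApplied" } },
      "aggs": {
        "by_user": {
          "terms": {
            "field": "userId",
            "size": 5,
            "order": { "latest": "desc" }
          },
          "aggs": {
            "latest": { "max": { "field": "board.notApplied" } },
            "top_student_hits": {
              "top_hits": {
                "sort": [ { "board.notApplied": { "order": "desc" } } ],
                "size": 1
              }
            }
          }
        }
      }
    }
  }
}
```

A cardinality aggregation on userId at the same level would give the unique-student count. Terms buckets can't be ordered by the script-computed dues directly, but since dues is a linear function of board.notApplied, ordering by max of board.notApplied yields the same order.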

Syntax error in database rules while deploying

While deploying my website to Firebase, I get a syntax error in the database rules:
2:5: Expected rules property.
Here is my firebase.json:
{
"hosting": {
"public": "public",
"ignore": [
"firebase.json",
"**/.*",
"**/node_modules/**"
]
},
"database": {
"rules": "firebase.json"
},
"emulators": {
"auth": {
"port": 9099
},
"database": {
"port": 9000
},
"hosting": {
"port": 5000
},
"ui": {
"enabled": true
}
}
}
The rules value should point to a dedicated rules file, not to the firebase.json file itself. Something like:
"rules": "database.rules.json"
That rules file must define valid database rules. For example:
{
"rules": {
".read": false,
".write": false
}
}
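Applied to the firebase.json from the question, the corrected file would then look like this (assuming the rules are saved as database.rules.json next to firebase.json):

```json
{
  "hosting": {
    "public": "public",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"]
  },
  "database": {
    "rules": "database.rules.json"
  },
  "emulators": {
    "auth": { "port": 9099 },
    "database": { "port": 9000 },
    "hosting": { "port": 5000 },
    "ui": { "enabled": true }
  }
}
```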

Can my FireBase database layout cause issues if I use this approach?

I have a Firebase structure that looks like this. I want to represent all countries in the world.
In this sample there are only 9 countries, and only United States and Venezuela have data, to demonstrate my problem.
I have denormalized and flattened the data as much as I can.
What happens here is that a user can search for street addresses like
US/California/Orange County/Orange/3138 E Maple Ave
In the db below it looks like this:
US/ADMINISTRATIVE_AREA_LEVEL_1/
US/ADMINISTRATIVE_AREA_LEVEL_2
US/LOCALITY
US/STREET_ADDRESS
....
....
"AE": {
"name": "United Arab Emirates"
},
"GB": {
"name": "United Kingdom"
},
"US": {
"name": "United States"
"ADMINISTRATIVE_AREA_LEVEL_1": {
"hjg86tghg8hubyhiuhb88ihi": {
"level1": "California"
},
},
"ADMINISTRATIVE_AREA_LEVEL_2": {
"hjg86tghg8hubyhiuhb88ihi": {
"level2": "Orange County"
},
},
"LOCALITY": {
"hjg86tghg8hubyhiuhb88ihi": {
"level2": "Orange"
},
},
"STREET_ADDRESS": {
"hjg86tghg8hubyhiuhb88ihi": {
"3138 E Maple Ave": {
}
}
},
"USER_LIST": {
"hjg86tghg8hubyhiuhb88ihi": {
"name": "Jhon Doe",
}
},
"CHAT_LIST": {
"hjg86tghg8hubyhiuhb88ihi": {
"title": "Wam-Bam-CHAT",
}
},
"chat_members": {
"hjg86tghg8hubyhiuhb88ihi": {
}
},
"chat_messages": {
"hjg86tghg8hubyhiuhb88ihi": {
},
},
"UM": {
"name": "United States Minor Outlying Islands"
},
"UY": {
"name": "Uruguay"
},
"UZ": {
"name": "Uzbekistan"
},
"VU": {
"name": "Vanuatu"
},
"VE": {
"name": "Venezuela"
"ADMINISTRATIVE_AREA_LEVEL_1": {
"swdkewsjdr34378943489324": {
"level1": "California"
},
},
"ADMINISTRATIVE_AREA_LEVEL_2": {
"swdkewsjdr34378943489324": {
"level2": "Orange County"
},
},
"LOCALITY": {
"swdkewsjdr34378943489324": {
"level2": "Orange"
},
},
"STREET_ADDRESS": {
"swdkewsjdr34378943489324": {
"3138 E Maple Ave": {
}
}
},
"USER_LIST": {
"swdkewsjdr34378943489324": {
"name": "Jhon Doe",
}
},
"CHAT_LIST": {
"swdkewsjdr34378943489324": {
"title": "Wam-Bam-CHAT",
}
},
"chat_members": {
"swdkewsjdr34378943489324": {
}
},
"chat_messages": {
"swdkewsjdr34378943489324": {
},
},
"VN": {
"name": "Viet Nam"
....
....
When I create the Realtime Database rules like this, I have to create 240 root nodes, one per country.
There's a lot of ".read": "$uid === auth.uid" and a lot of duplicated JSON, since ADMINISTRATIVE_AREA_LEVEL_1 and the others all look the same.
If I put ADMINISTRATIVE_AREA_LEVEL_1 at the root I would maybe have millions of entries, and for STREET_ADDRESS there are 154 million in the United States alone, not to speak of the whole world. So I group them like this, with the country as the root key node.
Small sample:
{
"rules": {
"SE": {
"ADMINISTRATIVE_AREA_LEVEL_1": {
"$uid": {
".read": "$uid === auth.uid",
".write": "$uid === auth.uid"
}
}
},
"VE": {
"ADMINISTRATIVE_AREA_LEVEL_1": {
"$uid": {
".read": "$uid === auth.uid",
".write": "$uid === auth.uid"
}
}
}
}
}
My question is: how can I make this more efficient, and what bottleneck(s) should I expect? Is there a way to keep this structure and set a rule centrally, without explicitly writing it at every location?
UPDATE
I probably got this wrong, but here goes. After @FrankvanPuffelen's answer I tried the following. I still have to test-run it, but does this work for all 240 countries in my code above?
{
"rules": {
"$country": {
"ADMINISTRATIVE_AREA_LEVEL_1": {
"$uid": {
".read": "$uid === auth.uid",
".write": "$uid === auth.uid"
}
},
"ADMINISTRATIVE_AREA_LEVEL_2": {
"$uid": {
".read": "$uid === auth.uid",
".write": "$uid === auth.uid"
}
},
"LOCALITY": {
"$uid": {
".read": "$uid === auth.uid",
".write": "$uid === auth.uid"
}
},
....and more
}
}
}
UPDATE
Yes, it works with a little tweak:
{
"rules": {
"$hubaBuba": {
"ADMINISTRATIVE_AREA_LEVEL_1": {
".read": "auth != null",
".write": "auth != null"
},
"ADMINISTRATIVE_AREA_LEVEL_2": {
".read": "auth != null",
".write": "auth != null"
},
"LOCALITY": {
"$uid": {
".read": "$uid === auth.uid",
".write": "$uid === auth.uid"
}
},
....and more
}
}
If the rules for each country are the same, you can use a wildcard for the country:
{
"rules": {
"$country": {
"ADMINISTRATIVE_AREA_LEVEL_1": {
"$uid": {
".read": "$uid === auth.uid",
".write": "$uid === auth.uid"
}
}
}
}
}

How to define type for a specific field in ElasticSearch for Rails

I am struggling with elasticsearch-rails.
I have the following mapping:
{
"listings" : {
"mappings" : {
"listing" : {
"properties" : {
"address" : {
"type" : "string"
},
"authorized" : {
"type" : "boolean"
},
"categories" : {
"properties" : {
"created_at" : {
"type" : "date",
"format" : "dateOptionalTime"
},
"id" : {
"type" : "long"
},
"name" : {
"type" : "string"
},
"parent_id" : {
"type" : "long"
},
"updated_at" : {
"type" : "date",
"format" : "dateOptionalTime"
},
"url_name" : {
"type" : "string"
}
}
},
"cid" : {
"type" : "string"
},
"city" : {
"type" : "string"
},
"country" : {
"type" : "string"
},
"created_at" : {
"type" : "date",
"format" : "dateOptionalTime"
},
"featured" : {
"type" : "boolean"
},
"geojson" : {
"type" : "string"
},
"id" : {
"type" : "long"
},
"latitude" : {
"type" : "string"
},
"longitude" : {
"type" : "string"
},
"name" : {
"type" : "string"
},
"phone" : {
"type" : "string"
},
"postal" : {
"type" : "string"
},
"province" : {
"type" : "string"
},
"thumbnail_filename" : {
"type" : "string"
},
"updated_at" : {
"type" : "date",
"format" : "dateOptionalTime"
},
"url" : {
"type" : "string"
}
}
}
}
}
}
I would like to change the type of the geojson field from string to geo_shape so I can use the geo_shape query on it.
I tried this in my model:
settings index: { number_of_shards: 1 } do
mappings dynamic: 'false' do
indexes :geojson, type: 'geo_shape'
end
end
with peculiar results. When I queried the mapping with $ curl 'localhost:9200/_all/_mapping?pretty', the geojson field still showed as type: string.
Within a Rails console, if I do Listing.mappings.to_hash, it seems to show that the geojson field is of type geo_shape.
And yet when running this query:
Listing.search(query: { fuzzy_like_this: { fields: [:name], like_text: "gap" } }, query: { fuzzy_like_this_field: { city: { like_text: "San Francisco" } } }, query: { geo_shape: { geojson: { shape: { type: :envelope, coordinates: [[37, -122],[38,-123]] } } } }); response.results.total; response.results.map { |r| puts "#{r._score} | #{r.name}, #{r.city} (lat: #{r.latitude}, lon: #{r.longitude})" }
ES complains that the geojson field is not of type geo_shape.
What am I missing? How do I tell ES that I want the geojson field to be of type geo_shape and not string?
The issue was that I didn't delete and recreate the index after changing the mapping.
In the Rails console, I ran Model.__elasticsearch__.delete_index! and then Model.__elasticsearch__.create_index!, followed by Model.import.
