Is there a way to rate limit writing to the DB to every 5 minutes? - firebase-realtime-database

I am writing an app that uses the Firebase Realtime Database to store data, and I want each user to be able to append/delete one element in a section of the database at an interval of 5 minutes (per user).
Right now I am storing the last time a user appended/deleted an element in a user data section of the database. I've managed to write rules that disallow appending/deleting when this timestamp is less than 5 minutes before the current time:
".write": "
(!data.exists() ?
now - root.child('userData').child(auth.uid).child('lastPlaced').val() > 295000
: !newData.exists() ?
now - root.child('userData').child(auth.uid).child('lastDeleted').val() > 295000
: false)
"
But the problem is that a malicious user could append an element without resetting userData/$uid/lastPlaced, allowing them to append and delete elements while ignoring the timer.
Is there a better way to do this?
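One common workaround (a hedged sketch, not from this thread) is to keep the timestamp inside the same node the rule protects and add a .validate rule, so a write can only succeed if it also refreshes the timestamp. The items/$uid layout below is hypothetical, and it assumes clients replace the whole node in one write; note that .validate does not cascade upward, so deeper child paths would need rules of their own:
{
  "rules": {
    "items": {
      "$uid": {
        ".write": "auth.uid === $uid",
        // The write must stamp lastPlaced with the server time (the client
        // sends {'.sv': 'timestamp'}), and the previous stamp must be more
        // than 5 minutes (300000 ms) old.
        ".validate": "newData.child('lastPlaced').val() === now &&
                      (!data.child('lastPlaced').exists() ||
                       now - data.child('lastPlaced').val() > 300000)"
      }
    }
  }
}
Because the timestamp check and the data write are validated as one unit, skipping the lastPlaced update makes the whole write fail.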

Related

How to auto-update node attributes in Neo4j after a specific time period

Is there a way to update an existing attribute in a node to a new value after a specific time period passes?
For example:
Node 'Offer' has the attributes below:
Offer Name (String)
Offer ID (Numeric)
CREATED_ON (Timestamp)
IS_VALID ('Yes'/'No')
Process:
When an initial "Offer" is created, IS_VALID is "Yes"
After 7 days, IS_VALID should automatically be set to "No" for each offer
I know that there is TTL, but I don't want my node to be deleted after 7 days; rather, the attribute should be updated from "Yes" to "No".
Is there any way I can achieve this in Neo4j?
You can use APOC procedures to schedule background jobs; apoc.periodic.schedule() makes sense in your case.
CALL apoc.periodic.schedule('offerValidater',
  "MATCH (n:Offer) WHERE apoc.date.currentTimestamp() - n.CREATED_ON > 604800000
   SET n.IS_VALID = 'No'",
  86400)
Here I assumed that Offer.CREATED_ON is in epoch milliseconds (604800000 ms = 7 days); if you are using timestamps in another format, adjust the comparison accordingly. The offerValidater routine is scheduled to run with a delay of 86400 seconds (1 day).
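If you later need to inspect or stop the job, APOC also ships list/cancel procedures; a quick hedged sketch (check your APOC version for availability):
CALL apoc.periodic.list();                    // show currently scheduled background jobs
CALL apoc.periodic.cancel('offerValidater');  // remove the job by its name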

Microsoft Graph "messages" delta request truncates too many results with date filter

I think I've found a bug with the date filtering on the delta API.
I'm finding, on one of the email accounts I'm working with through the Office 365 Graph API, that the "messages" delta request returns a different number of items than are actually in the folder for the expected time range. There are 150,000 items covering 10 years in the folder, but delta only returns the last ~5,000 items, covering the last 60 or so days.
Paging Works Fine
When querying the Graph API for the folder "Inbox", it has 154,045 total items and 57,456 unread items.
IUserMailFoldersCollectionPage foldersPage =
await client.Users[mailboxid].MailFolders.Request().GetAsync();
I can skip over 10,000, 50,000 or more messages using paging.
model.messages = await client.Users[mailboxid].MailFolders[folderid].Messages.Request().Top(top)
.Skip(skip).GetAsync();
Delta with Date Filter doesn't work
But when looping with the nextLink and deltaLink tokens, the deltaToken appears after 5,000 or so email messages. Basically it seems to be returning results only for the last couple of months, even though the filter asks for messages from the last 20 years.
Here is an example of how we generate the delta request. The time is hardcoded here, but in reality it is a variable.
var sFilter = $"receivedDateTime ge {DateTimeOffset.UtcNow.AddYears(-20).ToString("yyyy-MM-dd")}";
model.messages = await client.Users[mailboxid].MailFolders[folderid].Messages.Delta().Request()
.Header("Prefer", "odata.maxpagesize=" + maxpagesize)
.Filter(sFilter)
.OrderBy("receivedDateTime desc")
.GetAsync();
And then on each paging operation I do the following ("nexttoken" is either the nextLink or the deltaLink, depending on what came back from the previous request):
model.messages = new MessageDeltaCollectionPage();
model.messages.InitializeNextPageRequest(client, nexttoken);
model.messages = await model.messages.NextPageRequest
.Header("Prefer", "odata.maxpagesize=" + maxpagesize)
.GetAsync();
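For reference, here is a hedged sketch of that loop in one piece (it assumes the same Microsoft.Graph v1 SDK types and usings as above, and omits the filter): follow NextPageRequest until it is null, then pull the deltaLink out of AdditionalData for the next sync round.
var allMessages = new List<Message>();
var page = await client.Users[mailboxid].MailFolders[folderid].Messages.Delta().Request()
    .Header("Prefer", "odata.maxpagesize=" + maxpagesize)
    .GetAsync();
while (true)
{
    allMessages.AddRange(page.CurrentPage);
    if (page.NextPageRequest == null)
        break;                                  // last page reached
    page = await page.NextPageRequest
        .Header("Prefer", "odata.maxpagesize=" + maxpagesize)
        .GetAsync();
}
// The token to persist and feed to InitializeNextPageRequest on the next sync.
page.AdditionalData.TryGetValue("@odata.deltaLink", out var deltaLink);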
Delta without Filter works
If I run the exact same delta code as above but remove the Filter clause on the date, I get all the messages in the folder.
That isn't a great solution, though: I normally only need messages from the last year or two, and if there are 15 years of messages it is a huge waste to query everything.
Update on 12/3/2019
I'm still getting this issue. I recently switched back to trying to use Delta again; before, I was querying everything from the server even though I might only need the last month of data, which is super wasteful.
This code works fine for most mailboxes, but sometimes I encounter a mailbox with this issue.
My code looks like this.
string sStartingTime = startingTime.ToString("yyyy'-'MM'-'dd'T'HH':'mm':'ss") + "Z";
var messageCollectionPage = await client.Users[mailboxsource.GetMailboxIdFromAccountID()].MailFolders[folder.Id].Messages.Delta().Request()
.Filter("receivedDateTime+ge+" + Uri.EscapeDataString(sStartingTime))
.Select(select)
.Header("Prefer", "odata.maxpagesize=" + preferredPageSize)
.OrderBy("receivedDateTime desc")
.GetAsync(cancellationToken);
At around 5,000 results the delta request just stops returning results, even though there are 66K items in the folder.
Paul, my peers confirmed there is indeed a 5000-item limit if you apply $filter to a delta query of the message resource.
Within the next day, the docs will also be updated with this information. Thank you for your patience and support!

InfluxDB: How to create a continuous query to calculate delta values?

I'd like to calculate the delta values for a series of measurements stored in InfluxDB. The values are readings from an electricity meter taken every 5 minutes, and they increase over time. Here is a subset of the data to give you an idea (the commands shown below are executed in the InfluxDB CLI):
> SELECT "Haushaltstromzaehler - cnt" FROM "myhome_measurements" WHERE time >= '2018-02-02T10:00:00Z' AND time < '2018-02-02T11:00:00Z'
name: myhome_measurements
time Haushaltstromzaehler - cnt
---- --------------------------
2018-02-02T10:00:12.610811904Z 11725.638
2018-02-02T10:05:11.242021888Z 11725.673
2018-02-02T10:10:10.689827072Z 11725.707
2018-02-02T10:15:12.143326976Z 11725.736
2018-02-02T10:20:10.753357056Z 11725.768
2018-02-02T10:25:11.18448512Z 11725.803
2018-02-02T10:30:12.922032896Z 11725.837
2018-02-02T10:35:10.618788096Z 11725.867
2018-02-02T10:40:11.820355072Z 11725.9
2018-02-02T10:45:11.634203904Z 11725.928
2018-02-02T10:50:11.10436096Z 11725.95
2018-02-02T10:55:10.753853952Z 11725.973
Calculating the differences in the InfluxDB CLI is pretty straightforward with the difference() function. This gives me the electricity consumed within each 5-minute interval:
> SELECT difference("Haushaltstromzaehler - cnt") FROM "myhome_measurements" WHERE time >= '2018-02-02T10:00:00Z' AND time < '2018-02-02T11:00:00Z'
name: myhome_measurements
time difference
---- ----------
2018-02-02T10:05:11.242021888Z 0.03499999999985448
2018-02-02T10:10:10.689827072Z 0.033999999999650754
2018-02-02T10:15:12.143326976Z 0.02900000000045111
2018-02-02T10:20:10.753357056Z 0.0319999999992433
2018-02-02T10:25:11.18448512Z 0.03499999999985448
2018-02-02T10:30:12.922032896Z 0.033999999999650754
2018-02-02T10:35:10.618788096Z 0.030000000000654836
2018-02-02T10:40:11.820355072Z 0.03299999999944703
2018-02-02T10:45:11.634203904Z 0.028000000000247383
2018-02-02T10:50:11.10436096Z 0.02200000000084401
2018-02-02T10:55:10.753853952Z 0.02299999999922875
Where I struggle is getting this to work in a continuous query. Here is the command I used to set up the continuous query:
CREATE CONTINUOUS QUERY cq_Haushaltstromzaehler_cnt ON myhomedb
BEGIN
  SELECT difference(sum("Haushaltstromzaehler - cnt")) AS "delta"
  INTO "Haushaltstromzaehler_delta"
  FROM "myhome_measurements"
  GROUP BY time(1h)
END
Looking in the InfluxDB log file, I see that no data is written to the new 'delta' measurement by the continuous query execution:
...finished continuous query cq_Haushaltstromzaehler_cnt, 0 points(s) written...
After much troubleshooting and experimenting, I now understand why no data is generated. Setting up a continuous query requires a GROUP BY time() clause, which in turn requires an aggregate function inside the difference() function. The problem is that the aggregate function returns only one value for the period specified by GROUP BY time(), and difference() obviously cannot calculate a difference from just one value. Essentially, the continuous query executes a command like this:
> SELECT difference(sum("Haushaltstromzaehler - cnt")) FROM "myhome_measurements" WHERE time >= '2018-02-02T10:00:00Z' AND time < '2018-02-02T11:00:00Z' GROUP BY time(1h)
>
I'm now somewhat clueless as to how to make this work and appreciate any advice you might have.
Does it help to use the last() aggregate function? I have not tested this as a CQ yet.
SELECT difference(last(T1_Consumed)) AS T1_Delta,
       difference(last(T2_Consumed)) AS T2_Delta
FROM P1Data
WHERE time >= 1551648871000000000 GROUP BY time(1h)
difference() calculates the delta from the aggregated value of the previous group, not within the current group.
So feel free to use a selector function there; since your counters appear to be cumulative, last() should work well.
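Applied to the original measurement, the continuous query might look like this (an untested sketch, same caveat as above; each hourly delta is then the last reading of the hour minus the last reading of the previous hour):
CREATE CONTINUOUS QUERY cq_Haushaltstromzaehler_cnt ON myhomedb
BEGIN
  SELECT difference(last("Haushaltstromzaehler - cnt")) AS "delta"
  INTO "Haushaltstromzaehler_delta"
  FROM "myhome_measurements"
  GROUP BY time(1h)
END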

grafana-influxdb get multiple rows for last timestamp

I am using Telegraf, InfluxDB, and Grafana together, but I could not get the rows for only the last timestamp.
Here is what I am doing:
Collecting DB statistics (the queries running at that moment) with Telegraf (exec plugin).
Storing the output in InfluxDB.
Trying to monitor the running queries in Grafana.
But I need to get all rows at the last timestamp.
Here is what I've tried:
> select * from postgresql_running_queries where time=(select max(time) from postgresql_running_queries)
ERR: error parsing query: found SELECT, expected identifier, string, number, bool at line 1, char 54
Here is what I want to see:
Time DB USER STATE QUERY
2017-06-06 14:25.00 mydb myuser active my_query
2017-06-06 14:25.00 mydb myuser idle in transaction my_query2
2017-06-06 14:25.00 mydb2 myuser2 active my_query3
Can anyone help me achieve this? I am open to any solution.
select last(fieldname) from measurement_name;
A query in this format will return the data at the last timestamp from InfluxDB.
But I am surprised that you are expecting 3 values for a single timestamp (unless you have different TAG values; refer to this documentation on how to store duplicate points). You will get ONLY ONE record for a given timestamp: InfluxDB overwrites the previous content if another entry arrives with the same timestamp (and the same tag set).
Your results will be something like this (if you don't have different TAG values):
Time DB USER STATE QUERY
2017-06-06 14:25.00 mydb2 myuser2 active my_query3
EDIT:
Based on your comment, my guess is that you are using TAGs to differentiate the rows. The above query should still work; if not, you may try adding a WHERE clause.
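If tags are indeed what distinguishes the rows, one hedged variant (untested against your schema) is to ask for the latest point per series:
-- GROUP BY * keeps every tag combination as its own series,
-- so last(*) returns one "latest" row per DB/user/state.
SELECT last(*) FROM postgresql_running_queries GROUP BY *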

Esper EPL statement each time a value has increased a multiple

I am looking for an EPL statement that fires an event each time a certain value has increased by a specified amount, with any number of events in between. For example:
Consider a stream which continuously provides new prices.
I want to get a notification, e.g., if the price is greater than the first price + 100. Something like:
select * from pattern[a=StockTick -> every b=StockTick(b.price>=a.price+100)];
But how do I arrange to get the next event(s) when the increase is >= 200, >= 300, and so forth?
Diverse tests with contexts and windows have not been successful so far, so I appreciate any help! Thanks!
Contexts would be the right way to go.
You could start by defining a start event like this:
create schema StartEvent(threshold int);
And then have a context that uses the start event:
create context ThresholdContext initiated by StartEvent as se
terminated after 5 years;

context ThresholdContext
select * from pattern [a=StockTick -> every b=StockTick(b.price >= context.se.threshold)];
You can generate the StartEvent using "insert into" from the same pattern (you probably want to remove the "every"), have the listener send in a StartEvent, or declare another pattern that fires just once to create a StartEvent.
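A hedged sketch of the "insert into" variant (untested; it assumes StockTick has a numeric price field, and the cast matches the int threshold declared above):
// Arm the first threshold 100 above the first observed price
// (no "every", so this fires exactly once per deployment).
insert into StartEvent
select cast(a.price + 100, int) as threshold
from pattern [a=StockTick];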
