I'm trying to create a pattern which is triggered by having at least 100 events within 1 minute, over 10 minutes.
This is what I have:
create context C
start pattern[every ( [10] ( [100](Enter) )where timer:within(60sec))]
end pattern[every (timer:interval(120sec) and not S)];
How do I get the start and end time of the context? context.startTime works, but context.endTime returns null. I need to compute the time of the start pattern and of the end pattern.
Thanks
I am calling an API that has a limit of 2 calls per second and 250 records per request. That's the gist.
I have created a background job that can optionally enqueue a second background job. This may be overkill.
Flow:
Order webhooks on creates from Shopify
A cron job once per day for that day's orders, in case webhooks fail.
Goal for the request:
If the first API request returns >= 250 records/orders, create a second background job in a new worker to fetch page 2 about 3 minutes later; if page 2 also has >= 250, create a new background job in the same worker to fetch page 3 three minutes after the page 2 job started, and so on. I use n for the page number and add 1 to n whenever the 250-record condition is true.
Cron Job:
shops = Shop.all
shops.each do |shop|
  # only shops with a company attached get the daily update job
  ShopifyOrderUpdatesWorker.perform_later(shop) if shop.company.present?
end
Background job 1 (for the first API call):
def perform(shop)
  n = 1
  t = 3
  orders = ShopifyAPI::Order.find(:all, params: { created_at_min: 1.day.ago.beginning_of_day.iso8601, limit: 250, page: n })
  # process page 1 in this job
  orders.each do |order|
    # code here
  end
  # while a page comes back full, fetch the next page and hand it to the second worker
  while orders.count >= 250
    n += 1 # add a page to the API call request
    orders = ShopifyAPI::Order.find(:all, params: { created_at_min: 1.day.ago.beginning_of_day.iso8601, limit: 250, page: n })
    ShopifyOrderUpdatesLimitWorker.delay_for(t.minutes).perform_later(orders)
    t += 3 # add 3 minutes for each loop to buffer the API call queue and stay under the API limits
  end
end
Background job 2 (for any API call after the first):
def perform(orders)
  orders.each do |order|
    # code here
  end
end
This way, all shops can update "quickly" without queuing behind other shops. Shops that have a lot of orders wait about as long as they would if all of this were done in one job instead of two.
Is this overkill? Is the code done right?
In reality, it is probably very rare for a webhook to fail, so the chances of the second background job being called are slim.
Any possible improvements or suggestions for the code?
This may not be the right place to ask this question, but if anyone has experience with Shopify or similar API situations, what are you doing?
Add this in your loop:
sleep(30) if ShopifyAPI.credit_maxed?
When planning your capacity and rate limiting, first estimate your data size: the strategy for a couple of shops with rarely more than a couple of pages each is very different from the one for thousands of shops with multiple pages each.
Because Shopify's rate limiting is based on the "leaky bucket" algorithm with a default bucket size of 40 (at the time of writing; see the official docs), you can make a burst of up to 40 API calls in a row without any rate limiting. That will suffice for a few shops with a couple of pages each.
If you have or plan to have more than that (or just want to be polite), the easiest way is to make a separate queue for these tasks with a single worker, so the tasks do not run in parallel. Then add a sleep(0.5) (or more) after each API call, and you should not saturate your limit even if you have other arbitrary calls elsewhere in your app. But be ready to receive the occasional 429 Too Many Requests, in which case just wait a little longer and repeat the call.
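As an illustration only, a minimal sketch of that throttle-and-retry loop, assuming the ActiveResource-based shopify_api gem (where a 429 surfaces as an ActiveResource::ClientError); the fetch_page helper name is hypothetical:

def fetch_page(n)
  begin
    orders = ShopifyAPI::Order.find(:all, params: { limit: 250, page: n })
  rescue ActiveResource::ClientError => e
    raise unless e.response.code.to_i == 429 # only swallow Too Many Requests
    sleep(2) # wait a little longer, then repeat that call
    retry
  end
  sleep(0.5) # throttle after each call so the bucket can drain
  orders
end

Run the jobs that call this from one dedicated queue with a single worker so the calls never overlap.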
Here are two events: AppStartEvent and AppCrashEvent.
I need to count each of the two events over a period of time, and then calculate count(AppStartEvent)/count(AppCrashEvent).
My EPL is here:
create context ctx4NestInCR
context ctx4Time initiated @now and pattern [every timer:interval(1 minute)] terminated after 15 minutes,
context ctx4AppName partition by appName from AppStartEvent, appName from AppCrashEvent
context ctx4NestInCR select count(s),count(c) from AppStartEvent as s, AppCrashEvent as c output last when terminated
And it does not work:
Error starting statement: Joins require that at least one view is specified for each stream, no view was specified for s
Your post doesn't have the join? It only has the context, and that wouldn't produce the message. I would suggest correcting the post.
You can also join streams by merging the two streams and treating them as one.
insert into AppEvent select 'crash' as condition from AppCrashEvent;
insert into AppEvent select 'start' as condition from AppStartEvent;
select count(condition='crash')/count(condition='start') from AppEvent;
I'm new to Esper (NEsper, actually) and I've been trying (unsuccessfully) to create a statement to detect when an event starts.
For example, suppose I have an event type called "Started betting" and I want to consider it to be happening after 10 minutes of having "proof" of that. With the statement I've been using, after 10 minutes the update() method is triggered every time there is "proof".
I've tried something like
from (...), StartedBetting as st
where st is null AND (...)
but it didn't work (the event was never detected).
Hope I've made myself clear.
Any hints will be appreciated.
So if I understand you right, when receiving any start-betting event you simply want to delay for 10 minutes and then get called? The "proof" part is not clear. But this would do it:
select * from pattern [every StartBetting -> timer:interval(10 min)]
I am looking for an EPL statement which fires an event each time a certain value has increased by a specified amount, with any number of events in between, for example:
Consider a stream which continuously provides new prices.
I want to get a notification, e.g. if the price is greater than the first price + 100. Something like
select * from pattern[a=StockTick -> every b=StockTick(b.price>=a.price+100)];
But how do I arrange to get the next event(s) when the increase reaches >= 200, >= 300, and so forth?
Various tests with contexts and windows have not been successful so far, so I'd appreciate any help. Thanks!
Contexts would be the right way to go.
You could start by defining a start event like this:
create schema StartEvent(threshold int);
And then have a context that uses the start event:
create context ThresholdContext initiated by StartEvent as se
terminated after 5 years
context ThresholdContext select * from pattern[a=StockTick -> every b=StockTick(b.price>=context.se.threshold)];
You can generate the StartEvent using "insert into" from the same pattern (you probably want to remove the "every"), or have the listener send in a StartEvent, or declare another pattern that fires just once to create a StartEvent.
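For example, a minimal sketch of the "insert into" route, assuming StockTick carries an integer price field (the +100 offset mirrors the example above); without the "every", the pattern fires only once, on the first tick:

insert into StartEvent select a.price + 100 as threshold from pattern [a=StockTick];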
I have run into a problem which I believe is ActiveRecord's fault. I am parsing an XML file which contains jobs. This XML file contains nodes which indicate walltime in the time format 00:00:00. I also have a model which will accept these jobs. However, when the time is larger than an actual 24-hour time, ActiveRecord inserts it as NULL. Examples below:
INSERT INTO `jobs` (`jobid`, `walltime`) VALUES('71413', 'NULL')
INSERT INTO `jobs` (`jobid`, `walltime`) VALUES('71413', '15:24:10')
Any ideas? Thank you!
The standard SQL time and datetime data types aren't intended to store a duration. Probably in agreement with those standards, ActiveRecord's time attribute assignment logic uses the time parsing rules of the native Ruby Time class and rejects anything that isn't a valid time of day.
The way to store durations, as you intend, is either:
Store the duration as an integer (e.g. "number of seconds"), or
Store two (date)times, a start and an end, and use date arithmetic on them.
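For the first option, a minimal sketch (the walltime_to_seconds helper is hypothetical) that converts the walltime string into seconds, so hours beyond 24 are no problem:

# hypothetical helper: parses "HH:MM:SS" where hours may exceed 24
def walltime_to_seconds(str)
  h, m, s = str.split(":").map(&:to_i)
  (h * 3600) + (m * 60) + s
end

walltime_to_seconds("15:24:10") # => 55450

The second option might look like this: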
class Thing < ActiveRecord::Base
...
# assumes two datetime columns, started_at and ended_at
# ("end" can't be used bare as a method name, since it is a Ruby keyword)
def duration
  ended_at - started_at
end
def duration=(length)
  self.started_at = Time.now
  self.ended_at = started_at + length
end
...
end
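With either approach, the walltime from the XML is converted to a number (e.g. with the hypothetical walltime_to_seconds above) before assignment, so values past 24 hours never pass through time-of-day parsing at all.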