I have the following requirement:
Read events from a Pub/Sub topic.
Take a sliding window of duration 30 minutes and period 1 minute.
Within that window, if 3 events for a given id all match some predicate, then raise an event on a different Pub/Sub topic.
The event should be raised as soon as the 3rd event comes in for the grouping id, as this is for detecting fraudulent behaviour. In one pane there may be many ids that have 3 events matching my predicate, so I may need to emit multiple events per pane.
I am able to write a function that consumes a PCollection, does the necessary grouping, logic and filtering, and emits events according to my business logic.
Questions:
The output PCollection contains duplicates due to the overlapping sliding windows. I understand this is the expected behaviour of sliding windows, but how can I avoid it while staying in the same Dataflow pipeline? I realise I could dedupe in an external system, but that just adds complexity to my system.
I also need to write some sort of trigger that fires each and every time my condition is met within a window.
Is Dataflow suitable for this type of real-time detection scenario?
Many thanks
You can rewindow the output PCollection into the global window (using the regular Window.into()) and dedupe using a GroupByKey.
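A rough sketch of that re-window + GroupByKey dedupe, assuming the Beam/Dataflow Java SDK and that your detection step emits KV<String, Alert> keyed by the grouping id (Alert is a placeholder for your own event type):

import org.apache.beam.sdk.transforms.*;
import org.apache.beam.sdk.transforms.windowing.*;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

// "alerts" is the id-keyed PCollection produced by your detection logic.
PCollection<Alert> deduped = alerts
    .apply(Window.<KV<String, Alert>>into(new GlobalWindows())
        .triggering(Repeatedly.forever(AfterPane.elementCountAtLeast(1)))
        .withAllowedLateness(Duration.ZERO)
        .discardingFiredPanes())
    .apply(GroupByKey.<String, Alert>create())
    .apply(ParDo.of(new DoFn<KV<String, Iterable<Alert>>, Alert>() {
      @ProcessElement
      public void processElement(ProcessContext c) {
        // Only the first firing per id is emitted; later firings are the duplicates
        // produced by the overlapping sliding windows.
        if (c.pane().isFirst()) {
          c.output(c.element().getValue().iterator().next());
        }
      }
    }));

Note that keys kept in the global window live for the lifetime of the pipeline, so this trades a little pipeline state for not needing an external dedupe store.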
It sounds like you're already returning the events of interest as a PCollection. In order to "do something for each event", all you need is a ParDo.of(whatever action you want) applied to this collection. Triggers do something else: they control what happens when a new value V arrives for a particular key K in a GroupByKey<K, V>: whether to drop the value, or buffer it, or to pass the buffered KV<K, Iterable<V>> for downstream processing.
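For the "do something for each event" part, a minimal sketch continuing the snippet above (PubsubIO.writeStrings() is the Beam-style Pub/Sub sink; the topic name and the toJson() helper are placeholders):

deduped
    .apply(ParDo.of(new DoFn<Alert, String>() {
      @ProcessElement
      public void processElement(ProcessContext c) {
        // Serialize the alert however you like and emit it for publishing.
        c.output(toJson(c.element()));
      }
    }))
    .apply(PubsubIO.writeStrings().to("projects/my-project/topics/fraud-alerts"));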
Yes :)
I'm thinking about designing an event processing system.
The rules per se are not the problem.
What bogs me down is how to store the event data so that I can efficiently answer questions/facts like:
If number of events of type A in the last 10 minutes equals N,
and the average events of type B per minute over the last M hours is Z,
and the current running average of another metric is Y...
then
fire some event (or store a new fact/event).
How do Esper/Drools/MS StreamInsight store their time-dependent data so that they can efficiently calculate event stream properties? Do they just store it in SQL databases and continuously query them?
Do they preprocess the rules so they know beforehand what "knowledge" they need to store?
Thanks
EDIT: I found what I want is called Event Stream Processing, and the wikipedia example shows what I would like to do:
WHEN Person.Gender EQUALS "man" AND Person.Clothes EQUALS "tuxedo"
FOLLOWED-BY
Person.Clothes EQUALS "gown" AND
(Church_Bell OR Rice_Flying)
WITHIN 2 hours
ACTION Wedding
Still the question remains: how do you implement such a data store? The key is "WITHIN 2 hours" and the ability to process thousands of events per second.
Esper analyzes the rule and only stores derived state (aggregations etc., if any) and, if the rule needs it, a subset of events. Esper also allows defining contexts, as described in the book by Opher Etzion and Peter Niblett, which I recommend reading. By specifying a context, Esper can minimize the amount of state it retains and can make queries easier to read.
It's not difficult to store events happening within a time window of a certain length. The problem gets more difficult if you have to consider additional constraints: here an analysis of the rules is indicated so that you can maintain sets of events matching the constraints.
Storing events in an (external) database will be too slow.
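To make that concrete, here is a rough sketch of the first rule from the question ("N events of type A in the last 10 minutes") using Esper's Java API (pre-8.x style API assumed; TypeAEvent, thresholdN and fireAlert() are placeholders). The engine keeps only what the statement needs, a rolling 10-minute count, rather than every raw event forever:

EPServiceProvider engine = EPServiceProviderManager.getDefaultProvider();
engine.getEPAdministrator().getConfiguration().addEventType(TypeAEvent.class);

// Esper stores only the derived state for this statement: a count over a sliding
// 10-minute time window.
EPStatement stmt = engine.getEPAdministrator().createEPL(
    "select count(*) as cnt from TypeAEvent#time(10 min)");

long thresholdN = 10; // the "N" from the rule
stmt.addListener((newEvents, oldEvents) -> {
    if (newEvents == null) return;
    long cnt = (Long) newEvents[0].get("cnt");
    if (cnt >= thresholdN) {
        fireAlert(cnt); // "fire some event (or store a new fact/event)"
    }
});

// Feed events in as they arrive; Esper expires them from the window automatically.
engine.getEPRuntime().sendEvent(new TypeAEvent());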
I have a use case where a system transaction happens/completes over a period of time, with multiple "building up" steps. Each step in the process generates one or more events (up to 22 events per transaction). All events within a transaction share a unique (UUID) correlation ID.
An example: a transaction X will have the building blocks EventA, EventB, EventC..., all tagged with the same unique correlation identifier.
The ultimate goal here is to switch from persisting all the separate events in an RDBMS and querying a consolidated view (lots of joins) to persisting only one encompassing transaction record that consolidates attributes from each step of the transaction.
My research so far has led me toward Esper (Java stack here) and WSO2 CEP. In my case each event is submitted/enqueued into JMS, and I am wondering whether a solution like WSO2 CEP can be used to consolidate JMS events/messages (streams) based on correlation ID (with a maximum time limit of 30 min), produce one consolidated record, and send it down JMS to ultimately persist in a DB.
Since I am still in research mode, I was wondering if I am on the right path for a solution?
Has anybody achieved such a thing using WSO2 CEP, or is it overkill? Any other recommendations?
Thanks
-S
You can use WSO2 CEP by integrating it with JMS to send and receive events, and by using Siddhi pattern queries[1] to consolidate events arriving from the same transaction.
30 min is a reasonable time period, but it's recommended to test the scenario with a representative data set, because the servers need enough memory for CEP to hold the state. This will depend greatly on the event rate.
AFAIK this is not overkill in an enterprise deployment.
[1]https://docs.wso2.com/display/CEP200/Patterns
I would recommend trying Esper patterns. For multi-event systems where particular information has to be collected, patterns work best.
A sample would be:
select * from TemperatureEvent
match_recognize (
    measures A as temp1, B as temp2, C as temp3, D as temp4
    pattern (A B C D)
    define
        A as A.temperature > 100,
        B as (A.temperature < B.temperature),
        C as (B.temperature < C.temperature),
        D as (C.temperature < D.temperature) and D.temperature > (A.temperature * 1.5))
Here we have 4 events and 5 conditions involving those events. The example is taken from a demo project.
Simplified example:
I have a to-do. It can be future, current, or late based on what time it is.
Time State
8:00 am Future
9:00 am Current
10:00 am Late
So, in this example, the to-do is "current" from 9 am to 10 am.
Originally, I thought about adding fields for "current_at" and "late_at" and then using an instance method to return the state. I can query for all "current" todos with now > current and now < late.
In short, I'd calculate the state each time or use SQL to pull the set of states I need.
If I wanted to use a state machine, I'd have a set of states and would store that state name on the to-do. But, how would I trigger the transition between states at a specific time for each to-do?
Run a cron job every minute to pull anything still in a state whose transition time has passed, and update it.
Use background processing to queue transition jobs at the appropriate times in the future, so in the above example I would have two jobs: "transition to current at 9 am" and "transition to late at 10 am" that would presumably have logic to guard against deleted todos and "don't mark late if done" and such.
Does anyone have experience with managing either of these options when trying to handle a lot of state transitions at specific times?
It feels like a state machine, I'm just not sure of the best way to manage all of these transitions.
Update after responses:
Yes, I need to query for "current" or "future" todos
Yes, I need to trigger notifications on state change ("your todo wasn't to-done")
Hence my desire for more of a state-machine-like idea, so that I can encapsulate the transitions.
I have designed and maintained several systems that manage huge numbers of these little state machines. (Some systems, up to 100K/day, some 100K/minute)
I have found that the more state you explicitly fiddle with, the more likely it is to break somewhere. Or to put it a different way, the more state you infer, the more robust the solution.
That being said, you must keep some state. But try to keep it as minimal as possible.
Additionally, keeping the state-machine logic in one place makes the system more robust and easier to maintain. That is, don't put your state machine logic in both code and the database. I prefer my logic in the code.
Preferred solution. (Simple pictures are best).
For your example I would have a very simple table:
task_id, current_at, current_duration, is_done, is_deleted, description...
and infer the state based on now in relation to current_at and current_duration. This works surprisingly well. Make sure you index/partition your table on current_at.
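A tiny sketch of the "infer, don't store" idea (Java here purely for illustration; the field names follow the table above, and Instant/Duration are from java.time):

enum TaskState { FUTURE, CURRENT, LATE }

// The state is derived from the row every time it is needed, so there is nothing to keep in sync.
TaskState stateOf(Instant now, Instant currentAt, Duration currentDuration) {
    if (now.isBefore(currentAt)) return TaskState.FUTURE;
    if (now.isBefore(currentAt.plus(currentDuration))) return TaskState.CURRENT;
    return TaskState.LATE;
}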
Handling logic on transition change
Things are different when you need to fire an event on the transition change.
Change your table to look like this:
task_id, current_at, current_duration, state, locked_by, locked_until, description...
Keep your index on current_at, and add one on state if you like. You are now mangling state, so things are a little more fragile due to concurrency or failure, so we'll have to shore it up a little bit using locked_by and locked_until for optimistic locking which I'll describe below.
I assume your program will fail in the middle of processing on occasion, even if only for a deployment.
You need a mechanism to transition a task from one state to another. To simplify the discussion, I'll concern myself with moving from FUTURE to CURRENT, but the logic is the same no matter the transition.
If your dataset is large enough, you constantly poll the database to discover tasks requiring transition (with linear or exponential back-off, of course, when there's nothing to do); otherwise use your favourite scheduler, whether it is cron, something Ruby-based, or Quartz if you subscribe to Java/Scala/C#.
Select all entries that need to be moved from FUTURE to CURRENT and are not currently locked.
(updated:)
-- move from FUTURE to CURRENT
select task_id
from tasks
where now >= current_at
and (locked_until is null OR locked_until < now)
and state = 'FUTURE'
and current_at >= (now - 3 days) -- optimization
limit :LIMIT -- optimization
Throw all these task_ids into your reliable queue. Or, if you must, just process them in your script.
When you start to work on an item, you must first lock it using our optimistic locking scheme:
update tasks
set locked_by = :worker_id -- unique identifier for host + process + thread
, locked_until = now + 5 minutes -- however this looks in your SQL language
where task_id = :task_id -- you can lock multiple tasks here if necessary
and (locked_until is null OR locked_until < now) -- only if it's not locked!
Now, if you actually updated the record, you own the lock. You may now fire your special on-transition logic. (Applause. This is what makes you different from all the other task managers, right?)
When that is successful, update the task state, make sure you still use the optimistic locking:
update tasks
set state = :new_state
, locked_until = null -- explicitly release the lock (an optimization, really)
where task_id = :task_id
and locked_by = :worker_id -- make sure we still own the lock
-- no-one really cares if we overstep our time-bounds
Multi-thread/process optimization
Only do this when you have multiple threads or processes updating tasks in batch (such as in a cron job, or polling the database)! The problem is they'll each get the similar results from the database and will then contend to lock each row. This is inefficient both because it will slow down the database, and because you have threads basically doing nothing but slowing down the others.
So, add a limit to how many results the query returns and follow this algorithm:
results = database.tasks_to_move_to_current_state :limit => BATCH_SIZE
while !results.empty?
  results.shuffle! # make sure we're not in lock step with another worker
  contention_count = 0
  results.each do |task_id|
    if database.lock_task :task_id => task_id
      on_transition_to_current task_id
    else
      contention_count += 1
    end
    break if contention_count > MAX_CONTENTION_COUNT # too much contention!
  end
  results = database.tasks_to_move_to_current_state :limit => BATCH_SIZE
end
Fiddle around with BATCH_SIZE and MAX_CONTENTION_COUNT until the program is super-fast.
Update:
The optimistic locking allows for multiple processors in parallel.
By having the lock time out (via the locked_until field), it allows for failure while processing a transition. If the processor fails, another processor is able to pick up the task after the timeout (5 minutes in the above code). It is important, then, to a) only lock the task when you are about to work on it; and b) lock the task for as long as the work should take, plus a generous leeway.
The locked_by field is mostly for debugging purposes (which process/machine was this on?). It is enough to have the locked_until field if your database driver returns the number of rows updated, but only if you update one row at a time.
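To show what "the driver returns the number of rows updated" buys you, here is a rough plain-JDBC sketch of the locking step (Java purely for illustration; connection, workerId, taskId and onTransitionToCurrent() are placeholders, and the interval syntax is Postgres-flavoured):

String sql =
    "UPDATE tasks SET locked_by = ?, locked_until = now() + interval '5 minutes' " +
    "WHERE task_id = ? AND (locked_until IS NULL OR locked_until < now())";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setString(1, workerId);                      // host + process + thread
    ps.setLong(2, taskId);
    boolean lockAcquired = ps.executeUpdate() == 1; // 0 rows means someone else holds it
    if (lockAcquired) {
        onTransitionToCurrent(taskId);              // your special on-transition logic
    }
}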
Managing all those transitions at specific times does seem tricky. Perhaps you could use something like DelayedJob to schedule the transitions, so that a cron job every minute wouldn't be necessary, and recovering from a failure would be more automated?
Otherwise - if this is Ruby, is using Enumerable an option?
Like so (in untested pseudo-code, with simplistic methods)
class ToDo
  def state
    if future?
      return "Future"
    elsif current?
      return "Current"
    elsif late?
      return "Late"
    else
      return "must not have been important"
    end
  end

  def future?
    Time.now.hour <= 8
  end

  def current?
    Time.now.hour == 9
  end

  def late?
    Time.now.hour >= 10
  end

  def self.find_current_to_dos
    self.find(:all, :conditions => "1=1 /* or whatever */").select { |to_do| to_do.state == "Current" }
  end
end
One simple solution for moderately large datasets is to use a SQL database. Each todo record should have "state_id", "current_at", and "late_at" fields. You can probably omit "future_at" unless you really have four states.
This allows three states:
Future: when now < current_at
Current: when current_at <= now < late_at
Late: when late_at <= now
Storing the state as state_id (optionally make a foreign key to a lookup table named "states" where 1: Future, 2: Current, 3: Late) is basically storing de-normalized data, which lets you avoid recalculating the state as it rarely changes.
If you aren't actually querying todo records according to state (eg ... WHERE state_id = 1) or triggering some side-effect (eg sending an email) when the state changes, perhaps you don't need to manage state. If you're just showing the user a todo list and indicating which ones are late, the cheapest implementation might even be to calculate it client side. For the purpose of answering, I'll assume you need to manage the state.
You have a few options for updating state_id. I'll assume you are enforcing the constraint current_at < late_at.
The simplest is to update every record: UPDATE todos SET state_id = CASE WHEN late_at <= NOW() THEN 3 WHEN current_at <= NOW() THEN 2 ELSE 1 END;.
You probably will get better performance with something like (in one transaction) UPDATE todos SET state_id = 3 WHERE state_id <> 3 AND late_at <= NOW(), UPDATE todos SET state_id = 2 WHERE state_id <> 2 AND NOW() < late_at AND current_at <= NOW(), UPDATE todos SET state_id = 1 WHERE state_id <> 1 AND NOW() < current_at. This avoids retrieving rows that don't need to be updated but you'll want indices on "late_at" and "future_at" (you can try indexing "state_id", see note below). You can run these three updates as frequently as you need.
Slight variation of the above is to get the IDs of records first, so you can do something with the todos that have changed states. This looks something like SELECT id FROM todos WHERE state_id <> 3 AND late_at <= NOW() FOR UPDATE. You should then do the update like UPDATE todos SET state_id = 3 WHERE id IN (:ids). Now you've still got the IDs to do something with later (eg email a notification "20 tasks have become overdue").
Scheduling or queuing update jobs for each todo (eg update this one to "current" at 10AM and "late" at 11PM) will result in a lot of scheduled jobs, at least two times the number of todos, and poor performance -- each scheduled job is updating only a single record.
You could schedule batch updates like UPDATE state_id = 2 WHERE ID IN (1,2,3,4,5,...) where you've pre-calculated the list of todo IDs that will become current near some specific time. This probably won't work out so nicely in practice for several reasons. One being some todo's current_at and late_at fields might change after you've scheduled updates.
Note: you might not gain much by indexing "state_id" as it only divides your dataset into three sets. This is probably not good enough for a query planner to consider using it in a query like SELECT * FROM todos WHERE state_id = 1.
The key to this problem that you didn't discuss is what happens to completed todos? If you leave them in this todos table, the table will grow indefinitely and your performance will degrade over time. The solution is partitioning the data into two separate tables (like "completed_todos" and "pending_todos"). You can then use UNION to concatenate both tables when you actually need to.
State machines are driven by something: user interaction or the last input from a stream, right? In this case, time drives the state machine. I think a cron job is the right play; it would be the clock driving the machine.
For what it's worth, it is pretty difficult to set up an efficient index on two columns where you have to do a range query like that.
now > current && now < late is going to be hard to represent in the database in a performant way as an attribute of the task:
id|title|future_time|current_time|late_time
1|hello|8:00am|9:00am|10:00am
Never try to force patterns onto problems; it is the other way around. So go directly to finding a good solution for the problem.
Here is an idea (for what I understood your problem to be):
Use persistent alerts and one monitored process to "consume" them. Secondarily, query them.
That will allow you to:
keep it simple
keep it cheap to maintain. Secondarily, it will also keep you mentally fresher to do something else.
keep all the logic in code only (as it should).
I stress the point of having that process monitored with some kind of watchdog, so you are sure to send those alerts on time (or, in a worst-case scenario, with some delay after a crash or the like).
Note that persisting those alerts gives you two things:
it makes your system resilient (more fault tolerant), and
it lets you query future and current items (by playing around with the alerts' time range as best fits your needs).
In my experience, a state machine in SQL is most useful when you have an external process acting on something and updating the database with its state. For example, we have a process that uploads and converts videos. We use the database to keep track of what is happening to a video at any time, and what should happen to it next.
In your case, I think you can (and should) use SQL to solve your problem instead of worrying about using a state machine:
Make a todo_states table:
todo_id  todo_state_id  time   notified
1 1 (future) 8:00 0
1 2 (current) 9:00 0
1 3 (late) 10:00 0
Your SQL query, where all the real work happens:
SELECT todo_id, MAX(todo_state_id) AS todo_state_id
FROM todo_states
WHERE time < NOW()
GROUP BY todo_id
The currently active state is always the one you select. If you want to notify the user just once, insert the original state with notified = 0, and bump it on the first select.
Once the task is "done", you can either insert another state into the todo_states table, or simply delete all the states associated with a task and raise a "done" flag in the todo item, or whatever is most useful in your case.
Don't forget to clean out stale states.
I have a table in my database that stores event totals, something like:
event1_count
event2_count
event3_count
I would like to transition from these simple aggregates to more of a time series, so the user can see on which days these events actually happened (like how Stack Overflow shows daily reputation gains).
Elsewhere in my system I already did this by creating a separate table with one record for each daily value - then, in order to collect a time series you end up with a huge database table and the need to query 10s or 100s of records. It works but I'm not convinced that it's the best way.
What is the best way of storing these individual events along with their dates so I can do a daily plot for any of my users?
When building tables like this, the real key is having effective indexes. Test your queries with the EXPLAIN statement or the equivalent in your database of choice.
If you want to build summary tables you can query, build a view that represents the query, or roll the daily results into a new table on a regular schedule. Often summary tables are the best way to go as they are quick to query.
The best way to implement this is to use Redis. If you haven't worked with Redis before, I suggest you start: you will be surprised how fast this can get :). The way I would do such a thing is to use the Hash data structure Redis provides. Just give every user their own Hash (with a unique key per user like "user:23:counters"). Inside this Hash, use a daily date such as "05/06/2011" as the field and increment its counter every time an event happens, or whatever else you want to do with it!
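A rough sketch of that using the Jedis client (Java; the key and field layout follow the description above, and the ISO date format is just one choice):

try (Jedis redis = new Jedis("localhost", 6379)) {
    String key = "user:23:counters";            // one Hash per user
    String day = LocalDate.now().toString();    // one field per day, e.g. "2011-06-05"

    redis.hincrBy(key, day, 1);                 // count one event for today

    // Later, fetch the whole daily series in a single call for plotting:
    Map<String, String> dailyCounts = redis.hgetAll(key);
}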
A good start would be this thread: Time Series Starter. It has a simple, beginner-level solution. If you are OK with Rails models, it shows a way this could work for a so-called "irregular" time series: an event here and there, not at a regular interval, like a sensor that sends data when your door is opened.
The other thing, and what I was looking for in this thread, is a regular time series DB: values arrive at a fixed interval, say 60/minute (i.e. 1 per second), for example from a temperature sensor. This all boils down to datasets with "buckets", as you suspect: a time series table gets long, indexes stop helping at some point, etc. One "bucket" approach using Postgres arrays would be a feasible idea.
It's not "plug and play", as far as my research on the web goes.