TS delayed delta - check for finished job - delayed-job

Is there a way to check whether a TS delayed delta job has finished? I have a scenario in which I need to run a new search in an after_save callback and of course I'd like to see the changes to the delta index reflected in the search results.
Here are some details of my example:
I have a model called Feature which has many annotations (Annotation model). The index looks like this:
define_index do
  indexes annotations.value, :as => :annotations
  # other indexes
  set_property :delta => :delayed
end
When the "value" of an annotation changes I update the delta attribute of the associated feature in an Annotation model callback. Setting the delta attribute to true spins off a delayed_job task to update the delta index. In a separate callback I'd like to perform a new search against the updated delta index, but I noticed that the search never reflects the current state of the index. This is no doubt because the delta jobs are not finished yet.
What would be the best strategy for dealing with these timing issues?
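For reference, the annotation callback described above looks roughly like this (a sketch: the exact implementation, including the value_changed? guard, is assumed):

class Annotation < ActiveRecord::Base
  belongs_to :feature

  after_save :flag_feature_delta

  private

  # Setting the delta flag enqueues a delayed_job task that
  # reprocesses the Feature delta index.
  def flag_feature_delta
    feature.update_attributes(:delta => true) if value_changed?
  end
end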

I can think of only one way: query the delayed_jobs table directly.
ActiveRecord::Base.connection.execute("select count(1) from delayed_jobs where handler like '%%'")
If the job has succeeded, the entry is sure to be gone. This is the only way I can think of. Alternatively, disable the delayed delta for this model alone if that's not a big deal.
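A slightly more targeted version of that check, as a sketch (the standard Delayed::Job model is assumed, and the '%ThinkingSphinx%' handler pattern is a guess; inspect your delayed_jobs rows to see what the delta jobs actually serialize as):

def delta_jobs_pending?
  Delayed::Job.where("handler LIKE ?", "%ThinkingSphinx%").exists?
end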

Related

Why do you need to run delta indexing after adding new records or changing records?

As far as I've read and understood delta indexing, when we add new records or change existing records we need to re-index Sphinx for that data to show up; otherwise it will not appear.
But I've noticed that the data updates without re-indexing. So what is the purpose of delta re-indexing?
With Thinking Sphinx, there's a distinction between a full re-index, where all the indices are reprocessed (via rake ts:index or rake ts:rebuild), and processing a single index.
When you have delta indexing enabled, it means that the delta index for a given model is automatically processed straight after the change to a record, or adding a new record. This is either done as part of the standard callback process (when using :delta => true) or via a background worker (Sidekiq, DelayedJob, etc) if you're using the appropriate delta gem for those.
All of this means that you don't need to run a full reprocessing of all indices for the change to be present - the delta index is reprocessed automatically, and the record's changes are reflected in Sphinx.
One catch worth noting is that the more changes that happen, the larger the delta index gets, and thus the slower it is to process. So, a full re-index is still required on a regular basis (hourly? daily? depends on your app) to keep delta processing times fast.
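If you want to automate that regular full re-index, one common approach is a cron entry, for example via the whenever gem. A sketch (the hourly frequency is just an example; pick whatever suits your app):

# config/schedule.rb (whenever gem)
every 1.hour do
  rake "ts:index" # full reprocess; folds the delta changes back into the core index
end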

Mass updating many records in Rails with Resque?

If I had to update 50,000 users, how would I go about it with a background processing library, in a way that avoids an N+1 issue?
I have users, membership, and points.
Memberships are tied to total point values. If a membership's point values are modified, I have to run through all of the users to update each one's membership accordingly. This is what I need to queue so the server isn't hanging for 30+ minutes.
Right now I have in a controller action
def update_memberships
  User.find_each do |user|
    user.update_membership_level! # looks up the Membership defined by x points, assigns it to the user, then saves
  end
end
This is a very expensive operation. How would I move it into the background so the POST from the form returns near-instantaneously?
You seem to be after how to get this done with Resque or delayed_job. I'll give an example with delayed_job.
To create the job, add a method to app/models/user.rb:
class << self
  def process_x_update
    User.where("z = 1").find_each(:batch_size => 5000) do |user|
      user.x = user.y + 3
      user.save
    end
  end
  # for a class method, handle_asynchronously must be called inside
  # class << self so it can find the method it wraps
  handle_asynchronously :process_x_update
end
This will update all User records where z = 1, setting user.x = user.y + 3. It works through the users in batches of 5,000, which keeps memory usage bounded and performance roughly linear.
Calling User.process_x_update now returns very quickly, because it only enqueues the job. To actually process the job, run rake jobs:work in the background, or start a cluster of daemons with ./script/delayed_job start.
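With the method wrapped by handle_asynchronously, the controller action shrinks to a single enqueue and the response is near-instantaneous. A sketch (the redirect target and flash message are assumptions):

def update_memberships
  User.process_x_update # enqueues a Delayed::Job and returns immediately
  redirect_to users_path, :notice => "Membership update has been queued"
end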
One other thing: can you move this logic to one SQL statement? That way you could have one statement that's fast and atomic. You'd still want to do this in the background as it could take some time to process. You could do something like:
class << self
  def process_x_update
    User.where("z = 1").update_all("x = y + 3")
  end
  handle_asynchronously :process_x_update
end
You're looking for update_all. From the docs:
Updates all records with details given if they match a set of conditions supplied, limits and order can also be supplied.
It'll probably still take a while on the SQL side, but at least it's a single statement. Check out the documentation to see usage examples.

State machine transitions at specific times

Simplified example:
I have a to-do. It can be future, current, or late based on what time it is.
Time       State
8:00 am    Future
9:00 am    Current
10:00 am   Late
So, in this example, the to-do is "current" from 9 am to 10 am.
Originally, I thought about adding fields for "current_at" and "late_at" and then using an instance method to return the state. I can query for all "current" todos with now > current_at and now < late_at.
In short, I'd calculate the state each time or use SQL to pull the set of states I need.
If I wanted to use a state machine, I'd have a set of states and would store that state name on the to-do. But, how would I trigger the transition between states at a specific time for each to-do?
1. Run a cron job every minute that pulls anything still in a state whose transition time has passed, and updates it.
2. Use background processing to queue transition jobs at the appropriate times in the future; in the above example I would have two jobs, "transition to current at 9 am" and "transition to late at 10 am", which would presumably have logic to guard against deleted todos and "don't mark late if done" and such.
Does anyone have experience with managing either of these options when trying to handle a lot of state transitions at specific times?
It feels like a state machine, I'm just not sure of the best way to manage all of these transitions.
Update after responses:
Yes, I need to query for "current" or "future" todos
Yes, I need to trigger notifications on state change ("your todo wasn't to-done")
Hence my desire for more of a state-machine-like approach, so that I can encapsulate the transitions.
I have designed and maintained several systems that manage huge numbers of these little state machines. (Some systems, up to 100K/day, some 100K/minute)
I have found that the more state you explicitly fiddle with, the more likely it is to break somewhere. Or to put it a different way, the more state you infer, the more robust the solution.
That being said, you must keep some state. But try to keep it as minimal as possible.
Additionally, keeping the state-machine logic in one place makes the system more robust and easier to maintain. That is, don't put your state machine logic in both code and the database. I prefer my logic in the code.
Preferred solution. (Simple pictures are best).
For your example I would have a very simple table:
task_id, current_at, current_duration, is_done, is_deleted, description...
and infer the state based on now in relation to current_at and current_duration. This works surprisingly well. Make sure you index/partition your table on current_at.
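For example, the inference can live in a single model method. A minimal sketch using the column names above (current_duration is assumed to be stored in seconds, and done/deleted checks are omitted):

def state(now = Time.now)
  return :future  if now < current_at
  return :current if now < current_at + current_duration
  :late
end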
Handling logic on transition change
Things are different when you need to fire an event on the transition change.
Change your table to look like this:
task_id, current_at, current_duration, state, locked_by, locked_until, description...
Keep your index on current_at, and add one on state if you like. You are now mangling state, so things are a little more fragile due to concurrency or failure, so we'll have to shore it up a little bit using locked_by and locked_until for optimistic locking which I'll describe below.
I assume your program will fail in the middle of processing on occasion, even if only for a deployment.
You need a mechanism to transition a task from one state to another. To simplify the discussion, I'll concern myself with moving from FUTURE to CURRENT, but the logic is the same no matter the transition.
If your dataset is large enough, you constantly poll the database to discover tasks requiring transition (with linear or exponential back-off, of course, when there's nothing to do); otherwise you use your favorite scheduler, whether that's cron, something Ruby-based, or Quartz if you subscribe to Java/Scala/C#.
Select all entries that need to be moved from FUTURE to CURRENT and are not currently locked.
-- move tasks from FUTURE to CURRENT
select task_id
from tasks
where now >= current_at
  and (locked_until is null or locked_until < now)
  and state = 'FUTURE'
  and current_at >= (now - 3 days) -- optimization
limit :LIMIT -- optimization
Throw all these task_ids into your reliable queue. Or, if you must, just process them in your script.
When you start to work on an item, you must first lock it using our optimistic locking scheme:
update tasks
set locked_by = :worker_id -- unique identifier for host + process + thread
  , locked_until = now + 5 minutes -- however this looks in your SQL language
where task_id = :task_id -- you can lock multiple tasks here if necessary
  and (locked_until is null or locked_until < now) -- only if it's not locked!
Now, if you actually updated the record, you own the lock. You may now fire your special on-transition logic. (Applause. This is what makes you different from all the other task managers, right?)
When that succeeds, update the task state, making sure you still hold the optimistic lock:
update tasks
set state = :new_state
, locked_until = null -- explicitly release the lock (an optimization, really)
where task_id = :task_id
and locked_by = :worker_id -- make sure we still own the lock
-- no-one really cares if we overstep our time-bounds
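In ActiveRecord terms, the lock attempt can be a single update_all call whose return value (the number of rows changed) tells you whether you won the lock. A sketch, assuming a Task model over the table above:

def lock_task(task_id, worker_id)
  rows = Task.where(
    "task_id = ? AND (locked_until IS NULL OR locked_until < ?)",
    task_id, Time.now
  ).update_all(
    ["locked_by = ?, locked_until = ?", worker_id, Time.now + 5.minutes]
  )
  rows == 1 # we hold the lock only if the row actually changed
end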
Multi-thread/process optimization
Only do this when you have multiple threads or processes updating tasks in batch (such as in a cron job, or when polling the database)! The problem is that they'll each get similar result sets from the database and will then contend to lock each row. This is inefficient both because it slows down the database and because you have threads doing little besides slowing each other down.
So, add a limit to how many results the query returns and follow this algorithm:
results = database.tasks_to_move_to_current_state :limit => BATCH_SIZE
until results.empty?
  results.shuffle! # make sure we're not in lock step with another worker
  contention_count = 0
  results.each do |task_id|
    if database.lock_task :task_id => task_id
      on_transition_to_current task_id
    else
      contention_count += 1
    end
    break if contention_count > MAX_CONTENTION_COUNT # too much contention!
  end
  results = database.tasks_to_move_to_current_state :limit => BATCH_SIZE
end
Fiddle around with BATCH_SIZE and MAX_CONTENTION_COUNT until the program is super-fast.
Update:
The optimistic locking allows for multiple processors in parallel.
By having the lock time out (via the locked_until field), the scheme allows for failure while processing a transition. If a processor fails, another processor can pick up the task after the timeout (5 minutes in the code above). It is important, then, to a) only lock the task when you are about to work on it; and b) lock the task for as long as the work should take, plus a generous leeway.
The locked_by field is mostly for debugging purposes (which process/machine was this on?). The locked_until field alone is enough if your database driver returns the number of rows updated, but only if you update one row at a time.
Managing all those transitions at specific times does seem tricky. Perhaps you could use something like DelayedJob to schedule the transitions, so that a cron job every minute wouldn't be necessary, and recovering from a failure would be more automated?
Otherwise - if this is Ruby, is using Enumerable an option?
Like so (in untested pseudo-code, with simplistic methods)
class ToDo
  def state
    if future?
      "Future"
    elsif current?
      "Current"
    elsif late?
      "Late"
    else
      "must not have been important"
    end
  end

  def future?
    Time.now.hour <= 8
  end

  def current?
    Time.now.hour == 9
  end

  def late?
    Time.now.hour >= 10
  end

  def self.find_current_to_dos
    find(:all, :conditions => "1=1 /* or whatever */").select { |t| t.state == 'Current' }
  end
end
One simple solution for moderately large datasets is to use a SQL database. Each todo record should have a "state_id", "current_at", and "late_at" fields. You can probably omit the "future_at" unless you really have four states.
This allows three states:
Future: when now < current_at
Current: when current_at <= now < late_at
Late: when late_at <= now
Storing the state as state_id (optionally make a foreign key to a lookup table named "states" where 1: Future, 2: Current, 3: Late) is basically storing de-normalized data, which lets you avoid recalculating the state as it rarely changes.
If you aren't actually querying todo records according to state (eg ... WHERE state_id = 1) or triggering some side-effect (eg sending an email) when the state changes, perhaps you don't need to manage state. If you're just showing the user a todo list and indicating which ones are late, the cheapest implementation might even be to calculate it client side. For the purpose of answering, I'll assume you need to manage the state.
You have a few options for updating state_id. I'll assume you are enforcing the constraint current_at < late_at.
The simplest is to update every record: UPDATE todos SET state_id = CASE WHEN late_at <= NOW() THEN 3 WHEN current_at <= NOW() THEN 2 ELSE 1 END;.
You probably will get better performance with something like (in one transaction): UPDATE todos SET state_id = 3 WHERE state_id <> 3 AND late_at <= NOW(); UPDATE todos SET state_id = 2 WHERE state_id <> 2 AND NOW() < late_at AND current_at <= NOW(); UPDATE todos SET state_id = 1 WHERE state_id <> 1 AND NOW() < current_at. This avoids retrieving rows that don't need to be updated, but you'll want indices on "late_at" and "current_at" (you can also try indexing "state_id"; see the note below). You can run these three updates as frequently as you need.
Slight variation of the above is to get the IDs of records first, so you can do something with the todos that have changed states. This looks something like SELECT id FROM todos WHERE state_id <> 3 AND late_at <= NOW() FOR UPDATE. You should then do the update like UPDATE todos SET state_id = 3 WHERE id IN (:ids). Now you've still got the IDs to do something with later (eg email a notification "20 tasks have become overdue").
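That select-then-update variant might look like this in ActiveRecord (a sketch; OverdueMailer is a hypothetical stand-in for whatever side effect you need):

Todo.transaction do
  ids = Todo.connection.select_values(
    "SELECT id FROM todos WHERE state_id <> 3 AND late_at <= NOW() FOR UPDATE"
  )
  unless ids.empty?
    Todo.where(:id => ids).update_all(:state_id => 3)
    OverdueMailer.overdue_notice(ids).deliver # hypothetical: "20 tasks have become overdue"
  end
end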
Scheduling or queuing update jobs for each todo (eg update this one to "current" at 10AM and "late" at 11PM) will result in a lot of scheduled jobs, at least two times the number of todos, and poor performance -- each scheduled job is updating only a single record.
You could schedule batch updates like UPDATE state_id = 2 WHERE ID IN (1,2,3,4,5,...) where you've pre-calculated the list of todo IDs that will become current near some specific time. This probably won't work out so nicely in practice for several reasons. One being some todo's current_at and late_at fields might change after you've scheduled updates.
Note: you might not gain much by indexing "state_id" as it only divides your dataset into three sets. This is probably not good enough for a query planner to consider using it in a query like SELECT * FROM todos WHERE state_id = 1.
The key to this problem that you didn't discuss is what happens to completed todos? If you leave them in this todos table, the table will grow indefinitely and your performance will degrade over time. The solution is partitioning the data into two separate tables (like "completed_todos" and "pending_todos"). You can then use UNION to concatenate both tables when you actually need to.
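For instance, the occasional combined view can be a plain UNION. A sketch with the table names suggested above:

# UNION ALL skips the duplicate-elimination pass, which is safe here
# because a todo lives in exactly one of the two tables
rows = ActiveRecord::Base.connection.select_all(
  "SELECT * FROM pending_todos UNION ALL SELECT * FROM completed_todos"
)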
State machines are driven by something: user interaction, or the last input from a stream, right? In this case, time drives the state machine, so I think a cron job is the right play. It would be the clock driving the machine.
For what it's worth, it is pretty difficult to set up an efficient index on two columns when you have to do a range query like that.
A condition like now > current_at && now < late_at is hard to represent in the database in a performant way as an attribute of the task:
id | title | future_time | current_time | late_time
 1 | hello | 8:00am      | 9:00am       | 10:00am
Never try to force patterns onto problems; it works the other way around. So go directly to a good solution for it.
Here is an idea (for what I understand your problem to be):
Use persistent alerts and one monitored process to "consume" them. Secondarily, query them.
That will allow you to:
keep it simple
keep it cheap to maintain (and, secondarily, keep yourself mentally fresher to work on something else)
keep all the logic in code only (as it should be)
I stress the point of having that process monitored by some kind of watchdog, so you can be sure those alerts are sent on time (or, in the worst case, with some delay after a crash or the like).
Note that persisting those alerts gives you two things:
it makes and keeps your system resilient (more fault tolerant), and
it lets you query future and current items (by querying the alerts' time range however best fits your needs).
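A minimal sketch of that idea (every name here is an assumption, and deliver_alert stands in for your notification code):

class Alert < ActiveRecord::Base
  # columns: todo_id, fire_at, kind, consumed
  def self.due
    where("fire_at <= ? AND consumed = ?", Time.now, false)
  end
end

# The single consumer process, kept alive by a watchdog:
loop do
  Alert.due.find_each do |alert|
    deliver_alert(alert) # hypothetical delivery hook
    alert.update_attributes(:consumed => true)
  end
  sleep 30 # polling interval; tune to the latency you can tolerate
end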
In my experience, a state machine in SQL is most useful when you have an external process acting on something and updating the database with its state. For example, we have a process that uploads and converts videos. We use the database to keep track of what is happening to a video at any time, and what should happen to it next.
In your case, I think you can (and should) use SQL to solve your problem instead of worrying about using a state machine:
Make a todo_states table:
todo_id  todo_state_id  datetime  notified
1        1 (future)     8:00      0
1        2 (current)    9:00      0
1        3 (late)       10:00     0
Your SQL query, where all the real work happens:
SELECT todo_id, MAX(todo_state_id) AS todo_state_id
FROM todo_states
WHERE datetime < NOW()
GROUP BY todo_id
The currently active state is always the one you select. If you want to notify the user just once, insert the original state with notified = 0 and bump it on the first select.
Once the task is "done", you can either insert another state into the todo_states table, or simply delete all the states associated with a task and raise a "done" flag in the todo item, or whatever is most useful in your case.
Don't forget to clean out stale states.
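A sketch of that notify-once pass against the table above (deliver_notification is a hypothetical hook):

rows = ActiveRecord::Base.connection.select_rows(
  "SELECT todo_id, MAX(todo_state_id) AS todo_state_id
     FROM todo_states
    WHERE datetime < NOW() AND notified = 0
    GROUP BY todo_id"
)
rows.each do |todo_id, state_id|
  deliver_notification(todo_id, state_id) # hypothetical hook
  ActiveRecord::Base.connection.update(
    "UPDATE todo_states SET notified = 1
      WHERE todo_id = #{todo_id.to_i} AND todo_state_id = #{state_id.to_i}"
  )
end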

Rails Thinking Sphinx: how do I set delta to true after reindexing?

When I run a reindexing task (rake ts:reindex), it automatically sets the delta value to false. But I definitely want delta indexing to keep working after reindexing, so I want to set the delta value back to true. How can I do that?
You don't need delta indexing after your reindex as the main index will be up to date and complete. Your model should only set the delta flag to true after your next update, which is when your main index will be incomplete.
Thinking Sphinx will automatically set delta to true when you make changes to a model instance.
The only cases where this doesn't happen are when you're actually changing an associated instance instead of the indexed model, or when you're changing the indexed model in a way that doesn't fire the callbacks. #update_attribute (note: singular) does not fire callbacks; #save and #update_attributes do.
So: how are you changing your model instances? Is delta indexing not occurring when you make those changes?

Thinking Sphinx delta indexing - delta index not getting updated

I have delta indexing set up for Thinking Sphinx on one of my models. Whenever a record gets updated, the delta flag is set to true, but I don't see the index being updated with the changes made to the record. My Sphinx configuration files are up to date with the delta property changes. Any idea why the delta indexing is not being triggered?
According to the documentation, after you update the database and the model you should run:
rake thinking_sphinx:rebuild
Maybe you've omitted that step.
