I want to add a revision counter to my rails app.
Not the number of commits, necessarily, but the number of live pushes/deployments, for example.
I'm using GitHub as my remote repo.
Any suggestions?
Thanks
There's not one magic solution.
But basically, you should execute some code every time you deploy your application that increments the number of deployments by one.
One solution would be to create a Capistrano task which increments it.
namespace :deploy do
  desc "Increments the number of deployments"
  task :increment do
    Config.find_by_key('deployments').increment!(:value)
  end
end
It increments the "deployments" row in a config table (which you have to implement, one way or another).
And in your Capistrano recipes, you add the following:
after "deploy", "deploy:increment"
Every time you deploy your application, the deployment value in the config model will be incremented by one.
This is only one example of a possible implementation. You might want to store the number of deployments somewhere else.
The main idea is to have the code executed every time you deploy.
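For instance, a minimal sketch of a file-based variant (the file name, hook, and shell one-liner are my assumptions, not part of the original answer):

# config/deploy.rb -- hypothetical Capistrano 2 task keeping the counter
# in a file on the server instead of a database row
namespace :deploy do
  desc "Increments a deploy counter stored in shared/DEPLOY_COUNT"
  task :increment_file do
    counter = "#{shared_path}/DEPLOY_COUNT"
    # read the current count (defaulting to 0), add one, write it back
    run "echo $(( $(cat #{counter} 2>/dev/null || echo 0) + 1 )) > #{counter}"
  end
end
after "deploy", "deploy:increment_file"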
I am creating an automatic raffling system. I have a draw button that will run a draw function to select a winner or winners and it sends an email to the admin.
I want this to be a completely automated system so that the admin only has to create the raffles and they receive an email with who won after the draw date has passed. My raffles have a draw date associated with them and once that passes, I need the function to be called.
How do I tell the application to check the time/date to see if any of the raffle draw times have passed? I have looked everywhere and cannot seem to find a way to do it.
You could use the whenever gem to define a job that runs hourly (or however often you want), checks the draw dates, and runs the draw for any that have passed.
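A minimal sketch of what that might look like in whenever's config/schedule.rb (the rake task name is an assumption):

# config/schedule.rb (whenever gem) -- writes a crontab entry that runs hourly
every 1.hour do
  rake "raffle:check"  # hypothetical task that draws any raffles past their date
end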
I use Clockwork in my rails apps whenever I need to schedule things. Simply set it up to run a job when you want and do your logic within that job to see which raffles need to be processed. Example:
Clockwork config
every(1.day, 'Raffle::CheckJob', at: '01:00')
Job
Raffle.not_complete.find_each(batch_size: 10) do |raffle|
  if raffle.has_ended?
    # pick winners and notify the admin
  end
end
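Putting the config and the job together, a sketch of a complete clock file (the draw! method is a hypothetical stand-in for the actual winner selection and admin email):

# clock.rb (clockwork gem)
require 'clockwork'
require './config/boot'
require './config/environment'

module Clockwork
  every(1.day, 'Raffle::CheckJob', at: '01:00') do
    Raffle.not_complete.find_each(batch_size: 10) do |raffle|
      raffle.draw! if raffle.has_ended?  # hypothetical: pick winners, email admin
    end
  end
end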
You should write a rake task and add its execution to your crontab on the server. You can use the whenever gem to simplify crontab scripting and auto-update it on each deploy (whenever-capistrano/whenever-mina). Example of your rake task:
namespace :raffle do
  task :check => :environment do  # :environment loads Rails so models are available
    Raffle.get_winners.each do |w|
      Mailer.send_win_mail(w).deliver_later
    end
  end
end
deliver_later hands the mail off for background delivery via whatever queue driver you use (DelayedJob/Resque/Backburner, etc.)
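For reference, the queue driver is chosen through Active Job's adapter setting; a one-line sketch assuming Rails 4.2+ with DelayedJob:

# config/application.rb -- tells Active Job which backend runs deliver_later jobs
config.active_job.queue_adapter = :delayed_job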
I discovered the problem; it had nothing to do with locks.
It seems that in production I had a jobs:work process running permanently, started I don't know how! So all the jobs processed by that process were doing their work somewhere else!
And that somewhere else was not my database, so I just killed it and everything started to work fine.
Sorry for wasting your time!!
Sorry, I forgot to mention that I'm working with Rails 2.3.8!
I have asynchronous updates to the same row, same column, from different background processes. I'm using the delayed_job gem.
What I want to do is:
ActiveRecord::Base.connection.execute(
  "UPDATE table_name SET column = column + #{updated_number}
   WHERE id = #{self.id}")
My database is MySQL and the table I write to is InnoDB.
So the problem is that running that query in different delayed_jobs causes some increments to be lost. Please note the (column = column + #{updated_number}): I want to increment the current value in the table!
Using a Rails lock doesn't work because each delayed job runs in a different process; I was thinking more along the lines of the table having some locks so the updates happen safely.
One more thing: using lock!, in my development code, I run rake jobs:work three times, then confirm in the delayed_jobs table that three different processes locked three jobs, and in development it works perfectly.
But when I put that code in production it doesn't work. The loss of incremented data is still there.
Use pessimistic locking:
your_object.with_lock do
  your_object.column += updated_number
  your_object.save!
end
This will make sure the updates are synchronized through the database: with_lock opens a transaction and reloads the record with SELECT ... FOR UPDATE, so concurrent workers serialize on that row.
EDIT: I have totally rewritten this question for clarity. I got no comments and no answers earlier.
I am maintaining a 2.x Rails app with plenty of statistical data. Some data is real and some is estimated for future years. Every year I need to update the estimated data with real data and calculate new estimates.
I have been using BIG yml-files and migrations for loading the data into the app every year. My migrations are full of estimation calculations and data corrections.
Problem
My migrations are full of non-schema-related material, and I can't even dream of doing db:migrate:reset without waiting a few hours (if it even works). I'd love to see my migrations nice and clean, with only schema-related modifications. But how am I supposed to update the data every year if not with migrations?
Help needed
I'd like to hear your comments and answers. I'm not looking for a silver bullet, more like best practices and ideas on how people are handling similar situations.
It sounds like you have a large operation (data load using yml files) once a year but smaller operations once a month.
From my experience with statistical data you will probably end up doing more and more of these operations to clean and add more data.
I would use a job processing framework like Resque and resque-scheduler.
You can schedule the jobs to run once a month, year, day or constantly running. A job is something like loading yml files (or sets of yml files) or cleaning up data. You can control parameters to send to your job so you can use one class but alternate how it updates or cleans your data based on the way you enqueue or schedule the job.
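A minimal sketch of such a parameterized job with Resque and resque-scheduler (the class, helper, and schedule entry are all assumptions):

# app/jobs/statistics_job.rb -- hypothetical Resque job
class StatisticsJob
  @queue = :statistics

  # mode and year are passed in when the job is enqueued or scheduled,
  # so one class can both load and clean data
  def self.perform(mode, year)
    case mode
    when 'load'  then StatisticsLoader.load_year(year)   # hypothetical helper
    when 'clean' then StatisticsLoader.clean_year(year)  # hypothetical helper
    end
  end
end

# resque_schedule.yml -- resque-scheduler entry that runs it once a year:
#   load_statistics:
#     cron: "0 2 1 1 *"
#     class: StatisticsJob
#     args: [load, 2010]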
First of all, I have to say that this is a very interesting question. As far as I know, it isn't a good idea to load data from migrations. Generally speaking you should use db/seeds.rb for loading data into your db, and I think it could be a good idea to write a little helper class to put in your lib dir and then call it from db/seeds.rb. I imagine you could organize your files in the following way:
lib/data_loader.rb
lib/years/2009.rb
lib/years/2010.rb
Obviously, you should clean up your migrations and write the code for lib/data_loader.rb however you prefer; I was only trying to offer a general idea of how I'd organize my code if I had to face a problem like that.
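A rough sketch of what lib/data_loader.rb could look like (the class and method names are my assumptions):

# lib/data_loader.rb -- hypothetical helper called from db/seeds.rb
class DataLoader
  # Each lib/years/<year>.rb file holds that year's data corrections
  # and estimation calculations; this simply requires the right one.
  def self.load_year(year)
    require File.join(RAILS_ROOT, 'lib', 'years', "#{year}.rb")
  end
end

# db/seeds.rb
DataLoader.load_year(2010)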
I'm not sure I've replied to your question in a way that helps but I hope it does.
If I were you I would go with creating a custom rake task. You will have access to all your models and ActiveRecord connections, and once a year you will end up doing:
rake calculate
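A sketch of such a task (the file path and the model method are hypothetical):

# lib/tasks/calculate.rake
desc "Replace estimated figures with real data and compute new estimates"
task :calculate => :environment do
  Statistic.find_each do |stat|
    stat.update_attributes(:value => stat.compute_real_value)  # hypothetical method
  end
end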
I have a situation where I need to load data from CSV files that change infrequently and update data from the Internet daily. I'll include a somewhat complete example on how to do the former.
First I have a rake file in lib/tasks/update.rake:
require 'update/from_csv_files.rb'

namespace :update do
  task :csvfiles => :environment do
    Dir.glob('db/static_data/*.csv') do |file|
      Update::FromCsvFiles.load(file)
    end
  end
end
The => :environment means we will have access to the database via the usual models.
Then I have code in the lib/update/from_csv_files.rb file to do the actual work:
require 'csv'

module Update
  module FromCsvFiles
    def self.load(file)
      csv = CSV.open(file, 'r')
      csv.each do |row|
        id = row[0]
        s = Statistic.find_by_id(id)
        if s.nil?
          s = Statistic.new
          s.id = id
        end
        s.survey_area = row[1]
        s.nr_of_space_men = row[2]
        s.save
      end
    end
  end
end
Then I can just run rake update:csvfiles whenever my CSV files change to load the new data. I also have another task that is set up in a similar way to update my daily data.
In your case you should be able to write some code to load your YML files or make your calculations directly. To handle your smaller corrections you could make a generic method for loading YML files and call it with specific files from the rake task. That way you only need to include the YML file and update the rake file with a new task. To handle execution order you can make a rake task that calls the other rake tasks in the appropriate order. I'm just throwing around some ideas now, you know better than me.
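For example, a sketch of such a generic YML loader (the task and file names are assumptions):

# lib/tasks/update.rake -- hypothetical generic YML loading
require 'yaml'

namespace :update do
  def load_yml(file)
    YAML.load_file(file).each do |attributes|
      id = attributes.delete('id')
      s = Statistic.find_by_id(id) || Statistic.new
      s.id = id
      s.attributes = attributes
      s.save
    end
  end

  task :data2010 => :environment do
    load_yml('db/static_data/2010.yml')
  end

  desc "Run the yearly updates in the appropriate order"
  task :yearly => [:csvfiles, :data2010]
end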
I'm using thinking_sphinx and am delta indexing a model.
The delta index works but there is a small bug. When I create a new product it is indexed. However, when I update that product it is not getting indexed right away. I have to update or create another product before that old updated product is indexed.
Not quite sure where to start.
My recommendation would be to use delayed_delta indexing instead of straight delta indexing (which can be slow and if you have a few updates in a few seconds, can cause you all kinds of problems).
It takes two steps:
Change your define_index block to have a set_property :delta => :delayed (there's a sketch of this after the script below)
Create a short script to make sure the delayed indexing jobs get run. Here's the one I use:
#!/usr/bin/env ruby
## this script is for making sure any delayed_jobs get run
## it is used by thinking sphinx
require File.dirname(__FILE__) + '/../config/environment'

# you can also put the definition of this in config/environments/*.rb
# so it's different for test, production and development
JobRunnerPidFile = "#{RAILS_ROOT}/tmp/pids/job_runner.pid"

if File.exists?(JobRunnerPidFile)
  old_pid = File.read(JobRunnerPidFile).to_i
  begin
    if Process.getpgid(old_pid) > 0
      # still running, let's exit silently...
      exit(0)
    end
  rescue
    # looks like nothing is running, so let's carry on
  end
end

File.open(JobRunnerPidFile, "w") { |f| f.write "#{$$}\n" }
Delayed::Worker.new.start
You can run that script from cron every 5 minutes (it'll only run one instance) or if you have a monitoring service (e.g., monit) you can have it make sure it's running.
Make sure to restart that script whenever you deploy a new version of your code.
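For the first step, a sketch of the model change (the indexed fields are placeholders; keep whatever your define_index already contains):

# app/models/product.rb -- thinking_sphinx index with delayed deltas
class Product < ActiveRecord::Base
  define_index do
    indexes name  # placeholder; index your actual fields here
    set_property :delta => :delayed
  end
end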
When a new resource is created and it needs to do some lengthy processing before the resource is ready, how do I send that processing away into the background where it won't hold up the current request or other traffic to my web-app?
In my model:
class User < ActiveRecord::Base
  after_save :background_check

  protected

  def background_check
    # check through a list of 10000000000001 mil different
    # databases that takes approx one hour :)
    if check_for_record_in_www(self.username)
      # code that is run after the 1 hour process is finished
      # (update_attribute needs both the attribute name and the value)
      update_attribute(:has_record, true)
    end
  end
end
You should definitely check out the following Railscasts:
http://railscasts.com/episodes/127-rake-in-background
http://railscasts.com/episodes/128-starling-and-workling
http://railscasts.com/episodes/129-custom-daemon
http://railscasts.com/episodes/366-sidekiq
They explain how to run background processes in Rails in every possible way (with or without a queue ...)
I've just been experimenting with the delayed_job gem because it works with the Heroku hosting platform and it was ridiculously easy to set up!
Add the gem to your Gemfile, then run: bundle install, rails g delayed_job, rake db:migrate
Then start a queue handler with:
RAILS_ENV=production script/delayed_job start
Where you have a method call which is your lengthy process, i.e.
company.send_mail_to_all_users
you change it to:
company.delay.send_mail_to_all_users
Check the full docs on github: https://github.com/collectiveidea/delayed_job
Start a separate process, which is probably most easily done with system, prepending a 'nohup' and appending an '&' to the end of the command you pass it. (Make sure the command is just one string argument, not a list of arguments.)
There are several reasons you want to do it this way, rather than, say, trying to use threads:
Ruby's threads can be a bit tricky when it comes to doing I/O; you have to take care that some things you do don't cause the entire process to block.
If you run a program with a different name, it's easily identifiable in 'ps', so you don't accidentally think it's a FastCGI back-end gone wild or something, and kill it.
Really, the process you start should be "daemonized"; see the Daemonize class for help.
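A minimal sketch of the system call described above (the script name comes from the question's example; the log redirect is my addition, and username is assumed to be in scope):

# fire and forget: nohup detaches the child from the web process, & backgrounds it
system("nohup script/runner check_for_record_in_www.rb #{username} >> log/background.log 2>&1 &")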
You ideally want to use an existing background job server rather than writing your own. These will typically let you submit a job and give you back a unique key; you can then use the key to periodically query the job server for the status of your job without blocking your web app. Here is a nice roundup of the various options out there.
I like to use BackgrounDRb; it's nice because it allows you to communicate with it during long processes, so you can have status updates in your Rails app.
I think spawn is a great way to fork your process, do some processing in the background, and show the user just some confirmation that this processing was started.
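A sketch of how that reads with the spawn plugin (the block body is a hypothetical stand-in for the real work):

# in a controller or model, with the spawn plugin installed
spawn do
  # runs in a forked child; the request returns immediately
  user.update_attribute(:has_record, true) if check_for_record_in_www(user.username)
end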
What about:
def background_check
  exec("script/runner check_for_record_in_www.rb #{self.username}") if fork.nil?
end
The program "check_for_record_in_www.rb" will then run in another process and will have access to ActiveRecord, being able to access the database.