My rails app has several custom rake tasks defined under lib/tasks/*
When any of these tasks run, I'd like to output a log START and STOP message.
For an individual rake task, I can do something like
namespace :foo do
  task :bar, [:some_arg] => :some_pre_requisite do |t, args|
    Rails.logger.info "Rake START - {task: \"#{t}\", args: #{args}}"
    # Do stuff
    Rails.logger.info "Rake STOP - {task: \"#{t}\", args: #{args}}"
  end
end
This defines a rake task that takes an argument :some_arg, runs a prerequisite task (:some_pre_requisite), and then executes the task body. It outputs log statements before and after, saying that my task has started and stopped.
If I have several dozen tasks, this gets really repetitive. I'd love to "wrap" the rake task in some other method that
prints the START statement
invokes/yields/runs the actual rake task called
prints the STOP statement
I know I could set up the first print as a prerequisite and the second print as a separate enhance call, but that feels messy. It puts the two print statements in different locations, and I'd also have to manually enhance each task individually.
Any better thoughts on how this could be achieved?
Thanks!
About the after hook:
Rake::Task['db:migrate'].enhance do
  puts "AFTER"
end
http://ruby-doc.org/stdlib-2.0.0/libdoc/rake/rdoc/Rake/Task.html#method-i-enhance
For the before hook (untested):
task :before do
  puts "BEFORE"
end

Rake::Task['db:migrate'].enhance(['before'])
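Building on that, one way to avoid repeating the START/STOP lines in every task is to loop the same enhance mechanism over every task the app defines. This is an untested sketch: the file name is illustrative, it has to be loaded after all the other task files, and unshifting onto Task#actions is not a documented extension point (it just happens to be a plain Array), but it gives a "before" action that still sees the task's arguments:

# lib/tasks/zz_logging.rake (illustrative name, loaded after the other task files)
Rake::Task.tasks.each do |task|
  log_start = proc do |t, args|
    Rails.logger.info "Rake START - {task: \"#{t.name}\", args: #{args.to_hash}}"
  end

  # Not a documented hook: Task#actions is a plain Array, so unshift puts
  # the START logging ahead of the task's own body.
  task.actions.unshift(log_start)

  # enhance with a block appends the STOP logging after the task's body.
  task.enhance do |t, args|
    Rails.logger.info "Rake STOP - {task: \"#{t.name}\", args: #{args.to_hash}}"
  end
end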
I have a Rails application that makes use of sidekiq and sidekiq-cron to run some rake tasks on a schedule.
Here is a sample worker
def perform
  RailsApp::Application.load_tasks

  # generate output json
  Rake::Task["json:output_data"].execute

  # run r-script
  Rake::Task["r_script:generate_questions"].execute
end
And a sample task
task output_data: :environment do
  p "generating output data"

  unless Dir.exist?("vendor/r_script/data/output_json")
    FileUtils.mkdir_p("vendor/r_script/data/output_json")
  end

  unless Dir.exist?("vendor/r_script/data/input_json")
    FileUtils.mkdir_p("vendor/r_script/data/input_json")
  end

  Rake::Task["json:generate_master_json"].execute
  Rake::Task["json:generate_raters"].execute
  Rake::Task["json:generate_ratees"].execute
end
The first time I queue the worker using the cron tab in the sidekiq web interface, it works fine: "generate output data" prints once, as expected. Every time I run it again, "generate output data" prints an extra time; on the nth enqueue of the sidekiq job, it prints n times.
How do I prevent this from happening? Do I need to clear the tasks somehow?
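What is likely happening: RailsApp::Application.load_tasks re-evaluates all of the .rake files on every call, and Rake appends the newly loaded actions to the task objects that already exist in the process, so each run adds one more copy of every task body. A hedged sketch of the usual guard (calling Rake::Task.clear before load_tasks is another common variant):

def perform
  # Only load the rake tasks once per process; repeated load_tasks calls
  # stack duplicate actions onto the same task objects.
  RailsApp::Application.load_tasks unless Rake::Task.task_defined?("json:output_data")

  # generate output json
  Rake::Task["json:output_data"].execute

  # run r-script
  Rake::Task["r_script:generate_questions"].execute
end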
I wanted to create a logger to output some messages to the screen as well as to the log file. I had created a method in the rake task that I was going to pass the message to, so I could output to both the screen and the log file if required. This returned a "wrong number of arguments (1 for 0)" error when run. To investigate further, I created the simple rake task below just to see what was going on.
Task
namespace :x do
  desc "test"
  task :y => :environment do
    puts "hello"
  end

  def logger
    puts "logger"
  end
end
Surprisingly, when I ran the task, it ran the logger method twice before running the actual task (see output).
Output
$ rake x:y
logger
logger
hello
This explained my argument error, because no arguments would be passed to the logger method if it was run before the task.
My question is two fold:
1) Why does it run the method at all as it's not referenced in the task?
2) Why does it run the method twice? (I would expect once if it was going to run it but twice is just plain odd)
I'll go back to putting all the logging in the task and remove the method. It would be useful to know why the simple example does what it does.
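For what it's worth, namespace just yields its block without changing self, so def logger above defines a top-level method visible to everything else in the process; any code in the rake/Rails boot path that calls a bare logger will now hit this one (apparently twice here), which also explains the earlier argument error. One way to keep a screen-plus-logfile helper without that collision is to scope it in a module with a less generic name; the names below are illustrative, not a canonical fix:

module TaskLog
  # Print to the screen and, when Rails is loaded, to the Rails log as well.
  def self.log(message)
    puts message
    Rails.logger.info(message) if defined?(Rails)
  end
end

namespace :x do
  desc "test"
  task :y => :environment do
    TaskLog.log "hello"
  end
end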
I'm trying to split up a rake task that I have. What I want is for it to fire off another rake task after the first one completes.
Is this possible, and if so, how?
You can use enhance to extend one task with another:
task :extra_behavior do
  # extra
end

Rake::Task["first:task"].enhance do
  Rake::Task[:extra_behavior].invoke
end
Passing a task as an argument to enhance causes it to run BEFORE the task you are "enhancing".
Rake::Task["task_A"].enhance(["task_B"])
# Runs task_B
# Runs task_A
Passing a task to enhance in a block causes it to run AFTER the task you are "enhancing".
Rake::Task["task_A"].enhance do
Rake::Task["task_B"].execute
end
# Runs task_A
# Runs task_B
Reference: Rake Task enhance Method Explained
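Putting the two forms together, a minimal self-contained example (task names made up) prints before, first, after when you run rake first:

task :first do
  puts "first"
end

task :before do
  puts "before"
end

task :after do
  puts "after"
end

# Prerequisites passed to enhance run before :first ...
Rake::Task[:first].enhance([:before])

# ... and the block runs after :first's own actions.
Rake::Task[:first].enhance do
  Rake::Task[:after].invoke
end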
I have a rake task in a Rails 4 application that is called from my controller and runs in the background, like this:
system "rake fetch_news SOURCE=#{src} CATS=#{cat} C_ID=#{c_id} S_ID=#{s_id} FEED_ID=#{f_id} &"
It runs perfectly in the background and does its job, but I need to start it again with the same parameters, after a delay of 1.5 to 2 minutes.
Informally, I see this like:
desc "Fetch news for specific category"
task :fetch_news => :environment do
... code of my background task ...
sleep 120
<< insert here some code to restart the task >>
end
but I don't know how to implement this :(. Initially I thought I could define a function inside my task, call the function, sleep, then call it again, but the task doesn't seem to permit defining a function inside it.
I searched Google for solutions but didn't find anything that helps. Can you give me an idea?
Thanks!
Instead of rake tasks you should use:
CRON - if the parameters are fixed. Look at the whenever gem if you want to simplify working with CRON.
Sidekiq - if you want to change the parameters at runtime. With an extension such as sidekiq-cron, Sidekiq can define recurring jobs.
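For the "same parameters after a delay" part specifically, here is a hedged sketch of the Sidekiq route; the worker class and file name are made up, while perform_async and perform_in are standard Sidekiq calls:

# app/workers/fetch_news_worker.rb (hypothetical)
class FetchNewsWorker
  include Sidekiq::Worker

  def perform(source, cats, c_id, s_id, feed_id)
    # ... the work currently done inside the fetch_news rake task ...
  end
end

# In the controller, instead of shelling out to rake:
# run once now, and once more with the same arguments about two minutes later.
FetchNewsWorker.perform_async(src, cat, c_id, s_id, f_id)
FetchNewsWorker.perform_in(2.minutes, src, cat, c_id, s_id, f_id)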
I have a batch of rake tasks that run sequentially:
task :batch_tasks => :environment do
  Rake::Task["db:drop"].execute
  Rake::Task["db:create"].execute
  Rake::Task["db:migrate"].execute
  Rake::Task["db:seed"].execute
  Rake::Task["db:test:prepare"].execute
  Rake::Task["custom_task_1"].execute
end
Here's what's in custom_task_1:
task :custom_task_1 => :environment do
  puts "begin custom task"

  orders = Order.all # three records
  orders.each do |order|
    puts "Do something to Order\n"
  end

  puts "end custom task"
end
When I run the above batch process, here's what happens:
rake batch_tasks
begin custom task
end custom task
But if I run the custom task AFTER the batch process, here's what happens:
rake custom_task_1
begin custom task
Do something to Order
Do something to Order
Do something to Order
end custom task
One thing to note: when I run the debugger on rake batch_tasks with a breakpoint after db:seed, evaluating Order.all returns an empty array []. However, Order.all does have data immediately after all of the rake tasks have finished.
What am I missing about rake db:seed and having access to ActiveRecord data in the next task called?
As mu is too short suggested, this is related to the need to reload models before using them in migrations. The reason is that all of your rake tasks run in a common environment, i.e., Rails is only loaded once, so the cached table definitions may already be established. You will probably have to use reset_column_information to load the new values.
Alternatively, the sequence of tasks you are running looks like it should be run independently, which could be a good use case for capistrano or thor.
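Concretely, that reset_column_information suggestion would look something like this inside the custom task (untested sketch):

task :custom_task_1 => :environment do
  puts "begin custom task"

  # Throw away the schema/column cache picked up before db:drop and
  # db:migrate ran earlier in this same process, then query again.
  Order.reset_column_information
  orders = Order.all # three records

  orders.each do |order|
    puts "Do something to Order\n"
  end

  puts "end custom task"
end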
So the quick fix was to move the db:test:prepare line to the end of the batch.
I believe the problem stemmed from db:test:prepare switching the environment from development to test, so the custom task then ran in that test environment, which had an empty database.
The db:seed command probably switches the environment back to dev and the custom task runs against the correct database.
task :batch_tasks => :environment do
  Rake::Task["db:drop"].execute
  Rake::Task["db:create"].execute
  Rake::Task["db:migrate"].execute
  Rake::Task["db:seed"].execute
  Rake::Task["custom_task_1"].execute
  Rake::Task["db:test:prepare"].execute # <-- Moved after all other tasks
end