def create
  fr = Front.new
  fr.name = params[:name]
  fr.original_image.attach(params[:img]) # if I comment out this line, the server doesn't stop
  if fr.save
    puts "Got image"
    render_success_data(fr)
  end
end
After sending a POST request with the following parameters, the server actually saves the item, does the linking correctly, and also saves the image to the S3 bucket.
As you can see, the model is saved and successfully committed, then an AnalyzeJob starts and the server just stops.
I tried creating a project from scratch, thinking it was some configuration problem, but I got the same result.
I'm working on Windows 10, so I thought it might be a system incompatibility, but I didn't find anything related.
EDITED:
Here is the method render_success_data
I have a Ruby on Rails web application deployed on Heroku.
This web app fetches job feeds from given URLs as XMLs, then regulates these XMLs and creates a single XML file. It worked pretty well for a while; however, as the number of URLs and job ads has increased, it no longer works at all. The process sometimes takes up to 45 seconds, since there are over 35K job vacancies, and Heroku sends a timeout after 30 seconds, so I am getting an H12 timeout error. This error led me to read about worker dynos and background processing.
I figured out that I should apply the approach below:
Scalable-approach Heroku
Now I am using Redis and Sidekiq on my project. And I am able to create a background worker to do all the dirty work. But here is my question.
Instead of doing this call in the controller class:
def apply
send_data Aggregator.new(providers: providers).call,
type: 'text/xml; charset=UTF-8;',
disposition: 'attachment; filename=indeed_apply_yes.xml'
end
I am doing this perform_async call instead:
def apply
  ReportWorker.perform_async(Time.now)
  redirect_to health_path # returns status 200 OK
end
I implemented this ReportWorker class, which calls the Aggregator service. data_xml holds the data I need to show somewhere, or have downloaded automatically, once it's ready.
class ReportWorker
  include Sidekiq::Worker
  sidekiq_options retry: false

  data_xml = nil # note: this is only a local variable in the class body, not shared state

  def perform(start_date)
    url_one = 'https://www.examplea.com/abc/download-xml'
    url_two = 'https://www.exampleb.com/efg/download-xml'
    cursor = 'stop'
    providers = [url_one, url_two, cursor]
    puts "SIDEKIQ WORKER GENERATING THE XML-DATA AT #{start_date}"
    data_xml = Aggregator.new(providers: providers).call
    puts "SIDEKIQ WORKER GENERATED THE XML-DATA AT #{Time.now}"
  end
end
I know it's not recommended to make send_data/send_file accessible outside of controller classes. Well, any suggestions on how to do this?
Thanks in advance!!
Can you set up a database in your application? Then store a record about each completed job there. You can also save the entire file in the database, but I recommend some cloud storage instead (like Amazon S3).
After that, you can show the current status of queued jobs on a page for the user, with a 'download' button once the job has finished.
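To sketch that idea concretely (all names below are hypothetical, not from the question): keep a small status object per report, let the worker update it as it runs, and only expose the download link once the file is ready. In a real app this would be an ActiveRecord model persisted to the database, with the file itself on S3; the plain Ruby object below just shows the state handling:

```ruby
# Hypothetical sketch of per-report status tracking. In a real app this
# would be an ActiveRecord model; it is a plain Ruby object here so the
# idea stands on its own.
class ReportStatus
  STATES = %w[queued running done failed].freeze

  attr_reader :state, :file_url

  def initialize
    @state = 'queued'
  end

  # Called by the worker when it picks the job up.
  def start!
    @state = 'running'
  end

  # Called by the worker once the XML is generated and uploaded.
  def finish!(url)
    @state    = 'done'
    @file_url = url
  end

  def fail!
    @state = 'failed'
  end

  # The status page shows the 'download' button only when this is true.
  def downloadable?
    @state == 'done' && !@file_url.nil?
  end
end

status = ReportStatus.new
status.start!
status.finish!('https://s3.example.com/reports/indeed_apply_yes.xml')
status.downloadable? # => true
```

The controller's apply action would then enqueue the worker and redirect to the status page, which polls (or refreshes) until downloadable? flips to true.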
I am using the apartment gem in one of my projects, and I have a requirement to log one particular type of activity in each of the tenants. For this I created an excluded model, and in the action where the activity happens I added a function to log it. Because of the data I am trying to log, a lot of queries run when I call this method, so I decided to move it to a background worker (Sidekiq). But when the worker runs, it fails with an error like:
undefined method `name' for nil:NilClass
The code which raises this error is post.author.name.
This code works properly if we call it directly, but breaks when we run it through Sidekiq. Has this happened to anyone else? Any known solutions?
The worker code is:
def perform(post_id, subdomain)
  LogTransaction.create_post(post_id, subdomain)
end
And LogTransaction.create_post:
def self.create_post(post_id, subdomain)
  post = Post.find(post_id)
  Apartment::Tenant.switch('public')
  create(post_name: post.name, subdomain: subdomain, author_name: post.author.name)
end
Use the apartment-sidekiq gem in your application. It stores the schema from which the job was enqueued, and the job will then run in that same schema.
https://github.com/influitive/apartment-sidekiq
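It is worth spelling out why the error happens at all: after Apartment::Tenant.switch('public'), the lazy-loaded post.author query runs against the public schema, where the author row does not exist, so author comes back nil. A hedged sketch of the fix is simply to read everything you need while still in the tenant schema, before switching. The toy below simulates the two schemas with plain hashes, purely to show that the ordering is the problem:

```ruby
# Toy simulation of the schema-switch bug: two "schemas" as hashes and a
# current-schema pointer. Reading the author *after* switching to 'public'
# returns nil, exactly like post.author.name blowing up in the worker.
class FakeTenant
  def initialize
    @schemas = {
      'tenant1' => { author_name: 'Alice' },
      'public'  => {} # the author row does not exist in this schema
    }
    @current = 'tenant1'
  end

  def switch(name)
    @current = name
  end

  def author_name
    @schemas[@current][:author_name]
  end
end

db = FakeTenant.new

# Buggy order: switch first, then read lazily -> nil
db.switch('public')
db.author_name # => nil

# Fixed order: capture the value while still in the tenant schema
db.switch('tenant1')
name = db.author_name # => "Alice"
db.switch('public')
# ...now create the log record in 'public' using the captured name
```

In the real worker, that means reading post.name and post.author.name into locals before calling Apartment::Tenant.switch('public'), or letting apartment-sidekiq restore the right schema for you.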
I have a Resque job which pulls a CSV list of data off a remote server and then runs through the 40k+ entries, adding any new items to an existing database table. The job runs fine, but it severely slows down the response time of any subsequent requests to the server. In the console in which I've launched bundle exec rails server, I see no print statements while the job is running. However, once I hit my Rails server (via a page refresh), I see multiple SELECT/INSERT statements roll by before the server responds. The SELECT/INSERT statements are clearly generated by my Resque job, but oddly they wait to print to the console until I hit the server through the browser.
It sure feels like I'm doing something wrong, or not following the 'Rails way'. Advice?
Here is the code in my Resque job which does the SELECTs/INSERTs:
# data is an array of hashes formed from parsing csv input. Max size is 1000
ActiveRecord::Base.transaction do
  data.each do |h|
    MyModel.find_or_create_by_X_and_Y(h[:x], h[:y], h)
  end
end
Software Stack
Rails 3.2.0
PostgreSQL 9.1
Resque 1.20.0
EDIT
I've finally taken the time to debug this a bit more. Even a very simple worker, like the one below, slows down the next server response. In the console where I launched the rails server process, I can see that the delay occurs because stdout from the worker is only printed after I ping the server.
def perform
  s = Time.now
  0.upto(90_000) do |i|
    Rails.logger.debug i * i
  end
  e = Time.now
  Rails.logger.info "Start: #{s} ---- End #{e}"
  Rails.logger.info "Total Time: #{e - s}"
end
I can get the rails server back to its normal responsiveness if I suppress stdout when launching it, but it doesn't seem like that should be necessary: bundle exec rails server > /dev/null
Any input on a better way to solve this issue?
I think this answer to "Logging issues with Resque" will help.
The Rails server, in development mode, has the log file open. My understanding (I still need to confirm this) is that it flushes the log before writing anything new to it, in order to preserve ordering. If the Rails server is attached to a terminal, it wants to output all of the pending changes first. This can lead to large delays if your workers have written large quantities to the log.
Note: this has been happening to me for some time, but I only put my finger on it recently.
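A hedged workaround, assuming that diagnosis is right: give the Resque workers their own log device so their buffered output never competes with the dev server's terminal. Something like the initializer below (the file path and the ENV['QUEUE'] heuristic for detecting a worker process are illustrative, not an official Resque API):

```ruby
# config/initializers/resque_logging.rb (illustrative path)
# Send Resque worker logging to a separate file so its buffered output
# doesn't stall the development server's terminal.
if ENV['QUEUE'] # typically set when launching `rake resque:work QUEUE=*`
  logfile = File.open(File.join(Rails.root, 'log', 'resque_worker.log'), 'a')
  logfile.sync = true # flush each write immediately instead of buffering
  Rails.logger = Logger.new(logfile)
end
```

You could then tail log/resque_worker.log in a second terminal to watch the job's SELECT/INSERT statements in real time.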
I'm having big issues trying to get delayed_job working with Amazon S3 and Paperclip. There are a few posts around about how to do it, but for whatever reason it's simply not working for me. I've removed a couple of things compared to how others are doing it: originally I had a save(validations => false) in regenerate_styles!, but that seemed to cause an infinite loop (due to the after_save callback), and it didn't seem to be necessary (since the URLs have been saved; it's just the images that aren't uploaded). Here's the relevant code from my model file, submission.rb:
class Submission < ActiveRecord::Base
  has_attached_file :photo ...
  ...

  before_photo_post_process do |submission|
    if photo_changed?
      false
    end
  end

  after_save do |submission|
    if submission.photo_changed?
      Delayed::Job.enqueue ImageJob.new(submission.id)
    end
  end

  def regenerate_styles!
    puts "Processing photo"
    self.photo.reprocess!
  end

  def photo_changed?
    self.photo_file_size_changed? ||
      self.photo_file_name_changed? ||
      self.photo_content_type_changed? ||
      self.photo_updated_at_changed?
  end
end
And here is my little ImageJob class that sits at the bottom of the submission.rb file:
class ImageJob < Struct.new(:submission_id)
  def perform
    Submission.find(self.submission_id).regenerate_styles!
  end
end
As far as I can tell, the job itself gets created correctly (as I'm able to pull it out of the database via a query).
The problem arises when:
$ rake jobs:work
WARNING: Nokogiri was built against LibXML version 2.7.8, but has dynamically loaded 2.7.3
[Worker(host:Jarrod-Robins-MacBook.local pid:21738)] New Relic Ruby Agent Monitoring DJ worker host:MacBook.local pid:21738
[Worker(host:MacBook.local pid:21738)] Starting job worker
Processing photo
[Worker(host:MacBook.local pid:21738)] ImageJob completed after 9.5223
[Worker(host:MacBook.local pid:21738)] 1 jobs processed at 0.1045 j/s, 0 failed ...
The rake task then gets stuck and never exits, and the images themselves don't appear to have been reprocessed.
Any ideas?
EDIT: just another point; the same thing happens on Heroku, not just locally.
Delayed job captures a stack trace for every failed job. It's saved in the last_error column of the delayed_jobs table. Use a database GUI to see what's going on.
If you are using the Collective Idea fork with ActiveRecord as the backend, you can query the model as usual. For example, to fetch an array of all stack traces, do
Delayed::Job.where('failed_at IS NOT NULL').map(&:last_error)
By default, failed jobs are deleted after 25 failed attempts, so it may be that there are no jobs left. Prevent deletion for debugging purposes by setting
Delayed::Worker.destroy_failed_jobs = false
in your config/initializers/delayed_job_config.rb.
I've been trying to solve a problem for a few weeks now. I am running rspec tests for my Rails app, and they are working fine except for one error that I can't seem to get my head around.
I am using MySQL with the InnoDB engine.
I have set config.use_transactional_fixtures = true in spec_helper.rb
I load my test fixtures manually with the command rake spec:db:fixtures:load.
The rspec test is being written for a BackgrounDRb worker, and it is testing that a record can have its state updated (through the state_machine gem).
Here is my problem:
I have a model called Listings. The rspec test calls the update_sold_items method within a file called listing_worker.rb.
This method calls listing.sell for a particular record, which sets the listing record's 'state' column to 'sold'.
So far, this is all working fine, but when the update_sold_items method finishes, my rspec test fails here:
listing = Listing.find_by_listing_id(listing_id)
listing.state.should == "sold"

expected: "sold",
     got: "current" (using ==)
I've been trying to track down why the state change is not persisting, but am pretty much lost. Here is the result of some debugging code that I placed in the update_sold_items method during the test:
pp listing.state # => "current"
listing.sell!
listing.save!
pp listing.state # => "sold"
listing.reload
pp listing.state # => "current"
I cannot understand why it saves perfectly fine, but then reverts back to the original record whenever I call reload, or Listing.find etc.
Thanks for reading this, and please ask any questions if I haven't given enough information.
Thanks for your help,
Nathan B
P.S. I don't have a problem creating new records for other classes, and testing those records. It only seems to be a problem when I am updating records that already exist in the database.
I suspect, like Nathan, transaction issues. Try putting a Listing.connection.execute("COMMIT") right before your first save call to break the transaction and see what changes. That will break you out of the transaction, so any additional rollback calls will have no effect.
Additionally, by running a "COMMIT" command, you could pause the test with a debugger and inspect the database from another client to see what's going on.
The other hypothesis, if the transaction experimentation doesn't yield any results, is that perhaps your model really isn't saving to the database. Check your query logs (specifically, find the UPDATE query).
These kind of issues really stink! Good luck!
If you want to investigate what you have in DB while running tests you might find this helpful...
I have an rspec test in which I save @user (@user.save) and it works like a charm, but then I wanted to see if it's really saved in the DB.
I opened rails console for test environment
rails c test
ran
User.all
and as expected got nothing
I ran my spec that contains:
user_attr_hash = FactoryGirl.attributes_for(:user)
@user = User.new(user_attr_hash)
@user.save
binding.pry
I thought that stopping the test right after save would mean it's persisted, but that's not the case. It seems the COMMIT on the connection is fired later (I have no idea when).
So, as @Tim Harper suggests, you have to fire that commit yourself in the pry console:
pry(#<RSpec::Core::ExampleGroup::Nested_1>)> User.connection.execute("COMMIT")
Now, if you run User.all in your rails console you should see it ;)