Rails 4 ActiveModel won't update_columns when tested with RSpec

In a normal test with a human and a browser, everything works as expected. However, when I run it under RSpec, I can see that I have:
D, [2014-08-16T13:48:09.510013 #19418] DEBUG -- : SQL (0.6ms) UPDATE "system_flights_cacheds" SET "client_stuff" = '{"captcha":"656556"}' WHERE "system_flights_cacheds"."guid" = '5647046e-4194-498e-a0d7-512614b147d8'
Yet my database record is not actually updated. Previously I used .save, with no success; in fact it creates a SAVEPOINT.
The code in trouble is basically an API endpoint:
cache = System::Flights::Cached.search_cache options
# update the database when the captcha is present. This way, the worker,
# when polling the database, can see the changes and act accordingly!
if cache && params[:captcha]
  # remember, anyone can (basically) see the captcha. Thus,
  # this is a bit paranoid: only allow a captcha update
  # if the user is the same! In the JSON, if not forgotten,
  # the captcha is only displayed when the user_id is equal.
  server_stuff = cache.server_stuff.with_indifferent_access
  if server_stuff[:user_id] == current_user.id
    cache.time_renewed = 10
    cache.client_stuff_will_change!
    cache.client_stuff ||= {}
    cache.client_stuff[:captcha] = params[:captcha]
    # cache.save!
    cache.update_columns(client_stuff: cache.client_stuff)
  end
else
  # only spawn a worker if no captcha parameter is passed
  spawn_search_worker({user_id: current_user.id, options: options})
end
The client can hit this endpoint at any time and it will spawn a worker. When a record already exists in the database but is_processed is false, the worker will quit. Thus, calling this multiple times is OK and also serves as a way to check whether the work is done.
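A sketch of that is_processed guard as described (names are hypothetical; the post doesn't show the actual worker code):

def perform(params)
  cache = System::Flights::Cached.search_cache(params[:options])
  # an existing record with is_processed still false means another
  # worker is already on it, so quit; repeated calls stay harmless
  return if cache && !cache.is_processed
  # ... run the search and persist the result ...
end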
The worker waits for the client to enter a captcha. So we have a class like WaitableLogin, which basically does:
max_repeat = 3 # 14
# annul flag; if set to true, the data will not get persisted
annul = false
while max_repeat > 0
  # interval of 5 secs at which the worker checks the database
  sleep 5
  max_repeat -= 1
  # break if the captcha was already entered by the client:
  # check the database to see if the client has posted the captcha text
  cache = System::Flights::Cached.search_cache options
  client_stuff = nil
  client_stuff = cache.client_stuff.with_indifferent_access if cache && cache.client_stuff
  if client_stuff && client_stuff[:captcha]
    captcha_text = client_stuff[:captcha]
    airline.fill_captcha(captcha_text).finalize_login
    puts "SOMEHOW I AM HERE: #{captcha_text}"
    # remove all the server's stuff
    cache.server_stuff_will_change!
    cache.server_stuff.clear
    cache.save!
    annul = airline.in_login_page?
  end
end
So WaitableLogin checks whether client_stuff has been updated. If it has, we know the client submitted the captcha through the endpoint (which, as shown above, updates the database whenever a captcha param is present).
Control is then transferred back to the worker. You will see cache used in many places across files; it is just a variable name and has nothing to do with caching in the Rails sense.
When I run this normally from the browser, I don't see any problem; in fact there is no SAVEPOINT even when I use .save. I thought that SAVEPOINT was creating a bug somewhere, so I decided to try .update_columns instead. But, again, no success.
This is what the test looks like:
before(:each) do
  System::Flights::Cached.delete_all
end

describe "requests" do
  it "should process 2a1c1i" do
    cached = nil
    post("/api/v1/x.json", {
      access_token: CommonFlightData::ACCESS_TOKEN,
      business_token: CommonFlightData::BUSINESS_TOKEN,
      captcha: ""
    }.merge!(CommonFlightData.oneway_1a(from: "8-9-2014")))

    puts "enter the captcha: "
    captcha = STDIN.gets.chomp
    puts "Entered: #{captcha}"

    post("/api/v1/x.json", {
      access_token: CommonFlightData::ACCESS_TOKEN,
      business_token: CommonFlightData::BUSINESS_TOKEN,
      captcha: captcha
    }.merge!(CommonFlightData.oneway_1a(from: "8-9-2014")))

    sleep 10
  end
end
So what am I missing? I'm tired. No error is raised. When I check .inspect after update_columns, everything seems updated. But when you look at the database, nothing is updated.
EDIT: I added lock_version so that I have optimistic locking (enabled by default, I think). And it turned out, as expected, to be set to 2.
EDIT 2: If I issue an update from a Rails console while the code is asking for the captcha, IT UPDATES the data. So why won't the RSpec run of the API endpoint that submits the captcha update the row? All the real-life, non-spec code executes fine.
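Given the SAVEPOINTs in the log, one explanation worth ruling out (my assumption, not something the post confirms): rspec-rails' transactional fixtures wrap every example in a database transaction, so a .save inside that already-open transaction surfaces as a SAVEPOINT, and every write, update_columns included, is rolled back when the example finishes and is never visible to other connections such as the worker's. A minimal check is to disable the wrapping for this suite:

# spec/spec_helper.rb (a sketch; with this off, writes really commit,
# so clean up manually, e.g. the delete_all in the before(:each) above)
RSpec.configure do |config|
  config.use_transactional_fixtures = false
end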

Related

Can Sidekiq run a loop with wait and see a change to the db?

I have a Sidekiq worker that waits for a change to happen to a record made by a remote client. Something like the following:
# myworker: async process to wait for client to confirm status
def perform(myRecordID)
  sendClient(myRecordID)
  didClientAcknowledge = false
  while !didClientAcknowledge
    didClientAcknowledge = myRecords.find(myRecordID).status == :ACK_OK
    break if didClientAcknowledge
    # wait for client to perform an update on the record to confirm status
    sleep 5.seconds
  end
  Rails.logger.info("client got the message")
end
My problem is that although I can see that the client has in fact performed the acknowledgement and updated the record with the correct status (ACK_OK), my Sidekiq thread continues to see the old status for myRecord.
I'm sure my logic is flawed here, but it seems like the Sidekiq process does not "see" changes to the DB... yet if I use my Rails console I can see that the client has in fact updated the DB as expected...
Thanks!
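One thing worth ruling out (an assumption on my part, nothing in the post confirms it): a query cache, or a transaction someone opened higher up the stack, can serve repeated identical reads from a stale snapshot. Forcing an uncached read inside the loop is a cheap way to test that theory:

# inside the while loop: bypass ActiveRecord's query cache for this read
didClientAcknowledge = ActiveRecord::Base.uncached do
  myRecords.find(myRecordID).status == :ACK_OK
end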
Edit 1
OK, so here's a thought: instead of the loop, I'll schedule another call to the worker in 5 seconds... so here's the updated code:
def perform(myRecordID, retry_count)
  retry_count -= 1
  if retry_count < 1
    return
  end
  sendClient(myRecordID)
  didClientAcknowledge = myRecords.find(myRecordID).status == :ACK_OK
  if didClientAcknowledge
    Rails.logger.info("client got the message")
    return
  end
  # wait for client to perform an update on the record to confirm status,
  # then re-enqueue this worker with the decremented retry count
  myWorker.perform_in(5.seconds, myRecordID, retry_count)
end
This seems to work, but I'll test a bit more... one challenge is having a retry count, which means I need to maintain some sort of state between calls to the worker...
edit2: possibly this can be done by passing the time in to the first call and then checking whether a timeout has been exceeded before invoking the next instance... assuming time does not stand still inside the async call as well...
edit3: Adding the retry_count argument allows us to control how many times this worker will be spawned... (a sketch of the timestamp variant follows below)
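A sketch of edit2's timestamp idea (names hypothetical): the first call fixes a deadline, every re-enqueue passes it along, and the worker gives up once the deadline has passed, so no counter has to be threaded through:

def perform(myRecordID, deadline = nil)
  # first call sets the deadline; epoch integers serialize cleanly as job args
  deadline ||= 60.seconds.from_now.to_i
  return if Time.now.to_i > deadline  # timed out, give up
  if myRecords.find(myRecordID).status == :ACK_OK
    Rails.logger.info("client got the message")
  else
    # not acknowledged yet; check again in 5 seconds
    myWorker.perform_in(5.seconds, myRecordID, deadline)
  end
end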

Stripe API auto_paging get all Stripe::BalanceTransaction except some charge

I'm trying to get all Stripe::BalanceTransaction except those that are already in my JsonStripeEvent.
What I did:
def perform(*args)
  last_recorded_txt = REDIS.get('last_recorded_stripe_txn_last')
  txns = Stripe::BalanceTransaction.all(limit: 100, expand: ['data.source', 'data.source.application_fee'], ending_before: last_recorded_txt)
  REDIS.set('last_recorded_stripe_txn_last', txns.data[0].id) unless txns.data.empty?
  txns.auto_paging_each do |txn|
    if txn.type.eql?('charge') || txn.type.eql?('payment')
      begin
        JsonStripeEvent.create(data: txn.to_json)
      rescue StandardError => e
        Rails.logger.error "Error while saving data from stripe #{e}"
        REDIS.set('last_recorded_stripe_txn_last', txn.id)
        break
      end
    end
  end
end
But it doesn't get the new ones from the API.
Can anyone help me with this? :)
Thanks
I think it's because the way auto_paging_each works is almost the opposite of what you expect :)
As you can see from its source, auto_paging_each calls Stripe::ListObject#next_page, which is implemented as follows:
def next_page(params={}, opts={})
  return self.class.empty_list(opts) if !has_more
  last_id = data.last.id

  params = filters.merge({
    :starting_after => last_id,
  }).merge(params)

  list(params, opts)
end
It simply takes the last (already fetched) item and adds its id as the starting_after filter.
So what happens:
You fetch the 100 "latest" (let's say) records, ordered by descending date (the default order for the BalanceTransaction API according to the Stripe docs).
When you then call auto_paging_each on this dataset, it takes the last record, adds its id as the starting_after filter, and repeats the query.
The repeated query returns nothing, because there is nothing newer (starting later) than the set you initially fetched.
Since no newer items are available, the iteration stops after the first step.
What you could do here:
First of all, ensure that my hypothesis is correct :) by putting breakpoints inside Stripe::ListObject and checking. Then either 1) rewrite your code to use starting_after traversal logic instead of ending_before, which should then work fine with auto_paging_each, or 2) rewrite your code to control the fetching order manually.
Personally, I'd vote for (2): slightly more verbose, probably, but a straightforward and "visible" control flow is better than poorly documented magic.
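A sketch of option (2), paging toward newer items by hand (it assumes the REDIS checkpoint key from the question is already seeded; Stripe returns pages newest-first):

def import_new_transactions
  checkpoint = REDIS.get('last_recorded_stripe_txn_last')
  loop do
    # ending_before returns the page of items immediately newer than the checkpoint
    txns = Stripe::BalanceTransaction.all(limit: 100, ending_before: checkpoint)
    break if txns.data.empty?
    # the page arrives newest-first, so process it in reverse: that way the
    # checkpoint only ever moves forward, even if a later page fails
    txns.data.reverse_each do |txn|
      JsonStripeEvent.create(data: txn.to_json) if %w[charge payment].include?(txn.type)
      checkpoint = txn.id
    end
    REDIS.set('last_recorded_stripe_txn_last', checkpoint)
  end
end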

Nokogiri Timeout::Error when scraping own site

Nokogiri works fine for me in the console, but if I put it anywhere... Model, View, or Controller, it times out.
I'd like to use it 1 of 2 ways...
Controller
def show
  @design = Design.find(params[:id])
  doc = Nokogiri::HTML(open(design_url(@design)))  # open() needs require 'open-uri'
  images = doc.css('.well img') ? doc.css('.well img').map{ |i| i['src'] } : []
end
or...
Model
def first_image
  doc = Nokogiri::HTML(open("http://localhost:3000/blog/#{self.id}"))
  image = doc.css('.well img')[0] ? doc.css('.well img')[0]['src'] : nil
  self.update_attribute(:photo_url, image)
end
Both result in a timeout, though they work perfectly in the console.
When you run your Nokogiri code from the console, you're referencing your development server at localhost:3000. Thus, there are two instances running: one making the call (your console) and one answering the call (your server).
When you run it from within your app, you are referencing the app itself, which is causing an infinite loop since there is no available resource to respond to your call (that resource is the one making the call!). So you would need to be running multiple instances with something like Unicorn (or simply another localhost instance at a different port), and you would need at least one of those instances to be free to answer the Nokogiri request.
If you plan to run this in production, just know that this setup will require an available resource to answer the Nokogiri request, so you're essentially tying up 2 instances with each call. So if you have 4 instances and all 4 happen to make the call at the same time, your whole application is screwed. You'll probably experience pretty severe degradation with only 1 or 2 calls at a time as well...
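For example, with a second instance listening on another port (started with something like rails server -p 3001; the port is arbitrary), the model method could target it so the instance handling the current request stays free:

require 'open-uri'

def first_image
  # ask the second app instance, not the one serving this request
  doc = Nokogiri::HTML(open("http://localhost:3001/blog/#{self.id}"))
  image = doc.css('.well img')[0] ? doc.css('.well img')[0]['src'] : nil
  self.update_attribute(:photo_url, image)
end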
I'm not sure what the default timeout value is, but you can specify one like below.
require 'net/http'
http = Net::HTTP.new('localhost')
http.open_timeout = 100
http.read_timeout = 100
Nokogiri.parse(http.get("/blog/#{self.id}").body)
Finally, you can find out what the problem is, since you now control the timeout value.
So, with Tyler's advice I dug into what I was doing a bit more. Because of the disconnect that ckeditor has with the images, due to carrierwave and S3, I can't get any info directly from the uploader (at least it seems that way to me).
Instead, I'm sticking with Nokogiri, and it's working wonderfully. I realized what I was actually doing with the open() command, and it was completely unnecessary. Nokogiri parses HTML. I can give it HTML in the form of @design.content! Duh, on my part.
So, this is how I'm scraping my own site, to get the images associated with a blog entry:
designs_controller.rb
def create
  params[:design][:photo_url] = Nokogiri::HTML(params[:design][:content]).css('img').map{ |i| i['src'] }[0]
  @design = Design.new(params[:design])
  if @design.save
    flash[:success] = "Design created"
    redirect_to designs_url
  else
    render 'designs/new'
  end
end

def show
  @design = Design.find(params[:id])
  @categories = @design.categories
  @tags = @categories.map { |c| c.name }
  @related = Design.joins(:categories).where('categories.name' => @tags).reject { |d| d.id == @design.id }.uniq
  set_meta_tags og: {
    title: @design.name,
    type: 'article',
    url: design_url(@design),
    image: Nokogiri::HTML(@design.content).css('img').map{ |i| i['src'] },
    article: {
      published_time: @design.published_at.to_datetime,
      modified_time: @design.updated_at.to_datetime,
      author: 'Alphabetic Design',
      section: 'Designs',
      tag: @tags
    }
  }
end
The Update action has the same code for Nokogiri as the Create action.
Seems kind of obvious now that I'm looking at it, lol. I dwelled on this for longer than I'd like to admit...

MongoDB: need to display status of db (running or not)

I am currently using MongoDB for tracking of various things in a Rails 2 app. I am using the following code to see if MongoDB is up and running and, depending upon the status, displaying a link or an "Offline" message.
This is only for admins, so it's not mission-critical, as the app will continue to run without MongoDB, but I do want to keep disabling the link in the menu when it's not running. However, I don't like the overhead of the code below (it doesn't take long to run, but I hope there is a cleaner, faster way):
def verify_mongodb_status
  begin
    track = Track.first
    @mongodb_running = true
  rescue
    @mongodb_running = false
    logger.debug("***MongoDB not running.***")
    notify_admin_about_errors("***MongoDB is not running***")
  end
end
EDIT: I forgot to mention that I'm already doing a before_filter for this; the method sits in application_controller.rb.
I decided to go with action caching, as there doesn't seem to be a great way to do this. The result was quite a large speed increase, from ~120ms to ~16-25ms:
def verify_mongodb_status
  begin
    track = Track.first
    @mongodb_running = true
  rescue => e
    @mongodb_running = false
    logger.debug("***MONGODB OFFLINE***: #{e}")
    notify_admin_about_errors("MongoDB", "MongoDB error:\n#{e}", nil)
    expire_action :action => :verify_mongodb_status
    return
  end
end
I'm adding logic now to keep from getting bombarded by emails when MongoDB goes offline (one is enough).
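For what it's worth, a lighter-weight probe than loading a record is the server's ping command. A sketch assuming the modern mongo driver (this Rails 2 app may well be on an older driver with a different API):

require 'mongo'

# round-trips to the server without touching any collection
def mongodb_up?(client)
  client.database.command(ping: 1)
  true
rescue Mongo::Error
  false
end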

find_or_create and race-condition in rails, theory and production

Hi, I have this piece of code:
class Place < ActiveRecord::Base
  def self.find_or_create_by_latlon(lat, lon)
    place_id = call_external_webapi
    result = Place.where(:place_id => place_id).limit(1)
    result = Place.create(:place_id => place_id, ... ) if result.empty? #!
    result
  end
end
Then I'd like to do this in another model or controller:
p = Post.new
p.place = Place.find_or_create_by_latlon(XXXXX, YYYYY) # race-condition
p.save
But Place.find_or_create_by_latlon takes too long to get the data when the action executed is create, and sometimes in production p.place is nil.
How can I force it to wait for the response before executing p.save?
Thanks for your advice.
You're right that this is a race condition, and it can often be triggered by people who double-click submit buttons on forms. What you might do is loop back if you encounter an error.
result = Place.find_by_place_id(...) ||
Place.create(...) ||
Place.find_by_place_id(...)
There are more elegant ways of doing this, but the basic method is here.
I had to deal with a similar problem. In our backend, a user is created from a token if the user doesn't exist. AFTER a user record is already created, a slow API call gets sent to update the user's information.
def self.find_or_create_by_facebook_id(facebook_id)
  User.find_by_facebook_id(facebook_id) || User.create(facebook_id: facebook_id)
rescue ActiveRecord::RecordNotUnique => e
  User.find_by_facebook_id(facebook_id)
end

def self.find_by_token(token)
  facebook_id = get_facebook_id_from_token(token)
  user = User.find_or_create_by_facebook_id(facebook_id)
  if user.unregistered?
    user.update_profile_from_facebook
    user.mark_as_registered
    user.save
  end
  return user
end
The first step of the strategy is to remove the slow API call (in my case update_profile_from_facebook) from the create method. Because the operation takes so long, including it in the call to create significantly increases the chance of duplicate insert operations.
The second step is to add a unique constraint to your database column to ensure duplicates aren't created (a migration sketch follows below).
The final step is to create a function that will catch the RecordNotUnique exception in the rare case where duplicate insert operations are sent to the database.
This may not be the most elegant solution but it worked for us.
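For the unique-constraint step, a minimal migration sketch (assuming a facebook_id column on users, as in the snippet above):

class AddUniqueIndexToUsersFacebookId < ActiveRecord::Migration
  def change
    # with this index, the second of two racing inserts fails with
    # ActiveRecord::RecordNotUnique, which the code above rescues
    add_index :users, :facebook_id, unique: true
  end
end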
I hit this inside a Sidekiq job that retries, gets the error repeatedly, and eventually clears itself. The best explanation I've found is in a blog post here. The gist is that Postgres keeps an internally stored value for incrementing the primary key that somehow gets out of sync. This rings true for me because I'm setting the primary key rather than just using an incremented value, so that's likely how this cropped up. The solution from the comments in the link above is to call ActiveRecord::Base.connection.reset_pk_sequence!(table_name). This cleared up the issue for me.
begin
  result = Place.where(:place_id => place_id).limit(1)
  result = Place.create(:place_id => place_id, ... ) if result.empty? #!
rescue ActiveRecord::StatementInvalid => error
  @save_retry_count = (@save_retry_count || 1)
  ActiveRecord::Base.connection.reset_pk_sequence!(:place)
  retry if (@save_retry_count -= 1) >= 0
  raise error
end
