How do I apply a lock on a particular field so that the same number is not generated again?
I have created an algorithm that builds a string from the year plus a zero-padded integer,
for example: "20150001", "20150002", "20150003", etc.
The problem is that when multiple users request a number at the same time, the same number gets generated.
This is the function I call:
def get_algo_number(model_name, prefix)
  year = get_year
  if model_name.count > 0
    last_number = model_name.last.number
    if last_number[2..5].to_i > year.to_i
      return create_number(year, prefix)
    else
      # if the latest generated number already exists, generate the next one
      return last_number.next
    end
  else
    return create_number(year, prefix)
  end
end
Please help if you have any solution for applying a lock.
Thanks
Yes, I resolved this problem by using multi-threading.
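For anyone who wants an actual lock instead, one common approach is to keep a per-year counter row and serialize access to it with a row-level lock. Below is a minimal sketch using ActiveRecord pessimistic locking; NumberSequence is a hypothetical model (columns year and last_value, unique index on year) and is not part of the original code.
def next_number(prefix)
  year = Time.current.year.to_s
  # the unique index on :year guards the first creation of the counter row
  seq  = NumberSequence.find_or_create_by!(year: year)

  seq.with_lock do
    # with_lock reloads the row with SELECT ... FOR UPDATE inside a transaction,
    # so concurrent requests queue here and each one sees the latest value
    seq.increment!(:last_value)
    format('%s%s%04d', prefix, year, seq.last_value)
  end
end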
I'm trying to get all Stripe::BalanceTransaction records except those that are already in my JsonStripeEvent table.
What I did =>
def perform(*args)
  last_recorded_txt = REDIS.get('last_recorded_stripe_txn_last')
  txns = Stripe::BalanceTransaction.all(limit: 100, expand: ['data.source', 'data.source.application_fee'], ending_before: last_recorded_txt)
  REDIS.set('last_recorded_stripe_txn_last', txns.data[0].id) unless txns.data.empty?
  txns.auto_paging_each do |txn|
    if txn.type.eql?('charge') || txn.type.eql?('payment')
      begin
        JsonStripeEvent.create(data: txn.to_json)
      rescue StandardError => e
        Rails.logger.error "Error while saving data from stripe #{e}"
        REDIS.set('last_recorded_stripe_txn_last', txn.id)
        break
      end
    end
  end
end
But it doesn't get the new ones from the API.
Can anyone help me with this? :)
Thanks
I think it's because the way auto_paging_each works is almost opposite to what you expect :)
As you can see from its source, auto_paging_each calls Stripe::ListObject#next_page, which is implemented as follows:
def next_page(params={}, opts={})
  return self.class.empty_list(opts) if !has_more
  last_id = data.last.id
  params = filters.merge({
    :starting_after => last_id,
  }).merge(params)
  list(params, opts)
end
It simply takes the last (already fetched) item and adds its id as the starting_after filter.
So what happens:
You fetch the 100 "latest" (let's say) records, ordered by descending date (the default order for the BalanceTransaction API according to the Stripe docs).
When you then call auto_paging_each on this dataset, it takes the last record, adds its id as the starting_after filter, and repeats the query.
The repeated query returns nothing because there is nothing newer (starting later) than the set you initially fetched.
Since no newer items are available, the iteration stops after the first step.
What you could do here:
First of all, ensure that my hypothesis is correct :) - put the breakpoint(s) inside Stripe::ListObject and check. Then 1) rewrite your code to use starting_after traversing logic instead of ending_before - it should work fine with auto_paging_each then - or 2) rewrite your code to control the fetching order manually.
Personally, I'd vote for (2): it's probably slightly more verbose, but straightforward and "visible" control flow is better than poorly documented magic.
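For illustration, here is a rough sketch of option (2), reusing the API calls and the Redis key from the question; treat it as an outline rather than a drop-in replacement.
def perform(*args)
  cursor = REDIS.get('last_recorded_stripe_txn_last')

  loop do
    params = { limit: 100, expand: ['data.source', 'data.source.application_fee'] }
    params[:ending_before] = cursor if cursor
    # ending_before returns transactions newer than the cursor
    page = Stripe::BalanceTransaction.all(params)
    break if page.data.empty?

    # each page comes back newest-first, so walk it in reverse to process oldest-first
    page.data.reverse_each do |txn|
      next unless txn.type == 'charge' || txn.type == 'payment'
      JsonStripeEvent.create(data: txn.to_json)
    end

    # advance the cursor to the newest transaction recorded so far
    cursor = page.data.first.id
    REDIS.set('last_recorded_stripe_txn_last', cursor)
  end
end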
Currently, I want to import over 55,000 records into my database from a CSV file. This is the code that I am using:
CSV.foreach(Rails.root.join('db/seeds/locations.csv'), headers: true) do |row|
  val = Location.find_or_initialize_by(code: row[0])
  val.name = row[1]
  val.ecc = row[2] || 'MISSING'
  val.created_by = User.find_by(name: 'anh')
  val.updated_by = User.find_by(name: 'anh')
  val.save!
end
However, it is too slow, and I have just installed the gem 'postgres-copy'. I read the official documentation, and I believe I can use the class method copy_from to do the job, but if you read my current code, you can see that I am referencing data in another table (an association), and the documentation doesn't mention anything about associations or validations. Therefore, I am wondering if there is any way to solve this. This is the first time I have used this gem. Thanks for reading.
I don't know that gem, but I would be very surprised if it supported multi-table copy, since PostgreSQL's COPY works on a single table. 50K rows isn't all that many. You might try wrapping your insertions in transactions to avoid one commit per insert. You probably don't want to wrap all 50K in a single transaction, but something like this:
User.connection.begin_transaction
i = 0
CSV.foreach(...) do |row|
  ... # your original code here
  i += 1
  if i % 500 == 0
    User.connection.commit_transaction
    User.connection.begin_transaction
  end
end
User.connection.commit_transaction
This will insert your rows 500 records at a time and you should see a noticeable speed up. Play around with the value of 500 to find the sweet spot.
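If you prefer the block form of transactions, roughly the same batching can be written with each_slice; this is just a sketch using the same 500-row batch size.
CSV.foreach(Rails.root.join('db/seeds/locations.csv'), headers: true).each_slice(500) do |rows|
  Location.transaction do
    rows.each do |row|
      # ... the same per-row code from the question ...
    end
  end
end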
So, now I understand that I cannot take advantage of the COPY command in PostgreSQL, since it can't copy to multiple tables. Therefore, I switched to the gem activerecord-import. Compared with the method that Philip Hallstrom mentioned above, using activerecord-import gives a faster result: 1m20s vs. 1m54s to import over 8,000 records.
This is my code after installing the gem activerecord-import. Hopefully, it can help other people.
locations = []
columns = [:code, :name, :ecc]
CSV.foreach(Rails.root.join('db/seeds/locations.csv'), headers: true) do |row|
  val = Location.find_or_initialize_by(code: row[0])
  val.name = row[1]
  val.ecc = row[2] || 'MISSING'
  val.created_by = User.find_by(name: 'anh')
  val.updated_by = User.find_by(name: 'anh')
  locations << val
end
Location.import columns, locations, validate: false
This code just displays the values inside the array model.request_reports.
To get the most recent, I have to loop through and compare the current report.updated_at with the last saved report.updated_at value. One thing to find out is what class the updated_at field is and how to compare two of them against each other. The class is ActiveSupport::TimeWithZone.
I need to keep track of the array index of the report that has the most recent updated_at as I loop so that I can access it after the loop.
The problem is, I don't know how to do this:
msg = ""
reports_arr = model.request_reports
reports_arr.each do |report|
updated_at = report.updated_at
if updated_at
msg = msg + "#{updated_at} --- "
msg = msg + "#{updated_at.class}---"
end
end
msg
To add to @meagar's comment: you should be using the DB to do sorts on tables.
That said, we need to know what DB you are using, as the exact command differs for each.
Mongo w/ Mongoid would be Model.order_by(:updated_at => 'desc').first
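For ActiveRecord on a SQL database, the equivalent would be something like:
Model.order(updated_at: :desc).first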
My loop had to go through the array and check for the greatest date value because the system I'm using automatically sorts the reports array by the field "due_at", which is not necessarily the most recently updated record. The code below works for me.
msg = ""
reports_arr = model.request_reports
last_modified_report = model.last_modified_report
recent = nil
recent_report = nil
reports_arr.each_with_index do |report,index|
updated_at = report.updated_at
if index == 0
recent = updated_at
recent_report = report
end
if updated_at > recent
recent = updated_at
recent_report = report
end
last_modified_report = recent_report
end
msg = msg + "#{recent}---"
msg = msg + "#{recent_report}---"
msg = msg + "#{last_modified_report}"
model.last_modified_report = last_modified_report
model.save(validate: false)
msg
The OP's answer is only good if you absolutely cannot query the database for the info you want directly. I assume you only want the index so you can find the most recent one?
Even if automatic sorting is on one column, your query for the data can have it sorted on a different column.
model.request_reports.order_by(:updated_at => 'desc').first
If you have a default scope that's messing with your query, you can ask for an unscoped list, although I doubt a default ordering would cause any trouble.
model.unscoped.order_by(:updated_at => 'desc').first
You can string together queries that are already written: that can be useful even if request_reports is a query or scope you have somewhere.
It will be way less expensive than getting everything, and looping through it - you are always better off finding a way to get just the info you need in a db query if you can.
I'm currently doing live testing of a game I'm making for Android. The services are written in Rails 3.1 and I'm using PostgreSQL. Some of my more technically savvy testers have been able to manipulate the game by recording their requests to the server and replaying them with high concurrency. I'll try to briefly describe the scenario below without getting caught up in the code.
A user can purchase multiple items, each item has its own record in the database.
The request goes to a controller action, which creates a purchase model to record information about the transaction.
The trade model has a method that sets up the purchase of the items. It essentially does a few logical steps to see if they can purchase the item. The most important is that they have a limit of 100 items per user at any given time. If all the conditions pass, a simple loop is used to create the number of items they requested.
So, what they are doing is recording one valid purchase request via a proxy, then replaying it with high concurrency, which essentially allows a few extra to slip through each time. So if they set it to purchase a quantity of 100, they can get it up to 300-400, and if they do a quantity of 15, they can get it up to around 120.
The purchase method described above is wrapped in a transaction. However, even though it's wrapped, that won't stop the exploit in certain circumstances where the requests execute at nearly the same time. I'm guessing this may require some DB locking. Another twist that needs to be known is that at any given time, rake tasks are run in cron jobs against the user table to update the players' health and energy attributes, so that cannot be blocked either.
Any assistance would be really awesome. This is my little hobby side project and I want to make sure the game is fair and fun for everyone.
Thanks so much!
Controller action:
def hire
  worker_asset_type_id = (params[:worker_asset_type_id])
  quantity = (params[:quantity])
  trade = Trade.new()
  trade_response = trade.buy_worker_asset(current_user, worker_asset_type_id, quantity)
  user = User.find(current_user.id, select: 'money')
  respond_to do |format|
    format.json {
      render json: {
        trade: trade,
        user: user,
        messages: {
          messages: [trade_response.to_s]
        }
      }
    }
  end
end
Trade Model Method:
def buy_worker_asset(user, worker_asset_type_id, quantity)
  ActiveRecord::Base.transaction do
    if worker_asset_type_id.nil?
      raise ArgumentError.new("You did not specify the type of worker asset.")
    end
    if quantity.nil?
      raise ArgumentError.new("You did not specify the amount of worker assets you want to buy.")
    end
    if quantity <= 0
      raise ArgumentError.new("Please enter a quantity above 0.")
    end
    quantity = quantity.to_i
    worker_asset_type = WorkerAssetType.where(id: worker_asset_type_id).first
    if worker_asset_type.nil?
      raise ArgumentError.new("There is no worker asset of that type.")
    end
    trade_cost = worker_asset_type.min_cost * quantity
    if (user.money < trade_cost)
      raise ArgumentError.new("You don't have enough money to make that purchase.")
    end
    # Get the user's first geo asset, this will eventually have to be dynamic
    potential_total = WorkerAsset.where(user_id: user.id).length + quantity
    # Catch-all for most people
    if potential_total > 100
      raise ArgumentError.new("You cannot have more than 100 dealers at the current time.")
    end
    quantity.times do
      new_worker_asset = WorkerAsset.new()
      new_worker_asset.worker_asset_type_id = worker_asset_type_id
      new_worker_asset.geo_asset_id = user.geo_assets.first.id
      new_worker_asset.user_id = user.id
      new_worker_asset.clocked_in = DateTime.now
      new_worker_asset.save!
    end
    self.buyer_id = user.id
    self.money = trade_cost
    self.worker_asset_type_id = worker_asset_type_id
    self.trade_type_id = TradeType.where(name: "market").first.id
    self.quantity = quantity
    # save trade
    self.save!
    # is this safe?
    user.money = user.money - trade_cost
    user.save!
  end
end
Sounds like you need idempotent requests so that request replay is ineffective. Where possible implement operations so that repeating them has no effect. Where not possible, give each request a unique request identifier and record whether requests have been satisfied or not. You can keep the request ID information in an UNLOGGED table in PostgreSQL or in redis/memcached since you don't need it to be persistent. This will prevent a whole class of exploits.
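As one possible shape of the request-ID idea, here is a minimal sketch using Redis; the request_id parameter and the key naming are hypothetical, and the client would have to send a unique ID with each purchase request.
def hire
  request_id = params[:request_id] # hypothetical unique id generated by the client

  # SET with NX fails if the key already exists, i.e. the request is a replay
  unless REDIS.set("purchase_request:#{request_id}", 1, nx: true, ex: 1.day.to_i)
    return render json: { messages: { messages: ['Duplicate request ignored.'] } }, status: :conflict
  end

  # ... the original hire logic goes here ...
end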
To deal with just this one problem, create an AFTER INSERT OR DELETE ... FOR EACH ROW EXECUTE PROCEDURE trigger on the user items table. Have the trigger function do this:
BEGIN
  -- Lock the user so only one tx can be inserting/deleting items for this user
  -- at the same time
  PERFORM 1 FROM "user" WHERE user_id = <the-user-id> FOR UPDATE;

  IF TG_OP = 'INSERT' THEN
    IF (SELECT count(user_item_id) FROM user_item WHERE user_item.user_id = <the-user-id>) > 100 THEN
      RAISE EXCEPTION 'Too many items already owned, adding this item would exceed the limit of 100 items';
    END IF;
  ELSIF TG_OP = 'DELETE' THEN
    -- No action required, all we needed to do is take the lock
    -- so a concurrent INSERT won't run until this tx finishes
  ELSE
    RAISE EXCEPTION 'Unhandled trigger case %', TG_OP;
  END IF;

  RETURN NULL;
END;
Alternatively, you can implement the same thing in the Rails application by taking a row-level lock on the customer ID before adding or deleting any item ownership records. I prefer to do this sort of thing in triggers, where you can't forget to apply it somewhere, but I realise you might prefer to do it at the app level. See Pessimistic locking.
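As a rough sketch of that application-level variant, based on the buy_worker_asset method from the question (only the relevant part shown):
def buy_worker_asset(user, worker_asset_type_id, quantity)
  ActiveRecord::Base.transaction do
    # Row-level lock on the user: concurrent purchases for the same user now
    # run one at a time, so the count check below can't be bypassed by replays
    user.lock!

    potential_total = WorkerAsset.where(user_id: user.id).count + quantity.to_i
    if potential_total > 100
      raise ArgumentError.new("You cannot have more than 100 dealers at the current time.")
    end

    # ... the rest of the original checks, inserts and balance update ...
  end
end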
Optimistic locking is not a great fit for this application. You can use it by incrementing the lock counter on the user before adding/removing items, but it'll cause row churn on the users table and is really unnecessary when your transactions will be so short anyway.
We can't help much unless you show us your relevant schema and queries. I suppose that you do something like:
$ start transaction;
$ select amount from itemtable where userid=? and itemid=?;
15
$ update itemtable set amount=14 where userid=? and itemid=?;
$ commit;
And you should do something like:
$ start transaction;
$ update itemtable set amount=amount-1 where userid=? and itemid=? returning amount;
14
$ commit;
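If the query goes through ActiveRecord, the same atomic decrement could look roughly like this (the Item model and column names are guesses based on the SQL above):
# let the database do the arithmetic atomically and check how many rows changed
updated = Item.where(user_id: user_id, item_id: item_id)
              .where('amount > 0')
              .update_all('amount = amount - 1')
raise ArgumentError.new('Not enough items') if updated.zero?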
Hi, I have this piece of code:
class Place < ActiveRecord::Base
  def self.find_or_create_by_latlon(lat, lon)
    place_id = call_external_webapi
    result = Place.where(:place_id => place_id).limit(1)
    result = Place.create(:place_id => place_id, ... ) if result.empty? #!
    result
  end
end
Then I'd like to do this in another model or controller:
p = Post.new
p.place = Place.find_or_create_by_latlon(XXXXX, YYYYY) # race-condition
p.save
But Place.find_or_create_by_latlon takes too much time to get the data when it has to create the record, and sometimes in production p.place is nil.
How can I force it to wait for the response before executing p.save?
Thanks for your advice.
You're right that this is a race condition and it can often be triggered by people who double click submit buttons on forms. What you might do is loop back if you encounter an error.
result = Place.find_by_place_id(...) ||
         Place.create(...) ||
         Place.find_by_place_id(...)
There are more elegant ways of doing this, but the basic method is here.
I had to deal with a similar problem. In our backend a user is created from a token if the user doesn't exist. AFTER a user record is already created, a slow API call gets sent to update the user's information.
def self.find_or_create_by_facebook_id(facebook_id)
  User.find_by_facebook_id(facebook_id) || User.create(facebook_id: facebook_id)
rescue ActiveRecord::RecordNotUnique => e
  User.find_by_facebook_id(facebook_id)
end

def self.find_by_token(token)
  facebook_id = get_facebook_id_from_token(token)
  user = User.find_or_create_by_facebook_id(facebook_id)
  if user.unregistered?
    user.update_profile_from_facebook
    user.mark_as_registered
    user.save
  end
  return user
end
The first step of the strategy is to remove the slow API call (in my case update_profile_from_facebook) from the create method. Because the operation takes so long, you significantly increase the chance of duplicate insert operations when you include it as part of the call to create.
The second step is to add a unique constraint to your database column to ensure duplicates aren't created (a migration sketch follows at the end of this answer).
The final step is to create a function that will catch the RecordNotUnique exception in the rare case where duplicate insert operations are sent to the database.
This may not be the most elegant solution but it worked for us.
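For reference, the unique constraint from the second step would typically be added in a migration along these lines (assuming a users table with a facebook_id column, as in the code above):
add_index :users, :facebook_id, unique: true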
I hit this inside a Sidekiq job that retries, gets the error repeatedly, and eventually clears itself. The best explanation I've found is in a blog post here. The gist is that Postgres keeps an internally stored value for incrementing the primary key that gets messed up somehow. This rings true for me because I'm setting the primary key rather than just using an incremented value, so that's likely how this cropped up. The solution from the comments in the link above appears to be to call ActiveRecord::Base.connection.reset_pk_sequence!(table_name). This cleared up the issue for me.
begin
  result = Place.where(:place_id => place_id).limit(1)
  result = Place.create(:place_id => place_id, ... ) if result.empty? #!
rescue ActiveRecord::StatementInvalid => error
  @save_retry_count = (@save_retry_count || 1)
  ActiveRecord::Base.connection.reset_pk_sequence!(:place)
  retry if (@save_retry_count -= 1) >= 0
  raise error
end