Rails way to match two users in real time

I'm developing a Rails application as a backend for a mobile app, exposed via a JSON API. I've modeled pretty much everything, but there's one (core) process that still needs to be designed, and I'm not finding a clear way to implement it.
Apart from other features, the app must match two users that meet certain conditions, mainly geographical. For the sake of the question, let's simplify: it needs to match users that are close to each other AND that are currently searching for a match (it's a synchronous experience), e.g.:
User A hits "Search partners" and a loading screen appears
User B hits "Search partners" and a loading screen appears
The users are, let's say, 5km apart. The experience should be:
They both see the loading screen for 5 seconds while "the system" looks for matches nearby (3km). After 5 seconds, it broadens the radius to 6km and matches the two users. Both of them should then navigate to the "Found a match" screen.
My main issue here is how to model this "looking for a match" status in Rails. I've thought of creating a table and model with a reference to the user and their position. But then I can't figure out how to deal with the "match query" without falling into a master-slave situation.
Basically, the ideal situation would be one in which both users' apps sat in a kind of idle state and the backend, in case of a match, could notify them both. But in that case, the process in the backend couldn't be request-based; it would probably have to be a worker. I'm using Postgres with PostGIS, so storing the users' positions is possible, but I'm not sure whether Redis would be a better choice, given the number of frequently changing rows.
I'm aware I'm being quite vague with my question, but it's really a matter of what approach to take, more than a code-level solution.
Thank you so much!

Sidekiq + WebSockets + psql should do just fine. To avoid a master-slave situation, matchmaking should be based on invites between the two users looking for a match. The solution is pretty simple: when a user connects via a WebSocket to your Rails app, it starts a FindMatchJob. The job checks if any other user within a 5km range has invited us and accepts the first invite. Otherwise, it invites other users within the given range and schedules itself again with a 1 second delay. I added some code as a proof of concept, but it is not bulletproof in terms of concurrency issues. I also simplified the range expansion because I am lazy :)
class FindMatchJob < ApplicationJob
  def perform(user, range: 5)
    return unless user.looking_for_a_match?

    invites = find_pending_invites(user, range)
    return accept_invite(invites.first) if invites.any?

    users_in_range = find_users_in_range(user, range)
    users_in_range.each do |other_user|
      Invite.create!(
        inviting: user,
        invited: other_user,
        distance: other_user.distance_from(user)
      )
    end

    self.class.set(wait: 1.second).perform_later(user, range: range + 1)
  end

  private

  def find_pending_invites(user, range)
    Invite.where(invited: user).where('distance <= ?', range)
  end

  def accept_invite(invite)
    notify_users(invite)
    clear_other_invites(invite)
  end

  def find_users_in_range(user, range)
    # somehow find those users
  end

  def notify_users(invite)
    # Implement logic to notify users about a match via websockets
  end

  def clear_other_invites(invite)
    Invite.where(inviting: [invite.inviting, invite.invited])
          .or(Invite.where(invited: [invite.inviting, invite.invited]))
          .delete_all
  end
end
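For the notify_users stub, one possibility is an Action Cable broadcast. The channel name and payload below are assumptions for illustration, not part of the answer above; it presumes a MatchChannel that does stream_for the current user on subscription:

def notify_users(invite)
  [invite.inviting, invite.invited].each do |user|
    partner = (user == invite.inviting ? invite.invited : invite.inviting)
    # Each client subscribed to MatchChannel for this user receives the payload
    MatchChannel.broadcast_to(user, event: 'match_found', partner_id: partner.id)
  end
end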
One more note: you may want to consider another tech stack to handle this. Rails is not the best in terms of concurrency; I would try Go, which would be much better suited here.

Disclaimer: I have no prior experience with the feature you have in mind, but I would do something like the following.
Assumptions:
All code below makes use of Postgis
The PartnersLocation model has a t.st_point :geolocation, geographic: true attribute.
Possible Solution:
Client-side connects to the Rails backend through ActionCable (websocket)
Example:
// JS
// when "Search Partner" is clicked, perform the following:

var currentRadius = 5;

// let XXX.XX and YYY.YY below be the "current" location
// Request:
//   POST /partners_locations.json
//   { partners_location: { radius: 5, latitude: XXX.XX, longitude: YYY.YY } }
// Response:
//   201 Created
//   { id: 14, user_id: 6, radius: 5, latitude: XXX.XX, longitude: YYY.YY }

// Using the `partners_location` ID above ^
App.cable.subscriptions.create(
  { channel: 'PartnersLocationsSearchChannel', id: 14 },
  {
    connected: function() {
      this.search()
      // search every 5 seconds, and incrementally increase the radius
      setInterval(this.search.bind(this), 5000)
    },

    // this function will be triggered when there is a match
    received: function(data) {
      console.log(data)
      // i.e. will output: { id: 22, user_id: 7, latitude: XXX.XX, longitude: YYY.YY }
    },

    search: function() {
      this.perform('search', { radius: currentRadius })
      currentRadius += 1
    }
  }
)
Back-end side would be something like:
app/controllers/partners_locations_controller.rb:
class PartnersLocationsController < ApplicationController
  def create
    @partners_location = PartnersLocation.new(partners_location_params)
    @partners_location.user = current_user

    if @partners_location.save
      render json: @partners_location, status: :created, location: @partners_location
    else
      render json: @partners_location.errors, status: :unprocessable_entity
    end
  end

  private

  def partners_location_params
    params.require(:partners_location).permit(:radius, :latitude, :longitude)
  end
end
app/channels/partners_locations_search_channel.rb:
class PartnersLocationsSearchChannel < ApplicationCable::Channel
  def subscribed
    @current_partners_location = PartnersLocation.find(params[:id])
    stream_for @current_partners_location
  end

  def unsubscribed
    @current_partners_location.destroy
  end

  def search(data)
    radius = data.fetch('radius')

    @current_partners_location.update!(radius: radius)

    partners_locations = PartnersLocation.where.not(
      id: @current_partners_location.id
    ).where(
      # TODO: update this `where` to do a Postgis two-circle (via radius) intersection query
      # to get all the `partners_locations` whose "radiuses" intersect with ours.
      # ^ haven't done this "circle-intersect" before, but probably the following will help:
      # https://gis.stackexchange.com/questions/166685/postgis-intersect-two-circles-each-circle-must-be-built-from-long-lat-degrees?rq=1
    )

    partners_locations.each do |partners_location|
      PartnersLocationsSearchChannel.broadcast_to(
        partners_location,
        @current_partners_location.as_json
      )
      PartnersLocationsSearchChannel.broadcast_to(
        @current_partners_location,
        partners_location.as_json
      )
    end
  end
end
The code above still needs to be tweaked:
update any of the code above to use your JSON API conventions instead
update the client side accordingly (it may not be JS), i.e. you can use:
https://github.com/danielrhodes/Swift-ActionCableClient
https://github.com/hosopy/actioncable-client-java
partners_locations#create still needs to be tweaked to save both latitude and longitude into the geolocation attribute
current_partners_location.as_json above still needs to be tweaked to return latitude and longitude instead of the geolocation attribute
def search above needs to be updated with the where PostGIS two-circle-intersect condition. I honestly don't know how to approach this; if anyone does, please let me know. This is the closest I could find on the web (see the sketch after this list for one possible direction).
the JS code above still needs to be tweaked to gracefully handle connection errors, and to stop the setInterval after a successful match
the JS code above still needs to be tweaked to "unsubscribe" from the PartnersLocationsSearchChannel once the "loading screen" has been closed, or when a "match" has already been found, or something like that (depends on your requirements)

Related

cyclomatic complexity is too high rubocop for method

This is code I am using in my project.
Please suggest some optimizations (I have refactored this code a lot, but I can't think of any further ways to optimize it).
def convert_uuid_to_emails(user_payload)
  return unless (user_payload[:target] == 'ticket' or user_payload[:target] == 'change')

  action_data = user_payload[:actions]
  action_data.each do |data|
    is_add_project = data[:name] == 'add_fr_project'
    is_task = data[:name] == 'add_fr_task'
    next unless (is_add_project or is_task)

    has_reporter_uuid = is_task && Va::Action::USER_TYPES.exclude?(data[:reporter_uuid])
    user_uuids = data[:user_uuids] || []
    user_uuids << data[:owner_uuid] if Va::Action::USER_TYPES.exclude?(data[:owner_uuid])
    user_uuids << data[:reporter_uuid] if has_reporter_uuid

    users_data = current_account.authorizations.includes(:user).where(uid: user_uuids).each_with_object({}) { |a, o| o[a.uid] = { uuid: a.uid, user_id: a.user.id, user_name: a.user.name } }

    if Va::Action::USER_TYPES.include? data[:owner_uuid]
      data['owner_details'] = {}
    else
      data['owner_details'] = users_data[data[:owner_uuid]]
      users_data.delete(data[:owner_uuid])
    end

    data['reporter_details'] = has_reporter_uuid ? users_data[data[:reporter_uuid]] : {}
    data['user_details'] = users_data.values
  end
end
Note that Rubocop is complaining that your code is too hard to understand, not that it won't work correctly. The method is called convert_uuid_to_emails, but it doesn't just do that:
validates that the payload is one of two types
filters the items in the payload by two other types
determines the presence of various user roles in the input
shoves all the found user UUIDs into an array
converts the UUIDs into users by looking them up
finds them again in the array to enrich the various types of user details in the payload
This comes down to a big violation of the SRP (single responsibility principle), not to mention that it is a method that might surprise the caller with its unexpected list of side effects.
Obviously, all of these steps still need to be done, just not all in the same method.
Consider breaking these steps out into separate methods that you can compose into an enrich_payload_data method that works at a higher level of abstraction, keeping the details of how each part works local to each method (something like the sketch below). I would probably create a method that takes a UUID and converts it to a user, which can be called each time you need to look up the user details for a UUID, as this doesn't appear to be role-specific.
The booleans is_task, is_add_project, and has_reporter_uuid are just intermediate variables that clutter up the code; you probably won't need them if you break it down into smaller methods.
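For example, the decomposition might look roughly like this. This is an illustrative sketch only: the helper names are made up, and the task-specific reporter rule from the original is omitted for brevity:

def enrich_payload_data(user_payload)
  return unless %w[ticket change].include?(user_payload[:target].to_s)

  user_payload[:actions].each { |action| enrich_action(action) }
end

def enrich_action(action)
  return unless %w[add_fr_project add_fr_task].include?(action[:name])

  users_by_uuid = users_by_uuid_for(action)
  action['owner_details']    = users_by_uuid.delete(action[:owner_uuid]) || {}
  action['reporter_details'] = users_by_uuid.delete(action[:reporter_uuid]) || {}
  action['user_details']     = users_by_uuid.values
end

def users_by_uuid_for(action)
  uuids = Array(action[:user_uuids]) + [action[:owner_uuid], action[:reporter_uuid]]
  uuids = uuids.compact.reject { |uuid| Va::Action::USER_TYPES.include?(uuid) }

  # One lookup, keyed by UUID, so the callers above can pick out roles cheaply
  current_account.authorizations.includes(:user).where(uid: uuids)
                 .each_with_object({}) do |auth, result|
    result[auth.uid] = { uuid: auth.uid, user_id: auth.user.id, user_name: auth.user.name }
  end
end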

Stripe API auto_paging get all Stripe::BalanceTransaction except some charge

I'm trying to get all Stripe::BalanceTransaction except those that are already in my JsonStripeEvent.
What I did =>
def perform(*args)
  last_recorded_txt = REDIS.get('last_recorded_stripe_txn_last')
  txns = Stripe::BalanceTransaction.all(limit: 100, expand: ['data.source', 'data.source.application_fee'], ending_before: last_recorded_txt)
  REDIS.set('last_recorded_stripe_txn_last', txns.data[0].id) unless txns.data.empty?

  txns.auto_paging_each do |txn|
    if txn.type.eql?('charge') || txn.type.eql?('payment')
      begin
        JsonStripeEvent.create(data: txn.to_json)
      rescue StandardError => e
        Rails.logger.error "Error while saving data from stripe #{e}"
        REDIS.set('last_recorded_stripe_txn_last', txn.id)
        break
      end
    end
  end
end
But it doesn't get the new ones from the API.
Can anyone help me with this? :)
Thanks
I think it's because the way auto_paging_each works is almost the opposite of what you expect :)
As you can see from its source, auto_paging_each calls Stripe::ListObject#next_page, which is implemented as follows:
def next_page(params={}, opts={})
  return self.class.empty_list(opts) if !has_more
  last_id = data.last.id

  params = filters.merge({
    :starting_after => last_id,
  }).merge(params)

  list(params, opts)
end
It simply takes the last (already fetched) item and adds its id as the starting_after filter.
So what happens:
You fetch the 100 "latest" (let's say) records, ordered by descending date (the default order for the BalanceTransaction API according to the Stripe docs).
When you then call auto_paging_each on this dataset, it takes the last record, adds its id as the starting_after filter and repeats the query.
The repeated query returns nothing because there is nothing newer (starting later) than the set you initially fetched.
Since no newer items are available, the iteration stops after the first step.
What you could do here:
First of all, ensure that my hypothesis is correct :) - put breakpoint(s) inside Stripe::ListObject and check. Then either 1) rewrite your code to use starting_after traversal logic instead of ending_before - it should then work fine with auto_paging_each - or 2) rewrite your code to control the fetching order manually.
Personally, I'd vote for (2): for me, a slightly more verbose but straightforward and "visible" control flow is better than poorly documented magic.
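A rough sketch of option (2), using only the default newest-first ordering and stopping once the last recorded transaction is reached. The Redis key and model come from the question; .list is the newer name for .all in the stripe gem:

def perform(*args)
  last_recorded_txn = REDIS.get('last_recorded_stripe_txn_last')

  new_txns = []
  txns = Stripe::BalanceTransaction.list(limit: 100, expand: ['data.source', 'data.source.application_fee'])
  txns.auto_paging_each do |txn|
    break if txn.id == last_recorded_txn
    new_txns << txn if txn.type == 'charge' || txn.type == 'payment'
  end

  # Process oldest first so the Redis cursor always points at a record
  # that has actually been saved.
  new_txns.reverse_each do |txn|
    JsonStripeEvent.create(data: txn.to_json)
    REDIS.set('last_recorded_stripe_txn_last', txn.id)
  end
end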

Getting a Primary Key error in Rails using Sidekiq and Sidekiq-Cron

I have a Rails project that uses Sidekiq for worker tasks, and Sidekiq-Cron to handle scheduling. I am running into a problem, though. I built a controller (below) that handled all of my API querying, validation of data, and then inserting data into the database. All of the logic functioned properly.
I then tore out the section of code that actually inserts API data into the database, and moved it into a Job class. This way the Controller method could simply pass all of the heavy lifting off to a job. When I tested it, all of the logic functioned properly.
Finally, I created a Job that would call the Controller method every minute, do the validation checks, and then kick off the other Job to save the API data (if necessary). When I do this the first part of the logic seems to work, where it inserts new event data, but the logic where it checks to see if this is the first time we've seen an event for a specific object seems to be failing. The result is a Primary Key violation in PG.
Code below:
Controller
require 'date'

class MonnitOpenClosedSensorsController < ApplicationController
  def holderTester()
    # MonnitschedulerJob.perform_later(nil)
  end

  # Create Sidekiq queue to process new sensor readings
  def queueNewSensorEvents(auth_token, network_id)
    m = Monnit.new("iMonnit", 1)

    # Construct the query to select the most recent communication date for each sensor in the network
    lastEventForEachSensor = MonnitOpenClosedSensor.select('"SensorID", MAX("LastCommunicationDate") as "lastCommDate"')
    lastEventForEachSensor = lastEventForEachSensor.group("SensorID")
    lastEventForEachSensor = lastEventForEachSensor.where('"CSNetID" = ?', network_id)

    todaysDate = Date.today
    sevenDaysAgo = (todaysDate - 7)

    lastEventForEachSensor.each do |event|
      # puts event["lastCommDate"]
      recentEvent = MonnitOpenClosedSensor.select('id, "SensorID", "LastCommunicationDate"')
      recentEvent = recentEvent.where('"CSNetID" = ? AND "SensorID" = ? AND "LastCommunicationDate" = ?', network_id, event["SensorID"], event["lastCommDate"])

      recentEvent.each do |recent|
        message = m.get_extended_sensor(auth_token, recent["SensorID"])
        if message["LastDataMessageMessageGUID"] != recent["id"]
          MonnitopenclosedsensorJob.perform_later(auth_token, network_id, message["SensorID"])
          # puts "hi inner"
          # puts message["LastDataMessageMessageGUID"]
          # puts recent['id']
          # puts recent["SensorID"]
          # puts message["SensorID"]
          # raise message
        end
      end
    end

    # Queue up any Sensor Events for new sensors
    # This would be sensors we've never seen before, from a Postgres standpoint
    sensors = m.get_sensor_ids(auth_token)
    sensors.each do |sensor|
      sensorCheck = MonnitOpenClosedSensor.select(:SensorID)
      # sensorCheck = MonnitOpenClosedSensor.select(:SensorID)
      sensorCheck = sensorCheck.group(:SensorID)
      sensorCheck = sensorCheck.where('"CSNetID" = ? AND "SensorID" = ?', network_id, sensor)
      # sensorCheck = sensorCheck.where('id = "?"', sensor["LastDataMessageMessageGUID"])

      if sensorCheck.any? == false
        MonnitopenclosedsensorJob.perform_later(auth_token, network_id, sensor)
      end
    end
  end
end
The above code breaks Sensor Events for new sensors. It doesn't recognize that a sensor already exists, first issue, and then doesn't recognize that the event it is trying to create is already persisted to the database (uses a GUID for comparison).
Job to persist data
class MonnitopenclosedsensorJob < ApplicationJob
  queue_as :default

  def perform(auth_token, network_id, sensor)
    m = Monnit.new("iMonnit", 1)
    newSensor = m.get_extended_sensor(auth_token, sensor)

    sensorRecord = MonnitOpenClosedSensor.new

    sensorRecord.SensorID = newSensor['SensorID']
    sensorRecord.MonnitApplicationID = newSensor['MonnitApplicationID']
    sensorRecord.CSNetID = newSensor['CSNetID']

    lastCommunicationDatePretty = newSensor['LastCommunicationDate'].scan(/[0-9]+/)[0].to_i / 1000.0
    nextCommunicationDatePretty = newSensor['NextCommunicationDate'].scan(/[0-9]+/)[0].to_i / 1000.0

    sensorRecord.LastCommunicationDate = Time.at(lastCommunicationDatePretty)
    sensorRecord.NextCommunicationDate = Time.at(nextCommunicationDatePretty)

    sensorRecord.id = newSensor['LastDataMessageMessageGUID']
    sensorRecord.PowerSourceID = newSensor['PowerSourceID']
    sensorRecord.Status = newSensor['Status']
    sensorRecord.CanUpdate = newSensor['CanUpdate'] == "true" ? 1 : 0
    sensorRecord.ReportInterval = newSensor['ReportInterval']
    sensorRecord.MinimumThreshold = newSensor['MinimumThreshold']
    sensorRecord.MaximumThreshold = newSensor['MaximumThreshold']
    sensorRecord.Hysteresis = newSensor['Hysteresis']
    sensorRecord.Tag = newSensor['Tag']
    sensorRecord.ActiveStateInterval = newSensor['ActiveStateInterval']
    sensorRecord.CurrentReading = newSensor['CurrentReading']
    sensorRecord.BatteryLevel = newSensor['BatteryLevel']
    sensorRecord.SignalStrength = newSensor['SignalStrength']
    sensorRecord.AlertsActive = newSensor['AlertsActive']
    sensorRecord.AccountID = newSensor['AccountID']
    sensorRecord.CreatedOn = Time.now.getutc
    sensorRecord.CreatedBy = "Monnit Open Closed Sensor Job"
    sensorRecord.LastModifiedOn = Time.now.getutc
    sensorRecord.LastModifiedBy = "Monnit Open Closed Sensor Job"
    sensorRecord.save

    sensorRecord = nil
  end
end
Job to call controller every minute
class MonnitschedulerJob < ApplicationJob
  queue_as :default

  def perform(*args)
    m = Monnit.new("iMonnit", 1)

    getImonnitUsers = ImonnitCredential.select('"auth_token", "username", "password"')
    getImonnitUsers.each do |user|
      # puts user["auth_token"]
      # puts user["username"]
      # puts user["password"]

      if user["auth_token"] != nil
        m.logon(user["auth_token"])
      else
        auth_token = m.get_auth_token(user["username"], user["password"])
        auth_token = auth_token["Result"]
      end

      network_list = m.get_network_list(auth_token)
      network_list.each do |network|
        # puts network["NetworkID"]
        MonnitOpenClosedSensorsController.new.queueNewSensorEvents(auth_token, network["NetworkID"])
      end
    end
  end
end
Sorry about the length of the post. I tried to include as much information as I could about the code involved.
EDIT
Here is the code for the extended sensor, along with the JSON response:
def get_extended_sensor(auth_token, sensor_id)
  response = self.class.get("/json/SensorGetExtended/#{auth_token}?SensorID=#{sensor_id}")

  if response['Result'] != "Invalid Authorization Token"
    response['Result']
  else
    response['Result']
  end
end
{
  "Method": "SensorGetExtended",
  "Result": {
    "ReportInterval": 180,
    "ActiveStateInterval": 180,
    "InactivityAlert": 365,
    "MeasurementsPerTransmission": 1,
    "MinimumThreshold": 4294967295,
    "MaximumThreshold": 4294967295,
    "Hysteresis": 0,
    "Tag": "",
    "SensorID": 189092,
    "MonnitApplicationID": 9,
    "CSNetID": 24391,
    "SensorName": "Open / Closed - 189092",
    "LastCommunicationDate": "/Date(1500999632000)/",
    "NextCommunicationDate": "/Date(1501010432000)/",
    "LastDataMessageMessageGUID": "d474b3db-d843-40ba-8e0e-8c4726b61ec2",
    "PowerSourceID": 1,
    "Status": 0,
    "CanUpdate": true,
    "CurrentReading": "Open",
    "BatteryLevel": 100,
    "SignalStrength": 84,
    "AlertsActive": true,
    "CheckDigit": "QOLP",
    "AccountID": 14728
  }
}
Some thoughts:
recentEvent = MonnitOpenClosedSensor.select('id, "SensorID", "LastCommunicationDate"') -
this is not doing any ordering; you are presuming that the records you retrieve here are the latest records.
m = Monnit.new("iMonnit", 1)
newSensor = m.get_extended_sensor(auth_token, sensor)
without the implementation details of get_extended_sensor it's impossible to tell you how
sensorRecord.id = newSensor['LastDataMessageMessageGUID']
is resolving.
It's highly likely that you are getting duplicate messages. It's almost never a good idea to use input data as a primary key; rather, autogenerate a GUID in your job, use that as the primary key, and then use the LastDataMessageMessageGUID as a correlation id.
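A sketch of what that could look like. The migration and callback are illustrative, not code from the question, and the migration version number is only an example:

class AddCorrelationIdToMonnitOpenClosedSensors < ActiveRecord::Migration[5.1]
  def change
    add_column :monnit_open_closed_sensors, :correlation_id, :string
    # A unique index turns a duplicate message into a clean constraint error
    # instead of a primary key violation.
    add_index :monnit_open_closed_sensors, :correlation_id, unique: true
  end
end

class MonnitOpenClosedSensor < ApplicationRecord
  before_create { self.id ||= SecureRandom.uuid }
end

# In the job, store the API GUID as a correlation id instead of the primary key:
#   sensorRecord.correlation_id = newSensor['LastDataMessageMessageGUID']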
So the issue that I was running into, as it turns out, is as follows:
A sensor event was pulled from the API and queued up as a worker job in Sidekiq.
If the queue is running a bit slow (API speed, or simply a lot of jobs to process), the 1-minute poll might hit again and pull the same sensor event down and queue it up.
As the queue processes, the sensor event gets inserted into the database with its GUID being the primary key.
As the queue continues to catch up with itself, it hits the same event that was scheduled as a second job. This job then fails.
My solution was to move my "does this SensorID and GUID exist in the database" check into the actual job. So the first thing the job does when it runs is check AGAIN for the record to already exist. This means I am checking twice, but this quick check has low overhead.
There is still the risk that a check could happen and pass while another job is inserting the record, before it commits to the database, and then the insert could fail. But the retry would catch it and clear it out as a successful run when the check doesn't validate on the second round. Having said that, the check occurs AFTER the API data has been pulled. Since, in theory, persisting a single record happens much faster than the API call, it really lowers the chances of ever needing a retry; you'd have a better chance of hitting the lottery than having the second check fail and trigger a retry.
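A minimal sketch of that guard, keeping the GUID as the primary key as in the question:

class MonnitopenclosedsensorJob < ApplicationJob
  queue_as :default

  def perform(auth_token, network_id, sensor)
    m = Monnit.new("iMonnit", 1)
    newSensor = m.get_extended_sensor(auth_token, sensor)

    # Re-check right before inserting: a second queued copy of the same event
    # becomes a no-op instead of a primary key violation.
    return if MonnitOpenClosedSensor.exists?(id: newSensor['LastDataMessageMessageGUID'])

    # ... build and save sensorRecord exactly as before ...
  end
end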
If anyone else has a better, or more clean solution, please feel free to include it as a secondary answer!

Ice cube, how to set a rule of every day at a certain time for Sidetiq/Fist of Fury

Per the docs, I thought it would be (for every day at 3pm):
daily.hour_of_day(15)
What I'm getting is a random mess. First, it's executing whenever I push to Heroku regardless of time, and then beyond that, seemingly randomly. So the latest push to Heroku was 1:30pm. It executed: twice at 1:30pm, once at 2pm, once at 4pm, once at 5pm.
Thoughts on what's wrong?
Full code (note this is for the Fist of Fury gem, but FoF is heavily influenced by Sidetiq so help from Sidetiq users would be great as well).
class Outstanding
  include SuckerPunch::Job
  include FistOfFury::Recurrent

  recurs { daily.hour_of_day(15) }

  def perform
    ActiveRecord::Base.connection_pool.with_connection do
      # Auto email lenders every other day if they have outstanding requests
      lender_array = Array.new
      Inventory.where(id: (Borrow.where(status1: 1).all.pluck("inventory_id"))).each { |i| lender_array << i.signup.id }
      lender_array.uniq!
      lender_array.each { |l| InventoryMailer.outstanding_request(l).deliver }
    end
  end
end
Maybe you should use:
recurrence { daily.hour_of_day(15) }
instead of recurs?

Activerecord transaction concurrency race condition issues

I'm currently doing live testing of a game I'm making for Android. The services are written in Rails 3.1 and I'm using PostgreSQL. Some of my more technically savvy testers have been able to manipulate the game by recording their requests to the server and replaying them with high concurrency. I'll try to briefly describe the scenario below without getting caught up in the code.
A user can purchase multiple items, each item has its own record in the database.
The request goes to a controller action, which creates a purchase model to record information about the transaction.
The trade model has a method that sets up the purchase of the items. It essentially does a few logical steps to see if they can purchase the item. The most important is that they have a limit of 100 items per user at any given time. If all the conditions pass, a simple loop is used to create the number of items they requested.
So what they are doing is recording 1 valid purchase request via a proxy, then replaying it with high concurrency, which essentially allows a few extra purchases to slip through each time. So if they set it to purchase a quantity of 100, they can get it up to 300-400, and if they do a quantity of 15, they can get it up to around 120.
The above purchase method is wrapped in a transaction. However, even though it's wrapped, it won't stop the exploit in certain circumstances where the requests execute nearly at the same time. I'm guessing this may require some DB locking. Another twist worth knowing is that at any given time, rake tasks run as cron jobs against the user table to update the players' health and energy attributes, so that cannot be blocked either.
Any assistance would be really awesome. This is my little hobby side project and I want to make sure the game is fair and fun for everyone.
Thanks so much!
Controller action:
def hire
  worker_asset_type_id = (params[:worker_asset_type_id])
  quantity = (params[:quantity])

  trade = Trade.new()
  trade_response = trade.buy_worker_asset(current_user, worker_asset_type_id, quantity)

  user = User.find(current_user.id, select: 'money')

  respond_to do |format|
    format.json {
      render json: {
        trade: trade,
        user: user,
        messages: {
          messages: [trade_response.to_s]
        }
      }
    }
  end
end
Trade Model Method:
def buy_worker_asset(user, worker_asset_type_id, quantity)
  ActiveRecord::Base.transaction do
    if worker_asset_type_id.nil?
      raise ArgumentError.new("You did not specify the type of worker asset.")
    end

    if quantity.nil?
      raise ArgumentError.new("You did not specify the amount of worker assets you want to buy.")
    end

    if quantity <= 0
      raise ArgumentError.new("Please enter a quantity above 0.")
    end

    quantity = quantity.to_i

    worker_asset_type = WorkerAssetType.where(id: worker_asset_type_id).first
    if worker_asset_type.nil?
      raise ArgumentError.new("There is no worker asset of that type.")
    end

    trade_cost = worker_asset_type.min_cost * quantity
    if (user.money < trade_cost)
      raise ArgumentError.new("You don't have enough money to make that purchase.")
    end

    # Get the users first geo asset, this will eventually have to be dynamic
    potential_total = WorkerAsset.where(user_id: user.id).length + quantity

    # Catch all for most people
    if potential_total > 100
      raise ArgumentError.new("You cannot have more than 100 dealers at the current time.")
    end

    quantity.times do
      new_worker_asset = WorkerAsset.new()
      new_worker_asset.worker_asset_type_id = worker_asset_type_id
      new_worker_asset.geo_asset_id = user.geo_assets.first.id
      new_worker_asset.user_id = user.id
      new_worker_asset.clocked_in = DateTime.now
      new_worker_asset.save!
    end

    self.buyer_id = user.id
    self.money = trade_cost
    self.worker_asset_type_id = worker_asset_type_id
    self.trade_type_id = TradeType.where(name: "market").first.id
    self.quantity = quantity

    # save trade
    self.save!

    # is this safe?
    user.money = user.money - trade_cost
    user.save!
  end
end
Sounds like you need idempotent requests so that request replay is ineffective. Where possible, implement operations so that repeating them has no effect. Where not possible, give each request a unique request identifier and record whether requests have been satisfied or not. You can keep the request ID information in an UNLOGGED table in PostgreSQL or in redis/memcached, since you don't need it to be persistent. This will prevent a whole class of exploits.
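A rough sketch of the request-identifier idea. The header name and controller name are assumptions, and REDIS is whatever Redis client the app already has configured:

class TradesController < ApplicationController
  def hire
    request_id = request.headers['X-Request-Id']

    # SET ... NX only succeeds the first time this id is seen, so a replayed
    # request falls through to the error branch instead of buying again.
    first_time = REDIS.set("handled_request:#{request_id}", 1, nx: true, ex: 1.day.to_i)
    unless first_time
      return render json: { messages: { messages: ['Duplicate request ignored.'] } }, status: :conflict
    end

    # ... existing hire logic from the question ...
  end
end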
To deal with just this one problem create an AFTER INSERT OR DELETE ... FOR EACH ROW EXECUTE PROCEDURE trigger on the user items table. Have this trigger:
BEGIN
  -- Lock the user so only one tx can be inserting/deleting items for this user
  -- at the same time
  SELECT 1 FROM user WHERE user_id = <the-user-id> FOR UPDATE;

  IF TG_OP = 'INSERT' THEN
    IF (SELECT count(user_item_id) FROM user_item WHERE user_item.user_id = <the-user-id>) > 100 THEN
      RAISE EXCEPTION 'Too many items already owned, adding this item would exceed the limit of 100 items';
    END IF;
  ELSIF TG_OP = 'DELETE' THEN
    -- No action required, all we needed to do is take the lock
    -- so a concurrent INSERT won't run until this tx finishes
  ELSE
    RAISE EXCEPTION 'Unhandled trigger case %', TG_OP;
  END IF;

  RETURN NULL;
END;
Alternately, you can implement the same thing in the Rails application by taking row-level lock on the customer ID before adding or deleting any item ownership records. I prefer to do this sort of thing in triggers where you can't forget to apply it somewhere, but I realise you might prefer to do it at the app level. See Pessimistic locking.
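At the app level, that could look roughly like this (a sketch of the idea, not a drop-in replacement for the method above):

def buy_worker_asset(user, worker_asset_type_id, quantity)
  ActiveRecord::Base.transaction do
    user.lock!  # SELECT ... FOR UPDATE on the users row; concurrent purchases for this user now serialize here

    # ... existing validations and quantity.to_i as before ...

    potential_total = WorkerAsset.where(user_id: user.id).count + quantity
    if potential_total > 100
      raise ArgumentError.new("You cannot have more than 100 dealers at the current time.")
    end

    # ... create the worker assets, the trade, and debit user.money as before ...
  end
end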
Optimistic locking is not a great fit for this application. You can use it by incrementing the lock counter on the user before adding/removing items, but it'll cause row churn on the users table and is really unnecessary when your transactions will be so short anyway.
We can't help much unless you show us your relevant schema and queries. I suppose that you do something like:
$ start transaction;
$ select amount from itemtable where userid=? and itemid=?;
15
$ update itemtable set amount=14 where userid=? and itemid=?;
$ commit;
And you should do something like:
$ start transaction;
$ update itemtable set amount=amount-1 returning amount where userid=? and itemid=?;
14
$ commit;
