I have a table in the front end with multiple rows that can be read and edited. Each row has an edit icon that the user will click on, and a dialog will pop up to update the fields in the row. The save button on the dialog will save the fields (call the update API), close the dialog, and reload the table by calling the list API with the same page, filters, and sort order.
To support multiple users reading and editing the same table, I want to lock a row when a user clicks its edit icon, and unlock it when the user clicks Save or Cancel on the dialog that pops up. To do this, I added a lock field to each row in the database.
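(The lock field is just a nullable string column, added with a migration along these lines:)
add_column :rows, :lock, :string  # nil means unlocked; otherwise it holds the locking user's name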
When the user clicks on the edit icon, I send a lock API call:
lock_success = false
message = nil
row = Row.find(id)
# (with_lock)
if row.lock.nil?
  row.lock = current_user.user_name
  row.save!
  lock_success = true
end
# (with_lock_end)
When the edit dialog is closed on save or cancel:
Row.update(id, lock: nil)
But couldn't the following interleaving occur?
(1) row = Row.find(1)
(2) row = Row.find(1)
(1) if row.lock.nil?
(2) if row.lock.nil?
(1) row.lock = current_user.user_name
(2) row.lock = current_user.user_name
(1) row.save!
(2) row.save!
If I wrap row.with_lock around (with_lock) and (with_lock_end), it should solve this problem right?
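(For clarity, the wrapped version would look like this; just a sketch of what I mean:)
row = Row.find(id)
row.with_lock do
  # with_lock opens a transaction, reloads the row with SELECT ... FOR UPDATE,
  # and holds the row lock until the block ends, so the nil check and the save
  # happen atomically.
  if row.lock.nil?
    row.lock = current_user.user_name
    row.save!
    lock_success = true
  end
end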
Lastly, can I use optimistic locking with lock_version?
User (1) loads row 1 with version 1.
User (2) loads row 1 with version 1.
User (1) updates row 1 with version 1, now row 1 is version 2.
User (2) updates row 1 with version 1, gets back stale object exception.
Then I wouldn't need to wrap the update calls in with_lock, right? However, how can I keep track of who locked the row with this method?
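(Roughly the flow I have in mind, as a sketch, assuming the table has a lock_version column; edited_fields is just a placeholder for the dialog's values:)
begin
  row = Row.find(id)
  row.update!(edited_fields)  # raises if another user saved the row first
rescue ActiveRecord::StaleObjectError
  # The row changed under us: reload it and ask the user to retry.
end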
Try doing something like this:
lock_obtained = Row.where(id: 1, locked_by: nil)
                   .update_all(locked_by: current_user.user_name, locked_at: Time.zone.now) == 1
# This tries to set the current user's name on row 1 only if it isn't already
# locked. update_all returns the number of rows it updated: 1 means the record
# was updated, so current_user now holds the lock; 0 means it was already
# locked by someone else. That's what the "== 1" is for.
Now you try to take the lock and find out in a single statement whether you succeeded or the row was already locked, so you won't run into that race condition.
Just a few suggestions:
- I would use an integer with the user id for the locked_by column instead of the name
- I would also save the locked_at timestamp; it sounds like it could be useful (see the sketch after this list)
- What do you do if the user closes the tab that locked the row? For this kind of thing you should use ActionCable so you can also detect when the user gets disconnected.
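Putting the first two suggestions together, a minimal sketch of the acquire/release pair (assuming an integer locked_by column, a datetime locked_at column, and a stale-lock timeout so abandoned locks eventually free themselves):
LOCK_TIMEOUT = 30.minutes

def acquire_lock(row_id, user)
  # Atomically take the lock if the row is free or the old lock has gone stale.
  Row.where(id: row_id)
     .where("locked_by IS NULL OR locked_at < ?", LOCK_TIMEOUT.ago)
     .update_all(locked_by: user.id, locked_at: Time.zone.now) == 1
end

def release_lock(row_id, user)
  # Only the current holder may release; returns false otherwise.
  Row.where(id: row_id, locked_by: user.id)
     .update_all(locked_by: nil, locked_at: nil) == 1
end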
I want to use playwright to automatically click and expand all the child nodes. But my code only expands part of the nodes. How should I fix the code? Thank you.
(Screenshots comparing the current, partially expanded tree with the desired, fully expanded tree omitted.)
import json
import time
from playwright.sync_api import sync_playwright

p = sync_playwright().start()
browser = p.chromium.launch(headless=False, slow_mo=2000)
context = browser.new_context()
page = context.new_page()
try:
    # page.add_init_script(js);
    page.goto("https://keepa.com/#!categorytree", timeout=10000)
    # Click text=Log in / register now to subscribe
    page.click("text=Log in / register now to subscribe")
    # Click input[name="username"]
    page.click('input[name="username"]')
    # Fill input[name="username"]
    page.fill('input[name="username"]', "tylrr123@outlook.com")
    # Click input[name="password"]
    page.click('input[name="password"]')
    # Fill input[name="password"]
    page.fill('input[name="password"]', "adnCgL#f$krY9Q9")
    # Click input:has-text("Log in")
    page.click('input:has-text("Log in")')
    page.wait_for_timeout(2000)
    page.goto("https://keepa.com/#!categorytree", timeout=10000)
    while True:
        # loc.first.click()
        loc = page.locator(".ag-icon.ag-icon-expanded")
        print(loc.count())
        loc.first.click(timeout=5000)
        page.wait_for_timeout(2000)
except Exception as err:
    print(err)
finally:
    print("finished")
I write scripts from time to time, but to be honest, this was one of the harder ones. It has been a real challenge.
I think it is finished.
# Import needed libs
import time
from playwright.sync_api import sync_playwright
import datetime

# We save the time when the script starts
init = datetime.datetime.now()
print(f"{datetime.datetime.now()} - Script starts")

# We initiate the playwright page
p = sync_playwright().start()
browser = p.chromium.launch(headless=False)
context = browser.new_context()
page = context.new_page()

# Navigate to Keepa and log in
page.goto("https://keepa.com/#!categorytree")
page.click("text=Log in / register now to subscribe")
page.fill("#username", "tylrr123@outlook.com")
page.fill("#password", "adnCgL#f$krY9Q9")
page.click("#submitLogin", delay=200)

# We wait for the selector of the user profile; once it appears we are logged in
page.wait_for_selector("#panelUsername")

# Navigate to the categorytree url
page.goto("https://keepa.com/#!categorytree")
time.sleep(1)

# This function tries to click on the arrow that expands a subtree
def try_click():
    # We save the number of elements that are closed trees
    count = page.locator("//span[@class='ag-group-contracted']").count()
    # We iterate over the number of elements we had
    for i in range(0, count):
        # If the last element is visible, we go inside the "if" statement. Why the last element instead of the first one? Because, for some reason, the last element is usually the first one... Keepa things, don't ask
        if page.locator(f"(//span[@class='ag-group-contracted'])[{count-i}]").is_visible():
            # The element was visible, so we try to click on it (expand it). I wrapped the click inside a try/except block because sometimes playwright says the click failed when it actually succeeded. I don't know why
            try:
                # Clicking the element
                page.click(f"(//span[@class='ag-group-contracted'])[{count-i}]", timeout=200)
                print(f"Clicking Correct {count-i}. Wheel up")
                # If the element is clicked, we scroll up and return True
                page.mouse.wheel(0, -500)
                return True
            except:
                # As I said, sometimes the click fails but the element is actually clicked, so we return True as well. The only way of returning False is if no element is visible
                print(f"Error Clicking {count-i} but probably was clicked")
                return True
    # No closed tree was visible on screen
    return False

# This function basically checks whether there are still closed trees
def there_is_still_closed_trees():
    try:
        page.wait_for_selector(selector="//span[@class='ag-group-contracted']", state='attached')
        return True
    except:
        print("No more trees closed")
        return False

# When we navigate to the categorytree page a pop-up appears, and you have to move the mouse to make it disappear, so I move the mouse and keep it over the list, because later we will need to scroll up and down over the list
page.mouse.move(400, 1000)
page.mouse.move(400, 400)

# Var to count how many times we scrolled down
wheel_down_times = 0

# We run this loop until there are no more closed trees
while there_is_still_closed_trees():
    # If we could not click (the closed trees were not visible on the page) we scroll down to find them
    if not try_click():
        # We scroll down and add one to the scroll-down counter
        print("Wheel down")
        page.mouse.wheel(0, 400)
        wheel_down_times = wheel_down_times + 1
        print(f"Wheel down times = {wheel_down_times}")
        # Sometimes, if we do a lot of scrolls, the page can crash, so we sleep the script 10 secs every 100 scrolls
        if wheel_down_times % 100 == 0:
            print("Sleeping 10 secs in order to avoid page crashes")
            time.sleep(10)
    # This "if" checks that the last element of the whole tree is visible and that we scrolled down more than 5 times. That means we are at the end of the list but missed some closed trees, so we scroll back up to the top of the list and start scrolling down again looking for the pending closed trees
    if page.locator("//span[text()='Walkthroughs & Tutorials']").is_visible() and wheel_down_times > 5:
        page.mouse.wheel(0, -5000000)
        print(f"Wheel down times from {wheel_down_times} to 0")
        wheel_down_times = 0

# Script finishes and shows a summary of timings
end = datetime.datetime.now()
print(f"{datetime.datetime.now()} - Script finished")
print(f"Script started at: {init}")
print(f"Script ended at: {end}")
print("There should not be any more closed trees")
# This sleeps the script in case you want to look at the screen. You can remove it and the page will be closed
time.sleep(10000)
The script takes almost 3 hours. I don't know how Keepa has so many categories. Awesome...
I'm using Delphi 10.1 Berlin and Access 2013.
My problem is related to TADODataSet.Cancel().
I want to show my user a message box asking for a confirmation before posting, in case data has been modified.
In the TADODataSet.BeforePost event, I added the following code:
if Application.MessageBox('Save changes?', '', 52) = idNo then
  ADODataSet1.Cancel;
If the user clicks the No button, something unexpected happens.
The changes to the current record are cancelled, but a new record with all fields empty is created.
The only field with any data is the one that was previously modified by the user.
If I cancel the modification via the cancel button of TDBNavigator, everything is fine.
If I simulate a click of the Cancel button of the TDBNavigator in the BeforePost event:
if Application.MessageBox('Save changes?', '', 52) = idNo then
  DBNavigator1.BtnClick(nbCancel);
I get the same behaviour: a new empty record is created.
Any suggestions?
The help for TADODataSet.BeforePost says in part:
call Abort to cancel the Post operation (Delphi) or throw an exception (C++).
So:
if Application.MessageBox('Save changes?', '', 52) = idNo then
  Abort;
Note this is meant for preventing changes that don't pass validation (the common use for BeforePost) from being posted. It doesn't reset the edit buffers the way Cancel does. Usually cancelling is a separate function in the UI, so the user doesn't have to re-enter all the changed data each time posting is rejected by calling Abort in BeforePost.
Who can help me develop a script that will lock a Google Sheets row after a user has entered data into that row?
Case Description:
I have a spreadsheet table. This table is used by many users to enter data. I need to be sure that different users cannot change the data entered by others. Ideally each row would have a special "lock" button: when a user has entered all the info into a table row, he can push the "lock" button to prevent data changes by other users. I would also like the user to be able to change the data he entered, but only within a time limit, for example for 30 minutes after he locked the row.
As an admin I wish to be able to change any data in a spreadsheet table.
Thank you for your help.
Perhaps you can use Google Sheets' Protected Range feature as the lock. When person A wants to write data, he sets the sheet as protected; after writing, he makes it public again. While person A is writing, if person B also wants to write, he will get an exception, which he can catch, wait a moment, and then retry the write.
class ContextManagerUpdateSheet(object):
    def __init__(self, spread_sheet_id, sheet_id):
        self.spread_sheet_id = spread_sheet_id
        self.sheet_id = sheet_id
        # self.end_row_index = end_row_index
        # self.end_column_index = ord(end_column_index) - 65

    def __enter__(self):
        logger.info(f"set spreadsheet sheet: {self.sheet_id} protected.")
        self.protected_id = set_protected(self.spread_sheet_id, self.sheet_id)

    def __exit__(self, *others):
        logger.info(f"release protected spreadsheet sheet: {self.sheet_id}.")
        delete_protected(self.spread_sheet_id, self.protected_id)


def runner():
    with ContextManagerUpdateSheet("{google_spread_url}", 0):
        from datetime import datetime
        print(datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
        data = [["www", "3333"]]
        apped_data("{google_spread_url}", 0, data)
---
@retry(googleapiclient.errors.HttpError, 5, 20, logger=logger)
def set_protected(spreadsheet_id, sheet_id):
    logger.info("test")
    service = get_service_handler()
    requests_list = list()
    requests_list.append(__protected_info(sheet_id))
    body = {
        "requests": requests_list
    }
    resp = service.spreadsheets().batchUpdate(spreadsheetId=spreadsheet_id,
                                              body=body).execute()
    return resp["replies"][0]["addProtectedRange"]["protectedRange"]["protectedRangeId"]
I want to do a live search on the DB.
Let's say I want to search by company, and I have the following values in a column named companies.
Facebook
FastCompany
Facebook
Google
Microsoft
I have a text field that calls a function on Editing Changed.
@IBAction func searching(sender: AnyObject) {
    tempstring = "%" + searchBar.text + "%"
    println(tempstring)
    user = user.select(name)
        .filter(like(tempstring, name))
        .limit(30, offset: 0)
    collectionView?.reloadData()
}
It kind of works: if I start typing "fa", it will show (Facebook, Facebook and FastCompany).
If I continue typing "fac" it will show (Facebook, Facebook)
But when I delete the last character "c" from the searchbox (leaving it in "fa" again) then the query displays nothing.
Any ideas on how I can solve this?
I think your issue is coming from overwriting the user object with each search. This is fine as long as you only move forward, but when you go backwards, as you did, the query breaks.
Instead, try adding a currentQuery property to the view controller that holds the collection view, and set it to your user.select statement.
currentQuery = user.select(name)
    .filter(like(tempstring, name))
    .limit(30, offset: 0)
Then use the currentQuery object to display the results instead. This way each search filters the full table from scratch, rather than narrowing the already-filtered user query, so deleting characters works as well.
I'm currently doing live testing of a game I'm making for Android. The services are written in Rails 3.1 and I'm using PostgreSQL. Some of my more technically savvy testers have been able to manipulate the game by recording their requests to the server and replaying them with high concurrency. I'll try to briefly describe the scenario below without getting caught up in the code.
A user can purchase multiple items, each item has its own record in the database.
The request goes to a controller action, which creates a purchase model to record information about the transaction.
The trade model has a method that sets up the purchase of the items. It essentially does a few logical steps to see if they can purchase the item. The most important is that they have a limit of 100 items per user at any given time. If all the conditions pass, a simple loop is used to create the number of items they requested.
So, what they are doing is recording one valid purchase request via a proxy, then replaying it with high concurrency, which essentially allows a few extra purchases to slip through each time. So if they set it to purchase a quantity of 100, they can get it up to 300-400, or if they do a quantity of 15, they can get it up to around 120.
The purchase method above is wrapped in a transaction. However, even though it's wrapped, it won't stop the exploit when the requests execute at nearly the same time. I'm guessing this may require some DB locking. Another twist to be aware of is that rake tasks are being run in cron jobs against the user table at any given time to update the players' health and energy attributes, so that cannot be blocked either.
Any assistance would be really awesome. This is my little hobby side project and I want to make sure the game is fair and fun for everyone.
Thanks so much!
Controller action:
def hire
  worker_asset_type_id = params[:worker_asset_type_id]
  quantity = params[:quantity]
  trade = Trade.new
  trade_response = trade.buy_worker_asset(current_user, worker_asset_type_id, quantity)
  user = User.find(current_user.id, select: 'money')
  respond_to do |format|
    format.json {
      render json: {
        trade: trade,
        user: user,
        messages: {
          messages: [trade_response.to_s]
        }
      }
    }
  end
end
Trade Model Method:
def buy_worker_asset(user, worker_asset_type_id, quantity)
  ActiveRecord::Base.transaction do
    if worker_asset_type_id.nil?
      raise ArgumentError.new("You did not specify the type of worker asset.")
    end
    if quantity.nil?
      raise ArgumentError.new("You did not specify the amount of worker assets you want to buy.")
    end
    quantity = quantity.to_i
    if quantity <= 0
      raise ArgumentError.new("Please enter a quantity above 0.")
    end
    worker_asset_type = WorkerAssetType.where(id: worker_asset_type_id).first
    if worker_asset_type.nil?
      raise ArgumentError.new("There is no worker asset of that type.")
    end
    trade_cost = worker_asset_type.min_cost * quantity
    if user.money < trade_cost
      raise ArgumentError.new("You don't have enough money to make that purchase.")
    end
    # Get the user's first geo asset, this will eventually have to be dynamic
    potential_total = WorkerAsset.where(user_id: user.id).length + quantity
    # Catch-all for most people
    if potential_total > 100
      raise ArgumentError.new("You cannot have more than 100 dealers at the current time.")
    end
    quantity.times do
      new_worker_asset = WorkerAsset.new
      new_worker_asset.worker_asset_type_id = worker_asset_type_id
      new_worker_asset.geo_asset_id = user.geo_assets.first.id
      new_worker_asset.user_id = user.id
      new_worker_asset.clocked_in = DateTime.now
      new_worker_asset.save!
    end
    self.buyer_id = user.id
    self.money = trade_cost
    self.worker_asset_type_id = worker_asset_type_id
    self.trade_type_id = TradeType.where(name: "market").first.id
    self.quantity = quantity
    # save trade
    self.save!
    # is this safe?
    user.money = user.money - trade_cost
    user.save!
  end
end
Sounds like you need idempotent requests so that request replay is ineffective. Where possible implement operations so that repeating them has no effect. Where not possible, give each request a unique request identifier and record whether requests have been satisfied or not. You can keep the request ID information in an UNLOGGED table in PostgreSQL or in redis/memcached since you don't need it to be persistent. This will prevent a whole class of exploits.
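One way to record the request IDs in Rails (a sketch only; the request_id parameter and the ProcessedRequest model with a unique index on request_id are assumptions for illustration):
def hire
  # A replayed request_id violates the unique index, so the duplicate
  # purchase never runs.
  ProcessedRequest.create!(request_id: params[:request_id])
  # ... proceed with the normal purchase flow ...
rescue ActiveRecord::RecordNotUnique
  render json: { messages: { messages: ["Duplicate request ignored."] } }
end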
To deal with just this one problem, create an AFTER INSERT OR DELETE ... FOR EACH ROW EXECUTE PROCEDURE trigger on the user items table. Have this trigger:
BEGIN
  -- Lock the user so only one tx can be inserting/deleting items for this user
  -- at the same time
  SELECT 1 FROM "user" WHERE user_id = <the-user-id> FOR UPDATE;
  IF TG_OP = 'INSERT' THEN
    IF (SELECT count(user_item_id) FROM user_item WHERE user_item.user_id = <the-user-id>) > 100 THEN
      RAISE EXCEPTION 'Too many items already owned, adding this item would exceed the limit of 100 items';
    END IF;
  ELSIF TG_OP = 'DELETE' THEN
    -- No action required, all we needed to do is take the lock
    -- so a concurrent INSERT won't run until this tx finishes
  ELSE
    RAISE EXCEPTION 'Unhandled trigger case %', TG_OP;
  END IF;
  RETURN NULL;
END;
Alternatively, you can implement the same thing in the Rails application by taking a row-level lock on the user row before adding or deleting any item ownership records. I prefer to do this sort of thing in triggers, where you can't forget to apply it somewhere, but I realise you might prefer to do it at the app level. See Pessimistic locking.
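At the application level, the same idea looks roughly like this (a sketch reusing the models from the question):
def buy_worker_asset(user, worker_asset_type_id, quantity)
  ActiveRecord::Base.transaction do
    # SELECT ... FOR UPDATE on the user row: concurrent purchases for this
    # user queue up behind the lock, so the 100-item check cannot be raced.
    user.lock!
    # ... existing validations, count check, and item creation ...
  end
end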
Optimistic locking is not a great fit for this application. You can use it by incrementing the lock counter on the user before adding/removing items, but it'll cause row churn on the users table and is really unnecessary when your transactions will be so short anyway.
We can't help much unless you show us your relevant schema and queries. I suppose that you do something like:
$ start transaction;
$ select amount from itemtable where userid=? and itemid=?;
15
$ update itemtable set amount=14 where userid=? and itemid=?;
$ commit;
And you should do something like:
$ start transaction;
$ update itemtable set amount=amount-1 returning amount where userid=? and itemid=?;
14
$ commit;