Is it possible to get a tree of database transactions like:
app/models/user.rb:12 Transaction#1
app/models/user.rb:15 Transaction#2
app/models/user.rb:20 Transaction#4
app/models/car.rb:32 Transaction#3
I am trying to implement a Redis cache in a Rails application. So far I am able to cache the ActiveRecord data in Redis and fetch all the records at once with a GET. But I am having a difficult time figuring out how to fetch a single record by id, since the data stored in Redis is a string.
Following is the data cached by redis:
"set" "bookstore:authors" "[{\"id\":1,\"name\":\"Stephenie Meyer\",\"created_at\":\"2018-05-03T10:58:20.326Z\",\"updated_at\":\"2018-05-03T10:58:20.326Z\"},{\"id\":2,\"name\":\"V.C. Andrews\",\"created_at\":\"2018-05-03T10:58:20.569Z\",\"updated_at\":\"2018-05-03T10:58:20.569Z\"}]
Now I am calling
authors = $redis.get('authors')
to display all the authors.
How can I fetch a single author using his id?
Helper method to fetch authors:
def fetch_authors
  authors = $redis.get('authors')
  if authors.nil?
    authors = Author.all.to_json
    $redis.set("authors", authors)
    $redis.expire("authors", 5.hours.to_i)
  end
  JSON.load authors
end
For your use case, using a hash is probably better. You could use the following commands to achieve that:
HSET bookstore:authors 1 "<author json>"
HGET bookstore:authors 1 // Returns that author's json data
Or you can store each author on its own:
SET bookstore:authors:1 <author_json>
GET bookstore:authors:1 // Returns that author's json data
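If you want to stay in Ruby, here is a minimal sketch of the hash-based approach (it assumes the same $redis connection and Author model from the question; cache_authors and fetch_author are just illustrative names):
def cache_authors
  # Store each author as a field of a Redis hash, keyed by the author's id.
  Author.all.each do |author|
    $redis.hset("bookstore:authors", author.id, author.to_json)
  end
  $redis.expire("bookstore:authors", 5.hours.to_i)
end

def fetch_author(id)
  # Read a single author's JSON straight from the hash.
  json = $redis.hget("bookstore:authors", id)
  JSON.parse(json) if json
end
The one-key-per-author variant is the same idea, using set/get on "bookstore:authors:#{id}" instead of hset/hget.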
I have a big Microsoft SQL database on a remote host, which I connect to with ActiveRecord 5 using tiny_tds and activerecord-sqlserver-adapter. I need to make multiple queries to one table to find entries belonging to an object. The problem is that there are thousands of queries, which take a really long time against a remote database.
Is it possible to cache the whole table locally so the queries run against the cached copy and are faster?
Edit: The purpose of this operation is to synchronize data from a legacy system to a newer one. The following loop is used for importing:
MsSqlDbEntity.where(deleted: nil).where.not(verified: nil).each do |entity|
  entity_import(entity)
end
These are the methods used:
def entity_import(ms_sql_db_entity)
  new_db_entity = NewDbEntity.new(
    # Some params from ms_sql_db_entity
  )
  sub_entity_import(ms_sql_db_entity, new_db_entity) if new_db_entity.save
end

def sub_entity_import(ms_sql_db_entity, new_db_entity)
  MsSqlDbSubEntity.where(ms_sql_db_entity_id: ms_sql_db_entity.id).each do |sub_entity|
    new_db_sub_entity = NewDbSubEntity.create(
      new_db_entity_id: new_db_entity.id,
      # Some other params
    )
  end
end
The entity and sub_entity have a one-to-many relation.
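One way to cut the per-entity round trips is to pull the sub-entity table across once and index it in memory. A minimal sketch, assuming the table fits in RAM and reusing the models from the question:
# Fetch the whole remote table once and group it by the foreign key,
# so each entity only needs a local hash lookup instead of a remote query.
sub_entities_by_entity = MsSqlDbSubEntity.all.group_by(&:ms_sql_db_entity_id)

MsSqlDbEntity.where(deleted: nil).where.not(verified: nil).find_each do |entity|
  new_db_entity = NewDbEntity.new(
    # Some params from entity
  )
  next unless new_db_entity.save

  (sub_entities_by_entity[entity.id] || []).each do |sub_entity|
    NewDbSubEntity.create(
      new_db_entity_id: new_db_entity.id,
      # Some other params from sub_entity
    )
  end
end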
I have 3 models: Project, MonthlySubscription (STI of Subscription), and MonthlyTransactionQueue (STI of TransactionQueue). Subscription and TransactionQueue both belong_to Project.
I want to create a copy of MonthlySubscription and place it into MonthlyTransactionQueue, for Projects that have a Release with released = false. How would I do this using AR?
My sql looks like this:
insert into transaction_queues
select a.*, b.id as release_id
from subscriptions a
left join releases b
on a.project_id = b.project_id
where b.released = false
and a.type = 'ReleaseSubscription'
For AR I have started with ReleaseSubscription.joins(project: :releases), but it doesn't keep the Release.released field.
You have a few options
Execute sql
ReleaseSubscription.connection.execute("insert into transaction_queues...")
Use AR inside of a transaction.
MonthlyTransactionQueue.transaction do
  # I'm unsure what Release.released is and how it relates, but this should work other than that.
  MonthlySubscription.where(released: false).each do |sub|
    MonthlyTransactionQueue.create(sub.attributes)
  end
end
This creates multiple insert statements but runs them all in the same transaction.
Another good option would be to dump everything that matches your query into a SQL file and use LOAD DATA INFILE (or your database's bulk-load equivalent) to add everything at once in SQL.
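If you would rather stay in ActiveRecord and still keep the release_id from the join, here is a rough sketch (it assumes Rails 6+ for insert_all and that the transaction_queues columns line up with what the query selects):
rows = ReleaseSubscription
         .joins(project: :releases)
         .where(releases: { released: false })
         .select("subscriptions.*, releases.id AS release_id")

# One bulk INSERT instead of one INSERT per row (skips validations and callbacks).
attributes = rows.map(&:attributes)
MonthlyTransactionQueue.insert_all(attributes) if attributes.any?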
I'm considering converting an app from php/MySQL to web2py (and MySQL or Postgres). The only SQL code in the php codebase for this app are calls to stored procedures...no SELECTs, no INSERTs, etc., in the php codebase. All SQL source in the php codebase is on the order of "CALL proc_Fubar(args...);"
How do I tell web2py, "Here's my INSERT stored procedure; here's my SELECT..."? I know I can executesql, but how about the returned rowset from a SELECT...I'd like to have that data returned as if it were the results of a web2py query from a table.
Yes, I know. I'm trying to get all the neat stuff that web2py does without keeping up my end of the bargain (by defining my SQL as web2py wants to see it).
You might try the following. First, define a model that matches the fields returned by your stored procedure (set migrate=False so web2py doesn't try to create that table in the db).
db.define_table('myfaketable', ..., migrate=False)
Then do:
raw_rows = db.executesql('[SQL code to execute stored procedure]')
rows = db._adapter.parse(raw_rows,
                         fields=[field for field in db.myfaketable],
                         colnames=db.myfaketable.fields)
Goal: Using a CRON task (or other scheduled event) to update database with nightly export of data from an existing system.
All data is created/updated/deleted in an existing system. The website does not directly integrate with this system, so the Rails app simply needs to reflect the updates that appear in the data export.
I have a .txt file of ~5,000 products that looks like this:
"1234":"product name":"attr 1":"attr 2":"ABC Manufacturing":"2222"
"A134":"another product":"attr 1":"attr 2":"Foobar World":"2447"
...
All values are strings enclosed in double quotes (") that are separated by colons (:)
Fields are:
id: unique id; alphanumeric
name: product name; any character
attribute columns: strings; any character (e.g., size, weight, color, dimension)
vendor_name: string; any character
vendor_id: unique vendor id; numeric
Vendor information is not normalized in the current system.
What are best practices here? Is it okay to delete the products and vendors tables and rewrite with the new data on every cycle? Or is it better to only add new rows and update existing ones?
Notes:
This data will be used to generate Orders that will persist through nightly database imports. OrderItems will need to be connected to the product ids that are specified in the data file, so we can't rely on an auto-incrementing primary key to be the same for each import; the unique alphanumeric id will need to be used to join products to order_items.
Ideally, I'd like the importer to normalize the Vendor data
I cannot use vanilla SQL statements, so I imagine I'll need to write a rake task in order to use Product.create(...) and Vendor.create(...) style syntax.
This will be implemented on EngineYard
I wouldn't delete the products and vendors tables on every cycle. Is this a Rails app? If so, there are some really nice ActiveRecord helpers that would come in handy for you.
If you have a Product active record model, you can do:
p = Product.find_or_initialize_by_identifier(<id you get from file>)
p.name = <name from file>
p.size = <size from file>
etc...
p.save!
The find_or_initialize will look up the product in the database by the id you specify, and if it can't find it, it will create a new one. The really handy thing about doing it this way is that ActiveRecord will only save to the database if any of the data has changed, and it will automatically update any timestamp fields you have in the table (updated_at) accordingly. One more thing: since you would be looking up records by the identifier (id from the file), I would make sure to add an index on that field in the database.
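A minimal migration sketch for that index (the identifier column name is an assumption about your schema):
class AddIdentifierIndexToProducts < ActiveRecord::Migration
  def change
    # Unique index so lookups by the external id stay fast and duplicates are rejected.
    add_index :products, :identifier, unique: true
  end
end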
To make a rake task to accomplish this, I would add a rake file to the lib/tasks directory of your rails app. We'll call it data.rake.
Inside data.rake, it would look something like this:
namespace :data do
  desc "import data from files to database"
  task :import => :environment do
    file = File.open(<file to import>)
    file.each do |line|
      attrs = line.split(":")
      p = Product.find_or_initialize_by_identifier(attrs[0])
      p.name = attrs[1]
      # etc...
      p.save!
    end
  end
end
Then to call the rake task, use "rake data:import" from the command line.
Since Products don't really change that often, the best approach would be to update only the records that change.
Get all the deltas
Mass update using a single SQL statement
If you have your normalization code in the models, you could use Product.create and Vendor.create; otherwise it would just be overkill. Also, look into inserting multiple records in a single SQL transaction, it's much faster (there's a rough sketch after the steps below).
Create an importer rake task that is cronned
Parse the file line by line using FasterCSV or vanilla Ruby like:
file.each do |line|
  products_array = line.split(":")
end
Split each line on the ":" and push it into a hash
Use a find_or_initialize to populate your db such as:
Product.find_or_initialize_by_name_and_vendor_id("foo", 111)
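Putting the normalization and single-transaction points together, a rough sketch of the import (the identifier column, the vendor association, and the products.txt path are assumptions about your schema):
# One transaction around the whole import keeps it consistent and is much faster
# than committing each row separately.
ActiveRecord::Base.transaction do
  File.open("products.txt").each do |line|
    # Strip the surrounding double quotes from each colon-separated field.
    id, name, attr1, attr2, vendor_name, vendor_id =
      line.strip.split(":").map { |field| field.delete('"') }

    # Normalize vendors: one Vendor row per external vendor id.
    vendor = Vendor.find_or_initialize_by_identifier(vendor_id)
    vendor.name = vendor_name
    vendor.save!

    product = Product.find_or_initialize_by_identifier(id)
    product.name = name
    product.vendor = vendor
    # attr1 / attr2 would map to your attribute columns (size, weight, etc.).
    product.save!
  end
end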