Ruby on Rails - Strategy for saving a random invoice #

So, the situation is like this:
I have a table called invoice with a column called invoice #, and I have to generate a complex invoice number every time before a record is saved to the database.
I also have another table called invoice_item, which stores a collection of items; each item also has an invoice # column to declare which invoice the item belongs to.
There is no limit on how many items one invoice can contain.
Now, I have 2 strategies to achieve this:
1. I have a function called generateCode() that returns a random invoice #. I will put it in application_controller.rb, and every time we try to insert a new invoice, the create method in invoice_controller will generate a new invoice # and pass the value to all the invoice_items that belong to the invoice.
2. I will use the ActiveRecord callback after_initialize, so when we instantiate a new invoice, the invoice # will also be created, and it will be easy to pass the value to the item list.
The second way seems more logical, but could it lead to a performance problem? Also, on most e-commerce websites the user usually gets their invoice # after they submit the shopping list. So I would like to know what the typical way to handle this kind of problem is and, more importantly, why.
Thanks

after_initialize runs every time you instantiate an ActiveRecord object, meaning that when you retrieve invoice_items from the database they would get new invoice numbers (not what you want).
The callback you should use is before_create, which will fire on new objects before they are saved to the database.
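For illustration, here is a minimal sketch of the before_create approach. All names here (generate_invoice_number, invoice_number, assign_invoice_number) are assumptions, not taken from the question, and the number format is just an example:

```ruby
require "securerandom"

# Hypothetical generator: a hard-to-guess invoice number such as
# "INV-20240101-AB12CD". Plain Ruby, so it can live in the model.
def generate_invoice_number
  "INV-#{Time.now.strftime('%Y%m%d')}-#{SecureRandom.hex(3).upcase}"
end

# In the model it would be wired up roughly like this (not run here):
#
#   class Invoice < ApplicationRecord
#     has_many :invoice_items
#     before_create :assign_invoice_number
#
#     private
#
#     def assign_invoice_number
#       self.invoice_number ||= generate_invoice_number
#     end
#   end
```

Because the callback fires only once, just before the first save, retrieving existing invoices never regenerates their numbers.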

Related

Track counts of new and updated records separately

Currently working on a project where any admin can import an xlsx product sheet into Active Record. I've developed it so that the xlsx parser hands each unique product row to a job, which either updates an existing product or creates a new one based on the attributes given.
I would like to keep track of the count of products either updated or created per sheet imported, assets added, etc., to display in the admin panel.
The method I use now is simply creating events, with an associated product id, inside a save conditional, which I then count up and display after the import is done:
if product.save
  product.events.new(payload: 'save')
end
The problem with this technique is that I can't differentiate between whether the product is new or has been updated.
Are there better techniques for counting saved products while differentiating between updated and new ones?
TL;DR
Importing products into Active Record (one job per product) from an Excel sheet. What are the best practices/techniques for keeping separate counts of new and updated records per import?
You have several choices here:
A simple option, as per my comment, is simply to check the created_at and updated_at columns after the record is saved. If they're equal, it's a new record, if not, it means the record already existed and was updated. You would have something along these lines:
if product.created_at == product.updated_at
  new_product_count += 1
else
  updated_product_count += 1
end
However, there might be better ways to do this. Just as an example: If I understand correctly you keep track of the number of saved products by creating a new 'save' event. You could instead have two types of events: created and updated. (This would have the added benefit of allowing you to count how many times a product has been updated since it was created)
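To make the created/updated distinction concrete, here is a plain-Ruby simulation of that idea (no Rails involved; the class is a toy stand-in for an ActiveRecord model). The key point carries over directly: in ActiveRecord you would check product.new_record? *before* calling product.save, since it flips to false after the first save:

```ruby
# Toy stand-in for an ActiveRecord model: new_record? is true until
# the first save, so we capture it before saving and tag the event.
class Product
  attr_reader :events

  def initialize
    @persisted = false
    @events = []
  end

  def new_record?
    !@persisted
  end

  def save
    was_new = new_record?
    @persisted = true
    @events << (was_new ? "created" : "updated")
    true
  end
end

product = Product.new
product.save   # first save records a "created" event
product.save   # any later save records an "updated" event
```

Counting each event type after the import then gives the two totals directly.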
I don't know if this can help you, but in these cases I use the persisted? method.
person = Person.new(id: 1, name: 'bob')
person.persisted? # => false

Can Rails' "Validates Uniqueness" override an existing record with new one?

I have three time slots per day, and a bunch of candidate posts for publication. I'm making a little tool that lets us arrange candidates into slots for the next few weeks to see what the schedule might look like. We often have to rearrange, dragging one candidate from its current time slot to another one.
I have a model called PublishTime that pairs a candidate_id and a datetime. I know that I can set up the model to validate the uniqueness of both candidate_id and datetime, which would preserve uniqueness by preventing the creation of a new record that has an existing candidate_id or an existing datetime.
What I'd like to do instead is to preserve the uniqueness by deleting any existing records that have the candidate_id or datetime of the new record I'm creating. I'd like the new record to override any existing records. Is there a built-in way to do this?
I believe you can use find_or_initialize_by, which gives you upsert-like behaviour: if no matching record exists, you initialize a new one; if one already exists, you load and update it.
For example:
scope = params.slice(:candidate_id, :datetime)
publish_time = PublishTime.find_or_initialize_by(scope) do |new_publish_time|
  # optional block if you need to do something with a new record
end
publish_time.assign_attributes(params)
publish_time.save!
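Note that find_or_initialize_by only matches an exact (candidate_id, datetime) pair; it does not evict a record that clashes on just one of the two keys, which is what the question asks for. In ActiveRecord terms that eviction would be a destroy_all of the conflicting rows inside a transaction, followed by create!. Here is a plain-Ruby sketch of the eviction rule itself (the structure and names are assumptions):

```ruby
# Toy model of a schedule slot: a candidate paired with a datetime.
PublishTime = Struct.new(:candidate_id, :datetime)

# Before inserting, drop any entry that shares the candidate OR the slot,
# then add the new pairing. The new record always wins.
def override_slot!(slots, candidate_id, datetime)
  slots.reject! { |s| s.candidate_id == candidate_id || s.datetime == datetime }
  slots << PublishTime.new(candidate_id, datetime)
  slots
end

slots = [
  PublishTime.new(1, "2024-01-01 09:00"),
  PublishTime.new(2, "2024-01-01 12:00")
]
# Candidate 1 leaves its 09:00 slot and evicts candidate 2 from 12:00.
override_slot!(slots, 1, "2024-01-01 12:00")
```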

Include vs Join

I have 3 models
User - has many debits and has many credits
Debit - belongs to User
Credit - belongs to User
Debit and credit are very similar. The fields are basically the same.
I'm trying to run a query on my models to return all fields from debit and credit where the user is current_user:
User.left_outer_joins(:debits, :credits).where("users.id = ?", @user.id)
As expected, this returned all the fields from User repeated as many times as there were records in credits and debits.
User.includes(:credits, :debits).order(created_at: :asc).where("users.id = ?", @user.id)
It ran 3 queries and I thought it should be done in one.
The second part of this question is: how could I add the record type into the query?
As in, records from credits would have an extra field showing "credit", and the same for debits.
I have looked into ActiveRecordUnion gem but I did not see how it would solve the problem here
includes can't magically retrieve everything you want in one query; it will (typically) run one query per model you need to hit. Instead, it eliminates future unnecessary queries. Take the following examples:
Bad
users = User.first(5)
users.each do |user|
  p user.debits.first
end
There will be 6 queries in total here: one to User retrieving all the users, then one for each .debits call in the loop.
Good!
users = User.includes(:debits).first(5)
users.each do |user|
  p user.debits.first
end
You'll only make two queries here: one for the users and one for their associated debits. This is how includes speeds up your application, by eagerly loading things you know you'll need.
As for your comment, yes it seems to make sense to combine them into one table. Depending on your situation, I'd recommend looking into Single Table Inheritance (STI). If you don't go this route, be careful with adding a column called type, Rails won't like that!
First of all, in the first query, by calling the query on the User class you are asking for records of type User, and if you do not want user objects you are performing an extra join, which could be costly (COULD BE, not WILL BE).
If you want credit and debit records, simply run queries on the Credit and Debit models. If you load a user object somewhere prior to this point, use includes / preload / eager_load to load the linked credit and debit records all at once.
There are two ways of pre-loading records in Rails: in the first, Rails performs a single query for each type of record; in the second, Rails performs only one query (with joins) and builds the objects of different types from the data returned.
includes is a smart pre-loader that picks whichever of the two ways it thinks would be faster.
If you want to force Rails to use one query no matter what, eager_load is what you are looking for.
Please read all about includes, eager_load and preload in the article here.

Loading all the data but not from all the tables

I watched this RailsCast http://railscasts.com/episodes/22-eager-loading but I still have some confusion about the best way of writing an efficient GET REST service for a scenario like this:
Let's say we have an Organization table, and there are about twenty other tables with belongs_to and has_many relations to it (so all of those tables have an organization_id field).
Now I want to write GET and INDEX requests, in the form of a Rails REST service, that, based on the organization id passed in the URL, read those tables and fill the JSON, BUT NOT for ALL of those tables, only a few of them: for example the Patients, Orders and Visits tables, not all twenty.
So I still have trouble getting my head around how to write such a
.find(:all)
sort of query.
Can someone show some example so I can understand how to do this sort of queries?
You can include all of those tables in one SQL query:
@organization = Organization.includes(:patients, :orders, :visits).find(1)
Now when you do something like:
@organization.patients
It will load the patients in-memory, since they were already fetched in the original query. Without includes, @organization.patients would trigger another database query. This is why it's called "eager loading": you are loading the organization's patients before you actually reference them (eagerly), because you know you will need that data later.
You can use includes anytime, whether using all or not. Personally I find it to be more explicit and clear when I chain the includes method onto the model, instead of including it as some sort of hash option (as in the Railscast episode).

Rails - How to store each update of an item

I'm not sure how to do this in Rails (maybe it's a common topic, but I'm not even sure the title is correct).
I have a product table with these fields:
Name
Quantity
Price
Size
plus the columns that Rails provides (id, created_at, updated_at).
This table is going to be updated periodically (let's say each day or so), but I want to save the QUANTITY that is being ADDED, and the date/time of the update.
I'm not sure if it's a design problem or something else.
Is there a way rails can handle this?
Thanks in advance
Javier QQ.
Given what you've said in your comments, why don't you just make a new table, say a Stock table. Each stock record has two fields (in addition to the default created_at and updated_at): quantity and item_id.
Whenever you want to update an item with a new quantity, in the update method (or stock method, whatever it is) you do:
Stock.create(:item_id => @item.id, :quantity => params[:quantity])
This also ensures that you know when stock was added, because Rails will automatically keep track of when this Stock was made.
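The point of the Stock table is that quantity is never overwritten: each addition is a new dated row, and the current amount on hand is just a sum (in ActiveRecord, something like @item.stocks.sum(:quantity)). Here is a plain-Ruby sketch of that ledger idea, with made-up data:

```ruby
require "time"

# Toy stand-in for the Stock table: item id, quantity added, and when.
Stock = Struct.new(:item_id, :quantity, :created_at)

# Each restock appends a row instead of mutating a quantity in place,
# so the history of additions is preserved along with its timestamps.
ledger = []
ledger << Stock.new(1, 10, Time.parse("2024-01-01"))
ledger << Stock.new(1,  5, Time.parse("2024-01-02"))

# Current amount on hand for item 1 is the sum of its rows.
on_hand = ledger.select { |s| s.item_id == 1 }.sum(&:quantity)  # => 15
```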
I'm not sure if this is exactly what you're looking for... but you can try the PaperTrail gem. It stores each version of your model, and you can easily step backwards or forwards in time through them to inspect your model and see which fields changed, so it sounds pretty ideal for what you have in mind.
