I'm working on implementing a feature with a scenario similar to this:
Company has many Items and each Item has a column called company_item_number.
I'm looking for a way to increment company_item_number as a new item is added to a particular Company, preserving cardinality even after an item is deleted.
Note that this is different from item_id, which auto-increments whenever any item is added by any Company.
For example:
Company A
company_item_number: 1
company_item_number: 2
company_item_number: 3 (removed/deleted)
company_item_number: 4
Company B
company_item_number: 1 (removed/deleted)
company_item_number: 2
As you can see, I also need to make sure that even if the previous item was deleted, the next company_item_number is still one greater than the deleted item's number, preserving cardinality.
Any thoughts on how to do this would be greatly appreciated.
This is complicated by your requirement that cardinality be preserved even when records are deleted. I would personally suggest never actually deleting Item records; instead, add a removed_at timestamp column which you can check to decide if an item was removed or not. Then, your approach gets much simpler (for this task, and likely for many other tasks). With that, you could use a before_create hook, which runs right before your new record is saved into the database, to populate the field on each record, like so:
class Item < ActiveRecord::Base
  before_create do |item|
    # Highest number already used by this company (nil if none exist yet).
    current_highest = Item.where(company_id: item.company_id)
                          .maximum(:company_item_number)
    item.company_item_number = (current_highest || 0) + 1
  end
end
If you don't want to go that route, your companies.last_item_number_count idea seems like a good option. Alternatively, a new company_item_numbers table from which you never remove records would serve a similar purpose. Ultimately, without some record of the deleted items having been present, there's no way to ensure you aren't re-using a number you already used.
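Whichever table ends up holding it, the key property is a counter that only ever moves forward. Here's a plain-Ruby sketch (no ActiveRecord; the class and method names are made up) of why such a counter preserves the numbering across deletes:

```ruby
# Toy illustration: each company keeps a last_item_number that only
# increases, so a deleted item's number is never handed out again.
class Company
  attr_reader :items

  def initialize
    @last_item_number = 0
    @items = {}
  end

  def add_item(name)
    @last_item_number += 1
    @items[@last_item_number] = name
    @last_item_number
  end

  def remove_item(number)
    @items.delete(number)
  end
end

a = Company.new
a.add_item("bed")       # => 1
a.add_item("mattress")  # => 2
a.remove_item(2)
a.add_item("armchair")  # => 3, not 2: the deleted number is never reused
```

In real Rails code you'd want the increment wrapped in a lock or transaction so two concurrent inserts can't read the same counter value.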
Currently working on a project where any admin can import an xlsx product sheet into ActiveRecord. I've developed it so that the xlsx parser hands each unique product row to a job, which either updates an existing product or creates a new one based on the attributes given.
I would like to keep track of the count of products either updated or created per sheet imported, assets added, etc., to display in the admin panel.
The method I use now is simply to create an event with the associated product id inside a save conditional, which I then count up and display after the import is done.
if product.save
product.events.new(payload: 'save')
end
The problem with this technique is that I can't tell whether the product is new or was updated.
Are there better techniques for counting saved products while differentiating between new and updated ones?
TL;DR:
Importing products into ActiveRecord (one job per product) from an Excel sheet. What are the better practices/techniques for keeping count of new and updated records separately per import?
You have several choices here:
A simple option, as per my comment, is simply to check the created_at and updated_at columns after the record is saved. If they're equal, it's a new record; if not, the record already existed and was updated. You would have something along these lines:
if product.created_at == product.updated_at
new_product_count += 1
else
updated_product_count += 1
end
However, there might be better ways to do this. Just as an example: If I understand correctly you keep track of the number of saved products by creating a new 'save' event. You could instead have two types of events: created and updated. (This would have the added benefit of allowing you to count how many times a product has been updated since it was created)
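As a rough plain-Ruby sketch of the two-event idea (Event and the sku key are stand-ins, not your actual models; in Rails you would key this off product.new_record? before saving):

```ruby
# Tally :created vs :updated events per import run.
Event = Struct.new(:payload)

# `store` simulates the products table, keyed by sku (an assumption).
def import(rows, store)
  events = rows.map do |row|
    payload = store.key?(row[:sku]) ? :updated : :created
    store[row[:sku]] = row
    Event.new(payload)
  end
  events.group_by(&:payload).transform_values(&:count)
end

store = {}
import([{ sku: "A" }, { sku: "B" }], store) # => { created: 2 }
import([{ sku: "A" }, { sku: "C" }], store) # => { updated: 1, created: 1 }
```

Persisting the events (instead of just counting in memory) also gives you the per-product update history mentioned above.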
I don't know if this can help you, but in these cases I use the persisted? method.
person = Person.new(id: 1, name: 'bob')
person.persisted? # => false
Consider two tables Foo and Bar and consider models based on them. Now consider a one-to-one relationship between them.
Foo contains has_one :bar in its declaration so that we're able to access Bar from Foo's objects. But then what I don't understand is why Bar needs a foreign key referencing Foo.
Shouldn't it be easier if they just compare both the ids to get the result?
I'm assuming that there will be problems with comparing both ids and I want to know what the problems are.
The problem with ids is that they store auto-incremented values. Let's consider two tables, students and projects, and assume a student can have at most one project; that is, a student either has a project or doesn't.
Now consider 2 students A & B.
students table
id name
1 A
2 B
and the projects table
id name
1 P1
2 NULL
In this case A has a project named P1 but B doesn't, and we're creating a null entry just to keep the projects rows matched up with the students. This is not feasible in the long term: if a school has 1000 students, we might have 500 empty rows for the 500 students who are not working on a project.
That's why adding a column to the projects table is the feasible solution: it reduces the size of the table and maintains the relationship. Note also that if you delete a record, the new id won't be the same as the previous one, since ids are auto-incremented.
Now the projects table
id name student_id
1 P1 1
is more feasible and flexible. You can also make it has_many, since a student can work on multiple projects.
I hope this helps you.
You can't assume that the DB engine will add the same IDs to rows in different tables. You can (I would not recommend) make an app with such behavior and implement it with triggers and constraints, but this would be a very creative (in a negative sense) approach to relational databases.
I'm working on a project built on Rails 4, ActiveRecord and PostgreSQL, and I'm facing a performance dilemma.
For brevity, let's say I have Category & Item models. Category has_many items.
Let's take the example where category 'Furniture' has 'bed, large mattress, small mattress, armchair', etc. While displaying these items under the category, we would intuitively want to see all kinds of mattresses and bed frames together, instead of being lexicographically ordered. Also, let's assume the total number of items under any category is in the order of < 100 (mostly about ~10-15 per category) & so naturally, the order of items falling in the same 'group' under a category would be much lower than that.
To achieve this grouping, one way is to create a SubCategory model and associate items through them, so we can add items of a certain group later on and still be able to show them together by grouping on the category & sub category.
The other way I'm thinking of, since the order of total items is so small, is to add an order (float type) field to the Item model to still be able to group them together (Bed = 5.01, Mattress = 5.02, Chair = 6.01, Bed Cover = 5.03 & so on).
The only reason I'm considering the other option is because we're confident on the number of items to not go beyond even a 100 in our application's scope and so the Sub Category route - creating a new model and persisting many columns vs one - seems like an overkill for this particular case.
So my question (finally!) is this -
What kind of pitfalls might I run into if I went the second route? Moreover, is sorting on a float field with Postgres an overall better tradeoff in speed and memory vs adding a new model to simulate sub-groupings such as in the above example?
I'm trying to run a simple loop that increments an attribute in a database table, and isn't incrementing as expected. Imagine that our application is a daily deals site. I'm using Rails 3.0.1, for the record.
class Deal
has_many :orders
end
class Order
belongs_to :deal
end
An Order also has a "quantity" attribute, for example if someone buys a few of the same thing.
If someone buys a bunch of orders in a shopping cart, we want to tally the total in each deal to track how many we sold of each deal.
#new_orders.each do |order|
order.deal.update_attribute(:order_count, order.deal.order_count + order.quantity)
end
Firstly, ignore the fact that there may be a better way to write this. This is a contrived example in order to get to my point.
Now, imagine that our current case is where someone bought 3 orders of the same deal in the shopping cart (for no good reason - this example is somewhat contrived). And let's say each order had a quantity of 1.
I would have expected, at the end of the loop, the deal's order_count to be 3. But when I run this code, whether in tests, in the browser, or in the console, I get 1.
It seems as if in each iteration of the loop, when it pulls "order.deal" - since each order happens to belong to the same deal, it's pulling the deal from a cache. So let's say before this loop began, the deal's order_count was 0, each time it pulls a cached copy of the deal which has an order_count of 0.
Is this the expected behavior? Is there a way to turn off this type of caching? What's even stranger is that a colleague of mine tried to run a similar loop in his project and got the expected total. Or am I missing something entirely?
Thanks for your help!
You aren't saving the record. After the increment (but within the loop), you need:
order.deal.save
Based on your comment:
#new_orders.each do |order|
order_quantity = order.quantity
order.reload
order.deal.update_attribute(:order_count, order.deal.order_count + order_quantity)
end
This will save the new order quantity in a variable, reload the order from the database and then do the update.
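A way to sidestep the stale in-memory copies entirely (sketched below; OrderStub is a stand-in struct, and the commented Deal.update_counters call assumes your schema) is to tally quantities per deal first, then issue one atomic SQL increment per deal:

```ruby
# Stand-in for the Order model: just the two fields we need here.
OrderStub = Struct.new(:deal_id, :quantity)

orders = [OrderStub.new(1, 1), OrderStub.new(1, 1),
          OrderStub.new(1, 1), OrderStub.new(2, 5)]

# Sum quantities per deal in memory first.
totals = orders.group_by(&:deal_id)
               .transform_values { |group| group.sum(&:quantity) }
# totals => { 1 => 3, 2 => 5 }

# Rails side (sketch): update_counters issues a single
# "UPDATE deals SET order_count = order_count + n WHERE id = ?"
# per deal, so no cached Deal instance is ever read.
# totals.each { |deal_id, n| Deal.update_counters(deal_id, order_count: n) }
```

Because the increment happens in SQL, it also stays correct if another process is updating the same deal concurrently.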
Does anyone know of any method in Rails by which an associated object may be frozen? The problem I am having is that I have an order model with many line items, which in turn belong to a product or service. When the order is paid for, I need to freeze the details of the ordered items so that when the price is changed, the order's totals are preserved.
I worked on an online purchase system before. What you want to do is have an Order class and a LineItem class. LineItems store product details like price, quantity, and maybe some other information you need to keep for records. It's more complicated but it's the only way I know to lock in the details.
An Order is simply made up of LineItems and probably contains shipping and billing addresses. The total price of the Order can be calculated by adding up the LineItems.
Basically, you freeze the data before the person makes the purchase. When items are added to an order, the data is frozen because LineItems duplicate the necessary product information. This way, when a product is removed from your system, you can still make sense of old orders.
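To illustrate the duplication with toy structs (Product and LineItem here are plain-Ruby stand-ins, not ActiveRecord models): the line item copies the price at creation time, so later changes to the product can't reach it.

```ruby
Product  = Struct.new(:name, :price)
LineItem = Struct.new(:product_name, :unit_price, :quantity) do
  def total
    unit_price * quantity
  end
end

def build_line_item(product, quantity)
  # Copy the values; don't keep a live reference to the product.
  LineItem.new(product.name, product.price, quantity)
end

book = Product.new("Book", 10)
item = build_line_item(book, 2)
book.price = 15    # the product's price changes after the order
item.total         # => 20, still computed from the frozen copy
```

In an ActiveRecord version, the copy would typically happen in a before_create callback on the line item.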
You may want to look at a Rails plugin called 'AASM' (formerly, acts_as_state_machine) to handle the state of an order.
Edit: AASM can be found here http://github.com/rubyist/aasm/tree/master
A few options:
1) Add a version number to your model. At the day job we do course scheduling. A particular course might be updated occasionally but, for business-rule reasons, it's important to know what it looked like on the day you signed up. Add a :version_number to the model and a find_latest_course(course_id), alter code as appropriate, stir a bit. In this case you don't "edit" models so much as save a new, updated version. (Then, obviously, your LineItems carry an item_id and an item_version_number.)
This generic pattern can be extended to cover, shudder, audit trails.
2) Copy data into LineItem objects at LineItem creation time. Just because you can slap has_a on anything, doesn't mean you should. If a 'LineItem' is supposed to hold a constant record of one item which appeared on an invoice, then make the LineItem hold a constant record of one item which appeared on an invoice. You can then update InventoryItem#current_price at will without affecting your previously saved LineItems.
3) If you're lazy, just freeze the price on the order object. Not really much to recommend this but, hey, it works in a pinch. You're probably just delaying the day of reckoning though.
"I ordered from you 6 months ago and now am doing my taxes. Why won't your bookstore show me half of the books I ordered? What do you mean their IDs were purged when you stopped selling them?! I need to know which I can get deductions for!"
Shouldn't the prices already be frozen when the items are added to the order? Say I put a widget into my shopping basket thinking it costs $1 and by the time I'm at the register, it costs $5 because you changed the price.
Back to your problem: I don't think it's a language issue, but a functional one. Instead of associating the prices with items, you need to copy the prices. If every item in the order has its own version of a price, future price changes won't affect it, you can add discounts, etc.
Actually, to be clean you need to add versioning to your prices. When an item's price changes, you don't overwrite the price, you add a newer version. The line items in your order will still be associated with the old price.
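As a toy plain-Ruby sketch of that versioning idea (all names here are made up, not a real API): a price change appends a new version rather than overwriting, so anything still holding the old version stays correct.

```ruby
PriceVersion = Struct.new(:version, :amount)

class PricedItem
  attr_reader :price_versions

  def initialize(amount)
    @price_versions = [PriceVersion.new(1, amount)]
  end

  # Append a new version; earlier versions are never mutated.
  def change_price(amount)
    @price_versions << PriceVersion.new(@price_versions.last.version + 1, amount)
  end

  def current_price
    @price_versions.last
  end
end

widget = PricedItem.new(1)
old = widget.current_price   # a line item would keep this reference
widget.change_price(5)       # adds version 2; version 1 is untouched
old.amount                   # => 1
widget.current_price.amount  # => 5
```

In a database this maps to an append-only price_versions table, with line items carrying a price_version_id.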