Rails and understanding "belongs_to" - ruby-on-rails

So I'm pretty new to Rails, and am working on an API that takes POST requests (from a Raspberry Pi) and stores data in the database.
I have 2 models/schemas:
a "Measurement" model, which simply contains 2 floats (humidity and temp for now),
and a "Unit" model. I'm not 100% sure how I want to do this yet, but it will probably just contain an "id" identifying the unit in some way.
Anyway, I want measurements to belong to a unit (so I can reference the units for historical values), i.e. "this Raspberry Pi had these temps over the past 5 hours", or whatever.
How would I want to arrange this?
I imagine I'd at the very least need the "Measurement" model to "belongs_to" the "Unit" model. Am I forgetting anything else, besides the "has_many" on Unit, of course? And how would I go about creating seed data for this?
I want to eventually be able to have an index page for a "Unit" that shows the humidity/temps it has been sent.

A measurements database record will have a unit_id integer field, matching the id primary key field of the units table.
Rails's ActiveRecord expresses this many-to-one relationship by saying Unit.has_many :measurements, and Measurement.belongs_to :unit.
From here, take time to read through your tutorials and let all this soak in before you start coding.
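A minimal sketch of how that could look in practice (the attribute names and seed values here are assumptions based on your description):

# app/models/unit.rb
class Unit < ActiveRecord::Base
  has_many :measurements
end

# app/models/measurement.rb
class Measurement < ActiveRecord::Base
  belongs_to :unit
end

# migration: the foreign key column lives on measurements
class CreateMeasurements < ActiveRecord::Migration
  def change
    create_table :measurements do |t|
      t.float :humidity
      t.float :temp
      t.references :unit, index: true   # creates the unit_id integer column
      t.timestamps
    end
  end
end

# db/seeds.rb -- load with rake db:seed
unit = Unit.create!
5.times do
  unit.measurements.create!(humidity: rand(30.0..60.0), temp: rand(15.0..30.0))
end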

Related

Database design for reservation of computers in laboratories

I want to create an application that enables users to book the computers in a laboratory if they want to use one from a specific time to another. Users can book the computers for the next 15 days. So, how should I design the database for this application?
Start by defining what your entities and attributes will be. It's better if you can do a Conceptual Design first.
Then you design it logically.
For example, your entities might be:
USERS, COMPUTERS, RESERVATIONS.
Your attributes might be:
USERS (SomeUniquePersonalIDnumber, Name, Surname, Email*, PhoneNumber*)
(The first attribute is the primary key; attributes marked with an asterisk (*) are optional.)
COMPUTERS (UniqueComputerSerialNumber, NumberOfComputerInLab)
RESERVATIONS (AutoincrementNumber, UserPK, ComputerPK, DateOfReservation, TimeFrom, TimeTill)
The primary key is composed of the first three attributes, making it unique. The same user might reserve the same computer over time, but the AutoincrementNumber field keeps the composite PK unique.
RESERVATIONS(UserPK) referencing USER(SomeUniquePersonalIDnumber)
RESERVATIONS(ComputerPK) referencing COMPUTERS(UniqueComputerSerialNumber)
Define what types those attribute fields will be (Integer/Varchar/...) based on the query language you want to use.
Translate all of the above into commands to create the database, the tables, etc.
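If you end up building this in Rails, that last step could look roughly like the following migration (a sketch only; the column names mirror the attributes above, and the auto-generated id column plays the AutoincrementNumber role):

class CreateReservations < ActiveRecord::Migration
  def change
    create_table :reservations do |t|
      t.references :user, index: true      # UserPK
      t.references :computer, index: true  # ComputerPK
      t.date :date_of_reservation
      t.time :time_from
      t.time :time_till
    end
  end
end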
Just pick up a piece of paper and start with normalization: https://en.wikipedia.org/wiki/Database_normalization.
If I look for entities in your question, I see the following:
user
computer
booking
Then you need to figure out what properties belong to those entities and the relations between them. You can create an ERD if you like and then start creating Ruby on Rails models.
In Ruby on Rails, a generated model has a created_at datetime field by default. I think you can use that to figure out the specific time and check whether 15 days have passed since the booking was created.
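A rough sketch of those entities as Rails models, with one possible way to enforce the 15-day rule (the starts_at column is an assumption, not something from the question):

class User < ActiveRecord::Base
  has_many :bookings
end

class Computer < ActiveRecord::Base
  has_many :bookings
end

class Booking < ActiveRecord::Base
  belongs_to :user
  belongs_to :computer

  validate :starts_within_fifteen_days

  private

  # reject bookings that start in the past or more than 15 days out
  def starts_within_fifteen_days
    return if starts_at.blank?
    unless starts_at.between?(Time.current, 15.days.from_now)
      errors.add(:starts_at, "must be within the next 15 days")
    end
  end
end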

Rails 4 - Ordering by something not stored in the database

I am using Rails 4. I have a Room model with hour_price, day_price and week_price attributes.
On the index page, users are able to enter different times and dates they would like to stay in a room. Based on these values, I have a helper method that calculates the total price it would cost them, using the price attributes mentioned above.
My question is: what is the best way to sort the rooms and order them from least to greatest in terms of price? I'm having a hard time figuring this out, especially considering the price is calculated by a helper and isn't stored in the database.
You could load all of them and do an array sort, as suggested here and here. That would not scale well, but if you've already filtered down to the rooms that are available, it might be sufficient.
You might be able to push it back to the database by building a custom SQL ORDER BY:
Room.order("(#{days} * day_price) asc")
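For example (a sketch: total_price stands in for your helper, and the availability filter is an assumption):

# in-memory: load the already-filtered candidates, then sort by the computed total
rooms = Room.where(available: true).to_a
sorted = rooms.sort_by { |room| total_price(room, hours, days) }

# or push the work to the database; cast to an integer before interpolating into SQL
Room.order("(#{days.to_i} * day_price) ASC")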

Creating a YAML based list vs. a model in Rails

I have an app that consists mainly of restaurant model instances. One of the essential attributes for these restaurants is the cuisine each falls under. I'm currently at odds with myself over how to design this. On one hand, I thought of creating a Cuisine model and setting up either a has_many :through (HMT) or has_and_belongs_to_many (HABTM) association between Restaurants and Cuisines.
More recently I came across this post, which shows how to create a pre-defined set of attributes. To take that answer one step further, I'm assuming (in my case) I'd add a string-based cuisine column to my restaurant model and set up a select box in my restaurant form that would save the selected value.
What I'm wondering is: which would be the more efficient way of doing this? The goal is to eventually be able to query restaurants based on what cuisine(s) they fall under. I wasn't sure if a model would be the best choice, since it would only serve as a join table of sorts with a name attribute, and I didn't know whether having this extra table for something so minor would be optimal.
On the other hand, I didn't know if using YAML for this would be suitable, since the values would essentially be dummy strings with no tangible records on file like I'd have with model instances. Can someone help me sort out this confusion?
There are many benefits of normalizing many-to-many relationships in the db. Here are some:
Searching, sorting, and creating indexes is faster, since tables are narrower, and more rows fit on a data page.
You can have more clustered indexes (one per table), so you get more flexibility in tuning queries.
Index searching is often faster, since indexes tend to be narrower and shorter.
More tables allow better use of segments to control physical placement of data.
You usually have fewer indexes per table, so data modification commands are faster.
Fewer null values and less redundant data, making your database more compact.
Triggers execute more quickly if you are not maintaining redundant data.
Data modification anomalies are reduced.
Normalization is conceptually cleaner and easier to maintain and change as your needs change.
Also, by normalizing you get the cleaner syntax and other infrastructure benefits from ActiveRecord, e.g.
cuisine.restaurants.where(city: 'Toledo')
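For instance, a has_many :through setup could look like this (the Categorization join model name is just an illustration):

class Restaurant < ActiveRecord::Base
  has_many :categorizations
  has_many :cuisines, through: :categorizations
end

class Cuisine < ActiveRecord::Base
  has_many :categorizations
  has_many :restaurants, through: :categorizations
end

class Categorization < ActiveRecord::Base
  belongs_to :restaurant
  belongs_to :cuisine
end

# querying restaurants by cuisine:
Cuisine.find_by(name: "thai").restaurants.where(city: "Toledo")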

How to create an Order Model with a type field that dictates other fields

I'm building a Ruby on Rails app for a business, backed by an ActiveRecord database. My question is really about database architecture: the best way to organize the different tables and models within my app. The app will hold a database of orders for an e-commerce business that sells products through 2 different channels: a subscription service, where they pick the products and sell them for a fixed monthly fee, and a traditional e-commerce channel, where customers pay for their products directly. So while all of this activity would fall under the Order model, there are two types of orders: subscription orders and regular orders.
Initially I thought I would record all this activity in my Orders table and include a 'Type' field that would indicate whether an order is a subscription order or a regular order. My issue is that there are a bunch of fields that are specific to each type. For instance, transaction_id, batch_id and sub_id are all fields that would only be present if the order type was a subscription, and conversely would be absent if the order type was regular.
My question is: would it be in my best interest to just create two separate tables, one for subscription orders and one for regular orders? Or is there a way for fields to appear only conditionally on the Type field? I would hate to see so many nil values if, for instance, the order type was a regular order.
Sorry this question isn't as technical as it is just pertaining to best practice and organization.
Thanks,
Sunny
What you've described is a pattern called Single Table Inheritance: having one table store data for different types of objects with different behavior.
Generally, people will tell you not to do it, since it leads to a lot of empty fields in your database, which will hurt performance in the long term. It also just looks gross.
You should probably instead store the data in separate tables. If you want to get fancy, you can try to implement Class Table Inheritance, in which there are actually separate but connected table for each of the child classes. This isn't supported natively by ActiveRecord. This gem and this gem might be able to help you, but I've never used either, so I can't give you a firm recommendation.
I would keep all of my orders in one table. You could create a second table for "subscription order information" that contains only the columns transaction_id, batch_id and sub_id, plus a foreign key to link each row back to the main orders table. You would still want an order type column in the main table, though, to make debugging a little easier.
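In code, that split might look like this (a sketch; the names are assumptions):

class Order < ActiveRecord::Base
  has_one :subscription_detail  # present only for subscription orders
end

class SubscriptionDetail < ActiveRecord::Base
  belongs_to :order  # order_id foreign key, plus transaction_id, batch_id, sub_id
end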
Assuming you're using Postgres, I might lean towards an hstore for that.
Some reading:
http://www.devmynd.com/blog/2013-3-single-table-inheritance-hstore-lovely-combination
https://github.com/devmynd/hstore_accessor
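A minimal migration sketch for that, assuming Postgres and Rails 4's built-in hstore support:

class AddDetailsToOrders < ActiveRecord::Migration
  def change
    enable_extension "hstore"             # one-time setup per database
    add_column :orders, :details, :hstore
  end
end

# usage -- hstore stores string keys and values:
# order.details = { "transaction_id" => "abc123", "batch_id" => "42" }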
Make an integer column called order_type.
In the model do:
SUBSCRIPTION = 0
ONLINE = 1
...
It'll query better than strings, and whenever you want to reference one you use Order::SUBSCRIPTION.
Make two or more other tables, each with a foreign key equal to the id of the corresponding row in orders.
Now you can keep all shared data in the orders table, for easy querying, and all unique data in the other tables so you don't have bloated models.
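A sketch of that approach (note that Rails 4.1+ also ships an enum macro that does this bookkeeping for you):

class Order < ActiveRecord::Base
  SUBSCRIPTION = 0
  ONLINE = 1

  has_one :subscription_detail  # separate table for subscription-only fields

  scope :subscriptions, -> { where(order_type: SUBSCRIPTION) }
  scope :online,        -> { where(order_type: ONLINE) }
end

Order.where(order_type: Order::SUBSCRIPTION)  # or simply Order.subscriptions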

To normalize or not to normalize user_ids

In my Rails application, I have a variety of database tables that contain user data. Some of these tables have a lot of rows (as many as 500,000 rows per user in some cases) and are queried frequently. Whenever I query any table for anything, the user_id of the current user is somewhere in the query - either directly, if the table has a direct relation with the user, or through a join, if they are related through some other tables.
Should I denormalize the user_id and include it in every table, for faster performance?
Here's one example:
Address belongs to user, and has a user_id
Envelope belongs to user, and has a user_id
AddressesEnvelopes joins an Address and an Envelope, so it has envelope_id and address_id -- it doesn't have user_id, but could get to it through either the envelope or the address (which must belong to the same user).
One common expensive query is to select all the AddressesEnvelopes for a particular user, which I could accomplish by joining with either Address or Envelope, even though I don't need anything from those tables. Or I could just duplicate the user id in this table.
Here's a different scenario:
Letter belongs to user, and has a user_id
Recepient belongs to Letter, and has a letter_id
RecepientOption belongs to Recepient, and has a recepient_id
Would it make sense to duplicate the user_id in both Recepient and RecepientOption, even though I could always get to it by going up through the associations, through Letter?
Some notes:
Objects are never shared between users; an entire hierarchy of related objects always belongs to the same user.
The user owner of objects never changes.
Database performance is important because it's a data-intensive application. There are many queries and many tables.
So should I include user_id in every table so I can use it when creating indexes? Or would that be bad design?
I'd like to point out that it isn't necessary to denormalize if you are willing to work with composite primary keys. A sample for the AddressesEnvelopes case:
user(
#user_id
)
address(
#user_id
, #address_num
)
envelope(
#user_id
, #envelope_num
)
address_envelope(
#user_id
, #address_num
, #envelope_num
)
(the # indicates a primary key column)
I am not a fan of this design if I can avoid it, but considering that you say all these objects are tied to a user, this type of design would make it relatively simple to partition your data (either logically, putting ranges of users in separate tables, or physically, using multiple databases or even machines).
Another thing that would make sense with this type of design is using clustered indexes (in MySQL, the primary key of an InnoDB table is a clustered index). If you ensure that user_id is always the first column in your index, then for each table all data for one user is stored close together on disk. This is great when you always query by user_id, but it can hurt performance if you query by another object (in which case duplication like you suggested may be a better solution).
At any rate, before you change the design, first make sure your schema is already optimized and that you have proper indexes on your foreign key columns. If performance really is paramount, simply try several solutions and benchmark them.
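For instance, a migration that makes sure the foreign keys used in these joins are indexed (table names taken from the examples in the question):

class AddForeignKeyIndexes < ActiveRecord::Migration
  def change
    add_index :addresses, :user_id
    add_index :envelopes, :user_id
    add_index :addresses_envelopes, [:address_id, :envelope_id]
  end
end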
As long as you (a) get a measurable performance improvement, and (b) know which parts of your database are real normalized data and which are redundant improvements, there is no reason not to do it!
Do you actually have a measured performance problem? 500,000 rows isn't a very large table. Your selects should be reasonably fast if they are not very complex and you have proper indexes on your columns.
I would first see if there are slow queries and try to optimize them with indexes. Only if that is not enough would I look into denormalization.
The denormalizations you suggest seem reasonable if you can't achieve the required performance by other means. Just make sure you keep the denormalized fields up to date.
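One way to keep a denormalized user_id from drifting is to copy it from the parent in a callback (a sketch using the Recepient model from the question):

class Recepient < ActiveRecord::Base
  belongs_to :letter
  belongs_to :user

  before_validation :copy_user_id_from_letter

  private

  # the denormalized column is always derived from the owning letter
  def copy_user_id_from_letter
    self.user_id ||= letter.try(:user_id)
  end
end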
