I have a school assignment: a Dog Show.
My assignment is to create a website where visitors can view results, and where judges and the secretary can administer the data and perform CRUD operations.
I have a small problem with one part of the assignment: a result should be based on decisions from two different judges, and then checked by the secretary before it is displayed to the user.
I have to say I'm fairly new to programming, and so I need some smart suggestions on how to design and implement this. The assignment should cover both a DB and C# (.NET MVC).
Q1: How do I create an object (a result) that depends on two other objects (the judges' decisions)? Is that even needed?
Q2: How do I model this in a relational DB?
I don't think this would be hard to solve using a relational DB. I'd suggest that you consider each table in the database as representing an entity (class) in your object model. Some entities you might want to consider: Dog Show, Dog, Entry, Judgement, Result, Judge, Secretary (Judge and Secretary might both be an Official). According to your definition, each Entry would have two Judgements (so you have a one-to-many relationship there), but each Entry has only one Result. You might use code or a database constraint (or both) to ensure that a Result for an Entry isn't created until there are two Judgements for that Entry. Likewise, you might use code or a constraint to ensure that no more than two Judgements are entered for each Entry.
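If it helps, here is a rough, hedged sketch of how those suggested entities might look as C# model classes in your MVC project; every class and property name here is an assumption to adapt to your own design:

using System.Collections.Generic;

// Hypothetical model classes mirroring the suggested tables.
public class Entry
{
    public int Id { get; set; }
    public int DogId { get; set; }       // FK to the Dog being shown
    public int DogShowId { get; set; }   // FK to the Dog Show

    // Each Entry has up to two Judgements and at most one Result.
    public ICollection<Judgement> Judgements { get; set; } = new List<Judgement>();
    public Result Result { get; set; }   // stays null until a Result is created
}

public class Judgement
{
    public int Id { get; set; }
    public int EntryId { get; set; }
    public int JudgeId { get; set; }     // FK to the Official acting as judge
    public int Score { get; set; }       // assumed scoring field

    public Entry Entry { get; set; }
}

public class Result
{
    public int Id { get; set; }
    public int EntryId { get; set; }     // make this unique: one Result per Entry
    public int FinalScore { get; set; }  // assumed to be derived from the two Judgements
    public bool ApprovedBySecretary { get; set; }

    public Entry Entry { get; set; }
}

A unique index on Result.EntryId would enforce the "one Result per Entry" rule in the database, while the "no more than two Judgements" rule is easiest to enforce in code, as described above.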
Hope this helps get you started.
How do I create an object (a result) that depends on two other objects (the judges' decisions)? Is that even needed?
I suggest that you create the result object when you create the second decision object.
The pseudocode might be something like this: when a judge tries to create a new decision, see how many decisions already exist for that entry (a rough C# sketch follows this list):
case 0: this is the first decision; just create the new decision and return
case 1: this will be the second decision; create the new decision, and then create the result based on the two decisions
case 2 or more: two decisions already exist, so don't allow this further decision to be created.
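A C# sketch of that flow might look like the following; the _db context, the AddJudgement method, and the averaging rule are all assumptions for illustration, not a prescribed implementation:

// Hypothetical service method, assuming the Entry/Judgement/Result classes
// sketched earlier and some _db data context (Entity Framework style).
public void AddJudgement(int entryId, int judgeId, int score)
{
    var existing = _db.Judgements.Where(j => j.EntryId == entryId).ToList();

    switch (existing.Count)
    {
        case 0:
            // First decision: just create it.
            _db.Judgements.Add(new Judgement { EntryId = entryId, JudgeId = judgeId, Score = score });
            break;

        case 1:
        {
            // Second decision: create it, then create the Result from both decisions.
            var second = new Judgement { EntryId = entryId, JudgeId = judgeId, Score = score };
            _db.Judgements.Add(second);
            _db.Results.Add(new Result
            {
                EntryId = entryId,
                FinalScore = (existing[0].Score + second.Score) / 2, // assumed rule
                ApprovedBySecretary = false  // the secretary checks it before it is shown
            });
            break;
        }

        default:
            // Two decisions already exist: don't allow a third.
            throw new InvalidOperationException("This entry already has two judgements.");
    }

    _db.SaveChanges();
}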
Another (perhaps not so good) possibility is to have a separate "create results" process, which runs continually (not continuously: e.g., once every minute), looking for any already-created decision-pairs for which there's no corresponding result, and creating the corresponding result.
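If you did go that route, a hedged sketch of the periodic pass (again assuming the hypothetical _db context from above) could be:

// Run e.g. once a minute from a timer or background job.
public void CreateMissingResults()
{
    // Find entries that already have both judgements but no result yet.
    var pending = _db.Entries
        .Where(e => e.Judgements.Count == 2 && e.Result == null)
        .ToList();

    foreach (var entry in pending)
    {
        var scores = entry.Judgements.Select(j => j.Score).ToList();
        _db.Results.Add(new Result
        {
            EntryId = entry.Id,
            FinalScore = (scores[0] + scores[1]) / 2,  // assumed rule
            ApprovedBySecretary = false
        });
    }

    _db.SaveChanges();
}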
So I've just worked through the tutorial and I'm unclear about a few things. The main one, however, is: how do you decide when something should be a relationship and when it should be a node?
For example, in the Movies database there is a relationship showing who acted in which film. A property of that relationship is the Role. BUT what if it's a series of films? The role may well be constant between films (say, Jack Ryan in The Hunt for Red October, Patriot Games, etc.).
We may also want to have some kind of character bio, which would obviously remain constant between movies. Worse, the actor may change from one movie to another (Alec Baldwin, then Harrison Ford). There are many others like this (James Bond, for example).
Even if the actor doesn't change (the main roles in Harry Potter), the character is constant. So at what point should the Role become a node in its own right? When it does, can I have a three-way relationship (Actor-Role-Movie)? And say I start off with it being a relationship and then, down the line, decide it should have been a node: is there a simple way to go through the database and convert it?
No, there is no way to convert your data model. When you start your own database, first take time to find a fitting schema. There is no single ideal way to create a schema, and many different models can fit the same situation without being totally wrong.
My strategy is to put less information on the relationship itself. I only add properties that directly concern the relationship and store all the other data in the nodes. Also think of properties you could use for traversing the graph. For example, you might need some flags, or even different types for relationships that are more or less the same. apoc.algo.aStar only includes the relationship types you specify (you can effectively exclude certain nodes by connecting them with a special relationship type). So keep that in mind, and take a look at the procedures you might use later.
Try to keep the schema as simple as possible, and stay consistent about which things are nodes and what deserves to be a relationship. Don't mix it up.
Choose a design that makes sense for you! Consider (device 1)-[cable]-(device 2) vs. (device 1)-[has cable]-(cable)-[has cable]-(device 2): in this case I'd prefer the first, because [has cable] wouldn't add any more information. Irrespective of what I wrote above, I would put a lot of information in this [cable] relationship, but that totally makes sense to me because I wouldn't want to search a device node for cable information.
For your example, giving the role its own node is also a valid way to go. If, for instance, you specifically want to query which actors had a role in common, I'd definitely give the role its own node.
Summary:
Think of what you want to do with the data and choose the easiest model.
I have a problem where I'm having a hard time figuring out how to lay out the relationships in Core Data. I tried to visualize the problem below.
Basically in my application users have STATEMENTS that are made of TERMS.
One or more TERMS make up a STATEMENT [1]. Users can also tap on a term [2] and create another statement that's connected to this term [3]. Once they solve this sub-statement [4], they can go back to the main statement [5], highlight another term, and so on.
They should be able to go deeper than one level if need be: say, select another term in a sub-statement and create yet another statement, and so on.
I am not sure how to create such a schema in Core Data.
I already have my TERM and STATEMENT entities, and I initially separated STATEMENT and SUBSTATEMENT into two different entities, but I am no longer sure that this is a good approach.
I think a more efficient approach would be to store every statement in just one entity and have a relationship with TERM, but I am not sure how to model the levels.
I would appreciate any directions.
In my quest to understand Mnesia, I still struggle with thinking in relational terms. So I will put my struggles up here and ask for the best way to solve them.
One-to-many relations
Say I have a bunch of people,
-record(contact, {name, phone}).
Now, I know that I can define phone to always be saved as a list, so people can have multiple phone numbers, and I suppose that's the way to do it (is it? And how would I then look this up the other way around, say, find the name for a given number?).
Many-to-many relations
Now let's suppose I have multiple groups I can put people in. The group names don't have any significance; they are just names. The concept is like "Unix system groups" or "labels". Naively, I would model this membership as a proplist, like
{groups, [{friends, bool()}, {family, bool()}, {work, bool()}]} %% and so on...
as a field within the "contact" record from above, for example. What is the best way to model this within Mnesia if I want to be able to quickly look up all members of a group based on its name, and also look up all the groups an individual is registered in? I could also just model this as a list containing only the group identifiers, of course. For use with Mnesia, what is the best way to model this?
I apologize if this question is dumb. There's plenty of documentation on mnesia, but it's lacking (IMO) some good examples for the overall use.
For the first example, consider this record:
-record(contact, {name, phone = []}). %% phone holds a list of numbers: [PhoneNumber1, PhoneNumber2, ...]
contact is a record with two fields, name and phone, where phone is a list of phone numbers. As user425720 said, it could make sense to store these as something other than strings, for example if you have extreme requirements for a small storage footprint.
Now here comes the part that is hard to "get" with key-value stores: you also need to store the inverse relationship. In other words, you need something similar to the following:
-record(phone, {phonenumber, contactname}).
If you have a layer in your application to abstract away database handling, you could make it always add/change the phone records when adding/changing a contact.
--
For the second example, consider these two records:
-record(contact, {uuid, name, group_ids = []}).   %% group_ids is a list: [GroupId1, GroupId2, ...]
-record(group, {uuid, name, contact_ids = []}).   %% contact_ids is a list: [ContactId1, ContactId2, ...]
The easiest way is to just store ids pointing to the related records. As Mnesia has no concept of referential integrity, this can become out of sync if you for example delete a group without removing that group from all users.
If you need to store the type of group on the contact record, you could use the following:
-record(contact, {name, groups = []}). %% e.g. [{family, [GroupId1, GroupId2]}, {work, [...]}]
--
Your second problem could also be solved by using an intermediate record, which you can think of as a "membership".
-record(contact, {uuid, name, ...}).
-record(group, {uuid, name, ...}).
-record(membership, {contact_uuid, group_uuid}). %% must use the 'bag' table type
There can be any number of "membership" records: there will be one record for every group a user belongs to.
First of all, you ask for key-value store design patterns. Perfectly fine.
Before I try to answer your question, let's make clear what Mnesia is. It is a key-value DB which is included in OTP. Because it is native, it is very comfortable to use from Erlang. But be careful: this is an old database with very ancient assumptions (e.g., data distribution with linear hashing). So go ahead, learn and play with it, but for production take your time and browse the NoSQL shop to find the best fit for your needs.
Regarding the telephone example: do not store the numbers as strings (list()) - that is very heavy on the GC. I would create a couple of fields like phone_1 :: binary(), phone_2 :: binary(), phone_extra :: [binary()] and build an index on the most frequently queried field. Also, Mnesia indices are tricky - when a node crashes and comes back up, they need to rebuild themselves (which can take an awfully long time).
Regarding the family/groups example: it is quite hard with a flat namespace. You could play with more complex keys. Maybe create a separate table for the group and keep the identifiers of its members? Or each member could hold the ids of the groups it belongs to (hard to maintain). If you want to recognize friends, I would enforce some sort of contract before presenting the data (A is B's friend iff B is A's friend) - this approach copes with eventual consistency and conflicts in the data.
I'm working on an RoR app, but this is a general question of strategy for OOP. Consider the case where there are multiple types of references that you store data for: Books, Articles, Presentations, Book Chapters, etc. Each type of reference is part of a hierarchy where common behaviors sit at the most general point of inheritance, and at the DB level I am using single-table inheritance. The type is set by a select option, so let's say I was entering the data as if it were a Book, but then realized that it is only a chapter. So I change the type of reference by selecting "Book Chapter", which then posts an update to the existing model/form. The question is: what is the correct strategy for handling this?
On the one hand, it seems preferable to transform the existing record in the DB to avoid id exhaustion and potentially save on the operations for creating/deleting records. This, however, tends to make the update strategy complex.
On the other hand, it seems more in keeping with general object orientation to create a new object (and record), using the old object to initialize the values that you want to persist, and then delete the old object. This, I think, makes more sense in terms of an object space (heap) and is more aligned with ideas like those of general systems.
However, I haven't nailed this down, and after sitting on it for a while, I'm pitching it to this community to see what the "right" way to do this is.
Prefer immutable objects - in other words, the second strategy. Your objects may not be immutable by themselves, but reducing mutability is often a step in the right direction.
Besides that, this is the more natural way. In general OOP terms there's no way to change the type of an object. In your situation you can, but it's still an awkward and unusual thing to do.
On the other hand, if your objects are represented by the same (identical) class and changing the type is done by setting a high-level property, one could argue that re-creating the object is overkill.
Still, reducing mutability is a good thing, but if your class is already designed to be mutable, it might not be worth it. (That's the special case where there's actually only one class from the language's point of view.)
The transformation you're talking about doesn't seem to warrant a new record or even a new object.
Each of the entries you cited has the same form. They are blocks of text with siblings, parents, and children. A chapter may have a block of text, with a parent book and a child endnote, for example. They are differentiated at the DB level only by their type, which itself could be a field.
All you need is a model that handles these 'elements' differently depending on whether an element is flagged as a book, a chapter, etc. If an element is flagged as a chapter yet has no parent, for example, then you might flag it as a 'book' when it is saved to the DB.
Changing the way an element is flagged doesn't change the element; it only changes the way it is viewed. So long as the element knows how to find its children, the data will compute in the same way. As far as the model is concerned, it's just an element you're worrying about. The rest is done in the UI.
I'm calling an update SPROC from my DAL, passing in all(!) fields of the table as parameters. For the biggest table this is a total of 78 parameters.
I pass all these parameters even if only one value has changed.
This seems rather inefficient to me, and I wondered how to do it better.
I could define all parameters as optional and only pass the ones that changed, but my DAL does not know which values changed, because I'm just passing it the model object.
I could do a SELECT on the table before updating and compare the values to find out which ones changed, but that is probably far too much overhead as well(?).
I'm kind of stuck here... I'm very interested in what you think of this.
Edit: I forgot to mention that I'm using C# (Express Edition) with SQL Server 2008 (also Express). I wrote the DAL "myself" (using this article).
It's maybe not the latest state-of-the-art way of doing it (it's from 2006, "pre-LINQ" so to speak, but LINQ works only against local SQL instances in Express anyway), but my main goal was learning C#, so I guess this isn't too bad.
If you can change the DAL (without the changes being discarded once the layer is "regenerated" from the new schema when schema changes are made), I would recommend passing a structure containing the changed columns with their values, and a structure containing the key columns and values for the update.
This can be done using hashtables, and if the schema is known, it should be fairly easy to handle this in the "new" update function.
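A minimal sketch of what such a call could look like in C# (the Update method, table name, and column names are all placeholders, not part of the original suggestion):

using System.Collections.Generic;

// One dictionary holds only the changed columns with their new values,
// the other holds the key columns used in the WHERE clause.
var changed = new Dictionary<string, object>
{
    ["Title"] = "New title",   // hypothetical column
    ["Price"] = 12.50m
};
var keys = new Dictionary<string, object>
{
    ["BookId"] = 42            // hypothetical key column
};

// The reworked DAL method would then only touch the columns in 'changed'.
myDal.Update("Books", changed, keys);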
If this is an automated DAL, this is one of the drawbacks of using generated DALs.
You could implement journalized change tracking in your model objects. This way you could keep track of any changes in your objects by saving the previous value of a property every time a new value is set. This information could be stored in one of two ways:
As part of each object's own private state
Centrally in a "manager" class.
In the first solution, you could easily implement this functionality in a base class and have it run in all model objects through inheritance.
In the second solution, you need to create some kind of container class that keeps a reference and a unique identifier for every model object that is created and records all changes to its state in a central store. This is similar to the way many ORM (Object-Relational Mapping) frameworks achieve this kind of functionality.
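As a hedged sketch of the first option (the names here are invented, not a specific framework's API), a base class could journal which properties changed and what their original values were:

using System.Collections.Generic;

// Base class for model objects that journals property changes.
public abstract class TrackedModel
{
    // Property name -> original value before the first change.
    private readonly Dictionary<string, object> _changes = new Dictionary<string, object>();

    // Call this from every property setter.
    protected void SetValue<T>(ref T field, T value, string propertyName)
    {
        if (!EqualityComparer<T>.Default.Equals(field, value))
        {
            if (!_changes.ContainsKey(propertyName))
                _changes[propertyName] = field;   // remember the previous value once
            field = value;
        }
    }

    // The DAL only needs the keys to know which columns to include in the UPDATE.
    public IDictionary<string, object> PendingChanges { get { return _changes; } }

    public void AcceptChanges() { _changes.Clear(); }
}

// Example model using the base class; the properties are invented.
public class Book : TrackedModel
{
    private string _title;
    private decimal _price;

    public string Title { get { return _title; } set { SetValue(ref _title, value, "Title"); } }
    public decimal Price { get { return _price; } set { SetValue(ref _price, value, "Price"); } }
}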
There are off the shelf ORMs that support these kinds of scenarios relatively well. Writing your own ORM will leave you without many features like this.
I find the "object.Save()" pattern leads to this kind of behavior, but there is no reason you need to follow that pattern (while I'm not personally a fan of object.Save(), I feel like I'm in the minority).
There are multiple ways your data layer can know what changed, and most of them are supported by off-the-shelf ORMs. You could also potentially make the UI and/or business layers smart enough to pass that knowledge into the data layer.
Two options that I prefer:
Generating or hand-coding update methods that only take the set of parameters that tend to change.
Generating the update statements completely on the fly (a rough sketch of this second option follows below).
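For the second option, a hedged ADO.NET-style sketch; it assumes table and column names come from trusted code (your schema metadata), not from user input, and the caller supplies the connection:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

public static class DynamicUpdateBuilder
{
    // Builds "UPDATE [table] SET col = @col, ... WHERE key = @key_key AND ..."
    // from only the changed columns.
    public static SqlCommand Build(
        string table,
        IDictionary<string, object> changedColumns,
        IDictionary<string, object> keyColumns)
    {
        var setParts = new List<string>();
        foreach (var col in changedColumns.Keys)
            setParts.Add("[" + col + "] = @" + col);

        var whereParts = new List<string>();
        foreach (var col in keyColumns.Keys)
            whereParts.Add("[" + col + "] = @key_" + col);

        var sql = "UPDATE [" + table + "] SET " + string.Join(", ", setParts.ToArray()) +
                  " WHERE " + string.Join(" AND ", whereParts.ToArray());

        var cmd = new SqlCommand(sql);
        foreach (var kv in changedColumns)
            cmd.Parameters.AddWithValue("@" + kv.Key, kv.Value ?? DBNull.Value);
        foreach (var kv in keyColumns)
            cmd.Parameters.AddWithValue("@key_" + kv.Key, kv.Value ?? DBNull.Value);

        return cmd;
    }
}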