I am seeking a versatile TDBTree component, and I would like to hear some recommendations. Specifically, I am looking for one that can show a master record and "n" linked table records (I mean records from various tables). For example, the TDBTree would be hooked to a master table, Detail table 1, an Additional table, etc.:
Master Table Record
    Detail Table 1 Record
    Detail Table 1 Record
    Detail Table 1 Record
    Additional Table Record
    Additional Table Record
I am not sure whether this is possible, which is why I am inquiring. Thanks for any recommendations you may be able to provide.
An example would be:
Master Checks
    Check Details
    Account Record
    Bank Record
Look at the Developer Express controls. They have something like what you're looking for: both a grid that can show details "in line" and some DB-aware trees with many capabilities. IMHO, if you're displaying that kind of data, their Master-Detail grid is better than any tree, since you're going to show different data in each detail.
I know this isn't DB-aware, but if you're open to alternatives, then VirtualStringTree is a very good option. I use this tree component to display most of my DB data to the user; it offers a very flexible and speedy tree/grid for any data. The many events it provides make it easy to handle DB updates yourself.
I'm new to Qlik and to scripting, and an overall beginner. I am looking for any help or recommendations for dealing with my tables below. I'm just trying to create a good model to link my tables.
I created a sample file. The original three tables are separate QVD files.
The Transactions table has multiple columns; the main ones are TxnID, SourcePartyTypeID, DestPartyTypeID, SourcePartyType, DestinationPartyType, and ConductorID.
Customers table: CustName, CustID, etc.
Accounts table: AcctID, AcctNum, PrimaryAcctID, etc.
A transaction can relate to multiple CustIDs/AcctIDs, which are linked by the Dest/SourcePartyIDs. A transaction also has source/destination party type fields, where A = Accounts, C = Customers, and some values are NULL.
I have read a lot about data models; a link table (star schema) or a join is recommended, but I am unsure how to code this because the links also depend on the Source/DestinationType fields in the Transactions table (A = Accounts, C = Customers). I have tried to code it but have not been successful.
I'm unsure how to join based on SourceType/DestinationType = Accounts or Customers. A link table, or ApplyMap() with a WHERE clause? Any suggestions?
Hopefully your introduction to Qlik is still a positive one! There are a lot of resources to help you develop your Qlik scripting capabilities including:
Qlik Continuous Classroom (https://learning.qlik.com)
Qlik Community (https://community.qlik.com)
Qlik Product Documentation (https://help.qlik.com)
In terms of your sample data question: if you are creating a Qlik Sense app, you can use the Qlik Data Manager to link your data.
This is excellent because not only will it try to analyse your data and make useful suggestions for linking fields, it will also build the script, which you can then review and use as a basis for developing your own understanding further.
Looking at your sample data, one option might be a simple key field between a couple of the tables. Here is one example of how this could work.
Rod
[Transactions]:
Load
// User generated fields
AutoNumberHash256 ( [DestPartyID], [SourcePartyID] ) As _keyAccount,
// Fields in source data
[TxnID],
[TxnNum],
[ConductorID],
[SourcePartyType],
[SourcePartyID] As [CustID],
[DestPartyType],
[DestPartyID] // etc. (remaining fields)
From [lib://AttachedFiles/TablesExamples.xlsx]
(ooxml, embedded labels, table is Transactions);
[Customers]:
Load
// User generated fields
// Fields in source data
[CustID],
[CustFirstName],
[CustLastName]
From [lib://AttachedFiles/TablesExamples.xlsx]
(ooxml, embedded labels, table is Customers);
[Accounts]:
Load
// User generated fields
AutoNumberHash256 ( [AcctID], [PrimaryAcctID] ) As _keyAccount,
// Fields in source data
[AcctID],
[AcctNum],
[PrimaryAcctID],
[AcctName]
From [lib://AttachedFiles/TablesExamples.xlsx]
(ooxml, embedded labels, table is Accounts);
I have a "catalog" that I am trying to display information on. This information will be pulled from a few different tables that a user will be able to set a preference to hide a record from the respective table on their "catalog". I am running a Postgres database
So, my question is:
Would it be better (performance-wise) to create a new table (table_a_to_catalog) that stores the table_a_id and the catalog_id for each record from table_a that the user wants to hide for that catalog, and then have another table (table_b_to_catalog) to hold that connection, and so on?
OR
Would it be better to store the hide preference as a json value in the record of the catalog? So it would be something like {"table_a" => [id1, id2, id3], "table_b" => [id1, id2, id3]}
It really depends on the use case of this catalog. If the information is read-only and you are running a job once a day to update the catalog, then JSON would be better. However, if you want to update information on the catalog live and allow it to be editable, then having a separate table would be best.
As for personal preference, I think keeping the data in a table allows more flexibility when you want to use the data for other features.
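To illustrate, here is a minimal sketch of the separate-table option; the table and column names are assumptions based on your description:
CREATE TABLE table_a_to_catalog (
    table_a_id integer NOT NULL REFERENCES table_a (id),
    catalog_id integer NOT NULL REFERENCES catalog (id),
    PRIMARY KEY (table_a_id, catalog_id)  -- one row per hidden record
);
-- Fetch the table_a records for one catalog, excluding the hidden ones
SELECT a.*
FROM table_a a
WHERE NOT EXISTS (
    SELECT 1 FROM table_a_to_catalog h
    WHERE h.table_a_id = a.id AND h.catalog_id = 42
);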
Having very large tables negatively impacts performance. Keeping "hide" view data in a Postgres table means having a DB entry for each hidden entry in each catalog. Each client application will need to filter that table for information relevant to its user, and with many users this could take considerable time.
If you simply add a field to the user table containing an hstore, JSON, or CSV representation of the view data (e.g. hide preferences), that will reduce the initial load time marginally. JSON makes more sense if "hiding" means simply not displaying the record client-side, whereas hstore makes more sense if you wish not to send the data to the client in the first place.
I say marginally because many other factors (caching) will impact performance more than this. You may want to look into using Redis for the application runtime and Postgres for data warehousing.
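For reference, a minimal sketch of the JSON variant, assuming PostgreSQL's jsonb type and the preference shape from the question:
-- One jsonb column on the catalog row holds every hide preference
ALTER TABLE catalog ADD COLUMN hidden_ids jsonb NOT NULL DEFAULT '{}';
-- Record the hidden rows for one catalog
UPDATE catalog
SET hidden_ids = '{"table_a": [1, 2, 3], "table_b": [7]}'
WHERE id = 42;
-- The client then filters against this single value
SELECT hidden_ids FROM catalog WHERE id = 42;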
I have found some tutorials, but they still leave me with questions.
Let's take a classic example of 2 tables, one for customer details and one for order details.
The customers table in the database has:
an autoincrementing integer customer_id as primary key
a text field for customer name
a text field for contact details
And the orders table has:
an integer customer_id which is a foreign key referencing the customers table
some other stuff, such as a reference to a bunch of item numbers
an integer order_value to store the cash value of the order
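In SQL terms, the two tables look something like this (a sketch; I use PostgreSQL-style SERIAL for the autoincrement, and the exact types are approximate):
CREATE TABLE customers (
    customer_id     SERIAL PRIMARY KEY,  -- autoincrementing integer
    customer_name   TEXT,
    contact_details TEXT
);
CREATE TABLE orders (
    customer_id  INTEGER REFERENCES customers (customer_id),
    item_numbers TEXT,     -- the "bunch of item numbers"
    order_value  INTEGER   -- cash value of the order
);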
I need two dataset components, two queries and a connection.
So far, so good? Or did I miss something already?
Now, the tutorials say that I have to set the MasterSource of the datasource which corresponds to the DB grid showing the orders table to be the datasource which corresponds to the customers table, and the MasterFields, in this case, to customer_id.
Anything else? Should I, for instance, set the DetailFields of the query of the datasource which corresponds to the customers table to customer_id?
Should I use the properties, or a parameterized query?
OK, at this point we have followed the classic tutorials and can scroll through the customers DB grid and see all orders for the current customer shown in the orders DB grid. When the user clicks the customers DB grid, I have to Close(); then Open(); the orders query to refresh its corresponding DB grid.
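For reference, I assume the parameterized variant of the detail query would look something like this (the parameter name is my guess; the master datasource supplies it from the currently selected customer row):
-- Detail query for the orders DB grid
SELECT *
FROM orders
WHERE customer_id = :customer_id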
However, those tutorials always seem to posit a static database with existing contents which never change.
When I asked another question, I gave an example where I was using a Command to INSERT INTO orders... and was told that that is A Bad Thing and I should:
OrdersQuery.Append();
OrdersQuery.FieldByName('customer_id').Value := [some value];
OrdersQuery.FieldByName('item_numbers').Value := [some value];
OrdersQuery.FieldByName('order_value').Value := [some value];
OrdersQuery.Post();
Is that correct?
I ask because it seems to me that a Command puts data in and a query should only take it out, but I can also see that a Command has no linkage to the DB grid via a datasource, the way a query does.
Is this a matter of choice, or must the query be used?
If so, it seems that I can't use even simple SQL functions such as SUM, MIN, AVG, MAX in the query and would have to move those into my code.
If I must use the query, how do I implement SQL UPDATE and DROP?
And, finally, can I have a Master/Detail/Detail relationship?
Let's say I want a third DB grid which shows the total and average of all orders for a customer. It gets its data from the orders table (but can't use SUM and AVG) and is updated each time the user selects a different customer, thus giving a Master/Detail/Detail relationship. Do I just set that up as two Master/Detail relationships? I.e., the DB grid, datasource, and query for the total and average orders refer only to orders and have no reference to customers, even though they use customer_id?
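The kind of query I have in mind for that third grid, if aggregates were allowed, would be something like:
-- Total and average order value for the currently selected customer;
-- re-open this query whenever the master row changes
SELECT SUM(order_value) AS total_value,
       AVG(order_value) AS average_value
FROM orders
WHERE customer_id = :customer_id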
Thanks in advance for any help and clarification. I hope that this question will become a reference for others in the future (so, feel free to edit it).
TLDR: In the SQL world, Master/Detail is an archaism.
When some people say "Master Detail" they aren't going to go all the way down the rabbit hole. Your question suggests you do want to. I'd like to share a few things that I think are helpful, but I don't see that anyone can really answer your questions completely.
A minimal implementation of master detail, for any two datasets, for some people's purposes, is nothing more than an event handler firing when the currently selected row in the master table changes. This row is then used to filter the rows in the detail table dataset, so that only the rows that match the primary key of the master row are visible. This is done for you, if you configure it properly, in most of the TTable-like objects in Delphi's VCL, but even Datasets that do not explicitly support master/detail configurations can be made to function this way, if you are willing to write a few event handlers, and filter data.
At one of my former employers, someone had invented a Master-Detail controller component that worked along with a little-known variant of ADO components for Delphi known as Kamiak. It had some capabilities that people who are only familiar with the BDE/TTable-era concept of master/detail would not expect. It was a very clever bit of work, with the following features:
You could create an ADO recordset, hold it in memory, and then write a series of detail rows as a batch, if and only if the master row was to be stored to disk.
You could nest these master-detail relationships to almost arbitrary depths, so you could have master, detail and sub-detail records. Batch updates were used for UPDATES, to answer that part of your question. To handle updates you need to either roll your own ORM or Recordset layer, or use a pre-built caching/recordset layer. There are many options, from ADO, to the various ORM-like components for Delphi, or even something involving client-datasets or a briefcase model with data pumps.
You could modify and post data into an in-memory staging area, and flush all the master and detail rows at once, or abandon them. This allowed a nearly object-relational level of persistence management.
As lovely as the roll-your-own-ORM approach seems above, it was not without its dark side. Strange bugs in the system led me never to want to use such an approach again. I do not wish to overstate things, but may I humbly suggest that there is such a thing as going too far down the master-detail rabbit hole? Don't go there; or if you do, realize that you're really building a mini ORM, and be prepared to do the work, which should include a pretty solid set of unit tests and integration tests. Even then, be aware that you might discover some pretty strange corner cases, and might find that a few really wicked bugs are lurking in your beautiful ORM/MasterDetail thing.
As far as inserts go, that of course depends on whether you are a builder or a user. A person who is content to build atop whatever Table classes are in the VCL, and who never wants to dirty their hands with SQL, is going to think your approach is wrong-headed if you are not afraid of SQL. I wonder how that person is going to deal with auto-assigned identity primary keys, though. I store a person record in a table, and immediately I need to fetch back that person's newly assigned integer ID. I am going to use that integer primary key to associate my detail rows with the master row; the detail rows therefore refer to the master row's ID as a foreign key, because my SQL database is nicely constructed, with referential integrity constraints. And because I've thought about all this in advance and don't want to do it over and over again, I eventually get from here to building an object-relational mapping framework.
I hope you can see how your many questions have many possible answers, which have led to hundreds of possible approaches, and there is no one right one. I happen to be a disbeliever in ORMs, and I think the safe place to get off this crazy train is before you get on it. I hand-code my SQL, and I hand-code my business objects, and I don't use any fancy Master Detail or ORM stuff. You, however, may choose to do as you like.
What I would have implemented as "master detail" in the BDE/dBase/flat-file era, I now simply implement as one query for master rows and a second query for detail rows; when the master row changes, I refresh the detail query, and I do not use the MasterSource or related Master/Detail properties of the TTable objects at all.
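For the fetch-back-the-new-ID step mentioned above, many engines can do it in one statement. A sketch in PostgreSQL syntax (SQL Server uses SCOPE_IDENTITY(), SQLite last_insert_rowid(); the person/order_line tables and columns here are hypothetical):
-- Insert the master row and get its generated key back in one round trip
INSERT INTO person (name)
VALUES ('Alice')
RETURNING person_id;
-- Use the returned key as the foreign key on each detail row
INSERT INTO order_line (person_id, item_number)
VALUES (42, 'A-100');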
I am developing an ASP.NET MVC4 web application that uses the Entity Framework for data access. Many of the pages contain grids, which need to support paging, sorting, filtering, and grouping. For performance, the grid filtering, sorting, paging, etc. need to occur in the database (i.e. the Entity Framework needs to generate a suitable SQL query). One complication is that the view model representing the grid rows is built by combining data from multiple business entities (tables). This could mean simply getting data from an entity a couple of levels down, or calculating it from the values of related business entities. What approach is recommended for handling this scenario? Does anyone know of a good example on the web? Most examples have a simple mapping between the view model and the business domain model.
Update 28/11: To further clarify, the initial display of the grid and its paging perform well. (See comment below.) The problem is how to handle sorting/ordering (and filtering) when the column the user clicked on does not map directly to a column on the underlying business table. I am looking for a general solution, as the system will have roughly 100 grids with a number of columns each, and handling this on a per-column basis would not be maintainable.
If you want to order by a calculated field that isn't precalculated in the database, or perform any database operation against it, then you are going to have to precalculate the value and store it in the database. I don't know any way around that.
The only other solution is to move the paging, sorting, etc. to the web server. I am sure you don't really want to do that, as you would have to calculate ALL the values to find what order they go in.
So, to achieve what you want, I think you will have to do the following; I would love to hear alternative solutions though:
Database Level Changes:
Add a Nullable Column for each calculated field you have in your View Model.
Write a SQL script that calculates these values.
Set the column to NOT NULL if necessary.
App Level Changes:
In your Add and Edit pages you will have to calculate these values and commit them with the rest of the data.
You can now query against these at the database level and use IQueryable as you wanted.
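A sketch of the database-level steps, in PostgreSQL syntax (the orders/order_lines tables and the order_total calculated field are hypothetical):
-- 1. Add a nullable column for the calculated field
ALTER TABLE orders ADD COLUMN order_total numeric NULL;
-- 2. Backfill it for existing rows
UPDATE orders o
SET order_total = (
    SELECT SUM(l.quantity * l.unit_price)
    FROM order_lines l
    WHERE l.order_id = o.order_id
);
-- 3. Tighten the constraint once every row has a value
ALTER TABLE orders ALTER COLUMN order_total SET NOT NULL;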
I'm programming a website that allows users to post classified ads with detailed fields for different types of items they are selling. However, I have a question about the best database schema.
The site features many categories (eg. Cars, Computers, Cameras) and each category of ads have their own distinct fields. For example, Cars have attributes such as number of doors, make, model, and horsepower while Computers have attributes such as CPU, RAM, Motherboard Model, etc.
Now, since they are all listings, I was thinking of a polymorphic approach: creating a parent LISTINGS table and a different child table for each of the different categories (COMPUTERS, CARS, CAMERAS). Each child table will have a listing_id that links back to the LISTINGS table. So when a listing is fetched, it would fetch a row from LISTINGS joined with the linked row in the associated child table.
LISTINGS
-listing_id
-user_id
-email_address
-date_created
-description
CARS
-car_id
-listing_id
-make
-model
-num_doors
-horsepower
COMPUTERS
-computer_id
-listing_id
-cpu
-ram
-motherboard_model
Now, is this schema a good design pattern or are there better ways to do this?
I considered single-table inheritance but quickly brushed off the thought because the table would get too large too quickly. But then another dilemma came to mind: if the user does a global search across all the listings, I will have to query each child table separately. What happens if I have over 100 different categories? Wouldn't that be inefficient?
I also thought of another approach where there is a master table (meta table) that defines the fields in each category and a field table that stores the field values of each listing, but would that go against database normalization?
How would sites like Kijiji do it?
Your database design is fine. No reason to change what you've got. I've seen the search done a few ways. One is to have your search stored procedure join all the tables you need to search across and index the columns to be searched. The second way I've seen, which worked pretty well, is to have a table that is used only for search and gets a copy of whatever fields need to be searched. You would then put triggers on those fields to keep the search table updated.
They both have drawbacks but I preferred the first to the second.
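A minimal sketch of the second (trigger-maintained) approach, assuming PostgreSQL and the CARS table from your question; the search-table layout is an assumption:
-- Dedicated search table holding a searchable copy of each listing's text
CREATE TABLE listing_search (
    listing_id  integer PRIMARY KEY REFERENCES listings (listing_id),
    search_text text NOT NULL
);
CREATE INDEX listing_search_text_idx ON listing_search (search_text);
-- Copy the searchable CARS fields into the search table on every write
CREATE FUNCTION sync_car_search() RETURNS trigger AS $$
BEGIN
    INSERT INTO listing_search (listing_id, search_text)
    VALUES (NEW.listing_id, concat_ws(' ', NEW.make, NEW.model))
    ON CONFLICT (listing_id) DO UPDATE SET search_text = EXCLUDED.search_text;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER cars_search_sync
AFTER INSERT OR UPDATE ON cars
FOR EACH ROW EXECUTE FUNCTION sync_car_search();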
EDIT
You need the following tables.
Categories
- Id
- Description
CategoriesListingsXref
- CategoryId
- ListingId
With this cross-reference model you can join all your listings for a given category during a search. Then add a little dynamic SQL (because it's easier to understand) and build up your query to include the field(s) you want to search against, and call execute on your query.
That's it.
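A sketch of the search query over that cross-reference (names follow the table outlines above; the commented line is where the dynamic SQL gets appended):
SELECT l.listing_id, l.description
FROM Listings l
JOIN CategoriesListingsXref x ON x.ListingId = l.listing_id
JOIN Categories c ON c.Id = x.CategoryId
WHERE c.Description = 'Cars'
-- dynamic part appended per category, e.g. AND cars.make = 'Honda'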
EDIT 2
This seems to be a bigger discussion than we can fit in these comment boxes. But anything we would discuss can be understood by reading the following post.
http://www.sommarskog.se/dyn-search-2008.html
It is really complete and shows you more than one way of doing it, with pros and cons.
Good luck.
I think the design you have chosen will be good for the scenario you just described. Though I'm not sure the subclass tables should have their own IDs: since a CAR is a LISTING, it makes sense for the key values to come from the same "domain".
In the typical classified ads site, the data for an ad is written once and is then basically read-only. You can exploit this by storing the data in a second set of tables that are optimized for searching in just the way you want users to search. Also, the search problem only really exists for a "general" search; once the user picks a certain type of ad, you can switch to the subclass tables for more advanced search (RAM > 4 GB, CPU = overpowered).
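For that category-specific search, a sketch joining the subclass table directly (the RAM unit is an assumption):
-- Once the user picks the Computers category, search its specific fields
SELECT l.listing_id, l.description, c.cpu, c.ram
FROM listings l
JOIN computers c ON c.listing_id = l.listing_id
WHERE c.ram >= 4          -- e.g. RAM in GB
  AND c.cpu LIKE '%i7%';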