We have a stored procedure that UPDATEs a table based on some conditions and, in the same procedure, INSERTs into the same table on other conditions. This destination table has a number of associated indexed views, which slow down the updates and inserts. What we do now is disable the indexes on the views before the load and rebuild them afterwards. The rebuild takes close to half an hour, but if we don't disable them, the indexed views are maintained once for the UPDATE and again for the INSERT.
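(For context, a minimal sketch of the disable/rebuild pattern we use; the view name is made up:)

ALTER INDEX ALL ON dbo.vw_ItemCosts DISABLE;  -- before the load; the view's index stops being maintained
-- ... run the UPDATE and the INSERT here ...
ALTER INDEX ALL ON dbo.vw_ItemCosts REBUILD;  -- after the load; this is the step that takes ~30 minutes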
My questions:
1. Do the updates and inserts maintain the views per row, or once for all the rows affected by the UPDATE/INSERT?
2. Is there a way to batch the inserts and the updates so that the indexed view is updated only once, after all the INSERTs and all the UPDATEs?
3. The indexed views reference one of the columns in the table being updated/inserted. Does an indexed view get updated even if that particular column itself did not change, but some other column in the table was updated?
OK, so I have 7 cost tables. The idea is to create a flat table that is essentially all of those costs in a single table. At that point we can feed it to a front end, and when a person picks a specific item, they can see all costs associated with that item.
I have an ItemInfo table, which defines all potential items that may have costs. I then have 7 cost tables that record all of the individual costs incurred in 7 different phases of an item's production.
So, starting with just two of those tables, I joined the ItemInfo table to the Cost1 table on ItemID. If I execute that SQL, I get a result that shows each cost accrued in the first phase, along with the relevant bits of data from both tables.
My issue is when I bring the next table in, Cost2.
The Cost1 table has 6,999 entries.
The Cost2 table has 13,743 entries.
When I also join the ItemInfo table to the Cost2 table, the resulting table is massive.
I have tried inner joins, left joins, right joins, outer joins, etc. Regardless of the type of join, I do not get 20,742 entries, which (6,999 + 13,743) would be the accurate count with both tables fully represented. I have not even attempted moving on to Cost3 through Cost7, as I can't get the first two to display properly.
I suspect the answer may lie in grouping, but I'm not sure how to do that in a way that retains the individual cost items from each phase.
I thought I understood joins fairly well, and I think I do when it is just 2 tables. What I don't understand is this: if I tell the first 2 tables to only grab the matching items between them, and then tell a second pair of tables to do the same thing, why does it then seem to try to match Cost1 to Cost2, even though the ItemInfo table is the only one I am trying to link them to?
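(For illustration, here is the kind of query described, with assumed column names. Each item's Cost1 rows get paired with each of its Cost2 rows, so the counts multiply per item instead of adding:)

SELECT i.ItemID, c1.Amount AS Cost1Amount, c2.Amount AS Cost2Amount
FROM ItemInfo i
LEFT JOIN Cost1 c1 ON c1.ItemID = i.ItemID
LEFT JOIN Cost2 c2 ON c2.ItemID = i.ItemID;
-- An item with 3 rows in Cost1 and 4 rows in Cost2 produces 3 x 4 = 12 rows here,
-- which is why no join type yields the expected 6,999 + 13,743 = 20,742 rows.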
I've been facing this problem for the past few days. I am attempting to create a table view that is populated from a database query (seems simple enough). Since I will be managing multiple tables, I have created a database helper class that fetches the data using SQL queries. But it does not work consistently (or, of late, at all).
When I attempt to query a table using one of the defined functions, the DB returns cursors with XX records but null column data. In effect, I get multiple rows (I can see the row separators), but each row is blank.
Any suggestion or help is highly appreciated.
I'm building a Rails application with a dashboard composed of a sorted collection of cells. The ultimate goal is to allow the user to arrange the cells and have that persisted to the database, but I'm unable to fathom the architecture required to make this happen.
I'm less concerned about the UI/UX of dragging and dropping cells, and more concerned about the models required to represent this in a SQL database with ActiveRecord.
Any help would be appreciated. Thanks!
This is a pretty well-solved problem; there are numerous gems that will handle it for you.
Typically you'd add a "position" integer column to the table and sort by it when you select records. When you want to move an item A to a new position just after item B, you first add 1 to the position of every record that sorts after B, to open a gap, and then set A's position to B.position + 1. This way, a reorder takes only two UPDATE statements.
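A minimal Postgres-style sketch of that reorder, assuming a hypothetical cells table with id and position columns (:a_id and :b_id are placeholders for the two items):

-- Open a gap by shifting every record that sorts after B
UPDATE cells SET position = position + 1
WHERE position > (SELECT position FROM cells WHERE id = :b_id);

-- Drop A into the gap directly after B
UPDATE cells SET position = (SELECT position FROM cells WHERE id = :b_id) + 1
WHERE id = :a_id;

Run both statements in one transaction; the dashboard then just renders the rows ordered by position.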
I have a table named Audit with the following fields: ID (PK), StartDate, EndDate, TypeID.
I have added two indexes, one on the StartDate column and the other on the TypeID column, since users can search the audit data on these two columns. A new audit record is added to the Audit table whenever a user performs an add, edit, or delete on any of our system functions. So my question is: can adding two indexes to the audit table negatively affect the performance of adding, editing, and deleting our system data? Or, since the audit table only ever has new records added (no edits or deletions on the audit table itself), will the two indexes not slow down the create, edit, and delete operations that are being logged?
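(For reference, the two indexes look something like this; the names are made up:)

CREATE INDEX IX_Audit_StartDate ON Audit (StartDate);
CREATE INDEX IX_Audit_TypeID ON Audit (TypeID);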
Thanks
Indexes are never free. The question is whether the cost of maintaining the two indexes will be noticeable at all, and if so, whether their impact on your write workload is justified by the improvement they bring to the search queries.
My guess is yes, the indexes are probably worth it, but only you can know for sure by testing an entire workload cycle. (And that assumes they are the right indexes to support your queries, which we also don't know.)
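If it helps, one rough way to measure (SQL Server syntax; the literal values are placeholders, and this assumes ID is an identity column):

SET STATISTICS TIME ON;

-- Representative logged action: how long does the insert take with the indexes in place?
INSERT INTO Audit (StartDate, EndDate, TypeID)
VALUES ('2024-01-01', '2024-01-02', 1);

-- Representative search: do the indexes pay off here?
SELECT ID, StartDate, EndDate
FROM Audit
WHERE TypeID = 1;

SET STATISTICS TIME OFF;

Compare the timings with and without the indexes to see the real cost on your workload.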
I have a database (held in an Access .MDB file) that records staff members and any absence they have, e.g. holiday, sickness, or a training course, along with the start and end dates and the hours of productive time lost.
I then have a DBGrid bound to a "master" ADO query that finds all staff meeting the selected criteria (date range, department, a search string for the name), summing the hours of productive time lost.
I have another DBGrid bound to a "detail" ADO table containing the absence records.
The desired effect is that the detail DBGrid should contain only those records from the Absence table that match the row selected in the master grid (both the "master" Staff and "detail" Absence tables contain a common EmployeeID field).
Though I can achieve this using ADO queries created on the fly, changing the query each time the user moves to a different master staff record, I was hoping to use the detail DBGrid as my main method of deleting, updating, and adding absence records, complete with in-grid lookups, so the user can select record types without having to remember the code for each type.
I would also like the changes in this detail grid to be reflected in the summaries in the master DBGrid.
I have achieved this using a detail ADOTable linked as master-detail to the Staff query, but I need Filtered set to True and to control the OnFilterRecord event in code, and as the database grows this is getting slower and slower.
Is there anything I can do to improve the performance, or will I be forced to make the detail DBGrid purely read-only, with all Absence records entered through another form or panel?
More information: see "Making the Table a Detail of Another Dataset" in the Delphi documentation.

ADOTable2.MasterSource := DataSource1;   // DataSource1 is the data source attached to the master Staff query
ADOTable2.MasterFields := 'EmployeeID';  // the common field that links detail rows to the selected master row
As for "I would also like the changes in this detail grid to be reflected in the summaries in the master DBGrid": after editing the detail table and posting any change, you can use the AfterPost event to recalculate the summaries.