I'm coming from the Delphi world and I want to build a master/detail interface, like Orders and Products.
I've already made actions to display the data using fields and a jqGrid. What I want to know now is how to make it possible to add, edit or remove lines, but only apply the changes to the database when the user confirms the changes in the master.
In Delphi I would use a TClientDataSet holding all the in-memory changes and, only after the confirmation, execute them inside a transaction like:
BEGIN
  Master.Post
  FOR EACH Line IN Lines DO
    Line.Post
COMMIT
So, in summary, I don't know how to keep the array of lines in the grid in memory, nor how to send them back to the server to commit.
Any help will be appreciated. Thanks in advance.
You'll need to keep track of the changes client side, perhaps using some hidden fields and/or form fields in your grid. When a line that previously existed in the db is deleted, you'll need to add its id to a field listing the lines to delete. Lines that are added need associated form fields containing their data. When the master is committed, you roll the whole set of fields up into a POST and send that back to the server.
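As a sketch of what that rolled-up POST could bind to on the server (the class and property names here are purely hypothetical, not anything from your project):

using System.Collections.Generic;

// Hypothetical shape of the data the hidden/form fields post back.
public class OrderEditModel
{
    public int OrderId { get; set; }                    // the master record being confirmed
    public List<int> DeletedLineIds { get; set; }       // ids of existing lines the user removed
    public List<OrderLineModel> NewLines { get; set; }  // lines added in the grid
}

public class OrderLineModel
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
    public decimal Price { get; set; }
}

With the default ASP.NET MVC model binder, hidden inputs named NewLines[0].ProductId, NewLines[0].Quantity, DeletedLineIds[0], and so on will populate those lists when the form is posted.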
Using LINQ to SQL, you'd create a data context, get the master object, delete the related objects that are marked for deletion (from the hidden field of ids), and create/add the new related objects that didn't exist before, taking the values from the appropriate form fields. Then you'd call SubmitChanges and all of the statements would be executed within a single transaction.
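A rough sketch of that LINQ to SQL side, using the hypothetical OrderEditModel above (OrdersDataContext, Order and OrderLine stand in for whatever your designer generates):

using System.Linq;
using System.Web.Mvc;

public class OrdersController : Controller
{
    [HttpPost]
    public ActionResult Save(OrderEditModel model)
    {
        using (var db = new OrdersDataContext())
        {
            // Load the master record the user confirmed.
            var order = db.Orders.Single(o => o.OrderId == model.OrderId);

            // Delete the lines whose ids were collected in the hidden "delete" field.
            var doomed = db.OrderLines.Where(l => model.DeletedLineIds.Contains(l.OrderLineId));
            db.OrderLines.DeleteAllOnSubmit(doomed);

            // Create the lines that were added in the grid, from their form fields.
            foreach (var line in model.NewLines)
            {
                order.OrderLines.Add(new OrderLine
                {
                    ProductId = line.ProductId,
                    Quantity = line.Quantity,
                    Price = line.Price
                });
            }

            // One SubmitChanges = one transaction for all the inserts and deletes.
            db.SubmitChanges();
        }
        return RedirectToAction("Details", new { id = model.OrderId });
    }
}

SubmitChanges wraps its own work in a transaction by itself; you only need an explicit TransactionScope if other, non-LINQ work has to commit atomically with it.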
I have a (Firebird) DB. For most of my tables I have a trigger which fires before insert which will create the Primary Key (PK) for me via a generator as well as write to the newly inserted records a Created Date value and a Created By value. I also have an update trigger which writes to an Updated Date field and an Updated By field.
eg (Client is a table in my DB):
create trigger t_client_id for client
active before insert
as begin
new.client_id = gen_id(gen_client_id, 1);
new.created = current_timestamp;
new.created_by = current_user;
new.lock_vn = 1;
end ^
create trigger t_client_update for client
active before update
as begin
new.updated = current_timestamp;
new.updated_by = current_user;
end ^
When I apply updates through my ClientDataSet (CDS) - which is attached to a remote TDataSetProvider via a TDSProviderConnection - how can I "retrieve" these generated values? If I edit an existing record (which will in turn fire the t_client_update trigger), calling RefreshRecord will get the updated and updated_by fields. However, the documentation says to use that method cautiously, so that may not be the correct way to achieve this. I call it straight after I've called ApplyUpdates(-1).
The CDS I use only contains the one record I am attempting to edit. For a new record, the CDS is in dsInsert mode. Everything is written to the DB OK, so I just need to get this new data back out again. I have also tried using a CDS which contains ALL records in the table, to see if it was any simpler, but it made no difference - unsurprisingly. The reason I need this information is simply to show these values to the user in DB-aware controls; they are read only.
I could call a Get on the record when editing an existing record, using the PK, but that won't help for an insert as I don't know what the new PK is.
Example of where I attempt to ApplyUpdates to my CDS (actDSSave is a TDataSetPost action)
dsState := actDSSave.DataSource.DataSet.State;
DoApplyUpdates(-1);
if dsState = dsEdit then
  TClientDataSet(actDSSave.DataSource.DataSet).RefreshRecord;
I am using a TIBQuery for my dataset, attached to the remote DataSetProvider. The query's SQL is a simple select * from client where client_id = :client_id. I have tried associating this query with a TIBUpdateSQL too, as well as setting poAutoRefresh to true on the DataSetProvider.
So is it possible to obtain these trigger-generated values this way, or do I need to approach it differently? Another way I can think of is to create stored procedures which do CRUD against each table and use those instead (with appropriate in/out params to return the new data), but hopefully I don't have to go down that track. Hopefully I have provided sufficient info here to explain and replicate the issue.
Thanks
EDIT
I realised that in the above, DoApplyUpdates(-1) is my own method. Its implementation at the moment is simply:
FdatCommon.cdsClient.ApplyUpdates(MaxErrorCount);
FdatCommon is a TDataModule containing my CDS.
You simply can't get "generated" values without re-querying the data (RefreshRecord) after the Post.
That's because the triggers run on the server side when you call ApplyUpdates, but TClientDataSet does not refresh the posted record by default. Other libraries, for example FIBPlus, have an option to do it automatically.
As for inserts, TIBDataSet has a GeneratorField property. Using it, the dataset queries and increments the generator value separately, before the insert, so you will have PK values after Post even on inserts. But then don't generate the value again in the trigger.
MIDAS (TClientDataSet) is a great library, but its general/universal architecture loses DB-specific features (such as retrieving values from inserts) compared to libraries dedicated to a specific DBMS, such as FIBPlus. By the way, I have seen TpFIBClientDataSet; it works in conjunction with TpFIBDataSet.
Here's a simple problem: users want to edit products in a grid-like manner: select and click Add, select and click Add... and they see the updated products list... then they click "Finish" and the order should be saved.
However, each "Add" has to go to the server, because it involves server-side validation. Moreover, the validation is inside the domain entity (say, Order) - that is, for validation to happen I need to call order.Add(product), and then the order decides whether it can add the product.
The problem is that if I add products to the order, it persists the changes, so even if the user doesn't click "Finish" the changes will still be there!
OK, so I probably shouldn't modify the order until the user clicks Finish. But how do I validate the product then? This should be done by the order entity - whether the product is already added, whether the product conflicts with other products, etc.
Another problem is that I have to add the product to the order and "rebuild the view/HTML" based on its new state (which can change greatly). But if I don't persist the order changes, each Add will start from the same order, not from the updated one. That is, I need to track the changes to the order somehow.
I see several solutions:
1. Each time the user clicks Add, retrieve the order from the database and add all the new products (from the page), but do not persist it - just return View(order). The problem is that I cannot redirect from POST /Edit to GET /Edit, because all the data exists only in the POST data and a GET loses it. This means refreshing the page doesn't work nicely (F5 gives a duplicate request, not to mention the browser's dialog box).
Hmm, I thought I could do the redirect to GET using TempData (and the MvcContrib helper). So after the POST to /Edit I process the business logic, get new data for the view, and do RedirectToAction<>(data) from MvcContrib, which passes the data via TempData. But since TempData is... temp... after F5 all the data is lost. Doesn't work. The damn data has to be stored somewhere, one way or another.
2. Store an "edit object" in Session together with the POST data (the order, the new products' info). This could also be the database - a kind of "current item, per page type". The page would get the order ID and the currently added products from this storage. But editing from multiple pages is problematic, and I don't like storing temp/current objects in Session.
3. Mark products as "confirmed" - when we do /order/show, we first clean up all non-confirmed products from the order. Ugly and messy logic.
4. Make a copy of the order - a temporary one - and make /Edit work with it. Confirm moves the changes from the temp order to the persisted one. A lot of ugly work.
5. Maybe some AJAX magic? I.e. the "Add" button won't reload the page but will just send the new product plus the already-added products to the server; the server will validate as order.Add(products + newproduct) but will not persist the changes, just return updated order information to rebuild the grid. But Refresh/F5 will kill all the user-entered info.
What else?
Is this problem common? How do you solve similar ones? What are the best practices?
It depends a lot on how you implement your objects/validation, but your option number 5 is probably the best idea. If AJAX isn't your thing, you can accomplish the same thing by writing the relevant data of already-added-but-not-saved entries to hidden fields (there's a rough sketch of this after the list below).
In other words, the flow ends up something like this:
User enters an item.
Item is sent to the server and validated. The view is returned with the data entered by the user in hidden fields.
User enters a second item.
Item is sent to the server, and both items are validated. The view is returned with the data for both items in hidden fields.
etc.
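A minimal sketch of that round trip - the model, the controller and the Order entity are all invented purely for illustration, not your actual types:

using System.Collections.Generic;
using System.Web.Mvc;

public class PendingItem
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

public class OrderDraftModel
{
    public int OrderId { get; set; }
    public List<PendingItem> Items { get; set; }  // already-added-but-not-saved rows
}

public class OrderDraftController : Controller
{
    [HttpPost]
    public ActionResult Add(OrderDraftModel model)
    {
        // model.Items contains the already-pending rows (from hidden fields) plus the row
        // the user just typed in (its visible inputs are named as the next Items[...] index).
        var order = LoadOrder(model.OrderId);   // hypothetical data access
        foreach (var item in model.Items ?? new List<PendingItem>())
        {
            if (!order.Add(item.ProductId, item.Quantity))
                ModelState.AddModelError("", "Product " + item.ProductId + " can't be added.");
        }

        // The view re-renders the grid and writes model.Items back out as hidden fields
        // (Items[0].ProductId, Items[1].ProductId, ...) so the next post carries them all.
        // Nothing has been persisted at this point.
        return View("Edit", model);
    }

    private Order LoadOrder(int id)
    {
        return new Order();  // stub: fetch the persisted order in a real app
    }
}

// Toy domain entity: for the sketch it just rejects duplicates and bad quantities.
public class Order
{
    private readonly HashSet<int> _productIds = new HashSet<int>();

    public bool Add(int productId, int quantity)
    {
        return quantity > 0 && _productIds.Add(productId);
    }
}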
So far as F5/Refresh killing entered data... In my experience this isn't too much of a problem. A more pressing concern is the back/forward buttons, which need to be managed with something like Really Simple History.
If you DO want to make the page continue to work after a refresh, you need to do one of the following:
Persist the records to the database, associated with the current user in some way.
Persist the records to session.
Persist the records to the query string.
These are the only storage locations available that persist through both redirection and refreshes.
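For completeness, a quick session-backed variant of the same idea (again with made-up names, reusing the PendingItem class from the sketch above), using post/redirect/get so a refresh only re-issues a harmless GET:

using System.Collections.Generic;
using System.Web.Mvc;

public class SessionDraftController : Controller
{
    [HttpPost]
    public ActionResult Add(PendingItem newItem)
    {
        // Keep the pending rows in Session so they survive the redirect and any refresh.
        var items = Session["PendingItems"] as List<PendingItem> ?? new List<PendingItem>();
        items.Add(newItem);
        Session["PendingItems"] = items;
        return RedirectToAction("Edit");  // PRG: the browser lands on a plain GET
    }

    public ActionResult Edit()
    {
        var items = Session["PendingItems"] as List<PendingItem> ?? new List<PendingItem>();
        return View(items);               // F5 here simply re-reads Session
    }
}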
If I were you, I would come up with something that resembles option 5, and since you say you are comfortable with Ajax, you can try this. But before you do, you should move your validation logic out of the Order.Add() method. You could move it to another public method called Validate() which returns a bool, and still call the same Validate() inside Add(), thereby doing the necessary validation before the add.
Try to do the validation on the client side. If you are using jQuery, you can use the jQuery Validate plugin. But if this is not possible for some reason (such as when you need to validate against the database), do the validation on the server side and just return a JSON object with a 'success' boolean flag and an optional message - a way to mark whether the data is valid. You would allow the user to add a new product only if the previous add was valid.
And when the user hits Finish, send the products to the server and do the validation again, but persist the order in this round trip.
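Something along these lines - the Order, the duplicate check and the controller are all simplified/invented for illustration; the point is only that Add() calls the same Validate() the AJAX endpoint uses:

using System.Collections.Generic;
using System.Web.Mvc;

// Toy domain entity: Add() reuses the same Validate() the AJAX endpoint calls.
public class Order
{
    private readonly List<int> _productIds = new List<int>();

    public bool Validate(int productId, out string message)
    {
        if (_productIds.Contains(productId))
        {
            message = "That product is already on the order.";
            return false;
        }
        message = null;
        return true;
    }

    public bool Add(int productId)
    {
        string message;
        if (!Validate(productId, out message))
            return false;
        _productIds.Add(productId);
        return true;
    }
}

public class OrderValidationController : Controller
{
    // Called by the client-side "Add"; validates only, persists nothing.
    [HttpPost]
    public ActionResult ValidateProduct(int orderId, int productId)
    {
        var order = LoadOrder(orderId);  // hypothetical data access
        string message;
        bool ok = order.Validate(productId, out message);
        return Json(new { success = ok, message });
    }

    private Order LoadOrder(int id)
    {
        return new Order();  // stub for the sketch
    }
}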
Now, if I had a complete say in this, I wouldn't even go to the extent of validating whenever a product is added/edited. I would just do the validation when the customer hits Finish. That would be the simplest solution. But maybe I am missing something.
[Image: simplemodel.jpg (http://img29.imageshack.us/img29/825/simplemodel.jpg) - diagram of the tb_order_base / tb_order_product / tb_order_service tables described below]
As you can see above, it's a kind of subclassing of a table in an RDBMS (in this case my favourite, MySQL), so I handle it with visual subclassing: a base form for tb_order_base with field validation, etc.
This way I'm free of repeated code and some other bothersome problems; it seems like a true OO approach. But...
Now I have a big problem with a subclassed form, i.e. tb_order_service, using a master/detail approach. When I post the tb_order_base dataset, instead of Delphi posting it first, getting the PK ID from the RDBMS and then posting TB_ORDER_PRODUCT with the id filled in, it does the opposite: it posts the detail tb_order_product dataset first and then the master tb_order_base, so I get a big foreign key constraint error.
Does anyone have any idea how to get around this problem?
I have asked about it before, but with few details, in The Master/Detail Behavior.
One thing I have done in such a situation is to break away from making direct DBMS calls and instead use a memory dataset (or client dataset) for the form. When the user presses the Save button, I validate my edits and return any errors. If there are no errors, I begin a database transaction, commit the master record (and then read the master record's key back if it's an insert), then run through each table committing the child records, followed by a commit of the transaction (or a rollback if there were any problems saving the child data).
I also stay away from data-aware components. They work great for simple utility-type programs, but when you start creating a complex system with them you will find little gotchas along the way that are easily solved by using a standard edit control and a function to push/pull data to/from the database. The only exception I generally make is for grids, but I only use the grid for selection... the actual editing is done using non-data-aware components.
Set the scene:
New to .NET; drinking from firehose
ASP.NET MVC app, SQL Server back
Editable table in browser with a single SAVE button.
User can right-click to add or delete rows.
Table won't ever have more than approx. 30 rows.
My question:
I'm saving everything upon the Save button click but would it be better to save row by row, AJAX style, as the user makes updates?
I don't like the look of separate buttons for each row, which is why I've designed it this way.
Is this mostly a UI issue? Am I missing any technical gotchas here, such as backend failure during the mass saving of the rows?
Additionally, assuming I do save the entire table at once, is it better to create an ADO DataTable object or just loop through, inserting/updating each row as I go by calling a stored procedure? I suppose I could add LINQ to the firehose, but that would make this question even less "answerable".
You don't have a huge volume of data here, so saving all 30 rows in one go when the user clicks Save is a reasonable approach. But you should be prepared for a failure, particularly when you are changing existing rows, where it will fail more often because other apps/users may have changed the same data. Just make sure that you wait for confirmation from the SQL server that the changes have been committed.
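If it helps, the "be prepared for a failure" part can be handled by wrapping the whole batch in a transaction, so either all 30 rows commit or none do. A rough ADO.NET sketch (the proc name and row shape are made up):

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public class GridRow
{
    public int Id { get; set; }
    public string Value { get; set; }
}

public static class GridSaver
{
    public static void SaveAll(string connectionString, IEnumerable<GridRow> rows)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                try
                {
                    foreach (var row in rows)
                    {
                        // Hypothetical upsert proc, one call per row, all in one transaction.
                        using (var cmd = new SqlCommand("dbo.usp_SaveGridRow", conn, tx))
                        {
                            cmd.CommandType = CommandType.StoredProcedure;
                            cmd.Parameters.AddWithValue("@RowId", row.Id);
                            cmd.Parameters.AddWithValue("@Value", row.Value);
                            cmd.ExecuteNonQuery();
                        }
                    }
                    tx.Commit();    // only tell the user "saved" after this succeeds
                }
                catch
                {
                    tx.Rollback();  // nothing is half-saved if any row fails
                    throw;          // let the caller surface the error to the UI
                }
            }
        }
    }
}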
What I've done before with these sorts of big table views is this: when somebody clicks on a cell they'd like to edit, run some AJAX to display a text field with that text so they can edit it, then listen for onmouseout and the Enter key to send off the AJAX request that modifies the single row.
When the response from the AJAX call comes back you can add a tooltip or something to show it was saved, and then change the cell to the new value.
Assuming you have SQL2005, you could build up an XML document with all of your data rows, then call a single stored proc and pass it your XML. Then the stored proc could save all of the rows at once.
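The calling side could look something like this, reusing the GridRow shape from the sketch above (the proc name and XML layout are assumptions; inside the proc you would shred the XML with OPENXML or the xml type's nodes() method and insert/update in one go):

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using System.Xml.Linq;

public static class XmlGridSaver
{
    public static void SaveAll(string connectionString, IEnumerable<GridRow> rows)
    {
        // Build one XML document describing every row in the grid.
        var doc = new XElement("rows",
            rows.Select(r => new XElement("row",
                new XAttribute("id", r.Id),
                new XAttribute("value", r.Value))));

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.usp_SaveGridRows", conn))  // hypothetical proc
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@rows", SqlDbType.Xml).Value = doc.ToString();
            conn.Open();
            cmd.ExecuteNonQuery();  // the proc saves every row in a single call
        }
    }
}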
Here's my question:
I need to write a wizard for customers to create a new, very big object, with some other objects associated with it: for example, some images stored in another table (with relationships), some latitudes and longitudes for Google Earth, etc.
Each of these is stored in a different table in the database, and that's why I have to insert the first object first, to get its database-generated ID and build the relationships with the other objects. That's the reason I think putting everything in just one view and selectively hiding DIVs with jQuery is not an option for me.
Session isn't an option because of the size of the object.
And because of the type of website, the wizard MUST be as follows:
1. Basic details of object 1
2. Images of object 1 (I will need the ID of the first object here)
3. Geolocations (with Google Maps, as before)
4. More details of object 1
5. Preview
6. Publish
The point is that in step 4 the user fills in some fields that are required by the DB, and I cannot make them nullable, as that is part of the customer's requirements.
If somebody could at least give me some ideas, that would be nice...
Thanks in advance
You state that storing your object in Session is not desirable because of the size of the object. An alternative is to serialize that object and store it in the database. As the user progresses through the wizard, that object gets retrieved, updated and stored back in as a blob. Once they publish it, you can insert the appropriate records and remove the serialized object from whatever table you're storing them in.
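A rough sketch of what that could look like (the WizardDraft table, the Guid key and BinaryFormatter are all assumptions; any serializer and key scheme would do):

using System;
using System.Data;
using System.Data.SqlClient;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

public static class WizardDraftStore
{
    // Save (or overwrite) the in-progress object for this user's wizard.
    public static void Save(string connectionString, Guid draftId, object bigObject)
    {
        byte[] blob;
        using (var ms = new MemoryStream())
        {
            new BinaryFormatter().Serialize(ms, bigObject);  // object must be [Serializable]
            blob = ms.ToArray();
        }

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            @"UPDATE WizardDraft SET Data = @data WHERE DraftId = @id;
              IF @@ROWCOUNT = 0 INSERT INTO WizardDraft (DraftId, Data) VALUES (@id, @data);",
            conn))
        {
            cmd.Parameters.Add("@id", SqlDbType.UniqueIdentifier).Value = draftId;
            cmd.Parameters.Add("@data", SqlDbType.VarBinary, -1).Value = blob;
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    // Load it back at the start of each wizard step; returns null if no draft exists.
    public static object Load(string connectionString, Guid draftId)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT Data FROM WizardDraft WHERE DraftId = @id", conn))
        {
            cmd.Parameters.Add("@id", SqlDbType.UniqueIdentifier).Value = draftId;
            conn.Open();
            var data = cmd.ExecuteScalar() as byte[];
            if (data == null) return null;
            using (var ms = new MemoryStream(data))
                return new BinaryFormatter().Deserialize(ms);
        }
    }
}

On the final Publish step you would load the draft, insert the real rows (the parent first, then the related images/locations using the generated ID), and delete the draft row.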