In our system, some stored procedures bundle an INSERT statement and an UPDATE statement together. They execute the INSERT statement first; if it fails with a duplicate-key error on a unique field, they then execute the UPDATE statement. They are designed for situations where it is unknown whether the data is already in our DB.
The following is the structure of such a query:
INSERT INTO table1
(...)
VALUES
(...)
ON DUPLICATE KEY UPDATE
...
In one situation, I need to update only a single field of a table. I could use such a stored procedure, of course, but I am wondering whether, in terms of good practice, I should instead create a new UPDATE query just for updating the data.
Any inputs?
If I understand what you're saying, you are calling an UPDATE statement when the INSERT statement returns an error?
If so, you could probably use EXISTS to check whether the record you are looking to update is already in the DB before updating.
https://msdn.microsoft.com/en-us/library/ms188336.aspx
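To make the check-then-update idea concrete, here is a minimal sketch in Python using the built-in sqlite3 module; the table and column names (table1, email, name) are hypothetical stand-ins for your schema:

```python
import sqlite3

# Hypothetical schema: table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (email TEXT PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO table1 VALUES ('a@example.com', 'Alice')")

def upsert(conn, email, name):
    # Check existence first (the EXISTS approach), then pick the statement.
    exists = conn.execute(
        "SELECT EXISTS(SELECT 1 FROM table1 WHERE email = ?)", (email,)
    ).fetchone()[0]
    if exists:
        conn.execute("UPDATE table1 SET name = ? WHERE email = ?", (name, email))
    else:
        conn.execute("INSERT INTO table1 (email, name) VALUES (?, ?)", (email, name))

upsert(conn, "a@example.com", "Alicia")  # existing key: takes the UPDATE path
upsert(conn, "b@example.com", "Bob")     # new key: takes the INSERT path
```

One caveat: unlike the single-statement ON DUPLICATE KEY UPDATE, a separate existence check is not atomic; under concurrent writers, two sessions can both see "not exists" and both try to insert.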
Related
When I update an object in my SQLite API with AJAX, it keeps the order of my object array, so the front end looks the same. After switching the DB to Postgres, updating an object in the API changes the order of the array, mostly placing the updated objects at the end. Any ideas what's going on here?
I've tried deleting and remaking the database, with no luck. I switched back to SQLite and it is working normally again.
In SQL order is not guaranteed. If you desire a particular order, the safest thing to do is to add a sort key to your records, and make sure you're doing an ORDER BY on your select statement.
The fact that SQLite preserves your ordering is an accident of its implementation. You should not rely on the engine to do anything outside the specification.
Quote from the Postgres docs:
After a query has produced an output table (after the select list has been processed) it can optionally be sorted. If sorting is not chosen, the rows will be returned in an unspecified order. The actual order in that case will depend on the scan and join plan types and the order on disk, but it must not be relied on. A particular output ordering can only be guaranteed if the sort step is explicitly chosen.
That said: without an explicit ORDER BY clause, the order of the returned records is effectively arbitrary.
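Here is a minimal demonstration with Python's built-in sqlite3 module (the table is hypothetical); the same principle applies to Postgres: only an explicit ORDER BY on a sort key guarantees the order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, label TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(1, "first"), (2, "second"), (3, "third")])

# Updating a row may physically relocate it; without ORDER BY the engine
# is free to return rows in whatever order its plan produces.
conn.execute("UPDATE items SET label = 'first (edited)' WHERE id = 1")

# An explicit sort key makes the order deterministic on every engine.
rows = conn.execute("SELECT id, label FROM items ORDER BY id").fetchall()
```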
I'm trying to do an insert into a table with autoinc fields, using FireDAC's TFDCommand. The record is inserted successfully, but how do I get the generated values for the autoinc fields?
Note: TFDConnection lets me get the last auto-generated value, but the table generates two autoinc fields. I could get the primary key and then select the record from the DB, but that would be another round-trip, which I need to avoid.
Any idea?
The only way seems to be to parse the TFDConnection.Messages property after the insert occurs. Some DBMSs, like SQL Server, return messages as an additional result set.
To enable message processing, set ResourceOptions.ServerOutput to True.
If the messages coming from the database server you use don't include any last-inserted-key information, I fear the only solution is another query to retrieve the last ID...
Given a simple employees table (id, lastname, firstname), the assignment requires me to write a stored procedure that takes first and last names, figures out the next id, and inserts a new record into the table. That's done. The next part asks for a trigger that calls this stored procedure whenever an INSERT happens. My understanding is that the trigger is supposed to intercept the INSERT statement that triggered it, extract its arguments, and run the stored procedure INSTEAD of the insert (not before or after it). The problem is that INSTEAD OF triggers seem to work only with views, which I'm not allowed to write. Any ideas on how this might be approached?
Thank you for your input!
In Oracle there are sequences. You could create a sequence and assign the next sequence number each time in a BEFORE INSERT trigger. Examples can be found here
http://www.adp-gmbh.ch/ora/concepts/sequences.html
and here
http://www.adp-gmbh.ch/ora/sql/trigger/before_after_each_row.html
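Oracle-specific code can't be run here, but the underlying idea — a trigger fills in the next id so callers never have to supply one — can be sketched with Python's sqlite3 module. This is a SQLite analogue, not Oracle syntax: SQLite has no sequences and its triggers cannot assign :NEW.id the way Oracle's BEFORE INSERT triggers can, so the sketch uses an AFTER INSERT trigger that updates the freshly inserted row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (id INTEGER, lastname TEXT, firstname TEXT);

-- SQLite analogue of the Oracle sequence-in-a-trigger pattern: compute
-- the next id and write it back into the row that was just inserted.
CREATE TRIGGER employees_next_id AFTER INSERT ON employees
WHEN NEW.id IS NULL
BEGIN
  UPDATE employees
  SET id = (SELECT COALESCE(MAX(id), 0) + 1 FROM employees)
  WHERE rowid = NEW.rowid;
END;
""")

# Callers insert without an id; the trigger assigns the next one.
conn.execute("INSERT INTO employees (lastname, firstname) VALUES ('Doe', 'Jane')")
conn.execute("INSERT INTO employees (lastname, firstname) VALUES ('Roe', 'John')")
rows = conn.execute("SELECT id, lastname FROM employees ORDER BY id").fetchall()
```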
Let me know if you can do the assignment now or if you need more help.
I'm using EF 4.1 (Code First). I need to add or update products in a database based on data from an Excel file. As discussed here, one way to achieve this is to force-load all products from the database with dbContext.Products.ToList(), then use db.Products.Local.FirstOrDefault(...) to check whether a product from the Excel file exists in the database and proceed with an insert or an update accordingly. That is only one round-trip.
My problem is that there are too many products in the database, so loading them all into memory is not possible. How can I achieve this without multiplying round-trips to the database? My understanding is that if I just search with db.Products.FirstOrDefault(...) for each Excel product to process, a round-trip is performed every time, even if I issue the statement for the exact same product several times! What is the point of EF caching objects and returning cached values if it goes to the database anyway?
There is actually no way to make this better. EF is not a good solution for this kind of task. You must know whether a product already exists in the database in order to use the correct operation, so you always need an additional query. You can group multiple products into a single query using .Contains (like SQL's IN), but that solves only the existence check. The worse problem is that each INSERT or UPDATE is also executed in a separate round-trip, and there is no way around this because EF doesn't support command batching.
Create stored procedure and pass information about product to that stored procedure. The stored procedure will perform insert or update based on the existence of the record in the database.
You can even use more advanced features, like table-valued parameters, to pass multiple records from Excel into the procedure with a single call, or import the Excel data into a temporary table (for example with SSIS) and process it all directly on the SQL server. Finally, you can use bulk insert to load all records into a special import table and, again, process them with a single stored procedure call.
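The "group the existence checks with .Contains" idea maps to a single IN query. Here is a sketch in Python with the sqlite3 module (the schema and data are hypothetical): fetch the keys that already exist in one round-trip, then split the incoming rows into updates and inserts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (sku TEXT PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("A1", 10.0), ("B2", 20.0)])

# Rows parsed from the Excel file (hypothetical data): sku -> price.
incoming = {"A1": 12.0, "C3": 30.0}

# One round-trip: fetch only the keys that already exist, using an IN
# query (the equivalent of grouping lookups via .Contains in EF).
placeholders = ",".join("?" * len(incoming))
existing = {sku for (sku,) in conn.execute(
    f"SELECT sku FROM products WHERE sku IN ({placeholders})",
    list(incoming),
)}

# Partition in memory, then batch the writes.
updates = [(price, sku) for sku, price in incoming.items() if sku in existing]
inserts = [(sku, price) for sku, price in incoming.items() if sku not in existing]
conn.executemany("UPDATE products SET price = ? WHERE sku = ?", updates)
conn.executemany("INSERT INTO products (sku, price) VALUES (?, ?)", inserts)
```

This reduces the existence checks to one query, though as noted above, EF 4.1 would still issue each INSERT/UPDATE as its own round-trip.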
I have a web service call that returns XML which I convert into domain objects, I then want to insert these domain objects into my Core Data store.
However, I really want to make sure that I don't insert duplicates. The objects have a date stamp that makes them unique, which I hope to use for the uniqueness check. I really don't want to loop over each object, do a fetch, and then insert if nothing is found, as that would perform very poorly...
I am wondering if there is an easier way of doing it. Perhaps a "group by" on my objects in memory? Is that possible?
Your question already contains the answer. You need to loop over them, look each one up, update it if it exists, and otherwise insert it. There is no other way.
Since you are uniquing off of a single value you can fetch all of the relevant objects at once by setting the predicate:
[myFetchRequest setPredicate:[NSPredicate predicateWithFormat:@"timestamp IN %@", myArrayOfIncomingTimestamps]];
This will give you all of the objects that already exist in a faulted state. You can then run an in memory predicate against that array to retrieve the existing objects to update them.
Also, a word of advice: a timestamp is a terrible unique ID. I would highly recommend that you reconsider it.
Timestamps are not unique. However, we'll assume that you have unique IDs (e.g. a UUID/GUID/whatever).
In normal SQL-land, you'd add an index on the GUID and search for it; or add a uniqueness constraint and then just attempt the insert (and if the insert fails, do an update); or do an update (and if the update affects nothing, do an insert, and if that insert fails, do another update...). Note that the default transactions in many databases won't help here: they lock rows, but you can't lock rows that don't exist yet.
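The "attempt the insert, fall back to update" branch can be sketched in Python with the sqlite3 module (the table and names are hypothetical); the uniqueness constraint makes the duplicate insert fail, and the handler retries as an UPDATE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (guid TEXT PRIMARY KEY, payload TEXT)")

def save(conn, guid, payload):
    # Optimistically insert; on a uniqueness violation, fall back to UPDATE.
    try:
        conn.execute("INSERT INTO events (guid, payload) VALUES (?, ?)",
                     (guid, payload))
    except sqlite3.IntegrityError:
        conn.execute("UPDATE events SET payload = ? WHERE guid = ?",
                     (payload, guid))

save(conn, "guid-1", "first")
save(conn, "guid-1", "second")  # duplicate key: takes the UPDATE path
```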
How do you know a record would be a duplicate? Do you have a primary key or some other unique key? (You should.) Check for that key: if it already exists in an entity in the store, update it; otherwise, insert it.