Oracle global lock across processes - stored procedures

I would like to synchronize access to a particular insert. Hence, if multiple applications execute this "one" insert, the inserts should happen one at a time. The reason behind the synchronization is that there should only be ONE instance of this entity. If multiple applications try to insert the same entity, only one should succeed and the others should fail.
One option considered was to create a composite unique key that would uniquely identify the entity, and rely on the unique constraint. For some reason, the DBA department rejected this idea. Another option that came to my mind was to create a stored proc for the insert: if the stored proc can obtain a global lock, then multiple applications invoking the same stored proc, though in their separate database sessions, would have their inserts serialized.
My question is: is it possible for a stored proc in Oracle version 10/11 to obtain such a lock? Any pointers to documentation would be helpful.

If you want the inserted entities to be unique, then in Oracle you don't need to serialise anything - a unique constraint is perfectly designed and suited for exactly this purpose. Oracle handles all the locking required to ensure that only one entity gets inserted.
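For illustration, a minimal JDBC sketch of relying on such a constraint; the table, columns, constraint name and connection details below are all made up:

```java
// Minimal sketch: let a composite unique constraint do the work instead of
// explicit serialisation. Table, columns and connection details are made up.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class UniqueInsertExample {
    public static void main(String[] args) throws Exception {
        // One-time DDL, normally run by the DBA rather than the application:
        //   ALTER TABLE entity ADD CONSTRAINT entity_uk UNIQUE (entity_type, entity_code);
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "app_user", "app_pwd")) {
            con.setAutoCommit(false);
            String sql = "INSERT INTO entity (entity_type, entity_code, payload) VALUES (?, ?, ?)";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, "ORDER");
                ps.setString(2, "A-1001");
                ps.setString(3, "some payload");
                ps.executeUpdate();
                con.commit();
            } catch (SQLException e) {
                if (e.getErrorCode() == 1) {   // ORA-00001: unique constraint violated
                    System.out.println("Another session already inserted this entity");
                } else {
                    throw e;
                }
            }
        }
    }
}
```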
I can't think of a reason why the DBA department rejected the idea of a unique constraint; this is pretty basic. Perhaps they rejected some other aspect of your proposed solution.
If you want to serialise access for some reason (and I can't think of a reason why you would), you could (a) get a lock on the whole table, which would serialise all DML on the table; or (b) get a user-named lock using DBMS_LOCK - which would only serialise the particular process(es) in which you get the lock. Both options have advantages and disadvantages.
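If you do go the DBMS_LOCK route, here is a rough sketch calling an anonymous PL/SQL block over JDBC; in practice the same logic would live inside your stored procedure, and the lock name, table and connection details are placeholders (the session also needs EXECUTE on DBMS_LOCK):

```java
// Rough sketch of option (b): a user-named lock via DBMS_LOCK.
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class SerialisedInsertExample {
    public static void main(String[] args) throws Exception {
        String plsql =
            "DECLARE " +
            "  l_handle VARCHAR2(128); " +
            "  l_status INTEGER; " +
            "BEGIN " +
            "  DBMS_LOCK.ALLOCATE_UNIQUE('MY_ENTITY_INSERT', l_handle); " +
            // wait up to 10 seconds for the exclusive lock; release it on commit
            "  l_status := DBMS_LOCK.REQUEST(l_handle, DBMS_LOCK.X_MODE, 10, TRUE); " +
            "  IF l_status NOT IN (0, 4) THEN " +   // 0 = got it, 4 = we already hold it
            "    RAISE_APPLICATION_ERROR(-20001, 'Could not get lock, status ' || l_status); " +
            "  END IF; " +
            "  INSERT INTO entity (entity_type, entity_code) VALUES (?, ?); " +
            "  COMMIT; " +                          // the commit also releases the lock
            "END;";
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "app_user", "app_pwd");
             CallableStatement cs = con.prepareCall(plsql)) {
            cs.setString(1, "ORDER");
            cs.setString(2, "A-1001");
            cs.execute();
        }
    }
}
```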

Related

How stable are the Neo4j IDs?

I know that you're not supposed to rely on IDs as identifiers for nodes over the long term, because when you delete nodes the IDs may be re-assigned to new nodes (ref).
Neo4j reuses its internal ids when nodes and relationships are deleted. This means that applications using, and relying on internal Neo4j ids, are brittle or at risk of making mistakes. It is therefore recommended to rather use application-generated ids.
If I'm understanding this correctly, then looking up a node/relationship by its ID only puts you at risk when you can't guarantee that it hasn't been deleted in the meantime.
If, through my application design, I can guarantee that the node with a certain ID hasn't been deleted since the time the ID was queried, am I alright to use the IDs? Or is there still some problem that I might run into?
My use case is that I wish to perform a complex operation which spans multiple transactions. And I need to know if the ID I obtained for a node during the first transaction of that operation is a valid way of identifying the node during the last transaction of the operation.
As long as you are certain that a node/relationship with a given ID won't be deleted, you can use its native ID indefinitely.
However, over time you may want to add support for other use cases that will need to delete that entity. Once that happens, your existing query could start producing intermittent errors (that may not be obvious).
So, it is still generally advisable to use your own identification properties.
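For illustration, a small sketch of that approach with the Neo4j Java driver; the Person label, uuid property and connection details are made-up placeholders:

```java
// Sketch of the "application-generated id" approach with the Neo4j Java driver (4.x).
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;
import org.neo4j.driver.Values;

import java.util.UUID;

public class StableIdExample {
    public static void main(String[] args) {
        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "secret"))) {
            String uuid = UUID.randomUUID().toString();

            // First transaction of the long-running operation: store our own id.
            try (Session session = driver.session()) {
                session.run("CREATE (p:Person {uuid: $uuid, name: $name})",
                        Values.parameters("uuid", uuid, "name", "Alice"));
            }

            // Last transaction: look the node up by the application id, which stays
            // valid even if internal ids are recycled by deletes elsewhere.
            try (Session session = driver.session()) {
                String name = session.run(
                        "MATCH (p:Person {uuid: $uuid}) RETURN p.name AS name",
                        Values.parameters("uuid", uuid))
                        .single().get("name").asString();
                System.out.println(name);
            }
        }
    }
}
```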

How to connect JBoss BRMS (6.4.0.GA) to any database

I have a SQL Server database with a Person table, and I want to load a list of these people from the database into an ArrayList or List in the BRMS to apply the rules. How can I do this?
The best practice is to delegate the data retrieval logic to the caller.
The pattern should be:
Retrieve the data from a DB or whatever source you have
Fill in the data in the Working Memory
Fire the rules
Collect the results
Depending on the application you can use the results to update a DB
The BRMS has the ability to retrieve data in the rule logic, but it should be considered bad practice, or something to do only when no other option is available (a really rare situation). Otherwise, BRMS performance will be terrible and the overall code really hard to maintain.
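As a concrete sketch of the pattern above using the KIE API; the session name "ksession-rules" and the Person fact class are placeholders for your own kmodule configuration and domain model:

```java
// Hedged sketch: the caller loads the data, the rules only see plain facts.
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

import java.util.List;

public class RuleRunner {

    /** Hypothetical fact class mapped from the Person table. */
    public static class Person {
        public String name;
        public boolean approved;   // something the rules might set
    }

    /** The caller loads the Person rows (e.g. with plain JDBC/JPA) and passes them in. */
    public List<Person> applyRules(List<Person> people) {
        KieServices ks = KieServices.Factory.get();
        KieContainer container = ks.getKieClasspathContainer();
        KieSession session = container.newKieSession("ksession-rules");
        try {
            people.forEach(session::insert);   // 2. fill the working memory
            session.fireAllRules();            // 3. fire the rules
        } finally {
            session.dispose();
        }
        return people;                         // 4-5. caller collects results / updates the DB
    }
}
```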

Using Advantage Database Server and JDBC connection

I am trying to figure out how I can use a JDBC connection and SQL statements to read DBF/CDX tables that reside on an Advantage Database Server (using a free connection) and find "deleted" records. (Those records that are logically deleted and not physically deleted.)
I know that I can include or exclude deleted records using a connection property; however, my question is how to include them and then later identify them.
Unfortunately, I don't think there is a simple way to do that through JDBC and an SQL-only solution. As you state, it is possible to display the logically deleted records by including ShowDeleted=true; in the connection string. But after doing that, it is not possible to distinguish between the deleted and non-deleted DBF records in an SQL statement.
It might be possible to write an Advantage Extended Procedure that uses a navigational approach to return information about logically deleted records, but that could be a fair bit of work to do from scratch. Another idea (a rather messy/ugly one) would be to use two separate connections to the same table, where one shows deleted records and the other doesn't, and then compare ROWIDs to isolate the deleted records. Not fun.
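If you did want to try that two-connection idea anyway, a rough JDBC sketch might look like the following; the driver class name, connection URLs and table name are assumptions to verify against the Advantage JDBC documentation, and it assumes the ROWID pseudo-column is selectable:

```java
// Rough sketch: diff the ROWIDs seen with and without ShowDeleted.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.HashSet;
import java.util.Set;

public class DeletedRecordFinder {
    public static void main(String[] args) throws Exception {
        Class.forName("com.extendedsystems.jdbc.advantage.ADSDriver");
        String withDeleted    = "jdbc:extendedsystems:advantage://server:6262/data/mydb;ShowDeleted=true";
        String withoutDeleted = "jdbc:extendedsystems:advantage://server:6262/data/mydb;ShowDeleted=false";

        Set<String> all  = rowIds(withDeleted);    // deleted + live records
        Set<String> live = rowIds(withoutDeleted); // live records only
        all.removeAll(live);                       // what remains is logically deleted
        System.out.println("Deleted ROWIDs: " + all);
    }

    private static Set<String> rowIds(String url) throws Exception {
        Set<String> ids = new HashSet<>();
        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT ROWID FROM mytable")) {
            while (rs.next()) {
                ids.add(rs.getString(1));
            }
        }
        return ids;
    }
}
```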
It doesn't help you now, but the forthcoming release (v12) will contain an SQL scalar function DELETED() that returns true/false for logically deleted records. That is exactly what you need but won't be available until later in 2014.

Atomic changes with Simperium?

Is there a way to ensure an ordered atomic change set from Simperium?
I have a data model with complex relationships. Looking over things, it seems possible for the object graph to enter an invalid state if the communication pipe is severed. Is there a way to indicate to Simperium that a group of changes belong together? This would be helpful, as the client or server could then refuse to apply those changes unless all the data from a "transaction" is present, thus keeping the object graph in a valid state.
Presently it's expected that your relationships are marked as optional, which allows objects to be synced and stored in any order without technically violating your model structure. Relationships are lazily re-established by Simperium at first opportunity, even if the connection is severed and later restored.
But this approach does pass some burden to your application logic. The code is open source, and suggestions for changes in this regard are welcome.

Core Data vs. SQLite for offline persistence of an existing database exposed via OData

I am creating an app that requires "offline" persistence of its data, which is exposed via an OData web service. The OData service gives me access to all the tables of the underlying database, as well as all the relevant database fields such as IDs.
Additionally, I already have a SQLite database schema that I can use.
My question, which I have already flip-flopped on twice, is whether it is better to store the web service data on the device via SQLite directly (using FMDB), or to leverage Core Data?
If I use Core Data, then I lose the relational benefits of Primary and Foreign keys, but gain the benefit of automatically nested/populated NSManagedObjects. I'm not totally sure of how best to recreate the relational nature of my data objects.
If I use SQLite, I can just straight insert/update the results of the web service calls, and automatically get relationships from existing Foreign Key columns. The downside is I probably need to manually encapsulate my results in POCO objects.
My gut right now is telling me SQLite, but it seems as though the community overwhelmingly points to Core Data in any/all cases. If Core Data, how do I best create and maintain object relationships (especially when they are 1->many)?
This app will not go into the app store, if any Apple-happy aspects are of issue.
Core Data models relationships directly. So in your schema you might say, e.g., that object A has a relationship with object B and that the relationship is 'to many'. However, the relationships work like normal object references: you need to link each instance of A to all relevant instances of B; you don't [easily, or usually] just say 'A relates to B through foreign key bID' and then have the relationship deal with itself.
If you have a SQL persistent store then the way that's implemented is that each object gets an implicit unique key for its table. Relationships are modelled as an extra column that holds the key or keys of every linked object in the foreign table.
Other things people tend not to like about Core Data:
if you rely consistently on the implicit data fetches then you'll often get poor performance, so you often end up with explicit queries anyway in order to populate results you're probably about to look at in a single database trip;
since SQLite is not thread safe and Core Data objects maintain a live connection to their stores, Core Data objects are not thread safe (though objectID references to them are and you can fetch similarly safe dictionaries instead of live objects if you prefer);
even once you've otherwise solved the threading issue, saves in the background still block accesses in the foreground as per the SQLite thread safety comment.
Conversely:
since iOS 5 you can use NSIncrementalStore so that you just run Core Data queries and your Core Data store is smart enough to go to the server if it needs to — the main body of your code has no idea of whether data is local or remote and doesn't need to repeat itself when declaring what it's going to look for;
you get the live database connection for free, so your objects automatically update themselves if the persistent store changes;
if you're looking mainly to do iPhone-style table views then the work is almost entirely done for you already, you pretty much just supply the query;
Core Data has a sophisticated system of faulting that largely resolves memory footprint issues when dealing with large data sets.
