Does Cosmos block a partition while executing a stored procedure?

I'm writing a sproc that increments a value in a "meta" document for every document the sproc inserts. This will only work if the partition is locked while the sproc executes. Does Cosmos lock the partition for writes by other callers, or are concurrent writes in the partition allowed while the sproc executes?
Thanks in advance for your help.

Cosmos DB uses optimistic locking: concurrency is handled with ETags rather than by blocking other writers.
More details can be found under Optimistic Concurrency Control in the documentation.
Hope this helps.
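For illustration, here is a small in-memory sketch of how ETag-based optimistic concurrency behaves. The `store`, `read`, and `replace` helpers are hypothetical stand-ins for the real Cosmos DB SDK calls, not actual API names; only the check-the-ETag-before-writing pattern is the point.

```javascript
// Minimal sketch of ETag-based optimistic concurrency (illustration only;
// `store`, `read`, and `replace` are stand-ins, not the Cosmos SDK API).
const store = new Map();

function read(id) {
  const entry = store.get(id);
  return entry ? { ...entry.doc, _etag: entry.etag } : null;
}

// Replace succeeds only if the caller's ETag still matches the stored one,
// i.e. nobody else wrote the document since the caller read it.
function replace(id, doc, ifMatchEtag) {
  const entry = store.get(id);
  if (!entry || entry.etag !== ifMatchEtag) {
    throw new Error("PreconditionFailed (412): document changed since read");
  }
  store.set(id, { doc, etag: String(Number(entry.etag) + 1) });
}

// Seed a "meta" document, then simulate two concurrent writers.
store.set("meta", { doc: { count: 0 }, etag: "1" });

const a = read("meta");
const b = read("meta");
replace("meta", { count: a.count + 1 }, a._etag);   // first writer wins

let conflict = false;
try {
  replace("meta", { count: b.count + 1 }, b._etag); // stale ETag: rejected
} catch (e) {
  conflict = true;                                  // caller must re-read and retry
}
```

So nothing is blocked while the sproc runs; a concurrent writer simply loses the ETag race and has to re-read and retry.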

Related

CosmosDB ACID transactions (commit and rollback) using the .NET API

I want to write some (potentially) complex operations against my CosmosDB database which, most importantly, can consist of multiple CRUD operations, and it is of course really important that if one of the operations fails, I can roll back the whole transaction. Are JavaScript stored procedures the only way to achieve this? Would this mean that I would write them as JavaScript files and execute them using the .NET API (because my code is using the .NET Cosmos DB API)? Is this possible?
Thanks in advance
You are absolutely right. For the time being it's achievable only through stored procedures, because the logic has to run on the server side. You can definitely execute those stored procedures by calling them from the .NET API:
var sprocBody = File.ReadAllText(@"..\..\StoredProcedures\spHelloWorld.js");
If you need transactions within a logical partition key, you have options. (There is nothing wrong with storing different document types in one collection; just remember to include a type-name property to distinguish the JSON objects.)
1. TransactionalBatch in the .NET SDK: https://devblogs.microsoft.com/cosmosdb/introducing-transactionalbatch-in-the-net-sdk/
2. The bulk executor library for bulk insert/update operations, which also supports transactions within a partition: https://github.com/Azure/azure-cosmosdb-bulkexecutor-dotnet-getting-started
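The all-or-nothing semantics both options give you can be sketched in a few lines of plain JavaScript. This is a conceptual model only (apply every operation to a working copy, commit only if all succeed), not the actual stored-procedure or TransactionalBatch API:

```javascript
// Conceptual sketch of all-or-nothing semantics: operations run against a
// working copy, and the real store is only replaced if every operation
// succeeds. This mirrors what a Cosmos sproc / TransactionalBatch gives you
// server-side; it is not the actual SDK API.
function runTransaction(store, operations) {
  const working = new Map(store);          // work on a copy
  for (const op of operations) {
    op(working);                           // any throw aborts everything
  }
  store.clear();                           // all ops succeeded: commit
  for (const [k, v] of working) store.set(k, v);
}

const db = new Map([["order:1", { total: 10 }]]);

// A batch whose second operation fails: nothing is committed.
let failed = false;
try {
  runTransaction(db, [
    (s) => s.set("order:2", { total: 20 }),
    (s) => { throw new Error("constraint violated"); },
  ]);
} catch (e) {
  failed = true;
}
const sizeAfterFailure = db.size;          // still 1: the first write rolled back

// The same batch without the failing step commits both documents.
runTransaction(db, [(s) => s.set("order:2", { total: 20 })]);
```

In Cosmos the scope of such a transaction is a single logical partition key, which is why both linked options require all documents in the batch to share a partition key.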

FireDAC: Array DML Progress

I'm inserting a huge amount of data into my remote MS SQL database using FireDAC's Array DML feature.
It works fine, but the Execute method takes a long time to complete. While Execute is running, I want to know FireDAC's internal progress so that I can show it to the user.
How can I get the actual status of the Execute method?
PS: Delphi XE4 and FireDAC v8
Thanks.
This is rather a question about the DBMS API: does it provide progress feedback or not? The nature of Array DML is that the full set of array items is sent to the DBMS as a single packet, and at the end of execution the DBMS provides feedback, again for the full set of array items. This reduces the number of roundtrips; feedback in between would increase it.
AFAIK, none of the APIs provides progress feedback, so FireDAC does not provide it either. If you need progress feedback, do not use Array DML; use the one-by-one ExecSQL approach instead.
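The trade-off described above (fewer roundtrips vs. progress feedback) also suggests a middle ground: execute the array in chunks and report progress between chunks. The idea is API-agnostic, so here is a plain-JavaScript sketch; `executeBatch` is a hypothetical stand-in for the actual Array DML call:

```javascript
// Middle-ground sketch: instead of one Array DML call for the whole array
// (no feedback) or row-by-row ExecSQL (maximum roundtrips), send the rows
// in chunks and report progress between chunks. `executeBatch` stands in
// for the real DBMS call.
function executeInChunks(rows, chunkSize, executeBatch, onProgress) {
  for (let i = 0; i < rows.length; i += chunkSize) {
    executeBatch(rows.slice(i, i + chunkSize));
    onProgress(Math.min(i + chunkSize, rows.length), rows.length);
  }
}

const sent = [];
const progress = [];
executeInChunks([1, 2, 3, 4, 5], 2,
                (batch) => sent.push(...batch),
                (done, total) => progress.push(done + "/" + total));
// progress: ["2/5", "4/5", "5/5"]
```

Larger chunks keep most of the roundtrip savings; smaller chunks give finer-grained progress.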

Is it necessary to lock simultaneous SQLite access for SELECT statements?

I am using FMDB to access the standard iOS internal SQLite database, with one db connection shared among multiple threads.
To make it thread-safe I'm locking access to the db to one block of code at a time. All works well, although the access to the db is now a bit of a bottleneck, obviously.
My question is: Can I ease this up a bit by allowing simultaneous queries from multiple threads, as long as they are all readonly SELECT statements?
I can't find an answer anywhere.
You cannot use the same connection to execute multiple queries at the same time.
However, for purely read-only accesses, you can use multiple connections.
You can have one FMDatabase object per thread. You may have to write code that tests for genuine busy conditions and handles them properly: for example, set busyRetryTimeout to a value appropriate for your situation (i.e., how long you want it to keep retrying under contention), and gracefully handle the case where the timeout expires and your database query fails.
Clearly, using a shared FMDatabaseQueue is the easiest way to do database interactions from multiple threads. See the Using FMDatabaseQueue and Thread Safety section of the FMDB README.
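The FMDatabaseQueue approach boils down to funneling every access through a serial queue so only one block touches the shared connection at a time. A plain-JavaScript sketch of that ordering guarantee (JS is single-threaded, so this models only the serialization, not real thread contention, and it is not the FMDB API):

```javascript
// Sketch of the idea behind FMDatabaseQueue: a single shared connection is
// safe if every access goes through a serial queue, so blocks never overlap.
// Not the FMDB API; just the serialization pattern.
class SerialQueue {
  constructor() { this.busy = false; this.pending = []; }
  inDatabase(block) {
    this.pending.push(block);
    if (this.busy) return;       // another caller is draining the queue
    this.busy = true;
    while (this.pending.length) this.pending.shift()();
    this.busy = false;
  }
}

const q = new SerialQueue();
const order = [];
q.inDatabase(() => {
  order.push("first");
  // A nested submission does not run immediately; it waits its turn.
  q.inDatabase(() => order.push("third"));
  order.push("second");
});
// order: ["first", "second", "third"]
```

This is why the queue is the easiest option: callers get exclusive access without writing any locking or busy-retry code themselves.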

Stored procedures fire and forget with Entity Framework

I am using Entity Framework 4.1 within an application. One of the requirements is to execute some stored procedures on the database, some of which take quite some time. Furthermore, these stored procedures do not return any results, so I only need to start them and forget about them.
Naturally, .NET waits for these operations to complete, so after some time it throws an exception stating that the "Timeout period has expired".
I know that I could fix that by setting the CommandTimeout property to a higher value, however I am looking for an alternative solution (if such even exists).
Is it possible to execute stored procedures using the Entity Framework as Fire-and-Forget?
Any help will be appreciated.
Regards
Stored procedures don't support fire-and-forget execution. You can use plain ADO.NET and execute the query asynchronously on a separate connection (with BeginExecuteNonQuery); EF itself doesn't support asynchronous execution. Another, more complex way that behaves like fire-and-forget is to create a SQL Agent job whose single step executes your stored procedure. Instead of calling the stored procedure directly, you call sp_start_job, which returns immediately after starting the job; the job then executes asynchronously without returning any result to your application.
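The sp_start_job pattern can be sketched as a "job agent" the caller hands work to: the call returns immediately, and the agent finishes the work on its own schedule. A plain-JavaScript illustration of that contract (hypothetical `JobAgent`, not the SQL Agent API):

```javascript
// Sketch of the sp_start_job pattern: the caller enqueues the long-running
// work and returns immediately; the agent completes it independently.
// `JobAgent` is a hypothetical illustration, not the SQL Agent API.
class JobAgent {
  constructor() { this.jobs = []; }
  startJob(job) {              // like sp_start_job: enqueue, return at once
    this.jobs.push(job);
    return "started";
  }
  runPending() {               // the agent runs jobs on its own schedule
    while (this.jobs.length) this.jobs.shift()();
  }
}

const agent = new JobAgent();
let done = false;
const status = agent.startJob(() => { done = true; });
const doneAtReturn = done;     // false: startJob did not wait for the work
agent.runPending();            // later, the job completes without the caller
```

The caller's only result is "the job was started", which is exactly why no timeout can fire on the application side.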

Massive memory leak in PowerShell script

I have a PowerShell 2.0 script on an XP OS. The purpose of the script is to extract data from an old Sybase database and populate a SQL Server 2008 database. The model I am using is to create OLEDB connections to the Sybase database. The script calls a series of stored procedures from the Sybase database, and the results are used to create an XML string. The XML string is queried for the input data for the SQL Server stored procedures. After each data element is created in the SQL Server database, the XML string is saved to a file. Every database connection is closed after execution completes.

It is a simple model, but it uses staggering amounts of memory: for transferring only 1000 rows of data, the script's memory footprint grows to 3 GB, and when the script completes the memory does not drop. In an attempt to rectify this problem I have added logic to free every variable when it is no longer used and to call garbage collection in every finally clause of every try block. I am aware that this is overkill, but I am trying anything that might reduce the memory usage.

I am in the process of looking for a memory-trace tool, but I am also looking for expert opinions on how to start tracking down this critical issue. I know I am missing something obvious, so any advice would be appreciated.
I found the source of the leak. I am using an OLEDB connection to a Sybase Server instance to get data to load into SQL Server, and 95% of the leak was isolated to a single function that invoked a Sybase stored procedure. Instead of returning the results in a result set, this procedure had the option of returning them either as output parameters or as a result set, and I had initially chosen the output-parameter option. For reasons that are not clear to me, the output-parameter mechanism caused a massive memory leak. I changed the logic to use a result set instead, and that resolved the leak. It is still not clear why output parameters were an issue, but the alternative approach corrected the problem.
