SQL Server: Concurrency when using temporary tables within stored procedures - asp.net-mvc

I have an ASP.NET MVC app published on an IIS server. I am using web gardening here, meaning the application pool has more than one worker process serving incoming requests. The app is also used by many clients.
The app calls a stored procedure which uses some local temporary tables (#example). For example:
BEGIN
    IF OBJECT_ID(N'tempdb..#MyTempTable') IS NOT NULL
    BEGIN
        DROP TABLE #MyTempTable
    END

    CREATE TABLE #MyTempTable
    (
        someField int,
        someFieldMore nvarchar(50)
    )

    -- ... use the temp table here ...

    -- ... and then drop it again at the end:
    DROP TABLE #MyTempTable
END
I am worried about concurrency. For example, what happens if one client calls the stored procedure while a previous call is still running? Can there be concurrency issues here?

IIS (like most web servers) uses threads to process requests. Each request is executed on a worker thread from the application pool. Unless they share resources, threads do not affect each other.
Local temporary objects are separated by session. If you have two queries running concurrently, they are two completely separate sessions and you have nothing to worry about. The login doesn't matter, and if you are using connection pooling that doesn't matter either. Local temporary objects (most often tables, but also stored procedures) are safe from being seen by other sessions.
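You can see this isolation in tempdb itself: SQL Server stores each session's local temp table under a uniquely suffixed physical name. A minimal sketch to try from two separate sessions (the suffix shown in the comment is illustrative):

CREATE TABLE #MyTempTable (someField int)

-- the physical name in tempdb carries a per-session suffix, which is how
-- SQL Server keeps each session's #MyTempTable distinct,
-- e.g. #MyTempTable____________00000000001F (padding and suffix vary)
SELECT name
FROM tempdb.sys.tables
WHERE name LIKE '#MyTempTable%'

DROP TABLE #MyTempTable

Two concurrent sessions produce two differently suffixed entries, so neither can touch the other's table.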
Even when multiple threads (i.e. requests) want to open a connection and execute the stored procedure, a connection that is in use cannot be handed out again by the connection pool, so each concurrent call runs in its own session and there is no danger there; this is explained in this thread.
Similarly, within a single connection, calls to the stored procedure do not interfere with each other: they execute sequentially, each call queued until the previous one has finished.

Related

Stored procedure hangs on statement.execute()

Why would a Snowflake stored procedure hang on a statement that, when executed outside the stored procedure, works? Further info: if I remove that statement from the stored procedure, the SP runs properly. How can this sort of thing be debugged?
(One more piece of info: running as a different user on a different schema, the SP works as intended.)
Update: running the SP on a different warehouse worked, so it might be a problem with the warehouse, not the schema.
Why would a Snowflake stored procedure hang on a statement that, when executed outside the stored procedure, works?
There can be multiple reasons: the query gets queued due to lack of resources, it is waiting for a lock to be freed (if it's a transactional query), etc.
How can this sort of thing be debugged?
Check the Query History page in the Snowflake UI. If your procedure-executed statement shows a queued status, you're likely running into a warehouse size limit or a maximum concurrency limit, which can be resolved by reconfiguring your warehouse (via auto-scaling and/or a larger warehouse size).
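A minimal sketch of that check in SQL (the warehouse name my_wh is a placeholder, and the exact EXECUTION_STATUS values may vary by account):

-- look for recently queued statements
SELECT query_id, query_text, execution_status, warehouse_name
FROM TABLE(information_schema.query_history(result_limit => 100))
WHERE execution_status = 'QUEUED';

-- if statements are queueing, use a larger size and/or multi-cluster auto-scaling
ALTER WAREHOUSE my_wh SET warehouse_size = 'LARGE';
ALTER WAREHOUSE my_wh SET min_cluster_count = 1;
ALTER WAREHOUSE my_wh SET max_cluster_count = 3;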

What will happen if I use one database connection instance in the whole application in a concurrent system?

I am working on an ASP.NET MVC + Web API Entity Framework code-first project using async methods. Currently, I am creating a database connection on a per-call basis.
Creating a database connection takes a small amount of time, so every request to the server spends that time opening a connection, and across thousands of requests this adds up.
Now, my question is: what will happen if I use one database connection instance for the whole application?
SQL Server and Oracle have connection pools, so the actual resources behind the database connection are typically reused even though you create a new connection object for each call.
If you have a singleton connection you won't be able to make concurrent database requests. You'll have a single-thread bottleneck to the database.
The connection pool will grow and shrink depending on how much concurrency your application actually has.
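You can observe that from the SQL Server side with a simple illustrative query:

-- one row per client application, with its current number of open sessions
SELECT program_name, COUNT(*) AS open_sessions
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
GROUP BY program_name
ORDER BY open_sessions DESC;

Run it while load-testing and you can watch the pool grow under concurrency and shrink again when the app goes idle.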

Persistent messages over channels

I am building an app using the Phoenix framework that will use thousands of channels; users will be sending messages containing longitude/latitude info every second.
So I will be receiving thousands of messages per second.
The question is: how should I store each message I receive in the DB?
Please bear with me while I describe my thoughts on handling message storage in the DB; I am considering the following options:
1. Use the same connection (channel) process to store the message in the DB.
Possible caveat: would this affect the latency and performance of the channel?
2. Create a dedicated DB process for each channel to store its messages.
Possible caveat: if I have 100,000 channels, I will need another 100,000 processes to store messages. Is this fine, considering that Erlang processes are light and cheap?
3. Create one DB process for all channels, where each message from any channel is queued and then stored by this single DB process.
Possible caveat: with one process storing the messages of thousands of channels, its message queue will grow large; will it be slow?
So, which is the recommended approach for storing messages coming every second from thousands of channels?
EDIT
I will be using DynamoDB, which will scale to handle thousands of concurrent requests with ease.
The most important question is whether the request on the channel can be acknowledged before it's written to the DB or not. You need to consider what would happen if the connection process responded to the client and then something happened to the DB, so that the data was never written. If that data loss is acceptable, then the DB access can be completed asynchronously; if not, it needs to be synchronous, i.e. respond to the client only after the DB has confirmed that it stored the request.
If the data can be stored in the database asynchronously, then it's better to either spawn a new process to complete the write or add it to a queue (options 2 and 3). If the data has to be stored synchronously, then it's easier to handle it in the same process (option 1). Note that the request has to be copied between processes, which may hurt performance if the DB write is handled in a different process and the message size is big.
It's possible to improve the reliability of the asynchronous write, for example by storing the request somewhere persistent before it's written to the DB, then replying to the client, and then trying to complete the DB write, which can be retried if the DB is down. But this complicates things a bit.
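One way to picture that "persist first, write later" idea is a plain SQL staging table (the schema and names here are hypothetical, and the main store could just as well be DynamoDB):

-- messages are acknowledged to the client once this insert commits
CREATE TABLE pending_messages (
    id          bigint PRIMARY KEY,
    channel_id  bigint NOT NULL,
    lat         double precision NOT NULL,
    lng         double precision NOT NULL,
    written     boolean NOT NULL DEFAULT false
);

-- a background writer drains unwritten rows, retrying while the main DB is down
SELECT id, channel_id, lat, lng FROM pending_messages WHERE written = false;
-- ... write the batch to the main store, then mark those rows done:
UPDATE pending_messages SET written = true WHERE id IN (1, 2, 3);  -- ids just written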
You also need to determine the bottleneck: what would be the slowest part of the architecture? If it's the DB, then it doesn't matter whether you create one queue of requests to the DB or a new process per connection; the requests will pile up either in the memory of that single process or in the number of created processes.
I would probably determine how many parallel connections the DB can handle without sacrificing too much latency, and create a pool of processes to handle the requests. Then I would create a queue to dispatch requests to those pooled processes. To handle bigger messages I would obtain a token (permission to write) from the queue and connect to the DB directly, to avoid copying the message too much. That architecture would be easier to extend if any bottlenecks are found later, e.g. persistently storing incoming messages before they can be written to the DB, or balancing requests across additional nodes when the DB is overloaded.

Where to initiate and manage background operations in Asp.Net MVC

The first operation will carry out several calculations and update the same tables that users also access. These processes don't depend on any individual request or state and will always be running.
Should I put the first operation in a separate application/machine?
The second operation acts like a manager across all requests and will be running continuously.
How do I initiate and maintain the second operation? Do I start an admin request, or can I initiate it at a global level automatically?
This post (https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/) explains how to implement scheduled/background tasks in ASP.NET MVC. Otherwise you can use Windows Services or WCF services. You can use DB tables to synchronize background jobs with requests, as sketched below.
You don't need a separate application/machine, but it depends on your requirements, your architecture (single server or farm), and your performance goals.
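If you go the DB-table route, a common T-SQL pattern lets concurrent workers claim jobs without colliding (table and column names are hypothetical):

CREATE TABLE dbo.BackgroundJobs
(
    JobId     int IDENTITY(1,1) PRIMARY KEY,
    JobType   nvarchar(50) NOT NULL,
    Status    nvarchar(20) NOT NULL DEFAULT 'Pending',  -- Pending / Running / Done / Failed
    StartedAt datetime2 NULL
);

-- each worker atomically claims one pending job;
-- READPAST skips rows already locked by other workers
UPDATE TOP (1) dbo.BackgroundJobs WITH (UPDLOCK, READPAST)
SET Status = 'Running', StartedAt = SYSUTCDATETIME()
OUTPUT inserted.JobId, inserted.JobType
WHERE Status = 'Pending';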

Using multithreading for making queries in Delphi

I've recently been using threads to run queries against a MySQL database. I use MyDAC for the DB connection. Because TMyConnection does not allow simultaneous queries on a single connection, I create a new connection and a new query object for every thread executing a query, so at any given time the server may have several connections per client. If we consider this scenario with several clients connecting to the database, this could be a problem, I guess. Is there a better solution for using threads in queries?
Thanks in advance
Use a second tier where you can pool some connections (you can do this with DataSnap or RemObjects...). This way you can reuse connections across all of your users and keep the number of connections at a lower level.
Have a look at Cary Jansen's article called
Using Semaphores in Delphi, Part 2: The Connection Pool
He goes into great detail about how to provide thread-safe access to a limited number of database connections.
Getting his code to work with MyDAC's TMyConnection is trivial.
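Before restructuring, it can help to measure how many connections each client actually holds; an illustrative MySQL query:

-- one row per client account/host, with its current connection count
SELECT user, SUBSTRING_INDEX(host, ':', 1) AS client_host, COUNT(*) AS connections
FROM information_schema.processlist
GROUP BY user, client_host;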
