How to run a time-consuming Aurora-MySQL stored procedure serverlessly, beyond the 300-second limit

We are running stored procedures on AWS Aurora-MySQL from AWS Lambda.
Some stored procedures take more than 300 seconds, so Lambda cannot usefully wait and keep the MySQL connection alive until the process is done.
Is there any other serverless way to run a routine on Aurora without the time restriction?

If a stored procedure takes more than 300 seconds, I'd look into:
1) Optimizing the stored procedure itself.
2) Redesigning with asynchronous processing principles, using SQS or a similar messaging service, and having Aurora call a Lambda function once the stored procedure has finished (see the sketch below).
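A minimal sketch of option 2, assuming Python with boto3 and PyMySQL; the queue URL, cluster endpoint, credentials, and procedure name are hypothetical. The Lambda only enqueues the job, while a long-lived worker outside Lambda (for example a container or EC2 instance, which has no 300-second limit) runs the stored procedure:

import json
import boto3
import pymysql

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/aurora-jobs"  # hypothetical

def lambda_handler(event, context):
    # Lambda entry point: push the job description to SQS and return immediately.
    sqs = boto3.client("sqs")
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"procedure": "long_running_proc", "args": event.get("args", [])}),
    )
    return {"status": "queued"}

def worker_loop():
    # Runs outside Lambda (ECS/EC2): consume jobs and call the procedure on Aurora.
    sqs = boto3.client("sqs")
    conn = pymysql.connect(host="mycluster.cluster-xxxx.us-east-1.rds.amazonaws.com",  # hypothetical
                           user="app", password="secret", database="appdb")
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
        for msg in resp.get("Messages", []):
            job = json.loads(msg["Body"])
            with conn.cursor() as cur:
                cur.callproc(job["procedure"], job["args"])  # may run well past 300 seconds here
            conn.commit()
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

On the Aurora side, the cluster can also notify a Lambda function when the procedure finishes, for example via the mysql.lambda_async integration, as option 2 suggests.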

Related

AWS Lambda and API Gateway timeout only in Lambda

I uploaded some working code to Lambda.
Before uploading, I checked that my code works well on my local server running Django REST Framework, and it also works in a Colab environment.
But in Lambda, some cases that need more computation time give me timeouts.
I know API Gateway has a 29-second time limit, but on my local server even more complicated cases finish within 10 seconds.
I know Lambda has a cold start problem, but it still takes much more time than when run on my local server. I want to know why, and whether there is any solution.

Azure SQL parallel stored procedure taking longer than expected

I am facing an issue in Azure SQL. I have written 4 stored procedures, let's say sp1, sp2, sp3, sp4 (all of them read-only).
When I run those stored procedures serially, like this:
exec sp1 (took 3s)
go
exec sp2 (took 4s)
go
exec sp3 (took 6s)
go
exec sp4 (took 2s)
Run serially, all the stored procedures return their result sets within 15 seconds. I want to call them in parallel (concurrently) from an application so that they all return within about 6 seconds (6 seconds being the longest any single procedure takes). I used an asynchronous calling method to invoke them, but I found the parallel call still took the full 15 seconds.
From the Azure SQL logs I can see that all four stored procedures start within milliseconds of each other, so they are being called in parallel, yet each one takes longer to execute. I then switched back to synchronous calls and again got 3, 4, 6, and 2 seconds, so the net result is the same for both the synchronous and the parallel call.
My question is: is there any setting in Azure SQL that increases execution time when the procedures are called in parallel?
I used MAXDOP=8
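For reference, the concurrent calling pattern described above usually looks something like the following; this is only an illustrative sketch, assuming Python with pyodbc, with a placeholder connection string and the procedure names from the question, and one connection per call so the executions can genuinely overlap:

from concurrent.futures import ThreadPoolExecutor
import pyodbc

CONN_STR = ("Driver={ODBC Driver 18 for SQL Server};"
            "Server=tcp:myserver.database.windows.net,1433;"
            "Database=mydb;Uid=user;Pwd=secret;Encrypt=yes;")  # placeholder

def run_proc(name):
    # A separate connection per call, so the procedures are not serialized behind a single session.
    conn = pyodbc.connect(CONN_STR)
    try:
        rows = conn.cursor().execute("EXEC " + name).fetchall()
        return name, len(rows)
    finally:
        conn.close()

with ThreadPoolExecutor(max_workers=4) as pool:
    for name, count in pool.map(run_proc, ["sp1", "sp2", "sp3", "sp4"]):
        print(name, count, "rows")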

Calling an Azure Function to run a Snowflake procedure to load data causes a timeout on the Consumption plan; is there another way to achieve this?

I want to trigger a procedure in a Snowflake warehouse to load a file from Azure Blob Storage. For that I have implemented the Snowflake connector as an Azure Function running on the Consumption (dynamic) plan. However, the Consumption plan has a default timeout of 5 minutes and a maximum of 10 minutes, while my data is around 50 GB and the load takes about 20 minutes on a medium-size Snowflake cluster. Is there any other way to achieve this?
If you want to get rid of this limitation, you have multiple solutions.
First, you can design a timer trigger to wake up the function before it times out. The timer trigger is periodic, and its period should be shorter than your function's timeout.
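A minimal sketch of such a periodic timer trigger, assuming the Azure Functions Python v2 programming model; the 4-minute CRON schedule stays below the default 5-minute Consumption timeout:

import logging
import azure.functions as func

app = func.FunctionApp()

@app.schedule(schedule="0 */4 * * * *", arg_name="timer", run_on_startup=False)
def keep_alive(timer: func.TimerRequest) -> None:
    # Periodic wake-up; put any lightweight keep-alive or ping logic here.
    logging.info("Timer fired; past due: %s", timer.past_due)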
Second, because the timeout limit comes from the hosting plan, you can change your plan to get a longer limit:
In a serverless Consumption plan, the valid range is from 1 second to 10 minutes, and the default value is 5 minutes.
In the Premium plan, the valid range is from 1 second to 60 minutes, and the default value is 30 minutes.
In a Dedicated (App Service) plan, there is no overall limit, and the default value is 30 minutes. A value of -1 indicates unbounded execution, but keeping a fixed upper bound is recommended.
Related documentation: https://learn.microsoft.com/en-us/azure/azure-functions/functions-host-json#functiontimeout
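The limit itself is configured through the functionTimeout property in host.json; a minimal example (the 10-minute value shown is the Consumption-plan maximum mentioned above):

{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}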
Third, use Durable Functions. Under the Consumption plan, ordinary out-of-the-box functions run for up to 10 minutes, but if you use Durable Functions there is no such restriction. They also introduce support for stateful execution, which means that subsequent calls to the same function can share local variables and static members. This is an extension of the normal out-of-the-box function model, and it requires some additional boilerplate code to make all the functions work as expected.
More details about Durable Functions: https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-overview?tabs=csharp
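A minimal Durable Functions sketch, assuming the Python v2 programming model; the orchestrator is checkpointed by the runtime, and load_snowflake_stage is a hypothetical activity that would wrap the Snowflake connector call:

import azure.functions as func
import azure.durable_functions as df

app = df.DFApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.route(route="start-load")
@app.durable_client_input(client_name="client")
async def start_load(req: func.HttpRequest, client) -> func.HttpResponse:
    # Kick off the long-running workflow and return a status-query URL immediately.
    instance_id = await client.start_new("load_orchestrator")
    return client.create_check_status_response(req, instance_id)

@app.orchestration_trigger(context_name="context")
def load_orchestrator(context: df.DurableOrchestrationContext):
    # Each yield is a checkpoint; the orchestrator replays instead of blocking.
    result = yield context.call_activity("load_snowflake_stage", "azure://container/path")
    return result

@app.activity_trigger(input_name="stage")
def load_snowflake_stage(stage: str) -> str:
    # Hypothetical: call the Snowflake connector / COPY INTO procedure here.
    return "loaded " + stage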

SQL Server Express: How to determine SP memory usage

I am developing with Microsoft SQL Server 2008 R2 Express.
I have a stored procedure that uses temp tables and outputs some processed data usually within 1 second.
Over a few months, my DB has gathered a lot of data almost reaching the 10 GB limit. At this point, this particular stored procedure started taking as much as 5 mins for the same input parameters. After I emptied some of the huge tables in DB, it got back to normal.
After this incident, I am worried that my stored procedure may be using more space in the DB than necessary. How can I be sure? Any leads?
Thanks already
Jyotsna
Follow this article
Another old-school way is to run sp_who2, find the SPID related to your database, and check its CPU and IO usage.
To validate, run DBCC INPUTBUFFER(spid).
Also check the statistics of the stored procedure in the original scenario, without purging data from the tables:
SET STATISTICS IO ON;
EXEC [YourSPName];
SET STATISTICS IO OFF;
and look at the logical reads; also refer to the article.

Amazon Redshift: How to write query batches similar to stored procedures in SQL Server

We are trying to port a SQL Server-based application to Amazon Redshift. Redshift looks promising in terms of performance and scalability, but we are having trouble finding a replacement for stored procedures to execute queries in batches.
Thanks
Update: Stored Procedures are now supported in Amazon Redshift from version 1.0.7287 (late April 2019). Please review the document "Creating Stored Procedures in Amazon Redshift" for more information on getting started with stored procedures.
No neat replacement exists, sadly.
As I see it, you have a few choices:
Shell/CMD scripts that call psql, run as scheduled jobs.
An ETL tool (SSIS). Bear in mind it'll mostly be running shell/cmd scripts.
SQL Server stored procs that use xp_cmdshell to call psql.
Use AWS Data Pipeline to run your batch processing.
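If you go the scripted route, the same batch idea works from any PostgreSQL-compatible client rather than psql specifically. A sketch in Python with psycopg2, where the cluster endpoint, credentials, and SQL statements are placeholders, running a batch in one transaction the way a simple stored procedure would:

import psycopg2

BATCH = [
    "CREATE TEMP TABLE stage AS SELECT * FROM raw_events WHERE event_date = CURRENT_DATE;",
    "DELETE FROM daily_summary WHERE event_date = CURRENT_DATE;",
    "INSERT INTO daily_summary SELECT event_date, COUNT(*) FROM stage GROUP BY event_date;",
]

conn = psycopg2.connect(
    host="my-cluster.xxxx.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="analytics", user="etl", password="secret")
try:
    with conn.cursor() as cur:
        for statement in BATCH:
            cur.execute(statement)  # all statements run in one transaction
    conn.commit()
except Exception:
    conn.rollback()
    raise
finally:
    conn.close()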
