When a task is resumed by a user different from the owner, does it execute with the privileges of the creator?

I have a task that calls a stored procedure, which in turn performs DML operations on objects. The task is owned by USER1. When USER2 resumes the task, does USER2 need all of the privileges required to complete the process (USAGE on the stored procedure, DML privileges on the affected objects)?
Or does the task always execute with its owner's rights, regardless of who resumes it?
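For context, here is a minimal sketch of the grants involved, assuming Snowflake's documented model in which a task executes with the privileges of its owning role (the database, schema, task, and role names below are made up for illustration):

-- USER1's role owns the task, the stored procedure, and the target tables.
-- USER2 only needs OPERATE on the task to suspend or resume it:
GRANT OPERATE ON TASK my_db.my_schema.my_task TO ROLE user2_role;

-- USER2 (with user2_role active) can now resume the task:
ALTER TASK my_db.my_schema.my_task RESUME;

-- When the task later fires on its schedule, it runs with the privileges of
-- the owning role (USER1's role), so that role is the one that needs USAGE
-- on the procedure and DML privileges on the affected objects.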

Related

SQL Server: Concurrency when using temporary tables within stored procedures

I have an ASP.NET MVC app deployed on an IIS server. I am using web gardening here, meaning the application pool has more than one worker process to serve incoming requests. The app is also used by many clients.
This app calls a stored procedure that uses local temporary tables (#example). For example:
BEGIN
    IF OBJECT_ID(N'tempdb..#MyTempTable') IS NOT NULL
    BEGIN
        DROP TABLE #MyTempTable
    END

    CREATE TABLE #MyTempTable
    (
        someField int,
        someFieldMore nvarchar(50)
    )

    -- ... use of the temp table here ...
    -- ... and then drop the table again at the end:
    DROP TABLE #MyTempTable
END
I am worried about concurrency. For example, what happens if one client calls the stored procedure while a previous call is still running? Could there be concurrency issues here?
IIS (like most web servers) uses threads to process requests. Each request is executed on a thread from the application pool. Unless they share resources, threads do not affect each other.
Local temporary objects are separated by Session. If you have two queries running concurrently, then they are clearly two completely separate sessions and you have nothing to worry about. The Login doesn't matter. And if you are using Connection Pooling that also won't matter. Local temporary objects (Tables most often, but also stored procedures) are safe from being seen by other sessions.
Even when multiple threads (requests) want to use a connection and execute the stored procedure, the connection pool will not hand the same connection to two threads at once, so there is no danger. This is explained in this thread.
Likewise, when one thread uses its connection to execute the stored procedure, it has no effect on the others: each call runs in its own session, so concurrent executions of the same stored procedure do not interfere with each other's temporary tables.
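To see the session isolation for yourself, here is a small sketch using the temporary table from the question (nothing else is assumed); run it from two separate sessions at the same time:

CREATE TABLE #MyTempTable (someField int, someFieldMore nvarchar(50))

-- Each session gets its own copy. The real object name in tempdb carries a
-- per-session suffix, which you can inspect with:
SELECT name FROM tempdb.sys.objects WHERE name LIKE N'#MyTempTable%'
-- e.g. #MyTempTable________...________00000000001A in one session and a
-- different suffix in the other, so concurrent calls never share the table.

DROP TABLE #MyTempTable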

Rails/Postgres - What type of DB lock do I need?

I have a PendingEmail table to which I push many records for emails I want to send.
I then have multiple Que workers which process my app's jobs. One of said jobs is my SendEmailJob.
The purpose of this job is to check PendingEmail, pull the latest 500 records ordered by priority, make a batch request to my 3rd-party email provider, wait for the array of all 500 responses, then delete the successful items and set the failed records' error column. The job continues in this fashion until the DB returns zero records, at which point it exits/destroys itself.
The issues are:
It's critical that only one SendEmailJob processes email at a time.
I need to check the database every second in case no SendEmailJob is currently running. If one is running, there's no issue, as that job will get to the new records in ~3 seconds.
If the table is locked (however that is done), my app/other workers MUST still be able to INSERT, as other parts of my app need to add emails to the table. I mainly just need to restrict SELECTs, I think.
All this needs to be FAST. Part of the reason I did it this way is for performance, as I'm sending millions of emails in a short timespan.
Currently my jobs are initiated with a clock process (Clockwork), so it would add this job every 1 second.
What I'm thinking...
Que already uses advisory locks and other PG mechanisms, so I'd rather not mess with that table to prevent more than one job being added in the first place. Instead, I think it's OK if many SendEmailJobs are potentially enqueued at once, as long as they abort early when a lock is already in place.
Apparently there are some Rails ways to do this, but I assume I will need to execute SQL directly against PG to take some sort of lock in each job; before doing the work, each job would check whether the lock is already held and, if so, abort.
I just don't know which type of lock to choose, or whether to take it in Rails or directly in the database. There are so many of them, with such subtle differences (I'm using PG). Any insight would be greatly appreciated!
Answer: I needed an advisory lock.
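For anyone who finds this later, a minimal sketch of what that looks like with a session-level advisory lock (the lock key 123456 is arbitrary; pick any constant your app reserves for this job):

-- At the start of each SendEmailJob: try to take the lock without blocking.
-- Returns true if this session got the lock, false if another job holds it.
SELECT pg_try_advisory_lock(123456);

-- If false: abort the job immediately.
-- If true: pull the next 500 rows, send the batch, delete/mark rows, repeat.
-- Other sessions can still INSERT into the PendingEmail table freely, because
-- an advisory lock does not lock the table itself.

-- When the job finishes (e.g. in an ensure block), release the lock:
SELECT pg_advisory_unlock(123456);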

Creating a FIFO queue in SWF to control access to critical code sections

At the moment we have an Amazon Simple Workflow application that has a few tasks that can occur in parallel at the beginning of the process, followed by one path through a critical region where we can only allow one process to proceed.
We have modeled the critical region as a child workflow and we only allow one process to run in the child workflow at a time (though there is a race condition in our code that hasn't caused us issues yet). This is doing the job, but it has some issues.
We have a method that keeps checking whether the child workflow is running; if it isn't, we proceed, otherwise it throws an exception and retries with exponential backoff (the race condition mentioned above: the "is it running" check and the start are not atomic). The problems are: 1. With multiple workflows entering, which workflow proceeds first is non-deterministic; it would be better if this were a FIFO queue. 2. We can end up waiting a long time for the next workflow to start, so there is wasted time; it would be nice if each workflow proceeded as soon as the previous one finished.
We can address point 2 by reducing the retry interval, but we would still have the non-FIFO problem.
I can imagine modeling this quite easily on a single machine with a queue and locks, but what are our options in SWF?
You can have a "critical section" workflow that is always running, and signal it to queue execution requests. Upon receiving a signal, the "critical section" workflow either starts the activity (if it is not already running) or queues the request in the decider. When the activity execution completes, a "response" signal is sent back to the requesting workflow. Because the "critical section" workflow is always running, it has to periodically restart itself as new (passing the list of outstanding requests as a parameter), the same way all cron workflows do.

Is Sidekiq suitable for tasks that are highly mission-critical, where a one-time execution guarantee is required and single-threaded execution is necessary?

Example Scenario:
Payment handling and electronic-product delivery transaction.
Requirements
There are approximately a few thousand payment transactions a day that need to be executed, each taking about 1 second (so the entire process should take about an hour).
Transactions must be processed linearly in a single thread (the next transaction must not start until the last transaction has completed, strong FIFO order is necessary)
Each payment transaction is wrapped inside a database transaction; if anything causes the transaction to roll back, it is aborted and put into another queue for manual error handling. After that, processing should continue with the rest of the transactions.
Order of Importance
Single execution (if failed, put into error queue for manual handling)
Single Threadedness
FIFO
Is Sidekiq suitable for such mission-critical processes? Would Sidekiq be able to fulfill all of these requirements? Or would you recommend other alternatives? Could you point me to some best practices regarding payment handling in Rails?
Note: The question is not regarding whether to use stripe or ActiveMerchant for payment handling. It is more about the safest way to programmatically execute those processes in the background.
Yes, Sidekiq can fulfill all of these requirements.
To process your transactions one at a time in serial, you can launch a single Sidekiq process with a concurrency of 1 that only works on that queue. The process will work jobs off the queue one at a time, in order.
For a failed task to go into a failure queue, you'll need to use the Sidekiq Failures gem and ensure retries are turned off for that task.
To guarantee that each task is executed at least once, you can purchase Sidekiq Pro and use Reliable Fetch. If Sidekiq crashes, it will execute the task when it starts back up. This assumes you will set up monitoring to ensure the Sidekiq process stays running. You might also want to make your task idempotent, so it doesn't write the same transaction twice. (In theory, the process could crash after your database transaction commits, but before Sidekiq reports to Redis that the task completed.)
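On the idempotency point, one common sketch (assuming PostgreSQL; the payment_transactions table and idempotency_key column are hypothetical) is to give each job a unique key and let the database refuse duplicates, so a re-run after a crash cannot record the same payment twice:

-- Hypothetical table; the UNIQUE constraint is what provides idempotency.
CREATE TABLE payment_transactions (
    id              bigserial PRIMARY KEY,
    idempotency_key text NOT NULL UNIQUE,
    amount_cents    integer NOT NULL
);

-- A retried job inserts with the same key; the second attempt is a no-op.
INSERT INTO payment_transactions (idempotency_key, amount_cents)
VALUES ('order-1234-charge', 1999)
ON CONFLICT (idempotency_key) DO NOTHING;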
If using Ruby is a constraint, Sidekiq is probably your best bet. I've used several different queuing systems in Ruby and none of them have the reliability guarantee that Sidekiq does.
the next transaction must not start until the last transaction has completed
In that case, I think background processing is suitable as long as you create the next job at the completion of the previous job.
class Worker
  include Sidekiq::Worker

  def perform(*params)
    # do the work, raising an exception if necessary
    # enqueue the next job only once this one has finished
    NextWorker.perform_async(*params)
  end
end

iSeries stored procedure - how to get a handle on the spool file output?

We have a stored procedure written using a CL and RPG program combination. When called locally on the iSeries, all is fine. When called externally (for example from a SQL front end), the RPG program cannot get a handle on the spool file it produces, because the spool file appears under a different (random?) job number and user.
The jobs run as QUSER in the QUSRWRK subsystem, but the spool file gets the user ID under which the external connection was made in the connection pool (i.e. USERA).
Is there a way to reliably get a handle on the correct spool file as the job runs (rather than relying on picking the last spool file from that queue, etc.)?
If you're running a stored procedure (running in job QZDASOINIT), you will not be able to access the spooled output via the program status data structure. Those spooled files reside in a job named user/QPRTJOB, where user is the "current user" running the stored procedure. To access the spooled files, call the QSPRILSP API to obtain a structure that points you to the spooled file.
Both the behavior and the API are well documented in IBM's Information Center.
A server job (e.g., a database server instance for ODBC/JDBC) runs under a system user profile. For stored procs, the system user will usually be QUSER. Objects created within a job are usually owned by the job user.
Server jobs generally perform work on behalf of other users. You tell the server job which user when you establish a connection. (And note that during its lifetime a given server job might work on behalf of many different users.)
Particularly for spooled output, this is a problem because the spooling subsystem has been around longer than we've had the "Web" and before we had significant numbers of users connecting to remote databases. The behavior of switching around from user to user simply isn't part of the spooling subsystem's makeup, nor can a vendor such as IBM determine when a spooled file should be owned by a particular job user or connection user. (And spooling is not a major element of database connections.)
IBM did adapt how spooled output is associated with users by defaulting "switched user" output to collect in a job named QPRTJOB, but that doesn't quite fit with how you want a later RPG program to handle the output.
However, if you create a stored proc that generates spooled output, the proc can choose who owns the output and thereby choose to keep it within the same job. Consider these example CALLs, which can be pasted into the iSeries Navigator 'Run SQL Scripts' function:
call qsys2.qcmdexc ('OVRPRTF FILE(*PRTF) SPLFOWN(*JOB) OVRSCOPE(*JOB)' , 48);
call qsys2.qcmdexc ('DSPMSGD RANGE(CPF9899) MSGF(QCPFMSG) OUTPUT(*PRINT)' , 51);
call qsys2.qcmdexc ('DLTOVR FILE(*PRTF) LVL(*JOB)' , 28);
If you run them as a set, they create spooled output showing the attributes of the message description for CPF9899. If you check afterwards, you should see that QUSER now owns a spooled file named QPMSGD, and that it resides within the QZDASOINIT job that's handling your remote database requests. An RPG program within that job can easily find the "*LAST" spooled file in that case. Also, if you delete the first and last CALL and run just the middle one, you should find that you own the next spooled file.
(QUSER is the IBM default. If your system uses a different user profile, substitute that user for "QUSER".)
Recommendation:
Change your SP to issue an appropriate OVRPRTF command before spooling output that you need to capture in the job and to issue DLTOVR after the output is generated.
You might use commands similar to the ones shown here to create a test procedure. Experiment with different OVRSCOPE() settings and with FILE(*PRTF) or with specifically named files. Also, create output before and after the override commands to see how differently they behave.
Stay aware that the server job might handle a different user after your SP finishes (or a different SP might be called later in the job), so you'll want to be sure that DLTOVR runs. (That's one reason to keep it close to the OVRPRTF.)
I need a bit more information, but I'll make some assumptions. Please clarify if I assumed wrong.
The QUSER in QUSRWRK behavior is correct. You are now running through the SQL server (or similar server). All connections run under these settings.
There are a couple approaches.
1) Assuming that this all runs in one job, using '*' for the job information should work.
2) The other option is to use RTVJOBA CURUSER(&ME). The current user is the person who is logged in; USER would not work in this case.
If you can modify the RPG program, you can retrieve job information from the Program Status Data Structure, while the File Information Data Structure has the spool file number from the open feedback area. However, I'm not sure whether the job information will be for the QUSER job (not what you need) or for the USERA job (what you need). The spool file number could be enough of a handle for subsequent Print API calls.
The job itself knows or can find out (see previous answers) so, if all else fails, modify the program to place a message on a queue that provides the information you need. Read it off at your leisure.
