I have a stored procedure that is run based on plant name.
I have a maintenance plan that calls the stored procedure for all of the plants in a time zone, like this:
execute dbo.runPlantSP 'plant1';
execute dbo.runPlantSP 'plant3';
execute dbo.runPlantSP 'plant55';
The issue I am facing is that if an error occurs while execute dbo.runPlantSP 'plant1' runs, then it never runs for 'plant3' or 'plant55'. Is there a setting in the maintenance plan I can change to make it still attempt the next line? Or do I need to change the internals of my stored procedure to handle errors, so that if something happens in the stored procedure for 'plant1' we have a CATCH that handles it and the error does not bubble up to the maintenance plan and stop it?
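For what it's worth, a minimal sketch of the second option (catching errors around each call so one plant's failure doesn't stop the rest) might look like this in the T-SQL step; the error handling shown here is only illustrative:
BEGIN TRY
    EXECUTE dbo.runPlantSP 'plant1';
END TRY
BEGIN CATCH
    -- Log and carry on instead of letting the error abort the whole step
    PRINT 'plant1 failed: ' + ERROR_MESSAGE();
END CATCH;

BEGIN TRY
    EXECUTE dbo.runPlantSP 'plant3';
END TRY
BEGIN CATCH
    PRINT 'plant3 failed: ' + ERROR_MESSAGE();
END CATCH;

-- ...and so on for 'plant55' and the remaining plants. The same TRY/CATCH
-- could instead live inside dbo.runPlantSP itself.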
I need a way of shutting down the Xodus environment so that:
It waits for all the writing transactions in all threads to finish (or be aborted after a timeout as an option).
It blocks starting new transactions (or throws an exception as an option).
Safely closes the environment.
So far we tried something like this
if (env.isOpen()) {
    env.clear();
    env.close();
}
but I am not sure that it does exactly the right thing; I still get exceptions thrown from env.close() from time to time. So what is the proper way to do this?
First of all, note that Environment#clear() just clears all the data in your environment.
Minor: you don't have to check whether your environment is open before closing it.
If you don't care much about the application state, then you can set the exodus.env.closeForcedly option when creating the Environment:
Environment env = Environments.newInstance("db path", new EnvironmentConfig().setEnvCloseForcedly(true));
In that case, the close() method logs the number of transactions that have not finished at that moment and closes the environment anyway.
On the production server I got this error:
ActiveRecord::StatementInvalid: PG::QueryCanceled: ERROR: canceling statement due to statement timeout <SQL query here>
In this line:
Contact.where(id: contact_ids_to_delete).delete_all
The SQL query was a DELETE command with a huge list of ids. It timed out.
I came up with a solution which is to delete Contacts in batches:
Contact.where(id: contact_ids_to_delete).in_batches.delete_all
The question is, how do I test my solution? Or what is the common way to test it? Or is there any gem that would make testing it convenient?
I see two possible ways to test it:
1. (Dynamically) set the timeout in the test database to a small number of seconds, and create a test in which I generate a lot of Contacts and then try to run my code to delete them.
This seems to be the right way to do it, but it could potentially slow down test execution, and setting the timeout dynamically (which would be the ideal way to do it) could be tricky.
2. Test that deletions are in batches.
It could be tricky, because this way I would have to monitor the queries.
This is not an edge case that I would test for because it requires building and running a query that exceeds your database's built-in timeouts; the minimum runtime for this single test would be at least that time.
Even then, you may write a test for this that passes 100% of the time in your test environment but fails 100% of the time in production, because of differences between the two environments that you can never fully replicate: for one, your test database is used by a single concurrent user, while your production database will have multiple concurrent users, different available resources, and different active locks. This isn't the type of issue that you write a test for, because the test won't ensure it doesn't happen in production. Best practices will do that.
I recommend that you follow best practice for Rails and delete in bounded batches, with the expectation that the database server can successfully act on batches of 1000 records at a time. Note that in_batches yields ActiveRecord::Relation objects, which respond to delete_all, whereas find_in_batches and find_each yield arrays of records and individual records:
Contact.where(id: contact_ids_to_delete).in_batches do |batch|
  batch.delete_all
end
Or if you prefer:
Contact.where(id: contact_ids_to_delete).in_batches(&:delete_all)
You can tweak the batch size with the of: option if you're worried about your production database server not being able to act on 1000 records at a time:
Contact.where(id: contact_ids_to_delete).in_batches(of: 500) { |batch| batch.delete_all }
I am new to the tSQLt world (great tool set) and have encountered a minor issue with a stored procedure I am setting up a test for.
Suppose for some reason I have a stored procedure which connects to multiple databases or even multiple SQL Servers (linked servers).
Is it possible to do unit tests with tSQLt in such a scenario?
I commented already, but I would like to add some more detail. As I said, you can do anything that fits into a single transaction.
But for your case I would suggest creating synonyms for every cross-database/instance object and then using those synonyms everywhere.
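For illustration, creating such synonyms could look like this (the database, server, and object names below are hypothetical):
-- Point local synonyms at the cross-database and linked-server objects,
-- then reference only the synonyms in the code under test.
CREATE SYNONYM dbo.RemoteCustomers
    FOR OtherDatabase.dbo.Customers;                -- cross-database object

CREATE SYNONYM dbo.LinkedOrders
    FOR LinkedServer.OtherDatabase.dbo.Orders;      -- linked-server object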
I've created the following procedure to mock view/table synonyms. It has some limitations, but at least it can handle simple use cases.
CREATE PROCEDURE [tSQLt].[FakeSynonymTable] @SynonymTable VARCHAR(MAX)
AS
BEGIN
    -- Rename the synonym out of the way under a unique name
    DECLARE @NewName VARCHAR(MAX) = @SynonymTable + REPLACE(CAST(NEWID() AS VARCHAR(100)), '-', '');
    DECLARE @RenameCmd VARCHAR(MAX) = 'EXEC sp_rename ''' + @SynonymTable + ''', ''' + @NewName + ''';';
    EXEC tSQLt.SuppressOutput @RenameCmd;

    -- Create an empty table with the original name and the same structure
    DECLARE @sql VARCHAR(MAX) = 'SELECT * INTO ' + @SynonymTable + ' FROM ' + @NewName + ' WHERE 1 = 2;';
    EXEC (@sql);

    -- Fake that table so the test can populate it
    EXEC tSQLt.FakeTable @TableName = @SynonymTable;
END;
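A test would then fake a synonym before exercising the code under test, for example (dbo.RemoteCustomers being the hypothetical synonym from the example above):
EXEC tSQLt.FakeSynonymTable @SynonymTable = 'dbo.RemoteCustomers';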
Without sample code from you I am not certain of your exact use case, but this information may help.
The alternative approach for cross-database testing (assuming both databases are on the same instance) is to install tSQLt in both databases. Then you can mock the objects in the remote database in the same way that you would if they were local.
E.g. if you had a stored procedure in LocalDb that referenced a table in RemoteDb, you could do something like the following. Imagine you have a procedure that selects a row from a table called localTable in the local database and inserts that row into a table called remoteTable in the remote database (on the same instance):
create procedure [myTests].[test mySproc inserts remoteTable from local table]
as
begin
    -- Mock the local table in the local database
    exec tSQLt.FakeTable 'dbo.localTable';

    -- Mock the remote table (note the three-part object reference to RemoteDb)
    exec RemoteDb.tSQLt.FakeTable 'dbo.remoteTable';

    --! Data setup omitted
    --! exec dbo.mySproc @param = 'some value';

    --! Get the data from the remote table into a temp table so we can test it
    select * into #actual from RemoteDb.dbo.remoteTable;

    --! Assume we have already populated #expected with our expected results
    exec tSQLt.AssertEqualsTable '#expected', '#actual';
end
The above code demonstrates the basics but I blogged about this in more detail some years ago here.
Unfortunately, this approach will not work across linked servers.
What could trigger a deadlock-message on Firebird when there is only a single transaction writing to the DB?
I am building a webapp with a backend written in Delphi 2010 on top of a Firebird 2.1 database. I am getting a concurrent-update error that I cannot make sense of. Maybe someone can help me debug the issue or explain scenarios that may lead to the message.
I am trying an UPDATE to a single field on a single record.
UPDATE USERS SET passwdhash=? WHERE (RECID=?)
The message I am seeing is the standard:
deadlock
update conflicts with concurrent update
concurrent transaction number is 659718
deadlock
Error Code: 16
I understand what it tells me but I do not understand why I am seeing it here as there are no concurrent updates I know of.
Here is what I did to investigate.
I started my application server and checked the result of this query:
SELECT
A.MON$ATTACHMENT_ID,
A.MON$USER,
A.MON$REMOTE_ADDRESS,
A.MON$REMOTE_PROCESS,
T.MON$STATE,
T.MON$TIMESTAMP,
T.MON$TOP_TRANSACTION,
T.MON$OLDEST_TRANSACTION,
T.MON$OLDEST_ACTIVE,
T.MON$ISOLATION_MODE
FROM MON$ATTACHMENTS A
LEFT OUTER JOIN MON$TRANSACTIONS T
ON (T.MON$ATTACHMENT_ID = A.MON$ATTACHMENT_ID)
The result indicates a number of connections, but only one of them has non-NULL values in the MON$TRANSACTIONS fields. This connection is the one I am using from IBExpert to query the monitoring tables.
Am I right to think that a connection with no active transaction can be disregarded as not contributing to a deadlock situation?
Next I put a breakpoint on the line submitting the UPDATE statement in my application server and executed the request that triggers it. When the breakpoint stopped the application, I reran the monitoring query above.
This time I could see another transaction active, just as I would expect.
Then I let my appserver execute the UPDATE and got the error message shown above.
What can trigger the deadlock message when there is only one writing transaction? Or are there more and I am misinterpreting the output? Any other suggestions on how to debug this?
Firebird uses MVCC (Multiversion Concurrency Control) for its transaction model. One of its features is that, depending on the transaction isolation, you will only see record versions that were committed when your transaction started (consistency and concurrency isolation levels) or that were committed when your statement started (read committed). A change to a record creates a new version of the record, which only becomes visible to other active transactions once it has been committed (and then only to read committed transactions).
As a basic rule there can be only one uncommitted version of a record, so attempts by two transactions to update the same record will fail for one of those transactions. For historical reasons these types of errors are grouped under the deadlock error family, even though it is not actually a deadlock in the normal concurrency vernacular.
This rule is actually a bit more restrictive depending on your transaction isolation: for the consistency and concurrency levels there can also be no newer committed version of the record that is not visible to your transaction.
My guess is that for you something like this happened:
1. Transaction 1 starts
2. Transaction 2 starts with concurrency or consistency isolation
3. Transaction 1 modifies the record (new version created)
4. Transaction 1 commits
5. Transaction 2 attempts to modify the same record
(Note: steps 1+3 and step 2 could occur in a different order, e.g. 1, 3, 2 or 2, 1, 3.)
Step 5 fails, because the new version created in step 3 is not visible to transaction 2. If instead read committed had been used then step 5 would succeed as the new version would be visible to the transaction at that point.
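For completeness, a hedged sketch of what using read committed would look like in plain Firebird SQL; in a Delphi application this is normally configured on the transaction component rather than sent as a statement:
-- Read committed sees row versions committed after this transaction began,
-- so the update no longer conflicts with an already-committed change.
SET TRANSACTION READ WRITE WAIT
    ISOLATION LEVEL READ COMMITTED RECORD_VERSION;

UPDATE USERS SET passwdhash = ? WHERE (RECID = ?);

COMMIT;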
Recently I was tasked with creating a SQL Server Job to automate the creation of a CSV file. There was existing code, which was using an assortment of #temp tables.
When I set up the job to execute using BCP calling the existing code (converted into a procedure), I kept getting errors:
SQLState = S0002, NativeError = 208
Error = [Microsoft][SQL Native Client][SQL Server]Invalid object name #xyz
As described in other post(s), lots of people recommend resolving the problem by converting all the #temp tables to @table variables.
However, I would like to understand WHY BCP doesn't seem to be able to use #tempTables?
When I execute the same procedure from within SSMS it works though!? Why?
I did a quick and simple test using global temp tables (##) within a procedure and that seemed to succeed via a job using BCP, so I am assuming it is related to the scope of the #temp tables!?
Thanks in advance for your responses/clarifications.
DTML
You are correct in guessing that it's a scope issue for the #temp tables.
BCP is spawned as a separate process, so the tables are no longer in scope for that new process. SSMS likely uses sub-processes, so they would still have access to the #temp tables.
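For illustration, a hedged sketch of the working pattern you describe, with hypothetical procedure, table, file, and server names: build the output in a global temp table (##), which is visible across sessions, and export it with bcp queryout.
CREATE PROCEDURE dbo.ExportXyz    -- hypothetical procedure name
AS
BEGIN
    SET NOCOUNT ON;

    IF OBJECT_ID('tempdb..##xyz') IS NOT NULL
        DROP TABLE ##xyz;

    -- A global temp table (##) is visible to other sessions, unlike a #temp table
    SELECT Id, Name
    INTO ##xyz
    FROM dbo.SomeSourceTable;     -- hypothetical source

    SELECT * FROM ##xyz;
END;
The job step could then export the result with something like:
bcp "EXECUTE MyDatabase.dbo.ExportXyz" queryout "C:\export\xyz.csv" -c -t, -S MyServer -T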