I have a data-driven unit test which reads from an Access database and executes once for every row in it. When I add this test to a load test and run it with 5 concurrent users, every record in the database is executed 5 times. The problem is that the more records the database contains, the longer the test takes to run. Is there instead a way to restrict the test to executing only one data row?
Are you trying to just run the test one time per row in the database and then stop? If so, you should probably avoid using your web test as a load test.
I think you have two options but I don't have my work computer in front of me to confirm.
Option 1: Create the web test like you've already done, including wiring it up to the Access database as you probably already have. Then convert the test to a coded web test and change the code so that it runs once for each record in the data source (in other words, add an outer loop to the coded web test).
Option 2: Edit your local test run config to run the test N times. From the main menu go to Test/Edit Test Run Configurations, choose your test config, select Web Test from the left pane, then change Fixed Run Count to 5. I can't confirm this right now, but I believe each time the test runs it will advance to the next record as opposed to staying on the first.
I assume you only want data tests to run when there is a row of data available.
I would change the data driving the test to read from a stored procedure that uses an atomic transaction, such as this SQL:
BEGIN TRANSACTION
-- Grab the next unused row of test data and flag it as used.
DECLARE @Id UNIQUEIDENTIFIER
SET @Id = (SELECT TOP 1 ID FROM #TestData WHERE TestRun = 0)
SELECT TOP 1 * FROM #TestData WHERE ID = @Id
UPDATE #TestData SET TestRun = 1 WHERE ID = @Id
COMMIT TRANSACTION
This will get you a unique data row each time the test is run, allowing the test to be used in a load test.
You will have to use SQL Server Express instead of Access, as I don't think Access handles the concurrency that well (open to correction here).
If you need more control over what happens during the load test, consider creating a load test plugin that will allow you to implement code from the following load test events:
LoadTestStarting
LoadTestFinished
LoadTestWarmupComplete
TestStarting
TestFinished
ThresholdExceeded
HeartBeat
LoadTestAborted
Later I figured out the Test Iterations property of a load test, which controls the number of times each user runs a test.
In the run settings, set:
Use Test Iterations = True
Test Iterations = xxx (the number of iterations you need)
Also, to have a pause between iterations, set the following properties in the scenario properties of your load test:
1) Think Profile = On
2) Think Time Between Test Iterations = 1
I'm learning Erlang and I have created a cache_server module implementing the gen_server behaviour.
This module is responsible for creating ETS tables and has an API to insert, lookup, etc.
I wanted to make a test suite for the module and to run the test cases for insertion and lookup in one group, as a sequence, because the first function populates the table and the other searches for the inserted keys.
I tried to call cache_server:start_link([]) in the init_per_suite hook function of the suite, but in the test cases I don't see my cache_server process when I call registered().
And I get
{noproc,{gen_server,call,[cache_server,{lookup,numbers,1}]}}
error.
I also tried moving the call to cache_server:start_link() from init_per_suite into the first test case, but in the subsequent test cases the process is no longer available.
When I test my code by hand using the rebar3 shell, everything works as expected.
Is it possible to share a named gen_server process between test cases in a Common Test suite?
Try calling cache_server:start_link() in init_per_testcase. As this function is executed before each test case, you would also need to stop the cache_server process in end_per_testcase.
Another option is to group all cache_server-related test cases in one group, call cache_server:start_link() in init_per_group, and stop the process in end_per_group.
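For illustration, a minimal sketch of the per-testcase approach (cache_server:insert/3, lookup/2 and their return values are assumptions based on your question, and gen_server:stop/1 is used for shutdown because I don't know your server's own stop API):
-module(cache_server_SUITE).
-include_lib("common_test/include/ct.hrl").

-export([all/0, init_per_testcase/2, end_per_testcase/2,
         insert_test/1, lookup_test/1]).

all() -> [insert_test, lookup_test].

%% init_per_testcase runs in the same process as the test case itself,
%% so the linked cache_server stays alive for the duration of that case.
init_per_testcase(_Case, Config) ->
    {ok, _Pid} = cache_server:start_link([]),
    Config.

end_per_testcase(_Case, _Config) ->
    gen_server:stop(cache_server).

%% Each case gets a fresh server, so lookup_test repopulates the table first.
insert_test(_Config) ->
    ok = cache_server:insert(numbers, 1, one).

lookup_test(_Config) ->
    ok = cache_server:insert(numbers, 1, one),
    {ok, one} = cache_server:lookup(numbers, 1).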
On the production server I got this error:
ActiveRecord::StatementInvalid: PG::QueryCanceled: ERROR: canceling statement due to statement timeout <SQL query here>
In this line:
Contact.where(id: contact_ids_to_delete).delete_all
The SQL query was a DELETE command with a huge list of ids, and it timed out.
I came up with a solution which is to delete Contacts in batches:
Contact.where(id: contact_ids_to_delete).in_batches.delete_all
The question is, how do I test my solution? Or what is the common way to test it? Or is there any gem that would make testing it convenient?
I see two possible ways to test it:
1. (Dynamically) set the statement timeout in the test database to a small number of seconds and create a test in which I generate a lot of Contacts and then try to run my code to delete them.
It seems to be the right way to do it, but it could potentially slow down test execution, and setting the timeout dynamically (which would be the ideal way to do it) could be tricky.
2. Test that deletions are in batches.
It could be tricky, because this way I would have to monitor the queries.
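For what it's worth, a rough sketch of what option 2 could look like, using Active Support's SQL notifications to count the DELETE statements (the tiny of: 2 batch size is only there to force several batches):
deletes = []
subscriber = ActiveSupport::Notifications.subscribe("sql.active_record") do |_name, _start, _finish, _id, payload|
  # Collect every DELETE statement the code under test issues.
  deletes << payload[:sql] if payload[:sql].start_with?("DELETE")
end

Contact.where(id: contact_ids_to_delete).in_batches(of: 2).delete_all

ActiveSupport::Notifications.unsubscribe(subscriber)
assert deletes.size > 1, "expected the delete to be split into several statements"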
This is not an edge case that I would test for because it requires building and running a query that exceeds your database's built-in timeouts; the minimum runtime for this single test would be at least that time.
Even then, you may write a test for this that passes 100% of the time in your test environment but fails 100% of the time in production because of differences between the two environments that you can never fully replicate; for one, your test database is being used by a single concurrent user while your production database will have multiple concurrent users, different available resources, and different active locks. This isn't the type of issue that you write a test for because the test won't ensure it doesn't happen in production. Best practices will do that.
I recommend that you follow Rails best practice and use in_batches (or find_each / find_in_batches when you need the records themselves), with the expectation that the database server can successfully act on batches of 1000 records at a time:
Contact.where(id: contact_ids_to_delete).in_batches do |batch|
  batch.delete_all
end
(Note that find_in_batches yields plain arrays, which don't respond to delete_all, while in_batches yields relations, which do.)
Or if you prefer:
Contact.where(id: contact_ids_to_delete).in_batches(&:delete_all)
You can tweak the batch size with the of: option if you're paranoid about your production database server not being able to act on 1000 records at a time:
Contact.where(id: contact_ids_to_delete).in_batches(of: 500) { |batch| batch.delete_all }
I am using SenTestKit to test an iOS app. I've split the tests into separate test methods.
For example, in
@interface simpleGameTests : SenTestCase
I have the tests:
- (void)testFindingFacebookFriends
- (void)testRegisterUsernameFromForm
- (void)testStartGame
It seems kind of random which of the tests runs first, second, and third. Is it possible in Xcode to set the order in which the tests run?
No. You should write your tests as isolated cases that can be run independently of each other, no matter the order.
Yes. Although the accepted answer is idealistically true and your test cases should indeed be isolated, in actuality you can control the order, and sometimes it is preferable to do so. Tests are executed in alphabetical order, so testACreateAccount will be executed before testBLoginToAccount. I use this to generate a password in the setUp routine, then use it in testACreateAccount to set up the account and in testBLoginToAccount to test logging in with the account just created. This way the test is full and complete (and also no longer strictly a unit test), but it is an invaluable test for my code.
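A minimal sketch of that pattern, assuming ARC (the actual account-creation and login calls are elided as comments):
#import <SenTestingKit/SenTestingKit.h>

// State that must survive between test methods has to live in a static,
// because SenTestKit creates a fresh test-case instance (and calls setUp)
// for every test method.
static NSString *sharedPassword = nil;

@interface accountFlowTests : SenTestCase
@end

@implementation accountFlowTests

- (void)setUp
{
    [super setUp];
    if (sharedPassword == nil) {
        // Generated once per run; the static keeps it alive under ARC.
        sharedPassword = [NSString stringWithFormat:@"pw-%u", arc4random()];
    }
}

// Alphabetically first, so it runs before testBLoginToAccount.
- (void)testACreateAccount
{
    STAssertNotNil(sharedPassword, @"password should have been generated in setUp");
    // ... create the account using sharedPassword ...
}

// Alphabetically second, reusing the password from the create step.
- (void)testBLoginToAccount
{
    STAssertNotNil(sharedPassword, @"the password from the create step should still be available");
    // ... log in using sharedPassword ...
}

@end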
In my 'notification' model I have a method (the notification and contact models have a has_and_belongs_to_many association with each other):
def self.update_contact_association(contact, notification)
  unless contact == nil
    notification.contacts.clear
    c = Contact.find(contact)
    notification.contacts << c
  end
end
that updates the association between a specific notification and its contacts. It takes a notification object (row) and a list of contact ids. The method works fine: given a single contact id of 1 and a notification with an id of 4, the table should and will look like this:
notification_id contact_id
4 1
The problem comes in when trying to write a unit test to properly test this method. So far I have:
test 'update_contact_association' do
  notification = Notification.find(4)
  contact = Contact.find(1)
  Notification.update_contact_association([contact.id], notification)
end
Running the test method causes no errors, but the test database is not updated to look like the above example; it is just blank. I'm pretty sure I need to use a save or update method to mimic what the controller is doing, but I'm not sure how. I just need the unit test to properly update the table so I can go ahead and write my assertions. Any ideas would be greatly appreciated, as I need to test several methods that are very similar or identical to this one.
Tests will generally run any database queries inside of a transaction and rollback that transaction when finished with each test. This will result in an empty database when the tests complete.
This is to ensure a pristine starting point for each test and that tests are not interdependent. A unit test is supposed to be run in isolation, so it should always start from the same database/environment state. It also runs on the smallest amount of code possible, so you don't have to worry about code interaction (yet!).
When you're ready to worry about code interacting, you'll want to build out integration tests. They're longer and will touch on multiple areas of code, running through each different possible combination of inputs to touch as many lines of code as possible (this is what code coverage is all about).
So, the fact that it's blank is normal. You'll want to assert some conditions after you run your update_contact_association method. That will show you that the database is in the expected state and the results of that method are what you expect to happen.
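For example, a sketch of what that assertion could look like, assuming fixtures provide the notification with id 4 and the contact with id 1 from your snippet:
test 'update_contact_association replaces the associated contacts' do
  notification = Notification.find(4)
  contact      = Contact.find(1)

  Notification.update_contact_association([contact.id], notification)

  # Reload so we read the join table the method just rewrote.
  assert_equal [contact], notification.reload.contacts.to_a
end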
We have an asynchronous task that performs a potentially long-running calculation for an object. The result is then cached on the object. To prevent multiple tasks from repeating the same work, we added locking with an atomic SQL update:
UPDATE objects SET locked = 1 WHERE id = 1234 AND locked = 0
The locking is only for the asynchronous task. The object itself may still be updated by the user. If that happens, any unfinished task for an old version of the object should discard its results as they're likely out-of-date. This is also pretty easy to do with an atomic SQL update:
UPDATE objects SET results = '...' WHERE id = 1234 AND version = 1
If the object has been updated, its version won't match and so the results will be discarded.
These two atomic updates should handle any possible race conditions. The question is how to verify that in unit tests.
The first semaphore is easy to test, as it is simply a matter of setting up two different tests with the two possible scenarios: (1) where the object is locked and (2) where the object is not locked. (We don't need to test the atomicity of the SQL query as that should be the responsibility of the database vendor.)
How does one test the second semaphore? The object needs to be changed by a third party some time after the first semaphore but before the second. This would require a pause in execution so that the update may be reliably and consistently performed, but I know of no support for injecting breakpoints with RSpec. Is there a way to do this? Or is there some other technique I'm overlooking for simulating such race conditions?
You can borrow an idea from electronics manufacturing and put test hooks directly into the production code. Just as a circuit board can be manufactured with special places for test equipment to control and probe the circuit, we can do the same thing with the code.
Suppose we have some code inserting a row into the database:
class TestSubject
  def insert_unless_exists
    if !row_exists?
      insert_row
    end
  end
end
But this code is running on multiple computers. There's a race condition, then, since another process may insert the row between our test and our insert, causing a DuplicateKey exception. We want to test that our code handles the exception that results from that race condition. In order to do that, our test needs to insert the row after the call to row_exists? but before the call to insert_row. So let's add a test hook right there:
class TestSubject
  def insert_unless_exists
    if !row_exists?
      before_insert_row_hook
      insert_row
    end
  end

  def before_insert_row_hook
  end
end
When run in the wild, the hook does nothing except eat up a tiny bit of CPU time. But when the code is being tested for the race condition, the test monkey-patches before_insert_row_hook:
class TestSubject
  def before_insert_row_hook
    insert_row
  end
end
Isn't that sly? Like a parasitic wasp larva that has hijacked the body of an unsuspecting caterpillar, the test hijacked the code under test so that it will create the exact condition we need tested.
This idea is as simple as the XOR cursor, so I suspect many programmers have independently invented it. I have found it to be generally useful for testing code with race conditions. I hope it helps.
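For completeness, here is a rough sketch of how a spec might drive the hook, using rspec-mocks to stub it instead of reopening the class; it assumes insert_unless_exists rescues the DuplicateKey raised by the second insert:
RSpec.describe TestSubject do
  it "survives a row appearing between the existence check and the insert" do
    writer = TestSubject.new

    # Simulate the other process winning the race: perform a competing
    # insert in the window between row_exists? and insert_row.
    allow(writer).to receive(:before_insert_row_hook) { writer.insert_row }

    expect { writer.insert_unless_exists }.not_to raise_error
  end
end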