Can I run VS2008 Team System Load Test scenarios sequentially? - load-testing

I have a couple of load test scenarios composed of several web tests in Visual Studio 2008 Team System. By default these scenarios run concurrently - is there some way to run them sequentially, so that the first scenario runs for some duration, then the second scenario runs for the same duration?

The most straightforward approach would be to create a coded web test that yields a different set of tests after a certain time.
This approach would be a bit labour-intensive, though, since you would have to rearrange the code to do what you want.
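As a very rough sketch (assuming the standard coded web test pattern from Microsoft.VisualStudio.TestTools.WebTesting; the class name, URLs, and the 30-minute switch-over value are placeholders, not anything from your project), the idea would look something like this:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;

public class PhasedWebTest : WebTest
{
    // Hypothetical switch-over point: roughly the start of the run plus N minutes.
    private static readonly DateTime StartedUtc = DateTime.UtcNow;
    private const int SwitchOverMinutes = 30;

    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        if (DateTime.UtcNow < StartedUtc.AddMinutes(SwitchOverMinutes))
        {
            // Phase 1: requests that make up the first scenario (placeholder URL).
            yield return new WebTestRequest("http://myapp.example/scenario-one");
        }
        else
        {
            // Phase 2: requests that make up the second scenario (placeholder URL).
            yield return new WebTestRequest("http://myapp.example/scenario-two");
        }
    }
}
```

A single load test scenario would then drive this one coded test, and the test itself decides which set of requests to issue as the run progresses.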
I have to admit some curiosity as to why you would want to do this?

Related

HLK Studio - Shortening MiniFilter Test Duration Using Parallelism

I've also posted the same question in the Microsoft forums, but it seems dead... Maybe someone here can help.
After multiple trial-and-error iterations, we finally succeeded in setting up a fully automated testing environment for our MiniFilter driver as part of our continuous integration system (Jenkins). This runs fine with HLK Studio installed on one server, with one additional client (Windows 10).
The entire testing cycle now takes about 7 hours to complete. We have two additional client computers set up and would like to shorten the MiniFilter driver certification cycle by splitting the tests and running them on multiple, identical machines in parallel.
We can see this concept exists in the HLK Studio product ("distributed testing"), but for some reason, it is not available for a "Software Device" (MiniFilter in our case).
We also thought about splitting the tests manually, creating multiple HLKX files in parallel and then merging them ("deep merge"), but even this is not permitted according to the HLK documentation.
Now that Microsoft forces us to submit drivers for signing (or else, computers with "Secure Boot" feature will refuse to load the driver), this is becoming a mission-critical process.
Has anyone succeeded in running these tests in parallel and shortening the test duration?

iOS automated tests questions

I've been working on a project to which I want to add automated tests. I have already added some unit tests, but I'm not confident in the process I've been using. I don't have much experience with automated tests, so I would like to ask for some advice.
The project is integrated with our web API, so it has a login process. Depending on the logged-in user, the API provides a configuration file that allows or disallows access to certain modules and permissions within the mobile application. We also have a sync process in which the app calls several API methods to download files (PDFs, HTML, videos, etc.) and also receives a lot of data as JSON. The user basically doesn't have to enter data, just use the information received during the sync process.
What I've done to add unit tests in this scenario so far is to simulate a logged-in user, add some fixture objects to that user, and test them.
I was able to test the web service integration: I used Nocilla to return fake JSON and assert on the result. So far I've only been able to test individual requests, and I still don't know how I should test the sync process.
I'm also having a hard time creating unit tests for my view controllers. Should I unit test just the business logic and do all the rest with tools like KIF/Calabash?
Is there an easy way to setup the fixture data and files?
Thanks!
Everybody's mileage may vary but here's what we settled on and why.
Unit tests: we use a similar strategy. The only difference is that we use OHTTPStubs instead of Nocilla, because it gave us some extra flexibility we needed, and we were happy to trade away Nocilla's easier syntax for it.
Doing more complicated (non-single-query) test cases quickly lost its luster, because we were essentially rebuilding whole HTTP request/response flows, and that wasn't very "unit". For functional tests, we did end up adopting KIF (at least for dev-focused efforts, assuming you don't have a separate QA department) for a few reasons:
We didn't buy/need the multi-language abstraction layer afforded by Appium.
We wanted to be able to run tests on many devices per build server.
We wanted more whitebox testing, and while Subliminal was attractive, we didn't want to build hooks into our main app code.
Testing view controller logic (anything that's not unit-oriented) is definitely much better done with KIF/Calabash or something similar, so that's the path I would suggest.
For bonus points, here are some other things we did. It goes to show what you could do, I guess:
We have a basic proof of concept that binds KIF commands to a JSON-RPC server, so you can run a test target on a device and have that device respond to HTTP requests, which then fire off test cases or KIF commands. One of the advantages of this is that you can reuse some of the test code you wrote for a single device in multi-device test cases.
Our CI server builds integration tests as a downstream build of our main build (which includes the unit tests). When the build starts, we use xctool to precompile the tests, and then some scripts start a QuickTime screen recording, run the KIF tests, export the results, and archive everything on our CI server so we can see a live test run along with the test logs.
Not really part of this answer but happy to share that if you ping me.

multiple instances of cucumber for 3000 scenarios

I have a test pack consisting of more than 3000 scenarios. The problem is that when I run the scenarios in one shot, it takes approximately 10 hours to complete. I want to divide the scenarios into 4 blocks of roughly 750 scenarios each and run them in parallel in different windows/terminals (VMware). Is there a workaround?
This question has a selenium tag, so (assuming that's accurate) Selenium-Grid would be an option for setting up a distributed parallel testing environment.
orde mentions Selenium Grid, and that's one piece of the puzzle. The main benefit you get from it is that you only have to identify your server hub in your code when creating a new Selenium instance.
The next thing would be to actually execute your 4 blocks of 750 scenarios at once. I'd recommend using a CI tool like Jenkins to accomplish that, and you'll have all your results together in Jenkins' web GUI.
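Just to make the "point at the hub" part concrete: the question's suite is Cucumber, but the Grid side is binding-agnostic. Here is a minimal sketch using Selenium's .NET binding (the hub address is a placeholder, and the older DesiredCapabilities-style API is assumed):

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Remote;

class GridScenarioRunner
{
    static void Main()
    {
        // Placeholder hub address; each parallel block of scenarios points its
        // driver here, and the hub farms the session out to whichever node is free.
        var hubUrl = new Uri("http://selenium-hub.example:4444/wd/hub");
        IWebDriver driver = new RemoteWebDriver(hubUrl, DesiredCapabilities.Firefox());

        try
        {
            driver.Navigate().GoToUrl("http://your-app.example/");
            // ... run this block's scenarios against the remote driver ...
        }
        finally
        {
            driver.Quit();
        }
    }
}
```

The equivalent change in a Ruby/Cucumber setup is the same idea: construct the driver as a remote driver pointed at the hub URL instead of a local browser.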

Specflow stability

As the project I'm working on grows, the number of tests grows too. But in my case, as the number of scenarios being tested increases, the stability of SpecFlow seems to decrease.
Let me try to clarify: when I run some test lists (with 5 to 10 scenarios each) separately in Visual Studio 2010, all the scenarios pass. However, when I run all the test lists at once (something like 70 scenarios total), some scenarios fail that passed in the separate runs. When I immediately repeat the full run, different scenarios fail, or sometimes everything passes. In other words, which scenarios fail is totally random.
Is anyone familiar with this issue, and/or can anyone enlighten me about what seems to be a stability problem in SpecFlow as the number of scenarios grows?
I don't think this is a SpecFlow issue at all; we run around 800 tests and they all pass every time. What I suspect is that you are getting cross-talk between your tests, i.e. your tests are failing because they are sharing data in ways you aren't expecting.
This is a pretty common problem.
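If it is cross-talk, the usual fix is to make each scenario reset whatever state it shares with the others. A minimal sketch using SpecFlow's standard hooks (the [Binding] and [BeforeScenario] attributes are real SpecFlow constructs; TestDatabase and SharedCache are made-up placeholders for whatever your scenarios actually share):

```csharp
using TechTalk.SpecFlow;

[Binding]
public class TestIsolationHooks
{
    // Runs before every scenario so each one starts from a known, clean state
    // instead of inheriting whatever the previous scenario left behind.
    [BeforeScenario]
    public void ResetSharedState()
    {
        TestDatabase.Reset();  // placeholder: restore the database to a baseline
        SharedCache.Clear();   // placeholder: clear any in-process shared data
    }
}
```

If the failures stop being random once you do this, cross-talk was your problem.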

Speed of running a test suite in Rails

I have 357 tests (534 assertions) for my app (using Shoulda). The whole test suite runs in around 80 seconds. Is this time OK? I'm just curious, since this is one of my first apps where I write tests extensively. No fancy stuff in my app.
By the way: I tried using an in-memory SQLite3 database, but the results were surprisingly worse (around 83 seconds). Any clues here?
I'm using a MacBook with 2 GB of RAM and a 2 GHz Intel Core Duo processor as my development machine.
I don't feel this question is Rails-specific, so I'll chime in.
The main thing about tests is that they should be fast enough for you to run them a lot (as in, all the time). Also, you may wish to split your tests into a few different sets, specifically things like 'long-running tests' and 'unit tests'.
One last option to consider, if your database setup is time consuming, would be to create your domain by restoring from a backup, rather than doing a whole bunch of inserts.
Good luck!
You should try the method described at https://github.com/dchelimsky/rspec/wiki/spork---autospec-==-pure-bdd-joy- : use Spork to spin up a couple of processes that stay running and batch out your tests. I found it to be pretty quick.
It really depends on what your tests are doing. Test code can be written efficiently or not in exactly the same way as any other code can.
One obvious optimisation in many cases is to write your test code in such a way that everything (or as much as possible) is done in memory, as opposed to many read/writes to the database. However, you may have to change your application code to have the right interfaces to achieve this.
Large test suites can take some time to run.
I generally use "autospec -f" when developing; this only runs the specs that have changed since the last run, which makes it much more efficient to keep your tests running.
Of course, if you are really serious, you will run a continuous integration setup like CruiseControl - this will automate your build process and run in the background, checking out your latest changes and running the suite.
If you're looking to speed up the runtime of your test suite, then I'd use a test server such as this one from Roman Le Négrate.
You can experiment with preloading fixtures, but it will be harder to maintain and, IMHO, not worth its speed improvements (20% maximum, I think, but it depends).
It's well known that SQLite is slower than MySQL/PostgreSQL, except for very small, tiny DBs.
As someone already said, you can put the MySQL (or other DB) data files on some kind of RAM disk (I use tmpfs on Linux).
PS: we have 1319 RSpec examples now, and the suite runs in 230 seconds on a 3 GHz Core 2 Duo with 4 GB of RAM, and I think that's fine. So yours is fine too.
As opposed to in-memory SQLite, you can put a MySQL database on a RAM disk (on Windows) or on tmpfs on Linux.
MySQL has very efficient buffering, so putting the database in memory does not help much unless you update a lot of data very often.
More significant is how you isolate tests and prepare data for each test.
You can use transactional fixtures. That means each test is wrapped in a transaction, so the next test starts from the initial state.
This is faster than cleaning up the database before each test.
There are situations when you want to use both transactions and explicit data erasing, here is a good article about it: http://www.3hv.co.uk/blog/2009/05/08/switching-off-transactions-for-a-single-spec-when-using-rspec/
