How can I create a golden master for an MVC 4 application - asp.net-mvc

I was wondering how to create a golden master approach to start creating some tests for my MVC 4 application.
"Gold master testing refers to capturing the result of a process, and
then comparing future runs against the saved “gold master” (or known
good) version to discover unexpected changes." - #brynary
It's a large application with no tests, and it would be good to start development with a golden master to ensure that the changes we are making to increase test coverage (and hopefully decrease complexity in the long run) don't break the application.
I am thinking about capturing a day's worth of real-world traffic from the IIS logs and using that to create the golden master; however, I am not sure of the easiest or best way to go about it. There is nothing out of the ordinary in the app: lots of controllers with post backs, etc.
I am looking for a way to create a suitable golden master for an MVC 4 application hosted in IIS 7.5.
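To make the idea concrete, something like the following is roughly what I had in mind for pulling replayable requests out of the W3C-format IIS logs (the log path and output file are made up, and the field layout depends on how logging is configured, so treat this as a sketch rather than working tooling):

using System;
using System.IO;

class IisLogExtractor
{
    // Reads a W3C extended log file and writes "METHOD url" lines that a
    // golden-master test could replay. Assumes cs-method, cs-uri-stem and
    // cs-uri-query are among the logged fields.
    static void Main()
    {
        int iMethod = -1, iStem = -1, iQuery = -1;
        using (var output = new StreamWriter("requests.txt"))   // hypothetical output file
        {
            foreach (var line in File.ReadLines(@"C:\inetpub\logs\LogFiles\W3SVC1\u_ex140101.log"))
            {
                if (line.StartsWith("#Fields:"))
                {
                    // The directive lists the column order for the data lines that follow.
                    var fields = line.Substring("#Fields:".Length).Trim().Split(' ');
                    iMethod = Array.IndexOf(fields, "cs-method");
                    iStem = Array.IndexOf(fields, "cs-uri-stem");
                    iQuery = Array.IndexOf(fields, "cs-uri-query");
                    continue;
                }
                if (line.StartsWith("#") || iMethod < 0 || iStem < 0 || iQuery < 0)
                    continue;   // skip other directives, or logs without the expected fields

                var values = line.Split(' ');
                var query = values[iQuery] == "-" ? "" : "?" + values[iQuery];
                output.WriteLine("{0} {1}{2}", values[iMethod], values[iStem], query);
            }
        }
    }
}

That would at least give me a deterministic list of requests to replay, but I am not sure whether this is the right starting point.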
NOTES
To clarify something in regard to the comments: the "golden master" is a test you can run to verify the output of the application. It is like journalling your application and being able to replay that journal every time you make a change to ensure you haven't broken anything.
When working with legacy code, it is almost impossible to understand
it and to write code that will surely exercise all the logical paths
through the code. For that kind of testing, we would need to
understand the code, but we do not yet. So we need to take another
approach.
Instead of trying to figure out what to test, we can test everything,
a lot of times, so that we end up with a huge amount of output, about
which we can almost certainly assume that it was produced by
exercising all parts of our legacy code. It is recommended to run the
code at least 10,000 (ten thousand) times. We will write a test to run
it twice as much and save the output.
Patkos Csaba - http://code.tutsplus.com/tutorials/refactoring-legacy-code-part-1-the-golden-master--cms-20331
My question is how do I go about doing this for an MVC application.
Regards

Basically you want to compare two large sets of results and control the variations between them; in practice, an integration test. I believe that real traffic can't give you exactly the control that I think you need.
Before making any change to the production code, you should do the following:
Create X number of random inputs, always using the same random seed, so you can always generate the same set over and over again. You will probably want a few thousand random inputs.
Bombard the class or system under test with these random inputs.
Capture the outputs for each individual random input.
When you run it for the first time, record the outputs in a file (or database, etc.). From then on, you can start changing your code, run the test, and compare the execution output with the original output data you recorded. If they match, keep refactoring; otherwise, revert your change and you should be back to green.
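As a minimal sketch of what that record-and-compare loop could look like for an MVC app exercised over HTTP (the base URL, route templates, seed and golden-file folder are all assumptions for illustration; real responses usually need some normalisation of timestamps, anti-forgery tokens and the like before comparing):

using System;
using System.IO;
using System.Net.Http;

class GoldenMasterRunner
{
    static readonly string[] UrlTemplates =
    {
        "http://localhost/products?page={0}",   // hypothetical routes
        "http://localhost/orders/{0}"
    };

    static void Main()
    {
        var rng = new Random(12345);            // fixed seed => the same "random" inputs every run
        var client = new HttpClient();
        Directory.CreateDirectory("golden");

        for (int i = 0; i < 1000; i++)
        {
            var url = string.Format(UrlTemplates[rng.Next(UrlTemplates.Length)], rng.Next(1, 500));
            var body = client.GetStringAsync(url).Result;
            var file = Path.Combine("golden", i + ".txt");

            if (!File.Exists(file))
                File.WriteAllText(file, body);   // first run: record the golden master
            else if (File.ReadAllText(file) != body)
                Console.WriteLine("Mismatch on request #{0}: {1}", i, url);   // later runs: report drift
        }
    }
}

Run it once to record the golden files, then run it again after every change and investigate any reported mismatch before carrying on.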
This doesn't match your approach. Imagine a scenario in which a user purchases a certain product: you cannot determine the outcome of the transaction (insufficient credit, non-availability of the product), so you cannot trust your input.
However, what you now need is a way to replicate that data automatically, and browser automation in this case cannot help you much.
You can try a different approach, something like the Lightweight Test Automation Framework or the MvcIntegrationTestFramework, which are the most appropriate for your scenario.

Related

Branch strategy for parallel working and 'breaking changes'

We have an application which has historically suffered from a high number of branches.
These have now been rationalised, but a situation has now occurred where we have two development streams (which ideally would be one), and one of these streams requires a number of changes whose implementation will have knock-on effects that aren't fixable as part of a single story (or possibly a single sprint).
For example: a high number of database schema changes which require a large number of query logic changes.
These changes aren't easily feature-flagged as they systematically break existing functionality until the work is completed. We could theoretically put the system back into a 'building' state, but it would prevent the customer from using the existing functionality.
To cater for this, we're proposing creating a new branch for the 'breaking changes' so that the other development stream isn't broken and can be released intermittently.
Whilst I'm loath to create new branches, I can't see a better way of doing this at the moment.
Is there a recommended practice, either a branching strategy to manage breaking changes or an architectural approach to protect against them, when developing in parallel?
EDIT: The only other thing which has crossed my mind is to literally 'copy-paste' the existing functionality (including tables/web pages etc.), renaming them and working on that copy in the same branch, behind a feature flag.
This is obviously pretty messy for a number of reasons!
Does anyone have any suggestions for how they have coped with this in the past?
You should deliver the database schema (and other changes) in an incremental fashion. Don't take on the whole change at once, but instead break it down into things that you can deliver.
If your current application architecture does not support this model then that is the first problem to tackle.
Always meet your DoD (Definition of Done) for every Sprint: all work integrated, with no further work required to ship...

Specflow Feature File Best Practice

Thanks in advance for the help.
My question pertains to best practices inside a SpecFlow feature file.
Question:
Is using a wait command inside the feature file considered bad practice?
Example:
And i click on the username
And wait 5 seconds
And i input new value into last name
The wait command forces a 5-second wait. I am doing this to make sure the page is loaded, to prevent "element not found" errors or other errors. Basically, to make sure I have a clean page to manipulate.
Would a better practice be to use a wait inside of the Step file itself?
// using Fluent Automation
I.WaitUntil(() => I.Assert.Exists("#someElement")); // wait until a condition holds ("#someElement" is a placeholder)
// or
I.Wait(TimeSpan.FromSeconds(5)); // wait for a fixed timespan
My reasoning for not using the Fluent Automation wait is:
By utilizing the Fluent Automation method you are dependent on the default timeout in the Settings object. The default timeout in some cases may not be long enough, or may be too long. It seems very verbose to me to continually change/reset the Settings object with the only benefit being to remove wait commands from the feature file.
So what is really the best practice?
Thanks,
-n
I think the best practice is to keep the feature file focused on your scenarios, and free of implementation details.
Since we are following a BDD process (http://dannorth.net/introducing-bdd), the feature file is the output of the conversation between you and the process expert, and the scenario represents the steps that you are going to take to prove that your functionality works for that example. You could hope that those steps define the business process and could be performed by any system, not just the one we might be developing now. Ideally this logic captures our intent and can be reused on any future systems that might replace the current one.
So I just don't see you saying that you need to wait
....
Although you might want to say
When the page has loaded
and that maps quite nicely onto Fluent Automation.
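A rough sketch of what that binding could look like with SpecFlow plus Fluent Automation (the selector, class names and Selenium bootstrap call are assumptions about the project's setup, not something from the question):

using FluentAutomation;
using TechTalk.SpecFlow;

[Binding]
public class PageLoadSteps : FluentTest
{
    public PageLoadSteps()
    {
        // Pick whichever provider the project already uses.
        SeleniumWebDriver.Bootstrap();
    }

    [When(@"the page has loaded")]
    public void WhenThePageHasLoaded()
    {
        // Wait on a condition instead of a fixed number of seconds;
        // "#last-name" stands in for an element that only exists once the page is ready.
        I.WaitUntil(() => I.Assert.Exists("#last-name"));
    }
}

The feature file then only ever says "When the page has loaded", and the decision about how long to wait, and what to wait for, lives in one step definition.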

Why don't people access the database in RSpec?

I often see code which uses mocks in RSpec, like this:
describe "GET show" do
it "should find and assign #question" do
question = Question.new
Question.should_receive(:find).with("123").and_return(question)
get :show, :id => 123
assigns[:question].should == question
end
end
But why don't they add a Question with id => 123 to the database, retrieve it with get, and then destroy it? Is this a best practice? If I don't follow the rule, will something bad happen?
When you write a behavioral test (or a unit test), you're trying to test only a specific part of code, and not the entire stack.
To explain this better, you are just expressing and testing that "function A should call function B with these parameters", so you are testing function A and not function B, for which you provide a mock.
This is important for a number of reasons:
You don't need a database installed on every machine where you build your code; this is important if you start using build machines (and/or continuous integration) in your company with hundreds of projects.
You get better test results, because if function B is broken, or the database is not working properly, you don't get a test failure on function A.
Your tests run faster.
It's always a pain to have a clean DB before each test. What if a previous run of your tests was stopped, leaving a Question with that id in the database? You'd probably get a test failure because of the duplicate id, while in reality the function is working properly.
You need a proper configuration before running your test. This is not such an incredible problem, but it's much better if tests can run "out of the box", without having to configure a database connection, a folder of temporary test files, an SMTP server for testing email stuff, etc...
A test that actually tests the entire stack is called an "end-to-end test" or an "integration test" (depending on what it covers). These are important as well; for example, a suite of tests without a mocked database can be used to see whether a given application can run safely on a different DB than the one used during development, and to fix functions that contain offending SQL statements.
Actually, many people do, including me. Generally speaking, since tests are there to check behavior, it can feel a bit unnatural to some people to insert database entries.
Question.new would be enough because it goes through the valid methods of Rails anyway, so many people tend to use it, also because it is faster.
But, indeed, even if you start using factories, there will be times when you will probably be inserting data into your testing environment as well. I personally don't see anything wrong with this.
Overall, in some situations where the test suite is really large, it can be quite an advantage not to save database entries. But if speed is not your top concern, I would say that you don't really have to worry about how the test looks, as long as it is well constructed and to the point.
BTW, you do not need to destroy test data; it's done automatically after the test ends. So, unless you are testing the actual delete methods, avoid doing that explicitly.

ruby cucumber testing practices

I have many Cucumber feature files, each consisting of many scenarios.
When run together, some of them fail.
When I run each test file on its own, they all pass.
I think my database is not correctly cleaned after each scenario.
What is the correct process for determining what is causing this behavior?
By the sound of it, your tests are depending upon one another. You should be trying to get each individual test to do whatever setup is required for that individual test to run.
The setup parts should be done during the "Given" part of your features.
Personally, to stop the features from becoming verbose and to keep them close to the business language that they were written in, I sometimes add additional steps that are required to do the setup and call them from the steps that are in the feature file.
I hope this makes sense to you.
This happens to me for different reasons at different times.
Sometimes it's that a stub or mock invoked in one scenario screws up another, but only when they are both run (each is fine alone).
The only way I've been able to solve these is by debugging while running enough tests to get a failure. You can drop the debugger line in step_definitions or call it as a step itself (When I call the debugger) and match that up to a step definition that just says 'debugger' as the Ruby code.

How to fail gracefully and get notified if screen scraping fails in ruby on rails

I am working on a Rails 3 project that relies heavily on screen scraping to collect data, mainly using Nokogiri. I'm aggregating essentially all the same data, but I'm grabbing it from many different sources, and as time goes on I will be adding more and more. However, I am acutely aware that screen scraping can be notoriously unreliable.
As such I am interested in how other people have handled the problem of verifying the data and then also getting notified if it is failing.
My current plan is as follows.
I am going to have validation on my model for most of the fields. If they fail, I won't get bad data into my system, although logging this failure in a meaningful way is still a problem.
I was thinking of some kind of counter where after so many failures from a particular source I somehow turn it off. Not sure how to keep track of that. I guess the only way is to have a field on my Source model that counts it and can be reset.
Logging is the 800-pound gorilla I'm not sure how to deal with. I could just do standard writing to logs, but if something fails I'd like to store the entire HTML so I can figure it out. Also, I need to notify myself somehow so I can address the issues. I thought of maybe just creating a model for all this and storing it in the database. If I did this, I'd probably have to store the HTML on S3 or something. I'm running this on Heroku, so that influences what I can do.
Set up begin and rescue blocks around every field. I was trying to figure out a way to code this in a nicer Ruby style so I don't just end up with a page of them. Although some fields are just a straight doc.at_css("#whatever"), there are quite a number that require various formatting or calculations, so I think it makes sense to rescue those so I can then log what went wrong. The other option is to let the exception bubble up and catch it when I try to create the model.
Anyway I'm sure I'm not even thinking of everything but that is why I'm trying to figure out how other people have handled this problem.
Our team does something similar to this, so here are some ideas:
We use a really high-level begin/rescue transaction to make sure we don't get into weird half-loaded states:
begin
  ActiveRecord::Base.transaction do
    # ...try to load a data source...
  end
rescue
  # ...error handling...
end
Email/page yourself when certain errors occur. We use exception_notifier, but if you're sitting on Heroku the Exceptional plugin also seems like a good option. I've also heard of people having success with Hoptoad.
Capturing state is VERY important for troubleshooting issues. Something that's worked quite well for us is Gmail. Our loaders effectively have two phases:
capture data and send it to our gmail account
log into gmail, download latest data and parse it
The second phase is the complex one, and if it fails a developer can simply log into the gmail account and easily inspect the failed message. This process has some limitations (per email and per mailbox storage limits, two phase pipeline, etc.) and we started out doing it because we had no other option, but it's proven shockingly resilient and convenient. Keep email in mind as a cheap/easy way to store noncritical state. We didn't start out thinking of using it that way and are now really glad we do. Logging into GMail feels better than digging through log files.
Build a dashboard UI. We have a simple dashboard with a grid of sources by day, where each box is colored either red or green based on whether the load for that source on that day succeeded. You can go one step further and set up a monitor on this UI (mon.itor.us or equivalent) that alarms if some error threshold is met.
