What's the best way to test against my MVC repository? - asp.net-mvc

I've built a repository and I want to run a bunch of tests on it to see what the functions return.
I'm using Visual Studio 2008 and I was wondering if there's any sandbox I can play around in (whether in Visual Studio 2008 or not), or if I actually have to build a mock controller and view to test the repository?
Thanks,
Matt

By repository do you mean something that is part of your data access layer? If so, what I do is hook up a clean database as part of my build process (using NAnt). When I run my build, the clean db is hooked up, any update scripts I have are run against it to bring it up to speed, all my unit tests are run against my code, then my repository tests are run to ensure that my DAL is working as expected, and finally my db is rebuilt (essentially reset to normal) and I am ready to go. This way I can pump as much data as I like in and out through my repository to make sure that all of its functions work, without impacting my day-to-day development db/data.
If you just run tests against your working db, you run into the problem that the data may change, which can break your tests. If, as part of your tests, you pump known data in and then run your repository tests against it, the expected outcome is known and should not change over time. This makes your tests far more durable.
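To make that concrete, here is a minimal sketch of what one of those repository tests might look like with NUnit. ProductRepository, Product, and the connection string name are hypothetical stand-ins for your own types; the point is that the fixture seeds known data before every test, so the expected results never drift:

    using NUnit.Framework;

    [TestFixture]
    public class ProductRepositoryTests
    {
        private ProductRepository _repository;

        [SetUp]
        public void SeedKnownData()
        {
            // Point the repository at the dedicated test database that the
            // build script has just rebuilt, then insert a known baseline.
            _repository = new ProductRepository("TestDbConnectionString");
            _repository.Add(new Product { Name = "Widget", Price = 9.99m });
            _repository.Add(new Product { Name = "Gadget", Price = 19.99m });
        }

        [Test]
        public void GetAll_ReturnsAllSeededProducts()
        {
            Assert.AreEqual(2, _repository.GetAll().Count);
        }

        [Test]
        public void GetByName_ReturnsTheMatchingProduct()
        {
            Assert.AreEqual(9.99m, _repository.GetByName("Widget").Price);
        }
    }

Because the build resets the database before the test run, these assertions stay valid no matter how much data piles up in your day-to-day development database, and no mock controller or view is needed to exercise the repository.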
Hope this is what you meant!

Related

Exclude specific paths during triggering TFS team build

I'm configuring continuous integration with TFS 2012, and I have one problem I need to solve.
I need to exclude some paths from triggering builds.
For example, I have:
$/Project1
$/Project2
I want each check-in to $/Project1 to trigger a build, and that build must build both $/Project1 and $/Project2.
But after checking in $/Project2 I don't want to trigger a build for that Build Definition.
The Source Settings of the build definition only offer "Active" and "Cloaked", but that isn't what I need.
Thanks a lot in advance.
P.S. A workaround is to add the comment ***NO_CI*** to each check-in, but it would be great if there were some other way.
Based on the comments, this boils down to an X-Y problem. You can't do what you want to do, but the reason you want to do it is because you're trying to solve the wrong problem.
You're running the UI tests at the wrong point in your dev-test cycle. UI tests should not run during a build; they should run after a release. A change to your test project should absolutely result in a build.
Someone is developing UI tests against code that's not yet in source control, which makes no sense. If someone is writing tests against code, the code should be source controlled.
I'm guessing that someone is manually pushing uncommitted code out to a dev server, which is being used by someone else to write tests. Don't do this. Use a real release management solution so that as developers write code, each check-in is automatically deployed to a dev/QA environment. Then the folks writing the UI tests will have something to test against. What's the point in writing tests against code that's in such a state of flux that the developer responsible for it isn't even sure it's worth being source controlled? That just results in spending a lot of time rewriting tests as the code evolves.
Assuming you set everything up properly, every commit of the application code will result in the current set of tests being run against the latest commit. Every commit of the test code will result in the new set of tests being run against the existing application code. The two things (application code and test code) should be coupled, and should always build together.
And one last thing, mostly opinion: UI tests are awful and provide very little value. They are brittle, slow, and hard to maintain. I have never seen a comprehensive UI test suite actually pay for itself. UI tests are best used as a small set of post-release smoke tests. Business logic should be primarily unit tested, with a smaller suite of integration tests to back it up.

How does the test controller get the binaries to the agent during remote execution?

I am trying to get tests to run via remote execution, and I can't find any documentation on how the following works:
I understand that when the controller is registered to a team project collection and the agent runs through a lab environment, then a build needs to be attached to the process - and it then makes perfect sense that the controller pulls the dll that contains the tests from this build.
However, what does not make sense to me is in the more simplified scenario:
I have my test solution with a .testsettings file, in which I define the controller under Roles. I also have one agent connected to the controller. Now, when I run the test in Visual Studio, it runs through the controller, which delegates to the agent. However, I have not set up any build.
I'm assuming that Visual Studio pushes the DLLs to the controller when you first run the test, and the controller then creates a cache of the DLLs? This is just a guess. Is it correct?
I need to know how the internals work because I have not yet got any test to run on a remote controller. So far, after many headaches, I can only get the scenario to work when the controller, agent, and local dev environment are all on the same machine.
All the MSDN documentation talks about high-level usage and does not go into any detail about the internals.
Thanks in advance!
You likely want to run the tests automatically after a deployment. If that is the case, then you probably want the TFS-integrated experience rather than the Visual Studio client one. The client experience is primarily for small-scale load testing.
Try: http://nakedalm.com/execute-tests-release-management-visual-studio-2013/
In this configuration your app is installed and pre-configured prior to running the tests. The agent then lifts the test assemblies from the build drop.

Best way to manage dependent ant builds over multiple servers?

I have Ant scripts that build and deploy my appservers. My system, though, is actually spread over 3 servers. They all use the same deploy script (with flags) and all work fine.
The problem is that there are some dependencies. They all use the same database, so I need a way to stop all appservers across all machines before the build first happens on machine 1. Then I need the deployment on machine 1 to complete first, as it's the one that handles the database build (which all appservers need in order to start).
I've had a search around and there are some tools that might be useful but they all seem overkill for what I need.
What do you think would be the best tool to sync and manage the Ant builds over multiple machines (all running Linux)?
Thanks,
Ryuzaki
You could make your database changes non-breaking, run your database change scripts first and then deploy to your appservers. This way your code changes aren't intrinsically tied to your database changes and both can happen independently.
When I say non-breaking, I mean that database changes are written in such a way that two different versions of the code can function against the same database. For example, rather than renaming a column, you add a new one instead.
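Purely as an illustration of that "add, don't rename" idea, here is a sketch written as an Entity Framework code-first migration (EF and the Customers/DisplayName names are my own assumptions, not part of your setup; the same pattern applies just as well to plain SQL scripts run from Ant):

    using System.Data.Entity.Migrations;

    public partial class AddCustomerDisplayName : DbMigration
    {
        public override void Up()
        {
            // Add the new column alongside the old one instead of renaming it,
            // so both the old and new versions of the code keep working.
            AddColumn("dbo.Customers", "DisplayName", c => c.String(maxLength: 200, nullable: true));

            // Backfill from the legacy column; drop the old column in a later
            // release, once no deployed code reads it any more.
            Sql("UPDATE dbo.Customers SET DisplayName = Name");
        }

        public override void Down()
        {
            DropColumn("dbo.Customers", "DisplayName");
        }
    }

Because either version of the code can run against the expanded schema, the database step no longer forces you to stop every appserver on every machine at exactly the same time.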

Robot Framework use cases

Robot Framework is a keyword-based testing framework. I have to test a remote server, so I need to do some prerequisite steps like:
i) copy the artifact to the remote machine
ii) start the application server on the remote machine
iii) run the tests on the remote server
Before Robot Framework we did this using an Ant script.
I can only run the test script with Robot. Can we do all of these tasks using Robot scripting, and if so, what is the advantage of that?
Yes, you could do this all with robot. You can write a keyword in python that does all of those steps. You could then call that keyword in the suite setup step of a test suite.
I'm not sure what the advantages would be. What you're trying to do are two conceptually different tasks: one is deployment and one is testing. I don't see any advantage in combining them. One distinct disadvantage is that you then can't run your tests against an already deployed system. Though, I guess your keyword could be smart enough to first check if the application has been deployed, and only deploy it if it hasn't.
One advantage is that you have one less tool in your toolchain, which might reduce the complexity of your system as a whole. That means people can run your tests without first having installed ant (unless your system also needs to be built with ant).
If you are asking why you would use Robot Framework instead of writing a script to do the testing, the answer is that the framework provides all the metrics and reports you would otherwise have to script yourself.
Choosing a framework makes your entire QA process easier to manage and saves you the effort of writing code for the parts that are common to every QA process, so you can focus on writing code that tests your product.
Furthermore, since there is an ecosystem around the framework, you can probably find existing code to do just about everything you may need, and get answers to how to do something instead of changing your script.
Yes, you can do this with robot, decently easily.
The first two can be done easily with SSHLibrary, and the third one depends. Do you mean for the Robot Framework test case to be run locally on the other server? That can indeed be done with configuration files to define what server to run the test case on.
Here are the keywords you can use from Robot Framework's SSHLibrary.
To copy the artifact to the remote machine:
Open Connection
Login or Login With Public Key
Put Directory or Put File
To start the application server on the remote machine:
Execute Command
To run the tests on the remote machine (assuming the setup is already there):
Execute Command (use pybot path_to_test_file)
You may experience connection losses, but once the tests are triggered they will keep running on the remote machine.

How to setup ASP.Net MVC solution for quickest build time

I want to find the best setup for ASP.Net MVC projects to get the quickest code-build-run process in Visual Studio.
How can you set up your solution to achieve near zero second build times for small incremental changes?
If you have a test project, with dependencies on other projects in your solution, a build of the test project will still process the other projects even if they have not changed.
I don't think it entirely rebuilds these projects, but it certainly processes them. When doing TDD you want a near-zero-second build time for your small incremental changes, not a 20-30 second delay.
Currently my approach is to reference the dll of a dependent project instead of referencing the project itself, but this has the side effect of requiring me to build these projects independently should I need to make a change there, then build my test project.
One small tip, if you use PostSharp, you can add the Conditional Compilation symbol SKIPPOSTSHARP to avoid rebuilding the aspects in your projects during unit testing. This works best if you create a separate build configuration for unit testing.
I like Onion architecture.
Solution should have ~3 projects only =>
Core
Infrastructure
UI
Add 2 more projects (or 1, and use something like NUnit categories to separate tests) =>
UnitTests
IntegrationTests
It's hard to trim down more. <= 5 projects aren't bad. And yeah - avoid project references.
Unloading unnecessary projects through VS might help too.
And most importantly - make sure your pc is not clogged up. :)
Anyway - that's just another trade-off. In contrast to strongly typed languages, dynamic languages are more dependent on tests, but tests are faster and easier to write there.
Small tip - instead of rebuilding the whole solution, rebuild the current selection only (Tools => Options => Keyboard => Build.RebuildSelection). Map it to Ctrl+Shift+B and remap the original binding to Ctrl+Shift+Alt+B.
Here's how you could structure your projects in the solution:
YourApp.BusinessLogic : class library containing controllers and other logic (this could reference other assemblies)
YourApp : ASP.NET MVC web application referencing YourApp.BusinessLogic and containing only views and static resources such as images and javascript
YourApp.BusinessLogic.Tests : unit tests
Then, in the solution's configuration properties, you can uncheck the Build checkbox for the unit test project. This will decrease the time between pressing Ctrl+F5 and seeing your application appear in the web browser.
One way you can cut down on build times is to create different build configurations that suit your needs and remove specific projects from being built.
For example, I have Debug, Staging, Production, and Unit Test as my configurations. The Debug build does not build my Web Deployment project or my Unit Test project. That cuts down on the build time.
I don't think "code-build-run" is any way a tenet of TDD.
You don't need zero-second build times -- that is an unreasonable expectation -- 5 - 10 seconds is great.
You're not running tests after every tiny incremental change. Write a group of tests around the interface for a new class (a controller, say, or a library class that will implement business logic). Create a skeleton of the class -- the smallest compilable version of the class. Run tests -- all should fail. Implement the basic logic of the class. Run tests. Work on the failing code until all tests pass. Refactor the class. Run tests.
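As a sketch of that loop, here is roughly what the skeleton and its first tests could look like (hypothetical names, assuming NUnit; the discount rule and numbers are only for illustration):

    using System;
    using NUnit.Framework;

    // The smallest compilable version of the class: it builds, but every
    // test below fails until the real logic is written.
    public class DiscountCalculator
    {
        public decimal CalculateDiscount(decimal orderTotal)
        {
            throw new NotImplementedException();
        }
    }

    [TestFixture]
    public class DiscountCalculatorTests
    {
        [Test]
        public void OrdersUnder100GetNoDiscount()
        {
            Assert.AreEqual(0m, new DiscountCalculator().CalculateDiscount(50m));
        }

        [Test]
        public void OrdersOver100GetTenPercentDiscount()
        {
            Assert.AreEqual(15m, new DiscountCalculator().CalculateDiscount(150m));
        }
    }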
Here you've built the solution a few times, but you've only waited for the builds a total of 30 seconds. Your ratio of build time to coding time is completely negligible.
