Using a database connection in a unit testing project - asp.net-mvc

How should we use a database connection with our unit testing project? We have several thousand unit tests written with the NUnit framework for our MVC application. The application has a lot of business logic that connects to a SQL database, so we have written unit tests that exercise the same business logic and connect to the database.
For example, we have written unit tests for insert, update, and retrieve operations that connect to the database.
We have two different sources: one for the local/staging application and another for the production application, so there are separate databases for local and production. Currently our test code connects to the local database. Is it better to use a different database for the unit tests to run against?
Is changing the connection string in the config file of the unit testing project enough and the right way to switch database connections, or is there a more proper way to switch databases based on different solution configurations?
Please help us address this scenario.
Regards,
Karthik.
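
One common approach, sketched here as an illustration only: give the NUnit test project its own App.config with a named connection string per environment, and let the build configuration decide which name is used (the names UnitTestDb_Local and UnitTestDb_Staging below are hypothetical).

using System.Configuration;

// Hypothetical helper: each solution/build configuration resolves to its
// own named connection string defined in the test project's App.config.
public static class TestDatabase
{
#if DEBUG
    private const string ConnectionName = "UnitTestDb_Local";
#else
    private const string ConnectionName = "UnitTestDb_Staging";
#endif

    // Tests read the connection string from here instead of hard-coding it.
    public static string ConnectionString
    {
        get
        {
            return ConfigurationManager
                .ConnectionStrings[ConnectionName]
                .ConnectionString;
        }
    }
}

Note that an NUnit test assembly reads its own App.config at run time, not the web application's Web.config, so the switch has to live in (or be transformed into) the test project's config.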

Related

Rails tests with various integrations

I have a Rails app that interacts with an external API (Salesforce) that relies on external data sitting in a remote database. I've written a wrapper around this code so that users can just call get_by_id(id) instead of writing the corresponding SQL query.
I want to test this code, and I am not sure how I should go about it. Should I be hitting the Salesforce backend database for the tests, calling the real methods? Or should I just mock the results of the method calls? I am perpetually confused by what I should test...
You should write something like a dedicated suite for the Salesforce interaction.
A basic principle of testing is that your tests should not fail because of external factors. However, your app should be able to recover from Salesforce's errors.
From Rails 4 Test Prescriptions:
Unfortunately, interacting with a third-party web service introduces a lot of complexity to our testing. Connecting to a web service is slow—even slower than the database connections we’ve already tried to avoid. Plus, connection to a web service requires an Internet connection... Some external services are public—we don’t want to post an update to Twitter every time we run our tests, let alone post a credit-card payment to PayPal.
Also, the book has some guidelines:
- A fake server, which intercepts HTTP requests during a test and returns a canned response object. We’ll be using the VCR gem ...
- An adapter, which is an object that sits between the client and the server to mediate access between them.
- A smoke test, which goes from the client all the way to the real server... a full end-to-end test of the entire interaction. We don’t want to do this often, for all the reasons listed earlier, but it’s useful to be able to guard against changes in the server API.
- An integration test, which goes from the client to the fake server. This tests the entire end-to-end functionality of our application but uses a stubbed response from the server.
- A client unit test, which starts on the client and ends in the adapter. The adapter’s responses are stubbed, meaning that the adapter isn’t even making fake server calls. This allows us to unit-test our client completely separate from the server API.
- An adapter unit test, which starts in the adapter and ends in the fake server. These tests are the final piece of the chain and allow us to validate the behavior of the adapter separate from any client or the actual server.
By the way, I think the book is a must-have.
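
Since the original question in this thread is about ASP.NET MVC and NUnit, here is a rough sketch of the book's "client unit test with a stubbed adapter" idea translated into C#; all type names are hypothetical, and this is only an illustration of the layering, not anyone's actual API.

using NUnit.Framework;

// The adapter is the only thing that knows how to talk to the external service.
public interface ISalesforceAdapter
{
    string GetAccountNameById(string id);
}

// Client code under test depends on the adapter abstraction, not on Salesforce.
public class AccountGreeter
{
    private readonly ISalesforceAdapter _adapter;

    public AccountGreeter(ISalesforceAdapter adapter)
    {
        _adapter = adapter;
    }

    public string Greet(string id)
    {
        return "Hello, " + _adapter.GetAccountNameById(id);
    }
}

// Hand-rolled stub: no server, fake or real, is ever contacted.
public class StubSalesforceAdapter : ISalesforceAdapter
{
    public string GetAccountNameById(string id)
    {
        return "Acme";
    }
}

[TestFixture]
public class AccountGreeterTests
{
    [Test]
    public void Greet_UsesNameReturnedByAdapter()
    {
        var greeter = new AccountGreeter(new StubSalesforceAdapter());
        Assert.AreEqual("Hello, Acme", greeter.Greet("001"));
    }
}

The adapter unit tests and the occasional smoke test then cover the real (or fake-server-backed) adapter implementation separately.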

Creating secure connections to more than one SQL Server and database

I am new to the MVC world. I am developing an MVC 5 code-first-from-database project, and I want to create secure connections between my local server and more than one SQL Server, where each server can contain one or more databases.
Can someone clarify these points for me:
1. Creating SQL Server connections (if possible, in one Entity Framework model)
2. Securing the connections
3. If possible, how to access a database other than the initial catalog database
Note:
I found that I can use Entity Framework for data access and, after creating the connection, encrypt the connection string with the command-line utility Aspnet_regiis.exe.
Are there any other techniques besides this?
Thanks
As far as securing connections goes, all you've got is SSL. Make each SQL Server instance available only via SSL and your connection is as secure as it will ever be. Encrypting the connection string, using a limited-access user, etc. are important, but they aren't really aspects of securing the connection, merely security in general.
For multiple database access via Entity Framework, you just need multiple contexts. Each context belongs to one and only one database, but you can have as many contexts as you want. The only thing to be somewhat mindful of is having multiple contexts that you run migrations against. While I believe EF has supported this since version 6, it's still not recommended. Generally, you should have only one context that you run migrations against (the entities that inherently belong to your application), while your other contexts merely work with existing databases created and managed outside of your application. Really, if you think you need multiple migrate-able contexts in a single project, that's an argument for splitting your project up into multiple projects.
So, for connecting to an existing database, you just need a regular old DbContext subclass, with a couple of tweaks: 1) you need to specify the connection to use manually and 2) you need to disable database initialization. Here's a skeleton of what that looks like:
public class ExistingDatabaseContext : DbContext
{
    public ExistingDatabaseContext()
        : base("ConnectionStringName") // name of the connection string in config
    {
        // Disable initialization so EF never tries to create, seed, or
        // migrate this existing, externally managed database.
        Database.SetInitializer<ExistingDatabaseContext>(null);
    }

    // DbSets here
}
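
For contrast, a sketch of the single migrate-able context that owns the application's own entities might look like the following; the class and connection string names are illustrative.

public class ApplicationContext : DbContext
{
    public ApplicationContext()
        : base("ApplicationConnectionStringName")
    {
        // This is the one context migrations run against, so the default
        // initializer (or an explicit MigrateDatabaseToLatestVersion
        // initializer) is left enabled here.
    }

    // DbSets for the entities that inherently belong to this application
}

Because each context is bound to its own named connection string, the two contexts can point at different databases, or even different SQL Server instances, side by side.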

Rails unit tests without db setup or teardown?

I want to write a few unit tests that do not make any changes to a database.
I have a Rails 2.3.11 application. This app has a SQLite database as its primary database. In many ways, this is a run-of-the-mill Rails app.
What makes this app unique is that it also establishes a connection to a SQL Server database. I have some models which are abstract classes and they use the SQL Server database. I have before_save and before_destroy callbacks to prevent any changes being made to the SQL Server database. Also, the user credentials to connect to SQL Server are supposed to be read-only.
I would like to write unit tests that make assertions on the data that is already present in the SQL Server database. But I don't want to set up or tear down the SQL Server database.
I am afraid to just try it and see what happens. I would like a setting in the unit test that will prevent Rails from trying to set up or tear down the SQL Server database. Is this possible? How do I do it?
Thank you!
The setup/teardown only affects the application database (SQLite, it sounds like), not additional, external database connections.
Also, you should keep your test environment completely separate from your production environment. So, if you're using a test SQL Server DB as well (and you should be, with test data in it - not the production one), then you should be fine even if the worst happens.

Integration testing an ASP.NET MVC application

I need some advice about an efficient way of writing integration tests for our current ASP.NET MVC application. Our architecture consists of:
A Service Layer below Controllers
The Service Layer uses Repositories and (sometimes) a Message Queue to send messages to external applications.
What I think should be done is to:
Write behavioral unit tests for all pieces in isolation. So, for example, we should unit test the Service Layer while mocking Repositories and Message Queues.
Same goes for Controllers. So we mock the Service Layer and unit test the Controller.
Finally, we write separate integration tests for Repositories against a real database and for Message Queue classes against real message queues to ensure they persist/retrieve data successfully.
My Questions:
Are there any other types of integration tests that we need to write?
Do we still need to write integration tests for Services with the real Repositories and Message Queues?
Do we still need to write integration tests for Controllers with the real Services (which in turn consist of real Repositories and Message Queues)?
Any advice would be greatly appreciated.
Cheers
Here at the office we do not test against real services.
We have tests on the service side.
We test controllers with unit tests, and we use mocks in these unit tests.
We don't yet have integration tests :-(
We were advised not to use real services for testing; we use Rhino Mocks to simulate the responses of methods called inside controller actions.
So the problem is still how to do integration tests in a good way.
Maybe this could help you:
http://www.codeproject.com/Articles/98373/Integration-testing-an-ASP-NET-MVC-application-wit.aspx
but I am still looking for a better understanding of its possibilities.
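
For what it's worth, a minimal sketch of "mock the Service Layer and unit test the Controller" looks something like this; it uses a hand-rolled stub rather than Rhino Mocks, and every type name here is hypothetical.

using System.Collections.Generic;
using System.Web.Mvc;
using NUnit.Framework;

public interface IProductService
{
    IList<string> GetProductNames();
}

public class ProductController : Controller
{
    private readonly IProductService _service;

    public ProductController(IProductService service)
    {
        _service = service;
    }

    public ActionResult Index()
    {
        return View(_service.GetProductNames());
    }
}

// Stub that stands in for the real Service Layer (and therefore for the
// Repositories and Message Queues behind it).
public class StubProductService : IProductService
{
    public IList<string> GetProductNames()
    {
        return new List<string> { "Widget", "Gadget" };
    }
}

[TestFixture]
public class ProductControllerTests
{
    [Test]
    public void Index_ReturnsViewWithProductNames()
    {
        var controller = new ProductController(new StubProductService());

        var result = controller.Index() as ViewResult;

        Assert.IsNotNull(result);
        Assert.AreEqual(2, ((IList<string>)result.Model).Count);
    }
}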
I'm using Ivonna for testing various levels of isolation. I'm able to test that a particular Url with some particular POST data hits the expected Action method with the expected arguments, and at the same time I'm able to stub/mock external services.
I've been using SpecsFor.MVC for integration testing. Essentially you write code in a test class and the framework runs a browser interpreting your C# into browser actions. It's beautifully simple to use and setup.

Testing Applications built on top of RESTful web services

Let's say I am developing a web application that talks to a RESTful web service for certain things.
The RESTful web service isn't third party; it is being developed in parallel with the main application (a good example would be an e-commerce application and its payment processor, or a social network and an SSO system).
In such a system, acceptance (Cucumber) or functional tests can be done in a few ways:
By mocking out all external calls using an object-level mocking library, such as Mocha or JMock.
By mocking at the HTTP level, using libraries such as webmock.
By actually letting the main application make the real call.
The problem with #1 and #2 is that if the API of the underlying application changes, my tests will keep passing while the code actually breaks, defeating the purpose of the tests in the first place.
The problem with #3 is that there is no way I can roll back data the way the test suite does on teardown. And I am running my tests in parallel, so if I let the actual web service calls go through, I will get errors such as "username taken" and the like.
So the question to the community is: what is the best practice?
Put your main application in a development or staging environment. Spin up your web service in the same environment. Have the one call the other. Control the fixture data for both. That's not mocking; you're not getting a fake implementation.
Not only will this allow you to be more confident in real-world stability, it will allow you to test performance in a staging environment, and also allow you to test your main application against a variety of versions of the web service. It's important that your tests don't do the wrong thing when your web service changes, but it's even more important that your main application doesn't either. You really want to know this confidently before either component is upgraded in production.
I can't see that there is a middle ground between isolating your client from the service and actually hitting the service. You could have erroneously passing tests because the service has changed behavior, but don't you have some "contract" with the development team working on that service that holds them responsible for breakage?
You might try fakeweb and get a fresh copy of expected results each day so your tests won't run against stale data responses.
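
As a .NET analogue of option #2 (stubbing at the HTTP level, the role webmock/fakeweb play in Ruby), a test can hand HttpClient a fake message handler that returns canned responses; everything below is a hypothetical sketch.

using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using NUnit.Framework;

// Fake handler: intercepts every request and returns a canned response,
// so no real HTTP call ever leaves the test process.
public class CannedResponseHandler : HttpMessageHandler
{
    private readonly string _body;

    public CannedResponseHandler(string body)
    {
        _body = body;
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent(_body)
        };
        return Task.FromResult(response);
    }
}

[TestFixture]
public class PaymentClientTests
{
    [Test]
    public void ReadsCannedPaymentResponse()
    {
        var client = new HttpClient(new CannedResponseHandler("{\"status\":\"paid\"}"));

        var json = client.GetStringAsync("http://payments.example/charges/42").Result;

        Assert.IsTrue(json.Contains("paid"));
    }
}

The same caveat from #1 and #2 applies: the canned response has to be refreshed when the real service's API changes, which is exactly why the advice above is to also run the two applications together in a staging environment.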
