I am new to unit testing. I haven't found a complete example/sample of unit testing a CRUD application.
I want to add unit tests to an MVC application that uses Entity Framework.
Do we actually add data to the database every time we run a test?
Do we create separate unit test projects for every project?
Unit tests should be fast
Unit tests should be executed in isolation from the application's other units
Unit tests should be executed in isolation from other tests
Using a database in unit tests violates the principles above:
Accessing a database is slow
A database represents the "global state" of your application, so sharing one database forces you to set up and clear it for every test and removes the possibility of running tests in parallel.
For unit tests you need to abstract away everything that makes tests slow or depends on global state.
In your case it is enough to abstract only the database operations and mock them in the tests.
Entity Framework Core provides a nice In-Memory Database Provider, which lets you write fast unit tests that still exercise your database operations.
When writing tests against a mocked database, you need to configure the mock object to return or assert the data your tests expect.
If you use the InMemory database provider, you will instead need to insert data for the tests and read the database to assert the expected result, as in the sketch below.
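For illustration, here is a minimal sketch of such a test using the EF Core In-Memory provider. The context, entity, and service names (AppDbContext, Product, ProductService) and the use of MSTest are assumptions for the example, not taken from the question:

```csharp
using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ProductServiceTests
{
    // Each test gets its own isolated in-memory store.
    private static AppDbContext CreateContext()
    {
        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseInMemoryDatabase(databaseName: Guid.NewGuid().ToString())
            .Options;
        return new AppDbContext(options);
    }

    [TestMethod]
    public void Create_adds_product_to_the_store()
    {
        using (var context = CreateContext())
        {
            var service = new ProductService(context); // hypothetical service under test

            service.Create(new Product { Name = "Widget" });

            // Read the database to assert the expected result.
            Assert.AreEqual(1, context.Products.Count());
        }
    }
}
```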
Having its own test project for every "tested" project is common practice, but feel free to invent your own solution structure that fits your needs and expectations. The main idea is that you can quickly find the tests corresponding to a given behavior or concrete method.
I've used MVC 5 and Entity Framework 6 in my project. I decided to use MSTest, but I have several questions.
For example, I have a class called Employee; this class has several dependencies on other classes in the project, for instance Company, Organization, and User.
If I want to create a test method for an action in the Employee controller that returns an Employee for the current user, company, and organization, then in the test method I have to create objects for the employee, user, company, and organization before I can test the action.
If I have to create all those objects for every test, I end up creating a lot of objects in each test method, which is very time-consuming, and I have even more complicated objects with more dependencies on other objects in my project.
I researched this; some people recommend creating a database with specific data in it for test purposes, but I know one of the principles of unit testing is that no database should be used and all tests should be able to run in memory.
If I have to mock every class, that is time-consuming and the possibility of error is high.
What is the best approach to testing in this situation?
Is unit testing a good choice?
On the web, most examples are about writing unit tests for things like phone number formatting or similar. Where can I find a proper sample?
I think you are confusing unit testing your controllers with unit testing your repository classes.
If you want to unit test your controllers, you must first make their dependencies on services "loosely coupled" using dependency injection.
Now, loading all the objects associated with an Employee is a responsibility of the repository service, not the controller, so you must not test that in your controller tests.
You should mock the repository service with a fake repository that returns an object with just the properties your controller needs to do its work.
Then, in your controller tests, check that the controller does with this data what it is supposed to do.
Of course, you can have multiple tests for the same action, testing different behavior for different kinds of data received.
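As a minimal sketch of this idea (the IEmployeeRepository interface, the controller action, and the MSTest usage below are hypothetical, not taken from the question), a hand-rolled fake repository injected through the controller's constructor lets the action be tested without any database:

```csharp
using System.Web.Mvc;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical fake repository: returns just the data the controller needs, no database involved.
public class FakeEmployeeRepository : IEmployeeRepository
{
    public Employee GetForCurrentUser(string userName)
    {
        return new Employee { Name = "Alice", CompanyName = "Acme" };
    }
}

[TestClass]
public class EmployeeControllerTests
{
    [TestMethod]
    public void Index_returns_view_with_employee_for_current_user()
    {
        // The fake is injected through the constructor, so no real repository or database is touched.
        var controller = new EmployeeController(new FakeEmployeeRepository());

        var result = controller.Index() as ViewResult;

        Assert.IsNotNull(result);
        Assert.AreEqual("Alice", ((Employee)result.Model).Name);
    }
}
```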
See "Iteration #4 – Make the application loosely coupled" for an explanation, with examples, of how to use dependency injection with controllers.
See "Iteration #5 – Create unit tests" for how to unit test your controllers. There you will also find some tools to mock session and request data when unit testing your controllers.
Testing the repository is a completely different task. This is more difficult because to really test it you need a real database.
It depends on your situation, but what I use is a "Test database" that is cleaned and filled with some basic data (or regenerated if the schema has changed) each time some tests are started.
By the way, a repository service's only responsibility is to load entities from the database, so repositories don't need dependencies on other services and their tests will never use mocking.
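For illustration, here is a minimal MSTest sketch of such a repository test against a dedicated test database; the context, repository, and connection-string names are assumptions. The database is dropped, recreated, and seeded before every test:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class EmployeeRepositoryTests
{
    private AppDbContext _context;           // hypothetical EF6 DbContext
    private EmployeeRepository _repository;  // hypothetical repository under test

    [TestInitialize]
    public void SetUp()
    {
        // "TestDbConnection" points at a dedicated test database, never at production data.
        _context = new AppDbContext("TestDbConnection");
        _context.Database.Delete();   // drop leftovers from the previous run
        _context.Database.Create();   // recreate the schema from the model

        _context.Employees.Add(new Employee { Id = 1, Name = "Alice" });
        _context.SaveChanges();

        _repository = new EmployeeRepository(_context);
    }

    [TestCleanup]
    public void TearDown()
    {
        _context.Dispose();
    }

    [TestMethod]
    public void GetById_returns_the_seeded_employee()
    {
        Assert.AreEqual("Alice", _repository.GetById(1).Name);
    }
}
```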
From the details of your question, I can say that not only unit testing should be involved; the entire testing process needs to be set up. My answer would otherwise be too broad, so I'll boil it down to a couple of suggestions:
In order to carry out proper testing activities, you need test environments (at least one).
Unit testing is just one of the test levels that must be performed. As you know, unit testing is all about the smallest testable part of an application and nothing more. If you need to cover the creation and usage of complex objects, you may need to consider another test level. For example, integration testing will help you with combined parts of your application and with determining whether they function correctly together.
Regarding the part about "mocking up all the classes": in many cases such an approach is unavoidable, and it can even be good for performance. Since each of your classes "has several dependencies on other classes", I, for example, build such a mocked test object once and cache it so it can be reused in other test executions; the Flyweight design pattern is very useful here.
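As a rough sketch of that idea (all class names below are hypothetical), a cached, flyweight-style factory for an expensive test object might look like this:

```csharp
using System.Collections.Generic;

// Flyweight-style cache of prebuilt test objects: expensive graphs (an Employee with its
// Company, Organization, and User) are created once and reused across test executions.
public static class TestObjects
{
    private static readonly Dictionary<string, Employee> Cache = new Dictionary<string, Employee>();

    public static Employee EmployeeWithDependencies(string key = "default")
    {
        Employee employee;
        if (!Cache.TryGetValue(key, out employee))
        {
            employee = new Employee
            {
                Name = "Alice",
                Company = new Company { Name = "Acme" },
                Organization = new Organization { Name = "R&D" },
                User = new User { UserName = "alice" }
            };
            Cache[key] = employee;
        }
        return employee;
    }
}
```

Sharing fixtures across tests trades some isolation for speed, so it is safest when tests only read the cached object and never mutate it.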
A year ago I had to work on such a project (ASP.NET, Entity Framework, etc.), and my answer to
What is the best approach to testing in this situation?
is: establish a serious test process; it will help you with the reliability of your software.
I'm doing a contract job in Ruby on Rails and it is turning into a disaster for productivity largely because of writing tests. This is my first time using TDD in practice and it hasn't gone well because I am spending so much time writing tests that I've hardly gotten any results to show. I'm thinking that perhaps I'm trying to test too much by writing tests for every feature in each model and controller.
If I can't aim for 100% test coverage, what are some criteria that I could use to determine "is this feature worth testing"? For example, would integration tests trump unit tests?
If you're just getting started with testing in the Ruby or Rails world, I'd make the following suggestions:
Start with RSpec. Automated acceptance/integration testing with a tool like Cucumber can be a large time sink for a single developer who has never used it before. Success with those tools is often contingent upon A) quality UI specs that are very, very specific, B) UI conventions which are easily testable with headless browser emulators, and C) familiarity with the tools ahead of go time.
Test individual methods. Test that they return the values you expect. Test that when you feed them bad data, they respond in an appropriate manner. Test any edge cases as you become aware of them.
Be very careful that you are stubbing and mocking correctly in your tests. It's easy to write a test for 30 minutes only to discover that you're not really testing the thing you need to be testing.
Don't go overboard with micro-managing your TDD - some folks will tell you to test every tiny step in writing a method: first test that the model has a method called 'foo', then test whether it returns non-nil, then test that it returns a string, then test that the string contains a certain substring. While this approach can be helpful when you're implementing something complex, it can be a time sink as well. Just skip the first two steps. That being said, it's easy to go too far in the other direction, specifying a method's behavior with a complex test before you begin implementing it, then beginning the implementation only to find you've botched the test.
Don't write tests that just say 'this is how I wrote the feature, don't change it'. Tests should reflect the essential business logic of a method. If you are writing tests specifically so that they will fail if another developer changes some non-critical part of your implementation, you are wasting time and adding superfluous lines of code.
These are just a few observations I've made from having been in similar situations. Best of luck, testing can be a lot of fun! No, really. I mean it.
100% test coverage is a fantasy and a waste of time. Your tests should serve a purpose, typically to give you confidence that the code you wrote works. Not absolute confidence, but some amount of confidence. TDD should be a tool, not a restriction.
If it's not making your work come out better, why are you doing it? More importantly, if you fail to produce useful code and lose the contract, those tests weren't too useful after all were they? It's a balance, and it sounds like you're on the wrong side.
If you're new to Rails, you can get a small dose of its opinionated creator's view on testing in this 37signals blog article on the topic. Small rules of thumb, but maybe something to push you in a new direction on the subject.
There are also good references on improving your use of RSpec like betterspecs.org, The RSpec Book and Everyday Rails Testing with RSpec. Using it poorly can result in a lot of headache maintaining the specs.
My advice is to try and get your testing and your writing of code as tightly coupled as possible, combined with an Agile approach to the project.
This way you will constantly have new stuff to show the client, as testing will just be baked in. The biggest mistake I see with teams that are new to testing is to continue to see testing as a separate activity. Most of all, I continue to see developers say that a feature is done... but will need some refactoring and some better tests at "some point". "Some point" rarely comes. One thing is inescapable though: at least for several months it will be much slower in the short term, but the quality will be much better and you'll avoid building the "big ball of mud" I've seen in so many larger institutions.
A few things:
Don't
Test the database
Test ActiveRecord or whatever ORM you're using
Do
For models:
Test validations
Test custom logic
For controllers:
Test non-trivial routes
Test redirects
Test authentication
Test instance variable assignment
For views:
I haven't gotten around to testing views, but I've run into situations where I wish I had, for example testing fields in forms.
More at Rails Guides
I looked on Stack Overflow and could find one or two questions with a similar title to this one, but none of them answers what I'm asking. Sorry if this is a duplicate.
In unit testing, there is a guideline that says "one assertion per test". Reading around Stack Overflow and the internet, it is commonly accepted that this rule can be relaxed a bit, but every unit test should test one aspect of the code, or one behavior. This works well because when a test fails you can immediately see what failed, and once you fix it the test will most likely not fail again at some other point in the future.
This works well for Rails unit tests, and I have been using it for functional testing as well without any problem. But when it comes to integration tests, it is somewhat implicit that you should have many assertions in your tests. Apart from that, they usually repeat tests that were already done in functional and unit tests.
So, what are considered good practices when writing integration tests in these two factors:
Length of the integration tests: how do you decide when an integration test should be split in two? By the number of requests? Or is larger always better?
Number of assertions in integration tests: should they repeat, every time, the assertions already made in unit and functional tests about the current state of the system, or should they have only five or so asserts at the end to check whether the correct output was generated?
Hopefully someone will provide a more authoritative answer, but my understanding is that an integration test should be built around a specific feature. For example, in an online store, you might write one integration test to make sure that it's possible to add items to your cart, and another integration test to make sure it's possible to check out.
How long should an integration test be?
As long as it takes to cover a feature, and no more. Some features are small, some are large, and their size is a matter of taste. When they're too big, they can easily be decomposed into several logical sub-features. When they're too small, their integration tests will look like view or controller tests.
How many assertions should they have?
As few as possible, while still being useful. This is true of all tests, but it goes doubly for integration tests because they're so slow. This means testing only the things that are most important, and trying not to test things that are implied by other data. In the case of the checkout feature, I might assert that the order was created for the right person and has the right total, but leave the exact items untested (since my architecture might generate the total from the items). I wouldn't make any assertions before that that I didn't have to, since traversing the application—filling this field, clicking that button, waiting for this modal to open—covers all the integration behavior I need tested, and anything else could be covered by view tests if they need to be tested at all.
All together, in general this means that whereas unit tests tend to be only a couple lines long and preceded by a larger setup block, Rails integration tests tend to be a dozen lines long or more (most of which are interaction), and lack a setup block entirely.
Length of the integration tests: I agree that length here doesn't matter that much. It's more about the feature you're testing and how many steps it takes to test it. For example, let's say you're testing a wizard of five steps which creates a project. I would put all five steps in one test and check whether the relevant data appears on screen. However, I would split the test if the wizard allowed for different scenarios that need to be covered.
Number of assertions in integration tests: don't test things that are already tested in other tests, but make sure user expectations are met. So test what the user expects to see on the screen, not back-end-specific code. Sometimes you might still need to check that the right data is in the database, for example when it's not supposed to appear on the screen.
First, I apologize for the open-ended nature of this question. However, I've been paralyzed by this for months, and in spite of constant searching, still cannot get past this.
I have been working on an MVC/EF app for a while. I'm trying to get an understanding of how to design and build a testable MVC3 application backed by Entity Framework (4.1). You can see a few questions I've asked on the subject here, here, and here.
I'm trying not to overcomplicate it, but I want it to be a sound, loosely coupled design that can grow. The way I understand it, the following are pretty much the bare minimum required components:
MVC app
This is very thin. As little logic as possible goes here. My views have as little conditional logic as possible, my view models are never more than POCOs, and my controllers simply handle the mapping between the view models and domain models, and calling out to services.
Service layer + interfaces (separate assemblies)
This is where all of my business logic goes. The goal of this is to be able to slap any thin client (forms app, mobile app, web service) on top of this to expose the guts of my application. Interfaces for the service layer sit in another assembly.
Core utilities/cross-cutting + interfaces (separate assemblies)
This is stuff I build that is not specific to my application, but is not part of the framework or any third party plugin I'm using. Again, interfaces to these components sit in their own assembly.
Repository (EF context)
This is the interface between my domain models and my database. My service layer uses this to retrieve/modify my database via the domain models.
Domain models (EF POCOs)
The EF4-generated POCOs. Some of these may be extended for convenience with nested or computed properties (such as Order.Total = Order.Details.Sum(d => d.Price)).
IoC container
This is what is used for injecting my concrete/fake dependencies (services/utilities) into the MVC app & services. Constructor injection is used exclusively throughout.
Here is where I'm struggling:
1) When integration testing is appropriate vs. unit testing. For example, will some assemblies require a mix of both, or is integration testing mainly for the MVC app and unit testing for my services & utilities?
2) Do I bother writing tests against the repository/domain model code? Of course in the case of POCOs, this is not applicable. But what about when I extend my POCOs w/ computed properties?
3) The proper pattern to use for repositories. I know this is very subjective, as every time I see this discussed, it seems everyone has a different approach. Therefore it makes it hard to figure out which way to go. For example, do I roll my own repositories, or just use EF (DbContext) directly?
4) When I write tests for my services, do I mock my repositories, or do I use SQLite to build a mock database and test against that? (See debates here and here).
5) Is this an all-or-nothing affair, as in, if I do any testing at all, I should test everything? Or, is it a matter of any testing is better than no testing? If the latter, where are the more important areas to hit first (I'm thinking service layer)?
6) Are there any good books, articles, or sample apps that would help answer most of these questions for me?
I think that's enough for now. If this ends up being too open ended, let me know and I will gladly close. But again, I've already spent months trying to figure this out on my own with no luck.
This is a really complex question. Each of your points is large enough to be a separate question, so I will write only a short summary:
Integration testing and unit testing don't replace each other. You always need both if you want a well-tested application. A unit test is for testing logic in isolation (usually with the help of mocks, stubs, fakes, etc.), whereas an integration test is for testing that your components work correctly together (= no mocks, stubs, or fakes). When to use an integration test and when to use a unit test really depends on the code you are testing and on the development approach you are following (for example, TDD).
If your POCOs contain any logic, you should write unit tests for them (a sketch follows). Logic in your repositories is usually heavily dependent on the database, so mocking the context and testing them without a database is usually useless; cover repositories with integration tests instead.
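For example, here is a minimal sketch of a unit test for the Order.Total computed property mentioned in the question; MSTest is used only for illustration and the entity shapes are assumptions:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical partial class extending the EF-generated Order POCO with a computed property.
public partial class Order
{
    public decimal Total
    {
        get { return Details.Sum(d => d.Price); }
    }
}

[TestClass]
public class OrderTests
{
    [TestMethod]
    public void Total_is_the_sum_of_the_detail_prices()
    {
        var order = new Order
        {
            Details = new List<OrderDetail>
            {
                new OrderDetail { Price = 10m },
                new OrderDetail { Price = 5m }
            }
        };

        // Pure in-memory logic: no database is needed to verify the computed property.
        Assert.AreEqual(15m, order.Total);
    }
}
```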
It really depends on what you expect from repositories. If a repository is only a dumb DbContext / DbSet wrapper, then its value is zero and it will most probably not make your code unit testable, as described in some of the referenced debates. If it wraps the queries (no LINQ-to-Entities in the upper layers) and exposes access to aggregate roots, then the repository achieves its purpose: a correct separation of data access behind a mockable interface.
This depends entirely on the previous point. If you expose IQueryable, or methods accepting an Expression<Func<>> that is passed to an IQueryable internally, you cannot correctly mock the repository (well, you can, but you still need to pair each unit test with an integration test covering the same logic), because LINQ-to-Entities is a leaky abstraction. If you completely wrap the queries inside the repository and use your own declarative query language (the specification pattern), you can mock them; see the sketch below.
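A minimal sketch of that second style (all names are hypothetical): the repository wraps its queries behind named methods and exposes no IQueryable, so the upper layers can be unit tested against a mock or fake implementation of the interface:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical repository interface: queries are wrapped behind named methods,
// and no IQueryable or Expression<Func<>> leaks to the upper layers.
public interface IOrderRepository
{
    Order GetById(int id);
    IList<Order> GetOrdersPlacedSince(DateTime date);
    void Add(Order order);
}

// EF-backed implementation; only this class knows about the DbContext and LINQ-to-Entities.
public class OrderRepository : IOrderRepository
{
    private readonly AppDbContext _context;

    public OrderRepository(AppDbContext context)
    {
        _context = context;
    }

    public Order GetById(int id)
    {
        return _context.Orders.Find(id);
    }

    public IList<Order> GetOrdersPlacedSince(DateTime date)
    {
        // The query is fully contained here, so callers never compose LINQ-to-Entities themselves.
        return _context.Orders.Where(o => o.PlacedOn >= date).ToList();
    }

    public void Add(Order order)
    {
        _context.Orders.Add(order);
    }
}
```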
Any testing is better than no testing. Many methodologies expect high coverage; TDD even aims at 100% test coverage, because the test is always written first and there is no logic without a test. It comes down to the methodology you are following and to your professional decision about whether a given piece of code needs a test.
I don't think there is any "read this and you will know how to do that". This is software engineering, and software engineering is an art. There is no blueprint that works in every case (or even in most cases).
I can think of quite a few components that need to be created when authoring a web application. I know it should probably be done incrementally, but I'd like to see what order you usually tackle these tasks in. Lay out your usual order of events and some justification.
A few possible components or sections I've thought of:
Stories (i.e. pivotaltracker.com)
Integration tests (Rspec, Cucumber, ...)
Functional tests
Unit Tests
Controllers
Views
Javascript functionality
...
The question is, do you do everything piecemeal? (one story, one integration test, get it passing, move onto the next one, ...) OR complete all of one component first then move onto the next.
I'm a BDDer, so I tend to do outside-in. At a high level that means establishing the project vision first (you'd be amazed how few companies actually do this), identifying other stakeholders and their goals (legal, architecture, etc.) then breaking things down into feature sets, features and stories. A story is the smallest usable piece of code on which we can get feedback, and it may be associated with one or more scenarios. This is what Chris Matts calls "feature injection" - creating features because they are needed to support stakeholder goals and the project vision. I wrote an article about this a while back. I justify this because regardless of how good or well-tested your code is, it won't matter if it's the wrong code in the first place.
Once we have the story and scenarios, I tend to write the UI first, followed by the classes which support it. I wrote a blog post about a real-life example here - we were programming in Java so you might have to do things a bit differently with Rails, but the principles remain. I tend to start writing unit tests when there's actually behaviour to describe - that is, a class behaves differently depending on its context, on what has already happened before. Normally the first class will indeed be the controller, which I tend to populate with static data just to get the UI into shape. I'll write the first unit tests to help me get rid of that static data.
Doing the UI first lets me get feedback from stakeholders early, since it's the UI that the users will be interacting with. I then start with the "happy path" - the thing which lets the users do the most valuable thing - followed by the exceptional cases, validation, etc.
Then I do my best to persuade my PM to let us release our code early, because it's only when the users actually get hold of it to play with that you find out what you really did wrong.