In Rails, each test case runs inside an ActiveRecord transaction, which makes it possible to exercise everything and then roll the database back to its original state, without having to drop all tables or do anything else that might affect seed data, etc.
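(For reference, that behaviour is a one-line setting in Rails; this is the Rails 4-era name, later renamed to use_transactional_tests:)
# test/test_helper.rb
class ActiveSupport::TestCase
  # Every test runs inside a transaction that is rolled back afterwards,
  # so seed data and fixtures survive untouched.
  self.use_transactional_fixtures = true
end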
Is it possible to do something like this in TypeORM?
From what I've seen, the main issue with the way transactions are documented to work is that a call to another method would not be using the created transaction, but I'm hoping I'm missing some other way of implementing it.
Thanks!
I had exactly the same expectations as you. Coming from Rails and Spring, I expected to have transactional tests and found no solution directly in TypeORM.
It is hard to reuse the same transaction during tests because the connection class always creates a new QueryRunner for every database command or transaction. Diving into TypeORM, the solution I found was to monkey-patch the method that creates the query runner, so that it is reused during the tests. I created this library to reuse this code across several projects: https://github.com/viniciusjssouza/typeorm-transactional-tests.
I know this is quite late, but I also actually worked on a solution which you can see here: https://www.npmjs.com/package/typeorm-test-transactions
A disclaimer from my side is that you have to use the @Transactional() decorator, but I like how that makes the code a lot cleaner, and you don't have to pass the transaction manager down.
@viniciusjssouza I checked your solution and I really like it! It's funny, I think we both had the same problem at the same time :P
You might consider an alternative approach. Instead of isolating tests through transactions you could isolate them through serialization + multiple databases.
At a high level the approach is as follows:
Split tests into N groups, where each group gets its own database and N is roughly the number of CPU cores you have.
Within each group, tests run serially, and each test resets the database when it starts.
The groups run in parallel with each other.
This approach is remarkably easy to set up with Jest and Docker and allows you to achieve a high degree of parallelism.
I wrote a blog post describing this approach in more detail here:
https://blog.mikevosseller.com/2021/11/25/how-to-run-jest-with-multiple-test-databases.html
Related
I'm new to testing in Rails and I don't understand why or when I should use fixtures (or factories) rather than just seeding my test DB and querying it to run the tests.
In many cases, it seems like it would be faster and easier to have the same data in the dev and test environments.
For example, if I want to test an index page, should I create 100 records via a factory or should I seed the db with 100 records?
If someone could clarify this, it would be great.
Thanks!
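Concretely, I mean the difference between something like the first snippet in every spec file, versus seeding once (a sketch; :record is a stand-in factory name):
# Factory route: each example group builds its own data.
before(:each) { 100.times { Factory.create(:record) } }

# Seed route: db/seeds.rb inserts the 100 records once,
# and the specs just query whatever is already there.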
This is actually a deeper question of how to test efficiently, and you will find a lot of different opinions.
The reason to avoid a database in your unit tests is merely speed. Database operations are slow. It might not seem slow with one test, but if you have continuous integration going (as you should) or when you made a quick change and just want to see what happens, those delays add up. So prefer mocks to truly unit test code.
Then your own integration tests should hit an in-memory database rather than your real database--for the same reason, speed. These will be slower than your mocked tests, but still faster than hitting the real database. When you're developing, the build-test-deploy cycle needs to be as fast as possible. Note that some people call these unit tests as well. I wouldn't, but I guess it is just semantics.
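For example, a fully mocked unit test never touches persistence at all (a sketch using RSpec doubles; Invoice and TaxCalculator are invented names):
describe "Invoice#total" do
  it "adds tax to the subtotal without hitting the database" do
    calculator = double("TaxCalculator", :rate_for => 0.08)  # stubbed collaborator
    invoice = Invoice.new(:subtotal => 100, :tax_calculator => calculator)
    invoice.total.should == 108.0
  end
end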
These first two kinds of tests are by developers for developers.
Then the testers will hit the real database, which will be populated with test data defined by the testers and subject-matter experts. There are lots of clever ways to speed this up as well, but this will be the place where they test the integration of your code with the production-like database. If all your in-memory database tests passed and something goes wrong here, then you know it has to do with something like database configuration, vendor-specific SQL, etc. rather than something fundamentally bad. You will also get your first taste of what the performance is like.
Note that everything I've said here is a matter of debate. But hopefully it clarifies what you should consider about when to do certain things and why.
What I need is a way inside a Factory.define block to know if the factory has been called using create or build, either explicitly or simply using the default strategy.
I have a factory that has to manually adjust associations, because the original author of the code took things so far off the rails that a normal create barfs, while a normal build can be managed. I don't want to adjust those associations in the build case, but I have to in the create case.
I've been looking to see if there is something analogous to 'current_strategy' but I haven't seen anything yet. I know I can distinguish using after_create vs. after_build, but the original author made it so that the act of saving the object without doing the adjustments causes massive unhappiness--save exceptions and garbage in the database.
I currently have no mandate to fix the "models" he wrote and the existing rspec tests use the differentiation to do the right thing at any time. In every case the prior test author(s) have opted to simply never use create, which means setting up most of the test data is an arcane and lengthy process.
Any help would be deeply appreciated--I'm still exercising my GoogleFu but would love to be short circuited...
Oh, this is in Rails 2 (/cry)
thanks!
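For reference, the distinction I mentioned looks like this in the old factory_girl syntax (a sketch; :widget and repair_associations! are stand-ins), and the create-side callback is exactly the one that fires too late for me:
Factory.define :widget do |f|
  f.after_build  { |w| }                        # build: leave the associations alone
  f.after_create { |w| w.repair_associations! } # create: runs AFTER save, so too late
end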
This sounds like a very strange problem indeed, but since you say that you're cleaning up someone else's code, I'll assume there's no easy way out of this.
I wouldn't approach this from the factory side. The factory shouldn't care because the model (not the factory) is supposed to be the gatekeeper of validity in terms of object structure and associations.
I would write specs that separately create and build objects, and test their associations to make sure they are correct (according to what you want the new behavior to ultimately be). Then, get those specs to pass by refactoring the models to do what you actually need them to do. This is how you clean up legacy code, and alter its behavior - write tests that will pass when the new functionality is correct, and refactor until they pass, making incremental changes with each test/refactoring.
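For example, something along these lines (a sketch; Widget and its owner association are stand-ins for your real models):
describe Widget do
  it "is valid when merely built" do
    Factory.build(:widget).should be_valid
  end

  it "wires up its associations when created" do
    widget = Factory.create(:widget)
    widget.owner.should_not be_nil  # whatever the corrected behaviour should be
  end
end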
When your new specs are passing, you're well on your way. If the previous author put in specs of their own that verify the previous behavior, then you'll have to work on figuring out which, if any, of those tests are currently valid (many of them may be, since they represent the requirements that the app currently fulfils), and removing ones that aren't.
I am writing a program using Ruby on Rails and PostgreSQL. The system generates a lot of reports which are frequently updated and frequently accessed by users. I am torn between using Postgres triggers to create the report tables (like Oracle materialized views) and the Rails built-in ActiveRecord callbacks. Has anyone got any thoughts or experiences on this?
Callbacks are useful in the following cases:
They keep all business logic in the Rails models, which eases maintainability.
They can make use of existing Rails model code.
They are easy to debug.
Ruby code is easier to write and read than SQL, which also helps maintainability.
Triggers are useful in the following cases:
Performance is a big concern; triggers are faster than callbacks.
If your concern is ease and cleanliness, use callbacks. If your concern is performance, use triggers.
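For illustration, the callback approach might look something like this (a sketch; Order, SalesReport, and rebuild_for are made-up names):
class Order < ActiveRecord::Base
  after_save :refresh_report   # runs in Ruby, inside the request

  private

  def refresh_report
    SalesReport.rebuild_for(customer_id)   # hypothetical report model/method
  end
end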
We had the same problem, and since this is an interesting topic, I'll elaborate based on our choice/experience.
I think the concept is more complex than what is highlighted in the current answer.
Since we're talking about reports, I assume that the use case is updating of data warehousing tables - not a "generic" application (this assumption/distinction is crucial).
Firstly, the "easy to debug" idea is not [necessarily] true. In our case, it's actually counterproductive to think so.
In sufficiently complex applications (data warehousing updates, millions of lines of code, a mid-sized or larger team), some types of callbacks are simply impossible to maintain, because there are so many places and ways the database can be updated that it becomes practically impossible to debug missed callbacks.
Triggers don't necessarily have to be designed as the "complex and fast" logic.
Specifically, triggers may also work as low-level callback logic, and therefore be simple and lean: they would simply forward the update events back to the Rails code.
To wrap up, in the use case mentioned, rails callbacks should be avoided like the plague.
An efficient and effective design is to have RDBMS triggers adding records to a queue table, and a rails-side queueing system, which acts upon them.
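On the Rails side, the consumer can be as simple as this (a sketch; the table, model, and method names are made up, and the trigger itself is a few lines of SQL that inserts one queue row per change):
class ReportQueueWorker
  # Drain the queue table that the RDBMS triggers append to.
  def run_once
    ReportQueueEntry.find_each do |entry|
      SalesReport.apply_event(entry)   # rails-side logic, easy to test and debug
      entry.destroy
    end
  end
end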
(Since this post is old, I'm curious about which has been the experience of the OP)
I really like using Factory Girl to setup my tests. I can build chains of associations in a single line. For example:
Factory.create(:manuscript)
Automatically creates a journal, a journal owner, a manuscript author, etc. It allows me to keep my setup blocks really simple, and that's fantastic.
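The factories look roughly like this (simplified):
Factory.define :manuscript do |f|
  f.title "A fine paper"
  f.association :journal                    # the journal factory creates its owner
  f.association :author, :factory => :user
end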
However, there's a cost of course. Creating several objects in the background means my unit tests sometimes take as long as 0.8 seconds each. That's fine when your app is small, but now I've got a few hundred tests and my specs take over a minute to run (not including the time it takes for the app to spin up). It's starting to feel painful.
I'm not especially interested in anything too drastic, like mocking everything. At least while my app is relatively small, I'd like to maintain my factory girl abstractions. I just want to figure out a way to make them work a bit faster.
Any suggestions?
If you're testing object behavior and don't need to actually save anything to the database, you can use Factory.build(:model). It instantiates the object and its associations but does not write them to the DB, which is much faster than creating and storing all those objects. If you still want to write some or most objects to the DB, you can set up an SQLite in-memory test database.
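Something along these lines (a sketch; note that the in-memory database starts empty, so you have to load the schema yourself):
# Build only; nothing is written to the database:
manuscript = Factory.build(:manuscript)

# Or, in spec_helper, point ActiveRecord at an in-memory SQLite database:
ActiveRecord::Base.establish_connection(
  :adapter  => "sqlite3",
  :database => ":memory:"
)
load "#{Rails.root}/db/schema.rb"  # recreate the tables in the empty in-memory DB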
Not sure there's a good solution to this problem. As Beerlington suggested, you can save some time using Factory.build rather than Factory.create. But even that's not nearly as fast as testing plain old Ruby objects. The fact, it seems, is that Factory Girl is not a good choice if you're very concerned with speed.
That said, I was able to make some fairly significant speed improvements by reading through my entire suite and making liberal use of the rspec-set gem. This allows you to run your setup once -- and only once -- for the entire group of tests. It's similar to using before(:all) except that it takes advantage of transactions to reset the state of objects between each spec.
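Usage looks roughly like this (from memory, so double-check against the gem's README; Manuscript is a stand-in model):
describe Manuscript do
  set(:manuscript) { Factory.create(:manuscript) }  # run once for the whole group

  it "belongs to a journal" do
    manuscript.journal.should_not be_nil
  end

  it "has an author" do
    manuscript.author.should_not be_nil
  end
end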
I've gotten pretty experienced with testing controllers; my question here, though, is: aren't we supposed to test the data context as well, and how? I mean, there are a lot of relationships and constraints coming from the DB that simply testing controllers does not cover.
On the other hand, testing against the DB is not considered a good practice - what then ? Simply testing without db.SubmitChanges() or what ?
IMHO you should not test the DataContext. Hopefully Microsoft has already done this, so testing that SubmitChanges will persist data into the DB is pointless to me. You should do integration or web testing, where you would define specific scenarios and verify the output from the application.
When it comes to testing your repositories, the typical approach is to create an in-memory database that can be torn down and rebuilt each time you run your tests. By using this approach, you will always know what the data will look like, so you can more easily make assertions against it. In addition, you won't be touching your real data, which is always a positive. SQLite is the most popular option in the .NET space for this.
Yes, you should do integration testing of your data context to ensure that any "code" that you put in the database itself works -- uniqueness constraints, triggers, etc. This doesn't imply that you should do your unit testing against the database, however. Having said that, any code you put in your model classes should be unit tested. Usually, you can do this without having to test against the database directly. For example, any validation code should be able to run without requiring that you actually insert or update the DB.