I have a fairly complex service that creates and saves a lot of domain instances. Because of the logic, I need to create different instances at different times and, in between, check certain conditions: not only that each instance is valid, but also, for example, whether certain files exist on the file system.
I'm testing the incorrect cases, where the service throws an exception, and I need to verify that no instances were persisted if an exception is thrown.
One specific test case was failing: even though the expected exception was thrown, a domain instance was still saved to the DB. Then I read that because the integration test is itself transactional, the rollback really occurs at the end of the test rather than right after the service call, which is where I check conditions in the "then" section of the Spock test case.
So if the rollback only happens after the point where I could test for it, I can't test it :(
Then I read that making the integration test non-transactional might help, so I added:
static transactional = false
After this, all my other tests started to fail!
My question is: what is the right way of testing services that should roll back when an exception is thrown? I just need to verify that after an exception occurs, there are no alterations to the database (since this is a healthcare application, data consistency is key).
FYI:
This is the service I need to test: https://github.com/ppazos/cabolabs-ehrserver/blob/master/grails-app/services/com/cabolabs/ehrserver/parsers/XmlService.groovy
This is my current test: https://github.com/ppazos/cabolabs-ehrserver/blob/master/test/integration/com/cabolabs/ehrserver/parsers/XmlServiceIntegrationSpec.groovy
Thanks!
If you can modify your code to rely on the framework, then you don't need to test the framework.
Grails uses Spring and the Spring transaction manager rolls back automatically for RuntimeExceptions but not for checked exceptions. If you throw a RuntimeException for your various validation scenarios then Spring will handle it for you. If you rely on those facts, your test could then stop at verifying that a RuntimeException is thrown.
If you want to use a checked exception (such as javax.xml.bind.ValidationException), you can use an annotation to ensure it triggers a rollback:
@Transactional(rollbackFor = ValidationException.class)
def processCommit( ....
and then your test need only check
def ex = thrown(ValidationException)
See http://docs.spring.io/spring/docs/current/spring-framework-reference/html/transaction.html#transaction-declarative-attransactional-settings for more information.
For context, this question arose because we are migrating from Rails 5 to Rails 6 and introducing reader/writer database connections via the new replication features.
Our specific problem is with request specs, with an eye towards using transactional fixtures. When we run our request spec files in isolation, they pass. When they run as part of a multi-file pass (such as a full bundle exec parallel_rspec pass used on Circle CI), they fail. If we turn off transactional fixtures, the tests take far too long to run, but they pass.
Using byebug, we've poked in and determined that the problem is that our test data has been written to, and is accessible by, the writer DB connection, but the route is attempting to use the reader DB connection to read it. I.e., ActiveRecord::Base.connected_to(role: :reading) { puts Foo.count } prints 0, while the same code connected to the writing role prints a non-zero count.
The problem from there seems fairly obvious: because we're using transactional tests/fixtures, the data is never committed to the DB. It's only visible on the connection it was written on. The request spec is reading from the "right" DB for the call (a GET request should use the reader DB), but in the context of tests that produces errors.
It seems like this is a fairly obvious use case that either Rails or RSpec should have a tool for handling; we just can't seem to find the relevant documentation.
You need to tell the test environment to use a single connection for both roles. There are multiple ways of doing this:
You can configure your test environment not to use replicas at all. See Setting up your application for examples of using and not using a replica, then reproduce the non-replica version in your database.yml for the test environment only.
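For instance, a database.yml sketch along those lines, where development defines a replica but test does not (the database names are hypothetical, and your connects_to mapping must also resolve both roles to the single test config for this to take effect):
development:
  primary:
    adapter: postgresql
    database: myapp_development
  primary_replica:
    adapter: postgresql
    database: myapp_development
    replica: true

test:
  # single connection: no replica entry, so reads and writes share it
  adapter: postgresql
  database: myapp_test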
You can use connected_to within your specs themselves so that those tests are forced to use the specific connection you want them to use. One way to do this is with around hooks:
describe "around filter" do
around(:each) do |example|
puts "around each before"
ActiveRecord::Base.connected_to(role: :writing) { example.run }
puts "around each after"
end
it "gets run in order" do
puts "in the example"
end
end
You can monkey patch your ActiveRecord configuration in rails_helper so that it doesn't use replicas (but I'd really recommend #1 over this option)
I need to add a test suite that asserts our Rails database.yml configuration. I have configured the DB statement timeout to 2500ms, and I need a way to assert that the configuration is working through a test case.
There are two ways to test this:
Verify that database.yml has the statement timeout configured to 2500ms, without actually checking whether the configuration works.
Issue a SQL statement that takes more than 2500ms and assert that an exception is raised.
ActiveRecord::Base.connection.execute(<<~SQL)
select pg_sleep(86400);
SQL
This code raises the exception, but it still runs for 2500ms before doing so, so I need a way to assert this without waiting the full 2500ms.
By definition, there is no way to test that a timeout is working without waiting for that timeout.
What you could do is set a much smaller timeout value specifically for your test environment (say, 500ms) and assert that the exception is raised when you sleep for 600ms.
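For example, a minimal RSpec sketch of that approach, assuming PostgreSQL and a test-environment statement_timeout of 500ms (the spec name and values here are illustrative):
require "rails_helper"

RSpec.describe "database statement_timeout" do
  it "cancels statements that run longer than the configured timeout" do
    expect {
      # pg_sleep takes seconds; 0.6s exceeds the assumed 500ms timeout
      ActiveRecord::Base.connection.execute("SELECT pg_sleep(0.6);")
    }.to raise_error(ActiveRecord::StatementInvalid, /statement timeout/i)
  end
end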
Background: I am unit testing a game server built on Rails 4.1.1 with a separate socket.io/node.js server for socket messaging. Messages from node.js to Rails go through RESTful HTTP requests.
A single test case runs as follows:
(1) rake unit test --> (2) rails controller --> (3) node.js/socket.io --> (4) rails controller
Problem description: Some DB entries are created with ActiveRecord at step (2); then, upon receiving a socket message at step (3), node.js sends an HTTP request back to the Rails controller; and finally(!!) at step (4) the Rails controller tries to access the DB entries from step (2), but the test DB contents are empty at this point.
Question: Cleaning up the test DB seems to be rake's intended behavior, but how can I persist test DB data across test cases and prevent this problem?
Thanks in advance
You should prepare and send the request to the node app inside the test and assert the response there.
But that's not good practice. The better solution would be HTTP mocks (like the webmock gem). This approach will save lots of time in the future.
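For illustration, a rough webmock sketch (the URL and JSON body here are hypothetical placeholders for whatever your node app actually exchanges):
require "webmock/rspec"

# Inside a test: stub the HTTP call so nothing depends on a real
# socket.io/node.js server being up.
stub_request(:post, "http://localhost:3001/socket/notify")
  .to_return(status: 200, body: '{"ok":true}',
             headers: { "Content-Type" => "application/json" })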
Luckily, I figured out the solution.
By default, rake wraps each test in a separate DB transaction and rolls it back on cleanup. Moreover, any requests/queries coming from outside the TestCase are not part of that transaction, and their data is not visible inside the test case.
To avoid such behavior, we have to disable transactional fixtures in test/test_helper.rb
class ActiveSupport::TestCase
  self.use_transactional_fixtures = false
end
As a downside, we have to clean up the test DB manually. So, as @Alexander Shlenchack points out, it's best to avoid this practice in the first place and use HTTP/socket mocks in the future.
Here is a brief summary: http://devblog.avdi.org/2012/08/31/configuring-database_cleaner-with-rails-rspec-capybara-and-selenium/
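A condensed version of the configuration from that post (a sketch, assuming RSpec and the database_cleaner gem):
RSpec.configure do |config|
  # Disable wrapping each example in a transaction...
  config.use_transactional_fixtures = false

  # ...and truncate tables between examples instead.
  config.before(:suite) { DatabaseCleaner.strategy = :truncation }
  config.before(:each)  { DatabaseCleaner.start }
  config.after(:each)   { DatabaseCleaner.clean }
end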
And a related question: Rails minitest, database cleaner how to turn use_transactional_fixtures = false
I am trying to get my geb-spock functional tests to run in a specified order because SpecA will create data required for SpecB during its run.
This question is about running the specifications in order, not the individual test methods within the specification.
I have tried changing the specification names to indicate execution order, but that didn't work. I found a solution where a test suite was used and the tests were added to the suite in order, but I can't find how to make a test suite work in Grails.
Explicitly specifying them, as in grails test-app functional: SpecA SpecB, is not a long-term option, as more specs will be added.
To run tasks sequentially, or in whatever order you want, I do the following in my build.gradle file:
def modules = ["X", "Y", "Z", "ZZ"]
if (modules.size() > 1) {
    for (j in 1..modules.size() - 1) {
        // make each task run after the previous one in the list
        tasks[modules[j]].mustRunAfter modules[j - 1]
    }
}
Hope that helps. Cheers!
Not really an answer to your question, but some general advice: don't do this. Introducing data-setup dependencies between test classes will make your suite brittle in the long run. Reasoning about what the state is at a given point will get harder and harder as the number of tests grows, and the global state grows with it. Later on, changing a test or introducing a new one might break many tests downstream. This is just asking for trouble.
Ideally, you want to set up the data needed by a test immediately before that test and tear it down afterwards. The Grails Remote Control plugin and test data fixture builders are your friends here.
You should define your initialization code in a single place. If it's shared between both specs, it may be a good idea to create a superclass with methods you can call in each spec's setup methods, or a whole class devoted to declaring reusable test setup methods.
In any case, the purpose of a unit test is only to test a single piece of functionality, and it shouldn't be responsible for setting up other tests as well.
I am trying to test an app that uses the devise_token_auth gem, which basically adds a couple of extra DB reads/writes on almost every request (to verify and update user access tokens).
Everything is working fine, except when testing a controller action that includes several additional DB reads/writes. In these cases, the terminal locks up and I'm forced to kill the Ruby process via Activity Monitor.
Sometimes I get error messages like this:
ruby /Users/evan/.rvm/gems/ruby-2.1.1/bin/rspec spec/controllers/api/v1/messages_controller_spec.rb(1245,0x7fff792bf310) malloc: *** error for object 0x7ff15fb73c00: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Abort trap: 6
I have no idea how to interpret that. I'm 90% sure the problem is due to this gem and the extra DB activity it causes on each request, because when I revert to my previous, less intensive auth, all the issues go away. I've also gotten things under control by giving Postgres some extra time on the offending tests:
after :each do
  sleep 2
end
This works fine for all cases except one, which requires a timeout before the expect, otherwise it throws this error:
Failure/Error: expect(#user1.received_messages.first.read?).to eq true
ActiveRecord::StatementInvalid:
PG::UnableToSend: another command is already in progress
: SELECT "messages".* FROM "messages" WHERE "messages"."receiver_id" = $1 ORDER BY "messages"."id" ASC LIMIT 1
which, to me, points to the DB issue again.
Is there anything else I could be doing to track down/control these errors? Any rspec settings I should look into?
If you are running parallel rspec tasks, that could be triggering this. When we've run into issues like this, we have forced those tests to run in a single, non-parallel instance of rspec in our CI using tags.
Try something like this:
context 'when both records get updated in one job', :non_parallel do
  it { is_expected.to eq 2 }
end
And then invoke rspec separately on the non_parallel tag:
rspec --tag non_parallel
The bulk of your tests (not tagged with non_parallel) can still be run in parallel in your CI solution (e.g. Jenkins) for performance.
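For the parallel pass itself, rspec's tag negation can exclude those examples (assuming your parallel runner forwards standard rspec options):
rspec --tag ~non_parallel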
Of course, be careful applying this band-aid. It is always better to identify what is not race-safe in your code, since that race could happen in the real world.