I did not add @RunWith(AndroidJUnit4.class) in my instrumentation tests - android-testing

The Android testing documentation and a lot of other tutorials keep saying that I should include the line @RunWith(AndroidJUnit4.class) in all of my instrumentation tests. But I did not add this annotation to any of my instrumentation tests, and all of them still run fine. So,
What is happening, and why add @RunWith(AndroidJUnit4.class) above instrumentation test classes?

Related

Find all undefined variables in a Ruby file?

I am coming across a lot of production issues related to undefined variables. How can I find all undefined variables in a Ruby file? Is there a gem or script that can scan through code files and detect possible issues? That way I could test code before production deployment or before accepting pull requests.
This can't really be done. There are too many ways a variable could be defined, and just as many ways a variable that should be defined could end up set to nil for whatever reason.
You can run Ruby with warnings enabled (-w on the command-line), which will complain vigorously about these things. It will only complain about variables along the execution path of the code, so if there are sections you don't exercise, you won't get warnings.
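For example (a tiny sketch, assuming a Ruby of this era where -w still warns about uninitialized instance variables; the class and variable names are made up):

# example.rb
class Greeter
  def greet
    # @name is never assigned; `ruby -w example.rb` warns here:
    #   warning: instance variable @name not initialized
    "Hello, #{@name}"
  end
end

Greeter.new.greet   # the warning only fires because this path actually executes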
This is why having a test suite that exhaustively tests branches is essential to shaking out bugs like this. If you have an if in your code, you need two tests for it. If you have two if clauses, you may need three or four. Anything with a case needs every branch tested. This can be a lot of work for a project with a lot of business logic in it.
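A minimal sketch of that point (all names here are invented): the first test below passes while silently skipping the buggy branch, and only the second one trips the NameError.

require "test/unit"

def price_for(subtotal)
  if subtotal > 100
    subtotal - discount   # bug: `discount` is never defined anywhere
  else
    subtotal
  end
end

class PriceTest < Test::Unit::TestCase
  def test_small_order     # exercises only the else branch; passes
    assert_equal 50, price_for(50)
  end

  def test_large_order     # exercises the if branch and exposes the NameError
    assert_equal 90, price_for(110)
  end
end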
Since Ruby isn't compiled per se, it's not really able to detect these sorts of issues before the code is run. What's in your file and what actually gets executed can be worlds apart, depending on the impact of other parts of the code. This is not true in more conservative languages like C or Rust.
Test::Unit is built into Rails, so write some unit tests, then run them with rake test.
Try the rspec and simplecov gems.
rspec lets you write unit tests that will surface undefined variables when the code under test is exercised.
simplecov measures code coverage, so you can make sure your tests actually exercise all of your code paths.
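A minimal simplecov setup might look like this (the entry-point path is a placeholder); the key detail is that SimpleCov.start has to run before the application code is required:

# spec/spec_helper.rb
require "simplecov"
SimpleCov.start   # must come before any application code is loaded

require_relative "../lib/my_app"   # hypothetical application entry point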

Rspec: testing Mac OS app integration

I have a Rails app that interacts with some Mac software and I need to write some tests for it. How on earth do I do that? Where do I even start?
The Rails app connects to the Mac app through AppleScript and Terminal. Any ideas?
Update
Found this gem to help with Shell expectations. Is that as good as I'm gonna get?
https://github.com/matthijsgroen/rspec-shell-expectations
Testing external dependencies can certainly be a challenge.
Remember that the tests you write should test the behavior of the Rails app, not the external dependency. You should have integration tests to verify that the code actually works with the application, but they should be "smoke tests", not a full suite exercising every feature.
Write unit tests to verify the behavior of the code that relies on the dependency, and mock out the interactions. Typically with command line apps that means:
the app wrote to STDOUT
the app wrote to STDERR
the app read from STDIN
the app exited with a particular status code
The gem you mentioned is a good start, but you may find it worthwhile to roll your own helper code using Open3 from the Ruby standard library, which covers all of the items in the list above.
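As a sketch, a helper like this (the osascript call is just an assumed example of how a Rails app might shell out to the Mac side) hands a spec all four of those channels:

require "open3"

# Runs a shell command and returns everything a spec might assert on.
# Open3.capture3 collects stdout, stderr, and exit status in one call.
def run_command(*cmd, stdin_data: "")
  stdout, stderr, status = Open3.capture3(*cmd, stdin_data: stdin_data)
  { stdout: stdout, stderr: stderr, exit_status: status.exitstatus }
end

# e.g. in a spec:
#   result = run_command("osascript", "-e", 'tell application "Finder" to get name')
#   expect(result[:exit_status]).to eq(0)
#   expect(result[:stdout]).to include("Finder")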
Use a tag on the specs that need the Mac application, to make it easy to filter those specs out when necessary.
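With RSpec metadata that could look like this (the tag name is arbitrary):

# Tag the specs that talk to the real Mac app:
describe "AppleScript integration", mac_app: true do
  # ...
end

# Then skip them by default:
#   rspec --tag ~mac_app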
You may already be familiar with vcr for mocking out HTTP interactions; its "playback" feature is a good source of inspiration.

How do I set up NCrunch to run nspec tests

I'm struggling to set up NCrunch to run my nspec tests automatically. The NCrunch forums say this functionality hasn't been implemented yet, but then MattFlo says he prefers using NCrunch, so I'm pretty sure it can be made to work. Help would be greatly appreciated!
We are working on getting NCrunch fully supported.
For now you can use the DebuggerShim (a .cs file included with NSpec) as a shim to run NSpec via NCrunch. The DebuggerShim is pretty much an NUnit test that runs NSpec tests.
You may also want to take a look at specwatchr. Matt likes to use NCrunch, but I find it's too eager to run my tests: I have to consciously STOP typing to give NCrunch a chance to run them. I'd rather just hit save and have a background process run my tests for me (i.e. specwatchr). Hope that helps.
Amir (Hacker on NSpec)
NUnit extension for NSpec: https://github.com/ddaysoftware/NSpec4NUnit

What testing tools and methods did Corey Haines use at GoGaRuCo 2011?

In this video from GoGaRuCo 2011, Corey Haines shows some techniques for making Rails test suites much faster. I would summarize it as follows:
Put as much of your code as possible outside the Rails app, into other modules and classes
Test those separately, without the overhead of loading up Rails
Use them from within your Rails app
There were a couple of things I didn't understand, though.
He alternates between running tests with rspec and spn or spna (for example, at about 3:50). Is spn a commonly known tool?
In his tests for non-Rails classes and modules, he includes the module or class being tested, but I don't see him including anything like spec_helper. How does he have RSpec available?
Sorry about the confusion. spn and spna are aliases I have that add my non-Rails code to rspec's load path. There isn't anything special about them, other than adding a -I path_to_code on the command line.
These days, I add something like this to my .rspec file:
-I app/mercury_app
Then I can do a simple require 'object_name' at the top of my specs.
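So a spec can be as simple as this sketch (ObjectName stands in for whatever class lives under app/mercury_app):

# spec/object_name_spec.rb
require 'object_name'

describe ObjectName do
  it "does its work without loading Rails" do
    ObjectName.new.should_not be_nil
  end
end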
As for not including spec_helper: that is true, I don't. When you execute your spec file with rspec <path_to_spec_file>, the rspec runner loads it for you, so you don't need to require rspec explicitly.
For my db specs these days, I also have built an active_record_spec_helper which requires active_record, establishes a connection to the test database, and sets up database_cleaner; this allows me to simply require my model at the top of my spec file. This way, I can test the AR code against the db without having to load up my whole app.
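Roughly, such a helper looks something like this (the adapter and database settings are placeholders):

# spec/active_record_spec_helper.rb
require 'active_record'
require 'database_cleaner'

ActiveRecord::Base.establish_connection(
  adapter:  'sqlite3',          # assumed; use your test database's settings
  database: 'db/test.sqlite3'
)

DatabaseCleaner.strategy = :transaction

RSpec.configure do |config|
  config.before(:each) { DatabaseCleaner.start }
  config.after(:each)  { DatabaseCleaner.clean }
end

# A model spec can then get by with just:
#   require 'active_record_spec_helper'
#   require 'my_model'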
A client I am working at where we are using these techniques is interested in supporting some blog posts about this, so hopefully they will start coming out towards the middle of June.

How to unstub Mocha mock?

I have the following mocha mock that works great.
In a test.rb file:
setup do
  Date.stubs(:today).returns(Date.new(2011, 7, 19))
  Time.stubs(:now).returns(Time.new(2011, 1, 1, 9, 0))
end
The problem is that the timing is broken for the tests: after the tests run, the Date and Time objects are still mocked(!)
Finished in -21949774.01594216 seconds.
I added the following:
teardown do
  Date.unstubs(:today)
  Time.unstubs(:now)
end
This throws the following error for each test: WARNING: there is already a transaction in progress
Is this the proper way to unstub? Is it better to unstub at the end of the test file or even at the end of unit test suite?
Working in Rails 3.0.7 and Mocha 0.9.12.
Thanks.
I don't know if this fully explains your problem, but the method is unstub, not pluralized.
Other than that, there should be no issue. You definitely want to unstub after each test (or set of tests, if a group of tests needs the stubbing), because once stubbed, it stays stubbed, and that can screw up other tests.
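With that fix, the teardown from the question becomes:

teardown do
  Date.unstub(:today)   # singular: Mocha's method is unstub, not unstubs
  Time.unstub(:now)
end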
The accepted answer is spreading misinformation and should be considered harmful.
One of the main purposes of a mocking library like Mocha is to provide automatic mock/stub teardown as part of its integration with the various testing libraries. In fact, if you look at the GitHub repo for Mocha, you will see that significant maintenance effort goes into making Mocha work smoothly with many versions of several different testing frameworks.
If this isn't working properly, then you need to figure out why Mocha's built-in teardown isn't working. Unstubbing manually in your own teardown just papers over the problem, and could hide subtler issues with stub leakage or Mocha otherwise misbehaving.
If I had to take a wild guess, my money would be on the stub somehow being set up outside of an actual test, because that's the most common cause I've seen for this kind of thing in the wild; but there isn't enough information in the question to say for sure.
