Minitest - A test suite with method-level granularity - ruby-on-rails

After an upgrade, I'm finding the same several test methods failing, so I'd like to automate testing just those instead of all methods in all classes. I want to list each class-method pair (e.g. TestBlogPosts.test_publish, TestUsers.test_signup) and have them run together as a test suite. Either in a file or on the command-line, I don't really care.
I'm aware of these techniques to run several entire classes, but I'm looking for finer granularity here. (Similar to what -n /pattern/ does on the command-line - to run a subset of test methods - but across multiple classes.)

You could skip requiring minitest/autorun and call Minitest.run with your own test selection.
An example:
gem 'minitest'
require 'minitest'
#~ require 'minitest/autorun'  # No!
# Define the test cases.
# The puts statements log which test methods actually get executed.
class MyTest1 < MiniTest::Test
  def test_add
    puts "call %s.%s" % [self.class, __method__]
    assert_equal(2, 1+1)
  end
  def test_subtract
    puts "call %s.%s" % [self.class, __method__]
    assert_equal(0, 1-1)
  end
end
class MyTest2 < MiniTest::Test
  def test_add
    puts "call %s.%s" % [self.class, __method__]
    assert_equal(2, 1+1)
  end
  def test_subtract
    puts "call %s.%s" % [self.class, __method__]
    assert_equal(1, 1-1) # will fail
  end
end
# Run two specific test methods from two different classes as one suite.
Minitest.run(%w{-n /MyTest1.test_subtract|MyTest2.test_add/}) # select two specific test methods
The result:
Run options: -n "/MyTest1.test_subtract|MyTest2.test_add/" --seed 57971
# Running:
call MyTest2.test_add
.call MyTest1.test_subtract
.
Finished in 0.002313s, 864.6753 runs/s, 864.6753 assertions/s.
2 runs, 2 assertions, 0 failures, 0 errors, 0 skips
When you call the following instead:
Minitest.run(%w{-n /MyTest1.test_subtract/}) # select one specific test method
puts '=================='
Minitest.run(%w{-n /MyTest2.test_add/}) # select one specific test method
then you get
Run options: -n /MyTest1.test_subtract/ --seed 18834
# Running:
call MyTest1.test_subtract
.
Finished in 0.001959s, 510.4812 runs/s, 510.4812 assertions/s.
1 runs, 1 assertions, 0 failures, 0 errors, 0 skips
==================
Run options: -n /MyTest2.test_add/ --seed 52720
# Running:
call MyTest2.test_add
.
Finished in 0.000886s, 1128.0825 runs/s, 1128.0825 assertions/s.
1 runs, 1 assertions, 0 failures, 0 errors, 0 skips
Minitest.run takes the same parameters you would use on the command line, so you can pass the -n option with your selection, e.g. /MyTest1.test_subtract|MyTest2.test_add/.
You could define different rake tasks or methods, each with its own Minitest.run call, to define your test suites (see the sketch below).
Attention:
No test file you load may contain a require 'minitest/autorun'.
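For example, a minimal sketch of such a suite file, assuming your tests live under test/ and using the class/method names from the question as placeholders:
# run_upgrade_failures.rb -- a hand-picked suite; file name and pattern are placeholders.
gem 'minitest'
require 'minitest'   # note: no 'minitest/autorun'

# Load every test class so Minitest knows about them.
Dir.glob('test/**/*_test.rb') { |file| require File.expand_path(file) }

# Minitest matches the filter against "ClassName#method_name"; the dot in the
# pattern also matches the '#' separator, which is why this notation works
# (as the output above shows).
exit Minitest.run(%w{-n /TestBlogPosts.test_publish|TestUsers.test_signup/})
Run it with ruby run_upgrade_failures.rb; Minitest.run returns true or false, so the process exits non-zero if any of the selected tests fail.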

Related

In Jenkins job, behave tests stops after any failure

I have created a Jenkins "freestyle" job in which I am trying to run multiple BDD test processes. Following are the commands I have put in the Jenkins/Build/Execute shell section:
cd ~/FEXT_BETA_BDD
rm -rf allure_reports allure-reports allure-results
pip install behave
pip install selenium
pip install -r features/requirements.txt
# execute features in plan section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/plan/*.feature
# execute features in blueprint section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/blueprint/*.feature
What I have found is that in Jenkins, if there is any intermittent test case failure, a message like this is shown in the Console Output:
"
...
0 features passed, 1 failed, 0 skipped
0 scenarios passed, 1 failed, 0 skipped
3 steps passed, 1 failed, 1 skipped, 0 undefined
Took 2m48.770s
Build step 'Execute shell' marked build as failure
"
And the leftover test cases are skipped. But if I run the behave command on my local host directly, I don't get this behaviour: the failure is detected and the remaining test cases continue until all are finished.
So how can I work around this issue in Jenkins?
Thanks,
Jack
You may try the following syntax:
set +e
# execute features in plan section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/plan/*.feature || echo 'ALERT: Build failed while running the plan section'
# execute features in blueprint section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/blueprint/*.feature || echo 'ALERT: Build failed while running the blueprint section'
# Restoring original configuration
set -e
Note:
The goal of set -e is to make the shell abort any time a command fails. If you look at your log output, you will notice sh -xe at the start of execution, which confirms that the Execute Shell step in Jenkins uses the -e option. To disable it, use set +e instead. However, it's good to restore it once your purpose is fulfilled so that subsequent commands produce the expected result.
Ref: https://superuser.com/questions/1113014/what-would-set-e-and-set-x-commands-do-in-the-context-of-a-shell-script
The console output from the SummaryReporter above indicates that you have only one feature with one scenario (which fails). Behave does not simply stop when the first scenario fails.
An early abort of the test run can only occur if critical things happen:
A failure/exception in the before_all() hook occurs
A critical exception is raised (SystemExit, KeyboardInterrupt) to end the test run
Your implementation tells behave to abort the test run (this makes sense on critical failures, when all other tests would also fail; why waste the time)
BUT: if the test run is aborted early, all the features/scenarios that have not been executed yet are reported as untested counts in the SummaryReporter.
...
0 features passed, 1 failed, 0 skipped, 2 untested
0 scenarios passed, 1 failed, 0 skipped, 3 untested
0 steps passed, 1 failed, 0 skipped, 0 undefined, 6 untested
HINT: Untested counts are normally hidden; they are only shown if the counter is greater than zero.
This is not the case in your description.
SEE ALSO:
behave: features/runner.abort_by_user.feature

How to mail the exact file content through Jenkins?

I want to mail the file below through Jenkins (including newlines).
$ cat summary.txt
---| My First test |---
Total Tests: 1
Total Passes: 0
Total Errors: 0
Total Failures: 1
Total Skipped tests: 0
Jenkins job configuration:
Inject environment variables:
Properties File Path
/path/summary.txt
Editable Email Notification:
Default content:
$DEFAULT_CONTENT
${FILE, path="summary.txt"}
Received Mail:
First - Build # 111 - Still Failing: Check console output at http://1.1.1.1:8080/job/First/11/ to view the results. ---| My First test |--- Total Tests: 1 Total Passes: Total Errors: 0 Total Failures: 1 Total Skipped tests:
Expected Mail:
First - Build # 111 - Still Failing: Check console output at http://1.1.1.1:8080/job/First/111/ to view the results.
---| My First test |---
Total Tests: 1
Total Passes: 0
Total Errors: 0
Total Failures: 1
Total Skipped tests: 0
If your build log is too long, putting all the content in the body might not be a good idea and can get cumbersome. So instead of that, add the file as an attachment to the email; that way problems with long content, formatting, etc. go away. You can do that in the job's email notification configuration (e.g. by attaching the build log instead of inlining it). Hope this helps.

Cucumber and Rails - printing scenarios as they pass

I'm using Cucumber to test my Rails app. Is there any way to print the scenario description as the tests run? Thanks!
Sample run:
laptop:rails_proj mark$ rake cucumber
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -S bundle exec cucumber --profile default
Using the default profile...
........................................................................................
20 scenarios (20 passed)
88 steps (88 passed)
0m0.593s
Loaded suite /usr/bin/rake
Started
Finished in 0.000174 seconds.
0 tests, 0 assertions, 0 failures, 0 errors
I think it's the --profile option. I have mine set to "dev_report".
See here:
https://github.com/cucumber/cucumber/wiki/cucumber.yml
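For reference, a minimal cucumber.yml sketch along those lines (the dev_report profile name comes from this answer; the exact formatter choice is an assumption):
# cucumber.yml -- each profile is just a named bundle of command-line options
default: --format progress features
dev_report: --format pretty features
Running cucumber --profile dev_report (or pointing the rake task at that profile) uses the pretty formatter, which prints each feature, scenario and step as it runs instead of the progress dots shown above.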
Are you using the Cucumber Rails gem? It prints them by default for me...
Can you post a sample of one of your tests? The output shows "0 tests, 0 assertions, 0 failures, 0 errors", which sounds like they aren't doing anything.

Passing a bash script error to the console in rails

I have a shell script that runs some acceptance tests for an application I'm working on. The script runs the tests, checks if there were errors and then exits with either 0 (success) or 1 (failure).
I have a rake task that calls the shell script and then gets the result. The problem I'm having is: how do I pass that result back out, so that when I echo $? in the console it equals the value returned by the shell script?
My current code is as follows:
def acceptance_tests
  system("./run_tests.sh")
  error_code = $?.success? ? 0 : 1
  result = error_code == 0 ? 'passed' : 'failed'
  puts("The acceptance tests have #{result}.")
  SystemExit.new(error_code)
end
The tests pass / fail as expected when I run them, but after they are complete, I run echo $? and it's always equal to 0.
Any ideas about what I'm doing wrong?
SystemExit is an Exception, so raise it:
$ echo "raise SystemExit.new(5)" | ruby; echo $?
5
In the end, changing the SystemExit.new() to exit() worked for me.
def acceptance_tests
  system("./run_tests.sh")
  error_code = $?.success? ? 0 : 1
  result = error_code == 0 ? 'passed' : 'failed'
  puts("The acceptance tests have #{result}.")
  exit(error_code)
end
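If you also want the task to propagate the script's exact exit code (not just 0 or 1), you can read it from $? -- a small sketch, not from the answer above; the 127 fallback is an assumption for the case where the script could not be started at all:
def acceptance_tests
  ok = system("./run_tests.sh")              # true, false, or nil if the script couldn't run
  status = ok.nil? ? 127 : $?.exitstatus     # pass the script's own exit code through
  puts("The acceptance tests have #{ok ? 'passed' : 'failed'}.")
  exit(status)                               # `echo $?` in the shell now shows that code
end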

Capistrano & Bash: ignore command exit status

I'm using Capistrano to run a remote task. My task looks like this:
task :my_task do
  run "my_command"
end
My problem is that if my_command has an exit status != 0, then Capistrano considers it failed and exits. How can I make Capistrano keep going when the exit status is not 0? I've changed my_command to my_command;echo and it works, but it feels like a hack.
The simplest way is to just append true to the end of your command.
task :my_task do
  run "my_command"
end
Becomes
task :my_task do
  run "my_command; true"
end
For Capistrano 3, you can (as suggested here) use the following:
execute "some_command.sh", raise_on_non_zero_exit: false
The grep command exits non-zero based on what it finds. In the use case where you care about the output but don't mind if it's empty, you can discard the exit status silently:
run %Q{bash -c 'grep #{escaped_grep_command_args} ; true' }
Normally, I think the first solution is just fine -- I'd make it document itself, though:
cmd = "my_command with_args escaped_correctly"
run %Q{bash -c '#{cmd} || echo "Failed: [#{cmd}] -- ignoring."'}
You'll need to patch the Capistrano code if you want it to do different things with the exit codes; it's hard-coded to raise an exception if the exit status is not zero.
Here's the relevant portion of lib/capistrano/command.rb. The line that starts with if (failed... is the important one. Basically it says if there are any nonzero return values, raise an error.
# Processes the command in parallel on all specified hosts. If the command
# fails (non-zero return code) on any of the hosts, this will raise a
# Capistrano::CommandError.
def process!
  loop do
    break unless process_iteration { @channels.any? { |ch| !ch[:closed] } }
  end
  logger.trace "command finished" if logger
  if (failed = @channels.select { |ch| ch[:status] != 0 }).any?
    commands = failed.inject({}) { |map, ch| (map[ch[:command]] ||= []) << ch[:server]; map }
    message = commands.map { |command, list| "#{command.inspect} on #{list.join(',')}" }.join("; ")
    error = CommandError.new("failed: #{message}")
    error.hosts = commands.values.flatten
    raise error
  end
  self
end
I find this the easiest option:
run "my_command || :"
Notice: : is the shell's no-op command, so the exit code is simply ignored.
I just redirect STDERR and STDOUT to /dev/null, so your
run "my_command"
becomes
run "my_command > /dev/null 2> /dev/null"
This works pretty well for standard Unix tools, where, say, cp or ln could fail but you don't want to halt deployment on such a failure.
I'm not sure in which version this was added, but I like handling this problem by using raise_on_non_zero_exit:
namespace :invoke do
  task :cleanup_workspace do
    on release_roles(:app), in: :parallel do
      execute 'sudo /etc/cron.daily/cleanup_workspace', raise_on_non_zero_exit: false
    end
  end
end
Here is where that feature is implemented in the gem.
https://github.com/capistrano/sshkit/blob/4cfddde6a643520986ed0f66f21d1357e0cd458b/lib/sshkit/command.rb#L94
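A related Capistrano 3 idiom, offered as a suggestion rather than something from the answers above: the SSHKit test helper runs a command without raising on failure and returns true or false, which is useful when you want to branch on the result instead of just ignoring it. A minimal sketch (paths and task names are placeholders):
namespace :invoke do
  task :cleanup_workspace_if_present do
    on release_roles(:app) do
      # `test` never raises; it only reports whether the command exited with 0.
      if test('[ -d /var/tmp/workspace ]')
        execute 'rm -rf /var/tmp/workspace', raise_on_non_zero_exit: false
      else
        info 'workspace already gone, nothing to do'
      end
    end
  end
end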
