I'm using Cucumber to test my Rails app. Is there any way to print the scenario descriptions as the tests run? Thanks!
Sample run:
laptop:rails_proj mark$ rake cucumber
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -S bundle exec cucumber --profile default
Using the default profile...
........................................................................................
20 scenarios (20 passed)
88 steps (88 passed)
0m0.593s
Loaded suite /usr/bin/rake
Started
Finished in 0.000174 seconds.
0 tests, 0 assertions, 0 failures, 0 errors
I think it's the --profile option. I have mine set to "dev_report".
See here:
https://github.com/cucumber/cucumber/wiki/cucumber.yml
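For example, a cucumber.yml along these lines (a sketch: the dev_report profile name comes from this answer, the rest is illustrative) prints scenario and step names via the pretty formatter:
default: --format progress features
dev_report: --format pretty features
Then run cucumber --profile dev_report, or point the default profile at --format pretty.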
Are you using the Cucumber Rails gem? It prints them by default for me...
Can you post a sample of one of your tests? The output shows "0 tests, 0 assertions, 0 failures, 0 errors", which sounds like they aren't doing anything.
We use CppUTest to run unit tests.
This is driven by CMake/Ninja: after building the tests, we execute them with ninja test.
An example output of this is:
1/3 Test #1: Test1................................................... Passed 0.03 sec
Start 2: Test2
2/3 Test #2: Test2......................................................... Passed 0.00 sec
Start 3: Test3
3/3 Test #3: Test3..............................................................***Exception: SegFault 0.00 sec
66% tests passed, 1 tests failed out of 3
Total Test time (real) = 0.26 sec
The following tests FAILED:
3 - Test3 (SEGFAULT)
Errors while running CTest
FAILED: CMakeFiles/test.util
This is fine if I trigger the build locally on my machine and analyze it manually. What I am looking for now is an existing solution to help Jenkins analyze the output.
Right now, Jenkins executes the build and exits "successfully", because the command itself (ninja test) executed successfully, even though not all of the tests passed.
Maybe you already found this, but you can create JUnit output with CppUTest via the -ojunit flag. Jenkins should then be able to import the results from these files.
CppUTest Commandline Switches
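For example (a sketch, assuming the test executable's main uses CppUTest's CommandLineTestRunner, which understands these switches):
# run the test binary with JUnit output enabled; CppUTest writes
# one cpputest_<GroupName>.xml report per test group
./Test1 -ojunit
# then point Jenkins' "Publish JUnit test result report" step at the
# reports, e.g. with the file pattern **/cpputest_*.xml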
I have created a Jenkins "freestyle" job in which I am trying to run multiple BDD test processes. Following are the commands I have put in the Jenkins / Build / Execute shell section:
cd ~/FEXT_BETA_BDD
rm -rf allure_reports allure-reports allure-results
pip install behave
pip install selenium
pip install -r features/requirements.txt
# execute features in plan section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/plan/*.feature
# execute features in blueprint section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/blueprint/*.feature
What I have found is that in Jenkins, if there is an intermittent test case failure, a message like this is shown in the Console Output:
"
...
0 features passed, 1 failed, 0 skipped
0 scenarios passed, 1 failed, 0 skipped
3 steps passed, 1 failed, 1 skipped, 0 undefined
Took 2m48.770s
Build step 'Execute shell' marked build as failure
"
And the leftover test cases are skipped. But if I run the behave command directly on my local host, I don't get this behaviour: the failure is detected and the remaining test cases continue until all are finished.
So how can I work around this issue in Jenkins?
Thanks,
Jack
You may try the following syntax:
set +e
# execute features in plan section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/plan/*.feature || echo 'ALERT: Build failed while running the plan section'
# execute features in blueprint section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/blueprint/*.feature || echo 'ALERT: Build failed while running the blueprint section'
# Restoring original configuration
set -e
Note:
The goal of set -e is to cause the shell to abort any time an error occurs. If you look at your log output, you will notice sh -xe at the start of execution, which confirms that Execute Shell in Jenkins uses the -e option. So, to disable it, you can use set +e instead. However, it's good to restore it once your purpose is fulfilled, so that subsequent commands produce the expected result.
Ref: https://superuser.com/questions/1113014/what-would-set-e-and-set-x-commands-do-in-the-context-of-a-shell-script
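Note that with || echo alone the build is never marked as failed. If you still want Jenkins to fail the build once both suites have run, a variant of the same commands (a sketch) records each exit status and fails at the end:
set +e
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/plan/*.feature; plan_rc=$?
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/blueprint/*.feature; blueprint_rc=$?
set -e
# fail the build only after both sections have run
exit $(( plan_rc || blueprint_rc ))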
The console output from the SummaryReporter above indicates that you have only one feature with one scenario (which fails). behave has no mechanism that stops the run when the first scenario fails.
An early abort of the test run can only occur when critical things happen:
A failure/exception in the before_all() hook occurs
A critical exception is raised (SystemExit, KeyboardInterrupt) to end the test run
Your implementation tells behave to abort the test run (which makes sense on critical failures, when all other tests will also fail; why waste the time)
BUT: If the test run is aborted early, all the features/scenarios that are not executed yet are reported as untested counts in the SummaryReporter.
...
0 features passed, 1 failed, 0 skipped, 2 untested
0 scenarios passed, 1 failed, 0 skipped, 3 untested
0 steps passed, 1 failed, 0 skipped, 0 undefined, 6 untested
HINT: Untested counts are normally hidden. They are only shown if the counter is not zero (greater than zero).
This is not the case in your description.
SEE ALSO:
behave: features/runner.abort_by_user.feature
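For illustration, a minimal features/environment.py sketch of the first case (before_all is behave's standard hook; the userdata flag here is made up for the demo):
# features/environment.py
def before_all(context):
    # hypothetical critical setup check: any exception raised here
    # aborts the whole test run before any feature executes
    if context.config.userdata.get("abort_demo") == "yes":
        raise RuntimeError("critical setup failure -- aborting test run")
Running behave -D abort_demo=yes then aborts immediately, and the features that never ran show up in the untested counts described above.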
After an upgrade, I'm finding the same several test methods failing, so I'd like to automate testing just those instead of all methods in all classes. I want to list each class-method pair (e.g. TestBlogPosts.test_publish, TestUsers.test_signup) and have them run together as a test suite. Either in a file or on the command-line, I don't really care.
I'm aware of these techniques to run several entire classes, but I'm looking for finer granularity here. (Similar to what -n /pattern/ does on the command-line - to run a subset of test methods - but across multiple classes.)
You could forgo minitest/autorun and call Minitest.run with a self-defined test selection.
An example:
gem 'minitest'
require 'minitest'
#~ require 'minitest/autorun' ##No!
# Define test cases.
# The `puts` statements log which tests are executed.
class MyTest1 < MiniTest::Test
def test_add
puts "call %s.%s" % [self.class, __method__]
assert_equal(2, 1+1)
end
def test_subtract
puts "call %s.%s" % [self.class, __method__]
assert_equal(0, 1-1)
end
end
class MyTest2 < MiniTest::Test
def test_add
puts "call %s.%s" % [self.class, __method__]
assert_equal(2, 1+1)
end
def test_subtract
puts "call %s.%s" % [self.class, __method__]
assert_equal(1, 1-1) #will fail
end
end
# Run the two selected test methods in one test run.
Minitest.run(%w{-n /MyTest1.test_subtract|MyTest2.test_add/}) # select two specific test methods
The result:
Run options: -n "/MyTest1.test_subtract|MyTest2.test_add/" --seed 57971
# Running:
call MyTest2.test_add
.call MyTest1.test_subtract
.
Finished in 0.002313s, 864.6753 runs/s, 864.6753 assertions/s.
2 runs, 2 assertions, 0 failures, 0 errors, 0 skips
When you call the following instead:
Minitest.run(%w{-n /MyTest1.test_subtract/}) # select one specific test method
puts '=================='
Minitest.run(%w{-n /MyTest2.test_add/}) # select one specific test method
then you get
Run options: -n /MyTest1.test_subtract/ --seed 18834
# Running:
call MyTest1.test_subtract
.
Finished in 0.001959s, 510.4812 runs/s, 510.4812 assertions/s.
1 runs, 1 assertions, 0 failures, 0 errors, 0 skips
==================
Run options: -n /MyTest2.test_add/ --seed 52720
# Running:
call MyTest2.test_add
.
Finished in 0.000886s, 1128.0825 runs/s, 1128.0825 assertions/s.
1 runs, 1 assertions, 0 failures, 0 errors, 0 skips
Minitest.run takes the same parameters you would use on the command line, so you can use the -n option with your selection, e.g. /MyTest1.test_subtract|MyTest2.test_add/.
You could define different tasks or methods, each with its own Minitest.run call, to define your test suites; see the sketch after the note below.
Attention:
No test file you load may contain a require 'minitest/autorun'.
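A minimal sketch of such suite definitions (the suite names and class/method pairs are illustrative; the test classes are assumed to be defined as above):
gem 'minitest'
require 'minitest'
# require your test files here, making sure none of them
# pulls in minitest/autorun

SUITES = {
  'smoke'      => %w[MyTest1.test_add MyTest2.test_add],
  'regression' => %w[MyTest1.test_subtract MyTest2.test_subtract],
}

def run_suite(name)
  # build the -n regex from the class.method pairs of the named suite
  pattern = SUITES.fetch(name).join('|')
  Minitest.run(['-n', "/#{pattern}/"])
end

run_suite(ARGV.first || 'smoke')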
versions
ruby 2.0.0p451 (2014-02-24 revision 45167) [x86_64-linux]
rails-3.2.18
rspec-2.14.1
parallel_tests-0.9.3
configs
.rspec
--color
--format documentation
--drb
--profile
.rspec_parallel
If installed as plugin: -I vendor/plugins/parallel_tests/lib
--color
--format documentation
--profile
When normal rspec succeeds, it does not output Test::Unit messages:
% RAILS_ENV=test bundle exec rspec
When parallel_tests rspec fails, it outputs Test::Unit messages:
% RAILS_ENV=test bundle exec rake parallel:create\[4\] db:migrate parallel:prepare\[4\]
% RAILS_ENV=test bundle exec rake parallel:spec\[4\]
error
invalid option: -O
Test::Unit automatic runner.
Usage: /var/lib/jenkins/jobs/anyone.develop.spec/workspace/vendor/bundle/ruby/2.0.0/bin/rspec [options] [-- untouched arguments]
-r, --runner=RUNNER Use the given RUNNER.
(c[onsole], e[macs], x[ml])
--collector=COLLECTOR Use the given COLLECTOR.
(de[scendant], di[r], l[oad], o[bject]_space)
-n, --name=NAME Runs tests matching NAME.
(patterns may be used).
--ignore-name=NAME Ignores tests matching NAME.
(patterns may be used).
-t, --testcase=TESTCASE Runs tests in TestCases matching TESTCASE.
(patterns may be used).
--ignore-testcase=TESTCASE Ignores tests in TestCases matching TESTCASE.
(patterns may be used).
--location=LOCATION Runs tests that defined in LOCATION.
LOCATION is one of PATH:LINE, PATH or LINE
--attribute=EXPRESSION Runs tests that matches EXPRESSION.
EXPRESSION is evaluated as Ruby's expression.
Test attribute name can be used with no receiver in EXPRESSION.
EXPRESSION examples:
!slow
tag == 'important' and !slow
--[no-]priority-mode Runs some tests based on their priority.
--default-priority=PRIORITY Uses PRIORITY as default priority
(h[igh], i[mportant], l[ow], m[ust], ne[ver], no[rmal])
-I, --load-path=DIR[:DIR...] Appends directory list to $LOAD_PATH.
--color-scheme=SCHEME Use SCHEME as color scheme.
(d[efault])
--config=FILE Use YAML fomat FILE content as configuration file.
--order=ORDER Run tests in a test case in ORDER order.
(a[lphabetic], d[efined], r[andom])
--max-diff-target-string-size=SIZE
Shows diff if both expected result string size and actual result string size are less than or equal SIZE in bytes.
(1000)
-v, --verbose=[LEVEL] Set the output level (default is verbose).
(important-only, n[ormal], p[rogress], s[ilent], v[erbose])
--[no-]use-color=[auto] Uses color output
(default is auto)
--progress-row-max=MAX Uses MAX as max terminal width for progress mark
(default is auto)
--no-show-detail-immediately Shows not passed test details immediately.
(default is yes)
--output-file-descriptor=FD Outputs to file descriptor FD
-- Stop processing options so that the
remaining options will be passed to the
test.
-h, --help Display this help.
Deprecated options:
--console Console runner (use --runner).
Coverage report Rcov style generated for RSpec to /var/lib/jenkins/jobs/anyone.develop.spec/workspace/coverage/rcov
Rspecs Failed
I tried the suggestion from https://github.com/grosser/parallel_tests/issues/189 and added this to spec_helper.rb:
Test::Unit::AutoRunner.need_auto_run = false if defined?(Test::Unit::AutoRunner)
but it did not resolve the problem.
What did: delete spork, and add
Test::Unit::AutoRunner.need_auto_run = false if defined?(Test::Unit::AutoRunner)
as the last line of spec_helper.rb. Resolved!
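A sketch of where that line ends up (the configure block stands in for your existing spec_helper.rb content):
require 'rspec'

RSpec.configure do |config|
  # ... your existing configuration ...
end

# keep this as the very last line of spec_helper.rb so the Test::Unit
# at_exit auto-runner pulled in by a dependency stays disabled
Test::Unit::AutoRunner.need_auto_run = false if defined?(Test::Unit::AutoRunner)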
Try this in your Gemfile:
gem "test-unit", :require => false
or try test-unit 3.1.5.
https://github.com/test-unit/test-unit/issues/32#issuecomment-146885235
Someone there says it broke their application on Heroku, but it is likely the same problem as yours.
echo "how many testcases"
read s1
echo "Enter the Testcases"
# read the selected testcase numbers into an array
for (( c=1; c<=$s1; c++ ))
do
  read a1
  a[$c]=$a1
  #echo ${a[$c]}
done
# join all but the last number with '|'
for (( c=1; c<$s1; c++ ))
do
  str=${a[$c]}'|'
  str1=$str1$str
done
# append the last number
str1=$str1${a[$c]}
echo $str1
# wrap the alternation in parentheses, e.g. (1|2|5)
str1=\($str1\)
echo $str1
CMD="ruby final2.rb --name "\"\/test_$str1\/\"
#echo $CMD
$CMD
I have the test suite final2.rb, which contains test_1 test_2 test_3 test_4 test_5 test_6 test_7 as test cases.
Above I have created a script that takes just the numbers of the test cases to run, like
1
2
5
These are converted to the pattern ruby final2.rb --name "/test_(1|2|5)/".
As we know, this command runs the test cases test_1, test_2 and test_5 in the test suite final2.rb.
But when executed via the Bash script, the test suite runs for only a few milliseconds:
DEMO
Loaded suite final2
Started
Finished in 0.000135 seconds.
0 tests, 0 assertions, 0 failures, 0 errors
But if I type the same command ruby final2.rb --name "/test_(1|2|5)/" in the terminal myself, the desired test cases run and the output is:
Loaded suite final2
Started
Finished in 124.1212135 seconds.
3 tests, 6 assertions, 0 failures, 0 errors
So running the command in the terminal works, but running the same command from the script does not.
Any suggestions?
To run a system command you need to wrap it in backticks (`):
`ruby final2.rb --name "/test_$str1/"`
Another approach: System call from Ruby
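If the script is to stay pure Bash, the underlying problem is that the escaped quotes in CMD become literal characters and the unquoted $CMD undergoes word splitting. A minimal sketch of the fix is to call the command directly, passing the pattern as a single quoted argument:
# instead of CMD=...; $CMD
ruby final2.rb --name "/test_$str1/"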