How to mail the exact file content through Jenkins?

I want to mail the content of the file below through Jenkins, preserving its newlines.
$ cat summary.txt
---| My First test |---
Total Tests: 1
Total Passes: 0
Total Errors: 0
Total Failures: 1
Total Skipped tests: 0
Jenkins job configuration:
Inject environment variables:
Properties File Path
/path/summary.txt
Editable Email Notification:
Default content:
$DEFAULT_CONTENT
${FILE, path="summary.txt"}
Received Mail:
First - Build # 111 - Still Failing: Check console output at http://1.1.1.1:8080/job/First/11/ to view the results. ---| My First test |--- Total Tests: 1 Total Passes: Total Errors: 0 Total Failures: 1 Total Skipped tests:
Expected Mail:
First - Build # 111 - Still Failing: Check console output at http://1.1.1.1:8080/job/First/111/ to view the results.
---| My First test |---
Total Tests: 1
Total Passes: 0
Total Errors: 0
Total Failures: 1
Total Skipped tests: 0

If your build log is long, putting all the content in the mail body can be unwieldy and cumbersome. Instead, make the file (for example the build log) an attachment to the email; that way problems with long content and formatting go away. Hope this helps.
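For example, assuming the standard Email Extension plugin ("Editable Email Notification") options, the Attachments field accepts an Ant-style pattern relative to the workspace:
Editable Email Notification:
Attachments:
summary.txt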

Related

CppUTest on Jenkins

We use CppUTest to run unit tests.
The build is driven by CMake/Ninja: after building the tests, we execute them with ninja test.
An example of the output is:
1/3 Test #1: Test1................................................... Passed 0.03 sec
Start 2: Test2
2/3 Test #2: Test2......................................................... Passed 0.00 sec
Start 3: Test3
3/3 Test #3: Test3..............................................................***Exception: SegFault 0.00 sec
66% tests passed, 1 tests failed out of 3
Total Test time (real) = 0.26 sec
The following tests FAILED:
3 - Test3 (SEGFAULT)
Errors while running CTest
FAILED: CMakeFiles/test.util
This is fine when I trigger the build locally on my machine and analyze the output manually. What I am looking for now is an existing solution that helps Jenkins analyze the output.
Right now, Jenkins executes the build and exits "successfully", because the ninja test command itself succeeded, even though not all of the tests passed.
Maybe you already found this, but you can have CppUTest produce JUnit XML output with the -ojunit flag. Jenkins should then be able to import the results from those files.
CppUTest Commandline Switches
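For example, a minimal sketch (the test binary path here is an assumption; with -ojunit CppUTest writes one cpputest_<GroupName>.xml file per test group):
# run the CppUTest binary with JUnit XML output enabled
./build/tests/test_util -ojunit
Then point the "Publish JUnit test result report" post-build action (or the junit Pipeline step) at a pattern such as cpputest_*.xml.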

In a Jenkins job, behave tests stop after any failure

I have created a Jenkins "freestyle" job in which I am trying to run multiple BDD testing processes. Below are the commands I have put in the Jenkins Build / "Execute shell" section:
cd ~/FEXT_BETA_BDD
rm -rf allure_reports allure-reports allure-results
pip install behave
pip install selenium
pip install -r features/requirements.txt
# execute features in plan section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/plan/*.feature
# execute features in blueprint section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/blueprint/*.feature
What I have found is that in Jenkins, if any test case fails intermittently, a message like this is shown in the Console Output:
"
...
0 features passed, 1 failed, 0 skipped
0 scenarios passed, 1 failed, 0 skipped
3 steps passed, 1 failed, 1 skipped, 0 undefined
Took 2m48.770s
Build step 'Execute shell' marked build as failure
"
The leftover test cases are then skipped. But if I run the behave command directly on my local host, I don't get this behaviour: the failure is detected and the remaining test cases continue until all are finished.
So how can I work around this issue in Jenkins?
Thanks,
Jack
You may try the following syntax:
set +e
# execute features in plan section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/plan/*.feature || echo 'ALERT: Build failed while running the plan section'
# execute features in blueprint section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/blueprint/*.feature || echo 'ALERT: Build failed while running the blueprint section'
# Restoring original configuration
set -e
Note:
The goal of set -e is to make the shell abort whenever a command fails. If you look at your log output, you will notice sh -xe at the start of execution, which confirms that Jenkins' Execute Shell uses the -e option. To disable it, use set +e. It is good practice to restore it once your purpose is fulfilled, so that subsequent commands behave as expected.
Ref: https://superuser.com/questions/1113014/what-would-set-e-and-set-x-commands-do-in-the-context-of-a-shell-script
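If you also want the build to end up red when any section fails (the || echo above always succeeds, so the step on its own would mark the build green), a variation is to record the failures and exit non-zero at the end. A minimal sketch using the same job layout:
set +e
rc=0
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/plan/*.feature || rc=1
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/blueprint/*.feature || rc=1
exit $rc
Both sections always run, but Jenkins still marks the build as failed if either one had failing scenarios.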
The console output from the SummaryReporter above indicates that you have only one feature with one scenario (which fails). behave has no mechanism that stops the run when the first scenario fails.
An early abort of the test run can only occur if something critical happens:
A failure/exception in the before_all() hook occurs
A critical exception is raised (SystemExit, KeyboardInterrupt) to end the test run
Your implementation tells behave to abort the test run (this makes sense on critical failures, when all remaining tests would also fail; why waste the time)
BUT: If the test run is aborted early, all the features/scenarios that are not executed yet are reported as untested counts in the SummaryReporter.
...
0 features passed, 1 failed, 0 skipped, 2 untested
0 scenarios passed, 1 failed, 0 skipped, 3 untested
0 steps passed, 1 failed, 0 skipped, 0 undefined, 6 untested
HINT: Untested counts are normally hidden; they are only shown when the counter is greater than zero.
This is not the case in your description.
SEE ALSO:
behave: features/runner.abort_by_user.feature

Invoke-Pester -CodeCoverage claims 0% code coverage when testing module function

I wrote a function for dbatools called New-DbaSqlConnectionStringBuilder, and I wrote unit tests for it. I know these unit tests cover most of the function, yet I get a 0% code coverage report from the following command.
Invoke-Pester .\tests\New-DbaSqlConnectionStringBuilder.Tests.ps1 -CodeCoverage .\functions\New-DbaSqlConnectionStringBuilder.ps1
Abridged output below:
**********************
Running C:\Users\zippy\Documents\dbatools\tests\New-
. . .
Unit tests happen
. . .
Passed: 16 Failed: 0 Skipped: 0 Pending: 0 Inconclusive: 0
Code coverage report:
Covered 0.00% of 21 analyzed commands in 1 file.
To get this version of the code:
git clone https://github.com/zippy1981/dbatools.git
cd dbatools
git checkout testing/PesterCodeCoverage
Import-Module .\dbatools.psd1
What am I doing wrong?
Just psychic debugging: your module is installed, and your tests are running against the installed module instead of the .\functions\New-DbaSqlConnectionStringBuilder.ps1 file.

Minitest - A test suite with method-level granularity

After an upgrade, I'm finding the same several test methods failing, so I'd like to automate testing just those instead of all methods in all classes. I want to list each class-method pair (e.g. TestBlogPosts.test_publish, TestUsers.test_signup) and have them run together as a test suite. Either in a file or on the command-line, I don't really care.
I'm aware of these techniques to run several entire classes, but I'm looking for finer granularity here. (Similar to what -n /pattern/ does on the command-line - to run a subset of test methods - but across multiple classes.)
You could forgo minitest/autorun and call Minitest.run with your own test selection.
An example:
gem 'minitest'
require 'minitest'
#~ require 'minitest/autorun' ## No!

# Define the test cases.
# The `puts` statements log which tests are executed.
class MyTest1 < MiniTest::Test
  def test_add
    puts "call %s.%s" % [self.class, __method__]
    assert_equal(2, 1 + 1)
  end

  def test_subtract
    puts "call %s.%s" % [self.class, __method__]
    assert_equal(0, 1 - 1)
  end
end

class MyTest2 < MiniTest::Test
  def test_add
    puts "call %s.%s" % [self.class, __method__]
    assert_equal(2, 1 + 1)
  end

  def test_subtract
    puts "call %s.%s" % [self.class, __method__]
    assert_equal(1, 1 - 1) # will fail
  end
end

# Run two suites with selected test methods.
Minitest.run(%w{-n /MyTest1.test_subtract|MyTest2.test_add/}) # select two specific test methods
The result:
Run options: -n "/MyTest1.test_subtract|MyTest2.test_add/" --seed 57971
# Running:
call MyTest2.test_add
.call MyTest1.test_subtract
.
Finished in 0.002313s, 864.6753 runs/s, 864.6753 assertions/s.
2 runs, 2 assertions, 0 failures, 0 errors, 0 skips
When you run the following instead:
Minitest.run(%w{-n /MyTest1.test_subtract/}) # select one specific test method
puts '=================='
Minitest.run(%w{-n /MyTest2.test_add/}) # select one specific test method
then you get
Run options: -n /MyTest1.test_subtract/ --seed 18834
# Running:
call MyTest1.test_subtract
.
Finished in 0.001959s, 510.4812 runs/s, 510.4812 assertions/s.
1 runs, 1 assertions, 0 failures, 0 errors, 0 skips
==================
Run options: -n /MyTest2.test_add/ --seed 52720
# Running:
call MyTest2.test_add
.
Finished in 0.000886s, 1128.0825 runs/s, 1128.0825 assertions/s.
1 runs, 1 assertions, 0 failures, 0 errors, 0 skips
Minitest.run takes the same parameters you would use on the command line, so you can use the -n option with your selection, e.g. /MyTest1.test_subtract|MyTest2.test_add/.
You could define different tasks or methods, each with its own Minitest.run call, to define your test suites.
Attention:
No test file you load may contain a require 'minitest/autorun'.

JTL file is not getting parsed in Jenkins for JMeter

I am trying to run a JMeter test from Jenkins. I have already installed the Performance plugin and restarted Jenkins. I don't want to use Maven/Ant.
Execute shell command
cd /Users/Shared/Jenkins/Home/jobs/meineTui-QA-Test-Jmeter/workspace
java -jar /Users/Shared/Jenkins/apache-jmeter/bin/ApacheJMeter.jar -n -t Login_Logout.jmx -l result.jtl
In Jenkins' post-build actions: Publish performance test result report -> JMeter -> Report files -> **/*.jtl
When I run it from Jenkins, the console says:
Performance: Failed to parse /Users/Shared/Jenkins/Home/jobs/meineTui-QA-Test-Jmeter/builds/2013-10-03_17-14-53/performance-reports/JMeter/result.jtl: Content is not allowed in prolog.
So I am not able to view the result/report in the Performance Report section. Any suggestions on how to fix this?
==================================console output=============
+ cd /Users/Shared/Jenkins/Home/jobs/meineTui-QA-Test-Jmeter/workspace
+ java -jar /Users/Shared/Jenkins/apache-jmeter/bin/ApacheJMeter.jar -n -t Login_Logout.jmx -l result.jtl
Creating summariser <summary>
Created the tree successfully using Login_Logout.jmx
Starting the test # Thu Oct 03 17:14:55 BST 2013 (1380816895721)
Waiting for possible shutdown message on port 4445
summary + 2 in 4.1s = 0.5/s Avg: 2013 Min: 766 Max: 3260 Err: 0 (0.00%) Active: 1 Started: 1 Finished: 0
summary + 10 in 4s = 2.5/s Avg: 392 Min: 286 Max: 573 Err: 0 (0.00%) Active: 0 Started: 1 Finished: 1
summary = 12 in 8s = 1.5/s Avg: 662 Min: 286 Max: 3260 Err: 0 (0.00%)
Tidying up ... # Thu Oct 03 17:15:04 BST 2013 (1380816904307)
... end of run
Performance: Percentage of errors greater or equal than 0% sets the build as unstable
Performance: Percentage of errors greater or equal than 0% sets the build as failure
Performance: Recording JMeter reports '**/*.jtl'
Performance: Parsing JMeter report file result.jtl
Performance: Failed to parse /Users/Shared/Jenkins/Home/jobs/meineTui-QA-Test-Jmeter/builds/2013-10-03_17-14-53/performance-reports/JMeter/result.jtl: Content is not allowed in prolog.
Finished: SUCCESS
result.jtl
1380816896268,766,Login,200,OK,Group1 1-1,text,true,230,766
1380816897071,3260,Reservations,200,OK,Group1 1-1,text,true,3295,3260
1380816900339,335,ReservationID,200,OK,Group1 1-1,text,true,8683,335
1380816900681,353,Weather,200,OK,Group1 1-1,text,true,2022,353
1380816901039,563,Summary,200,OK,Group1 1-1,text,true,6528,563
1380816901607,573,Home,200,OK,Group1 1-1,text,true,11955,573
1380816902187,329,HolidayCountdown,200,OK,Group1 1-1,text,true,344,329
1380816902520,375,Contacts,200,OK,Group1 1-1,text,true,2835,375
1380816902899,286,Excursions,200,OK,Group1 1-1,text,true,237,286
1380816903189,361,TravelAgent,200,OK,Group1 1-1,text,true,570,361
1380816903554,319,Profile,200,OK,Group1 1-1,text,true,395,319
Make the following changes in the jmeter.properties file:
Uncomment the following line and change csv to xml:
#jmeter.save.saveservice.output_format=csv
like this:
jmeter.save.saveservice.output_format=xml
Also remove the (#) comment from the following lines:
jmeter.save.saveservice.data_type=true
jmeter.save.saveservice.label=true
jmeter.save.saveservice.response_code=true
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.thread_name=true
And change the extension of the generated results file from .jtl to .xml.
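Alternatively, if you would rather not edit jmeter.properties, JMeter accepts per-run property overrides via the -J flag; a sketch based on the command from the question:
java -jar /Users/Shared/Jenkins/apache-jmeter/bin/ApacheJMeter.jar -n -t Login_Logout.jmx -l result.jtl -Jjmeter.save.saveservice.output_format=xml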
With latest versions of Jenkins Performance plugin (e.g. v1.14) you can parse both CSV and XML formats.
Depending on the format of your result files, you need to select the appropriate report type in the "Publish performance tests result report" section:
chose "JMeter" report type if your result files are XML
chose "JMeterCSV" report type if your result files are CSV.
