Unable to save results in results directory using JMeter with Jenkins

summary = 0 in 00:00:00 = ******/s Avg: 0 Min: 9223372036854775807 Max: -9223372036854775808 Err: 0 (0.00%) Tidying up ... # Thu Nov 21 09:13:03 UTC 2019 (1574327583681) Error generating the report: org.apache.jmeter.report.core.SampleException: Could not read metadata ! ... end of run Finished: SUCCESS

As per your Summariser output:
summary = 0 in 00:00:00
it means that 0 samplers were executed, i.e. there are no results to parse or plot. I would recommend checking the jmeter.log file for any suspicious entries; in case of a JMeter failure, the root cause can usually be figured out from the log file.
If you have problems interpreting the log, update your question with the log file contents.
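A quick way to scan the log for problems, assuming a Unix shell:
grep -iE "error|exception" jmeter.log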

Are you using a Transaction Controller in your script? If so, update the relevant line to false in the JMeter properties file; that should resolve the issue. (The original answer illustrated the true and false settings with screenshots.)
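A minimal sketch of that change, assuming the property in question is the Transaction Controller sub-results setting (the original answer showed the exact line only in screenshots, so treat the key below as an assumption):
# user.properties (or jmeter.properties)
# Assumption: sub-results written by Transaction Controllers can break
# CSV-based dashboard generation; disabling them avoids the metadata error.
jmeter.save.saveservice.subresults=false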

Related

How to get SoapUI assertion result back in Jenkins script

In my Jenkinsfile, I am executing a Maven command and it executes very well.
mvn com.smartbear.soapui:soapui-maven-plugin:5.5.0:test -f src/main/resources/testcases/pom.xml
I can see the reports generated, and in the Jenkins log I can see the status of the test execution.
SoapUI 5.3.0 TestCaseRunner Summary
Time Taken: 3922ms
Total TestSuites: 1
Total TestCases: 1 (0 failed)
Total TestSteps: 1
Total Request Assertions: 3
Total Failed Assertions: 0
Total Exported Results: 1
What I want is to get the status of the test execution, success or failure. How can I get the test execution result back in the Jenkinsfile so I can mark the stage as success or failure?
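A minimal sketch of one way to wire this up in a declarative Jenkinsfile, relying on Maven's exit code (the soapui-maven-plugin normally fails the Maven build when assertions fail, which in turn fails the sh step; the stage name and post blocks here are illustrative):
pipeline {
    agent any
    stages {
        stage('SoapUI tests') {
            steps {
                // The sh step fails the stage automatically if mvn exits
                // non-zero, which the plugin does on assertion failures.
                sh 'mvn com.smartbear.soapui:soapui-maven-plugin:5.5.0:test -f src/main/resources/testcases/pom.xml'
            }
        }
    }
    post {
        success { echo 'SoapUI tests passed' }
        failure { echo 'SoapUI tests failed' }
    }
}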

Import and parse syslog files to Elasticsearch

I need to import a set of syslog files to Elasticsearch. I am using a Filebeat agent.
The data import succeeds; however, the data in Elasticsearch is not parsed.
This is the input file:
Feb 14 03:43:40 my_host_name run-parts(/etc/cron.daily)[1544] finished rhsmd
Feb 14 03:43:40 my_host_name anacron[240673]: Job `cron.daily' terminated (produced output)
Feb 14 03:43:41 my_host_name anacron[240673]: Normal exit (1 job run)
Feb 14 03:43:41 my_host_name postfix/pickup[241860]: 7E8CFC00BB50: uid=0 from=<root>
I work with version 7.15.2 of Filebeat and Elasticsearch. I get an index output with the message field not parsed; it contains, for example, the whole line "Feb 14 03:43:41 my_host_name anacron[240673]: Normal exit (1 job run)".
In version 8.0 there is a processor option to add to the configuration file that parses this field:
processors:
  - syslog:
      field: message
However, in version 7.15.2 this option is not available.
How can I parse this field in the Filebeat configuration?
Thank you for your help.
What you could do is use either the dissect or the script processor to parse the values according to your needs. Not saying this is the best option, but it is an option.
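For instance, a dissect processor roughly like this (a sketch only; the tokenizer and field names are assumptions matched to the sample lines above, and lines without a colon after the process name, like the run-parts one, would need a second pattern or a script processor):
processors:
  - dissect:
      tokenizer: "%{month} %{day} %{time} %{host} %{process}: %{text}"
      field: "message"
      target_prefix: "syslog"
This would leave the parsed pieces under syslog.month, syslog.host, and so on.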

Unable to upload wasm file on terra-station

I developed an NFT smart contract based on CosmWasm for the Terra blockchain. It was working well, but after I upgraded the cosmwasm-std version from 0.9.2 to 1.0.0-beta8, storing the wasm on chain throws an error, despite the source compiling and optimizing successfully.
My code is based on https://github.com/terran6/nft_on_terra/ and deployed using the following commands:
terrain deploy cw721-base --signer custom_tester_1 --network testnet --set-signer-as-admin
terrain sync-refs
This produced the following error:
...
Optimizing cw721_base.wasm ...
Creating hashes ...
5401a4be4cccc8c52109391ed3473074941153eecb71d79bdb2fd813fe3a77d9 cw721_base.wasm
Info: sccache stats after build
Compile requests 41
Compile requests executed 25
Cache hits 0
Cache misses 25
Cache misses (Rust) 25
Cache timeouts 0
Cache read errors 0
Forced recaches 0
Cache write errors 0
Compilation failures 0
Cache errors 0
Non-cacheable compilations 0
Non-cacheable calls 16
Non-compilation calls 0
Unsupported compiler calls 0
Average cache write 0.000 s
Average cache read miss 2.733 s
Average cache read hit 0.000 s
Failed distributed compilations 0
Non-cacheable reasons:
crate-type 12
- 4
Cache location Local disk: "/root/.cache/sccache"
Cache size 15 MiB
Max cache size 10 GiB
done
storing wasm bytecode on chain... !
Error: Request failed with status code 400
Response: failed to execute message; message index: 0: Error calling the
VM: Error during static Wasm validation: Wasm contract has unknown
interface_version_* marker export (see
https://github.com/CosmWasm/cosmwasm/blob/main/packages/vm/README.md):
store wasm contract failed: invalid request
Error: Process completed with exit code 1.
This error has been blocking me for several days. Thanks in advance.
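One diagnostic worth trying (a sketch, assuming the WABT toolkit is installed and the optimizer wrote the contract to artifacts/): inspect which interface_version_* marker the compiled wasm actually exports, since the cosmwasm-std 1.0.0-beta line emits a newer marker (interface_version_8) than chains running an older CosmWasm VM will accept:
# List the wasm's exports and look for the interface version marker.
wasm-objdump -x artifacts/cw721_base.wasm | grep interface_version
If the chain's VM predates the marker shown, a common workaround is to pin cosmwasm-std back to the release matching the CosmWasm version the chain actually runs.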

How to get execution time of each test in Bazel?

When running bazel test, the output contains only a summary of all the tests, including the total run time.
Running Bazel with performance profiling does not help, because it does not show each test's time.
So how do I get the execution time of each test?
UPD:
I have a sample repo to reproduce my problem:
$ git clone https://github.com/MikhailTymchukFT/bazel-java
$ cd bazel-java
$ bazel test //:AllTests --test_output=all --test_summary=detailed
Starting local Bazel server and connecting to it...
INFO: Analyzed 2 targets (20 packages loaded, 486 targets configured).
INFO: Found 2 test targets...
INFO: From Testing //:GreetingTest:
==================== Test output for //:GreetingTest:
JUnit4 Test Runner
..
Time: 0.017
OK (2 tests)
BazelTestRunner exiting with a return value of 0
JVM shutdown hooks (if any) will run now.
The JVM will exit once they complete.
-- JVM shutdown starting at 2020-04-07 09:44:56 --
================================================================================
INFO: From Testing //:MainTest:
==================== Test output for //:MainTest:
JUnit4 Test Runner
.
Time: 0.016
OK (1 test)
BazelTestRunner exiting with a return value of 0
JVM shutdown hooks (if any) will run now.
The JVM will exit once they complete.
-- JVM shutdown starting at 2020-04-07 09:44:57 --
================================================================================
INFO: Elapsed time: 21.009s, Critical Path: 6.68s
INFO: 10 processes: 6 darwin-sandbox, 4 worker.
INFO: Build completed successfully, 18 total actions
Test cases: finished with 3 passing and 0 failing out of 3 test cases
INFO: Build completed successfully, 18 total actions
I can see the execution time of both tests in GreetingTest:
==================== Test output for //:GreetingTest:
JUnit4 Test Runner
..
Time: 0.017
OK (2 tests)
, but I cannot see the execution time of each test in this class/rule.
With --test_summary=short (the default value), the end of the output looks like this (lines for the other 325 tests truncated):
INFO: Elapsed time: 148.326s, Critical Path: 85.71s, Remote (0.00% of the time): [queue: 0.00%, setup: 0.00%, process: 0.00%]
INFO: 680 processes: 666 linux-sandbox, 14 worker.
INFO: Build completed successfully, 724 total actions
//third_party/GSL/tests:no_exception_throw_test (cached) PASSED in 0.4s
//third_party/GSL/tests:notnull_test (cached) PASSED in 0.5s
//aos/events:shm_event_loop_test PASSED in 12.3s
Stats over 5 runs: max = 12.3s, min = 2.4s, avg = 6.3s, dev = 3.7s
//y2018/control_loops/superstructure:superstructure_lib_test PASSED in 2.3s
Stats over 5 runs: max = 2.3s, min = 1.3s, avg = 1.8s, dev = 0.4s
Executed 38 out of 329 tests: 329 tests pass.
INFO: Build completed successfully, 724 total actions
Confusingly, --test_summary=detailed doesn't include the times, even though the name sounds like it should have strictly more information.
For sharded tests, that output doesn't quite have every single test execution, but it does give statistics about them as shown above.
If you want to access the durations programmatically, the build event protocol has a TestResult.test_attempt_duration_millis field.
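A rough sketch of reading those durations from the JSON form of the protocol (assumptions: the test run used bazel test ... --build_event_json_file=bep.json, and the field appears camelCased as testAttemptDurationMillis in the JSON output):
import json

# Each line of the --build_event_json_file output is one JSON event.
with open("bep.json") as f:
    for line in f:
        event = json.loads(line)
        result = event.get("testResult")
        if not result:
            continue
        label = event.get("id", {}).get("testResult", {}).get("label", "?")
        print(label, result.get("testAttemptDurationMillis"), "ms")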
Alternatively, using --test_output=all will print all the output from your actual test binaries, including the ones that pass. Many testing frameworks print a total execution time there.
There is a testlogs folder where you can find .xml files with the execution times of each testcase.
The bazel-testlogs symlink points to the same location.
For my example, these files will be located at /private/var/tmp/_bazel_<user>/<some md5 hash>/execroot/<project name>/bazel-out/<kernelname>-fastbuild/testlogs/GreetingTest/test.xml
The content of that file is like this:
<?xml version='1.0' encoding='UTF-8'?>
<testsuites>
  <testsuite name='com.company.core.GreetingTest' timestamp='2020-04-07T09:58:28.409Z' hostname='localhost' tests='2' failures='0' errors='0' time='0.01' package='' id='0'>
    <properties />
    <testcase name='sayHiIsString' classname='com.company.core.GreetingTest' time='0.01' />
    <testcase name='sayHi' classname='com.company.core.GreetingTest' time='0.0' />
    <system-out />
    <system-err />
  </testsuite>
</testsuites>
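A small sketch of pulling the per-testcase times out of such files, using the bazel-testlogs symlink mentioned above:
import xml.etree.ElementTree as ET

# test.xml paths follow the bazel-testlogs/<target>/test.xml layout.
tree = ET.parse("bazel-testlogs/GreetingTest/test.xml")
for case in tree.iter("testcase"):
    print(case.get("classname"), case.get("name"), case.get("time"), "s")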

busted No test files found matching Lua pattern: spec

[screenshot: my directory listing]
The contents of the file 'hhh.lua' are the same as those of 'btest_spec.lua' (see the directory listing above).
When I run 'busted' (just the command 'busted'), it returns an error:
0 successes / 0 failures / 1 error / 0 pending : 0.00003 seconds
Error → No test files found matching Lua pattern: _spec
When I run 'busted btest_spec.lua', it succeeds and returns:
●●
2 successes / 0 failures / 0 errors / 0 pending : 0.003049 seconds
When I run 'busted *', it succeeds and returns:
●●●●
4 successes / 0 failures / 0 errors / 0 pending : 0.006815 seconds
So why does busted fail to find the file 'btest_spec.lua' when I run 'busted'?
I had the same error (macOS Sierra, fish shell) and solved it by running busted . instead of just busted. Note the period indicating busted should look in the current working directory.
This is due to a breaking change in the dependency "penlight", which busted relies on.
See https://github.com/Olivine-Labs/busted/issues/528
The fixed version of penlight (1.4.1) is now on LuaRocks, which should fix your issue if you update busted.
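If you want busted to find the spec files without pointing it at a directory each time, a .busted config file at the project root can pin the search roots and pattern (a sketch; ROOT and pattern mirror the CLI options, but verify the keys against the busted docs for your version):
return {
  default = {
    -- Directories busted scans for test files (here: the project root).
    ROOT = {"."},
    -- Substring a filename must contain to be collected as a test file.
    pattern = "_spec",
  },
}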
