busted: No test files found matching Lua pattern: spec

The contents of the file 'hhh.lua' are the same as the file 'btest_spec.lua', and both sit in my working directory.
When I run 'busted' (just the command 'busted'), it returns an error:
0 successes / 0 failures / 1 error / 0 pending : 0.00003 seconds
Error → No test files found matching Lua pattern: _spec
When I run 'busted btest_spec.lua', it succeeds and returns:
●●
2 successes / 0 failures / 0 errors / 0 pending : 0.003049 seconds
When I run 'busted *', it succeeds and returns:
●●●●
4 successes / 0 failures / 0 errors / 0 pending : 0.006815 seconds
So why does busted fail to find the file 'btest_spec.lua' when I just run 'busted'?

I had the same error (macOS Sierra, fish shell) and solved it by running busted . instead of just busted. Note the period indicating busted should look in the current working directory.

This is due to a break in the dependency "penlight", which busted relies on.
See here - https://github.com/Olivine-Labs/busted/issues/528
The fixed version of penlight (1.4.1) is now on luarocks, which should fix your issue if you update busted.
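If updating is not immediately possible, you can also point busted at your spec files explicitly with a .busted config file in the project root. A minimal sketch, assuming your *_spec.lua files live in the current directory:

-- .busted (adjust ROOT to wherever your *_spec.lua files live)
return {
  default = {
    ROOT = {"."},       -- directories busted scans for test files
    pattern = "_spec",  -- file name pattern busted looks for
  },
}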

Related

How to get SoapUI assertion results back in a Jenkins script

In my Jenkinsfile, I am executing a Maven command and it executes well.
mvn com.smartbear.soapui:soapui-maven-plugin:5.5.0:test -f src/main/resources/testcases/pom.xml
I can see the generated reports, and in the Jenkins log I can see the status of the test execution.
SoapUI 5.3.0 TestCaseRunner Summary
Time Taken: 3922ms
Total TestSuites: 1
Total TestCases: 1 (0 failed)
Total TestSteps: 1
Total Request Assertions: 3
Total Failed Assertions: 0
Total Exported Results: 1
What I want is to get the status of the test execution (success or failure). How can I get the test execution result back in the Jenkinsfile so I can mark the stage as success or failure?
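A hedged sketch of one way to do this in a Declarative Pipeline (the stage name is made up, and it relies on the SoapUI Maven plugin exiting nonzero when a test or assertion fails):

stage('SoapUI tests') {
    steps {
        script {
            def rc = sh(
                script: 'mvn com.smartbear.soapui:soapui-maven-plugin:5.5.0:test -f src/main/resources/testcases/pom.xml',
                returnStatus: true
            )
            if (rc != 0) {
                error "SoapUI tests failed (exit code ${rc})"  // marks the stage and build as failed
            }
        }
    }
}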

Unable to upload wasm file on terra-station

I developed an NFT smart contract based on CosmWasm for the Terra blockchain. It was working well, but after I upgraded the cosmwasm-std version from 0.9.2 to 1.0.0-beta8, storing the wasm on chain raises an error, despite the source code compiling and optimizing successfully.
My code is based on https://github.com/terran6/nft_on_terra/ and is deployed using the following commands:
terrain deploy cw721-base --signer custom_tester_1 --network testnet --set-signer-as-admin
terrain sync-refs
These commands produced the following error:
...
Optimizing cw721_base.wasm ...
Creating hashes ...
5401a4be4cccc8c52109391ed3473074941153eecb71d79bdb2fd813fe3a77d9 cw721_base.wasm
Info: sccache stats after build
Compile requests 41
Compile requests executed 25
Cache hits 0
Cache misses 25
Cache misses (Rust) 25
Cache timeouts 0
Cache read errors 0
Forced recaches 0
Cache write errors 0
Compilation failures 0
Cache errors 0
Non-cacheable compilations 0
Non-cacheable calls 16
Non-compilation calls 0
Unsupported compiler calls 0
Average cache write 0.000 s
Average cache read miss 2.733 s
Average cache read hit 0.000 s
Failed distributed compilations 0
Non-cacheable reasons:
crate-type 12
- 4
Cache location Local disk: "/root/.cache/sccache"
Cache size 15 MiB
Max cache size 10 GiB
done
storing wasm bytecode on chain... !
Error: Request failed with status code 400
Response: failed to execute message; message index: 0: Error calling the
VM: Error during static Wasm validation: Wasm contract has unknown
interface_version_* marker export (see
https://github.com/CosmWasm/cosmwasm/blob/main/packages/vm/README.md):
store wasm contract failed: invalid request
Error: Process completed with exit code 1.
This error has been blocking me for several days. Thanks in advance.

How to get execution time of each test in bazel?

When running bazel test, the output contains only a summary of all the tests, including the total run time.
Running Bazel with performance profiling does not help, because it does not show the time of each test.
So how can I get the execution time of each test?
UPD:
I have a sample repo to reproduce my problem:
$ git clone https://github.com/MikhailTymchukFT/bazel-java
$ cd bazel-java
$ bazel test //:AllTests --test_output=all --test_summary=detailed
Starting local Bazel server and connecting to it...
INFO: Analyzed 2 targets (20 packages loaded, 486 targets configured).
INFO: Found 2 test targets...
INFO: From Testing //:GreetingTest:
==================== Test output for //:GreetingTest:
JUnit4 Test Runner
..
Time: 0.017
OK (2 tests)
BazelTestRunner exiting with a return value of 0
JVM shutdown hooks (if any) will run now.
The JVM will exit once they complete.
-- JVM shutdown starting at 2020-04-07 09:44:56 --
================================================================================
INFO: From Testing //:MainTest:
==================== Test output for //:MainTest:
JUnit4 Test Runner
.
Time: 0.016
OK (1 test)
BazelTestRunner exiting with a return value of 0
JVM shutdown hooks (if any) will run now.
The JVM will exit once they complete.
-- JVM shutdown starting at 2020-04-07 09:44:57 --
================================================================================
INFO: Elapsed time: 21.009s, Critical Path: 6.68s
INFO: 10 processes: 6 darwin-sandbox, 4 worker.
INFO: Build completed successfully, 18 total actions
Test cases: finished with 3 passing and 0 failing out of 3 test cases
INFO: Build completed successfully, 18 total actions
I can see the execution time of both tests in GreetingTest:
==================== Test output for //:GreetingTest:
JUnit4 Test Runner
..
Time: 0.017
OK (2 tests)
However, I cannot see the execution time of each individual test in this class/rule.
With --test_summary=short (the default value), the end of the output looks like this (lines for the other 325 tests truncated):
INFO: Elapsed time: 148.326s, Critical Path: 85.71s, Remote (0.00% of the time): [queue: 0.00%, setup: 0.00%, process: 0.00%]
INFO: 680 processes: 666 linux-sandbox, 14 worker.
INFO: Build completed successfully, 724 total actions
//third_party/GSL/tests:no_exception_throw_test (cached) PASSED in 0.4s
//third_party/GSL/tests:notnull_test (cached) PASSED in 0.5s
//aos/events:shm_event_loop_test PASSED in 12.3s
Stats over 5 runs: max = 12.3s, min = 2.4s, avg = 6.3s, dev = 3.7s
//y2018/control_loops/superstructure:superstructure_lib_test PASSED in 2.3s
Stats over 5 runs: max = 2.3s, min = 1.3s, avg = 1.8s, dev = 0.4s
Executed 38 out of 329 tests: 329 tests pass.
INFO: Build completed successfully, 724 total actions
Confusingly, --test_summary=detailed doesn't include the times, even though the name sounds like it should have strictly more information.
For sharded tests, that output doesn't quite have every single test execution, but it does give statistics about them as shown above.
If you want to access the durations programmatically, the build event protocol has a TestResult.test_attempt_duration_millis field.
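For example, a minimal sketch of pulling those durations out of a JSON build event file (the file name and jq filter are assumptions, and newer Bazel versions may expose the duration under a slightly different field name):

bazel test //:AllTests --build_event_json_file=bep.json
jq -r 'select(.id.testResult != null) | "\(.id.testResult.label): \(.testResult.testAttemptDurationMillis) ms"' bep.json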
Alternatively, using --test_output=all will print all the output from your actual test binaries, including the ones that pass. Many testing frameworks print a total execution time there.
There is a testlogs folder where you can find .xml files with the execution times of each testcase.
The bazel-testlogs symlink points to the same location.
For my example, these files will be located at /private/var/tmp/_bazel_<user>/<some md5 hash>/execroot/<project name>/bazel-out/<kernelname>-fastbuild/testlogs/GreetingTest/test.xml
The content of that file is like this:
<?xml version='1.0' encoding='UTF-8'?>
<testsuites>
<testsuite name='com.company.core.GreetingTest' timestamp='2020-04-07T09:58:28.409Z' hostname='localhost' tests='2' failures='0' errors='0' time='0.01' package='' id='0'>
<properties />
<testcase name='sayHiIsString' classname='com.company.core.GreetingTest' time='0.01' />
<testcase name='sayHi' classname='com.company.core.GreetingTest' time='0.0' />
<system-out />
<system-err /></testsuite></testsuites>
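If you want the per-testcase times from all of those files in one place, a small sketch like the following collects them (it only assumes the bazel-testlogs symlink mentioned above; the script is not part of Bazel itself):

# Collect per-testcase times from every test.xml under bazel-testlogs.
import glob
import xml.etree.ElementTree as ET

for path in glob.glob("bazel-testlogs/**/test.xml", recursive=True):
    for case in ET.parse(path).getroot().iter("testcase"):
        print(f"{case.get('classname')}.{case.get('name')}: {case.get('time')}s ({path})")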

Writing graceful timeout for Nagios plugin

From Nagios' Plugin Development Guidelines:
Plugins have a very limited runtime - typically 10 sec. As a result, it is very important for plugins to maintain internal code to exit if runtime exceeds a threshold.
All plugins should timeout gracefully, not just networking plugins.
How can I implement a timeout mechanism into my custom plugin? Basically I want my plugin to return a status code 3 - UNKNOWN instead of the default 1 - CRITICAL when the plugin times out, to reduce the number of false positives generated.
EDIT: My plugin is written in Bash.
You can use timeout. Here is an example usage:
timeout 15 ping google.com
if [ $? -eq 124 ]; then
    echo "UNKNOWN - Time limit exceeded."
    exit 3
fi
timeout returns exit status 124 when your command does not finish within the defined time, 15 seconds in this example.
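A slightly fuller sketch of the same idea inside a plugin, where check_something stands in for your real check logic and the exit codes follow the Nagios convention:

#!/usr/bin/env bash
TIMEOUT=10   # keep this below Nagios' own plugin timeout

output=$(timeout "$TIMEOUT" check_something --host "$1" 2>&1)
rc=$?

if [ "$rc" -eq 124 ]; then
    echo "UNKNOWN - check timed out after ${TIMEOUT}s"
    exit 3
fi

echo "$output"
exit "$rc"   # pass through OK(0)/WARNING(1)/CRITICAL(2)/UNKNOWN(3) from the check itself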

CVS error - CVS exited with error code 1

I have been seeing this error for quite some time now.
I am running an Ant build on Cygwin, which in turn runs on Windows XP.
The (bad) resolution I found was to delete my gcct/first directory and run the Ant build again (which runs from another directory). That works, but if I modify some code under gcct/first, I do not want to have to delete it because of this error.
I did see this link. The resolution there does not apply to me, since I do not have .cvspass defined anywhere in the build.xml.
C:\svn\CEL_v3681\buildCore.xml:1883: cvs exited with error code 1
Command line was [Executing 'cvs' with arguments:
'checkout'
'-A'
'-rfirst_v2_126'
'gcct/first'
The ' characters around the executable and arguments are
not part of the command.
environment:
ALLUSERSPROFILE=C:\Documents and Settings\All Users
ANT_HOME=C:/Apps/Apache/apache-ant-1.7.0
APPDATA=C:\Documents and Settings\shankarc\Application Data
CLASSPATH=./;C:/Program Files/Java/jre1.5.0_07/lib/ext/QTJava.zip
COMMONPROGRAMFILES=C:\Program Files\Common Files
COMPUTERNAME=NYKPWM2035798
COMSPEC=C:\WINNT\system32\cmd.exe
CUSTPROF=Roaming700Live
CVSROOT=:pserver:shankarc#amcvs2.lehman.com:/home/eqcvs/cmte
CVS_RSH=/bin/ssh
FP_NO_HOST_CHECK=NO
HOME=C:\Apps\CYGWIN\home\shankarc
HOMEDRIVE=F:
HOMEPATH=\
HOSTNAME=nykpwm2035798
IDEA_PROPERTIES=C:\Documents and Settings\shankarc\idea.properties
INFOPATH=/usr/local/info:/usr/share/info:/usr/info:
JAVA_HOME=C:/Program Files/Java/jdk1.6.0_21/
JDK_HOME=C:\Program Files\Java\jdk1.6.0_21\
LOGONSERVER=\\NYKPSM00069
MANPATH=/usr/local/man:/usr/share/man:/usr/man::/usr/ssl/man
NUMBER_OF_PROCESSORS=2
OS=Windows_NT
PATH=C:\Apps\CYGWIN\usr\local\bin;C:\Apps\CYGWIN\bin;C:\Apps\CYGWIN\bin;C:\Apps\CYGWIN\usr\X11R6\bin;C:\Apps\Apache\apache-ant-1.7.0\bin;C:\Program Files\Java\jdk1.6.0_21\bin\;C:\Apps\CYGWIN\bin;C:\Program Files\VisualSVN Server\bin;C:\Program Files\Sudowin\Clients\Console;C:\Program Files\Fortify Software\Fortify 360 v2.5.0\bin
PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.PSC1
PRINTER=\\NYKPSM04020\NYKLPR1301-03-03C05
PROCESSOR_ARCHITECTURE=x86
PROCESSOR_IDENTIFIER=x86 Family 6 Model 15 Stepping 6, GenuineIntel
PROCESSOR_LEVEL=6
PROCESSOR_REVISION=0f06
PROFGROUP=FONP
PROGRAMFILES=C:\Program Files
PROMPT=$P$G
PWD=/cygdrive/c/svn/CEL_v3681/gcct/cel
QHOME=c:\q
QTJAVA=C:\Program Files\Java\jre1.5.0_07\lib\ext\QTJava.zip
SESSIONNAME=Console
SHLVL=1
SITECODE=NYK
SITEIDENT=NYK
SVN_ASP_DOT_NET_HACK=1
SYSTEMDRIVE=C:
SYSTEMROOT=C:\WINNT
TEMP=C:\TEMP
TERM=cygwin
TMP=C:\TEMP
UATDATA=C:\WINNT\system32\CCM\UATData\D9F8C395-CAB8-491d-B8AC-179A1FE1BE77
USER=shankarc
USERDNSDOMAIN=INTRANET.BARCAPINT.COM
USERDOMAIN=INTRANET
USERNAME=shankarc
USERPROFILE=C:\Documents and Settings\shankarc
WINDIR=C:\WINNT
CVS_PASSFILE=C:\Apps\CYGWIN\home\shankarc\.cvspass]
Total time: 58 seconds
How do I resolve this?
I had the same issue and found that, even though I was not using .cvspass, I did have a build property cvs.pass set, which needed to be reset to OVERRIDE, depending on how you set up your CVS access (though it looked similar from your post). This needed to be changed in both build.properties and .build.properties. Hope this helps!
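For reference, that meant a line like this in both files (a hedged sketch; the property name is taken from the answer above and may differ in your build setup):

cvs.pass=OVERRIDE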
