I am in the process of creating a Groovy email template for a Jenkins pipeline that runs Robot Framework tests. I intend to use Groovy's XmlSlurper to parse the output.xml produced by the Robot Framework run and extract the information I need. However, the template also relies on the Robot Publisher plugin, which I've now realized automatically deletes output.xml. I would rather not have to archive the artifacts and access them that way, so is there a way to create a copy of output.xml in the Jenkins pipeline, before the Robot Publisher stage, that will not be deleted by Robot Publisher and that I can parse in my email stage?
Please bear with me as I'm relatively new to Jenkins (and stackoverflow for that matter), so apologies if I've excluded vital information, but any ideas would be much appreciated! Thanks
I would approach your problem from a different angle. First of all, I do not suggest using Groovy's XmlSlurper or any other XML parser to extract the information you need from Robot Framework's output.xml.
What you should use instead is Robot Framework's own API, which already implements the parsers you need. You can easily access any information described in the robot.result.model module: suites, tests and keywords with all their attributes, such as test messages, failure messages, execution times, test results, etc.
All in all, this is the most future-proof parsing solution, as the parser will always match the version of the framework. Make sure to use the API documentation that matches your current framework version.
Now back to your task: you should use the above-mentioned API via Robot Framework's listener interface. By implementing the output_file listener method you can access the output.xml file (and even make a copy of it there) before the Robot Publisher plugin removes it. The output_file method is called automatically once output.xml is ready, and it receives the path to the XML file as its argument. You can pass this path straight to the ExecutionResult class from the API, then "visit" the results with your own ResultVisitor and collect the information you need.
The last step is to write the data into a file that serves as the input for your e-mail stage. Note that this file won't be touched by Robot Publisher, as it is not one of the standard outputs but a custom file you just created with Robot Framework's API.
As this might sound like a lot, here is an example to demonstrate the idea. The listener and the result visitor in EmailInputProvider.py:
from robot.api import ExecutionResult, ResultVisitor


class MyTestResultVisitor(ResultVisitor):
    def __init__(self):
        self.test_results = dict()

    def visit_test(self, test):
        self.test_results[test.longname] = test.status


class EmailInputProvider:
    ROBOT_LISTENER_API_VERSION = 3

    def output_file(self, path):
        output = 'EmailInput.txt'
        visitor = MyTestResultVisitor()     # Instantiate the result visitor
        result = ExecutionResult(path)      # Parse the execution result using the Robot API
        result.visit(visitor)               # Visit the top level suite to retrieve the needed data
        with open(output, 'w') as f:        # Write the retrieved data into a file
            for testname, result in visitor.test_results.items():
                print(f'{testname} - {result}', file=f)
        # You can make a copy of the output.xml here as well
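        # For example, a minimal sketch (the copy name is my own example, not part of the original answer):
        import shutil
        shutil.copy(path, 'output-copy.xml')  # a copy under a different name, which Robot Publisher should leave alone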
        print(f'Email: Input saved into {output}')  # Log the custom output location to the console


globals()[__name__] = EmailInputProvider
This would give the following results for this dummy suite (SO2.robot):
*** Test Cases ***
Test A
    No Operation

Test B
    No Operation

Test C
    No Operation

Test D
    No Operation

Test E
    No Operation

Test F
    Fail
Console output:
$ robot --listener EmailInputProvider SO2.robot
==============================================================================
SO2
==============================================================================
Test A | PASS |
------------------------------------------------------------------------------
Test B | PASS |
------------------------------------------------------------------------------
Test C | PASS |
------------------------------------------------------------------------------
Test D | PASS |
------------------------------------------------------------------------------
Test E | PASS |
------------------------------------------------------------------------------
Test F | FAIL |
AssertionError
------------------------------------------------------------------------------
SO2 | FAIL |
6 critical tests, 5 passed, 1 failed
6 tests total, 5 passed, 1 failed
==============================================================================
Email: Input saved into EmailInput.txt
Output: ..\output.xml
Log: ..\log.html
Report: ..\report.html
Custom output file (EmailInput.txt):
SO2.Test A - PASS
SO2.Test B - PASS
SO2.Test C - PASS
SO2.Test D - PASS
SO2.Test E - PASS
SO2.Test F - FAIL
I am using JMeter for functional testing and have two different .jmx files.
The first .jmx has all APIs automated, and the second .jmx is used to send the HTML report (generated using the Ant JMeter task) through the SMTP sampler.
Now I want to send the counts of total, passed and failed samples in the same email by parsing the .jtl file generated by the first .jmx.
Here is what I can see in the .jtl file: s="true" and s="false".
I want a count of each and to save them as properties to use further in the SMTP sampler.
Example in jtl:
<sample t="2" it="0" lt="2" ct="0" ts="1565592433268" s="false" lb="Verify Latest Patch" rc="200" rm="OK" tn="Tenant_Login 3-1" dt="text" by="9" sby="0" ng="1" na="1">
Any help will be appreciated.
Add the next line to the user.properties file:
jmeter.save.saveservice.autoflush=true
This will instruct JMeter to write results to the file as soon as they are available.
Add tearDown Thread Group to your Test Plan
Add HTTP Request sampler to the TearDown Thread Group
Configure it as follows:
Protocol: file
Path: location of your .jtl result file
Add XPath Extractor as a child of the HTTP Request sampler
Configure it as follows:
Reference Name: anything meaningful, i.e. successCount
XPath query: count(//sample[@s='true'])
That's it; now you should be able to refer to the successful sample count as ${successCount} where required.
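If you also need the failed and total counts (this part is my own addition, following the same pattern and assuming your results are written as <sample> elements as in your example), you could add two more XPath Extractors:
Reference Name: failCount, XPath query: count(//sample[@s='false'])
Reference Name: totalCount, XPath query: count(//sample)
They would then be available as ${failCount} and ${totalCount}.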
My question is mostly about step 3 below.
I need to implement the following loop in F#:
Load some input parameters from database. If input parameters specify that the loop should be terminated, then quit. This is straightforward, thanks to type providers.
Run a model generator, which generates a single F# source file, call it ModelData.fs. The generation is based on input parameters and some random values. This may take up to several hours. I already have that.
Compile the project. I can hard code a call to MSBuild, but that does not feel right.
Copy executables into some temporary folder, let’s say <SomeTempData>\<SomeModelName>.
Asynchronously run the executable from the output location of the previous step. The execution time is usually between several hours and several days. Each process runs in a single thread because this is the most efficient way to do it in this case; I tried a parallel version, but it did not beat the single-threaded one.
Catch the moment when the execution finishes. This seems straightforward: https://learn.microsoft.com/en-us/dotnet/api/system.diagnostics.process.exited?view=netframework-4.7.2 . The running process is responsible for storing useful results, if any. This event can be handled by an F# MailboxProcessor.
Repeat from the beginning. Because execution is asynchronous, monitor the number of running tasks to ensure that it does not exceed some allowed number. Again, a MailboxProcessor will do that with ease.
The whole thing will run on Windows, so there is no need to support multiple platforms; .NET Framework (let's say 4.7.2 as of this date) will do fine.
This seems like a very straightforward CI-like exercise, and F# FAKE seemed like a proper solution. Unfortunately, none of the few provided examples worked (even with reasonable tweaks), and the errors were cryptic. The worst part was that the compile step did not work at all. The provided example (http://fake.build/fake-gettingstarted.html#Compiling-the-application) cannot be run as is, and even after accounting for something like https://github.com/fsharp/FAKE/issues/1579 it still silently chooses not to compile the project. I'd appreciate any advice.
Here is the code that I was trying to run. It is based on the references above:
#r #"C:\GitHub\ClmFSharp\Clm\packages\FAKE.5.8.4\tools\FakeLib.dll"
#r #"C:\GitHub\ClmFSharp\Clm\packages\FAKE.5.8.4\tools\System.Reactive.dll"
open System.IO
open Fake.DotNet
open Fake.Core
open Fake.IO
open Fake.IO.Globbing.Operators
let execContext = Fake.Core.Context.FakeExecutionContext.Create false "build.fsx" []
Fake.Core.Context.setExecutionContext (Fake.Core.Context.RuntimeContext.Fake execContext)
// Properties
let buildDir = @"C:\Temp\WTF\"
// Targets
Target.create "Clean" (fun _ ->
Shell.cleanDir buildDir
)
Target.create "BuildApp" (fun _ ->
!! #"..\SolverRunner\SolverRunner.fsproj"
|> MSBuild.runRelease id buildDir "Build"
|> Trace.logItems "AppBuild-Output: "
)
Target.create "Default" (fun _ ->
Trace.trace "Hello World from FAKE"
)
open Fake.Core.TargetOperators
"Clean"
==> "BuildApp"
==> "Default"
Target.runOrDefault "Default"
The problem is that it does not build the project at all, yet no error messages are produced. This is the output when running it in FSI:
run Default
Building project with version: LocalBuild
Shortened DependencyGraph for Target Default:
<== Default
<== BuildApp
<== Clean
The running order is:
Group - 1
- Clean
Group - 2
- BuildApp
Group - 3
- Default
Starting target 'Clean'
Finished (Success) 'Clean' in 00:00:00.0098793
Starting target 'BuildApp'
Finished (Success) 'BuildApp' in 00:00:00.0259223
Starting target 'Default'
Hello World from FAKE
Finished (Success) 'Default' in 00:00:00.0004329
---------------------------------------------------------------------
Build Time Report
---------------------------------------------------------------------
Target Duration
------ --------
Clean 00:00:00.0025260
BuildApp 00:00:00.0258713
Default 00:00:00.0003934
Total: 00:00:00.2985910
Status: Ok
---------------------------------------------------------------------
Every example I've seen (the task-launcher sink and triggertask source) shows how to launch the task defined by the uri attribute.
My task definitions look like this:
sampleTask <t2: timestamp || t1: timestamp>
sampleTask-t1 timestamp
sampleTask-t2 timestamp
sampleTaskRunner composed-task-runner --graph=sampleTask
My question is: how do I launch the composed task runner (sampleTaskRunner, defined via the DSL) from a stream application?
Thanks
UPDATE
I ended up with the solution below, which triggers the task using the SCDF REST API.
Composed task definition:
<timestamp || mySampleTask>
Stream definition:
http | httpclient | log
Deployment properties:
app.http.port=81
app.httpclient.body=name=composedTask&arguments=--increment-instance-enabled=true
app.httpclient.http-method=POST
app.httpclient.url=http://localhost:9393/tasks/executions
app.httpclient.headers-expression={'Content-Type':'application/x-www-form-urlencoded'}
Though it's easy to implement an http sink component, it would be great if the stream application starters provided one out of the box.
Another concern I have is about discovering the SCDF REST URL when deployed in a distributed environment.
Here's a quick take from one of SCDF's R&D team members (Glenn Renfro).
stream create foozer --definition "trigger --fixed-delay=5 | tasklaunchrequest-transform --uri=maven://org.springframework.cloud.task.app:composedtaskrunner-task:1.1.0.BUILD-SNAPSHOT --command-line-arguments='--graph=sampleTask-t1||sampleTask-t2 --increment-instance-enabled=true --spring.datasource.url=jdbc:mariadb://localhost:3306/test --spring.datasource.username=root --spring.datasource.password=password --spring.datasource.driverClassName=org.mariadb.jdbc.Driver' | task-launcher-local" --deploy
In the foozer stream definition,
1) "trigger" source happens to trigger an upstream event every 5s
2) "tasklaunchrequest-transform" processor takes a few arguments; more specifically, it uses "composedtaskrunner-task:1.1.0.BUILD-SNAPSHOT" to launch a composed-task graph (i.e., sampleTask-t1||sampleTask-t2)
3) Pay attention to --increment-instance-enabled. This was recently added to the CTR application and provides the ability to re-launch a composed task on a recurring cadence
4) Since the CTR and SCDF must share the same database, we are also passing the datasource properties as command-line args. (The SCDF server is already started with the same datasource credentials.)
Hope this helps.
Lastly, we will add a sample to the reference guide via: spring-cloud/spring-cloud-dataflow#1780
I am wondering whether there is any possibility in FitNesse, while executing the test stages, to get a value from the response of one test stage and use this value in the next test stage.
I am using hsac-fitnesse-fixtures and SOAP web services.
For example, we have three test stages, and a value from the response of the first stage should automatically transfer to the second stage, which uses it to get its own response.
Comparing with SoapUI, this would be a property transfer.
Example below:
We have request XML:
!define POST_BODY_2 { {{{
<ns1:ZIP>#{zip}</ns1:ZIP>
</s11:Envelope>
}}} }
Stage 1:
|check|xPath|//weather:City/text()|#{City}|
And we get a response XML that contains the city name.
Is it possible to pass this city name as a value to the second test stage?
I.e., we have another POST XML request, !define POST_BODY_3; can we pass the value (the city value) to this request and get the next response XML?
Stage 2 test:
|check |response status|200|
If you are using SLiM as the test system, you can use a SLiM symbol.
$slimSymbol is a "runtime variable" used in the SLiM test system. Symbols are defined using the $slimSymbol= syntax in the test case, and the value is only available at runtime. Documentation here.
In your case, you are using a decision table in the first test case. So instead of having only one output column, I guess you can do something like this:
#some setup here
| send request |
| zip | City? | City? |
| 10007 | New York | $response1= |
| 94102 | San Francisco | $response2= |
In the later test case, you can then refer to the city names using $response1 and $response2. Note that there are no {} around the variables.
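For example (my own sketch, simply reusing the check-row format from your first stage), the stored symbol can be placed wherever a plain value would go, such as the expected value of a later check:
|check|xPath|//weather:City/text()|$response1|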
The following Gherkin test defines the desired behaviour for one of my servers:
Scenario Outline: Calling the server with a valid JSON
Given A GIS update server
When I call /update with the json in the file <filename>
Then the response status code is <status_code>
And the response is a valid JSON
And the response JSON contains the key status
And the response JSON contains the key timestamp
And the response JSON contains the key validity
Examples:
| filename | status_code |
| valid_minimal.json | 200 |
| valid_minimal_minified.json | 200 |
| valid_full.json | 200 |
| valid_full_minified.json | 200 |
| valid_full_some_nulls.json | 200 |
| valid_full_all_nulls.json | 200 |
I wrote this code for unit testing a Flask server. The steps file, which interprets the Gherkin directives, opens a test client and makes the necessary calls and assertions:
@given(u'A GIS update server')
def step_impl(context):
    context.app = application.test_client()
The feature file is similar for unit and functional tests. The only difference is in a few step implementations, which would have to make HTTP calls rather than calling the test client's methods.
What's the right way to re-use this behave feature file by passing parameters to the steps file?
Expanding on Parva's comments, I suggest sending parameters via the command line; these can be detected in your step definitions to adjust behaviour for unit vs. functional testing (whether you separate your unit and functional tests for clarity is up to you ;).
The Behave documentation's "debug on error" recipe gives a good example of using environment variables to modify the execution of your step methods:
# -- FILE: features/environment.py
# USE: BEHAVE_DEBUG_ON_ERROR=yes (to enable debug-on-error)
from distutils.util import strtobool as _bool
import os
BEHAVE_DEBUG_ON_ERROR = _bool(os.environ.get("BEHAVE_DEBUG_ON_ERROR", "no"))
def after_step(context, step):
    if BEHAVE_DEBUG_ON_ERROR and step.status == "failed":
        # -- ENTER DEBUGGER: Zoom in on failure location.
        # NOTE: Use IPython debugger, same for pdb (basic python debugger).
        import ipdb
        ipdb.post_mortem(step.exc_traceback)
You can change that to detect a command-line-passed variable such as UNIT_TESTING and have it hit a different endpoint or perform alternate functionality for your tests.
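For instance, here is a minimal sketch of such a step (UNIT_TESTING, the /update endpoint, the base URL and the use of requests are my assumptions, not something taken from your project):
# -- FILE: features/steps/update_steps.py (sketch)
import json
import os

import requests  # only needed for the functional (HTTP) flavour
from behave import when

UNIT_TESTING = os.environ.get("UNIT_TESTING", "yes") == "yes"

@when(u'I call /update with the json in the file {filename}')
def step_impl(context, filename):
    with open(filename) as f:
        payload = json.load(f)
    if UNIT_TESTING:
        # Unit flavour: go through the Flask test client created in the Given step
        context.response = context.app.post('/update', json=payload)
    else:
        # Functional flavour: call a deployed server over HTTP (base URL assumed)
        context.response = requests.post('http://localhost:5000/update', json=payload)
Keep in mind that the Flask test client and requests return different response objects, so the assertion steps need to account for that as well.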
REQUIRES: behave >= 1.2.5
I think the test-stage concept should help with your needs. It allows you to use different step implementations for different test stages.
behave --stage=functional
If your changes are minor, use the userdata concept to pass a flag to your step implementation, like:
behave -D test_stage=unit …
behave -D test_stage=functional …
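The flag can then be read from context.config.userdata in a hook or step implementation; a minimal sketch (the default value "unit" is my assumption):
# -- FILE: features/environment.py (sketch)
def before_all(context):
    # "-D test_stage=functional" ends up in context.config.userdata
    context.test_stage = context.config.userdata.get("test_stage", "unit")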