How do I tell Taurus that my (Postman/Newman) test is a BlazeMeter Functional test, and not a Performance test? Below is my bzt.yaml, which I created with the help of https://gettaurus.org/docs/Postman/.
execution:
- executor: newman
  iterations: 1
  scenario: functional/simple

scenarios:
  functional/simple:
    script: my.postman_collection.json

reporting:
- module: blazemeter

modules:
  blazemeter:
    request-logging-limit: 20240
    public-report: false
    report-name: my-postman-collection
    test: newmantrials
    project: test
  final-stats:
    summary-labels: true
I run it using the Taurus Docker image:
docker run --rm -t -v `pwd`:/bzt-configs -v `pwd`/artifacts:/tmp/artifacts blazemeter/taurus:1.14.0 bzt.yaml -o modules.blazemeter.token="${token}"
When I log into the BlazeMeter UI, I see that it's listed under the "Performance" tab and looks like a performance test. I would like it to run as a Functional test to get more details on the request and response payloads.
I do not believe it's possible at the moment: BlazeMeter functional tests are currently geared toward either straight API functional tests or GUI (Selenium) functional tests.
The problem is that from BlazeMeter's side, the file validator is failing to correctly identify the Postman/Newman JSON file (despite the YAML file referencing it properly). I reported this to the BlazeMeter R&D team fairly recently, so it's being looked into.
In the meantime, I don't expect this to work in BlazeMeter; it likely won't correctly identify your Newman script, so you'll need to run it as a Performance test in the interim.
(Sorry for the bad news on this one -- Hopefully it'll get sorted soon!)
Feel free to bring this up with BlazeMeter support at support@blazemeter.com as well.
I am wondering:
1) How to run the model directly in Eclipse without the GUI - just run the model like any other Java code in Eclipse and print out the things I am interested in.
2) How to run it in headless mode without Eclipse at all - I plan to deploy my model on a remote server, so that the server or my own PC can run the model automatically at a specific time of day.
3) Every time I change the code, I have to launch a new GUI to pick up the changes. It takes at least 5 seconds to open the GUI, which makes model development and debugging very inefficient. Is there a better strategy available?
For headless, or batch, running of models, take a look at the Repast Batch Getting Started Guide. It covers running multiple runs without a GUI, as in your case (1), and section 9.2 shows how to run from the command line without invoking Eclipse, as in your case (2). If you want more control, I'd suggest looking at the InstanceRunner class and using the complete_model.jar payload that is generated by the Batch GUI or batch_runner.jar.
Unarchive the complete_model.jar
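One way to unarchive it (assuming a JDK is on your PATH so the jar tool is available; unzip complete_model.jar works just as well):
jar xf complete_model.jar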
Then use the InstanceRunner class from the command line, like so, from within the complete_model directory:
java -Xmx512m -cp "../lib/*" repast.simphony.batch.InstanceRunner \
-pxml ../scenario.rs/batch_params.xml \
-scenario ../scenario.rs \
-id $instance \
-pinput localParamFile.txt
where localParamFile.txt is an unrolled parameter file specifying the combination(s) of parameters to run (see the unrolledParamFile.txt within the payload for an example). If you're running just one instance, this would be just one line.
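Since the command references $instance, a simple shell loop can drive several instances; this is just a sketch, and the per-instance parameter file naming here is an illustrative assumption:
for instance in 1 2 3; do
  java -Xmx512m -cp "../lib/*" repast.simphony.batch.InstanceRunner \
    -pxml ../scenario.rs/batch_params.xml \
    -scenario ../scenario.rs \
    -id $instance \
    -pinput localParamFile_$instance.txt   # one unrolled parameter file per instance (hypothetical naming)
done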
I'm trying to build an AWS application using SAM (Serverless Application Model) with the Lambdas written in Java.
I was able to get it running locally by using a resource definition like this in the template:
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: HelloWorldFunction
      Handler: helloworld.App::handleRequest
      Runtime: java8
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get
But to get the sam package phase to upload only the actual code (and not the whole project directory) to S3, I had to change it to this:
...
    Properties:
      CodeUri: HelloWorldFunction/target/HelloWorld-1.0.jar
...
as documented in the AWS SAM example project README.
However, this breaks the ability to run the application locally with sam build followed by sam local start-api.
I tried to get around this by giving the CodeUri value as a parameter (with --parameter-overrides); this works locally but breaks the packaging phase because of a known issue with the SAM translator.
Is there a way to make both the local build and the real AWS deployment working, preferably with the same template file?
The only workaround I've come up with myself so far is to use different template files for local development and for actual packaging and deployment.
To avoid maintaining two almost identical template files, I wrote a script for running the service locally:
#!/bin/bash
echo "Copying template..."
sed 's/CodeUri: .*/CodeUri: HelloWorldFunction/' template.yaml > template-local.yaml
echo "Building..."
if sam build -t template-local.yaml
then
    echo "Serving local API..."
    sam local start-api
else
    echo "Build failed, not running service."
fi
This feels less than optimal but does the trick. I would still love to hear better alternatives.
Another idea that came to mind was extending a common base template with separate CodeUri values for each case, but I don't think SAM templates support anything like that.
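If it helps, the packaging side can follow the same sed trick so both paths share one template.yaml. This is only a sketch; the bucket name, stack name, and jar path are placeholders for your own values:
#!/bin/bash
echo "Copying template..."
sed 's|CodeUri: .*|CodeUri: HelloWorldFunction/target/HelloWorld-1.0.jar|' template.yaml > template-deploy.yaml
echo "Packaging..."
sam package --template-file template-deploy.yaml \
    --s3-bucket my-deployment-bucket \
    --output-template-file packaged.yaml
echo "Deploying..."
sam deploy --template-file packaged.yaml \
    --stack-name hello-world-stack \
    --capabilities CAPABILITY_IAM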
I currently have a build of an application that is set to run infinitely. It is designed to run on a Raspberry Pi as a service, so it will continuously be running.
Whenever I try to test it on Travis-CI, the infinite loop causes an error even though the project builds correctly, because the job never finishes. Is there any way to stop this error, or do I have to remove the ability to run the build from the .travis.yml?
language: cpp
compiler:
  - clang
  - g++
script:
  - make
  - cd main
  - ./jsonWeatherPrediction
I would expect it to error; I'm just not sure of a way to stop it without removing - ./jsonWeatherPrediction.
I don't know if this will help, but the build is located at https://travis-ci.org/DMoore12/json-weather-prediction
Thanks in advance :)
In almost any reasonable CI workflow, the job should have a well-defined start and finish. The software you are testing may run forever, but your tests should not. So, first, I suggest rethinking how you run your build.
Looking at a build such as https://travis-ci.org/DMoore12/json-weather-prediction/jobs/474719832, I see that you are simply running your command (which raises a different question: the command prints the same output forever in a tight loop. Is this the desired behavior?).
For testing, you need a different kind of behavior, one that can be tested (e.g., take input from STDIN or a command-line flag, print, and terminate).
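If you do want to keep exercising the binary in CI before that refactoring, one stopgap (a sketch only; the 10-second limit is an arbitrary assumption) is to bound the run with coreutils' timeout and treat the timeout exit code (124) as success:
script:
  - make
  - cd main
  - timeout 10 ./jsonWeatherPrediction || [ $? -eq 124 ]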
I'm playing with Cucumber, but for some reason the features and scenarios are not being output to the console.
When I run
cucumber features
I get
Using the default profile...
....
1 scenario (1 passed)
4 steps (4 passed)
0m0.071s
So my tests have passed but I can't see my features or scenarios. Is there a command line flag or something?
Yes, you can use -f pretty (or, equivalently, --format pretty).
I really like the html output too, especially when debugging my step definitions:
cucumber features -f html -o "path/to/some/file.html"
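If you want the pretty output by default, without passing the flag each time, you can set it in a cucumber.yml profile (a minimal sketch; adjust the features path to your project):
# cucumber.yml
default: --format pretty features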
Is there a cucumber command that will print out just the feature info and the scenario names?
I recently began a project and want to print out the Cucumber features and scenarios I wrote, to describe the scope of the project and get it confirmed by the client.
Do you mean cucumber -d?
$ cucumber -h
  -d, --dry-run    Invokes formatters without executing the steps.
                   This also omits the loading of your support/env.rb file if it exists.
                   Implies --no-snippets.
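So something along these lines should print the features and scenario names without running anything (the steps are still listed, just not executed; the output file name is only an example):
cucumber --dry-run --format pretty features/ > scope.txt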