How to override k6 CLI arguments from within the script?

I have a rather complicated setup which forces me to pass --duration and --vus to k6 CLI.
It ends up looking like
k6 run --vus 200 --duration 60s fixed-scenarios.js
Because of this my custom scenarios are being overridden by a default scenario.
Is there a way to prevent it from within the script?

In k6's order of precedence, CLI flags override everything else, including the options exported from the script:
import http from "k6/http";

export const options = {
  scenarios: {
    foo_scenario: {
      executor: "shared-iterations",
      vus: 1,
      iterations: 1,
      maxDuration: "10s",
    },
  },
};

export default function () {
  http.get("https://stackoverflow.com");
}
If you run it with just k6 run script.js, it will make one request.
If you override with CLI flags, like k6 run script.js --vus 10 --iterations 10, it'll make ten requests.
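One possible workaround, if you control how the values are passed: feed them in through environment variables with k6's -e flag instead of --vus/--duration, so the scenario definition stays in control. A minimal sketch (the VUS and DURATION variable names are hypothetical):

// run as: k6 run -e VUS=200 -e DURATION=60s fixed-scenarios.js
import http from "k6/http";

export const options = {
  scenarios: {
    fixed_scenario: {
      executor: "constant-vus",
      // __ENV exposes values passed with -e; fall back to small defaults
      vus: Number(__ENV.VUS) || 1,
      duration: __ENV.DURATION || "10s",
    },
  },
};

export default function () {
  http.get("https://stackoverflow.com");
}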


Playwright - Test against different environments and different variables

I'm looking to use Playwright to test against a web page.
The system I'm working on has 4 different environments that we need to deploy against;
for example, the test URLs may be:
www.test1.com
www.test2.com
www.test3.com
www.test4.com
The first question is: how do I target the different environments? In my Playwright config I had a baseURL, but I need to override that.
In addition each environment has different login credentials, how can I create and override these as parameters per environment?
Since Playwright v1.13.0, there is a baseURL option available. You could use it like this:
In your config file (e.g. playwright.config.ts), you can have this:
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  use: {
    baseURL: process.env.URL,
  },
};

export default config;
Now in the package.json file, you can set the environment variables in the test commands for the various environments in scripts, like this:
...
"scripts": {
  "start": "node app.js",
  "test1": "URL=www.test1.com mocha --reporter spec",
  "test2": "URL=www.test2.com mocha --reporter spec",
  .
  .
},
...
Similarly, you can set environment variables for the login credentials and pass them in the scripts the same way the URL is passed.
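For instance, a minimal sketch, assuming hypothetical USERNAME and PASSWORD variables set in the same scripts entries, and hypothetical selectors on the login page:

// package.json (hypothetical):
//   "test1": "URL=www.test1.com USERNAME=user1 PASSWORD=secret1 playwright test"
import { test } from '@playwright/test';

test('logs in against the current environment', async ({ page }) => {
  await page.goto('/login'); // resolved against baseURL from the config
  await page.fill('#username', process.env.USERNAME);
  await page.fill('#password', process.env.PASSWORD);
  await page.click('button[type=submit]');
});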
Another approach is to use a Bash script. I use something like the following to run tests across environments, to ensure that my Playwright tests will work in all the environments they're run in:
#!/bin/bash
echo "Running tests against env 1";
ENV_URL=https://www.env1.com SOMESERVICE_ENV_URL=http://www.env1.com/scholarship npx playwright test $1;
echo "Running tests against env 2"
ENV_URL=https://env2.com SOMESERVICE_ENV_URL=http://env2.com:4008 npx playwright test $1;
echo "Running tests against env 3";
ENV_URL=http://localhost:3000 SOMESERVICE_ENV_URL=http://localhost:4008 npx playwright test $1;
And then run with ./myScript.sh myTest.test.ts
(In a Bash script, the first argument passed in is available via $1.)

How do you properly pass a command to a container when using "azure-arm-containerinstance" from the Azure Node SDK?

Just looking for some guidance on how to properly invoke a command when a container starts, when creating it via the azure-arm-containerinstance package. There is very little documentation on this specific part, and I wasn't able to find any examples on the internet.
return client.containerGroups
  .beginCreateOrUpdate(process.env.AZURE_RESOURCE_GROUP, containerInstanceName, {
    tags: ['server'],
    location: process.env.AZURE_INSTANCE_LOCATION,
    containers: [
      {
        image: process.env.CONTAINER_IMAGE,
        name: containerInstanceName,
        command: ["./some-executable", "?Type=Fall?"],
        ports: [
          {
            port: 1111,
            protocol: 'UDP',
          },
        ],
        resources: {
          requests: {
            cpu: Number(process.env.INSTANCE_CPU),
            memoryInGB: Number(process.env.INSTANCE_MEMORY),
          },
        },
      },
    ],
    imageRegistryCredentials: [
      {
        server: process.env.CONTAINER_REGISTRY_SERVER,
        username: process.env.CONTAINER_REGISTRY_USERNAME,
        password: process.env.CONTAINER_REGISTRY_PASSWORD,
      },
    ],
  });
Specifically, is this part below correct? Just an array of strings? Are there any good examples anywhere? (I tried both Google and Bing.) Is this the equivalent of Docker's CMD ["command","argument"]?
command: ["./some-executable", "?Type=Fall?"],
Most of what you have done is right, but there are a couple of points to pay attention to.
One is that the command property will overwrite the CMD setting in the Dockerfile. So if the command does not keep running, the container will end up in a terminated state when the command finishes executing.
Second, the command property is an array of strings, and they execute like a shell script, so I suggest you set it like this:
command: ['/bin/bash','-c','echo $PATH'],
You'd better keep the first two strings unchanged and only change what comes after them.
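Applied to the snippet in the question, that would look something like this (a sketch; the quoting of the ?Type=Fall? argument is an assumption about what the executable expects):

// run the executable through a shell instead of as a bare exec array
command: ['/bin/bash', '-c', './some-executable "?Type=Fall?"'],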

Jest did not exit one second after the test run has completed (with Nuxt and Jest)

For unknown reasons, my Jest tests seem to hang at the end of my CI run via Travis.
The Travis logs say the following:
Test Suites: 5 passed, 5 total
Tests: 31 passed, 31 total
Snapshots: 0 total
Time: 21.993s
Jest did not exit one second after the test run has completed.
This usually means that there are asynchronous operations that weren't stopped in your tests. Consider running Jest with --detectOpenHandles to troubleshoot this issue.
Note that --detectOpenHandles does not display anything.
As you can see, my tests pass but do not exit, even though I do the following:
describe('[INTEGRATION] Index.test.js', () => {
  beforeAll(async () => {
    const rootDir = resolve(__dirname, '../..');
    let config = {};
    // ... config
    nuxt = new Nuxt(config);
    new Builder(nuxt).build();
    await nuxt.listen(3000, 'localhost');
    homePage = await nuxt.renderAndGetWindow('http://localhost:3000/');
  }, 30000);

  // This is called after every suite (even during the CI process)
  afterAll(() => {
    nuxt.close();
  });

  // ... my tests
});
My tests work fine locally but only do this during the CI process via Travis.
Here is the content of my Jest config file:
module.exports = {
  verbose: true,
  moduleFileExtensions: ['js', 'vue', 'json'],
  transform: {
    '^.+\\.js$': 'babel-jest',
    '.*\\.(vue)$': 'jest-vue-preprocessor',
  },
  setupTestFrameworkScriptFile: './jest.setup.js',
  silent: true,
};
With jest.setup.js containing jest.setTimeout(30000);.
Finally, here is my .travis.yml config file:
language: 'node_js'
node_js: '8'
cache:
  directories:
    - 'node_modules'
before_script:
  - npm run build
  - npm run lint
  - npm install
  - npm run generate
What could be causing this problem? The timeout shouldn't be the cause, as it is needed to run all my integration tests, and I close my Nuxt session after my integration test suites.
Nothing major has changed between yesterday, when it worked, and today.
I think you may need to make sure you wait for the nuxt.close() call to resolve. It looks like close is asynchronous and returns a promise that resolves when the connection is actually closed, so Jest probably finishes up before that asynchronous operation does. And since it's really a timing issue, the CI machine may run things slightly differently, causing the close call to take longer than it does on your local machine.
Try changing your afterAll to something like this:
afterAll(async () => {
  await nuxt.close();
});

How to get started with dockerode

I am planning on running my app in Docker. I want to dynamically start, stop, build, run commands, etc. on Docker containers. I found a tool named dockerode; here is the project repo. The project has docs, but I don't understand them very well, and I would like to clarify a few things. This is how to create and start a container:
docker.createContainer({ Image: 'ubuntu', Cmd: ['/bin/bash'], name: 'ubuntu-test' }, function (err, container) {
  container.start(function (err, data) {
    //...
  });
});
Is it possible to do RUN apt-get update like in a Dockerfile, or ADD /path/host /path/docker, during a build? How do I move my app into the container after the build?
Let's look at this code:
// tty: true
docker.createContainer({ /*...*/ Tty: true /*...*/ }, function (err, container) {
  /* ... */
  container.attach({ stream: true, stdout: true, stderr: true }, function (err, stream) {
    stream.pipe(process.stdout);
  });
  /* ... */
});
How can I find out which params I can put in { /*...*/ Tty: true /*...*/ }?
Has anyone else tried this package? Please help me get started.
Dockerode is just a Node wrapper for the Docker API. You can find all the params you can use for each command in the API docs.
For example, docker.createContainer will call POST /containers/create (docs are here: https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#/create-a-container).
Check the files in the lib folder of the dockerode repo to see which API command each dockerode method wraps.
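As for the build question: RUN and ADD steps live in a Dockerfile, and dockerode can trigger the build through the API's build endpoint. A minimal sketch, assuming a hypothetical ./app directory containing a Dockerfile and your app sources:

const Docker = require('dockerode');
const docker = new Docker(); // defaults to the local Docker socket

// dockerode tars the listed files and POSTs them as the build context
docker.buildImage(
  { context: './app', src: ['Dockerfile', 'server.js'] },
  { t: 'myapp:latest' }, // image tag
  function (err, stream) {
    if (err) return console.error(err);
    stream.pipe(process.stdout); // stream the build output
  }
);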

Parallel run with jenkins

I have several hundred files to run tests on (each test takes a few minutes).
Running them sequentially is not acceptable, and neither is running them all at once, so I am looking for something like a producer-consumer setup.
I tried a pipeline job with the parallel step, the following way:
def files = findFiles glob: 'test_files/*'
def branches = [:]
files.each {
    def test_command = "./test ${it}"
    branches["${it}"] = { sh test_command }
}
stage name: 'run', concurrency: 2
parallel branches
Problem:
All the tasks are launched at the same time (OOM and all the fun).
This doesn't have the same introspection as the Jenkins parallel step, but since parallel doesn't seem to support a fixed pool, you can use xargs to achieve the same result:
def files = findFiles glob: 'test_files/*'
def branches = [:]
// there's probably a more efficient way to generate the list of commands
files.each {
    sh "echo './test ${it}' >> tests.txt"
}
sh 'cat tests.txt | xargs -L 1 -I {} -P 2 bash -c "{}"'
The -P argument is the one that specifies that a fixed number of processes (2, or N) should always be running. Other tools, like GNU Parallel, offer even more tuning over how many processes should be used.
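For instance (a sketch, reusing the tests.txt file generated above), GNU Parallel's -j flag plays the same role as xargs -P, running each line of input as a command with at most two jobs at a time:

sh 'parallel -j 2 < tests.txt'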
You can also try the lock step from the Lockable Resources plugin, or the node step targeting a fixed number of executors. However, this seems like too much overhead to me unless your individual tests already take tens of seconds each.
