Running JMeter using Docker

I am facing an issue while running JMeter in a Docker container. The script works fine when I run it through the GUI or CLI on my local machine, but when I execute the same script in the container it fails.
Below is the issue.
I am using a Beanshell PostProcessor to capture response cookies. Below is the code for the same:
props.put("MyCookie1","${COOKIE_one}");
props.put("MyCookie2","${COOKIE_two}");
props.put("MyCookie3","${COOKIE_three}");
These parameterized values work fine on my local machine (Windows 10), but when I run the same script in the container the parameterized values don't get resolved.
I am using the "alpine:3.12" base image for the container.
NOTE: The JMeter version on my local machine is 5.4.1 with Java 11. In the Docker container the JMeter version is 5.3 with Java 8. The API I am hitting is hosted on AWS Lambda.

You forgot the most important detail: your Dockerfile.
Blind shot: in order to be able to access cookies as COOKIE_one, etc., you need to add an extra property, CookieManager.save.cookies=true, either to the user.properties file or pass it to the JMeter startup script via the -J command-line argument, like:
./jmeter -JCookieManager.save.cookies=true -n -t test.jmx -l result.jtl
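If you go the user.properties route instead, the line to add is just this (the file lives in JMeter's bin directory):

```properties
# bin/user.properties - export response cookies as COOKIE_<name> variables
CookieManager.save.cookies=true
```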
Also according to JMeter Best Practices:
Since JMeter 3.1 you should be using JSR223 Test Elements and Groovy language for scripting
You should always be using the latest version of JMeter
So maybe it is worth considering migrating to Groovy; you will only need to amend your code from:
props.put("MyCookie1","${COOKIE_one}")
to
props.put("MyCookie1",vars.get("COOKIE_one"))
where vars stands for the JMeterVariables class instance; see Top 8 JMeter Java Classes You Should Be Using with Groovy for more information if needed.
And update your Dockerfile to use the latest stable version of JMeter
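Since the Dockerfile wasn't shared, here is a minimal sketch of what an Alpine-based image could look like with the property baked in. The JMeter version, the openjdk8-jre package, and the download URL are assumptions; substitute the current stable release:

```dockerfile
FROM alpine:3.12
# Java from the Alpine repositories; a newer JDK would also work
RUN apk add --no-cache openjdk8-jre curl

# Version is an assumption - check the JMeter site for the current stable release
ARG JMETER_VERSION=5.4.1
RUN curl -sL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz \
    | tar -xz -C /opt
ENV PATH="/opt/apache-jmeter-${JMETER_VERSION}/bin:${PATH}"

# Make response cookies available as ${COOKIE_<name>} variables
RUN echo "CookieManager.save.cookies=true" \
    >> /opt/apache-jmeter-${JMETER_VERSION}/bin/user.properties

ENTRYPOINT ["jmeter"]
```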

Related

Autoload/PHPUnit not found by PhpStorm with Docker Desktop + Docker Compose + WSL

My PHP project is stored in WSL, accessed by PhpStorm installed on Windows and running with Docker Desktop installed on Windows.
The project itself is totally fine, but running tests is not possible because PhpStorm cannot find the vendor autoload or phpunit.phar in the Test Framework configuration.
Setup:
Windows 10 with WSL2 Ubuntu 20.04 LTS
PhpStorm on Windows
Docker Desktop on Windows, Docker Compose files in WSL
Code in home folder in WSL (see following screens)
I read in some older threads that Docker Compose v2 needs to be enabled in Docker Desktop. It is:
Docker is configured inside PhpStorm and shows that the connection is successful (I know it works because things like Xdebug are working without any issues):
Notice that I configured a path mapping here for the project root.
in WSL: \\wsl$\Ubuntu\home\USERNAME\workspace\PROJECTNAME-web-docker
in Docker: /var/www/PROJECTNAME-web
I can see that those paths are correct by either logging into the Docker container or by checking the Service Tab of PhpStorm and inspecting files:
This is my CLI Interpreter using the docker-compose configuration:
It does not matter if I use the existing container or have it start a new one
PHP Version is always detected
And finally the error inside of Test Framework:
Here I tried different things:
use composer autoloader or phpunit.phar
it doesn't matter if I use a full path /var/www... or just vendor/...
tried different path mappings here
clicking on refresh shows this error in a small popup
Cannot parse PHPUnit version output: Could not open input file: /var/www/PROJECTNAME-web/SUBTOPIC/vendor/phpunit/phpunit/phpunit
autoload.php is definitely available and correct, phpunit is installed and available.
Maybe someone has a hint what is missing or wrong? Thanks!
EDIT:
How do I know that autoload is available or path mapping is correct?
I have Xdebug configured and running. When Xdebug stops in my code, I know that the path mapping is correct. The output of Debug -> Console for example shows stuff like this:
PHP Deprecated: YAML mapping driver is deprecated and will be removed in Doctrine ORM 3.0, please migrate to annotation or XML driver. in /var/www/PROJECTNAME-web/SUBTOPIC/vendor/doctrine/orm/lib/Doctrine/ORM/Mapping/Driver/YamlDriver.php on line 62
so I know the path mapping for Xdebug works, but it seems like the Test Framework config does not like it.

Is it possible to construct an automation framework with Selenium + Docker, without Selenium Grid, but with pytest's internal parallel execution system?

Currently, I have an automation framework that consists of the following stack:
Python
Pytest framework (and the pytest-xdist parallel mechanism)
Selenium
Allure report system
Jenkins
I'm going to add Docker containerization as well, to run the tests on a Windows server.
I'm running my tests using pytest's internal parallel run functionality (pytest-xdist) with 'workers'.
the run command looks like this:
os.system(r'pytest -v -s --alluredir="C:\AllureReports\Data" --html=report.html --developer=dmitriy --timeout=60 --self-contained-html --dist=loadfile -n=3 -m=regression')
As you can see, '-n=3' is the number of 'nodes' (xdist workers) the suite will be executed on. In other words, the regression suite will run in parallel across 3 workers.
Can you tell me if it is possible to avoid the Selenium Grid mechanism and instead use pytest's internal parallel run system, pytest-xdist, and if so, how?
I would appreciate relevant technical articles or explanations of how it can be done.
I have gone over this question - pytest-xdist vs selenium hub - but got only a broad overview, which is not enough.
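For reference, the usual Grid-free pattern is to give every xdist worker its own browser via a fixture: each worker is a separate process, so the drivers never clash. A minimal sketch, assuming selenium and pytest-xdist are installed (the headless-Chrome setup is an illustrative assumption, not part of the question):

```python
# conftest.py (sketch)
import os
import pytest

def current_worker() -> str:
    """pytest-xdist sets PYTEST_XDIST_WORKER to "gw0", "gw1", ... inside
    each worker process; the variable is unset in a plain, non-parallel run."""
    return os.environ.get("PYTEST_XDIST_WORKER", "master")

@pytest.fixture
def driver():
    # Imported lazily so merely collecting tests doesn't require a browser.
    from selenium import webdriver
    options = webdriver.ChromeOptions()
    options.add_argument("--headless")       # assumption: headless Chrome
    drv = webdriver.Chrome(options=options)  # one driver per worker, per test
    yield drv
    drv.quit()
```

Because each of the -n=3 workers runs in its own process with its own WebDriver, no Grid hub is needed; the trade-off is that all browsers run on the same machine or container, whereas Grid lets you spread them across hosts.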

Is IntelliJ's support for Dockerized Python environments compatible with Python running on a Windows container?

My Python project is very Windows-centric; we want the benefits of containers, but we can't give up Windows just yet.
I'd like to be able to use the Dockerized remote Python interpreter feature that comes with IntelliJ. This works flawlessly with Python running on a standard Linux container, but appears not to work at all for Python running on a Windows container.
I've built a new image based on a standard Microsoft Server core image. I've installed Miniconda, bootstrapped a Python environment and verified that I can start an interactive Python session from the command prompt.
Whenever I try to set this up I get an error message: "Can't retrieve image ID from build stream". This occurs at the moment when IntelliJ would normally have detected the Python interpreter and its installed libraries.
I also tried giving the full path for the interpreter: c:\miniconda\envs\htp\python.exe
I've never seen any mention that this works in the documentation, but nor have I seen any mention that it does not work. I totally accept that Windows Containers are an oddity, so it's entirely possible that IntelliJ's remote-Python feature was never tested on Python running in Windows containers.
So, has anybody got this feature working with Python running on a Windows container yet? Is there any reason to believe that it does or does not work?
Regrettably, it is not supported yet. Please vote for the feature request https://youtrack.jetbrains.com/issue/PY-45222 in order to increase its priority.

BigQuery CLI commands with Jenkins

I can run a BigQuery script interactively just fine:
But when I try with Jenkins on the same Windows machine, it fails:
Do I need to install the BigQuery SDK on the Jenkins localhost, or tinker with some PATH variables? :)
Make sure the directory where the bq executable is installed has been added to your PATH variable (that would be the PATH_TO/google-cloud-sdk/bin directory). But this is most likely already true, given that you're able to run commands from the C drive. Different users get different permissions, so also make sure the bq command is available to the user Jenkins is operating under.
I've had the same issue where I couldn't run bq commands using systemctl on Linux. To get around this error, instead of running the command as bq, use the full path to the executable: PATH_TO/google-cloud-sdk/bin/bq.
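Putting the two answers together, a Jenkins "Execute Windows batch command" build step that calls bq by its full path sidesteps the PATH question for the Jenkins service user entirely (PATH_TO is the same placeholder used in the answers; substitute your install directory):

```batch
REM Jenkins build step (sketch) - full path instead of relying on PATH
"PATH_TO\google-cloud-sdk\bin\bq.cmd" query --use_legacy_sql=false "SELECT 1"
```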

ZAP scan is not running SOAP injection test

I am passing the 90019 scanner (SOAP injection) to a ZAP script, but it is not running it, while it does run other rules such as OS Command Injection and SSI Server Side. I am running ZAP from a Docker container, and watching the output I noticed that these other rules correspond to particular ZAP plugins. So I am guessing I am missing a SOAP plugin in my environment. My question is: how can I install the plugin that corresponds to scanner 90019 in Docker, so that the script that runs the ZAP scan checks this rule? Many thanks. If there's something else that I am missing or more info is needed, please let me know.
The SOAP Scanner is included in this add-on: https://github.com/zaproxy/zap-extensions/wiki/HelpAddonsSoapSoap
This is included in the weekly Docker image but not in the stable one.
You can install it when you start ZAP in the docker container by adding the parameters:
-addoninstall soap
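For example, with the packaged scan scripts the startup parameter can be passed through via their -z option (the image name, scan script, and target URL below are placeholders to adapt):

```shell
# Sketch: run the weekly image and install the SOAP add-on at startup
docker run -t owasp/zap2docker-weekly zap-full-scan.py \
    -t https://example.com/your-soap-endpoint \
    -z "-addoninstall soap"
```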
You can also install add-ons using the ZAP API, but that's only worth doing if you are already using the API.
