Dockerized JMeter not sending requests to Dockerized Microservice

I have a small Spring Boot API running in Docker. Shown below is the command I used to start the container:
docker run -d --rm --name factorialorialContainer --memory=$2 --cpus=$3 -p 8080:8080 -e JAVA_OPTIONS="$(cat /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/flags.txt)" suleka96/factorial:latest
Then I have a dockerized JMeter which I start with the command below:
export volume_path=/Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/jmeter_resource && export jmeter_path=/jmeter && docker run --rm --name jmeterContainer --memory='512m' --cpus=2 -e JAVA_OPTS="-Xms512 -Xmx512" --volume ${volume_path}:${jmeter_path} egaillardon/jmeter --nongui -t factorial.jmx -l jmeter_results.jtl -q user.properties
but all the tests fail and no requests reach the API (JMeter's CLI output shows every sample failing).
This is the test configuration of the request:
Protocol: http
Server: localhost
Port: 8080
Method: GET
Path: /api/factorial
This is what the complete bash file looks like:
#!/bin/bash
cd /Users/sulekahelmini/Documents/fyp/fyp_work/demo/target && docker build . -t suleka96/factorial
docker run -d --rm --name factorialorialContainer --memory='512m' --cpus=2 -p 8080:8080 -e JAVA_OPTIONS="$(cat /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/flags_base.txt)" suleka96/factorial:latest
sleep 15
#run test
export volume_path=/Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/jmeter_resource && export jmeter_path=/jmeter && docker run --rm --name jmeterContainer --memory='512m' --cpus=2 -e JAVA_OPTS="-Xms512 -Xmx512" --volume ${volume_path}:${jmeter_path} egaillardon/jmeter --nongui -t factorial.jmx -l jmeter_results.jtl -q user.properties
sleep 15
#jtl split
java -jar /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/jtl-splitter-0.4.6-SNAPSHOT.jar -f /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/jmeter_resource/jmeter_results.jtl -s -t 1;
docker stop factorialorialContainer
docker stop jmeterContainer
What am I doing wrong? How can I fix this?

You're doing absolutely everything wrong.
When it comes to Spring Boot, even a "small" API is not small at all; if you want something really lightweight, consider e.g. Jersey.
I fail to see why you need containers at all; in your situation they don't add any value, they only consume resources.
You're running the application under test and the load generator on the same physical machine. Both can be very resource-intensive under high load, and you won't be able to tell for sure where the bottleneck is.
If you still want to ignore the previous two points and proceed: you're using localhost in the JMeter container, and there is nothing deployed on port 8080 in that container. You need to run the following command:
docker inspect factorialorialContainer
You will see a line which looks like:
"IPAddress": "xxx.xxx.xxx.xxx",
You will need to take this IP address from the docker inspect output and replace localhost with it in JMeter's HTTP Request sampler.
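If you want to script that lookup, a Go template filter can print just the address (a sketch; it assumes the container sits on the default bridge network):
# print only the container's IP address
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' factorialorialContainer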
References:
Docker Network
JMeter Distributed Testing with Docker

Related

How to pass erlang.cookie in "docker run" after RABBITMQ_ERLANG_COOKIE got deprecated

I want to start three RabbitMQ containers that will be joined together in a cluster. I want to keep it simple and not define complex Dockerfiles with specific volumes.
This is what I am doing right now:
docker network create rabbits
docker run -d --rm --net rabbits --hostname rabbit-1 --name rabbit-1 -p 8081:15672 -e RABBITMQ_ERLANG_COOKIE=ASDF rabbitmq:3.8-management
docker run -d --rm --net rabbits --hostname rabbit-2 --name rabbit-2 -p 8082:15672 -e RABBITMQ_ERLANG_COOKIE=ASDF rabbitmq:3.8-management
docker run -d --rm --net rabbits --hostname rabbit-3 --name rabbit-3 -p 8083:15672 -e RABBITMQ_ERLANG_COOKIE=ASDF rabbitmq:3.8-management
When I then try to tell the nodes to join each other with the following commands, I get an error message:
docker exec -it rabbit-2 rabbitmqctl stop_app
docker exec -it rabbit-2 rabbitmqctl reset
docker exec -it rabbit-2 rabbitmqctl join_cluster rabbit#rabbit-1
docker exec -it rabbit-2 rabbitmqctl start_app
docker exec -it rabbit-2 rabbitmqctl cluster_status
This results in:
RABBITMQ_ERLANG_COOKIE env variable support is deprecated and will be REMOVED in a future version. Use the $HOME/.erlang.cookie file or the --erlang-cookie switch instead.
However, I do not know how to pass this switch. When I add it to the docker run command, it does not work. So I thought maybe to add it after the join_cluster command, but then the cookie is already set.
How do I need to change the docker run command?
In response to your question (and others about RABBITMQ_ERLANG_COOKIE), I opened this issue:
https://github.com/rabbitmq/rabbitmq-server/issues/7262
Currently you should use the environment variable and disregard the warning.
The best practice is to use docker compose and your own image based on the official RabbitMQ images:
https://github.com/lukebakken/docker-rabbitmq-cluster/blob/main/docker-compose.yml
https://github.com/lukebakken/docker-rabbitmq-cluster/blob/main/rmq/Dockerfile
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
The RABBITMQ_ERLANG_COOKIE environment variable is deprecated, but you can still set the Erlang cookie value by passing it with the -e option in the docker run command. Here's an example:
docker run -d --name rabbitmq -e RABBITMQ_ERLANG_COOKIE='your_cookie_value' rabbitmq:3
Alternatively, you can store the Erlang cookie in a file and mount it as a volume in your container. For example:
Create a file named erlang.cookie with your desired cookie value
echo 'your_cookie_value' > erlang.cookie
Start the RabbitMQ container, mounting the erlang.cookie file
docker run -d --name rabbitmq -v $(pwd)/erlang.cookie:/var/lib/rabbitmq/.erlang.cookie rabbitmq:3
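One caveat worth mentioning: Erlang is strict about the cookie file's permissions, so if the node refuses to start after mounting the file, restricting it to its owner usually helps (an assumption to double-check against the image's documentation):
# the cookie file should be readable and writable by its owner only
chmod 600 erlang.cookie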

What does it mean when Docker is run in interactive and detached modes simultaneously

I'm new to Docker and came across this confusing (to me) command in one of the Docker online manuals (https://docs.docker.com/storage/bind-mounts/):
$ docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app,readonly \
nginx:latest
What I found confusing was the use of both the -it flag and the -d flag. I thought -d means to run the container in the background, but -it means to allow the user to interact with the container via the current shell. What does it mean that both flags are present? What am I not understanding here?
The -i and -t flags influence how stdin and stdout are connected, even in the presence of the -d flag. Furthermore, you can always attach to a container in the future using the docker attach command.
Consider: If I try to start an interactive shell without passing -i...
$ docker run -d --name demo alpine sh
...the container will exit immediately, because stdin is closed. If I want that detached shell to keep running, I need:
$ docker run -itd --name demo alpine sh
This allows me to attach to the container in the future and interact with the shell:
$ docker attach demo
/ #
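One related detail: after docker attach you can usually detach again without stopping the container by pressing Ctrl-p followed by Ctrl-q (this assumes the default detach key sequence, i.e. --detach-keys has not been changed):
$ docker attach demo
/ #    <-- press Ctrl-p, then Ctrl-q here to detach and leave the shell running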

Docker-compose pass stdout from a service to stdin in another service

I'm not sure whether what I'm looking for is possible or not... I'm a newbie in the docker-compose world, and I've read a lot of documentation and posts, but I wasn't able to find a solution.
I need to pass the stdout of a service defined in docker-compose to the stdin of another service. So the output of ServiceA will be the input of ServiceB.
Is it possible?
I see the stdin_open option, but I cannot understand how to use the stdout of the other service as input.
Any suggestion?
Thanks
You can't do this in Docker easily.
Container processes' stdin and stdout aren't usually used for much. Most often the stdout receives log messages that can get reviewed later, and containers actually communicate through network sockets. (A container would typically run Apache but not grep.)
Docker doesn't have a native cross-container pipe beyond the networking setup. If you're launching containers with docker run from the shell, you can use an ordinary shell pipe there (note the -i on the receiving container so that its stdin is actually connected):
sudo sh -c 'docker run image-a | docker run -i image-b'
If it's practical to run both processes in the same container, you can use a shell pipe as the main container command:
docker run image sh -c 'process_a | process_b'
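For instance, a trivial end-to-end check of that single-container pattern (using the stock busybox image purely for illustration):
docker run --rm busybox sh -c 'echo hello world | wc -w'
# prints 2: the output of echo is piped straight into wc inside the same container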
A differently hacky approach is to use a tool like Netcat to bridge between "stdin" and a network port. For example, consider a "server":
#!/bin/sh
# server.sh
# (Note, this uses busybox nc syntax)
# nc listens on port 12345 and feeds whatever arrives to a process
# that reads from stdin (cat here), which writes the data to out.txt
nc -l -p 12345 | cat > out.txt
And a matching "client":
#!/bin/sh
# client.sh
# a process that writes to stdout (cat in.txt here), piped into nc,
# which sends the data to the server named in $1
cat in.txt | nc "$1" 12345
Build these into an image
FROM busybox
COPY client.sh server.sh /bin/
EXPOSE 12345
WORKDIR /data
CMD ["server.sh"]
Now run both containers:
docker network create testnet
docker build -t testimg .
echo hello world > in.txt
docker run -d -v $PWD:/data --net testnet --name server testimg \
server.sh
docker run -d -v $PWD:/data --net testnet --name client testimg \
client.sh server
docker wait client
docker wait server
cat out.txt
A more robust path would be to wrap the server process in a simple HTTP server that accepted an HTTP POST on some path and launched a subprocess to handle the request; then you'd have a single long-running server process instead of having to re-launch it for each request. The client would use a tool like curl or any other HTTP client.
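For illustration, the client half of that approach could be as simple as one curl call (the port and the /process path here are made up, and the HTTP wrapper around the server process is not shown):
# POST the input file to the server's hypothetical HTTP endpoint and save the response body
curl -X POST --data-binary @in.txt -o out.txt http://server:8080/process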

Expose an application that runs inside a Docker container

I'm trying to expose a Node.js application that runs inside a Docker container:
docker run -p 3005:3005 -p 5858:5858 -i -t -v /usuarios centos-nodejs:1.0 /bin/bash
After that command, I access my application:
cd usuarios
node index
and then the application is running inside the docker container.
How can I expose a port so that I can access something like localhost:5858/my_api_here in my browser?
It seems the Node.js application is bound to localhost:5858 only inside the container. That's why you cannot access it via 127.0.0.1:5858 from the host. You need to find a way to bind it to 0.0.0.0:5858; after that you can access it at 127.0.0.1:5858 from the host.
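If you're not sure which address the application is actually listening on, you can check from inside the running container (a sketch; it assumes netstat or ss exists in the image, and <container_name> is whatever docker ps reports):
# list listening TCP sockets inside the container
docker exec -it <container_name> sh -c 'netstat -tln || ss -tln'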
Using the command below, it works:
docker run -p 3005:3005 -p 5858:5858 -i -t -v C:\Users\lgermano\Documents\Repositorios:/opt/rede/workspace centos-nodejs:1.0 /bin/bash

Docker: How to run a service and a terminal in one command?

I'm running an Apache server like this:
docker run -d -p 80:80 php:apache /usr/sbin/apache2ctl -D FOREGROUND
Then I determine the name of the container with
docker ps
and execute an interactive shell on the container with
docker exec -ti hungry_fermi bash
It works well, but I would like to do the same in one command. I've tried
docker run -ti -d -p 80:80 php:apache /bin/bash -c 'bash; apache2ctl -D FOREGROUND'
The problem is that I don't get a terminal, and the command returns immediately.
You're trying this:
docker run -ti -d -p 80:80 php:apache \
/bin/bash -c 'bash; apache2ctl -D FOREGROUND'
There are several problems here. First, you're using the -d command line option, which asks the docker client to detach and leave the container running. You will never get an interactive shell when using -d.
Secondly, your command -- bash; apache2ctl -D FOREGROUND -- would run bash, wait for bash to exit, then run httpd. You can instead do something like this:
docker run -ti -p 80:80 php:apache \
/bin/bash -c 'apachectl start; bash'
This would start Apache in the background (because there is no -D FOREGROUND), and then start bash...but I'm not really clear why you would want to do this, because now if you were to exit your shell the container would exit as well (taking Apache with it).
I think you are much better off simply starting Apache the way you are now and using docker exec to get a shell inside the container.
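For completeness, if you really want a single shell line, you can chain the two steps: start Apache detached, then immediately open a shell in the same container (the container name web here is arbitrary):
# start Apache in the background, then attach an interactive shell to the same container
docker run -d --name web -p 80:80 php:apache && docker exec -it web bash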
