How do I run Locust in a distributed Docker configuration? - docker

I'm working on running Locust with multiple workers in a Fargate environment, but first I wanted to see what it looks like in a simple distributed Docker setup. I took the docker-compose.yml below from the website and modified it so that everything would run on localhost. I can start Locust just fine with docker-compose up --scale worker=4, and the four workers and the master come up, but when I try to run a test via the Web UI, I get:
Attaching to locust-distributed-docker-master-1, locust-distributed-docker-worker-1, locust-distributed-docker-worker-2, locust-distributed-docker-worker-3, locust-distributed-docker-worker-4
locust-distributed-docker-worker-1 | [2021-11-17 19:01:19,719] be1b465ae5c7/INFO/locust.main: Starting Locust 2.5.0
locust-distributed-docker-master-1 | [2021-11-17 19:01:19,956] 8769b6dcd3ed/INFO/locust.main: Starting web interface at http://0.0.0.0:8089 (accepting connections from all network interfaces)
locust-distributed-docker-master-1 | [2021-11-17 19:01:20,016] 8769b6dcd3ed/INFO/locust.main: Starting Locust 2.5.0
locust-distributed-docker-worker-4 | [2021-11-17 19:01:20,144] bd481d228ef6/INFO/locust.main: Starting Locust 2.5.0
locust-distributed-docker-worker-3 | [2021-11-17 19:01:20,716] 26af3d44e1c9/INFO/locust.main: Starting Locust 2.5.0
locust-distributed-docker-worker-2 | [2021-11-17 19:01:21,122] d536c752bdee/INFO/locust.main: Starting Locust 2.5.0
locust-distributed-docker-master-1 | [2021-11-17 19:01:42,998] 8769b6dcd3ed/WARNING/locust.runners: You are running in distributed mode but have no worker servers connected. Please connect workers prior to swarming.
The whole point of this exercise is to watch the console to see how the workers interact with the master, nothing else.
docker-compose.yml:
version: '3'
services:
  master:
    image: locustio/locust
    ports:
      - "8089:8089"
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --master
  worker:
    image: locustio/locust
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --worker --master-host 127.0.0.1

To run the example this way, with the workers on the same machine as the master, it's easiest to follow the example exactly and not replace the master hostname with an IP address. The trouble is that networking between Docker containers doesn't work like regular networking between hosts: inside a container, 127.0.0.1 refers to that container itself, so with --master-host 127.0.0.1 each worker is looking for a master on its own loopback interface.
In the example docker-compose.yml, --master-host master refers to the master service by its service name. Docker Compose puts all of a project's services on a shared bridge network and provides DNS for the service names, so the workers can resolve master and discover the master container automatically. When you actually deploy the workers on separate hosts (for example in Fargate), you may need a different setup that passes an explicit address for the master.
So just follow the example directly and define your worker service like this:
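If you want to see that name resolution in action once the stack is up, you can check it from one of the worker containers; a quick sanity check (using the Python interpreter that ships in the locustio/locust image) might be:
docker compose exec worker python -c "import socket; print(socket.gethostbyname('master'))"
That should print the internal address of the master container on the Compose network.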
  worker:
    image: locustio/locust
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --worker --master-host master
That should result in output like this:
% docker compose up --scale worker=4
[+] Running 5/0
⠿ Container docker-compose_master_1 Created 0.0s
⠿ Container docker-compose_worker_2 Created 0.0s
⠿ Container docker-compose_worker_3 Created 0.0s
⠿ Container docker-compose_worker_1 Created 0.0s
⠿ Container docker-compose_worker_4 Created 0.0s
Attaching to master_1, worker_1, worker_2, worker_3, worker_4
worker_3 | [2021-11-18 16:32:49,911] d90df67c6a69/INFO/locust.main: Starting Locust 2.5.0
worker_4 | [2021-11-18 16:32:50,062] 112a60412b1e/INFO/locust.main: Starting Locust 2.5.0
master_1 | [2021-11-18 16:32:50,224] 859d07f8570b/INFO/locust.main: Starting web interface at http://0.0.0.0:8089 (accepting connections from all network interfaces)
worker_2 | [2021-11-18 16:32:50,233] 56ffce9d4448/INFO/locust.main: Starting Locust 2.5.0
master_1 | [2021-11-18 16:32:50,238] 859d07f8570b/INFO/locust.main: Starting Locust 2.5.0
master_1 | [2021-11-18 16:32:50,239] 859d07f8570b/INFO/locust.runners: Client '56ffce9d4448_dfda9f3bcff742909af80b63d7866714' reported as ready. Currently 1 clients ready to swarm.
master_1 | [2021-11-18 16:32:50,249] 859d07f8570b/INFO/locust.runners: Client '112a60412b1e_49c9e2df265d4fd7bc0f6554a76e66c9' reported as ready. Currently 2 clients ready to swarm.
worker_1 | [2021-11-18 16:32:50,256] 988707b23133/INFO/locust.main: Starting Locust 2.5.0
master_1 | [2021-11-18 16:32:50,259] 859d07f8570b/INFO/locust.runners: Client '988707b23133_88ac7446afd843a5ae7a20dceaed9ea4' reported as ready. Currently 3 clients ready to swarm.
master_1 | [2021-11-18 16:32:50,336] 859d07f8570b/INFO/locust.runners: Client 'd90df67c6a69_e432779d02f94947abb992ff1043eb0e' reported as ready. Currently 4 clients ready to swarm.
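When the workers eventually run on separate hosts, as in your Fargate setup, the service-name shortcut no longer applies and each worker needs an explicit address for the master. A rough sketch of that worker configuration, assuming a hypothetical master address of 10.0.0.10 and Locust's default master port 5557 (which must be reachable from the workers):
worker:
  image: locustio/locust
  volumes:
    - ./:/mnt/locust
  command: -f /mnt/locust/locustfile.py --worker --master-host 10.0.0.10 --master-port 5557
The master side stays the same, apart from making port 5557 reachable to the workers.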

Related

Connect the Cassandra container to application web container failed - Error: 202 Connecting to Node

So, I created two Docker images and I want to connect one to the other with Docker Compose. The first image is Cassandra 3.11.11 (from the official Docker Hub) and the other I built myself with Tomcat 9.0.54 and my Spring Boot application.
I ran the docker-compose.yml below to connect the two containers, where cassandra:latest is the Cassandra image and centos7-tomcat9-myapp is my web app's image.
version: '3'
services:
  casandra:
    image: cassandra:latest
  myapp:
    image: centos7-tomcat9-myapp
    depends_on:
      - casandra
    environment:
      - CASSANDRA_HOST=cassandra
I ran this command line to start the web app's image: docker run -it --rm --name fe3c2f120e01 -p 8888:8080 centos7-tomcat9-app
In the console log, Spring Boot shows me the error below. It happened because the myapp container could not connect to the Cassandra container.
2021-10-15 15:12:14.240 WARN 1 --- [ s0-admin-1]
c.d.o.d.i.c.control.ControlConnection : [s0] Error connecting to
Node(endPoint=127.0.0.1:9042, hostId=null, hashCode=47889c49), trying
next node (ConnectionInitException: [s0|control|connecting...]
Protocol initialization request, step 1 (OPTIONS): failed to send
request (io.netty.channel.StacklessClosedChannelException))
What am I doing wrong?
EDIT
This is the nodetool status output for the Cassandra container:
[root@GDBDEV04 cassandradb]# docker exec 552d359d177e nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 172.18.0.3 84.76 KiB 16 100.0% 685b6e0a-13c2-4d41-ba99-f3b0fa94477c rack1
EDIT 2
I need to connect the Cassandra DB container with the web application container; this is different from connecting microservices. I tried changing the 127.0.0.0 (inside cassandra.yaml) to 0.0.0.0 (only as a test) and the error persists. I think something is missing in my docker-compose.yml for sure, but I don't know what.
Finally I found the error. In my case, I needed to fix the docker-compose.yml file by adding the Cassandra and Tomcat ports. And in my application.properties (Spring Boot config file), I changed the cluster name.
Docker-compose.yml:
version: '3'
services:
  cassandra:
    image: cassandra:latest
    ports:
      - "9044:9042"
  myapp:
    image: centos7-tomcat9-myapp
    ports:
      - "8086:8080"
    depends_on:
      - cassandra
    environment:
      - CASSANDRA_HOST=cassandra
application.properties:
# CASSANDRA (CassandraProperties)
cassandra.cluster = Test Cluster
cassandra.contactpoints=${CASSANDRA_HOST}
This question helped me to resolve my problem: Accessing docker container mysql databases
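For completeness, a quick way to confirm the two containers can actually reach each other over the Compose network is to bring the stack up and resolve the service name from inside the app container. A small sketch, assuming the fixed compose file above and that the CentOS-based image provides getent:
docker-compose up -d
docker-compose exec myapp getent hosts cassandra
The second command should print the internal address of the Cassandra container; the application then connects to cassandra:9042 over that network, while the published 9044 port is only needed for access from the host.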

Cannot use docker-compose with overlay network

I'm pretty baffled what's going on here, but I've narrowed it down to a very small test case. Here's my docker-compose file:
version: "3.7"
networks:
cl_net_overlay:
driver: overlay
services:
redis:
image: "redis:alpine"
networks:
- cl_net_overlay
The cl_net_overlay network doesn't exist. When I run this with:
docker-compose up
It stalls for a little while, then says:
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Creating network "tmp_cl_net_overlay" with driver "overlay"
Recreating tmp_redis_1 ... error
ERROR: for tmp_redis_1 Cannot start service redis: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded
ERROR: for redis Cannot start service redis: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded
ERROR: Encountered errors while bringing up the project.
This file was working fine for me on my previous laptop. My docker and docker-compose should be up to date since this is a brand new laptop. Is there some piece of the puzzle I'm missing?
05:01:11::mlissner#gabbro::/tmp
↪ docker --version
Docker version 19.03.1, build 74b1e89
05:01:57::mlissner#gabbro::/tmp
↪ docker-compose --version
docker-compose version 1.24.1, build 4667896b
Any ideas what's going on here? I've been trying to get it to work all day and I'm feeling a little like I'm losing my mind.
Small follow up. The message says:
make sure your network options are correct and check manager logs
I have no idea how to check the manager logs. That might be a useful first step?
Another follow up, per comments. If I try to deploy this I get no logs and it's unable to start up:
05:44:32::mlissner#gabbro::~/Programming/courtlistener/docker/courtlistener
↪ docker stack deploy --compose-file /tmp/docker-compose.yml test2
Creating network test2_cl_net_overlay2
Creating service test2_redis
05:44:50::mlissner#gabbro::~/Programming/courtlistener/docker/courtlistener
↪ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
5y7o01o5mifn test2_redis replicated 0/1 redis:alpine
05:44:57::mlissner#gabbro::~/Programming/courtlistener/docker/courtlistener
↪ docker service ps 5y
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
0kbph0ie8qth test2_redis.1 redis:alpine gabbro Ready Rejected 4 seconds ago "mkdir /var/lib/docker: read-o…"
inr81c3r4un7 \_ test2_redis.1 redis:alpine gabbro Shutdown Rejected 9 seconds ago "mkdir /var/lib/docker: read-o…"
tl1h6dp90ur2 \_ test2_redis.1 redis:alpine gabbro Shutdown Rejected 14 seconds ago "mkdir /var/lib/docker: read-o…"
jacv2yvkspix \_ test2_redis.1 redis:alpine gabbro Shutdown Rejected 19 seconds ago "mkdir /var/lib/docker: read-o…"
7cm6e8snf517 \_ test2_redis.1 redis:alpine gabbro Shutdown Rejected 19 seconds ago "mkdir /var/lib/docker: read-o…"
Another idea: Running as root. Same issue.
Do you have the right plugins (see more below, in the docker info output)?
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
It works on:
$ docker swarm init
$ docker-compose up
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Creating network "stackoverflow-57701373_cl_net_overlay" with driver "overlay"
Pulling redis (redis:alpine)...
alpine: Pulling from library/redis
9d48c3bd43c5: Pull complete
(...)
redis_1 | 1:M 29 Aug 2019 01:27:31.969 * Ready to accept connection
When:
$ docker --version
Docker version 19.03.1-ce, build 74b1e89e8a
and info:
$ docker info
Client:
Debug Mode: false
Server:
(...)
Server Version: 19.03.1-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: ff5mogx0ph4pgmwm2zrbhmjb4
Is Manager: true
ClusterID: vloixv7g75jflw5i1k81neul1
Managers: 1
Nodes: 1
(...)
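In short, the overlay driver only works once the engine is part of a swarm, which is what docker swarm init sets up. A minimal check-and-fix sequence along those lines (a sketch for a single-node setup) might be:
docker info | grep Swarm          # should report "Swarm: active"
docker info | grep "Network:"     # the plugin list should include "overlay"
docker swarm init                 # if swarm is inactive, make this node a manager
docker-compose up                 # Compose can now create the overlay network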

Spring Boot tries to connect to Mongo localhost

I have a Spring Boot 2.x project using Mongo. I am running this via Docker (using Compose locally) and Kubernetes. I am trying to connect my service to a Mongo server. The confusing part is that for development I am using a local instance of Mongo, while deployed in GCP I have named Mongo services.
Here is my application.properties file:
#mongodb
spring.data.mongodb.uri= mongodb://mongo-serviceone:27017/serviceone
#logging
logging.level.org.springframework.data=trace
logging.level.=trace
And my Docker-compose:
version: '3'
# Define the services/containers to be run
services:
  service: # name of your service
    build: ./ # specify the directory of the Dockerfile
    ports:
      - "3009:3009" # specify ports forwarding
    links:
      - mongo-serviceone # link this service to the database service
    volumes:
      - .:/usr/src/app
    depends_on:
      - mongo-serviceone
  mongo-serviceone: # name of the service
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
When I try docker-compose up, I get the following error:
mongo-serviceone_1 | 2018-08-22T13:50:33.454+0000 I NETWORK
[initandlisten] waiting for connections on port 27017 service_1
| 2018-08-22 13:50:33.526 INFO 1 --- [localhost:27017]
org.mongodb.driver.cluster : Exception in monitor thread
while connecting to server localhost:27017 service_1
| service_1 | com.mongodb.MongoSocketOpenException:
Exception opening socket service_1 | at
com.mongodb.connection.SocketStream.open(SocketStream.java:62)
~[mongodb-driver-core-3.6.3.jar!/:na]
running docker ps shows me:
692ebb72cf30 serviceone_service "java -Djava.securit…" About an hour ago Up 9 minutes 0.0.0.0:3009->3009/tcp, 8080/tcp serviceone_service_1
6cd55ae7bb77 mongo "docker-entrypoint.s…" About an hour ago Up 9 minutes 0.0.0.0:27017->27017/tcp serviceone_mongo-serviceone_1
While I am trying to connect to a local Mongo, I thought that by using the name "mongo-serviceone" in the URI the service would connect to the Mongo container rather than to localhost.
Hard to tell what the exact issue is, but maybe this is just an issue because of the space " " after "spring.data.mongodb.uri=" and before "mongodb://mongo-serviceone:27017/serviceone"?
If not, maybe exec into the "service" container and try to ping the mongodb with: ping mongo-serviceone:27017
Let me know the output of this, so I can help you analyze and fix this issue.
Alternatively, you could switch from using docker compose to a Kubernetes native dev tool, as you are planning to run your application on Kubernetes anyways. Here is a list of possible tools:
Allow hot reloading:
DevSpace: https://github.com/covexo/devspace
ksync: https://github.com/vapor-ware/ksync
Pure CI/CD tools for dev:
Skaffold: https://github.com/GoogleContainerTools/skaffold
Draft: https://github.com/Azure/draft
For most of them, you will only need minikube or a dev namespace inside your existing cluster on GCP.
Looks like another application was running on port 27017 on your localhost. Similar reported issue.
A quick way to check on Linux/Mac:
telnet 127.0.0.1 27017
Check the log files:
docker logs serviceone_service
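Since the same service has to hit a local Mongo under Compose but named Mongo services in GCP, it can also help to let the environment supply the connection string rather than hard-coding it. A sketch of that idea, relying on Spring Boot's relaxed binding, where the SPRING_DATA_MONGODB_URI environment variable overrides spring.data.mongodb.uri:
  service:
    build: ./
    environment:
      - SPRING_DATA_MONGODB_URI=mongodb://mongo-serviceone:27017/serviceone
In Kubernetes the same variable can then be set to the named Mongo service, and the value in application.properties only acts as a local default.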

How to debug "WSREP: SST failed: 1 (Operation not permitted)" with a MariaDB Galera cluster in Docker?

Requirement: CentOS-based Docker container providing a MariaDB 10.x Galera cluster
Host Environment: OS X El Capitan 10.11.6, Docker 1.12.5 (14777)
Docker Container OS: CentOS Linux release 7.3.1611 (Core)
DB: 10.1.20-MariaDB
I found a promising Docker image, but the documentation seems to be obsolete; the commands to start the cluster do not work. At the time of writing the image uses wsrep_sst_method = rsync and so I figured that the following commands should work (replace /Users/Me/somedb with an empty directory on your host):
docker pull dayreiner/centos7-mariadb-10.1-galera
docker run -d --name db1 -h db1host -p 3306:3306 -e CLUSTER_NAME=joe -e CLUSTER=BOOTSTRAP -e MYSQL_ROOT_PASSWORD='pwd' -v /Users/Me/somedb:/var/lib/mysql dayreiner/centos7-mariadb-10.1-galera:latest
docker run -d --name db2 -h db2host -p 3307:3306 --link db1 -e CLUSTER_NAME=joe -e CLUSTER=db1host,db2host -e MYSQL_ROOT_PASSWORD='pwd' -v /Users/Me/somedb:/var/lib/mysql dayreiner/centos7-mariadb-10.1-galera:latest
The first container (db1) comes up and seems OK. But the last line that tries to add db2 as a second node to the Galera cluster results in the following error (docker logs db2):
2017-01-10 15:26:10 139742710823680 [Note] WSREP: New cluster view: global state: :-1, view# 0: Primary, number of nodes: 1, my index: 0, protocol version 3
2017-01-10 15:26:10 139742711142656 [ERROR] WSREP: SST failed: 1 (Operation not permitted)
2017-01-10 15:26:10 139742711142656 [ERROR] Aborting
I could not figure out what is wrong here and would appreciate ideas on how to analyze this further. Is this a problem of rsync, Galera or even Docker?
That's my image on dockerhub.
I had not tested the cluster (until now) on a single host, only running on multiple hosts. You're right though, running two on a single host seems to abort the second node on start.
This looks to be caused by the default bridge network not behaving nicely. Possibly some issue with handling the ports for state transfer. Not really sure why.
If you modify your commands to first create a custom network for your clustered containers to use on the backend, and then run the cluster members using that network, that seems to work when running two nodes on a single host:
# docker network create mariadb
# docker run -d --network=mariadb -p 3307:3306 --name db1 -e CLUSTER_NAME=test -e CLUSTER=BOOTSTRAP -e MYSQL_ROOT_PASSWORD=test -v /opt/test/db1:/var/lib/mysql dayreiner/centos7-mariadb-10.1-galera:latest
# docker run -d --network=mariadb -p 3308:3306 --name db2 -e CLUSTER_NAME=test -e CLUSTER=db1,db2 -e MYSQL_ROOT_PASSWORD=test -v /opt/test/db2:/var/lib/mysql dayreiner/centos7-mariadb-10.1-galera:latest
No errors this time on the second node:
# docker logs db2 -f
...snip
2017-01-12 20:33:08 139726185019648 [Note] WSREP: Signalling provider to continue.
2017-01-12 20:33:08 139726185019648 [Note] WSREP: SST received: 42eaa277-d906-11e6-b98a-3e6b9531c1b7:0
2017-01-12 20:33:08 139725604124416 [Note] WSREP: 1.0 (f170852fe1b6): State transfer from 0.0 (951fdda2454b) complete.
2017-01-12 20:33:08 139725604124416 [Note] WSREP: Shifting JOINER -> JOINED (TO: 0)
2017-01-12 20:33:08 139725604124416 [Note] WSREP: Member 1.0 (f170852fe1b6) synced with group.
2017-01-12 20:33:08 139725604124416 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)
2017-01-12 20:33:08 139726105180928 [Note] WSREP: Synchronized with group, ready for connections
2017-01-12 20:33:08 139726105180928 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2017-01-12 20:33:08 139726185019648 [Note] mysqld: ready for connections.
Version: '10.1.20-MariaDB' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
Try that and see how it goes. It will also work without any problems if you run it using docker-compose, likely because Compose creates a dedicated project network by default. You can see an example compose file in this gist.
Just make sure to use a different directory for each mariadb instance, and after you have your cluster started, stop db1 and relaunch it as a regular cluster member (otherwise the next time db1 is started it will keep bootstrapping a new cluster).
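Along the lines of the gist mentioned above, a docker-compose.yml for the same two-node setup might look roughly like this (a sketch only; the image, CLUSTER_NAME, CLUSTER and MYSQL_ROOT_PASSWORD values are taken from the docker run commands above, and the service names double as the hostnames on the project network):
version: '2'
services:
  db1:
    image: dayreiner/centos7-mariadb-10.1-galera:latest
    ports:
      - "3307:3306"
    environment:
      - CLUSTER_NAME=test
      - CLUSTER=BOOTSTRAP
      - MYSQL_ROOT_PASSWORD=test
    volumes:
      - /opt/test/db1:/var/lib/mysql
  db2:
    image: dayreiner/centos7-mariadb-10.1-galera:latest
    ports:
      - "3308:3306"
    environment:
      - CLUSTER_NAME=test
      - CLUSTER=db1,db2
      - MYSQL_ROOT_PASSWORD=test
    volumes:
      - /opt/test/db2:/var/lib/mysql
    depends_on:
      - db1
As noted above, once the cluster is formed you would stop db1 and relaunch it with CLUSTER=db1,db2 instead of BOOTSTRAP, so it rejoins as a regular member rather than bootstrapping a new cluster on every start.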
Works after upgrading the Docker image to MariaDB 10.2.3 (from 10.1.20).
I am not 100% sure whether I have a truly valid cluster now, but at least show status like "wsrep_cluster_size"; produces the following output and the DB is usable:
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
Note: I also omitted the -v option and placed the DB files inside the Docker container instead of on an external volume. I don't think that this makes a difference regarding the cluster, but I did not verify 10.2.3 with -v. However, I tried 10.1.20 with both variations (external volume with -v and container-internal files) and both did not work.

cannot connect to container with docker-compose

I'm using docker 1.12 and docker-compose 1.12, on OSX.
I created a docker-compose.yml file which runs two containers:
the first, named spark, builds and runs a sparkjava application
the second, named behave, runs some functional tests on the API exposed by the first container.
version: "2"
services:
behave:
build:
context: ./src/test
container_name: "behave"
links:
- spark
depends_on:
- spark
entrypoint: ./runtests.sh spark:9000
spark:
build:
context: ./
container_name: "spark"
ports:
- "9000:9000"
As recommended by the Docker Compose documentation, I use a simple shell script to test whether the Spark server is ready. This script is named runtests.sh and runs inside the container named "behave". It is launched by docker-compose (see above):
#!/bin/bash
# This scripts waits for the API server to be ready before running functional tests with Behave
# the parameter should be the hostname for the spark server
set -e
host="$1"
echo "runtests host is $host"
until curl -L "http://$host"; do
>&2 echo "Spark server is not ready - sleeping"
sleep 5
done
>&2 echo "Spark server is up - starting tests"
behave
The DNS resolution does not seem to work. curl makes a request to spark.com instead of a request to my container named "spark".
UPDATE:
By setting an alias for my link (links: - spark:myserver), I've seen that the DNS resolution is not done by Docker: I received an error message from a piece of corporate network equipment (I'm running this behind a corporate proxy, with Docker for Mac). Here is an extract of the output:
Recreating spark
Recreating behave
Attaching to spark, behave
behave | runtests host is myserver:9000
behave | % Total % Received % Xferd Average Speed Time Time Time Current
behave | Dload Upload Total Spent Left Speed
100 672 100 672 0 0 348 0 0:00:01 0:00:01 --:--:-- 348
behave | <HTML><HEAD>
behave | <TITLE>Network Error</TITLE>
behave | </HEAD>
behave | <BODY>
behave | ...
behave | <big>Network Error (dns_unresolved_hostname)</big>
behave | Your requested host "myserver" could not be resolved by DNS.
behave | ...
behave | </BODY></HTML>
behave | Spark server is up - starting tests
To solve this, I added an environment variable no_proxy containing the name of the container I wanted to reach.
In the Dockerfile for the behave container, I have:
ENV http_proxy=http://proxy.mycompany.com:8080
ENV https_proxy=http://proxy.mycompany.com:8080
ENV no_proxy=127.0.0.1,localhost,spark
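If you would rather not bake the proxy settings into the image, the same variables can be supplied from the compose file instead; a sketch reusing the values from the Dockerfile above:
  behave:
    build:
      context: ./src/test
    environment:
      - http_proxy=http://proxy.mycompany.com:8080
      - https_proxy=http://proxy.mycompany.com:8080
      - no_proxy=127.0.0.1,localhost,spark
Either way, the important part is that no_proxy includes the spark service name, so requests to it bypass the corporate proxy.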
