Send Locust HTML report to s3 bucket - load-testing

I am running my Locust tests on an EC2 instance and wanted to check if there is a way to send the results over to an S3 bucket once the test run finishes.
Is there a folder on the EC2 instance where Locust saves the .html report file, or how do we go about this?
The EC2 instance has Locust installed and I just run the locust command to execute the test; no Docker container is involved.

To generate the HTML report:
locust -f script.py --headless -u 1 -r 1 --run-time 15s --html=test1.html
You can then upload it to your bucket with the following command:
aws s3 cp test1.html s3://your-bucket/test2.html --acl bucket-owner-full-control
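As a minimal sketch, assuming the AWS CLI is installed on the EC2 instance and its IAM role (or configured credentials) can write to the bucket, you could wrap both steps in one script so the report is uploaded automatically when the run finishes (the bucket name and report names below are placeholders):
#!/bin/bash
# Run a headless Locust test, then push the HTML report to S3.
set -euo pipefail

REPORT="report-$(date +%Y%m%d-%H%M%S).html"   # timestamped report name (placeholder)
BUCKET="s3://your-bucket/locust-reports"      # placeholder bucket/prefix

# Locust writes the report to the current working directory
locust -f script.py --headless -u 1 -r 1 --run-time 15s --html "$REPORT"

# Upload the report once the test run has completed
aws s3 cp "$REPORT" "$BUCKET/$REPORT" --acl bucket-owner-full-control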

Related

How do I download files from a MinIO S3 bucket using curl

I am trying to download the contents of a folder from a MinIO S3 bucket.
I am using the following commands.
I am able to download a specific file using
# aws --endpoint-url http://s3:9000 s3 cp s3://mlflow/3/050d4b07b6334997b214713201e41012/artifacts/model/requirements.txt .
But the command below throws an error when I try to download all the contents of the folder:
# aws --endpoint-url http://s3:9000 s3 cp s3://mlflow/3/050d4b07b6334997b214713201e41012/artifacts/model/* .
fatal error: An error occurred (404) when calling the HeadObject operation: Key "3/050d4b07b6334997b214713201e41012/artifacts/model/*" does not exist
Any help will be appreciated.
I was finally able to get it working by running
aws --endpoint-url http://s3:9000 s3 cp s3://mlflow/3/050d4b07b6334997b214713201e41012/artifacts/model . --recursive
The one problem I ran into was that I had to install the AWS CLI via pip, as that was the only way I could get the --recursive option to work.
You could also use the MinIO client and set an alias for your MinIO server. Here is an example taken from the official documentation showing how to achieve this using the Docker version of the MinIO client:
docker pull minio/mc:edge
docker run -it --entrypoint=/bin/sh minio/mc
mc alias set <ALIAS> <YOUR-S3-ENDPOINT> [YOUR-ACCESS-KEY] [YOUR-SECRET-KEY] [--api API-SIGNATURE]
You can also install the client directly on any OS instead.
Once that is done, copying content from S3 only requires:
{minio_alias} cp {minio_s3_source} {destination}
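For example, a hypothetical end-to-end run against the same MLflow bucket (the alias name, access key and secret key below are placeholders) might look like this:
mc alias set myminio http://s3:9000 YOUR_ACCESS_KEY YOUR_SECRET_KEY
# copy the whole model folder recursively into ./model/
mc cp --recursive myminio/mlflow/3/050d4b07b6334997b214713201e41012/artifacts/model/ ./model/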

Running local integration test with Localstack and Docker-Compose gives: NetworkingError: connect ECONNREFUSED 127.0.0.1:3000

Ran a docker-compose.yml that sets up localstack
Ran a script to create an AWS stack
aws cloudformation create-stack --endpoint http://localhost:4581 --region us-east-1 --stack-name localBootstrap --template-body file://localstack-bootstrap-cf.yaml --parameters ParameterKey=Environment,ParameterValue=localstack --capabilities CAPABILITY_NAMED_IAM
Ran Terraform commands to create the AWS resources in Localstack. All good.
Ran serverless offline command to set local AWS NodeJs lambdas. All good.
But then, when running the integration tests, I got errors with the message below:
NetworkingError: connect ECONNREFUSED 127.0.0.1:3000
What fixed the problem was to run
aws configure
and configure AWS locally, even though only dummy values were needed.
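If you prefer to script this instead of answering the aws configure prompts, a non-interactive equivalent with dummy values (purely illustrative) is:
aws configure set aws_access_key_id dummy
aws configure set aws_secret_access_key dummy
aws configure set region us-east-1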

How to launch a rails console in a Fargate container

I would like to open a Rails console in a Fargate container to interact with my production installation.
However, after searching the web and posting in the AWS forum, I could not find an answer to this question.
Does anyone know how I can do this? This seems like a mandatory thing to have in any production environment, and having no easy way to do it is kind of surprising coming from such a respected cloud provider as AWS.
Thanks
[Update 2021]: It is now possible to run a command in interactive mode with AWS Fargate!
News: https://aws.amazon.com/fr/blogs/containers/new-using-amazon-ecs-exec-access-your-containers-fargate-ec2/
The command to run is:
aws ecs execute-command \
  --cluster cluster-name \
  --task task-id \
  --container container-name \
  --interactive \
  --command "rails c"
(The --container flag is optional if the task runs a single container.)
Troubleshooting:
Check the AWS doc for IAM permissions: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html#ecs-exec-prerequisites
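If the execute-command call is rejected, ECS Exec may simply not be enabled on the service yet. One way to turn it on (cluster and service names below are placeholders) is:
aws ecs update-service \
  --cluster cluster-name \
  --service service-name \
  --enable-execute-command \
  --force-new-deployment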
After trying lots of things, I found a way to open a Rails console pointing to my production environment, so I will post it here in case somebody comes across the same issues.
To summarise, I had a Rails application deployed on Fargate connected to an RDS Postgres database.
What I did was create a Client VPN endpoint to the VPC hosting my Rails app and my RDS database.
Then, after connecting to this VPN, I simply run my Rails production container locally (with the same environment variables), overriding the container command to run the console startup command (bundle exec rails c production).
Since the container runs on my local machine, I can attach a TTY to it as usual and access my production console.
I think this solution is good because it allows any developer working on the project to open a console without incurring extra costs, a well-thought-out security policy on the AWS end ensures that console access stays secure, and you don't have to expose your database outside of your VPC.
Hope this helped someone
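As a rough sketch of that local run (the image name and env file below are placeholders, and it assumes you are already connected to the VPN so the RDS hostname resolves):
docker run -it --rm \
  --env-file production.env \
  your-registry/your-rails-app:latest \
  bundle exec rails c production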
Doing any sort of docker exec is a nightmare with ECS and Fargate, which makes doing things like shells or migrations very difficult.
Thankfully, a Fargate task on ECS is really just an AWS server running a few super-customized docker run commands. So if you have Docker, jq, and the AWS CLI either on EC2 or your local machine, you can fake some of those docker run commands yourself and enter a bash shell. I do this for Django so I can run migrations and enter a Python shell, but I'd assume it's the same for Rails (or any other container that you need bash in).
Note that this only works if you only care about one container spelled out in your task definition running at a time, although I'd imagine you could jerry-rig something more complex easily enough.
For this, the AWS CLI will need to be logged in with the same IAM permissions as your Fargate task. You can do this locally by using aws configure and providing credentials for a user with the correct IAM permissions, or by launching an EC2 instance that has a role with identical permissions, or (to keep things really simple) the very role your Fargate task runs under, plus a security group with identical access (and a rule that lets you SSH into the bastion host). I like the EC2 route, because funneling everything through the public internet and a VPN is... slow. Plus you're always guaranteed to have the same IAM access as your tasks do.
You'll also need to be on the same subnet as your Fargate tasks, which can usually be done via a VPN, or by running this code on a bastion EC2 host inside your private subnet.
In my case I store my configuration parameters as SecureStrings within the AWS Systems Manager Parameter Store and pass them in using the ECS task definition. Those can be pretty easily acquired and set to a local environment variable using
export DATABASE_URL=$(aws ssm get-parameter --region $REGION \
--with-decryption --name parameter.name.database_url \
| jq '.Parameter["Value"]' -r)
I store my images on ECR, so I then need to log my local Docker client in to ECR:
eval $(aws ecr get-login --no-include-email --region $REGION)
Then it's just a case of running an interactive docker container that passes in the DATABASE_URL, pulls the correct image from ECR, and enters bash. I also expose port 8000 so I can run a webserver inside the shell if I want, but that's optional.
docker run -i -t \
-e DATABASE_URL \
-p 8000:8000 \
$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$DOCKER_REPO_NAME:$TAG \
/bin/bash
Once you run that you should see your copy of docker download the image from your container repository then launch you into bash (assuming bash is installed inside your container.) Docker has a pretty solid cache, so this will take a bit of time to download and launch the first time but after that should be pretty speedy.
Here's my full script
#!/bin/bash
REGION=${REGION:-us-west-2}
ENVIRONMENT=${ENVIRONMENT:-staging}
DOCKER_REPO_NAME=${DOCKER_REPO_NAME:-reponame}
TAG=${TAG:-latest}
ACCOUNT_ID=$(aws sts get-caller-identity | jq -r ".Account")
export DATABASE_URL=$(aws ssm get-parameter --region $REGION \
--with-decryption --name projectname.$ENVIRONMENT.database_url \
| jq '.Parameter["Value"]' -r)
eval $(aws ecr get-login --no-include-email --region $REGION)
IMAGE=$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$DOCKER_REPO_NAME:$TAG
docker run -i -t \
-e DATABASE_URL \
-p 8000:8000 \
$IMAGE \
/bin/bash
You cannot ssh to the underlying host when you are using the Fargate execution type for ECS. This means that you cannot docker exec into a running container.
I haven't tried this on Fargate, but you should be able to create a fargate task in which the command is rails console.
Then if you configure the task as interactive, you should be able to launch the interactive container and have access to the console via stdin.
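For reference, a rough sketch of such a command override on a one-off Fargate task is shown below (cluster, task definition, container name and network settings are placeholders); attaching stdin to it would still require something like ECS Exec:
aws ecs run-task \
  --launch-type FARGATE \
  --cluster cluster-name \
  --task-definition your-task-def \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-xxxxxxxx],securityGroups=[sg-xxxxxxxx],assignPublicIp=DISABLED}' \
  --overrides '{"containerOverrides":[{"name":"container-name","command":["bundle","exec","rails","console"]}]}'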
Ok, so I ended up doing things a bit differently. Instead of trying to run the console on Fargate, I just run a console on my localhost, but configure it to use RAILS_ENV='production' and let it use my RDS instance.
Of course, to make this work you have to expose your RDS instance through an ingress rule in its security group. It's wise to configure it so that it only allows your local IP, to keep it a bit more secure.
The docker-compose.yml then looks something like this:
version: '3'
services:
  web:
    stdin_open: true
    tty: true
    build: .
    volumes:
      - ./rails/.:/your-app
    ports:
      - "3000:3000"
    environment: &env_vars
      RAILS_ENV: 'production'
      PORT: '8080'
      RAILS_LOG_TO_STDOUT: 'true'
      RAILS_SERVE_STATIC_FILES: 'true'
      DATABASE_URL: 'postgresql://username:password@your-aws-rds-instance:5432/your-db'
When you then run docker-compose run web rails c it uses your local Rails codebase, but makes live changes to your RDS DB (the prime reason why you'd like access to rails console anyway).

Do logs get saved on Google Kubernetes Engine?

I am running a deployment which contains three containers: the app, nginx and a Cloud SQL instance. I have a lot of print statements in my Python-based app.
Every time a user interacts with the app, outputs are printed. I want to know if these logs are saved by default at any location.
I am worried that these logs might consume the space on the nodes in the cluster running it. Does this happen, or do Kubernetes deployments not save any logs by default?
The applications run in containers, usually under Docker, and their stdout/stderr logs are saved for the lifetime of the container in the graph directory (usually /var/lib/docker).
You can look at the logs with either:
$ kubectl logs <pod-name> -c <container-in-pod>
Or:
$ ssh <node>
$ docker logs <container>
If you'd like to know more where they are stored you can go into the /var/lib/docker directory and see the logs stored in JSON format:
$ cd /var/lib/docker/containers
$ find . | grep json.log
./3454a0681100986248fd81856fadfe7cd95a1a6467eba32adb33da74c2c5443d/3454a0681100986248fd81856fadfe7cd95a1a6467eba32adb33da74c2c5443d-json.log
./80a87a9529a55f8d3fb9b814f0158dc91686704222e252b256455bcde48f56a5/80a87a9529a55f8d3fb9b814f0158dc91686704222e252b256455bcde48f56a5-json.log
...
If you'd like to do garbage collection on 'Exited' containers, you can read more about it here.
Another way is to set up a cron job that runs periodically on your nodes that does this:
$ docker system prune -a --force
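A hypothetical crontab entry for that (the 3 a.m. schedule is arbitrary) could look like:
0 3 * * * docker system prune -a --force > /dev/null 2>&1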

Command to copy/share JMeter master container results (Docker), generated by running the script in non-GUI mode, to an EC2 instance

We have used two EC2 instances as master and slave with Docker images (JMeter master container and JMeter slave container installed on the respective hosts). I am able to run my testplan.jmx file from inside the JMeter master container's bin folder and to generate a results file. How can I copy this results .csv/.jtl file to my EC2 instance so that I can pull it to my local machine? Please suggest which command I can use. I have used the command below, but with no luck.
./jmeter -n -t testplan.jmx -Djava.rmi.server.hostname=xxxxx -Dclient.rmi.localport=yyy0 -Rxxxxx -l /home/ubuntu/results2.jtl
Here /home/ubuntu/Jmeter is the folder which I have on my local EC2 instance.
I was able to accomplish the task by using the command below. It needs to be run from the host machine / EC2 instance (not from inside the JMeter master container):
docker cp <containerId>:<path-inside-container> <destination-path-on-host>
In my case: docker cp 8xxxxxxx:/jmeter/apache-jmeter-2.13/bin/results.jtl /home/ubuntu/xxx/JMdriver/
The container ID can be obtained using docker ps -a.
Thanks.
