Running local integration test with Localstack and Docker-Compose gives: NetworkingError: connect ECONNREFUSED 127.0.0.1:3000 - docker

Ran a docker-compose.yml that sets up localstack
Ran a script to create an AWS stack
aws cloudformation create-stack --endpoint http://localhost:4581 --region us-east-1 --stack-name localBootstrap --template-body file://localstack-bootstrap-cf.yaml --parameters ParameterKey=Environment,ParameterValue=localstack --capabilities CAPABILITY_NAMED_IAM
Ran Terraform commands to create the AWS resources in Localstack. All good.
Ran the serverless offline command to set up the local AWS Node.js Lambdas. All good.
But then, when running the integration tests, I got errors with the message below:
NetworkingError: connect ECONNREFUSED 127.0.0.1:3000

What fixed the problem was running
aws configure
and configuring AWS credentials locally; even dummy values were enough.
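For reference, a minimal non-interactive sketch of the same fix; LocalStack accepts arbitrary credentials, so the dummy values and region here are placeholders:
aws configure set aws_access_key_id test
aws configure set aws_secret_access_key test
aws configure set region us-east-1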


MissingEndpoint: 'Endpoint' configuration is required for this service: queue name AWS ecs ec2

I'm trying to deploy my Golang API service using an ECS EC2 deployment,
but I'm getting an error from the function below:
sqsSvc.GetQueueUrl(&sqs.GetQueueUrlInput{
    QueueName: &queueName,
})
So my service task fails with the log below:
MissingEndpoint: 'Endpoint' configuration is required for this service:
I have checked the AWS configuration from the environment variables and it is correct.
NOTE: I have successfully deployed this application manually as a Docker container on an EC2 instance, and it runs fine there.
I am trying to do the same with an ECS EC2 deployment.
I have checked various solutions on the internet, but nothing works.
I have also attached the AmazonSQSFullAccess policy to both my ecsTaskExecutionRole and ecsInstanceRole roles.
I am also able to send a message by running the command below from this ECS EC2 instance, so the queue is accessible:
aws sqs send-message --region ap-south-1 --endpoint-url https://sqs.ap-south-1.amazonaws.com/ --queue-url https://sqs.ap-south-1.amazonaws.com/account-id/my-sqs-queue/ --message-body "Hello from Amazon SQS."
ping amazon.com also works from this ECS EC2 instance.
Let me know how I can debug and solve this issue.
Using github.com/aws/aws-sdk-go v1.44.114.
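In case it helps with debugging: with the v1 Go SDK, a MissingEndpoint error generally means the client could not resolve an endpoint for the service, which in practice often comes down to the region configuration not reaching the process inside the task. A hedged sketch of checking that from the ECS container instance (container IDs and values are placeholders):
docker ps --filter "name=ecs"
docker exec <container-id> env | grep -i aws
# If AWS_REGION is missing inside the container, add it to the task definition's
# "environment" section, e.g. {"name": "AWS_REGION", "value": "ap-south-1"},
# or pass the region explicitly when building the session in the Go code.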

Snowflake and Datadog integration

I'm setting up Snowflake and Datadog integration by following this guide.
I installed the Datadog Agent as a Docker container. However, when I try to install the Snowflake integration by running the following command inside my datadog-agent Docker container (via "docker exec -it --user dd-agent dd-agent bash"),
datadog-agent integration install datadog-snowflake==2.0.1
I got this error
bash: datadog-agent: command not found
My question is, does datadog-agent docker version support installing integration? If it does, how do I do it? If it doesn't, do I have to install datadog-agent on a VM to do it?
Ok, turns out the datadog-agent binary inside the Docker image is just called agent.
So this lets me run the equivalent command:
agent integration install datadog-snowflake==2.0.1
However, the latest Datadog Docker image already includes the Snowflake integration, so simply editing the configuration file at
/etc/datadog-agent/conf.d/snowflake.d/conf.yaml
is enough to get going.
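Put together, a hedged sketch of the whole flow run from the Docker host (the container name dd-agent comes from the question; the install step is only needed if the bundled integration is not the version you want):
docker exec -it --user dd-agent dd-agent agent integration install datadog-snowflake==2.0.1
docker exec -it dd-agent ls /etc/datadog-agent/conf.d/snowflake.d/
docker restart dd-agent
# restart the agent container after editing conf.yaml so the new configuration is picked up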

How to run docker container as privileged within Cloud Run

I have a Docker container that needs to run with --privileged to establish a VPN connection once it boots up.
I am migrating it to Cloud Run using Cloud Build.
I tried --container-privileged, but that seems to only work for GCE. I also added the following to the args for the gcloud run deploy call in cloudbuild.yaml, but it complains with the error: Invalid command "docker run --privileged": file not found anywhere in PATH
- --command
- docker run --privileged
Google Cloud Run does not use Docker to run containers; it uses gVisor, and it does not support privileged containers.

`aws ssm start-session` not working from inside docker container

I have a Docker container based off https://github.com/bopen/docker-ubuntu-pyenv/blob/master/Dockerfile
...where I'm installing the AWS CLI and would like to use AWS SSM to access a remote instance.
I've tried starting the container with docker-compose AND with plain docker; in both cases I've mounted my AWS_PROFILE and can use all other AWS CLI commands (I tested with an ec2 describe and even did an aws ssm send-command to the instance!).
BUT when I run aws ssm start-session --target $instance_id from the container, I get nothing. I'm able to run aws ssm start-session from my local shell against this instance, so I know that SSM is configured properly.
Running it with the --debug flag gives me the exact same output as when I run it locally, minus the Starting session with SessionId: part, obviously.
Is this an aws-cli issue, or some weird container stdout thing? Help pls!
[cross-posted here: https://github.com/aws/aws-cli/issues/4465]
Okay, so the 'fix' for this was that the Session Manager plugin in the container was not installed properly.
I guess the plugin isn't actually 'optional' as https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html suggests, but is required to start a session with SSM.
I had the wrong plugin installed and session-manager-plugin was returning an error. Getting the right one into the container fixed everything!
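For an Ubuntu-based image like the one referenced above, a minimal sketch of installing and verifying the plugin during the image build (the download URL is the one AWS documents for Debian/Ubuntu 64-bit; other architectures use a different URL):
curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb" -o session-manager-plugin.deb
dpkg -i session-manager-plugin.deb
session-manager-plugin
# running the binary with no arguments should confirm the plugin installed correctly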

How to launch a rails console in a Fargate container

I would like to open a Rails console in a Fargate container to interact with my production installation.
However, after searching the web and posting in the AWS forum, I could not find an answer to this question.
Does anyone know how I can do this? This seems like a mandatory thing to have in any production environment, and having no easy way to do it is kind of surprising coming from such a respected cloud provider as AWS.
Thanks
[Update 2021]: It is now possible to run a command in interactive mode with AWS Fargate!
News: https://aws.amazon.com/fr/blogs/containers/new-using-amazon-ecs-exec-access-your-containers-fargate-ec2/
The command to run is:
aws ecs execute-command \
  --cluster cluster-name \
  --task task-id \
  --container container-name \
  --interactive \
  --command "rails c"
(the --container flag is optional when the task runs a single container)
Troubleshooting:
Check the AWS doc for IAM permissions: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html#ecs-exec-prerequisites
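ECS Exec also has to be enabled on the service (or on the run-task call) before execute-command will connect. A hedged sketch of enabling it on an existing service, with cluster-name and service-name as placeholders:
aws ecs update-service \
  --cluster cluster-name \
  --service service-name \
  --enable-execute-command \
  --force-new-deployment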
After trying lots of things, I found a way to open a Rails console pointing to my production environment, so I will post it here in case somebody comes across the same issue.
To summarise, I had a Rails application deployed on Fargate, connected to an RDS Postgres database.
What I did was create a Client VPN endpoint into the VPC hosting my Rails app and my RDS database.
Then, after connecting to this VPN, I simply ran my Rails production container locally (with the same environment variables), overriding the container command to run the console startup command (bundle exec rails c production).
Since the container runs on my local machine, I can attach a TTY to it as usual and access my production console.
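A hedged sketch of that local run; the image name and env file are placeholders, and the env file is assumed to hold the same variables the Fargate task definition injects:
docker run -it \
  --env-file fargate-task.env \
  my-rails-production-image:latest \
  bundle exec rails c production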
I think this solution is good because it allows any developer working on the project to open a console without incurring extra costs, and a well-thought-out security policy on the AWS side ensures that console access stays secure. Plus, you don't have to expose your database outside of your VPC.
Hope this helped someone.
Doing any sort of docker exec is a nightmare with ECS and Fargate, which makes things like shells or migrations very difficult.
Thankfully, a Fargate task on ECS is really just an AWS-managed server running a few heavily customized docker run commands. So if you have docker, jq, and the AWS CLI either on EC2 or on your local machine, you can fake some of those docker run commands yourself and enter a bash shell. I do this for Django so I can run migrations and enter a Python shell, but I'd assume it's the same for Rails (or any other container you need bash in).
Note that this only works if you only care about one container, spelled out in your task definition, running at a time, although I'd imagine you could jerry-rig something more complex easily enough.
For this, the AWS CLI will need to be logged in with the same IAM permissions as your Fargate task. You can do this locally by using aws configure and providing credentials for a user with the correct IAM permissions, or by launching an EC2 instance that has a role with identical permissions, or (to keep things really simple) the role your Fargate task runs with and a security group with identical access (plus a rule that lets you SSH into the bastion host). I like the EC2 route, because funneling everything through the public internet and a VPN is... slow. Plus you're always guaranteed to have the same IAM access as your tasks do.
You'll also need to be on the same subnet your Fargate tasks are located in, which can usually be done via a VPN or by running this code on a bastion EC2 host inside your private subnet.
In my case I store my configuration parameters as SecureStrings within the AWS Systems Manager Parameter Store and pass them in using the ECS task definition. Those can be pretty easily acquired and set to a local environment variable using
export DATABASE_URL=$(aws ssm get-parameter --region $REGION \
--with-decryption --name parameter.name.database_url \
| jq '.Parameter["Value"]' -r)
I store my container images on ECR, so I then need to log my local Docker client in to ECR:
eval $(aws ecr get-login --no-include-email --region $REGION)
Then it's just a case of running an interactive docker container that passes in the DATABASE_URL, pulls the correct image from ECR, and enters bash. I also expose port 8000 so I can run a webserver inside the shell if I want, but that's optional.
docker run -i -t \
-e DATABASE_URL \
-p 8000:8000 \
$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$DOCKER_REPO_NAME:$TAG \
/bin/bash
Once you run that you should see your copy of docker download the image from your container repository then launch you into bash (assuming bash is installed inside your container.) Docker has a pretty solid cache, so this will take a bit of time to download and launch the first time but after that should be pretty speedy.
Here's my full script
#!/bin/bash
# Defaults, overridable via environment variables when invoking the script.
REGION=${REGION:-us-west-2}
ENVIRONMENT=${ENVIRONMENT:-staging}
DOCKER_REPO_NAME=${DOCKER_REPO_NAME:-reponame}
TAG=${TAG:-latest}

# Resolve the AWS account ID used to build the ECR image URI.
ACCOUNT_ID=$(aws sts get-caller-identity | jq -r ".Account")

# Fetch the database URL from SSM Parameter Store (stored as a SecureString).
export DATABASE_URL=$(aws ssm get-parameter --region $REGION \
  --with-decryption --name projectname.$ENVIRONMENT.database_url \
  | jq '.Parameter["Value"]' -r)

# Log the local Docker client in to ECR (AWS CLI v1 syntax).
eval $(aws ecr get-login --no-include-email --region $REGION)

IMAGE=$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$DOCKER_REPO_NAME:$TAG

# Run the image interactively with the same DATABASE_URL the ECS task would get.
docker run -i -t \
  -e DATABASE_URL \
  -p 8000:8000 \
  $IMAGE \
  /bin/bash
You cannot SSH to the underlying host when you are using the Fargate launch type for ECS, which means you cannot docker exec into a running container.
I haven't tried this on Fargate, but you should be able to create a Fargate task in which the command is rails console.
Then, if you configure the task as interactive, you should be able to launch the interactive container and have access to the console via stdin.
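A hedged sketch of what launching such a one-off task could look like (cluster, task definition, container name, subnet and security group IDs are all placeholders; as noted above this approach is untested, and the interactive part would rely on the task definition's interactive and pseudoTerminal settings):
aws ecs run-task \
  --cluster cluster-name \
  --launch-type FARGATE \
  --task-definition my-rails-task \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}' \
  --overrides '{"containerOverrides":[{"name":"web","command":["bundle","exec","rails","console"]}]}'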
Ok, so I ended up doing things a bit differently. Instead of trying to run the console on Fargate, I just run a console on my localhost, but configure it to use RAILS_ENV='production' and let it use my RDS instance.
Of course, to make this work, you have to open up your RDS instance to your machine with an inbound rule in its security group. It's wise to restrict that rule to your local IP only, to keep things a bit more secure.
The docker-compose.yml then looks something like this:
version: '3'
services:
  web:
    stdin_open: true
    tty: true
    build: .
    volumes:
      - ./rails/.:/your-app
    ports:
      - "3000:3000"
    environment: &env_vars
      RAILS_ENV: 'production'
      PORT: '8080'
      RAILS_LOG_TO_STDOUT: 'true'
      RAILS_SERVE_STATIC_FILES: 'true'
      DATABASE_URL: 'postgresql://username:password@your-aws-rds-instance:5432/your-db'
When you then run docker-compose run web rails c, it uses your local Rails codebase but makes live changes to your RDS DB (which is the prime reason you'd want access to rails console anyway).
