MissingEndpoint: 'Endpoint' configuration is required for this service: queue name (AWS ECS EC2, Docker)

I'm trying to deploy my Golang API service using an ECS EC2 deployment,
but I'm getting an error from the function below:
sqsSvc.GetQueueUrl(&sqs.GetQueueUrlInput{
    QueueName: &queueName,
})
So my service task fails, with the following log:
MissingEndpoint: 'Endpoint' configuration is required for this service:
I have checked the AWS configuration values coming from the environment, and they are correct.
NOTE: I have successfully deployed this application manually as a plain Docker container on an EC2 instance, and it runs fine there.
I am trying to do the same with an ECS EC2 deployment.
I have checked various solutions on the internet, but nothing works.
I have also attached the AmazonSQSFullAccess policy to my ecsTaskExecutionRole role as well as the ecsInstanceRole role.
I am also able to send a message by running the command below from this ECS EC2 instance, so the queue is at least reachable from the instance:
aws sqs send-message --region ap-south-1 --endpoint-url https://sqs.ap-south-1.amazonaws.com/ --queue-url https://sqs.ap-south-1.amazonaws.com/account-id/my-sqs-queue/ --message-body "Hello from Amazon SQS."
ping amazon.com also works from this ECS EC2 instance.
Let me know how I can debug and solve this issue.
Using github.com/aws/aws-sdk-go v1.44.114
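What I plan to check next is what the SDK actually sees inside the running task, as opposed to on the host. A rough sketch follows; the "name=ecs-" container filter and the env var checks are assumptions on my side, not confirmed output:
# Run on the ECS container instance over SSH; grab the task's container
CONTAINER_ID=$(docker ps --filter "name=ecs-" --format '{{.ID}}' | head -n 1)

# Does the SDK see a region inside the task? An empty region is a common
# reason for endpoint resolution to fail with the Go SDK.
docker exec "$CONTAINER_ID" env | grep -E 'AWS_(REGION|DEFAULT_REGION)' || echo "no region env var set"

# Are task-role credentials injected? ECS sets this variable when a task role is attached.
docker exec "$CONTAINER_ID" env | grep AWS_CONTAINER_CREDENTIALS_RELATIVE_URI || echo "no task-role credentials URI"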

Related

Running local integration test with Localstack and Docker-Compose gives: NetworkingError: connect ECONNREFUSED 127.0.0.1:3000

Ran a docker-compose.yml that sets up localstack
Ran a script to create an AWS stack
aws cloudformation create-stack --endpoint http://localhost:4581 --region us-east-1 --stack-name localBootstrap --template-body file://localstack-bootstrap-cf.yaml --parameters ParameterKey=Environment,ParameterValue=localstack --capabilities CAPABILITY_NAMED_IAM
Ran Terraform commands to create the AWS resources in Localstack. All good.
Ran the serverless offline command to set up the local AWS NodeJS lambdas. All good.
But then, when running the integration tests, I got errors with the message below:
NetworkingError: connect ECONNREFUSED 127.0.0.1:3000
What fixed the problem was to run
aws configure
and configure AWS locally; even dummy values were enough.
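For reference, a minimal sketch of that fix; the values are deliberately dummies (Localstack does not check them) and the region is just an example:
# Same effect as answering the "aws configure" prompts with dummy values
aws configure set aws_access_key_id test
aws configure set aws_secret_access_key test
aws configure set region us-east-1

# Or via environment variables, which is handy in CI jobs or docker-compose
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
export AWS_DEFAULT_REGION=us-east-1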

Local GitLab CI/CD fails with 'fatal: unable to access...Could not resolve host:...' with a Linux runner

I am trying to test my Python project and run it via GitLab. I have installed the runner on my Ubuntu notebook and registered it with the local
GitLab server.
So there are two separate machines, one runner and one GitLab server, and both machines can communicate with each other.
Notebook(192.168.100.10) ---- GitLab(172.16.10.100)
Once I commit a test, my job fails with the message below:
Reinitialized existing Git repository in /builds/dz/mytest/.git/
fatal: unable to access 'http://gitlab.lab01.ng/dz/mytest.git/': Could not resolve host: gitlab.lab01.ng
Uploading artifacts for failed job
ERROR: Job failed: exit code 1
From my notebook CLI, I can ping the GitLab server IP but not the hostname; even curl does not know the hostname.
I believe this has something to do with DNS not being resolved.
I added the hostname to /etc/hosts on my notebook; I can now ping the hostname, but the job still fails with the same message.
Following suggestions I found, I added the lines below to the gitlab-runner config.toml (not sure if this is the correct thing to add there):
[[runners]]
dns_search = [""]
It still fails with the same 'could not resolve host' message.
What can I do on my notebook/runner side? I don't have admin access to GitLab to check further.
Has anyone faced the same problem? Any help and support is appreciated, thank you.
--For information, I have tried testing the runner on my notebook against public GitLab (gitlab.com) and I can run the job successfully without any error message--
I'm assuming you are using Docker as the executor for your GitLab runner, since you did not specify it in your question. The Docker executor does not share the host machine's /etc/hosts, but you can use the extra_hosts parameter inside your config.toml to let the containers the runner starts know about the custom hostname:
[runners.docker]
extra_hosts = ["gitlab.lab01.ng:172.16.10.100"]
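As a quick sanity check outside of GitLab, extra_hosts is the equivalent of Docker's --add-host option, so you can verify the mapping with a throwaway container (alpine is just an example image):
# The entry should show up in /etc/hosts and the name should then resolve
docker run --rm --add-host gitlab.lab01.ng:172.16.10.100 alpine \
  sh -c "grep gitlab.lab01.ng /etc/hosts && ping -c 1 gitlab.lab01.ng"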

AWS can't read the credentials file

I'm deploying a Flask app using Docker Machine on AWS. The credentials file is located in ~/.aws/:
[default]
aws_access_key_id=AKIAJ<NOT_REAL>7TUVKNORFB2A
aws_secret_access_key=M8G9Zei4B<NOT_REAL_EITHER>pcml1l7vzyedec8FkLWAYBSC7K
region=eu-west-2
Running it as follows:
docker-machine create --driver amazonec2 --amazonec2-open-port 5001 sandbox
According to the Docker docs this should work, but I get this output:
Error setting machine configuration from flags provided: amazonec2 driver requires AWS credentials configured with the --amazonec2-access-key and --amazonec2-secret-key options, environment variables, ~/.aws/credentials, or an instance role
Before you ask: yes, I set permissions in such a way that Docker is allowed to access the credentials file.
What should I do?
Solution found here: https://www.digitalocean.com/community/questions/ssh-not-available-when-creating-docker-machine-through-digitalocean
The problem was running Docker as a snap (from Ubuntu's repo) instead of the official build from Docker. As soon as I uninstalled the Docker snap and installed the official build, Docker was able to find the credentials file immediately.

`aws ssm start-session` not working from inside docker container

I have a Docker container based off https://github.com/bopen/docker-ubuntu-pyenv/blob/master/Dockerfile
...where I'm installing the aws-cli and would like to use AWS SSM to access a remote instance.
I've tried starting the container with docker-compose AND with docker up; in both cases I've mounted my AWS_PROFILE and can access all other aws-cli commands (I tested with ec2 describe and even did an aws ssm send-command to the instance!).
BUT when I do aws ssm start-session --target $instance_id from the container, I get nothing. I'm able to run aws ssm start-session from my local shell to this instance, so I know that SSM is configured properly.
Running it with the --debug flag gives me the exact same output as when I run it locally, minus the Starting session with SessionId: part, obviously.
Is this an aws-cli issue? Or some weird container stdout thing? Help, please!
[cross-posted here: https://github.com/aws/aws-cli/issues/4465]
Okay, so the 'fix' for this was that the Session Manager plugin in the container was not installed properly.
I guess the plugin isn't actually 'optional' as this page suggests (https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html), but is required to start a session with SSM.
I had the wrong plugin installed and session-manager-plugin was returning an error. Getting the right one into the container fixed everything!
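For anyone hitting the same thing, a quick way to check from inside the container whether the plugin is usable; the download URL is the Ubuntu 64-bit one from the AWS doc linked above, so adjust it for your base image:
# The CLI shells out to this binary for start-session; run with no arguments,
# it should print a message saying the plugin was installed successfully.
session-manager-plugin

# If it is missing or errors out, reinstall it (Ubuntu/Debian example)
curl -o /tmp/session-manager-plugin.deb \
  https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb
dpkg -i /tmp/session-manager-plugin.deb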

How to launch a rails console in a Fargate container

I would like to open a Rails console in a Fargate container to interact with my production installation.
However, after searching the web and posting in the AWS forum, I could not find an answer to this question.
Does anyone know how I can do this? This seems like a mandatory thing to have in any production environment, and having no easy way to do it is kind of surprising coming from such a respected cloud provider as AWS.
Thanks
[Update 2021]: It is now possible to run a command in interactive mode with AWS Fargate!
News: https://aws.amazon.com/fr/blogs/containers/new-using-amazon-ecs-exec-access-your-containers-fargate-ec2/
The command to run is (the --container flag is optional if the task runs a single container):
aws ecs execute-command \
--cluster cluster-name \
--task task-id \
--container container-name \
--interactive \
--command "rails c"
Troubleshooting:
Check the AWS doc for IAM permissions: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html#ecs-exec-prerequisites
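Two prerequisites that are easy to miss, sketched below with placeholder names: ECS Exec has to be enabled on the service (only tasks started afterwards pick it up), and the task role needs the ssmmessages permissions listed in the doc above.
# Enable ECS Exec on the service and roll the tasks so they pick it up
aws ecs update-service \
  --cluster cluster-name \
  --service service-name \
  --enable-execute-command \
  --force-new-deployment

# Confirm that a running task actually has it enabled
aws ecs describe-tasks \
  --cluster cluster-name \
  --tasks task-id \
  --query 'tasks[0].enableExecuteCommand'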
After trying lots of things, I found a way to open a Rails console pointing to my production environment, so I will post it here in case somebody comes across the same issue.
To summarise: I had a Rails application deployed on Fargate, connected to an RDS Postgres database.
What I did was create a Client VPN endpoint into the VPC hosting my Rails app and my RDS database.
Then, after connecting to this VPN, I simply run my Rails production container locally (with the same environment variables), overriding the container command to run the console startup command (bundle exec rails c production).
Since it runs on my local machine, I can attach a TTY to this container as usual and access my production console.
I think this solution is good because it allows any developer working on the project to open a console without incurring any costs, and a well-thought-out security policy on the AWS side ensures that console access stays secure. Plus, you don't have to expose your database outside of your VPC.
Hope this helps someone.
Doing any sort of docker exec is a nightmare with ECS and Fargate, which makes doing things like shells or migrations very difficult.
Thankfully, a fargate task on ECS is really just an AWS server running a few super customized docker run commands. So if you have docker, jq, and the AWS CLI either on EC2 or your local machine, you can fake some of those docker run commands yourself and enter a bash shell. I do this for Django so I can run migrations and enter a python shell, but I'd assume it's the same for rails (or any other container that you need bash in)
Note that this only works if you only care about one container, spelled out in your task definition, running at a time, although I'd imagine you could jerry-rig something more complex easily enough.
For this, the AWS CLI will need to be logged in with the same IAM permissions as your Fargate task. You can do this locally by using aws configure and providing credentials for a user with the correct IAM permissions, or by launching an EC2 instance that has a role either with identical permissions, or (to keep things really simple) the role that your Fargate task runs under and a security group with identical access (plus a rule that lets you SSH into the bastion host). I like the EC2 route, because funneling everything through the public internet and a VPN is... slow. Plus you're always guaranteed to have the same IAM access as your tasks do.
You'll also need to be on the same subnet as your fargate tasks are located on, which can usually be done via a VPN, or by running this code on a bastion EC2 host inside your private subnet.
In my case I store my configuration parameters as SecureStrings within the AWS Systems Manager Parameter Store and pass them in using the ECS task definition. Those can be pretty easily acquired and set to a local environment variable using
export DATABASE_URL=$(aws ssm get-parameter --region $REGION \
--with-decryption --name parameter.name.database_url \
| jq '.Parameter["Value"]' -r)
I store my images on ECR, so I then need to log my local Docker client in to ECR:
eval $(aws ecr get-login --no-include-email --region $REGION)
Then it's just a case of running an interactive docker container that passes in the DATABASE_URL, pulls the correct image from ECR, and enters bash. I also expose port 8000 so I can run a webserver inside the shell if I want, but that's optional.
docker run -i -t \
-e DATABASE_URL \
-p 8000:8000 \
$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$DOCKER_REPO_NAME:$TAG \
/bin/bash
Once you run that you should see your copy of docker download the image from your container repository then launch you into bash (assuming bash is installed inside your container.) Docker has a pretty solid cache, so this will take a bit of time to download and launch the first time but after that should be pretty speedy.
Here's my full script
#!/bin/bash
REGION=${REGION:-us-west-2}
ENVIRONMENT=${ENVIRONMENT:-staging}
DOCKER_REPO_NAME=${DOCKER_REPO_NAME:-reponame}
TAG=${TAG:-latest}
ACCOUNT_ID=$(aws sts get-caller-identity | jq -r ".Account")
export DATABASE_URL=$(aws ssm get-parameter --region $REGION \
--with-decryption --name projectname.$ENVIRONMENT.database_url \
| jq '.Parameter["Value"]' -r)
eval $(aws ecr get-login --no-include-email --region $REGION)
IMAGE=$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$DOCKER_REPO_NAME:$TAG
docker run -i -t \
-e DATABASE_URL \
-p 8000:8000 \
$IMAGE \
/bin/bash
You cannot ssh to the underlying host when you are using the Fargate execution type for ECS. This means that you cannot docker exec into a running container.
I haven't tried this on Fargate, but you should be able to create a fargate task in which the command is rails console.
Then if you configure the task as interactive, you should be able to launch the interactive container and have access to the console via stdin.
Ok, so I ended up doing things a bit differently. Instead of trying to run the console on Fargate, I just run a console on my localhost, but configure it to use RAILS_ENV='production' and let it use my RDS instance.
Of course, to make this work you have to expose your RDS instance through an inbound rule on its security group. It's wise to restrict that rule to your local IP only, to keep it a bit more secure.
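A sketch of what that rule can look like with the CLI; the security group ID is a placeholder, and checkip.amazonaws.com is just one way to find your current public IP:
# Allow only your current public IP to reach Postgres on the RDS security group
MY_IP=$(curl -s https://checkip.amazonaws.com)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 5432 \
  --cidr "${MY_IP}/32"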
The docker-compose.yml then looks something like this:
version: '3'
services:
  web:
    stdin_open: true
    tty: true
    build: .
    volumes:
      - ./rails/.:/your-app
    ports:
      - "3000:3000"
    environment: &env_vars
      RAILS_ENV: 'production'
      PORT: '8080'
      RAILS_LOG_TO_STDOUT: 'true'
      RAILS_SERVE_STATIC_FILES: 'true'
      DATABASE_URL: 'postgresql://username:password@your-aws-rds-instance:5432/your-db'
When you then run docker-compose run web rails c it uses your local Rails codebase, but makes live changes to your RDS DB (the prime reason why you'd like access to rails console anyway).
