How to set AWS region using docker compose up with ECS context

I'm using the new Docker Compose ECS integration to create an ECS context for deploying, as described here and here. During docker context create ecs my-context I selected an existing AWS profile which has us-west-2 configured as its default region. However, docker compose up always deploys to us-east-1. I tried exporting DEFAULT_AWS_REGION, but that didn't work either. Is there a way to set the region in the context? It looks like the older docker ecs setup command asked for the region, but that command is now deprecated.

I was able to fix this via the AWS CLI directly.
aws configure set default.region eu-central-1
Source: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/configure/set.html
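Note that the environment variable the AWS CLI and SDKs actually read is AWS_DEFAULT_REGION (not DEFAULT_AWS_REGION), so exporting that may also work:
export AWS_DEFAULT_REGION=us-west-2
And if your context was created against a named profile rather than the default one, the same configure command accepts a profile flag (the profile name here is a placeholder):
aws configure set region us-west-2 --profile my-profile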

Related

How to set up a healthcheck in a Dockerfile on AWS ECS

AWS ECS never checks the health state, although I've added a HEALTHCHECK command in my Dockerfile.
I didn't put any additional settings in the ECS health check options, knowing they would override the original Docker HEALTHCHECK command.
Any ideas?
You can configure a container health check on a container in containerDefinitions in your task definition:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definition_healthcheck
I have investigated this today and it seems ECS does not support using HEALTHCHECK defined in the Dockerfile.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definition_healthcheck
explicitly says
The Amazon ECS container agent only monitors and reports on the health checks that are specified in the task definition. Amazon ECS doesn't monitor Docker health checks that are embedded in a container image but aren't specified in the container definition.
So you need to add the healthcheck option to the task definition for ECS to use it.
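For illustration, a container definition with an explicit healthCheck could look like the following sketch (the container name, image, and the curl probe are placeholder assumptions; use whatever command indicates health for your service):
"containerDefinitions": [
  {
    "name": "my-app",
    "image": "my-app:latest",
    "healthCheck": {
      "command": ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
      "interval": 30,
      "timeout": 5,
      "retries": 3,
      "startPeriod": 10
    }
  }
]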

How to supply an env file for a Docker GCP Cloud Run service

I have a .env file for my docker-compose setup, and was able to run it using "docker-compose up".
Now I have pushed the image to the container registry and want to use Cloud Run.
How can I supply the various environment variables?
I did create secrets in Secret Manager, but how do I integrate the two so that my container starts up reading all the secrets it needs?
Note: My docker-compose setup is an app with a database, but I can split them into 2 containers if needed; they would still need the secrets.
Edit: Added secret references.
EDIT:
I am unable to run my container.
If the .env file has X=x and the docker-compose environment has app.prop=${X}, should I create a secret named X or x?
Does Cloud Run use the Dockerfile or docker-compose? The image I pushed was built from docker-compose only. Sorry, I am getting confused (not assuming trivial things, as it is not working).
It is not possible to use docker-compose on Cloud Run, as it is designed for individual stateless containers. My suggestion is to create an image from your application service, upload the image to Google Container Registry in order to use it for your Cloud Run service, and connect it to Cloud SQL following the attached documentation. You can store database credentials with Secret Manager and pass them to your Cloud Run service as environment variables (check this documentation).
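As a rough sketch of the Cloud Run side (the service, image, Cloud SQL instance, and secret names below are all placeholders), a deploy that wires a Secret Manager secret into an environment variable could look like:
gcloud run deploy my-service \
  --image gcr.io/my-project/my-app \
  --add-cloudsql-instances my-project:us-central1:my-db \
  --set-secrets "DB_PASSWORD=db-password:latest"
Here --set-secrets exposes the latest version of the db-password secret as the DB_PASSWORD environment variable inside the container.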

How to write a file to the host in advance and then start the Docker container?

My task is to deploy a third-party OSRM service on Amazon ECS Fargate.
The OSRM container needs a file containing geodata passed to it at startup.
The problem is that Amazon ECS Fargate does not provide access to the host file system and does not provide the ability to attach files and folders during container deployment.
Therefore, I would like to create an intermediate image that saves the geodata file at build time, so that the container can use it at startup instead of relying on volumes.
Thanks!
As I understand it, Amazon ECS is a plain container orchestrator and does not implement Docker Swarm, so things like docker configs are off the cards.
However, you should be able to do something like this:
ID=$(docker create --name my-osrm osrm-base-image)    # create the container without starting it
docker cp ./file.ext $ID:/path/in/container           # copy the geodata file into the stopped container
docker start $ID                                      # start the container with the file already in place
The solution turned out to be quite simple.
Using this Dockerfile, I built an image on my local machine and pushed it to Docker Hub:
FROM osrm/osrm-backend:latest
# bake the geodata into the image so no volume is needed at runtime
COPY data /data
ENTRYPOINT ["osrm-routed","--algorithm","mld","/data/andorra-latest.osm.pbf"]
After that, I launched this image in AWS ECS without any extra settings or volumes.
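For reference, publishing such an image is just the usual build and push (the repository name is a placeholder):
docker build -t myuser/osrm-andorra:latest .
docker push myuser/osrm-andorra:latest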

How does Docker and AWS ECS Service interact?

I'm new to Amazon Web Services, and I'm involved in a project where we decided to use a serverless architecture consisting of Lambda (Node.js), DynamoDB, Cognito, etc. As a DevOps engineer, I'm trying to figure out how to do CI/CD for the project.
I've read multiple articles; they mention Fargate and other services, which I understand, but when it comes to Docker and ECS I'm a bit confused.
I don't know if we push the image to ECS and write the Dockerfile so that my Lambda functions can run, or if we just push the image to ECS so that the cluster of Lambda functions can run.
Could anyone with a clear explanation please assist?
Thank you
"I don't know if we push the image to ECS and write the Dockerfile so that my Lambda functions can run, or if we just push the image to ECS so that the cluster of Lambda functions can run."
You push Docker images to AWS Elastic Container Registry (ECR). ECS can then pull those images to deploy Docker containers on either EC2 or Fargate.
ECS, ECR, and Docker are totally unrelated to AWS Lambda. You don't run Docker containers in AWS Lambda.
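As a sketch of that flow (the account ID, region, and repository name below are placeholders):
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-west-2.amazonaws.com
docker build -t my-app .
docker tag my-app:latest 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:latest
A task definition in ECS then references that ECR image URI.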
I implemented something similar using Jenkins. We created the Docker image with Jenkins as part of the CI/CD pipeline, then stored those Docker images in ECR. In ECS we created a task definition and container instances; the containers then run from the ECR images we built as part of CI/CD.
The Dockerfile was part of the Docker image build.

Google Cloud Composer - Deploying Docker Image

Definitely missing something, and could use some quick assistance!
Simply, how do you deploy a Docker image to an Airflow DAG for running jobs? Does anyone have a simple example of deploying a Google container and running it via Airflow/Composer?
You can use the DockerOperator, included in the core Airflow repository.
If pulling an image from a private registry, you'll need to set up a connection with the relevant credentials and pass it to the docker_conn_id param.
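A minimal sketch of a DAG using it (the import path and some parameters vary across Airflow versions, and the image, connection ID, and DAG names here are placeholder assumptions):
from datetime import datetime

from airflow import DAG
from airflow.providers.docker.operators.docker import DockerOperator  # path varies by Airflow version

with DAG(
    dag_id="docker_image_example",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,  # trigger manually for this example
    catchup=False,
) as dag:
    run_image = DockerOperator(
        task_id="run_my_image",
        image="gcr.io/my-project/my-image:latest",  # placeholder image
        command="echo hello",
        docker_conn_id="my_registry",  # connection holding private-registry credentials
        auto_remove=True,  # clean up the container when the task finishes
    )
Note that on Cloud Composer the workers may not expose a Docker daemon, so check your environment before relying on this operator.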
