How to set up a healthcheck in a Dockerfile on AWS ECS - docker

AWS ECS never reports a health status even though I've added a HEALTHCHECK command in my Dockerfile.
I didn't set any additional health check options on the ECS side, knowing they would override the original Docker HEALTHCHECK command.
Any ideas?

You can configure a container health check on each container in containerDefinitions in your task definition:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definition_healthcheck

I have investigated this today and it seems ECS does not support using HEALTHCHECK defined in the Dockerfile.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definition_healthcheck
explicitly says
The Amazon ECS container agent only monitors and reports on the health checks that are specified in the task definition. Amazon ECS doesn't monitor Docker health checks that are embedded in a container image but aren't specified in the container definition.
So you need to add the healthcheck option to the task definition for ECS to use it.
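For illustration, a healthCheck entry on a container definition might look something like this (the curl command and the timing values here are placeholders; use whatever check your image's HEALTHCHECK ran):
"healthCheck": {
    "command": [ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ],
    "interval": 30,
    "timeout": 5,
    "retries": 3,
    "startPeriod": 10
}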

Related

Run a new task in Airflow Kubernetes with its own isolated Dockerfile?

How do I run a new pipeline with its own isolated Dockerfile in Airflow on Kubernetes?
I've been using Dagster, where I can run new pipelines from their own Dockerfiles, but I can't figure out how to do this in Airflow.
If you want to run a Docker container task on Kubernetes using Airflow, regardless of the executor you are using and how you deployed the Airflow server, you can use the KubernetesPodOperator.
You can specify the Docker image by providing the image argument, and you can also override the image entrypoint and provide extra args (cmds and arguments). You can configure the pod as you need (labels, volumes, secrets, configMaps, ...); see the sketch below.
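A minimal sketch of such a task (the DAG id, namespace, image, and command are placeholders; the exact import path and DAG arguments can vary with your Airflow and cncf.kubernetes provider versions):
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

with DAG("isolated_pipeline", start_date=datetime(2023, 1, 1), schedule=None, catchup=False) as dag:
    run_pipeline = KubernetesPodOperator(
        task_id="run_pipeline",
        name="run-pipeline",
        namespace="airflow",                     # placeholder namespace
        image="my-registry/my-pipeline:latest",  # the pipeline's own image, built from its own Dockerfile
        cmds=["python"],                         # override the image entrypoint
        arguments=["run_pipeline.py"],           # extra arguments passed to the command
        get_logs=True,
    )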

How to set AWS region using docker compose up with ECS context

I'm using the new docker compose ECS integration to create an ecs context for deploying as described here and here. I selected during docker context create ecs my-context for it to use an existing AWS profile which has us-west-2 configured as its default region. However, docker compose up always results in it deploying to us-east-1. I tried exporting DEFAULT_AWS_REGION but that didn't work either. Is there a way to set the region in the context? It looks like the older docker ecs setup command asked for the region but that cmd is now deprecated.
I was able to fix this via the AWS CLI directly.
aws configure set default.region eu-central-1
Source: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/configure/set.html
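If the ECS context is tied to a named AWS profile, the same approach works per profile (the profile name and region here are placeholders):
aws configure set region us-west-2 --profile my-ecs-profile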

How to write a file to the host in advance and then start the Docker container?

My task is to deploy a third-party OSRM service on Amazon ECS Fargate.
The OSRM container needs to be given a file containing geodata at startup.
The problem is that Amazon ECS Fargate does not provide access to the host file system and does not let you attach files or folders when deploying a container.
Therefore, I would like to create an intermediate image that saves the geodata file at build time, so that the container can use that file when it starts instead of relying on attached volumes.
Thanks!
As I understand it, Amazon ECS is a plain container orchestrator and does not implement Docker Swarm, so things like docker configs are off the table.
However, you should be able to do something like this:
# create (but don't start) a container from the base image, capturing its ID
ID=$(docker create --name my-osrm osrm-base-image)
# copy the geodata file into the stopped container's filesystem
docker cp ./file.ext $ID:/path/in/container
# then start it
docker start $ID
The solution turned out to be quite simple.
From this Dockerfile, I built an image on my local machine and pushed it to Docker Hub:
FROM osrm/osrm-backend:latest
COPY data /data
ENTRYPOINT ["osrm-routed","--algorithm","mld","/data/andorra-latest.osm.pbf"]
After that, I launched this image in AWS ECS without any extra settings or volumes.

How can I set up AWS CloudWatch Logs with a Docker ECS container

I am using Amazon ECS, and my Docker image runs a PHP application.
Everything is running fine.
In the entrypoint I am running supervisord in the foreground, and those logs are currently sent to CloudWatch Logs.
In my Docker image, logs are written to these files:
/var/log/apache2/error.log
/var/log/apache2/access.log
/var/app/logs/dev.log
/var/app/logs/prod.log
Now I want to send those logs to AWS CloudWatch. What's the best way to do that?
Also, I have multiple containers for a single app, so for example all four containers will have these logs.
Initially I thought of installing the AWS logs agent in the container itself, but I have to use the same Docker image for local, CI, and non-prod environments, and I don't want to use CloudWatch Logs there.
Is there any other way to do this?
In your task definition, specify the logging configuration as follows:
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "LogGroup",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "Prefix"
}
}
awslogs-stream-prefix is optional for the EC2 launch type but required for Fargate.
In the UserData section, when you launch a new instance, register the instance to the cluster and make sure you list awslogs as an available logging driver as well:
#!/bin/bash
echo 'ECS_CLUSTER=ClusterName' > /etc/ecs/ecs.config
echo 'ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","awslogs"]' >> /etc/ecs/ecs.config
start ecs
More Info:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
You have to do two things:
Configure the ECS Task Definition to take logs from the container output and pipe them into a CloudWatch Logs group/stream. To do this, you add a LogConfiguration property to each ContainerDefinition property in your ECS task definition (see the task definition LogConfiguration and awslogs documentation for details).
Instead of writing logs to files in the container, write them to /dev/stdout and /dev/stderr. You can use these paths directly in your Apache configuration, and the Apache log messages will then show up in the container's log output.
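As an illustration (not part of the original answer), the Apache directives could look roughly like this, assuming the standard combined log format is already defined in your configuration:
ErrorLog /dev/stderr
CustomLog /dev/stdout combined
The official PHP Apache images take a similar approach by symlinking the log files under /var/log/apache2/ to the container's stdout and stderr.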
You can use Docker's awslogs logging driver.
Refer to the documentation for how to set it up:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
Given your defined use case:
Collect logs from 4 different files from within a container
Apply the Docker log driver awslogs for the task
In the previous answers you have already seen that awslogs uses stdout as its logging mechanism. It has also been stated that awslogs is applied per container, which means one CloudWatch log stream per running container.
If switching all logging to stdout is not an option for you, you can reach your goal like this:
You run a separate container as the logging mechanism for the main container (remember: one log stream per container).
This leads to a separate container which uses the awslogs driver, reads the files from the other container sequentially (asynchronously is also possible, but more complex), and pushes them into a separate CloudWatch log stream of your choice.
This way, you have separate log streams (or groups, if you like) for every file.
Prerequisites:
The main container and a separate logging container with access to a volume of the main container or the host
See this question for how shared volumes between containers are realized via Docker Compose (a minimal compose sketch also follows below):
Docker Compose - Share named volume between multiple containers
The logging container needs to talk to the host Docker daemon. Running Docker inside Docker is not recommended and is also not needed here!
Here is a link that shows how the logging container can talk to the host Docker daemon: https://itnext.io/docker-in-docker-521958d34efd
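A minimal docker-compose sketch of these prerequisites (the service names, volume name, images, and log path are placeholders): the main container writes its log files into a named volume, the logging container reads the same volume, and the Docker socket is mounted so the logging container can talk to the host daemon.
version: "3.8"
services:
  app:
    image: my-php-app:latest              # hypothetical main container image
    volumes:
      - app-logs:/var/app/logs            # main container writes its log files here
  log-shipper:
    image: my-logging-image:latest        # hypothetical logging container image
    volumes:
      - app-logs:/var/app/logs:ro         # same named volume, read-only
      - /var/run/docker.sock:/var/run/docker.sock   # talk to the host Docker daemon
volumes:
  app-logs: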
Create the logging docker container with a Dockerfile like this:
FROM ubuntu
...
ENTRYPOINT ["cat"]
CMD ["loggingfile.txt"]
You can run this container like a function, with the log file name as an input parameter, so the file's contents are written to stdout and from there directly into AWS CloudWatch:
docker run -it --log-driver=awslogs \
    --log-opt awslogs-region=<region> \
    --log-opt awslogs-group=<your defined group name> \
    --log-opt awslogs-stream=<your defined stream name> \
    --log-opt awslogs-create-group=true \
    <Logging_Docker_Image> <logging_file_name>
With this setup you have a separate Docker logging container that talks to the Docker host and spins up another container to read the log files of the main container and push them to AWS CloudWatch, fully customized by you.

How to run a Docker image from the ECS repo?

I have managed to push a Docker image to the ECS repository (I also pushed it to a Docker Hub repo).
I have created a cluster and an EC2 instance with public IP.
What now? How do you run the server? Do you have to push it from the repo somewhere? Will it just run automatically now? Do I have to set up a script somewhere?
You specify the repo to pull your container from as part of the container setup step inside the task definition.
In that step there is a field where you specify the image URI the container should be pulled from.
Once that's all complete, you need to make sure your EC2 instance is part of your cluster (which you also need to create). As part of your cluster configuration, you can launch your task on a host that belongs to that cluster (or set up a service to manage the launching for you). When a task is launched on a host, all of the containers specified in that task get pulled and started via whatever entrypoint you've defined in your Dockerfile (or, alternatively, in your task definition).
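For illustration, the equivalent steps from the AWS CLI might look roughly like this (the cluster, service, and task definition names are placeholders, and taskdef.json is a hypothetical file containing your container definitions):
# register the task definition that references your image
aws ecs register-task-definition --cli-input-json file://taskdef.json

# run it once on the cluster your instance has joined
aws ecs run-task --cluster my-cluster --task-definition my-task

# or keep a fixed number of copies running via a service
aws ecs create-service --cluster my-cluster --service-name my-service \
    --task-definition my-task --desired-count 1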
