I am using Amazon ECS and my Docker image runs a PHP application.
Everything is running fine.
In the entrypoint I run supervisord in the foreground, and its logs are currently sent to CloudWatch Logs.
In my Docker image, logs are also written to these files:
/var/log/apache2/error.log
/var/log/apache2/access.log
/var/app/logs/dev.log
/var/app/logs/prod.log
Now I want to send those logs to AWS CloudWatch as well. What is the best way to do that?
Also, I have multiple containers for a single app, so for example all four containers will have these logs.
Initially I thought of installing the AWS Logs agent in the container itself, but I have to use the same Docker image for local, CI and non-prod environments, and I don't want to use CloudWatch Logs there.
Is there any other way to do this?
In your task definition, specify the logging configuration as follows:
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "LogGroup",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "Prefix"
}
}
awslogs-stream-prefix is optional for the EC2 launch type but required for Fargate.
In the UserData section when you launch a new instance, register the instance to the cluster and make sure you specify awslogs as one of the available logging drivers as well:
#!/bin/bash
echo 'ECS_CLUSTER=ClusterName' > /etc/ecs/ecs.config
echo 'ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","awslogs"]' >> /etc/ecs/ecs.config
start ecs
More Info:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
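As a sketch of how this ties together (LogGroup and the task definition file name are placeholders carried over from the snippet above), you could create the log group and register the task definition with the AWS CLI:
# Create the CloudWatch Logs group referenced in the task definition
aws logs create-log-group --log-group-name LogGroup --region us-east-1
# Register the task definition that contains the awslogs logConfiguration
aws ecs register-task-definition --cli-input-json file://task-definition.json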
You have to do two things:
Configure the ECS Task Definition to take logs from the container output and pipe them into a CloudWatch Logs group/stream. To do this, you add a LogConfiguration property to each ContainerDefinition in your ECS task definition. See the AWS documentation for the awslogs log driver (linked elsewhere in this thread) for the details.
Instead of writing logs to a file in the container, write them to /dev/stdout and /dev/stderr. You can use these paths directly in your Apache configuration (or symlink the existing log files to them) and the Apache log messages will then show up in the container's log.
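A minimal sketch of that second step, assuming the stock Debian Apache layout from the question (this is the same trick the official PHP/Apache images use; run these in the Dockerfile or the entrypoint):
# Redirect Apache's log files to the container's stdout/stderr
ln -sf /dev/stdout /var/log/apache2/access.log
ln -sf /dev/stderr /var/log/apache2/error.log
# The application log files can be handled the same way if the app
# cannot be reconfigured to log to stdout directly
ln -sf /dev/stdout /var/app/logs/prod.log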
You can use Docker's awslogs logging driver.
Refer to the documentation on how to set it up:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
Given your defined use case:
Collect logs from 4 different files from within a container
Apply the Docker log driver awslogs for the task
As you have already seen in previous answers, awslogs uses stdout as the logging mechanism. It has also been stated that awslogs is applied per container, which means one CloudWatch log stream per running container.
To reach your goal when switching to stdout for all logging is not an option for you:
Run a separate container as the logging mechanism (remember: one log stream per container) for the main container.
This leads to a separate container that applies the awslogs driver, reads the files from the other container sequentially (asynchronously is also possible, but more complex), and pushes them into a separate CloudWatch log stream of your choice.
This way you have a separate log stream, or log group if you like, for every file.
Prerequisites:
The main container and a separate logging container with access to a volume of the main container or the host (see the sketch right after these prerequisites).
See this question on how shared volumes between containers are realized
via Docker Compose:
Docker Compose - Share named volume between multiple containers
The logging container needs to talk to the host Docker daemon. Running Docker inside Docker is not recommended and also not needed here!
Here is a link that shows how to make the logging container talk to the host Docker daemon: https://itnext.io/docker-in-docker-521958d34efd
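A minimal sketch with plain docker commands (image and container names are hypothetical) of how the main container and the logging container can share the log directory through a named volume:
# Create a named volume that both containers mount
docker volume create app_logs
# The main application container writes its log files into the shared volume
docker run -d --name main_app -v app_logs:/var/app/logs my-php-image
# The logging container gets read-only access to the same files
docker run -d --name log_shipper -v app_logs:/var/app/logs:ro my-logging-image /var/app/logs/prod.log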
Create the logging docker container with a Dockerfile like this:
FROM ubuntu
...
ENTRYPOINT ["cat"]
CMD ["loggingfile.txt"]
You can use this container like a function with the input parameter logging_file_name, writing the file to stdout and from there directly into AWS CloudWatch:
docker run -it --log-driver=awslogs \
    --log-opt awslogs-region=<region> \
    --log-opt awslogs-group=<your defined group name> \
    --log-opt awslogs-stream=<your defined stream name> \
    --log-opt awslogs-create-group=true \
    <Logging_Docker_Image> <logging_file_name>
With this setup you have a separate Docker logging container that talks to the Docker host, spins up another Docker container to read the log files of the main container, and pushes them to AWS CloudWatch, fully customized by you.
Is it possible to output container logs to a file per container using fluentd?
I installed fluentd ( by running a fluentd official image) and am running a multiple application containers on the host.
I was able to output all of containers logs to one file, but I’d like to create a log file per container.
I’m thinking about using “match” directive, but have no idea.
My task is to deploy a third-party OSRM service on Amazon ECS Fargate.
At startup the OSRM Docker container needs to be given a file containing geodata.
The problem is that Amazon ECS Fargate does not provide access to the host file system and does not provide the ability to attach files and folders during container deployment.
Therefore, I would like to create an intermediate image that stores the geodata file at build time, so that when the container starts it can use that file without having to define volumes.
Thanks!
As I understand it, Amazon ECS is a plain container orchestrator and does not implement Docker Swarm, so things like docker configs are off the table.
However, you should be able to do something like this:
ID=$(docker create --name my-osrm osrm-base-image)
docker cp ./file.ext $ID:/path/in/container
docker start $ID
The solution turned out to be quite simple.
For this Dockerfile, I created an image on my local machine and hosted it on DockerHub:
FROM osrm/osrm-backend:latest
COPY data /data
ENTRYPOINT ["osrm-routed","--algorithm","mld","/data/andorra-latest.osm.pbf"]
After that, without any extra settings or volumes, I launched this image in AWS ECS.
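For reference, a minimal sketch of the build-and-push step (the repository name is hypothetical):
# Build the image with the baked-in geodata and push it to Docker Hub
docker build -t mydockerhubuser/osrm-andorra:latest .
docker push mydockerhubuser/osrm-andorra:latest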
Is there any way to use Docker's --rm option, which auto-removes the container once it exits, but still allow the container's logs to persist?
I have an application that creates containers to process jobs, and then once all jobs are complete, the container exits and is deleted to conserve space. However, in case a bug caused the container's process to exit prematurely, I'd like to persist the log files so I can confirm it exited cleanly or diagnose a faulty exit.
However, the --rm option appears to remove the container's logs along with the container.
Log to somewhere outside of the container.
You could mount a host directory into your container, so logs are written to the host directory and kept after rm.
Or you can mount a volume on your container, which will persist after rm.
Or you can set up rsyslog - or some similar log collection agent - to export your logs to a remote service. See https://www.simulmedia.com/blog/2016/02/19/centralized-docker-logging-with-rsyslog/ for more on this solution.
The first two are hacks, but they are easier to get up and running on your workstation/server (both are sketched below). If this is all cloud-hosted there might be a decent log-offloading option (CloudWatch on AWS) which saves you the hassle of configuring rsyslog.
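A minimal sketch of the first two options (paths and image name are hypothetical):
# Option 1: bind-mount a host directory so the log files survive --rm
docker run --rm -v "$(pwd)/logs:/var/log/myapp" my-job-image
# Option 2: use a named volume, which also persists after the container is removed
docker volume create job_logs
docker run --rm -v job_logs:/var/log/myapp my-job-image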
I have two containers within my pod: one container is an app that writes logs (stdout/stderr), and the second container is supposed to read the logs of the app container and write them to a logs.txt file.
My question is, how can I tell the second container to read the logs of the app container?
When I'm on the host (with k8s), I can run:
$ kubectl logs -f my_pod >> logs.txt
and get the logs of pod.
But how can I tell one container to read the logs of another container inside same pod?
I did something similar with Docker:
I mounted the docker.sock of the host's Docker daemon and ran $ docker logs -f app_container >> logs.txt
But I can't (as far as I know) mount kubectl into a container in a pod to run kubectl commands.
So how do you think I can do it?
You could mount a volume on both containers, write a logfile to that volume in your "app" container, then read from that logfile in the other container.
The preferred solution is probably to use some kind of log shipper or sidecar to scrape the logs from stdout.
(Side note: you should probably never mount docker.sock into a container or try to run kubectl from inside a container against the cluster, because you essentially give that container control over your cluster, which is a security no-no imho.
If you still have to do that for whatever reason, make sure the cluster security is REALLY tight.)
You can use an HTTP request to obtain another pod's logs from inside the cluster. Here are the docs.
Basically you can use curl to make the following request:
GET /api/v1/namespaces/{namespace}/pods/{name}/log
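For example, from inside the second container you could call that endpoint with the pod's service account token (a sketch: it assumes the service account is allowed to read pod logs, and the namespace, pod and container names are placeholders):
# Token and CA bundle are mounted into every pod by default
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# Stream the logs of the "app" container in the same pod and append them to a file
curl --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc/api/v1/namespaces/default/pods/my-pod/log?container=app&follow=true" \
  >> logs.txt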
I really enjoy the concept of having a cluster of Docker machines available to execute Docker services. I also like the additional features not available to singular Docker containers (such as docker secret).
But I really have no need for long-running services. My use case is simply to execute a bash script that uses the Docker swarm to take in an arbitrary number of finite commands and run each as a Docker container based on the same Docker image, while using the secrets loaded into the swarm.
Can I do this?
I do not want to have this container be "long running". I want it to run, and then exit with the output when the bash script loaded into the container is finished.
You can apply the ideas presented in "One-shot containers on Docker Swarm" from Alex Ellis.
You still need to create a service, but with the right restart policy.
For instance, for a quick web server:
docker service create --restart-condition=none --name crawler1 -e url=http://blog.alexellis.io -d crawl_site alexellis2/href-counter
(--restart-condition, not --restart-policy, as commented by ethergeist)
So by setting the restart condition to none, the container will be scheduled somewhere in the swarm as a task. The container will execute and then, when done, it will exit.
If the container fails to start for a valid reason then the restart policy will mean the application code never executes. It would also be ideal if we could immediately return the exit code (if non-zero) and the accompanying log output, too.
For the last part, use his tool: alexellis/jaas.
Run your first one-shot container:
# jaas -rm -image alexellis2/cows:latest
The -rm flag removes the Swarm service that was used to run your container.
The exit code from your container will also be available, you can check it with echo $?.
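If you prefer to check on the one-shot task with plain Docker commands instead of jaas (a sketch, reusing the crawler1 service created above):
# See whether the task ran and how it exited
docker service ps --no-trunc crawler1
# Read the task's output (works with the default json-file log driver)
docker service logs crawler1
# Remove the service once you have what you need
docker service rm crawler1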