The logs are created under the path /var/lib/docker/containers/<container-id>/*, and symlinked under the path /var/log/containers/*.
I'm wondering how the log of each Pod ends up under the /var/lib/docker/containers/<container-id>/* path.
I'm also wondering whether it is right to use the json-file driver in an environment that collects logs with Fluentd.
json-file is a logging driver supplied with Docker (usually the default in the Docker daemon setup).
For any container with ID CID, Docker will create the file /var/lib/docker/containers/CID/CID-json.log containing the container's stdout and stderr. You can see this whenever you docker run something.
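You can verify this yourself, since docker inspect exposes the log path directly (a minimal sketch; the container name hello is just an example):

docker run --name hello alpine echo "hello world"
docker inspect --format '{{.LogPath}}' hello
# prints something like /var/lib/docker/containers/<CID>/<CID>-json.log
sudo cat "$(docker inspect --format '{{.LogPath}}' hello)"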
This logging is completely independent of Kubernetes.
Kubernetes
Kubernetes manages the symlinks in /var/log/containers/* when Pod containers start or stop, pointing them at the log files of the underlying container runtime.
When using Docker, Kubernetes relies on the specific json-file log path layout to create functional symlinks. If you use another custom logging solution in Docker, those Kubernetes symlinks won't be functional.
The recommended setup in the Kubernetes logging architecture is to have Docker rotate log files at 10 MB.
kube-up.sh's GCE config is the de facto recommended container runtime configuration: json-file is used, rotated at 10 MB, and 5 old files are kept.
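As a sketch, the equivalent standalone Docker daemon configuration would go in /etc/docker/daemon.json (log-driver and log-opts are standard daemon options; the values mirror the kube-up.sh defaults mentioned above):

{
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "10m",
        "max-file": "5"
    }
}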
CRI-O
The main alternative container runtime to Docker is CRI-O.
CRI-O also logs to a local JSON file, in a format similar to Docker's.
The kubelet rotates CRI-O log files in a manner similar to Docker's rotation.
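For reference, with CRI runtimes the rotation thresholds are kubelet settings rather than Docker daemon options; a sketch of the corresponding kubelet flags (these map to containerLogMaxSize and containerLogMaxFiles in the kubelet config):

kubelet ... --container-log-max-size=10Mi --container-log-max-files=5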
Log Collection
Any Kubernetes log collector will rely on the Kubernetes symlinks to the JSON files. Collectors should expect those files to be rotated underneath them; fluentd supports this.
If you're having an issue with your fluentd setup, I would recommend adding the specific details of the issue you are seeing, with examples of the data in the log files and the data received on the collection end, either to your other question or as an issue against the fluentd project you used to set up your k8s log collection.
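For reference, a typical fluentd source for the Kubernetes symlinks looks roughly like this (a sketch; the tag and pos_file values are illustrative, and in_tail follows rotated files):

<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json
  </parse>
</source>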
Related
Is it possible to output container logs to a file per container using fluentd?
I installed fluentd (by running the official fluentd image) and am running multiple application containers on the host.
I was able to send all the containers' logs to one file, but I'd like to create a log file per container.
I'm thinking about using the "match" directive, but have no idea how.
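One possible direction, as a sketch rather than a tested config: fluentd's file output can expand the tag into the output path, so events tagged per container land in separate files. The docker.** tag pattern here assumes your source tags events per container (e.g. via the fluentd Docker log driver):

<match docker.**>
  @type file
  path /var/log/fluent/${tag}
  append true
  <buffer tag>
    flush_interval 10s
  </buffer>
</match>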
I have a Java/Spring Boot container application running on Amazon ECS with EC2 as the underlying service for the container cluster. A datadog-agent container (v7.18.1-jmx) is also running within the cluster to feed the logs/metrics back to the Datadog servers. The logs are flowing through to the Datadog web app as expected, but I see the same log line 3 times in the UI. The following environment variables have been set:
DD_API_KEY=<API-KEY>
DD_APM_ENABLED=true
DD_APM_ENV=dev
DD_APM_NON_LOCAL_TRAFFIC=true
DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true
DD_DOGSTATSD_ORIGIN_DETECTION=true
DD_DOGSTATSD_TAGS=["env:dev"]
DD_LOG_LEVEL=error
DD_LOGS_CONFIG_COMPRESSION_LEVEL=1
DD_LOGS_CONFIG_USE_COMPRESSION=true
DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL=true
DD_LOGS_CONFIG_USE_HTTP=true
DD_LOGS_ENABLED=true
DD_TAGS=environment:dev
DD_URL=<Datadog-url>
The following mount points are present in the datadog-agent container definition in ECS:
Container Path            Source Volume    Read only
/var/run/docker.sock      docker_sock      true
/host/proc/               proc             true
/host/sys/fs/cgroup       cgroup           true
/opt/datadog-agent/run    pointdir
/etc/passwd               passwd           true
I tried replicating this by setting up the application and the datadog-agent on my Docker Desktop and there seems to be no issue with that setup. Is it happening because the same log content is getting captured at multiple mount points? Any help would be great!
The problem was unrelated to Datadog or Docker. It was an issue with how log4j was configured in the application: there were multiple appenders being initialized programmatically, which led to this problem.
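For anyone hitting the same symptom: when a logger's events also propagate to ancestor loggers with their own appenders, each line is written once per appender. A minimal log4j2 sketch of the fix (the logger name is illustrative; additivity="false" is what stops the duplication):

<Loggers>
    <!-- Without additivity="false", events from this logger would also
         reach the Root logger's appender and show up a second time. -->
    <Logger name="com.example.app" level="info" additivity="false">
        <AppenderRef ref="Console"/>
    </Logger>
    <Root level="info">
        <AppenderRef ref="Console"/>
    </Root>
</Loggers>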
I want to create some Docker images that generate text files. However, since the images are pushed to Container Registry in GCP, I am not sure where the files will be generated when I use kubectl run myImage. If I specify a path in the program, like '/usr/bin/myfiles', would they be downloaded to the VM instance where I am typing "kubectl run myImage"? I think this is probably not the case. What is the solution?
Ideally, I would like all the files to be in one place.
Thank you
Container Registry and Kubernetes are mostly irrelevant to the issue of where a container will persist files it creates.
A process running within a container that generates files will persist them to the container instance's file system. The exceptions are stdout and stderr, which are both available without further ado.
When you run container images, you can mount volumes into the container instance and this provides possible solutions to your needs. Commonly, when running Docker Engine, it's common to mount the host's file system into the container to share files between the container and the host: docker run ... --volume=[host]:[container] yourimage ....
On Kubernetes, there are many types of volumes. A seemingly obvious solution is gcePersistentDisk, but it has the limitation that these disks may only be mounted for writing by one pod at a time. A more powerful solution may be an NFS-based option such as nfs or gluster. These should give you a means to consolidate files outside of the container instances.
Another good solution, though I'm unsure whether it's available to you, would be to write your files as Google Cloud Storage objects.
A tenet of containers is that they should operate without making assumptions about their environment. Your containers should not assume they are running on Kubernetes and should not assume any non-default volumes. By this I mean that your containers should simply write files to the container's file system; when you run the container, you apply the configuration (e.g. an NFS volume mount or a GCS bucket mount) that actually persists the files beyond the container.
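As an illustration, a minimal Pod sketch using an nfs volume (the server, export path, and image name are placeholders, not something from your setup):

apiVersion: v1
kind: Pod
metadata:
  name: file-writer
spec:
  containers:
  - name: app
    image: your-image                # placeholder: the image that writes the files
    volumeMounts:
    - name: shared
      mountPath: /data/myfiles       # the path your program writes to
  volumes:
  - name: shared
    nfs:
      server: nfs.example.internal   # placeholder NFS server
      path: /exports/myfiles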
HTH!
I've configured a Filebeat instance, and once it was running without errors, I figured out that it does nothing.
I found the following line in the log:
INFO log/input.go:138 Configured paths: [/var/lib/docker/containers/*/*.log]
A quick check revealed that the difference between OpenShift and pure Docker is that under Docker the directories in /var/lib/docker/containers contain log files, and under OpenShift they don't.
How should I configure Filebeat to work under OpenShift?
AFAIK OpenShift also writes container logs in the /var/lib/docker/containers/<hash>/*-json.log format; refer to Viewing available container logs for more details. If you cannot find them in that directory, your Docker log driver might be configured as journald, which you can check in /etc/sysconfig/docker.
OPTIONS=' --selinux-enabled --log-driver=journald --signature-verification=False'
Then you should change journald to json-file so that logs go to /var/lib/docker/containers/<hash>/*-json.log.
OPTIONS=' --selinux-enabled --log-driver=json-file --signature-verification=False'
You need to restart docker.service for the change to take effect.
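For example (systemctl assumes a systemd-based host; docker info can confirm the active driver):

sudo systemctl restart docker
docker info --format '{{.LoggingDriver}}'   # should now print: json-file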
I am using Amazon ECS, and my Docker image runs a PHP application.
Everything is running fine.
In the entrypoint I am using supervisord in the foreground, and those logs are currently sent to CloudWatch Logs.
In my Docker image, logs are also written to these files:
/var/log/apache2/error.log
/var/log/apache2/access.log
/var/app/logs/dev.log
/var/app/logs/prod.log
Now I want to send those logs to AWS CloudWatch. What's the best way to do that?
Also, I have multiple containers for a single app, so for example all four containers will have these logs.
Initially I thought of installing the AWS Logs agent in the container itself, but I have to use the same Docker image for local, CI, and non-prod environments, so I don't want CloudWatch logging there.
Is there any other way to do this?
In your task definition, specify the logging configuration as follows:
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "LogGroup",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "Prefix"
}
}
awslogs-stream-prefix is optional for the EC2 launch type but required for Fargate.
In the UserData section when you launch a new instance, register the instance with the cluster and make sure you allow the awslogs logging driver as well:
#!/bin/bash
echo 'ECS_CLUSTER=ClusterName' > /etc/ecs/ecs.config
echo ECS_AVAILABLE_LOGGING_DRIVERS='[\"json-file\", \"awslogs\"]' >> /etc/ecs/ecs.config
start ecs
More Info:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
You have to do two things:
Configure the ECS Task Definition to take logs from the container output and pipe them into a CloudWatch Logs group/stream. To do this, add a LogConfiguration property to each ContainerDefinition property in your ECS task definition. You can see the docs for this here, here, and here.
Instead of writing logs to a file in the container, write them to /dev/stdout and /dev/stderr. You can use these paths directly in your Apache configuration (see the sketch below), and you should see the Apache log messages in the container's log.
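As a sketch, the relevant Apache directives would look like this (whether your image already symlinks its log files to these devices, as the official httpd/php images do, is worth checking first):

ErrorLog /dev/stderr
CustomLog /dev/stdout combined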
You can use Docker's awslogs logging driver.
Refer to the documentation on how to set it up:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
Given your defined use case:
Collect logs from four different files within a container
Apply the Docker log driver awslogs for the task
As you have already seen in previous answers, awslogs uses stdout as its logging mechanism. It has also been stated that awslogs is applied per container, which means one CloudWatch log stream per running container.
To fulfill your goal when switching all logging to stdout is not an option for you:
You apply a separate container as the logging mechanism (remember: one log stream per container) for the main container.
This leads to a separate container which applies the awslogs driver, reads the files from the other container sequentially (async is also possible, but more complex), and pushes them into a separate CloudWatch log stream of your choice.
This way, you have separate logging streams, or groups if you like, for every file.
Prerequisites:
The main container, and a separate logging container with access to a volume of the main container or to the host.
See this question for how shared volumes between containers are realized via Docker Compose:
Docker Compose - Share named volume between multiple containers
The logging container needs to talk to the host Docker daemon. Running Docker inside Docker is not recommended and also not needed here!
Here is a link showing how to make the logging container talk to the host Docker daemon: https://itnext.io/docker-in-docker-521958d34efd
Create the logging Docker container with a Dockerfile like this:
FROM ubuntu
...
ENTRYPOINT ["cat"]
CMD ["loggingfile.txt"]
You can run this container like a function, with the log file name as an input parameter, to write that file to stdout and hence directly into AWS CloudWatch:
docker run -it --log-driver=awslogs \
    --log-opt awslogs-region=<region> \
    --log-opt awslogs-group=<your defined group name> \
    --log-opt awslogs-stream=<your defined stream name> \
    --log-opt awslogs-create-group=true \
    <Logging_Docker_Image> <logging_file_name>
With this setup you have a separate logging container, which talks to the Docker host and spins up another container that reads the log files of the main container and pushes them to AWS CloudWatch, fully customized by you.