How to add configuration to Logging Agent from Docker Container?

I'm trying to run a Docker container on Compute Engine. Everything works fine and my PHP app is correctly returning all data, but I want to increase log verbosity.
For now I've added two config files for fluentd inside a container config dir:
This one for nginx:
<source>
  @type tail
  format nginx
  path /var/log/feedbacks/nginx-access.log
  pos_file /var/lib/google-fluentd/pos/nginx-access.pos
  read_from_head true
  tag nginx-access
</source>
<source>
  @type tail
  format none
  path /var/log/feedbacks/nginx-error.log
  pos_file /var/lib/google-fluentd/pos/nginx-error.pos
  read_from_head true
  tag nginx-error
</source>
And this one for the PHP log output:
<source>
  @type tail
  format /^\[(?<time>[\d\-]+ [\d\:]+)\] (?<channel>.+)\.(?<level>(DEBUG|INFO|NOTICE|WARNING|ERROR|CRITICAL|ALERT|EMERGENCY))\: (?<message>[^\{\}]*) (?<context>(\{.+\})|(\[.*\])) (?<extra>(\{.+\})|(\[.*\]))\s*$/
  path /var/log/feedbacks/structured.log
  pos_file /var/lib/google-fluentd/pos/feedbacks.pos
  read_from_head true
  tag feedbacks
</source>
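For reference, that format regex is shaped for Monolog-style output; a made-up example line it would match looks like this:
[2023-05-08 12:34:56] app.INFO: User feedback stored {"user_id":42} []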
I've mounted these two config files as follows, along with the corresponding log files:
container path: /usr/src/app/var/logs/, host path: /var/log/feedbacks/, mode: r/w
container path: /usr/src/app/docker/runnable/fluentd/, host path: /etc/google-fluentd/config.d/, mode: r/w
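Expressed as docker run volume flags, a rough sketch of those two mounts would be the following (my-php-app is just a placeholder image name; on Compute Engine the mounts are actually declared in the instance's container configuration rather than on a command line):
# host path on the left, container path on the right
docker run -d \
  -v /var/log/feedbacks/:/usr/src/app/var/logs/ \
  -v /etc/google-fluentd/config.d/:/usr/src/app/docker/runnable/fluentd/ \
  my-php-app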
But when I open a shell (/bin/bash) inside the stackdriver-logging-agent container and look at these directories, there is nothing inside; maybe I'm missing something...
Thanks for helping!

stackdriver-logging-agent reads a container's logs through the equivalent of docker logs [container]. This provides a consistent API for processes on the host OS to gather container logs.
By default, the container's stdout|stderr are sent to docker logs, and it's this stream that the stackdriver-logging-agent collects and forwards to the Stackdriver service.
If I understand correctly, you'd need to ensure that your PHP app is generating the richer logs and that these are being sent to stdout|stderr.
If you were to use Nginx's stock Docker image, it does this:
lrwxrwxrwx 1 root root 11 May 8 03:01 access.log -> /dev/stdout
lrwxrwxrwx 1 root root 11 May 8 03:01 error.log -> /dev/stderr
See Docker's documentation here:
https://docs.docker.com/config/containers/logging/
I was unable to find a good explanation for this for Container OS on Google's site.
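As a minimal sketch (assuming the PHP app writes to /usr/src/app/var/logs/structured.log inside its container, as the mounts in the question suggest), the same trick could be applied in the app image's Dockerfile:
# Hypothetical Dockerfile lines for the PHP app image: redirect the
# structured log file to the container's stdout so that docker logs
# (and therefore the logging agent) picks it up.
RUN mkdir -p /usr/src/app/var/logs \
 && ln -sf /dev/stdout /usr/src/app/var/logs/structured.log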

Related

Logspout container in Docker

I am trying to deploy a logspout container in Docker, but keep running into an issue that I have searched for on this website and GitHub to no avail, so I'm hoping someone knows the answer.
I followed the following commands as per the Readme here: https://github.com/gliderlabs/logspout
(1) docker pull gliderlabs/logspout:latest (also tried with logspout:master, same results)
(2) docker run -d --name="logspout" --volume=/var/run/docker.sock:/var/run/docker.sock --publish=127.0.0.1:8000:80 gliderlabs/logspout (also tried with -v /var/run/docker.sock:/var/run/docker.sock, same results)
The container gets created but stops immediately. When I check the container logs (docker container logs logspout), I only see the following entries:
2021/12/19 06:37:12 # logspout v3.2.14 by gliderlabs
2021/12/19 06:37:12 # adapters: raw syslog tcp tls udp multiline
2021/12/19 06:37:12 # options :
2021/12/19 06:37:12 persist:/mnt/routes
2021/12/19 06:37:12 # jobs : pump routes http[health,logs,routes]:80
2021/12/19 06:37:12 # routes : none
2021/12/19 06:37:12 pump ended: Get http://unix.sock/containers/json?: dial unix /var/run/docker.sock: connect: no such file or directory
I checked docker.sock as ls -la /var/run/docker.sock results in srw-rw---- 1 root docker 0 Dec 12 09:49 /var/run/docker.sock. So docker.sock does exist, which adds to the confusion as to why the container can't find it.
I am new to Linux/Docker, but my understanding is that using -v or --volume mounts the host path into the container, which does not seem to be happening here. So I am wondering if anyone has any suggestion on what needs to be done so that the logspout container can find docker.sock.
System Info: Docker version 20.10.11, build dea9396; Raspberry Pi 4 ARM 64, OS: Debian GNU/Linux 11 (bullseye)
EDIT: added comment about the -v flag in step (2) above
The container must be able to access the Docker Unix socket that is mounted into it. This is typically a problem when user-namespace remapping is enabled. To disable remapping for the logspout container, pass the --userns=host flag to docker run, docker create, etc.
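For example, the run command from the question with that flag added:
docker run -d --name="logspout" --userns=host \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  --publish=127.0.0.1:8000:80 \
  gliderlabs/logspout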

Fluentd file output does not output to file

On Ubuntu 18.04, I am running td-agent v4, which uses the Fluentd v1.0 core. First I configured it with TCP input and stdout output, and it receives and outputs the messages fine. I then configured it to output to file with a 10s flush interval, yet I do not see any output files generated in the destination path.
This is my file output configuration:
<match>
  @type file
  path /var/log/td-agent/test/access.%Y-%m-%d.%H:%M:%S.log
  <buffer time>
    timekey 10s
    timekey_use_utc true
    timekey_wait 2s
    flush_interval 10s
  </buffer>
</match>
I perform this check every 10s to see if log files are generated, but all I see is a directory with a name that still has the placeholders that I set for the path param:
ls -la /var/log/td-agent/test
total 12
drwxr-xr-x 3 td-agent td-agent 4096 Feb 5 23:14 .
drwxr-xr-x 6 td-agent td-agent 4096 Feb 6 00:17 ..
drwxr-xr-x 2 td-agent td-agent 4096 Feb 5 23:14 access.%Y-%m-%d.%H:%M:%S.log
From following the Fluentd docs, I was expecting this to be fairly straightforward, since the file output and buffering plugins are bundled with Fluentd's core.
Am I missing something trivial here?
I figured it out, and it works now. I had two outputs, one to file and another to stdout. Apparently that won't work if they're both defined separately in the config file, each with its own <match> ... </match>: I believe the stdout output appeared first in the config, so Fluentd sent events to it and not to the file output. They should instead both be nested under the copy output, like this:
<match>
  @type copy
  <store>
    @type file
    ...
  </store>
  <store>
    @type stdout
  </store>
</match>
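Put together with the file-output settings from the question, a full sketch would look something like this (using <match **> as a catch-all pattern; the path and buffer values are simply the ones from the question):
<match **>
  @type copy
  <store>
    @type file
    path /var/log/td-agent/test/access.%Y-%m-%d.%H:%M:%S.log
    <buffer time>
      timekey 10s
      timekey_use_utc true
      timekey_wait 2s
      flush_interval 10s
    </buffer>
  </store>
  <store>
    @type stdout
  </store>
</match>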

Specify file inside container as Fluent bit's input

If I have a container writing its log to a file, e.g. /var/log/app.log, how can I configure Fluent Bit to read the container's log from that file? I have this configuration inside my K8s ConfigMap:
input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*.log
        Exclude_Path      /var/log/containers/*_kube-system_*.log, /var/log/containers/*fluent-bit*.log
        Parser            docker
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10
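A hedged sketch of one way to do this, assuming /var/log/app.log is written to a volume that is also mounted into the Fluent Bit pod (and app.log is just a placeholder tag), is to add a second tail input alongside the one above:
    [INPUT]
        Name              tail
        Tag               app.log
        Path              /var/log/app.log
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10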

Unable to Run a Simple Python Script on Fluentd

I have a Python script called script.py. When I run this script, it creates a logs folder on the Desktop and downloads all the necessary logs from a website, writing them as .log files into this logs folder. I want Fluentd to run this script every 5 minutes and do nothing more. The next source I have in the config file does the real job of sending this log data to another place. If I already have the logs folder on the Desktop, these log files are uploaded correctly to the next destination. But the script never runs. If I delete my logs folder locally, this is the output Fluentd gives:
2020-07-27 10:20:42 +0200 [trace]: #0 plugin/buffer.rb:350:enqueue_all: enqueueing all chunks in buffer instance=47448563172440
2020-07-27 10:21:09 +0200 [trace]: #0 plugin/buffer.rb:350:enqueue_all: enqueueing all chunks in buffer instance=47448563172440
2020-07-27 10:21:36 +0200 [debug]: #0 plugin_helper/child_process.rb:255:child_process_execute_once: Executing command title=:exec_input spawn=[{}, "python /home/zohair/Desktop/script.py"] mode=[:read] stderr=:discard
This never produces a logs folder on my Desktop, which the script does create when run locally with python script.py.
If I already have the logs folder, I can see the logs on stdout normally. Here is my config file:
<source>
  @type exec
  command python /home/someuser/Desktop/script.py
  run_interval 5m
  <parse>
    @type none
    keys none
  </parse>
  <extract>
    tag_key none
  </extract>
</source>
<source>
  @type tail
  read_from_head true
  path /home/someuser/Desktop/logs/*
  tag sensor_1.log-raw-data
  refresh_interval 5m
  <parse>
    @type none
  </parse>
</source>
<match sensor_1.log-raw-data>
  @type stdout
</match>
I just need fluentd to run the script and do nothing else, and let the other source take this data and send it to somewhere else. Any solutions?
The problem was solved by creating another @type exec source that runs pip install -r requirements.txt, which fixed a missing-module error that was not being shown in the Fluentd error log (I was running Fluentd as superuser).
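A sketch of what that extra source might look like (the requirements.txt path and the tag are hypothetical):
<source>
  @type exec
  # install the script's dependencies; without them the script fails with a
  # missing-module error that never reaches the fluentd log
  command pip install -r /home/someuser/Desktop/requirements.txt
  run_interval 5m
  tag pip.install
  <parse>
    @type none
  </parse>
</source>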

Fluentd gives the error: Log file is not writable, when starting the server

Here's my td-agent.conf file
<source>
  @type http
  port 8888
</source>
<match whatever.access>
  @type file
  path /var/log/what.txt
</match>
But when I try to start the server using
sudo /etc/init.d/td-agent start
it gives the following error:
2016-02-01 10:45:49 +0530 [error]: fluent/supervisor.rb:359:rescue in main_process: config error file="/etc/td-agent/td-agent.conf" error="out_file: /var/log/what.txt.20160201_0.log is not writable"
Can someone explain what's wrong?
If you installed td-agent v2, it creates its own user and group called td-agent. I believe that when you run the td-agent service, it switches to this user and hence expects the output directory to be writable by that user. I faced the same issue and did something like the following (use sudo if needed for the commands below):
mkdir /logs
chown td-agent:td-agent /logs
and update your <match> section to:
<match whatever.access>
  @type file
  path /logs/what.txt
</match>
I think that when you try to start td-agent, it does not have permission to write to /var/log/; use ls -l to check the directory's permission mode and change it with chmod if needed.
I had the same problem, and after changing the directory's access permissions, td-agent started fine.
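For example, a quick sketch of how to inspect the destination (the chown approach from the previous answer, with a dedicated directory, is generally safer than loosening permissions on /var/log itself):
# check who owns /var/log and what its mode is
ls -ld /var/log
# check whether the td-agent user can actually write there
sudo -u td-agent touch /var/log/what.txt.test && echo writable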
