Ingest logs as JSON in Container Optimized OS - docker

I am trying to ingest logs from Container Optimized OS into the Google Log Viewer as JSON, with the help of the Stackdriver logging agent.
With the default configuration, the agent ingests each log line as the value of the message field, but not as a JSON payload.
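To illustrate (a hypothetical example, not from the original post): a container that prints
{"severity": "INFO", "event": "user-signup"}
shows up in the Log Viewer as a plain message string,
{"message": "{\"severity\": \"INFO\", \"event\": \"user-signup\"}"}
instead of a structured jsonPayload with severity and event as separate, queryable fields.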
What have I tried?
I have changed the fluentd config in /etc/stackdriver/logging.config.d/fluentd-lakitu.conf to the following:
<source>
  @type tail
  format json
  path /var/lib/docker/containers/*/*.log
  <parse>
    @type json
  </parse>
  pos_file /var/log/google-fluentd/containers.log.pos
  tag reform_contain
  read_from_head true
</source>
But it's unable to send logs to the Log Viewer.
OS: Container Optimized OS cos-81-12871-1196-0

I've found an issue on Google's Public Issue Tracker that discusses the same problem you describe in your use case. The Google product team has been notified about this limitation and is working on it. Go there and click the star next to the title to receive updates on the issue and to give it more visibility.

As @Kamelia Y mentioned, https://issuetracker.google.com/issues/137517429 describes a workaround:
<filter cos_containers.**>
  @type parser
  format json
  key_name message
  reserve_data false
  emit_invalid_record_to_error false
</filter>
The above snippet parses each log message as JSON and ingests it into Cloud Logging as a structured payload.
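In other words, the filter reverses the wrapping shown earlier: a record like
{"message": "{\"user\": \"alice\", \"action\": \"login\"}"}
becomes
{"user": "alice", "action": "login"}
which Cloud Logging then stores as a structured jsonPayload (again, a hypothetical record for illustration).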
In this discussion in the Stackdriver Google Group, we covered how to apply the workaround from a startup script.
Here is the startup-script snippet:
cp /etc/stackdriver/logging.config.d/fluentd-lakitu.conf /etc/stackdriver/logging.config.d/fluentd-lakitu.conf-save
# Shorter version of the above: cp /etc/stackdriver/logging.config.d/fluentd-lakitu.conf{,-save}
(
head -n -2 /etc/stackdriver/logging.config.d/fluentd-lakitu.conf-save; cat <<EOF
<filter cos_containers.**>
  @type parser
  format json
  key_name message
  reserve_data false
  emit_invalid_record_to_error false
</filter>
EOF
) > /etc/stackdriver/logging.config.d/fluentd-lakitu.conf
sudo systemctl start stackdriver-logging
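After the agent starts, a quick sanity check (generic systemd commands, not from the original answer) confirms it is running with the new config:
sudo systemctl status stackdriver-logging
sudo journalctl -u stackdriver-logging --since "5 min ago"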
This image can be used to generate random JSON logs.
https://hub.docker.com/repository/docker/patelathreya/json-random-logger
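A hedged usage sketch, assuming the image's default entrypoint simply writes JSON lines to stdout (which is where the logging agent tails container output from):
docker run -d patelathreya/json-random-logger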

Related

OpenSearch Dashboard time field

I have a Fluentd + OpenSearch + OpenSearch Dashboards stack for working with logs. The problem is that my time field in OpenSearch Dashboards is a string, so my filter by time doesn't work.
Does anybody know what's wrong with my configuration?
Fluentd parser:
<source>
  @type tail
  path /opt/liferay/logs/*.json.log
  pos_file /var/log/td-agent/test1_gpay.pos
  read_from_head true
  follow_inodes true
  refresh_interval 10
  tag gpay1
  <parse>
    @type json
    time_type string
    time_format %Y-%m-%d %H:%M:%S.%L
    time_key time
    keep_time_key true
  </parse>
</source>
My log format is:
{"time":"2023-02-07 14:00:00.039", "level":"DEBUG", "thread":"[liferay/scheduler_dispatch-3]", "logger":"[GeneralListener:82]", "message":"Found 0 tasks for launch."}
And here is what I have in OpenSearch Dashboards (screenshot omitted): the time field is mapped as a string.
I tried to use scripted fields in OpenSearch Dashboards, but my filter for time still doesn't work.
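One common remedy from general OpenSearch practice (not from this thread) is to map the field explicitly as a date with an index template created before the index exists; the template name and index pattern below are assumptions:
PUT _index_template/fluentd-time-mapping
{
  "index_patterns": ["gpay*"],
  "template": {
    "mappings": {
      "properties": {
        "time": { "type": "date", "format": "yyyy-MM-dd HH:mm:ss.SSS" }
      }
    }
  }
}
Existing indices keep their old mapping, so the data has to be reindexed (or rolled over to a new index) for the date type to take effect.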

How to parse kubernetes logs with Fluentd

I use a few services in an EKS cluster. I want the logs from one of my services to be parsed.
kubectl logs "pod_name" --> these are the logs I see when I check the pod directly:
2022-09-21 10:44:26,434 [springHikariCP housekeeper ] DEBUG HikariPool - springHikariCP - Fill pool skipped, pool is at sufficient level.
2022-09-21 10:44:36,316 [springHikariCP housekeeper ] DEBUG HikariPool - springHikariCP - Before cleanup stats (total=10, active=0, idle=10, waiting=0)
This service has Java-based logging (Apache Commons Logging), and in Kibana the whole log message is currently displayed as one field: date and time + log level + message.
Is it possible for this whole log to be parsed into separate fields (date and time + log level + message) and displayed in Kibana like that?
This is my fluentd config file for the source and pattern:
<source>
  @type tail
  path /var/log/containers/*background-executor*.log
  pos_file fluentd-docker.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type multi_format
    <pattern>
      format json
      time_key time
      time_type string
      time_format "%Y-%m-%dT%H:%M:%S.%NZ"
      keep_time_key false
    </pattern>
    <pattern>
      format regexp
      expression /^(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}.\d{3})\s+(?<level>[^\s]+)\s+(?<pid>\d+).*?\[\s+(?<thread>.*)\]\s+(?<class>.*)\s+:\s+(?<message>.*)/
      time_format '%Y-%m-%dT%H:%M:%S.%N%:z'
      keep_time_key false
    </pattern>
  </parse>
</source>
You just have to update the filter to fit your needs:
<filter **>
  @type record_transformer
  enable_ruby
  <record>
    foo "bar"
    KEY "VALUE"
    podname "${record['tailed_path'].to_s.split('/')[-3]}"
    test "passed"
    time "${record['message'].match('[0-9]{2}\\/[A-Z][a-z]{2}\\/[0-9]{4}:[0-9]{2}:[0-9]{2}:[0-9]{2}\\s+[0-9]{4}').to_s}"
  </record>
</filter>
You have to parse the record and message with patterns, for example character classes like [0-9] or [A-Z], the same way as shown in the example above.
Edit filter.conf. You can create your own key and value; in the value you parse the field, and Fluentd will populate the value.
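For the HikariPool lines shown in the question, a parser-filter sketch (untested; key_name log assumes the Docker JSON log driver stores each line under log, and reserve_data true keeps the Kubernetes metadata) could split them into time, thread, level, and message:
<filter kubernetes.**>
  @type parser
  key_name log
  reserve_data true
  <parse>
    @type regexp
    expression /^(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) \[(?<thread>[^\]]+)\]\s+(?<level>\w+)\s+(?<message>.*)$/
    time_format %Y-%m-%d %H:%M:%S,%L
  </parse>
</filter>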

Fluentd - How to parse logs whose messages are JSON formatted AND logs whose messages are plain text, as-is, without losing them to parse errors

I have log messages from certain services that are in JSON format, and the fluentd filter below is able to parse them properly. However, it then discards all logs from other components whose message field is not valid JSON.
<source>
  @type tail
  @id in_tail_container_logs
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag "#{ENV['FLUENT_CONTAINER_TAIL_TAG'] || 'kubernetes.*'}"
  exclude_path "#{ENV['FLUENT_CONTAINER_TAIL_EXCLUDE_PATH'] || use_default}"
  read_from_head true
  # https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434#issuecomment-752813739
  #<parse>
  #  @type "#{ENV['FLUENT_CONTAINER_TAIL_PARSER_TYPE'] || 'json'}"
  #  time_format %Y-%m-%dT%H:%M:%S.%NZ
  #</parse>
  # https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434#issuecomment-831801690
  <parse>
    @type cri
    <parse> # this parses the nested fields properly, e.g. a JSON message; but if the message is not JSON, the record is lost
      @type json
    </parse>
  </parse>
  # emit_invalid_record_to_error # when nested parsing fails, see if we can parse via JSON
  # tag backend.application
</source>
But all other messages that are not valid JSON are lost.
If I comment out the nested parse block inside type cri, then I get all the logs, but messages in JSON format are not parsed further, especially the severity field. See the last two lines in the screenshot (omitted):
<parse>
  @type cri
</parse>
To overcome this, I tried to use the @ERROR label: when nested parsing fails for logs whose message is not JSON, I still need to see the pod name, the other details, and the message as text in Kibana. However, with the config below, only logs whose message is valid JSON are parsed:
<source>
  @type tail
  @id in_tail_container_logs
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag "#{ENV['FLUENT_CONTAINER_TAIL_TAG'] || 'kubernetes.*'}"
  exclude_path "#{ENV['FLUENT_CONTAINER_TAIL_EXCLUDE_PATH'] || use_default}"
  read_from_head true
  # https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434#issuecomment-752813739
  #<parse>
  #  @type "#{ENV['FLUENT_CONTAINER_TAIL_PARSER_TYPE'] || 'json'}"
  #  time_format %Y-%m-%dT%H:%M:%S.%NZ
  #</parse>
  # https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434#issuecomment-831801690
  <parse>
    @type cri
    <parse> # this parses the nested fields properly, e.g. a JSON message; but if the message is not JSON, the record is lost
      @type json
    </parse>
  </parse>
  # emit_invalid_record_to_error # when nested parsing fails, see if we can parse via JSON
  # tag backend.application
</source>
<label @ERROR> # when nested parsing fails, this is not working
  <filter **>
    @type parser
    key_name message
    <parse>
      @type none
    </parse>
  </filter>
  <match kubernetes.var.log.containers.elasticsearch-kibana-**> # ignore logs from this container
    @type null
  </match>
</label>
How do I get logs whose messages are JSON parsed into fields, while keeping logs whose messages are plain text as-is, without losing either?
Config here (last three commits): https://github.com/alexcpn/grpc_templates.git
One way to solve this issue is to prepare the logs before parsing them with the cri plugin. To do so, you need to perform the following steps:
collect container logs and tag them with a given tag.
classify the logs into JSON and non-JSON logs using rewrite_tag_filter and a regex.
parse the JSON logs with cri.
process the non-JSON logs.
Example configs (not tested):
## collect raw logs from files
<source>
  @type tail
  @id in_tail_container_logs
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  exclude_path "#{ENV['FLUENT_CONTAINER_TAIL_EXCLUDE_PATH'] || use_default}"
  read_from_head true
  format json
</source>
# add metadata to the records (container_name, image, etc.)
<filter kubernetes.**>
  @type kubernetes_metadata
</filter>
# classify the logs into different categories
<match kubernetes.**>
  @type rewrite_tag_filter
  <rule>
    key message
    pattern /^\{.+\}$/
    tag json.${tag}
  </rule>
  <rule>
    key message
    pattern /^\{.+\}$/
    tag nonejson.${tag}
    invert true
  </rule>
</match>
# filter or match logs that carry the json tag
<filter json.**>
</filter>
<match json.**>
</match>
# filter or match logs that carry the non-json tag
<filter nonejson.**>
</filter>
<match nonejson.**>
</match>
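To make the json.** branch useful, it still needs an explicit parser; a sketch (untested, assuming the JSON payload sits in the message field and reserve_data true keeps the Kubernetes metadata):
<filter json.**>
  @type parser
  key_name message
  reserve_data true
  <parse>
    @type json
  </parse>
</filter>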

Multiple filter and formatting issue

I'm trying to send logs from td-agent to Datadog using the configuration below. My goal is to filter for some keywords and format those logs using the CSV format type. How can I do this?
I tried the grep and format plugins in the filter section as shown below, but it doesn't work as expected. The current and expected situations are shown in the pictures below. How can I solve this?
<source>
  @type syslog
  port 8888
  tag rsyslog
</source>
<filter rsyslog.**>
  @type grep
  <regexp>
    key message
    pattern /COMMAND/
  </regexp>
  <format>
    @type csv
    fields hostname,from,to
  </format>
</filter>
<match rsyslog.**>
  @type datadog
  @id awesome_agent
  api_key xxxxxxxxxx
</match>
(Screenshots of the current and expected output omitted.)
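One caveat worth noting: the grep filter does not support a nested <format> section, so the filtering and the field selection have to be separate steps. A sketch (untested; field names taken from the question, and whether the Datadog output can emit CSV depends on that plugin):
<filter rsyslog.**>
  @type grep
  <regexp>
    key message
    pattern /COMMAND/
  </regexp>
</filter>
# keep only the fields of interest
<filter rsyslog.**>
  @type record_transformer
  renew_record true
  keep_keys hostname,from,to
</filter>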

FluentD, how to grep only specific logs

2019/08/13 13:13:17 [DEBUG] Hello, world!
2019/08/13 13:13:17 [INFO] Ignore me
2019/08/13 13:13:17 [INFO] SPECIFIC_LOG :{"name": "mark"}
I have logs like the above, and I need to grep only the lines that contain 'SPECIFIC_LOG' and ignore the others.
I tried to set the config like this:
<source>
  @type tail
  path ./sample.log
  tag debug.sample
  <parse>
    @type regexp
    expression /\[\w+\] SPECIFIC_LOG\s:(?<message>.*)$/
  </parse>
</source>
<filter debug.**>
  @type parser
  key_name message
  format json
</filter>
It works for logs that match the pattern, but for logs that don't match, I get a warning that says:
#0 pattern not matched: "2019/08/13 13:13:17 [DEBUG] Hello, world!"
How can I grep only the logs that match the pattern, so that I can resolve the warning?
It is just a warning that the pattern did not match, and in my opinion it can be ignored.
To suppress such warnings, you can set the emit_invalid_record_to_error false option:
<filter debug.**>
  @type parser
  key_name message
  format json
  emit_invalid_record_to_error false
</filter>
More info on this flag: https://docs.fluentd.org/filter/parser#emit_invalid_record_to_error
Earlier versions had the suppress_parse_error_log flag, which has since been replaced by emit_invalid_record_to_error:
suppress_parse_error_log is missing. What are the alternatives?
Since v1, parser filter doesn't support suppress_parse_error_log parameter because parser filter uses #ERROR feature instead of internal logging to rescue invalid records. If you want to simply ignore invalid records, set emit_invalid_record_to_error false.
See also emit_invalid_record_to_error parameter.
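If you would rather drop the non-matching lines before parsing instead of merely silencing the warning, a grep filter in front of the parser is another option (a sketch, not from the answer above; @type none at the source keeps the raw line in the message field):
<source>
  @type tail
  path ./sample.log
  tag debug.sample
  <parse>
    @type none
  </parse>
</source>
<filter debug.**>
  @type grep
  <regexp>
    key message
    pattern /SPECIFIC_LOG/
  </regexp>
</filter>
The JSON payload would still need to be extracted from the remaining lines, e.g. with the regexp expression from the question, before the json parser filter runs.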
