Using fluentd in docker to get nginx logs

I have a scenario where nginx is running in one container and fluentd in another. I mapped the nginx logs to the /var/log/nginx directory, but I am unable to get the logs into Elasticsearch using fluentd. Please help me with this:
fluentd.conf
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<source>
  @type tail
  path /var/log/nginx/access_in_log
  #pos_file /var/log/td-agent/nginx-access.log.pos
  tag nginx.access
  format nginx
</source>
<match nginx.access>
  @type elasticsearch
  logstash_format true
  host elasticsearchkibana
  port 9200
  index_name nginxindex
  type_name nginxlogtype
</match>
Please let me know what I am missing.

I solved this issue by using the nginx syslog driver (http://nginx.org/en/docs/syslog.html).
In my nginx.conf inside the nginx container I have these settings:
http {
  ...
  access_log syslog:server=<FLUENTD_HOST>:<FLUENTD_PORT>,tag=nginx_access;
  error_log syslog:server=<FLUENTD_HOST>:<FLUENTD_PORT>,tag=nginx_error info;
}
In my fluent.conf inside my Fluentd container I have this config:
<source>
  @type syslog
  port 5141
  tag "syslog"
</source>
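To get those events into Elasticsearch, as in the original question, a match block can be appended to the same file. This is only a minimal sketch, reusing the elasticsearchkibana host from the question above; note that nginx sends to syslog over UDP by default, so the source's port must match the port in nginx.conf:
<match syslog.**>
  @type elasticsearch
  host elasticsearchkibana
  port 9200
  logstash_format true
</match>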

Related

How to add severity and facility fields to Kibana with fluentd parsing?

I use rsyslog with the default config, traditional template. Rsyslog sends all syslog to fluentd.
My fluentd config:
<source>
  @type syslog
  port 5140
  tag rsyslog
</source>
<match rsyslog.*.*>
  @type elasticsearch
  host localhost
  port 9200
  logstash_format true
</match>
How to add severity and facility fields to Kibana?
You can configure severity_key (https://docs.fluentd.org/input/syslog#severity_key) and facility_key (https://docs.fluentd.org/input/syslog#facility_key) to extract severity and facility.
So something like this should work:
<source>
  @type syslog
  port 5140
  tag rsyslog
  severity_key severity
  facility_key facility
</source>
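With those two keys set, each parsed event should carry severity and facility fields in the record, and they will appear as ordinary fields in Kibana once the index pattern is refreshed.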

how to set hostname in index name in fluentd conf file

I want to set the hostname in the index_name of the fluentd conf file. I am setting it like this, but it is not working:
<match output.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    ssl_version TLSv1_2
    ssl_verify false
    type_name _doc
    port 443
    scheme https
    flush_interval 10s
    index_name abc-${hostname}
  </store>
  <store>
    @type stdout
  </store>
</match>
How can I achieve that?
Your question is not very clear, but let me try to answer anyway.
You can achieve it in your source part.
Example
<source>
  @type tail
  format json
  path path_to_the_file
  pos_file /var/log/td-agent/buffer/somename
  tag hostname # in plain text (there are other methods too)
</source>
Now add
include_tag_key true
logstash_prefix ${tag}
logstash_format true
to your <match> part after host, and remove index_name.
It's not a complete solution to your problem, but hopefully it gives you a direction.
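Put together, the match block would look something like this. This is only a sketch of the direction described above, carrying over the host and TLS settings from the question and dropping index_name in favor of logstash_prefix:
<match output.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    include_tag_key true
    logstash_prefix ${tag}
    logstash_format true
    ssl_version TLSv1_2
    ssl_verify false
    type_name _doc
    port 443
    scheme https
    flush_interval 10s
  </store>
  <store>
    @type stdout
  </store>
</match>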

Kibana health probe fails when elasticsearch host is added to fluentd-forwarder-cm

I am trying to set up an EFK stack in an AWS cluster using helm.
These are the steps I followed.
Created a separate namespace logging
Installed elastic search
helm install elasticsearch elastic/elasticsearch -f values.yml -n logging
values.yml
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
  requests:
    cpu: "100m"
    memory: "512M"
  limits:
    cpu: "1000m"
    memory: "512M"
# Request smaller persistent volumes.
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: default
  resources:
    requests:
      storage: 1Gi
Installed kibana
helm install kibana elastic/kibana -n logging
Installed fluentd
helm install fluentd bitnami/fluentd -n logging
Created ingress for kibana
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service-api
  namespace: logging
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: "letsencrypt-example-prod"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - logs.example.in
      secretName: mySecret
  rules:
    - host: logs.example.in
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kibana-kibana
                port:
                  number: 5601
At this point everything works.
I can go to logs.example.in to view the kibana dashboard.
I can also exec into any pod and run:
curl elasticsearch-master.logging.svc.cluster.local
...and it gives a response.
When I update the fluentd-forwarder-cm ConfigMap and provide the elasticsearch host as shown below,
{
  "fluentd-inputs.conf": "# HTTP input for the liveness and readiness probes
<source>
  @type http
  port 9880
</source>
# Get the logs from the containers running in the node
<source>
  @type tail
  path /var/log/containers/*.log
  # exclude Fluentd logs
  exclude_path /var/log/containers/*fluentd*.log
  pos_file /opt/bitnami/fluentd/logs/buffers/fluentd-docker.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json
  </parse>
</source>
# enrich with kubernetes metadata
<filter kubernetes.**>
  @type kubernetes_metadata
</filter>
",
  "fluentd-output.conf": "# Throw the healthcheck to the standard output instead of forwarding it
<match fluentd.healthcheck>
  @type stdout
</match>
# Forward all logs to the aggregators
<match **>
  @type elasticsearch
  include_tag_key true
  host \"elasticsearch-master.logging.svc.cluster.local\"
  port \"9200\"
  logstash_format true
  <buffer>
    @type file
    path /opt/bitnami/fluentd/logs/buffers/logs.buffer
    flush_thread_count 2
    flush_interval 5s
  </buffer>
</match>
",
  "fluentd.conf": "# Ignore fluentd own events
<match fluent.**>
  @type null
</match>
@include fluentd-inputs.conf
@include fluentd-output.conf
",
  "metrics.conf": "# Prometheus Exporter Plugin
# input plugin that exports metrics
<source>
  @type prometheus
  port 24231
</source>
# input plugin that collects metrics from MonitorAgent
<source>
  @type prometheus_monitor
  <labels>
    host #{hostname}
  </labels>
</source>
# input plugin that collects metrics for output plugin
<source>
  @type prometheus_output_monitor
  <labels>
    host #{hostname}
  </labels>
</source>
# input plugin that collects metrics for in_tail plugin
<source>
  @type prometheus_tail_monitor
  <labels>
    host #{hostname}
  </labels>
</source>
"
}
I get errors.
1st Error,
kubectl describe pod kibana-kibana-7f47d4b8c5-7r8x7 -n logging
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 24m default-scheduler Successfully assigned logging/kibana-kibana-7f47d4b8c5-7r8x7 to ip-172-20-32-143.ap-south-1.compute.internal
Normal Pulled 24m kubelet Container image "docker.elastic.co/kibana/kibana:7.12.0" already present on machine
Normal Created 24m kubelet Created container kibana
Normal Started 24m kubelet Started container kibana
Warning Unhealthy 22m kubelet Readiness probe failed: Error: Got HTTP code 000 but expected a 200
Warning Unhealthy 4m28s (x25 over 24m) kubelet Readiness probe failed: Error: Got HTTP code 503 but expected a 200
2nd Error,
GET https://logs.example.in/
503 Service Temporarily Unavailable
3rd Error,
Doing
curl elasticsearch-master.logging.svc.cluster.local:9200
from inside any pod gives a timeout error.

fluentd localtime is working for stdout, but not elasticsearch

I'm tailing a syslog file whose timestamps don't include a timezone. By default fluentd (incorrectly) assumes the timezone is UTC, so it shifts the time off by several hours.
I can fix this for stdout, using 'localtime true', but I can't find a setting to do the same thing for elasticsearch:
<source>
  @type tail
  # read_from_head true
  <parse>
    @type syslog
  </parse>
  path /tmp/syslog
  pos_file /tmp/var_log_syslog.pos
  tag syslog.file
</source>
<match syslog.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
    utc_index false
  </store>
  <store>
    @type stdout
    localtime true
  </store>
</match>
It turns out the desired behavior is the default behavior: fluentd does use the local timezone. I was running it in a docker container, though, and had forgotten to set the container's timezone.
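If setting the container's timezone (for example via the TZ environment variable) is not an option, the parse section's time parameters can pin the offset explicitly. A minimal sketch, assuming the logs are written at a fixed +02:00 offset:
<source>
  @type tail
  path /tmp/syslog
  pos_file /tmp/var_log_syslog.pos
  tag syslog.file
  <parse>
    @type syslog
    # interpret timestamps that lack a zone as +02:00 instead of UTC
    timezone +02:00
  </parse>
</source>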

Using a single source in fluentd with different match types

So I am trying to capture the output from docker containers running on a host, but after a change by the developers to use JSON as the logging output for the containers, I am missing the container startup messages that happen in entrypoint.sh. I can see that someone has added a new filter section to the config file, which works really nicely to capture JSON output, but only JSON output.
Here is the template in use:
<source>
  @type forward
  port 24224
  bind 0.0.0.0
  tag GELF_TAG
</source>
<filter GELF_TAG.**>
  @type parser
  key_name log
  reserve_data false
  <parse>
    @type json
  </parse>
</filter>
<match GELF_TAG.**>
  @type copy
  <store>
    @type gelf
    host {{ graylog_server_fqdn }}
    port 12201
    protocol tcp
    flush_interval 5s
  </store>
  <store>
    @type stdout
  </store>
</match>
How do I set up the config to be able to capture the entrypoint.sh output and the json output from the containers after they start?
EDIT.
The filter rejects messages sent to the docker containers' stdout until the application starts logging in JSON:
[warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not matched with data
So I tried to capture everything that was being dropped into the @ERROR label, and I can see the missing messages, but they still fail to parse using this config:
# Ansible
<source>
  @type forward
  port 24224
  bind 0.0.0.0
  tag GELF_TAG
</source>
<filter GELF_TAG.**>
  @type parser
  emit_invalid_record_to_error true
  key_name log
  reserve_data false
  <parse>
    @type json
  </parse>
</filter>
<match {GELF_TAG.**,@ERROR}>
  @type copy
  <store>
    @type gelf
    host {{ graylog_server_fqdn }}
    port 12201
    protocol tcp
    flush_interval 5s
  </store>
  <store>
    @type stdout
  </store>
</match>
Install the multi-format parser:
td-agent-gem install fluent-plugin-multi-format-parser -v 1.0.0
# Ansible
<source>
  @type forward
  port 24224
  bind 0.0.0.0
  tag GELF_TAG
</source>
<filter GELF_TAG.**>
  @type parser
  key_name log
  reserve_data false
  <parse>
    @type multi_format
    <pattern>
      format json
      time_key timestamp
    </pattern>
    <pattern>
      format none
    </pattern>
  </parse>
</filter>
<match GELF_TAG.**>
  @type copy
  <store>
    @type gelf
    host {{ graylog_server_fqdn }}
    port 12201
    protocol tcp
    flush_interval 5s
  </store>
  <store>
    @type stdout
  </store>
</match>
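The patterns are tried in order, so lines that parse as JSON become structured records, while anything else (such as the entrypoint.sh output) falls through to the none pattern and is kept as plain text instead of being dropped.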
You can also use 'rewrite_tag_filter', which is an output plugin. Using that, you can change the tag for the different patterns and then apply separate parsers/filters per tag.
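A minimal sketch of that approach, assuming fluent-plugin-rewrite-tag-filter is installed and that JSON lines can be recognized by a leading brace; the json. and plain. tag prefixes are invented here for illustration:
<match GELF_TAG.**>
  @type rewrite_tag_filter
  <rule>
    # lines that look like JSON objects get a json. prefix
    key log
    pattern /^\s*\{/
    tag json.${tag}
  </rule>
  <rule>
    # everything else gets a plain. prefix
    key log
    pattern /.*/
    tag plain.${tag}
  </rule>
</match>
# parse only the JSON stream; plain.* events pass through unparsed
<filter json.GELF_TAG.**>
  @type parser
  key_name log
  <parse>
    @type json
  </parse>
</filter>
The downstream gelf match would then need to match json.GELF_TAG.** and plain.GELF_TAG.** instead of GELF_TAG.**.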
