Kong - Structured JSON Logs - fluentd

I cannot find an example of how to output Kong's logs as JSON to stdout. I am currently using Fluentd to ingest logs from my Kubernetes cluster, but I have no idea how to send Kong's logs to Fluentd as structured JSON.

For anyone who is struggling with this, I made the following updates to the Kong Helm chart values:
env:
  admin_access_log: '/dev/stdout structured_logs'
  proxy_access_log: '/dev/stdout structured_logs'
  nginx_http_log_format: |
    structured_logs escape=json '{"remote_addr": "$remote_addr", "remote_user": "$remote_user", "host": "$host"...}'

Have you looked at the file-log plugin? https://docs.konghq.com/hub/kong-inc/file-log/
It lets you log to /dev/stdout and use Lua to remove/add fields if necessary.
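For reference, a minimal sketch of enabling file-log on Kubernetes with a KongPlugin resource (the resource name and the annotation target are placeholders; adjust to your setup):

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: stdout-file-log   # hypothetical name
plugin: file-log
config:
  path: /dev/stdout       # write each request's log entry as a JSON line to stdout

It can then be attached to an Ingress or Service with the annotation konghq.com/plugins: stdout-file-log.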

Related

Error 403: "Flux query service disabled." But flux-enabled=true has been set in influxdb.conf

I have been using InfluxDB (server version 1.7.5) with the InfluxQL language for some time now. Unfortunately, InfluxQL does not allow me to perform any form of joins, so I need to use InfluxDB's new scripting language Flux instead.
The manual states that I have to enable Flux in /etc/influxdb/influxdb.conf by setting flux-enabled=true which I have done. I restarted the server to make sure I got the new settings and started the Influx Command Line tool with "-type=flux".
I then do get a different user interface than when I use InfluxQL. So far so good. I can also set and read variables etc. So I can set:
> dummy = 1
> dummy
1
However, when I try to do any form of query of the tables such as: from(bucket:"db_OxyFlux-test/autogen")
I always get
Error: Flux query service disabled. Verify flux-enabled=true in the [http] section of the InfluxDB config.
: 403 Forbidden
I found the manual for Flux rather lacking in basic details of schema exploration, so I am not sure whether this is just an issue with my query raising this error or whether something else is going wrong. I tested this both on my own home machine and on our remote work server and I get the same results.
Re: Vilix
Thank you. This led me in the right direction.
I realised that InfluxDB does not automatically read the config file (which is not very intuitive). But your solution also forces me to start the daemon by hand each time. After some more googling I used:
"sudo influxd config -config /etc/influxdb/influxdb.conf"
So hopefully now the daemon will start automatically each time on startup rather than me having to do this by hand.
I had the same issue; the solution is to start influxd with the -config option:
influxd -config /etc/influxdb/influxdb.conf
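For reference, the relevant part of /etc/influxdb/influxdb.conf (InfluxDB 1.x) should end up looking roughly like this, with the other [http] settings left at their defaults:

[http]
  # the HTTP endpoint itself must stay enabled
  enabled = true
  # the setting the 403 error complains about
  flux-enabled = true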

Query on custom metrics exposed via prometheus node exporter textfile collector fails

I am new to prometheus/alertmanager.
I have created a cron job which executes a shell script every minute. This shell script generates a "test.prom" file (with a gauge metric in it) in the directory that is assigned to the --textfile.collector.directory argument of node-exporter. I verified (using curl http://localhost:9100/metrics) that node-exporter exposes that custom metric correctly.
When I try to run a query against that custom metric in the Prometheus dashboard, it does not show any results (it says "no data found").
I could not figure out why the query against the metric exposed via the node-exporter textfile collector fails. Any clues as to what I missed? Also, please let me know how to check and ensure that Prometheus scraped my custom metric test_metric.
My query in the Prometheus dashboard is test_metric != 0, which did not give any results, even though I exposed test_metric via the node-exporter textfile collector.
Any help is appreciated!
BTW, node-exporter is running as a Docker container in a Kubernetes environment.
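For what it's worth, assuming Prometheus is reachable on localhost:9090, querying its HTTP API directly should show whether the metric was ingested at all:

curl -s 'http://localhost:9090/api/v1/query?query=test_metric'

An empty "result" array in the response means Prometheus has no recent samples for the metric, which usually points at scraping rather than at the query syntax.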
I had a similar situation, but it was not a configuration problem.
Instead, my data included timestamps:
# HELP network_connectivity_rtt Round Trip Time to each node
# TYPE network_connectivity_rtt gauge
network_connectivity_rtt{host="home"} 53.87 1541426242
network_connectivity_rtt{host="hop_1"} 58.8 1541426242
network_connectivity_rtt{host="hop_2"} 21.93 1541426242
network_connectivity_rtt{host="hop_3"} 71.69 1541426242
The Prometheus node exporter was picking them up without any problem once I reloaded it. As Prometheus is running under systemd, I had to check the logs like this:
journalctl --system -u prometheus.service --follow
There I read this line:
msg="Error on ingesting samples that are too old or are too far into the future"
Once I removed the timestamps, values started appearing. This led me to read about the timestamps in more detail, and I found out they have to be in milliseconds. So this format is now OK:
# HELP network_connectivity_rtt Round Trip Time to each node
# TYPE network_connectivity_rtt gauge
network_connectivity_rtt{host="home"} 50.47 1541429581376
network_connectivity_rtt{host="hop_1"} 3.38 1541429581376
network_connectivity_rtt{host="hop_2"} 11.2 1541429581376
network_connectivity_rtt{host="hop_3"} 20.72 1541429581376
I hope it helps someone else.
It's my bad: I did not include scrape instructions for node-exporter in the prometheus.yaml file. It worked after including them.
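For anyone hitting the same thing, the missing piece in prometheus.yml looks roughly like this (job name and target address are placeholders for your node-exporter endpoint; in Kubernetes you would more likely use service discovery instead of a static target):

scrape_configs:
  - job_name: 'node-exporter'        # hypothetical job name
    static_configs:
      - targets: ['localhost:9100']  # where node-exporter serves /metrics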
This issue is happening because of stale metrics.
Let's say you wrote your metric to the file at 13:00; by default, Prometheus considers a metric stale after 5 minutes, so it might already have disappeared by the time you run your query.

docker-compose logs in queryable format

I use docker-compose. The problem: docker-compose logs returns logs in this format:
service_1 | {"timestamp":"2017-11-28T15:31:47.065Z","correlationId":"NO_CORRELATION_ID","tags":....
which is hard to query with simple scripts. Is there a way to change the format? Ideally to something like:
{"name": "service_1", "data": "{"timestamp":"2017-11-28T15:31:47.065Z","correlationId":"NO_CORRELATION_ID","tags":...." }
The goal is to filter a chain of log messages from different microservices which have the same request id.
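There is no built-in docker-compose option for this, but as a rough sketch (assuming the default "name | payload" prefix that docker-compose logs prints, and that jq is available) each line can be rewritten into a JSON object:

docker-compose logs --no-color \
  | jq -R 'capture("^(?<name>[^|]+)\\|\\s(?<data>.*)$")?
           | {name: (.name | sub("\\s+$"; "")),
              data: (.data | try fromjson catch .)}'

Lines that do not match the "name | payload" pattern are dropped, and the payload is parsed into a nested object when it is valid JSON (otherwise it is kept as a string), which makes filtering by request id straightforward, e.g. piping through select(.data.correlationId == "some-id").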

Is there a way to customize Docker's log?

We are collecting the logs of our applications. Since we containerize our applications, the way we collect logs needs to change a bit.
We log via the Docker Logging Driver:
The application writes its logs to the container's stdout and stderr.
With the json-file logging driver, Docker writes the logs to a JSON file on the host machine.
A service on the host machine forwards the log files.
But the logs from Docker carry additional information that is unnecessary and makes the forwarding step complicated, because we need to remove that extra information before forwarding.
For example, a log entry from Docker looks like the one below, but all we want is the value of the log field. Is there a way to customize the log format, and output only the information we want, by overriding some of Docker's configuration?
{
  "log": "{\"level\": \"info\", \"message\": \"data is correct\", \"timestamp\": \"2017-08-01T11:35:30.375Z\"}\r\n",
  "stream": "stdout",
  "time": "2017-08-03T07:58:02.387253289Z"
}
I don't know of any way to customize the output of the json-file Docker logging driver. However, Docker also supports the gelf logging driver, which allows you to send logs to Logstash. Using Logstash you can output logs in many different ways (by using output plugins) and at the same time customize the format.
For instance, to output logs to a file (without any other metadata) you can use something like the following:
output {
  file {
    path => "/path/to/logfile"
    codec => line { format => "%{message}" }
  }
}
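To get the container output into Logstash in the first place, the gelf driver can be enabled per service. A minimal docker-compose sketch (service name, image, and the gelf-address are placeholders for your setup):

services:
  my-service:                         # hypothetical service name
    image: example/my-service:latest  # placeholder image
    logging:
      driver: gelf
      options:
        gelf-address: "udp://logstash.example.internal:12201"  # where the Logstash gelf input listens

On the Logstash side this pairs with the gelf input plugin, for example input { gelf { port => 12201 } }, after which an output block like the one above applies.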
If you don't want to add complexity to your logging logic, you can keep using the json-file driver and use a utility such as jq to parse the file and extract only the relevant information. For instance with jq you can do: jq -r .log < /path/to/logfile
This will read each line of the specified file as a JSON object and output only the log field.

How to change the timestamp to UTC for the logs that a fluent-bit docker container receives via stdin?

My Fluent Bit Docker container is adding a timestamp with the local time to the logs it receives via STDIN, whereas all the logs received via rsyslog or journald seem to have a UTC time format.
I have a basic EFK stack where I am running Fluent Bit containers as remote collectors which are forwarding all the logs to a FluentD central collector, which is pushing everything into Elasticsearch.
I've added a filter to the Fluent Bit config file where I have experimented with many ways to modify the timestamp, to no avail. It seems like I am overthinking it; it should be much easier to modify the timestamp.
These are all the ways I've tried to modify the timestamp with the fluent-bit.conf filter:
[FILTER]
Name record_modifier
Match_Regex ^(?!log.*).*$ ## only match the input received via stdin
Tag log.stdout ## tag to mark input received via stdin
Add sourcetype timestamp ## tried to add timestamp from lua script
Parser docker ## tried to use docker parser for timestamp
Time_key utc ## tried to add timestamp as a key
script test.lua ## sample lua script from fluentbit docs
call cb_print ## call a function from within lua script
What is the de facto method to make all the timestamps uniform to UTC? Any help or suggestion is appreciated.
The way it works is that the docker parser extracts the content of 'log' and respects the timestamp defined by Docker.
One quick workaround would be to modify your parsers.conf and make sure the docker parser does not resolve the timestamp; that way Fluent Bit will assign the current time in UTC for you.
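Concretely, that means removing (or commenting out) the time settings from the docker parser in parsers.conf. A minimal sketch, based on the stock docker parser shipped with Fluent Bit:

[PARSER]
    Name    docker
    Format  json
    # Time_Key    time
    # Time_Format %Y-%m-%dT%H:%M:%S.%L
    # With the time settings commented out, Fluent Bit no longer uses Docker's
    # timestamp and assigns its own, which is the current time in UTC.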
