How to get GELF logs from the local Docker daemon to Loki?

tl;dr:
Loki-docker-log-driver -> Loki : ✅ works.
Loki-docker-log-driver -> JSON Decode -> Loki : How?
For my local development, I run several services that log in GELF format. To get a better overview and a time-ordered, filterable log stream, I use the Loki Docker log driver.
The JSON log messages (GELF style) are successfully sent to Loki, but I want them processed further so that labels are extracted. How can I achieve that?
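For reference, the containers are wired up to the driver roughly like this (a sketch: the image name and label are placeholders, and it assumes the grafana/loki-docker-driver plugin is installed and Loki listens on localhost:3100):
# Install the Loki log driver plugin, then attach a container to it.
docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions
docker run -d \
  --log-driver=loki \
  --log-opt loki-url="http://localhost:3100/loki/api/v1/push" \
  --log-opt loki-external-labels="job=my-gelf-service" \
  my-gelf-service:latest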

If you have already sent the logs in JSON format to Loki, all you need to do is select the desired log stream and pipe it to the json parser, as in the following example:
{filename="/var/log/nginx/access.log"} | json
Then, you can use the labels as you wish, like this:
{filename="/var/log/nginx/access.log"} | json | remote_addr="147.241.1.47"
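You can also run the same query from the command line; a sketch, assuming logcli is installed and LOKI_ADDR points at your Loki instance:
# Query Loki from the CLI (assumes a local Loki on the default port 3100).
export LOKI_ADDR="http://localhost:3100"
logcli query '{filename="/var/log/nginx/access.log"} | json | remote_addr="147.241.1.47"'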

Related

What is the difference between the local and json-file logging drivers?

Both the json-file and local logging drivers seem to store logs locally, per container.
In the json-file driver docs I see the extra options labels and env (because JSON can have attributes?). The local driver documentation also says that it uses "internal storage". But I could not find what the fundamental difference is.
From the documentation:
local: Logs are stored in a custom format designed for minimal overhead.
json-file: The logs are formatted as JSON. The default logging driver for Docker.
Explanation:
local => the log line is saved as it is written.
json-file => each log line is formatted as:
{
  "log": "log message",
  "stream": "stdout",
  "time": "2019-10-12T12:44:45.931849055Z"
}
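To see the difference in practice, you can start a container with each driver and check what Docker reports. A sketch with placeholder container and image names; max-size and max-file are the standard rotation options supported by both drivers:
# Run one container with each logging driver (alpine is just a placeholder image).
docker run -d --name app-local --log-driver=local --log-opt max-size=10m --log-opt max-file=3 alpine sleep 1d
docker run -d --name app-json --log-driver=json-file --log-opt max-size=10m alpine sleep 1d
# Check which driver each container ended up with.
docker inspect -f '{{.HostConfig.LogConfig.Type}}' app-local   # -> local
docker inspect -f '{{.HostConfig.LogConfig.Type}}' app-json    # -> json-file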

OpenTSDB: plotting internal stats

I'm fairly new to OpenTSDB, but I managed to set it up inside a Docker container and connect Grafana to it.
Now I'm looking for a way to keep track of its health. In particular, I would like to plot some of the metrics that come from the internal stats (e.g. tsd.rpc.received).
When I try to use them as a regular metric in the Graph panel of OpenTSDB I get a "java.lang.RuntimeException: Unexpected exception".
I know I could connect the HTTP API (/api/stats) to another tool and then send the metrics to CloudWatch or a similar app, but I was hoping for something that didn't involve adding more pieces to the solution.
In the documentation I found: "The Telnet style API also supports the "stats" command for fetching over CLI. These can easily be published right back into OpenTSDB at any interval you like."
Is this the recommended way to keep track of those internal metrics? Read from the stats api and then feed them back to OpenTSDB?
After looking at different alternatives, I found that the best way to get the internal stats is either using the tcollector utility or injecting the output of the stats command back into OpenTSDB and using Grafana to visualize the data.
Since in my particular case I don't want to install another component like tcollector, I'll feed the stats back to OpenTSDB as metrics on every node.
This is a small script I wrote to feed the stats back via the telnet API.
#!/bin/bash
# Periodically read OpenTSDB's internal stats over the telnet API
# and feed each line back as a metric via the "put" command.
while true; do
    sleep 5
    # "stats" returns one line per metric, already in put-compatible form.
    STATSINPUT=$(echo "stats" | nc localhost 4242 -w1)
    while IFS= read -r line
    do
        echo "Feed: $line"
        INPUT="put $line"
        echo "$INPUT" | nc localhost 4242 -w0
    done < <(printf '%s\n' "$STATSINPUT")
done
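This works because each line the stats command returns is already in the "metric timestamp value tags" form that put expects, so prefixing put is enough. For example (hypothetical values):
# A line as returned by "stats":
#   tsd.rpc.received 1571400000 42 type=put host=tsdb1
# Fed back verbatim with a "put" prefix:
echo "put tsd.rpc.received 1571400000 42 type=put host=tsdb1" | nc localhost 4242 -w0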

Why does the MongoDB source always return the entire collection instead of only newly added documents?

This is the DSL used to create the MongoDB source that ingests to the log sink:
stream create --name mongodb7hdfs7 --definition
  "mongodb120
   --database=sourcedb
   --host=172.20.74.91
   --fixed-delay=5
   --cron='*/10 * * * * *'
   --initial-delay=5
   --max-messages=1
   --collection=sourceCol | log" --deploy
When monitoring the log, we can see that the entire collection is ingested to the sink every 10 seconds. What we expect is that only newly added documents are ingested, i.e. only the latest added document should be printed in the log.
Can someone help with this? Much appreciated!
BTW, the http source only ever ingests newly added data to the sink.

Change the trace log format in the emqtt message broker

I am using the emqtt message broker for MQTT.
I am not an Erlang developer and have zero knowledge of it.
I chose this Erlang-based broker after searching many open source brokers online and following people's suggestions about the advantages of Erlang-based servers.
Now I am somewhat stuck with the output of the emqttd_cli trace command.
It is not JSON, and if I use a Perl parser to convert it to JSON I get delayed output.
I want to know in which file I can change the trace log output format.
I looked at the trace code of the broker and found the file src/emqttd_protocol.erl. An exported function named trace/3 has the code that you need.
The second argument of this function, named Packet, holds the information about data received and sent via the broker. You can fetch the required data from it and format it according to how you want it printed.
Edit: sample modified code added
trace(recv, Packet, ProtoState) ->
    PacketHeader = Packet#mqtt_packet.header,
    HostInfo = esockd_net:format(ProtoState#proto_state.peername),
    %% PacketInfo = {ClientId, Username, ClientIP, ClientPort, Payload, QoS, Retain}
    PacketInfo = {ProtoState#proto_state.client_id, ProtoState#proto_state.username,
                  lists:nth(1, HostInfo), lists:nth(3, HostInfo),
                  Packet#mqtt_packet.payload, PacketHeader#mqtt_packet_header.qos,
                  PacketHeader#mqtt_packet_header.retain},
    %% ~p pretty-prints the tuple (~s would fail on a non-string term)
    ?LOG(info, "Data Received ~p", [PacketInfo], ProtoState);
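Remember that the broker has to be recompiled and restarted for the change to take effect. A sketch, assuming a make-based emqttd source checkout (release paths vary between versions):
# Rebuild after editing src/emqttd_protocol.erl (paths are version-dependent).
cd emqttd
make
_rel/emqttd/bin/emqttd restart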

Write to the system's standard error in Progress

I am writing a small program in Progress that needs to write an error message to the system's standard error. What simple ways, if any, can I use to print to standard error?
I am using OpenEdge 11.3.
When on Windows (10.2B+) you can use .NET:
System.Console:Error:WriteLine ("This is an error message") .
together with
prowin32 2> stderr.out
Progress doesn't provide a way to write to stderr - the easiest way I can think of is to output-through an external program that takes stdin and echoes it to stderr.
You could look into LOG-MANAGER:WRITE-MESSAGE. It won't log to standard output or standard error, but to a client-specific log. This log should be monitored in any case (specifically if the client is an application server).
From the documentation:
For an interactive or batch client, the WRITE-MESSAGE( ) method writes the log entries to the log file specified by the LOGFILE-NAME attribute or the Client Logging (-clientlog) startup parameter. For WebSpeed agents and AppServer servers, the WRITE-MESSAGE() method writes the log entries to the server log file. For DataServers, the WRITE-MESSAGE() method writes the log entries to the log file specified by the DataServer Logging (-dslog) startup parameter.
LOG-MANAGER:WRITE-MESSAGE("Got here, x=" + STRING(x), "DEBUG1").
Will write this in the log:
[04/12/05#13:19:19.742-0500] P-003616 T-001984 1 4GL DEBUG1 Got here, x=5
There are quite a lot of options regarding the LOG-MANAGER system, what messages to display, where the file is placed, etc.
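For example, the client log destination and verbosity can be set at startup. A sketch for a Unix batch client; myprog.p is a placeholder program name:
# Start a batch client with client logging enabled (placeholder program name).
_progres -b -p myprog.p -clientlog client.log -logginglevel 2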
There is no easy way, but in Unixen you can always do something like this using OUTPUT THROUGH (untested):
output through "cat >&2" no-echo unbuffered.
Alternatively -- and this is tested -- if you just want error messages from a batch-mode program to go to standard out, then
output through "tee" ...
...definitely works.
