I am writing JSON-structured log messages to stdout with exactly one time field, called origin_timestamp.
I collect the log messages using Fluent Bit with the tail input plugin, which uses the docker parser. The parser is configured with Time_Key time.
The documentation about Time_Key says:
If the log entry provides a field with a timestamp, this option specifies the name of that field.
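For reference, my setup is essentially the stock one. The parser stanza below is the standard docker parser shipped in Fluent Bit's parsers.conf (the tail Path is just an illustration of my environment):

[INPUT]
    Name    tail
    Path    /var/lib/docker/containers/*/*.log
    Parser  docker

[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L
    Time_Keep   On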
Since time != origin_timestamp, I would have thought that no time fields would be added by Fluent Bit. However, the final log messages ending up in Elasticsearch have the following time fields:
origin_timestamp (which also appears nested inside the log field that contains the original log message)
time
#timestamp (sometimes even multiple times)
The #timestamp field is probably added by the es output plugin I am using in Fluent Bit, but where the heck is the time field coming from?
I came across the following issue in the Fluent Bit issue tracker, Duplicate #timestamp fields in elasticsearch output, which sounds like it might be related to the issue in question.
I've deep-linked to a particular comment from one of the contributors, which outlines two possible solutions, depending on whether you are using their Kubernetes filter plugin or ingesting the logs into Elasticsearch directly.
Hope this helps.
The time field is added by Docker's json-file logging driver. The driver takes everything your container writes to stdout and logs it to a file in the following format by default:
{"log":"Log line is here\n","stream":"stdout","**time**":"2019-01-01T11:11:11.111111111Z"}
So, you might observe three timestamps in your final log:
Added by you (origin_timestamp)
Added by the Docker logging driver (time)
Added by the Fluent Bit es output plugin (#timestamp)
Ref - https://docs.docker.com/config/containers/logging/json-file/
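If you just want Docker's time field gone before the records reach Elasticsearch, a record_modifier filter should take care of it. This is only a sketch; the Match pattern is an assumption about how your records are tagged:

[FILTER]
    Name        record_modifier
    Match       *
    Remove_key  time

With this in place, only your own origin_timestamp and whatever the es output adds should remain.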
There appears to be a long-standing issue with large (longer than 16 KB) messages getting split into parts and appearing in Kibana on multiple lines. Such long messages typically include Java exception stack traces. The splitting makes parsing, and therefore indexing, impossible and messes things up completely for developers who need to read the logs.
By "message" I'm referring to the field with the label "message" that appears as part of the log entry that, of course, starts with the "timestamp" field.
As much as I have searched, I have not found a filter that can concatenate those parts and make them appear as a whole in a single log entry, where the JSON block can be properly parsed and indexed. I have tried a few filters of my own, with little success.
Please help if you are aware of a solution.
Thanks
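One avenue worth checking, though I have not verified it against this exact setup: the tail input has a Docker_Mode option that is meant to recombine lines the Docker daemon split at the 16 KB boundary, before any parser runs. A sketch (the Path is an assumption):

[INPUT]
    Name        tail
    Path        /var/log/containers/*.log
    Parser      docker
    Docker_Mode On

Note that Docker_Mode only addresses splits made by Docker itself; if something downstream is doing the splitting, this won't help.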
I am trying to set up JSON logging for Log4j 2 as described here:
https://logging.apache.org/log4j/2.x/manual/json-template-layout.html
However, the format of the output is not matching what I expect. This is what I am getting:
{"#version":1,"source_host":"localhost","message":"hello world","thread_name":"main","#timestamp":"2021-08-17T15:44:54.948-04:00","level":"INFO","logger_name":"com.logging.test.LoggingTest"}
At first I created my own template, but this wasn't working, so I set it to the Logstash one described in the docs:
<JsonTemplateLayout eventTemplateUri="classpath:LogstashJsonEventLayoutV1.json"/>
I am not getting the line number in the output, or a lot of other fields. I know it is picking up the eventTemplateUri attribute, because if I set it to a value I know doesn't exist, I get an exception on startup.
I am using log4j-slf4j-impl; does anything special need to be done to make it work with this?
Thanks
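One thing worth double-checking, as an educated guess rather than something confirmed here: JsonTemplateLayout only resolves source fields such as the line number when location capture is enabled, and it is disabled by default for performance reasons. That would look like:

<JsonTemplateLayout eventTemplateUri="classpath:LogstashJsonEventLayoutV1.json"
                    locationInfoEnabled="true"/>

Capturing location information is expensive, which is why it is off by default.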
I am pretty new to Cumulocity, and I am trying to get data into the platform from my own device using MQTT and the SmartREST templates. I can get data in using the static templates, but they only support certain data types. I am struggling to create the appropriate SmartREST template in the UI, and the documentation doesn't go into much detail.
I get that the template name goes in the MQTT topic (or is selected on login as part of the username) as s/ut/template_name, and that the messageId of the messages in the template gets matched to the first CSV field of the MQTT publish payload. What I don't get is the template terminology. In the UI I choose API -> Measurement and Method -> POST, and I am presented with the required values $.type and $.time. My questions:
Is $.type the "measurement fragment type" name, or do I have to make it "c8y_CustomMeasurement"? Can I call it whatever I want?
$.time has a value field. Is this the default value if one is not supplied in the publish?
I assume I need to add a numerical value in the optional API values. To link it to the value of the data point, should I make the key "c8y_CustomMeasurement.custom.value"?
Am I way off base here?
Every time I publish to my own SmartREST template, the server drops the connection, so I assume it's an error in my template setup, but I don't see a way of accessing debug messages (also, nothing is published back to me on s/e or s/dt).
For the sake of an example, let's say I wish to publish a unitless, timestamped pulse count with payload format "mId,ts,value" and example data "p01,'2017-07-17 12:34:00',1234".
What you wrote so far is mostly correct; just to be a bit more precise:
The topic is s/uc/template_id (not the template name; that is just a label).
The $.type refers to the 'type' fragment in the measurement JSON. It is a free-text field.
In 99% of cases you want to leave $.time empty. If you set something here, it is not a default but is fixed to that timestamp, and you cannot change it when using the template. If you leave it empty, you can still choose per message whether to send a timestamp or let the server set one:
Example: p01,2017-07-17T12:34:00,1234 (no quotes around the timestamp, and ISO 8601 format)
Example without sending time: p01,,1234 (sending an empty string as the time results in the server time being set; the template is the same)
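For completeness, a publish against a custom template could look like this with mosquitto_pub (the host, credentials, client ID, and template ID are all placeholders for your own values):

mosquitto_pub -h <tenant>.cumulocity.com -p 1883 -u "<tenant>/<user>" -P "<password>" -i <clientId> -t s/uc/<template_id> -m "p01,2017-07-17T12:34:00,1234"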
Hope these points help you find your issue.
I use Log Parser 2.2 to read the IIS log and copy it into a database. Initially the IIS log had the default fields, and I was able to copy the log into the database. Now I have included one more field in the IIS log, but Log Parser does not return the details of the new column. Can anyone help me make Log Parser read the additional field along with the old log files?
The following query is used to read the IIS log:
select * from C:\inetpub\logs\LogFiles\W3SVC3\*.*
If it's newly added, then I think Log Parser stops checking for newly defined fields after the first 50 entries it finds (perhaps in total, or per log). Try using just the IIS log that has the new field in it to determine whether it's working or not. Also, make sure that the first three lines of that log reflect the entire #Fields: directive you're looking for.
e.g.:
select * from C:\inetpub\logs\LogFiles\W3SVC3\todays.log
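If you are loading the log into the database from the command line, something along these lines should exercise the same query (the server, database, and table names are placeholders):

LogParser -i:IISW3C "SELECT * INTO IisLog FROM C:\inetpub\logs\LogFiles\W3SVC3\todays.log" -o:SQL -server:MyServer -database:MyDatabase -createTable:ON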
I recently wrote a mailing platform for one of our employees to use. The system runs great, scales great, and is fun to use. However, it is currently inoperable due to a bug that I can't figure out how to fix (I'm a fairly inexperienced developer).
The process goes something like this...
Upload a CSV file to a specific FTP directory.
Go to the import_mailing_list page.
Choose a CSV file within the FTP directory.
Name and describe what the list contains.
Associate file headings with database columns.
Then, the back-end loops over each line of the file, associating the values with a heading, and importing these values into a database.
This all works wonderfully, except in one specific case: when a raw CSV is not correctly formatted. For example...
fname, lname, email
Bob, Schlumberger, bob@bob.com
Bobbette, Schlumberger
Another, Record, goeshere@email.com
As you can see, there is a missing comma on line two. This would cause an error when attempting to pull "valArray[3]" (or valArray[2], in the case of every language but mine).
I am looking for the most efficient solution to keep this error from happening. Perhaps I should check the array length and compare it to the index we're about to pull, before pulling it. But doing this for each and every value seems inefficient. Does anybody have another idea?
Our stack is ColdFusion 8/9 and MySQL 5.1. This is why I refer to the array index as [3].
There's ArrayIsDefined(array, elementIndex), or ArrayLen(array).
seems inefficient?
You gotta code what you need to code, forget about inefficiency. Get it right before you get it fast (when needed).
I suppose if you are looking for another way of doing this (instead of checking the array length each time, although that really doesn't sound that bad to me), you could wrap each line's insert attempt in a try/catch block. If it fails, stuff the failed row into a buffer (including the line number and error message) that you can display to the user after the batch has completed, so they can see each failed line and why it failed. This has the advantages of 1) not having to explicitly check the array length each time, and 2) catching other errors that you might not have anticipated beforehand (maybe a value is too long for your field, for example).
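To sketch what that might look like, combining the length check with the failed-row buffer (the variable names and the expected column count of 3 are made up for illustration):

<cfscript>
// csvLines is assumed to hold the raw lines of the uploaded file
failedRows = arrayNew(1);
for (i = 1; i lte arrayLen(csvLines); i = i + 1) {
    // third argument keeps empty fields so column positions stay stable
    valArray = listToArray(csvLines[i], ",", true);
    if (arrayLen(valArray) lt 3) {
        // buffer the bad row instead of aborting the whole import
        arrayAppend(failedRows, "Line #i#: #csvLines[i]#");
        continue;
    }
    // insert valArray[1], valArray[2], valArray[3] into the database here
}
// after the batch, display failedRows to the user
</cfscript>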