apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: kube-system
data:
  fluent.conf: |
    <match fluent.**>
      @type null
    </match>
    <match kubernetes.var.log.containers.dashboard-metrics-scraper**.log>
      @type null
    </match>
    <match kubernetes.var.log.containers.**fluentd**.log>
      @type null
    </match>
    <match kubernetes.var.log.containers.**kube-system**.log>
      @type null
    </match>
    <match kubernetes.var.log.containers.**kibana**.log>
      @type null
    </match>
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file fluentd-docker.pos
      tag kubernetes.*
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_type string
          time_format "%Y-%m-%dT%H:%M:%S.%NZ"
          keep_time_key false
        </pattern>
        <pattern>
          format regexp
          expression /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
          time_format '%Y-%m-%dT%H:%M:%S.%N%:z'
          keep_time_key false
        </pattern>
      </parse>
    </source>
    <filter kubernetes.**>
      @type grep
      <exclude>
        key url
        pattern \/health/
      </exclude>
    </filter>
    <filter kubernetes.**>
      @type kubernetes_metadata
      @id filter_kube_metadata
    </filter>
    <filter kubernetes.var.log.containers.**>
      @type parser
      <parse>
        @type json
        format json
        time_key time
        time_type string
        time_format "%Y-%m-%dT%H:%M:%S.%NZ"
        keep_time_key false
      </parse>
      key_name log
      replace_invalid_sequence true
      emit_invalid_record_to_error true
      reserve_data true
    </filter>
    <match **>
      @type elasticsearch
      @id out_es
      @log_level info
      include_tag_key true
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
      scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
      ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
      ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1_2'}"
      user "#{ENV['FLUENT_ELASTICSEARCH_USER'] || use_default}"
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD'] || use_default}"
      reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'false'}"
      reconnect_on_error "#{ENV['FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR'] || 'true'}"
      reload_on_failure "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE'] || 'true'}"
      log_es_400_reason "#{ENV['FLUENT_ELASTICSEARCH_LOG_ES_400_REASON'] || 'false'}"
      logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'dapr'}"
      logstash_dateformat "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_DATEFORMAT'] || '%Y.%m.%d'}"
      logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
      index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'dapr'}"
      type_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME'] || 'fluentd'}"
      include_timestamp "#{ENV['FLUENT_ELASTICSEARCH_INCLUDE_TIMESTAMP'] || 'false'}"
      template_name "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_NAME'] || use_nil}"
      template_file "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_FILE'] || use_nil}"
      template_overwrite "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_OVERWRITE'] || use_default}"
      sniffer_class_name "#{ENV['FLUENT_SNIFFER_CLASS_NAME'] || 'Fluent::Plugin::ElasticsearchSimpleSniffer'}"
      request_timeout "#{ENV['FLUENT_ELASTICSEARCH_REQUEST_TIMEOUT'] || '5s'}"
      <buffer>
        flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
        flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
        chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
        queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
        retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
        retry_forever true
      </buffer>
    </match>
With this configuration, requests to /health are still not getting excluded from the logs shipped to Elasticsearch.
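For reference, a hedged sketch of an exclude that matches on the raw log field instead. This is a sketch, not a confirmed fix; it assumes the health-check requests contain "/health" in the raw container log line. Note that the url key only exists after the parser filter further down has unpacked the JSON, so a grep on url placed before that filter has nothing to match against.

<filter kubernetes.**>
  @type grep
  <exclude>
    # "url" is only present after the parser filter has run;
    # the raw log line is always available in the "log" key
    key log
    pattern /\/health/
  </exclude>
</filter>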
I have an old application that uses SLF4J with Log4J-1.2-17.jar. I am upgrading it to Log4j 2.17. The code was updated to use the newer packages; however, with my properties file the output goes to the console only and NOT to my rolling log file. What is wrong with my log4j2.properties file?
### Future Log4J v 2.17
# Log files location
#property.basePath = /appllogs/mds/
# change log file name as per your requirement
property.filename = /appllogs/mds/application.log
appenders = rolling
appender.rolling.type = RollingFile
appender.rolling.name = RollingFile
appender.rolling.fileName = ${filename}
appender.rolling.filePattern = ${filename}-backup-%d{MM-dd-yy-HH-mm-ss}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d [%t] %-5p %c - %m%n
appender.rolling.policies.type = Policies
# To change log file every day
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
# To change log file after 1Kb size
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size=1Kb
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 20
loggers = rolling
logger.rolling.name = com.yyy.zzz.components.mds
logger.rolling.level = info
logger.rolling.additivity = false
logger.rolling.appenderRef.rolling.ref = RollingFile
# Direct log messages to stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} [%X{userId}] %5p %c{1}:%L - %m%n
# Root logger option
log4j.rootLogger=error, file
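Note that the log4j.appender.stdout and log4j.rootLogger lines above are Log4j 1.x syntax, which Log4j 2 does not read. A minimal, hedged sketch of the equivalent console appender in Log4j 2 properties syntax follows; the STDOUT name and the reuse of the old conversion pattern are assumptions for illustration, not the asker's actual settings.

appenders = console, rolling
appender.console.type = Console
appender.console.name = STDOUT
appender.console.target = SYSTEM_OUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} [%X{userId}] %5p %c{1}:%L - %m%n
rootLogger.level = error
rootLogger.appenderRef.stdout.ref = STDOUT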
I'm trying to send this very simple JSON string to Telegraf to be saved into InfluxDB:
{ "id": "id_123", "value": 10 }
So the request would be this: curl -i -XPOST 'http://localhost:8080/telegraf' --data-binary '{"id": "id_123","value": 10}'
When I make that request, I get the following response: HTTP/1.1 204 No Content Date: Tue, 20 Apr 2021 13:02:49 GMT. But when I check what was written to the database, only the value field is there:
select * from http_listener_v2
time host influxdb_database value
---- ---- ----------------- -----
1618923747863479914 my.host.com my_db 10
What am I doing wrong?
Here's my Telegraf config:
[global_tags]
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 1000
metric_buffer_limit = 10000
collection_jitter = "0s"
flush_interval = "10s"
flush_jitter = "0s"
precision = ""
hostname = ""
omit_hostname = false
# OUTPUTS
[[outputs.influxdb]]
urls = ["http://127.0.0.1:8086"]
database = "telegraf"
username = "xxx"
password = "xxx"
[outputs.influxdb.tagdrop]
influxdb_database = ["*"]
[[outputs.influxdb]]
urls = ["http://127.0.0.1:8086"]
database = "httplistener"
username = "xxx"
password = "xxx"
[outputs.influxdb.tagpass]
influxdb_database = ["httplistener"]
# INPUTS
## system
[[inputs.cpu]]
percpu = true
totalcpu = true
collect_cpu_time = false
report_active = false
[[inputs.disk]]
ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]
[[inputs.mem]]
[[inputs.swap]]
[[inputs.system]]
## http listener
[[inputs.http_listener_v2]]
service_address = ":8080"
path = "/telegraf"
methods = ["POST", "PUT"]
data_source = "body"
data_format = "json"
[inputs.http_listener_v2.tags]
influxdb_database = "httplistener"
Use json_string_fields = ["id"] so that Telegraf's JSON parser keeps the string field; by default only numeric values become fields and string values are silently dropped.
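In the http_listener_v2 input that would look roughly like the following (a sketch based on the config in the question; use tag_keys instead if you want id stored as a tag rather than a field):

[[inputs.http_listener_v2]]
  service_address = ":8080"
  path = "/telegraf"
  methods = ["POST", "PUT"]
  data_source = "body"
  data_format = "json"
  ## keep the string field instead of dropping it
  json_string_fields = ["id"]
  ## alternative: store it as a tag with tag_keys = ["id"]
  [inputs.http_listener_v2.tags]
    influxdb_database = "httplistener"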
I am trying to collect my Docker container logs with fluentd. Both the application and the fluentd process are started through supervisord and both run in the same container, but fluentd is only picking up half of the application logs. I need to fetch the logs from the beginning. My fluentd conf is below:
<source>
  type tail
  path /var/log/*
  path_key path
  format none
  read_from_head true
  <parse>
    @type grok
    <grok>
      pattern (?<logtm>%{MONTHDAY}-%{MONTH}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND}) %{LOGLEVEL:loglevel} \[%{DATA:thread}] %{GREEDYDATA:message}
    </grok>
    <grok>
      pattern %{GREEDYDATA:timestamp} %{LOGLEVEL:loglevel} %{GREEDYDATA:message}
    </grok>
    <grok>
      pattern %{URIHOST:remote_host} - - \[%{HTTPDATE:request_time}] "%{WORD:method} %{NOTSPACE:request_page} %{GREEDYDATA:message}/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:bytes} %{INT:time_taken} %{QS:referrer} %{QS:user_agent}
    </grok>
  </parse>
  keep_time_key true
  # time_format yyyy-MM-dd HH:mm:ss.SSSZ
  tag graylog2.*
</source>
<filter **>
  @type record_transformer
  enable_ruby
  <record>
    logtm ${record["logtm"]}
    thread ${record["thread"]}
    instance "#{Socket.gethostname}"
    namespace "#{ENV.fetch('INSTANCE_PREFIX'){'default'}}"
    app "#{ENV.fetch('APPZ_APP_NAME'){'wordpress'}}"
    level ${if record["loglevel"] == "EMERG";record["level"] = "0" ; record["loglevel"] == "ALERT";record["level"] = "1";elsif record["loglevel"] == "CRIT" || record["loglevel"] == "SEVERE"; record["level"]= "2" ;elsif record["loglevel"] == "ERROR" ; record["level"]= "3" ;elsif record["loglevel"] == "WARN" || record["loglevel"] == "WARNING" ; record["level"]= "4" ;elsif record["loglevel"] == "NOTICE" ; record["level"]= "5";elsif record["loglevel"] == "INFO" || record["loglevel"] == nil || record["loglevel"] == 0 ; record["level"]= "6";else record["loglevel"] == "DEBUG" || record["loglevel"] == "debug"; record["level"]= "7";end}
  </record>
</filter>
<match **>
  type graylog
  host "#{ENV.fetch('LOG_HOST'){'GL'}}"
  port "#{ENV.fetch('LOG_PORT'){12201}}"
  # BufferedOutput config
  flush_interval 5s
  num_threads 2
  # ...
</match>
Note: when I change /var/log/* to /var/log/log_file_name.log, it works fine.
Could you try with a pos_file within in_tail? For example:
path /var/log/httpd-access.log
pos_file /var/log/td-agent/httpd-access.log.pos
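Applied to the source block in the question, a sketch could look like the one below. The pos file path here is illustrative; any location fluentd can write to and that survives restarts will do, and one pos file tracks the read position of every file matched by the wildcard, so content is neither re-read nor skipped after a restart.

<source>
  @type tail
  path /var/log/*
  path_key path
  # illustrative pos file location; any writable path works
  pos_file /var/log/td-agent/app-logs.pos
  read_from_head true
  # ... keep the existing format/parse settings from the question here ...
  tag graylog2.*
</source>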
I'm trying to write errors to a log file in my Java desktop app, but it's not working.
The error is:
2018-11-02 21:10:27,975 AWT-EventQueue-0 ERROR Unable to create file
C:UsersNhanDesktopPRJ311JDBCsrc hanloggerlogging.log
java.io.IOException: The filename, directory name, or volume label
syntax is incorrect
Here is my configuration:
status = error
name = PropertiesConfig
property.filename = C:\Users\Nhan\Desktop\PRJ311\JDBC
filters = threshold
filter.threshold.type = ThresholdFilter
filter.threshold.level = debug
appenders = rolling
appender.rolling.type = RollingFile
appender.rolling.name = RollingFile
appender.rolling.fileName = ${filename}
appender.rolling.filePattern = debug-backup-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size=10MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 20
loggers = rolling
logger.rolling.name = nhan.views
logger.rolling.level = debug
logger.rolling.additivity = false
logger.rolling.appenderRef.rolling.ref = RollingFile
A backslash has a special meaning in a properties file per the Java Properties class. You need to escape your backslashes by adding another backslash to each of them:
C:\\Users\\Nhan\\Desktop\\PRJ311\\JDBC
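Alternatively, forward slashes also work on Windows in Java and avoid the escaping issue altogether, e.g.:

property.filename = C:/Users/Nhan/Desktop/PRJ311/JDBC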
I'm trying to create a logger that logs to a rolling file and to my console at the same time. Each by itself works perfectly fine, but in combination only the rolling file appender works. Maybe I'm doing something wrong and did not properly understand Log4j 2.
I hope someone can help me.
My properties file is the following:
status = error
name = PropertiesConfig
filters = threshold
filter.threshold.type = ThresholdFilter
filter.threshold.level = debug
appenders = console, rolling
appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %-4r %-5p [%t] %c - %m%n
appender.rolling.type = RollingFile
appender.rolling.name = RollingFile
appender.rolling.fileName = mypathtofilehere
appender.rolling.filePattern = CrashDesigner-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size=10MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 40
loggers = rolling.file
logger.rolling.file.name = com.myapp
logger.rolling.file.level = debug
logger.rolling.file.additivity = false
logger.rolling.file.appenderRefs = rolling
logger.rolling.file.appenderRef.rolling.ref = RollingFile
rootLogger.level = debug
rootLogger.appenderRefs = stdout
rootLogger.appenderRef.stdout.ref = STDOUT
I see two possibilities here. Either change the following line to true:
logger.rolling.file.additivity = true
or add a second appender reference:
logger.rolling.file.appenderRef.stdout.ref = STDOUT
Also, the lines with .appenderRefs can be removed; they are not required in Log4j 2 properties configurations.
Each logger can write to more than one appender. For that it should have a reference to all of those appenders, which can be achieved using appenderRefs:
logger.rolling.file.appenderRefs = rolling, stdout
logger.rolling.file.appenderRef.rolling.ref = RollingFile
logger.rolling.file.appenderRef.stdout.ref = STDOUT
This will make every log event triggered by the 'rolling.file' logger be appended to both the STDOUT and RollingFile appenders.
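For reference, a minimal sketch of the logger sections with both references in place (appender names taken from the configuration in the question):

loggers = rolling.file
logger.rolling.file.name = com.myapp
logger.rolling.file.level = debug
logger.rolling.file.additivity = false
logger.rolling.file.appenderRef.rolling.ref = RollingFile
logger.rolling.file.appenderRef.stdout.ref = STDOUT
rootLogger.level = debug
rootLogger.appenderRef.stdout.ref = STDOUT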