I am trying to collect my Docker container logs with Fluentd. Both the application and the Fluentd process are started by supervisord and run in the same container, but Fluentd is only picking up about half of the application logs. I need to fetch the logs from the beginning. My Fluentd config is below:
<source>
  @type tail
  path /var/log/*
  path_key path
  format none
  read_from_head true
  <parse>
    @type grok
    <grok>
      pattern (?<logtm>%{MONTHDAY}-%{MONTH}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND}) %{LOGLEVEL:loglevel} \[%{DATA:thread}] %{GREEDYDATA:message}
    </grok>
    <grok>
      pattern %{GREEDYDATA:timestamp} %{LOGLEVEL:loglevel} %{GREEDYDATA:message}
    </grok>
    <grok>
      pattern %{URIHOST:remote_host} - - \[%{HTTPDATE:request_time}] "%{WORD:method} %{NOTSPACE:request_page} %{GREEDYDATA:message}/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:bytes} %{INT:time_taken} %{QS:referrer} %{QS:user_agent}
    </grok>
  </parse>
  keep_time_key true
  # time_format yyyy-MM-dd HH:mm:ss.SSSZ
  tag graylog2.*
</source>
<filter **>
  @type record_transformer
  enable_ruby
  <record>
    logtm ${record["logtm"]}
    thread ${record["thread"]}
    instance "#{Socket.gethostname}"
    namespace "#{ENV.fetch('INSTANCE_PREFIX'){'default'}}"
    app "#{ENV.fetch('APPZ_APP_NAME'){'wordpress'}}"
    level ${if record["loglevel"] == "EMERG"; record["level"] = "0"; elsif record["loglevel"] == "ALERT"; record["level"] = "1"; elsif record["loglevel"] == "CRIT" || record["loglevel"] == "SEVERE"; record["level"] = "2"; elsif record["loglevel"] == "ERROR"; record["level"] = "3"; elsif record["loglevel"] == "WARN" || record["loglevel"] == "WARNING"; record["level"] = "4"; elsif record["loglevel"] == "NOTICE"; record["level"] = "5"; elsif record["loglevel"] == "INFO" || record["loglevel"] == nil || record["loglevel"] == 0; record["level"] = "6"; else; record["level"] = "7"; end}
  </record>
</filter>
<match **>
  @type graylog
  host "#{ENV.fetch('LOG_HOST'){'GL'}}"
  port "#{ENV.fetch('LOG_PORT'){12201}}"
  # BufferedOutput config
  flush_interval 5s
  num_threads 2
  # ...
</match>
Note: when I change /var/log/* to /var/log/log_file_name.log, it works fine.
Could you try with a pos_file in in_tail? For example:
path /var/log/httpd-access.log
pos_file /var/log/td-agent/httpd-access.log.pos
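Applied to the original source block, a minimal sketch with a pos_file added, so in_tail can remember its read position per file across restarts (the pos_file path below is an assumption; any directory writable by the Fluentd process works):

<source>
  @type tail
  path /var/log/*
  # Hypothetical location for the position file:
  pos_file /var/log/fluentd/var-log.pos
  path_key path
  read_from_head true
  tag graylog2.*
  # ... grok parse section as above ...
</source>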
Related: I have a similar setup in Kubernetes with the following Fluentd ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: kube-system
data:
  fluent.conf: |
    <match fluent.**>
      @type null
    </match>
    <match kubernetes.var.log.containers.dashboard-metrics-scraper**.log>
      @type null
    </match>
    <match kubernetes.var.log.containers.**fluentd**.log>
      @type null
    </match>
    <match kubernetes.var.log.containers.**kube-system**.log>
      @type null
    </match>
    <match kubernetes.var.log.containers.**kibana**.log>
      @type null
    </match>
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file fluentd-docker.pos
      tag kubernetes.*
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_type string
          time_format "%Y-%m-%dT%H:%M:%S.%NZ"
          keep_time_key false
        </pattern>
        <pattern>
          format regexp
          expression /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
          time_format '%Y-%m-%dT%H:%M:%S.%N%:z'
          keep_time_key false
        </pattern>
      </parse>
    </source>
    <filter kubernetes.**>
      @type grep
      <exclude>
        key url
        pattern \/health/
      </exclude>
    </filter>
    <filter kubernetes.**>
      @type kubernetes_metadata
      @id filter_kube_metadata
    </filter>
    <filter kubernetes.var.log.containers.**>
      @type parser
      <parse>
        @type json
        format json
        time_key time
        time_type string
        time_format "%Y-%m-%dT%H:%M:%S.%NZ"
        keep_time_key false
      </parse>
      key_name log
      replace_invalid_sequence true
      emit_invalid_record_to_error true
      reserve_data true
    </filter>
    <match **>
      @type elasticsearch
      @id out_es
      @log_level info
      include_tag_key true
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
      scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
      ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
      ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1_2'}"
      user "#{ENV['FLUENT_ELASTICSEARCH_USER'] || use_default}"
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD'] || use_default}"
      reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'false'}"
      reconnect_on_error "#{ENV['FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR'] || 'true'}"
      reload_on_failure "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE'] || 'true'}"
      log_es_400_reason "#{ENV['FLUENT_ELASTICSEARCH_LOG_ES_400_REASON'] || 'false'}"
      logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'dapr'}"
      logstash_dateformat "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_DATEFORMAT'] || '%Y.%m.%d'}"
      logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
      index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'dapr'}"
      type_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME'] || 'fluentd'}"
      include_timestamp "#{ENV['FLUENT_ELASTICSEARCH_INCLUDE_TIMESTAMP'] || 'false'}"
      template_name "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_NAME'] || use_nil}"
      template_file "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_FILE'] || use_nil}"
      template_overwrite "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_OVERWRITE'] || use_default}"
      sniffer_class_name "#{ENV['FLUENT_SNIFFER_CLASS_NAME'] || 'Fluent::Plugin::ElasticsearchSimpleSniffer'}"
      request_timeout "#{ENV['FLUENT_ELASTICSEARCH_REQUEST_TIMEOUT'] || '5s'}"
      <buffer>
        flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
        flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
        chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
        queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
        retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
        retry_forever true
      </buffer>
    </match>
The problem: requests to /health are not getting excluded.
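One hedged observation: filters run in the order they are declared, and this grep filter sits before the parser filter that extracts JSON out of the log field, so a url key may not exist yet when the exclude is evaluated; the pattern is also written as \/health/ rather than a complete /regexp/ literal. A sketch of the exclude with a well-formed pattern, placed after the parser filter (this assumes the parsed JSON records really do carry a url field):

    <filter kubernetes.var.log.containers.**>
      @type grep
      <exclude>
        key url
        pattern /\/health/
      </exclude>
    </filter>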
I have an old application that uses SLF4J with log4j-1.2.17.jar. I am upgrading it to Log4j 2.17. The code was updated to use the newer packages; however, the properties file seems to output to the console only and NOT to my rolling log file. What is wrong with my log4j2.properties file?
### Future Log4J v 2.17
# Log files location
#property.basePath = /appllogs/mds/
# change log file name as per your requirement
property.filename = /appllogs/mds/application.log
appenders = rolling
appender.rolling.type = RollingFile
appender.rolling.name = RollingFile
appender.rolling.fileName = ${filename}
appender.rolling.filePattern = ${filename}-backup-%d{MM-dd-yy-HH-mm-ss}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d [%t] %-5p %c - %m%n
appender.rolling.policies.type = Policies
# To change log file every day
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
# To change log file after 1Kb size
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 1Kb
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 20
loggers = rolling
logger.rolling.name = com.yyy.zzz.components.mds
logger.rolling.level = info
logger.rolling.additivity = false
logger.rolling.appenderRef.rolling.ref = RollingFile
# Direct log messages to stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} [%X{userId}] %5p %c{1}:%L - %m%n
# Root logger option
log4j.rootLogger=error, file
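One thing to note: the last six lines (log4j.appender.stdout..., log4j.rootLogger=...) use Log4j 1.x syntax, which a log4j2.properties file does not understand, so Log4j 2 silently ignores them. A hedged sketch of the Log4j 2 equivalents, reusing the pattern from the 1.x lines (property names follow the Log4j 2 properties format; the stdout appender and root-logger wiring here replace the ignored 1.x lines, and the existing appenders = rolling line would become the list below):

# Console appender in Log4j 2 syntax
appenders = rolling, stdout
appender.stdout.type = Console
appender.stdout.name = STDOUT
appender.stdout.target = SYSTEM_OUT
appender.stdout.layout.type = PatternLayout
appender.stdout.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} [%X{userId}] %5p %c{1}:%L - %m%n

# Root logger in Log4j 2 syntax
rootLogger.level = error
rootLogger.appenderRef.stdout.ref = STDOUT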
mypy generates the following warning for the p1.stdout.close() line: Item "None" of "Optional[IO[bytes]]" has no attribute "close". How can I fix this error?
#!/usr/bin/env python3
import subprocess

filename = "filename.py"
p1 = subprocess.Popen(["ls", "-ln", filename], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["awk", "{print $1}"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # <== `Item "None" of "Optional[IO[bytes]]" has no attribute "close"`
close() is declared as an @abstractmethod in typing.py:

@abstractmethod
def close(self) -> None:
    pass
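Popen.stdout is typed Optional[IO[bytes]] because it is None unless stdout=subprocess.PIPE was passed, and mypy cannot see that connection. One common fix is to narrow the Optional before the call, for example with an assert:

#!/usr/bin/env python3
import subprocess

filename = "filename.py"
p1 = subprocess.Popen(["ls", "-ln", filename], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["awk", "{print $1}"], stdin=p1.stdout, stdout=subprocess.PIPE)

# stdout=subprocess.PIPE guarantees p1.stdout is a real pipe at runtime,
# but mypy cannot infer that, so narrow the Optional explicitly:
assert p1.stdout is not None
p1.stdout.close()  # mypy is now satisfied

output = p2.communicate()[0]  # e.g. b"-rw-r--r--\n"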
I'm trying to create a logger that logs to a rolling file and my console at the same time. Each by itself works perfectly fine, but in combination only the rolling file works. Maybe I'm doing something wrong and did not properly understand log4j2.
I hope someone can help me.
My properties file is the following:
status = error
name = PropertiesConfig
filters = threshold
filter.threshold.type = ThresholdFilter
filter.threshold.level = debug
appenders = console, rolling
appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %-4r %-5p [%t] %c - %m%n
appender.rolling.type = RollingFile
appender.rolling.name = RollingFile
appender.rolling.fileName = mypathtofilehere
appender.rolling.filePattern = CrashDesigner-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 10MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 40
loggers = rolling.file
logger.rolling.file.name = com.myapp
logger.rolling.file.level = debug
logger.rolling.file.additivity = false
logger.rolling.file.appenderRefs = rolling
logger.rolling.file.appenderRef.rolling.ref = RollingFile
rootLogger.level = debug
rootLogger.appenderRefs = stdout
rootLogger.appenderRef.stdout.ref = STDOUT
I see two possibilities here. Either
change the following line to true: logger.rolling.file.additivity = true
or
add a second appenderRef: logger.rolling.file.appenderRef.stdout.ref = STDOUT
Also, you should remove the lines with .appenderRefs; there is no such appenderRefs entity in log4j2.
Each logger can write to more than one appender. For that, it needs a reference to each of those appenders, which can be declared using appenderRefs.
logger.rolling.file.appenderRefs = rolling, stdout
logger.rolling.file.appenderRef.rolling.ref = RollingFile
logger.rolling.file.appenderRef.stdout.ref = STDOUT
This will make every log event triggered by the logger 'rolling.file' be appended to both the STDOUT and RollingFile appenders.
I am trying to run the following mplayer command in Rails using the session gem:
mplayer -identify -vo null -ao null -frames 0 text.mov
I use require "session", and the following code works great in a standalone Ruby file.
mb = "mplayer"
mi = "-identify -vo null -ao null -frames 0"
dimensions_bitrate = Hash.new
stdout, stderr = '', ''
shell = Session::Shell.new
shell.execute "#{mb} #{mi} #{filename}", :stdout => stdout, :stderr => stderr
vars = (stdout.split(/\n/).collect! { |o| o if o =~ /^ID_/ }).compact!
vars.each { |v|
  a, b = v.split("=")
  # Sets an instance variable named after the ID_ key, e.g. @id_video_width
  eval "@#{a.to_s.downcase} = \"#{b}\""
  if a == "ID_VIDEO_WIDTH"
    dimensions_bitrate[0] = b.to_i
  elsif a == "ID_VIDEO_HEIGHT"
    dimensions_bitrate[1] = b.to_i
  elsif a == "ID_VIDEO_BITRATE"
    dimensions_bitrate[2] = b.to_i
  end
}
HOWEVER, I am unable to load the session gem into Rails, and I am not sure what the problem is. If I add require "session", I get the following error:
no such file to load -- session
I figure I am missing something relatively straightforward.
Any ideas?
I could not get this to work, so I did the following:
stdout = %x["mplayer" "-identify" "-vo" "null" "-ao" "null" "-frames" "0" "#{filename}"]
vars = (stdout.split(/\n/).collect! { |o| o if o =~ /^ID_/ }).compact!
vars.each { |v|
  a, b = v.split("=")
  eval "@#{a.to_s.downcase} = \"#{b}\""
  if a == "ID_VIDEO_WIDTH"
    dimensions_bitrate[0] = b.to_i
  elsif a == "ID_VIDEO_HEIGHT"
    dimensions_bitrate[1] = b.to_i
  elsif a == "ID_VIDEO_BITRATE"
    dimensions_bitrate[2] = b.to_i
  end
}
and it worked great. Hope this is of use to someone running command-line tools from Rails. The key is to pass each parameter as a quoted string.
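A hedged alternative, if the session gem will not load: Ruby's standard-library Open3 (no extra gem needed) can capture stdout the same way. A sketch under the assumption that mplayer is on the PATH and prints its ID_ lines on stdout:

require "open3"

filename = "text.mov"
dimensions_bitrate = {}

# capture3 runs the command without a shell and returns stdout, stderr, status.
stdout, _stderr, _status = Open3.capture3(
  "mplayer", "-identify", "-vo", "null", "-ao", "null", "-frames", "0", filename
)

stdout.each_line do |line|
  next unless line.start_with?("ID_")
  key, value = line.chomp.split("=", 2)
  case key
  when "ID_VIDEO_WIDTH"   then dimensions_bitrate[0] = value.to_i
  when "ID_VIDEO_HEIGHT"  then dimensions_bitrate[1] = value.to_i
  when "ID_VIDEO_BITRATE" then dimensions_bitrate[2] = value.to_i
  end
end

Passing the arguments as separate strings to capture3 avoids shell quoting issues entirely, which is the same idea as quoting each parameter in the %x form above.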