I have checked these questions:
Logging from a storm bolt - where is it going?
And the solution is not working anymore.
In theory, the storm.log.dir system property is set when we launch storm jar. As the solution suggests, you can run ps aux | grep storm.log.dir to find the property's value.
It shows:
java -client -Ddaemon.name= -Dstorm.options= -Dstorm.home=/opt/apache-storm-1.1.1 -Dstorm.log.dir=/opt/apache-storm-1.1.1/logs .....
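The value can be pulled out of that command line with grep; a small sketch, using the line above as canned sample input (in practice you would pipe ps aux itself):

```shell
# sample worker/daemon command line, as captured above; in practice this
# would come from: ps aux | grep storm.log.dir
cmdline='java -client -Dstorm.home=/opt/apache-storm-1.1.1 -Dstorm.log.dir=/opt/apache-storm-1.1.1/logs'
# grep -o keeps only the matching property, cut takes the value after '='
echo "$cmdline" | grep -o 'storm\.log\.dir=[^ ]*' | cut -d= -f2
# prints /opt/apache-storm-1.1.1/logs
```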
But, when I go there, I have:
[root@xxx ~]# ls -la /opt/apache-storm-1.1.1/logs
total 58272
drwxr-xr-x 3 root root 4096 Jan 15 01:21 .
drwxr-xr-x 14 root root 4096 Oct 31 10:09 ..
-rw-r--r-- 1 root root 0 Nov 3 10:46 access-logviewer.log
-rw-r--r-- 1 root root 0 Oct 31 10:09 access-nimbus.log
-rw-r--r-- 1 root root 0 Oct 31 10:09 access-supervisor.log
-rw-r--r-- 1 root root 0 Nov 23 14:36 access-ui.log
-rw-r--r-- 1 root root 8916 Nov 27 17:34 access-web-logviewer.log
-rw-r--r-- 1 root root 0 Oct 31 10:09 access-web-nimbus.log
-rw-r--r-- 1 root root 0 Oct 31 10:09 access-web-supervisor.log
-rw-r--r-- 1 root root 31661 Nov 27 17:40 access-web-ui.log
-rw-r--r-- 1 root root 2247 Nov 3 11:34 logviewer.log
-rw-r--r-- 1 root root 0 Nov 3 10:46 logviewer.log.metrics
-rw-r--r-- 1 root root 20690 Oct 31 10:09 nimbus.log
-rw-r--r-- 1 root root 0 Oct 31 10:09 nimbus.log.metrics
-rw-r--r-- 1 root root 46713727 Feb 6 17:13 supervisor.log
-rw-r--r-- 1 root root 3143062 Jan 10 08:23 supervisor.log.1.gz
-rw-r--r-- 1 root root 3104009 Jan 11 22:06 supervisor.log.2.gz
-rw-r--r-- 1 root root 3103550 Jan 13 11:43 supervisor.log.3.gz
-rw-r--r-- 1 root root 3103899 Jan 15 01:21 supervisor.log.4.gz
-rw-r--r-- 1 root root 0 Oct 31 10:09 supervisor.log.metrics
-rw-r--r-- 1 root root 401456 Nov 27 17:40 ui.log
-rw-r--r-- 1 root root 0 Nov 23 14:36 ui.log.metrics
drwxr-xr-x 48 root root 4096 Feb 12 18:08 workers-artifacts
Entering workers-artifacts, I have dirs named after topologies, and inside those, port-numbered dirs containing a worker.yaml.
[root@mq1-acustats-process KafkaStormRadius-1-1518455061]# cd 1027
[root@mq1-acustats-process 1027]# ls -la
total 4
drwxr-xr-x 2 root root 24 Feb 12 18:04 .
drwxr-xr-x 8 root root 72 Feb 12 18:04 ..
-rw-r--r-- 1 root root 109 Feb 12 18:04 worker.yaml
[root@mq1-acustats-process 1027]# cat worker.yaml
worker-id: 33e6e77d-8a7b-48b2-b2d4-1be0af726c52
logs.users: []
logs.groups: []
topology.submitter.user: root
So, no logs here. Where are they?
I have this worker.xml under {STORM_DIR}/log4j2:
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<configuration monitorInterval="60" shutdownHook="disable">
<properties>
<property name="pattern">%d{yyyy-MM-dd HH:mm:ss.SSS} %c{1.} %t [%p] %msg%n</property>
<property name="patternNoTime">%msg%n</property>
<property name="patternMetrics">%d %-8r %m%n</property>
</properties>
<appenders>
<RollingFile name="A1"
fileName="${sys:workers.artifacts}/${sys:storm.id}/${sys:worker.port}/${sys:logfile.name}"
filePattern="${sys:workers.artifacts}/${sys:storm.id}/${sys:worker.port}/${sys:logfile.name}.%i.gz">
<PatternLayout>
<pattern>${pattern}</pattern>
</PatternLayout>
<Policies>
<SizeBasedTriggeringPolicy size="100 MB"/> <!-- Or every 100 MB -->
</Policies>
<DefaultRolloverStrategy max="9"/>
</RollingFile>
<RollingFile name="radius"
fileName="${sys:storm.log.dir}/radius.log"
filePattern="${sys:storm.log.dir}/radius.log.%i.gz">
<PatternLayout>
<pattern>${pattern}</pattern>
</PatternLayout>
<Policies>
<SizeBasedTriggeringPolicy size="50 MB"/> <!-- Or every 20 MB -->
</Policies>
<DefaultRolloverStrategy max="9"/>
</RollingFile>
<RollingFile name="STDOUT"
fileName="${sys:storm.log.dir}/radius.out"
filePattern="${sys:storm.log.dir}/radius.out.%i.gz">
<PatternLayout>
<pattern>${patternNoTime}</pattern>
</PatternLayout>
<Policies>
<SizeBasedTriggeringPolicy size="50 MB"/> <!-- Or every 100 MB -->
</Policies>
<DefaultRolloverStrategy max="4"/>
</RollingFile>
<RollingFile name="STDERR"
fileName="${sys:storm.log.dir}/radius.err"
filePattern="${sys:storm.log.dir}/radius.err.%i.gz">
<PatternLayout>
<pattern>${patternNoTime}</pattern>
</PatternLayout>
<Policies>
<SizeBasedTriggeringPolicy size="50 MB"/> <!-- Or every 100 MB -->
</Policies>
<DefaultRolloverStrategy max="4"/>
</RollingFile>
<RollingFile name="METRICS"
fileName="${sys:storm.log.dir}/radius.metrics"
filePattern="${sys:storm.log.dir}/radius.metrics.%i.gz">
<PatternLayout>
<pattern>${patternMetrics}</pattern>
</PatternLayout>
<Policies>
<SizeBasedTriggeringPolicy size="2 MB"/>
</Policies>
<DefaultRolloverStrategy max="9"/>
</RollingFile>
<Syslog name="syslog" format="RFC5424" charset="UTF-8" host="localhost" port="514"
protocol="UDP" appName="[${sys:storm.id}:${sys:worker.port}]" mdcId="mdc" includeMDC="true"
facility="LOCAL5" enterpriseNumber="18060" newLine="true" exceptionPattern="%rEx{full}"
messageId="[${sys:user.name}:${sys:logging.sensitivity}]" id="storm" immediateFail="true" immediateFlush="true"/>
</appenders>
<loggers>
<root level="info"> <!-- We log everything -->
<appender-ref ref="A1"/>
<appender-ref ref="syslog"/>
</root>
<Logger name="org.apache.storm.metric.LoggingMetricsConsumer" level="info" additivity="false">
<appender-ref ref="METRICS"/>
</Logger>
<Logger name="com.joestelmach.natty" level="error" additivity="false">
<appender-ref ref="radius"/>
</Logger>
<Logger name="STDERR" level="INFO">
<appender-ref ref="STDERR"/>
<appender-ref ref="syslog"/>
</Logger>
<Logger name="STDOUT" level="INFO">
<appender-ref ref="STDOUT"/>
<appender-ref ref="syslog"/>
</Logger>
</loggers>
</configuration>
But, I don't see radius.log created under ${sys:storm.log.dir}.
Why?
By the way:
What is the difference between cluster.xml and worker.xml?
My guess would be that the storm.log.dir variable isn't being correctly set for the worker JVM. Remember that the JVM you start when running storm jar isn't the same JVM that will run your topology.
To set JVM options for your workers, you can use the worker.childopts variable in storm.yaml to set options globally (if you do this, make sure to copy over the defaults from https://github.com/apache/storm/blob/v1.1.1/conf/defaults.yaml#L171), or topology.worker.childopts in your topology config to set them per topology.
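As a sketch of the per-topology variant (the property value here is only an illustration), note that topology.worker.childopts is appended to worker.childopts rather than replacing it:

```yaml
# cluster-wide default in storm.yaml; individual topologies can override
# this via Config.put("topology.worker.childopts", ...) when submitting
topology.worker.childopts: "-Dstorm.log.dir=/var/log/storm-workers"
```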
For example, I get logs printed to E:\testLogs with the following config:
storm.yaml
worker.childopts: "-Dstorm.log.dir=E:\\testLogs"
worker.xml
...
<RollingFile name="A1"
fileName="${sys:storm.log.dir}/worker.log"
filePattern="${sys:workers.artifacts}/${sys:storm.id}/${sys:worker.port}/${sys:logfile.name}.%i.gz">
<PatternLayout>
<pattern>${pattern}</pattern>
</PatternLayout>
<Policies>
<SizeBasedTriggeringPolicy size="100 MB"/> <!-- Or every 100 MB -->
</Policies>
<DefaultRolloverStrategy max="9"/>
</RollingFile>
...
where the rest of the worker.xml is the default one shipping with Storm 1.1.1.
Regarding cluster.xml and worker.xml: cluster.xml is the logging configuration for the Storm daemons (Nimbus, Supervisor, UI, etc.), and worker.xml is the logging configuration for the worker processes (the ones running your topology components).
I read the documentation left by another team and found these lines. It turns out one Storm service was not launched:
systemctl start storm-supervisor
Checking the service definition under /etc/systemd/system/ (I am on CentOS, so I am using systemd), it reads:
[Unit]
Description=Nimbus Service
After=network.target
[Service]
Type=simple
Restart=always
RestartSec=0s
ExecStart=/opt/apache-storm/bin/storm supervisor
[Install]
WantedBy=multi-user.target
After launching that, I now see log files created under the correct dir.
Worth mentioning that I also changed two other things:
Use slf4j instead of log4j.
Change the log4j2.properties to use an absolute path (under /tmp) instead of reading a system variable like {sys:storm.log.dir}, in order to rule that variable out as a factor.
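That second debugging change might look roughly like this in log4j2.properties form (appender and file names here are hypothetical) — hard-coding the path removes any dependence on the storm.log.dir property being set in the worker JVM:

```properties
# hypothetical debugging fragment: absolute path instead of ${sys:storm.log.dir}
appender.rolling.type = RollingFile
appender.rolling.name = A1
appender.rolling.fileName = /tmp/worker-debug.log
appender.rolling.filePattern = /tmp/worker-debug.log.%i.gz
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss.SSS} %c{1.} %t [%p] %msg%n
appender.rolling.policies.type = Policies
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 50 MB

rootLogger.level = info
rootLogger.appenderRef.rolling.ref = A1
```

If the file shows up under /tmp but not under the configured log dir, the property (not the appender) is the problem.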
Related
Why can the level of an appender ref not be lower than the root logger level?
From log4j2.xml:
...
<Loggers>
<Root level="ERROR">
<AppenderRef ref="RollingFile" level="INFO" />
</Root>
</Loggers>
...
INFO is not shown in the RollingFile, just ERROR or higher.
The opposite direction works: when the root logger has a more verbose level and the referenced appender raises it:
...
<Loggers>
<Root level="DEBUG">
<AppenderRef ref="RollingFile" level="ERROR" />
</Root>
</Loggers>
...
Additionally, I have the following problem: root level=ERROR is the default setting in a commercial product, so I won't change the level there for support/update reasons.
(Eventually, I will manage the appender refs (add/remove) programmatically.)
Any hint?
Uwe
Java 1.8, WebSphere Liberty 19.0.0.3 running in localhost, log4j v.2.17.1, Maven v.3.5.2
I have read some posts about similar issues, but I have not seen a solution that works for my case.
I cannot get anything written to the log files. Presently, I am focusing on the root logger, as it writes to both the console and to file.
Pom file:
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-api</artifactId>
<version>2.17.1</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<version>2.17.1</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-web</artifactId>
<version>2.17.1</version>
</dependency>
Here is the configuration for the root logger:
<Root>
<!-- change level to EROR -->
<Level value="TRACE"/>
<AppenderRef ref="APS-FILE"/>
<AppenderRef ref="STDOUT"/>
</Root>
The root logger is correctly calling the STDOUT appender and writing to the console. Note the pair of asterisks. Those match the output I am seeing in the console.
<Console name="STDOUT" target="SYSTEM_OUT">
<PatternLayout>
<Pattern>"** %d{DATE} %5p %c{1}:%L - %m%n **"</Pattern>
</PatternLayout>
</Console>
console output:
"** 07 Jan 2022 09:50:42,331 INFO WSWebSsoFilter:44 - Exiting WSWebSsoFilter.doFilter **"
"** 07 Jan 2022 09:50:42,331 INFO WSWebSsoFilter:44 - Exiting WSWebSsoFilter.doFilter **"
This is the appender configuration for root logger to write to file.
<RollingFile name="APS-FILE" fileName="/logs/aps/${company-code}/aps-A.log"
filePattern="logs/aps/${company-code}/aps-1.log">
<PatternLayout>
<Pattern>"%d{DATE} %5p %c{1}:%L - %m%n"</Pattern>
</PatternLayout>
<Policies>
<OnStartupTriggeringPolicy minSize="0"/>
<!--SizeBasedTriggeringPolicy size="10 MB"/-->
<SizeBasedTriggeringPolicy size="1 KB"/>
</Policies>
<DefaultRolloverStrategy max="10"/>
</RollingFile>
The OnStartupTriggeringPolicy is firing, as you can see from this file:
LastWriteTime Length Name
------------------ ------ ---------
1/7/2022 9:49 AM 0 aps-A.log
Any ideas about how to fix this? Thanks.
Try changing the STDOUT target to your rolling file appender:
<Console name="STDOUT" target="**APS-FILE**">
<PatternLayout>
<Pattern>"** %d{DATE} %5p %c{1}:%L - %m%n **"</Pattern>
</PatternLayout>
</Console>
I am trying to write to syslog from Log4j2 and am having problems connecting to syslog-ng. I believe the port is the problem, but I could not find anywhere in the syslog-ng.conf file what the port is.
This is my Log4j2 XML file:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN" packages="com.payon.logging.v2">
<Appenders>
<Console name="Console" target="SYSTEM_OUT">
<PatternLayout pattern="%d{ABSOLUTE} [%x][%X{MASKEDSERVLETPATH}] %5p %c{1}: %k%n"/>
</Console>
<Syslog name="Syslog" host="localhost" port="514" protocol="TCP">
<PatternLayout pattern="%d{ABSOLUTE} [%x][%X{MASKEDSERVLETPATH}] %5p %c{1}: %m%n"/>
</Syslog>
</Appenders>
<Loggers>
<Root level="debug">
<AppenderRef ref="Console"/>
<AppenderRef ref="Syslog"/>
</Root>
</Loggers>
</Configuration>
Syslog-ng is running:
service syslog-ng status
● syslog-ng.service - System Logger Daemon
Loaded: loaded (/lib/systemd/system/syslog-ng.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2021-02-15 10:08:28 CET; 31min ago
Docs: man:syslog-ng(8)
Main PID: 745 (syslog-ng)
Tasks: 1 (limit: 4915)
Memory: 11.9M
CGroup: /system.slice/syslog-ng.service
└─745 /usr/sbin/syslog-ng -F
However, I am getting this error: ERROR TcpSocketManager (TCP:localhost:514) caught exception and will continue: java.io.IOException: Unable to create socket for localhost at port 514 using ip addresses and ports
What am I missing in the configuration? With Log4j 1, I did not have to provide a port.
<Syslog name="Syslog" host="localhost" port="514" protocol="TCP"> requires a network source that needs to be specified in the syslog-ng configuration, for example:
source { network(port(514)); };
Alternatively, default-network-drivers() can be used, which sets good defaults (TCP/UDP 514 and 601):
log {
source { default-network-drivers(); };
# ...
};
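Put together, a minimal syslog-ng fragment matching the Log4j2 appender above could look like this (the destination file path is an assumption):

```
# hypothetical /etc/syslog-ng/syslog-ng.conf fragment
source s_net {
    network(transport("tcp") port(514));
};
destination d_app {
    file("/var/log/app-from-log4j2.log");
};
log {
    source(s_net);
    destination(d_app);
};
```

After editing the config, reload it with syslog-ng-ctl reload (or restart the service).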
I am having trouble configuring the DefaultRolloverStrategy in log4j2.xml to do the following:
Ensure ONLY the last 4 log files are kept and older ones get deleted.
To be clear, the last 4 log files could span a number of days or all be on the same day; that is, they could share the same date or span different dates.
Below is the contents of log4j2.xml
<?xml version="1.0" encoding="UTF-8" ?>
<Configuration>
<Appenders>
<!-- Console Appender -->
<Console name="Console" target="SYSTEM_OUT">
<PatternLayout pattern="%d{DEFAULT} [%t] %-5level %logger{36} - %msg%n" />
</Console>
<!-- Rolling File Appender -->
<RollingFile name="File" fileName="app_log.log"
filePattern="app_log-%d{yyyy-MM-dd}.%i.log">
<PatternLayout pattern="%d{DEFAULT} [%t] %-5level %logger{36} - %msg%n" />
<Policies>
<TimeBasedTriggeringPolicy />
<SizeBasedTriggeringPolicy size="2 KB" />
</Policies>
<DefaultRolloverStrategy>
<Delete basePath="" maxDepth="1">
<IfFileName glob="app_log*.txt">
<IfAny>
<IfAccumulatedFileSize exceeds="5 KB" />
<IfAccumulatedFileCount exceeds="4" />
</IfAny>
</IfFileName>
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
</Appenders>
<Loggers>
<Logger name="com.app.utilities" level="info" additivity="true">
<AppenderRef ref="File" />
</Logger>
<Root level="debug">
<AppenderRef ref="Console" />
</Root>
</Loggers>
</Configuration>
I start my application as shown below:
java -Dlog4j.configurationFile=./app-log4j2.xml -jar application.jar
The log is generated in the same directory the above command is invoked from.
Below is a sample history of log files :-
File Name Date Modified
app_log.log 8/27/2018 2:25 PM
app_log-2018-08-27.2.log 8/27/2018 2:25 PM
app_log-2018-08-27.1.log 8/27/2018 2:11 PM
app_log-2018-08-26.5.log 8/26/2018 2:01 PM
app_log-2018-08-26.4.log 8/26/2018 2:00 PM
app_log-2018-08-26.3.log 8/26/2018 1:58 PM
app_log-2018-08-26.2.log 8/26/2018 1:57 PM
app_log-2018-08-26.1.log 8/26/2018 1:56 PM
It seems the DefaultRolloverStrategy is having no effect.
I presume my configuration is wrong; I would very much appreciate suggestions to correct it.
Also, if the requirement changed such that log files older than 20 days should be deleted, how could that be achieved?
Thank you very much in advance for your help
Pete
Take a look at the following line:
<IfFileName glob="app_log*.txt">
But your log files don't end with .txt! See:
File Name Date Modified
app_log.log 8/27/2018 2:25 PM
You likely need to change it to:
<IfFileName glob="app_log*.log">
That's what really jumps out at me. There might be a few other tweaks to make, but try that first.
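As for the follow-up about removing logs older than 20 days: the Delete action also supports an IfLastModified condition (available since Log4j 2.5); a sketch, assuming the same naming and the working directory as basePath:

```xml
<DefaultRolloverStrategy>
    <!-- delete rolled files older than 20 days that match the .log naming -->
    <Delete basePath="." maxDepth="1">
        <IfFileName glob="app_log*.log">
            <IfLastModified age="20d" />
        </IfFileName>
    </Delete>
</DefaultRolloverStrategy>
```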
I configured Karaf 4.0.5 in order to fix this issue, but log output from my bundles is shown only in karaf console, not in the file. It works in Karaf 4.0.3.
Any ideas why the output from my bundles appears only in the Karaf console? These are the changes I made to configure log4j2:
startup.properties (corresponding jars are in ${karaf.system} folder):
mvn:org.ops4j.pax.logging/pax-logging-api/1.8.5 = 8
# mvn:org.ops4j.pax.logging/pax-logging-service/1.8.5 = 8 (this line is commented out)
mvn:org.ops4j.pax.logging/pax-logging-log4j2/1.8.5 = 8
mvn:com.lmax/disruptor/3.3.2 = 8
org.ops4j.pax.logging.cfg:
org.ops4j.pax.logging.log4j2.config.file = ${karaf.etc}/log4j2.xml
org.ops4j.pax.logging.log4j2.async = true
system.properties
log4j.configurationFile=file:${karaf.etc}/log4j2.xml
org.ops4j.pax.logging.DefaultServiceLog.level = DEBUG
Log4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
log4j2.xml:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="ALL">
<Appenders>
<RollingRandomAccessFile name="oapiserver" fileName="data/log/log4j2.log" filePattern="data/log/oapi-%d_%i.log.gz" immediateFlush="false">
<ThresholdFilter level="DEBUG"/>
<PatternLayout>
<pattern>%level{length=1} %date{MMdd-HHmm:ss,SSS} %logger{1.} %message [%thread]%n</pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="50 MB"/>
</Policies>
<DefaultRolloverStrategy max="10000"/>
</RollingRandomAccessFile>
</Appenders>
<Loggers>
<Root level="DEBUG">
<AppenderRef ref="oapiserver"/>
</Root>
</Loggers>
</Configuration>
For Karaf 4.0.4: see the release notes for 4.0.5 ([KARAF-4278] - clean not working); delete the data directory from the Karaf server after configuring log4j2.
For Karaf 4.0.5: run karaf clean after configuration.