How to configure time-based logs and their compression policy - log4j2

I have a requirement to write log files rolled over by the hour.
Apart from the current log, the rest of the logs should be compressed (gz), like below:
-rw-r--r-- 1 karthick karthick 58546 Aug 31 19:00 console.20200831-19.log.gz
-rw-r--r-- 1 karthick karthick 58546 Aug 31 20:00 console.20200831-20.log.gz
-rw-r--r-- 1 karthick karthick 58546 Aug 31 21:00 console.20200831-21.log
I tried the snippet below, but it didn't work as expected.
<RollingRandomAccessFile
        name="myFile"
        fileName="console.%d{yyyyMMdd-HH}.log"
        filePattern="console.%d{yyyyMMdd-HH}.log.gz"
        append="true" immediateFlush="true">
    <PatternLayout pattern="%d [%t] %-5p %c{1} %m%n"/>
    <Policies>
        <TimeBasedTriggeringPolicy/>
    </Policies>
</RollingRandomAccessFile>

You cannot have %d{...} in fileName; it is only used in filePattern to create the target file name at rollover. You could leave the fileName attribute off, but you should be aware that the file will not be compressed at shutdown, and if the time is different at restart it won't be compressed then either.
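For reference, a minimal sketch of a working variant (the fixed console.log name is an assumption): %d appears only in filePattern, and the .gz suffix makes log4j2 compress each file at rollover.

<RollingRandomAccessFile name="myFile"
        fileName="console.log"
        filePattern="console.%d{yyyyMMdd-HH}.log.gz"
        append="true" immediateFlush="true">
    <PatternLayout pattern="%d [%t] %-5p %c{1} %m%n"/>
    <Policies>
        <!-- roll every hour, aligned to the hour boundary -->
        <TimeBasedTriggeringPolicy interval="1" modulate="true"/>
    </Policies>
</RollingRandomAccessFile>

If the active file itself must carry the hour stamp, drop fileName as described above and accept the shutdown/restart caveat.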

Related

log4j2 rollover strategy not working as expected

I have a Log4j2.xml defined as:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="DEBUG">
    <Properties>
        <Property name="log-path">E:/MLM/MLMDomain/servers/${sys:weblogic.Name}/logs</Property>
    </Properties>
    <Appenders>
        <RollingFile name="RollingFile" fileName="${log-path}/MLMServices.log"
                     filePattern="${log-path}/MLMServices-%d{yyyy-MM-dd}-%i.log">
            <PatternLayout>
                <pattern>%d{dd/MMM/yyyy HH:mm:ss.SSS} [%-5level] [%c{1}] %m%n</pattern>
            </PatternLayout>
            <Policies>
                <SizeBasedTriggeringPolicy size="1 MB"/>
            </Policies>
            <DefaultRolloverStrategy max="30"/>
        </RollingFile>
    </Appenders>
    <Loggers>
        <Logger name="root" level="debug" additivity="false">
            <appender-ref ref="RollingFile" level="debug"/>
        </Logger>
        <Root level="debug" additivity="false">
            <AppenderRef ref="RollingFile"/>
        </Root>
    </Loggers>
</Configuration>
From what I understand, my log file should roll over to a new one when it reaches 1 MB, and at most 30 rolled-over files should be kept. However, if you look at my logs below, there are over 40 of them, and the latest ones are all close to 30 MB. The current log file MLMServices.log has entries from 13 Apr 2016 till now. In fact, the last few log files, MLMServices-2016-05-24-4.log, MLMServices-2016-05-24-3.log, etc., also have entries from 13 Apr 2016. When a new log file is created, it duplicates the entries from the previous one and then appends more entries, so each new log file is slightly bigger than the previous one.
04/28/2016 04:26 PM 1,050,290 MLMServices-2016-04-28-1.log
04/28/2016 06:02 PM 1,188,994 MLMServices-2016-04-28-2.log
04/29/2016 12:11 PM 1,315,487 MLMServices-2016-04-29-1.log
04/29/2016 12:21 PM 1,364,634 MLMServices-2016-04-29-2.log
04/29/2016 12:30 PM 1,413,781 MLMServices-2016-04-29-3.log
04/29/2016 05:02 PM 1,472,373 MLMServices-2016-04-29-4.log
05/03/2016 04:16 PM 2,521,056 MLMServices-2016-05-03-1.log
05/04/2016 04:35 PM 3,379,593 MLMServices-2016-05-04-1.log
05/05/2016 01:47 PM 3,715,698 MLMServices-2016-05-05-1.log
05/05/2016 02:47 PM 3,858,833 MLMServices-2016-05-05-2.log
05/06/2016 02:13 PM 4,908,446 MLMServices-2016-05-06-1.log
05/06/2016 02:46 PM 4,927,119 MLMServices-2016-05-06-2.log
05/06/2016 03:04 PM 5,068,610 MLMServices-2016-05-06-3.log
05/06/2016 05:07 PM 5,267,743 MLMServices-2016-05-06-4.log
05/10/2016 03:16 PM 8,598,426 MLMServices-2016-05-10-1.log
05/10/2016 03:16 PM 11,280,054 MLMServices-2016-05-10-2.log
05/10/2016 03:16 PM 12,328,667 MLMServices-2016-05-10-3.log
05/10/2016 03:16 PM 13,377,298 MLMServices-2016-05-10-4.log
05/10/2016 03:16 PM 14,425,881 MLMServices-2016-05-10-5.log
05/10/2016 03:16 PM 15,474,464 MLMServices-2016-05-10-6.log
05/10/2016 03:16 PM 16,523,059 MLMServices-2016-05-10-7.log
05/10/2016 03:16 PM 17,571,640 MLMServices-2016-05-10-8.log
05/10/2016 03:53 PM 18,620,566 MLMServices-2016-05-10-9.log
05/11/2016 02:37 PM 19,002,926 MLMServices-2016-05-11-1.log
05/11/2016 02:44 PM 19,088,104 MLMServices-2016-05-11-2.log
05/11/2016 03:50 PM 19,375,771 MLMServices-2016-05-11-3.log
05/14/2016 01:51 PM 20,424,415 MLMServices-2016-05-14-1.log
05/16/2016 12:52 PM 21,473,018 MLMServices-2016-05-16-1.log
05/17/2016 07:01 PM 22,521,671 MLMServices-2016-05-17-1.log
05/18/2016 09:57 AM 23,570,365 MLMServices-2016-05-18-1.log
05/18/2016 02:03 PM 24,619,048 MLMServices-2016-05-18-2.log
05/18/2016 08:05 PM 25,667,655 MLMServices-2016-05-18-3.log
05/19/2016 09:18 AM 25,786,502 MLMServices-2016-05-19-1.log
05/19/2016 02:00 PM 26,259,036 MLMServices-2016-05-19-2.log
05/19/2016 05:52 PM 26,593,795 MLMServices-2016-05-19-3.log
05/19/2016 06:26 PM 26,671,744 MLMServices-2016-05-19-4.log
05/20/2016 03:30 PM 27,191,829 MLMServices-2016-05-20-1.log
05/20/2016 05:27 PM 28,240,467 MLMServices-2016-05-20-2.log
05/23/2016 06:10 PM 29,204,271 MLMServices-2016-05-23-1.log
05/24/2016 09:55 AM 29,338,523 MLMServices-2016-05-24-1.log
05/24/2016 10:31 AM 29,441,164 MLMServices-2016-05-24-2.log
05/24/2016 12:04 PM 29,556,676 MLMServices-2016-05-24-3.log
05/24/2016 12:05 PM 29,577,736 MLMServices-2016-05-24-4.log
05/20/2016 05:27 PM 29,734,763 MLMServices.log
This is not the behavior that I want. I just want each log file to be limited to 1MB, and I want to keep a maximum of 30 log files. Where in the configuration did I go wrong?
Thanks in advance
Edit: I have actually specified log4j2 in weblogic.xml:
<?xml version="1.0" encoding="UTF-8"?>
<wls:weblogic-web-app xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-web-app"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-web-app
            http://xmlns.oracle.com/weblogic/weblogic-web-app/1.7/weblogic-web-app.xsd">
    <wls:context-root>XXXXXX</wls:context-root>
    <wls:library-ref>
        <wls:library-name>jax-rs</wls:library-name>
        <wls:specification-version>2.0</wls:specification-version>
        <wls:exact-match>false</wls:exact-match>
    </wls:library-ref>
    <wls:container-descriptor>
        <wls:prefer-application-packages>
            <wls:package-name>org.slf4j</wls:package-name>
            <wls:package-name>log4j</wls:package-name>
        </wls:prefer-application-packages>
    </wls:container-descriptor>
</wls:weblogic-web-app>
You should check to see if something besides your application is accessing MLMServices.log. On Windows, file renames and deletes will fail if an exclusive lock cannot be obtained.
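A quick way to check (assuming you can run the Sysinternals Handle tool on the server) is to list every process holding the file open; backup agents, indexing services, and virus scanners are the usual suspects:

C:\> handle.exe MLMServices.log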

Log4j 2 rollover not happening at expected time

I am using the log4j2 RollingFile appender for logging. Traffic for this logger is quite sparse, but I want it to roll the file over at the appropriate time. How can I do that?
This is the output of ls -lrth on the log files:
-rw-r--r-- 1 root root 136 Apr 27 15:51 logfile-04-20150427-07-00.log.gz
-rw-r--r-- 1 root root 133 Apr 27 23:18 logfile-04-20150427-15-00.log.gz
-rw-r--r-- 1 root root 151 Apr 28 04:40 logfile-04-20150427-23-00.log.gz
-rw-r--r-- 1 root root 161 Apr 28 05:14 logfile-04-20150428-04-00.log.gz
-rw-r--r-- 1 root root 134 Apr 28 06:45 logfile-04-20150428-05-00.log.gz
-rw-r--r-- 1 root root 125 Apr 28 08:46 logfile-04-20150428-06-00.log.gz
-rw-r--r-- 1 root root 191 Apr 28 09:27 logfile-04-20150428-08-00.log.gz
-rw-r--r-- 1 root root 281 Apr 28 10:43 logfile-04-20150428-09-00.log.gz
Clearly the logger is not rotating the log file at the right time: compare each file's modification time with the timestamp in its name.
The following is my log4j2.xml configuration for the logger:
<RollingFile name="userlogfileAppender"
        fileName="${sys:catalina.home}/webapps/miscLogs/uData/logfile/logfile.log"
        filePattern="${sys:catalina.home}/webapps/miscLogs/uData/logfile/logfile-${logfileId}-%d{yyyyMMdd-HH}-00.log.gz"
        immediateFlush="true"
        bufferedIO="false">
    <PatternLayout>
    </PatternLayout>
    <Policies>
        <TimeBasedTriggeringPolicy interval="1" modulate="true"/>
    </Policies>
</RollingFile>

<logger additivity="false" name="userlogfileLogger" level="debug">
    <AppenderRef ref="userlogfileAppender"/>
</logger>
Rollover is not driven by a timer in log4j2, but by log events. The rollover appender will compare the timestamp of the log event with the scheduled rollover time, and if the rollover time has been exceeded, then the file is rolled over.
This means that if traffic is sparse, there will often not be a log event that triggers a rollover at the scheduled rollover time. Rollover will occur at the next event after the scheduled rollover time, which may be hours later.
Log4j2 currently does not have a timer mechanism to force rollover. Depending on how mission-critical this is, you can create a timer in your application that logs an event once an hour to force a rollover.
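A minimal sketch of that workaround (class name and log message are made up; it assumes Java 8+ and the log4j2 API on the classpath):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class RolloverHeartbeat {
    private static final Logger LOG = LogManager.getLogger("userlogfileLogger");

    public static void start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // One event per hour: the appender compares this event's timestamp
        // with the scheduled rollover time and rolls the file if it has passed.
        scheduler.scheduleAtFixedRate(() -> LOG.debug("rollover heartbeat"),
                1, 1, TimeUnit.HOURS);
    }
}

The heartbeat itself lands in the hourly file, so it costs one extra line per file in exchange for the rollover check actually running every hour.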

Jenkins Valgrind plugin appears to sum errors across tests

I am really happy to see that a Valgrind plugin exists for Jenkins. I use it for C/C++ code at work.
I have set it up in Jenkins (Ubuntu 14.04, Valgrind plugin version 0.22) to dump XML files, and I can see that I do get my memcheck files out in the xml directory:
-rw------- 1 jenkins jenkins 1379 Oct 25 18:21 main.18996.memcheck
-rw------- 1 jenkins jenkins 1379 Oct 25 18:22 main.19100.memcheck
-rw------- 1 jenkins jenkins 2452 Oct 25 18:27 main.19489.memcheck
-rw------- 1 jenkins jenkins 2453 Oct 25 18:28 main.19605.memcheck
-rw------- 1 jenkins jenkins 2453 Oct 25 18:28 main.19692.memcheck
-rw------- 1 jenkins jenkins 2453 Oct 25 18:28 main.19774.memcheck
-rw------- 1 jenkins jenkins 1379 Oct 25 18:29 main.19963.memcheck
I can see that the memcheck files look fine, with some "dirty underwear" such as:
<error>
    <unique>0xb</unique>
    <tid>1</tid>
    <kind>InvalidWrite</kind>
    <what>Invalid write of size 4</what>
    <stack>
        <frame>
            <ip>0x80483EB</ip>
            <obj>/home/jenkins/workspace/DemoValgrind/main</obj>
            <fn>main</fn>
            <dir>/home/jenkins/workspace/DemoValgrind</dir>
            <file>main.c</file>
            <line>12</line>
        </frame>
    </stack>
    <auxwhat>Address 0x41ae21c is not stack'd, malloc'd or (recently) free'd</auxwhat>
</error>
My problem is that the Valgrind plugin reports the sum of all errors across all of the main.*.memcheck files.
I expected a view more like this one:
https://wiki.jenkins-ci.org/download/attachments/60918012/valgrind-trend-graph.jpg?version=1&modificationDate=1336573302000
where the number of errors goes up and down from build to build.
I must be configuring the "Publish Valgrind Results" step wrongly.
Is there a configuration that makes the trend graph (the URL just above) show the number of errors per build, i.e. not accumulated?
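One workaround, sketched below, assumes the plugin simply sums every file matched by the publish pattern: clear the previous build's reports at the start of each build so only the current run's files get published. The xml/ directory and ./main invocation are assumptions based on the listing above.

# "Execute shell" build step, before the tests run
rm -f "$WORKSPACE"/xml/*.memcheck
valgrind --xml=yes --xml-file=xml/main.%p.memcheck ./main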

I'm having a huge Rails log, is it normal?

I have a huge Rails application log. Is that normal?
768 MB for the production log!
root@demo3:/home/canvas/canvas/log# ls -lh
total 960M
-rw-r--r-- 1 canvas canvas 192M Sep 28 12:37 delayed_job.log
-rw-rw-r-- 1 canvas canvas 265 Sep 22 08:57 development.log
-rw-r--r-- 1 canvas canvas 910K Sep 28 12:36 newrelic_agent.log
-rw-r--r-- 1 canvas canvas 768M Sep 28 12:37 production.log
-rw-r--r-- 1 root root 26K Sep 28 11:00 super_delayed_job_err.log
-rw-r--r-- 1 root root 113K Sep 22 14:07 super_delayed_job.log
Snippet from the log file:
[- 1e1f92f0-293e-0132-2906-00163c067c2e] Cache hit: _account_lookup2/1 ({})
[- 1e1f92f0-293e-0132-2906-00163c067c2e] Cache hit: settings_for_plugin2/sessions ({})
[- 208bd370-293e-0132-2906-00163c067c2e]
Processing UsersController#user_dashboard (for 54.248.250.232 at 2014-09-28 14:06:04) [GET]
[- 208bd370-293e-0132-2906-00163c067c2e] Parameters: {"controller"=>"users", "action"=>"user_dashboard"}
[- 208bd370-293e-0132-2906-00163c067c2e] Redirected to http://subdomain.example.com/login
[- 208bd370-293e-0132-2906-00163c067c2e] Filter chain halted as [:require_user] rendered_or_redirected.
[- 208bd370-293e-0132-2906-00163c067c2e] Completed in 3ms (DB: 0) | 302 Found [http://demo3.iqraalms.com/]
[- 208bd370-293e-0132-2906-00163c067c2e] [STAT] 903612 903612 0 903612 0.010000000000000231 0
Any idea how to optimise it?
You could raise the log level (warn, error, or fatal) in your config file to get less data, as described in the Rails guide on debugging. Or, as sjaime pointed out in his comment, logrotate is a utility that will solve this problem for you (compress your log every day/week/month or when it reaches a certain size; delete, email, or keep old archives; ...).
One thing that will blow up your log tremendously is asset errors (missing fonts are a classic example). Make sure you have none of these. Other than that, with log level info and a couple of users on your site, your log will grow quickly.
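As a sketch (paths and thresholds here are assumptions, not taken from your setup), both fixes together look like this:

# config/environments/production.rb -- log only warnings and above
config.log_level = :warn

# /etc/logrotate.d/canvas -- rotate when a log reaches 100 MB, keep 4 compressed archives
/home/canvas/canvas/log/*.log {
    size 100M
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}

copytruncate is used so the Rails and delayed_job processes do not need to be restarted or signalled after rotation, at the cost of possibly losing a few lines written during truncation.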

Why can't I use a <date> selector on an Ant <dirset>?

I have three files and three directories in a directory, with varying dates.
$ cd mydir
$ ls -ltr
-rw-rw-r-- 1 skiphoppy users 0 Nov 14 00:00 file.old.20121114
drwxrwxrwx 2 skiphoppy users 4096 Nov 14 00:00 dir.old.20121114
drwxrwxrwx 2 skiphoppy users 4096 Dec 5 12:05 dir.old.20121205
drwxrwxrwx 2 skiphoppy users 4096 Dec 5 12:05 dir
-rw-rw-r-- 1 skiphoppy users 0 Dec 5 12:16 file.old.20121205
-rw-rw-r-- 1 skiphoppy users 0 Dec 5 12:16 file
I want to build a dirset that includes all directories older than 2012-12-01. If I am reading the docs right, the <date> selector can be used to limit what is selected, but it appears this doesn't work for a dirset, even though the dirset documentation says you can use nested patternsets and selectors.
If I use the date selector on a fileset, I get just the one old file that I would expect; but with the same syntax on a dirset, I get all the directories:
<fileset id="old.files" dir="mydir">
    <date datetime="12/01/2012 12:00 AM" when="before"/>
</fileset>
<echo message="Files: ${toString:old.files}"/>

<dirset id="old.dirs" dir="mydir">
    <date datetime="12/01/2012 12:00 AM" when="before"/>
</dirset>
<echo message="Dirs: ${toString:old.dirs}"/>
Output:
[echo] Files: file.old.20121114
[echo] Dirs: ;dir;dir.old.20121114;dir.old.20121205
What is going on here that the date selector does not work?
You probably need the checkdirs attribute set on the date selector. It defaults to false, meaning directories are not checked against the date at all, i.e. every directory is selected:
<date datetime="12/01/2012 12:00 AM" when="before" checkdirs="true" />
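With checkdirs in place, the same build file fragment should report only the stale directory (expected output, based on the listing above):

<dirset id="old.dirs" dir="mydir">
    <date datetime="12/01/2012 12:00 AM" when="before" checkdirs="true"/>
</dirset>
<echo message="Dirs: ${toString:old.dirs}"/>

[echo] Dirs: dir.old.20121114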
