Logging my application's classes at DEBUG level, all others at WARN - Grails

I've configured my Grails app to read the log4j config from /conf/log4j.properties file instead of the more usual DSL in Config.groovy, by adding the following Spring bean:
log4jConfigurer(MethodInvokingFactoryBean) {
    targetClass = "org.springframework.util.Log4jConfigurer"
    targetMethod = "initLogging"
    arguments = ["/conf/log4j.properties", 1000 * 60] // 2nd arg is refresh interval in ms
}
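A minimal sketch of where that bean definition would live, assuming the conventional grails-app/conf/spring/resources.groovy (the file isn't named above):
import org.springframework.beans.factory.config.MethodInvokingFactoryBean

beans = {
    // read the log4j config from a properties file and re-check it once a minute
    log4jConfigurer(MethodInvokingFactoryBean) {
        targetClass = "org.springframework.util.Log4jConfigurer"
        targetMethod = "initLogging"
        arguments = ["/conf/log4j.properties", 1000 * 60]
    }
}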
My goal is to log all the classes in the app itself at the DEBUG level, and all others at the WARN level. /conf/log4j.properties contains the following:
log4j.logger.com.myapp=DEBUG, CONSOLE
log4j.logger.grails.app=DEBUG, CONSOLE
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%-5p myapp %c{3} %d{HH:mm:ss,SSS} %t : %m%n
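For the WARN default on everything else to take effect, the file would presumably also need a root logger entry along these lines (an assumption, since it isn't shown above):
log4j.rootLogger=WARN, CONSOLE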
It seems the namespace com.myapp is used for regular classes in my app (e.g. those under src/groovy), whereas the namespace grails.app is used for Grails artefacts (controllers, services, taglibs, etc.). However the grails.app namespace also includes artefacts from plugins, which I don't want to log at the DEBUG level.
Is there a way to enable DEBUG logging only for the classes in my application?

Append your package to grails.app.controllers to get just your application's classes:
info 'grails.app.controllers.mypackage'

Adding the following solved the problem:
log4j.logger.grails.app.conf.com.myapp=DEBUG, CONSOLE
log4j.logger.grails.app.filters.com.myapp=DEBUG, CONSOLE
log4j.logger.grails.app.taglib.com.myapp=DEBUG, CONSOLE
log4j.logger.grails.app.services.com.myapp=DEBUG, CONSOLE
log4j.logger.grails.app.controllers.com.myapp=DEBUG, CONSOLE
log4j.logger.grails.app.domain.com.myapp=DEBUG, CONSOLE
Presumably if a Grails app has other types of artefacts that you want to log at the DEBUG level, an additional logger should be configured for each one.
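For example, if the app also used the Quartz plugin, its job artefacts would presumably need their own entry (hypothetical; the artefact type name depends on the plugin):
log4j.logger.grails.app.jobs.com.myapp=DEBUG, CONSOLE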

Related

How to override logging in dataflow with my logback.xml file?

We are trying to use the logback.xml that we use in GCP Cloud Run, which has amazing filtering features. Our logback.xml contains this for Cloud Run:
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
        <layout class="com.orderlyhealth.api.logging.logback.GCPCloudLoggingJSONLayout">
            <pattern>${CONSOLE_PATTERN}</pattern>
        </layout>
    </encoder>
</appender>
Our GCPCloudLoggingJSONLayout does a great job of setting all the fields we need, like clientId, customerRequestId, etc., and we can filter across many microservices on one customer or one customer request. We lose this in Dataflow at the moment. We tried adding logback.xml to src/main/resources, and deploying the project seems to use it in the shell, like so:
{"message":"[main][-][:] o.a.b.r.d.DataflowRunner Template successfully created.\n",
"logger":"org.apache.beam.runners.dataflow.DataflowRunner",
"transactionId":null,"socket":null,"clntSocket":null,
"version":null,
"timestamp":{"seconds":1619694798,"nanos":4000000},
"thread":"main",
"severity":"INFO",
"instanceId":null,
"headers":{},
"messageInfo":{"message":"Message short enough. Displayed top level"}
}
Currently we see this instead, which is not nearly as useful for tracing the customer request through systems.
Thanks for any ideas on modifying Dataflow logging.
I don't think you can change how Dataflow logs to Cloud logging.
Instead, you can change how/what you log and let Dataflow pass them through to cloud logging. See Logging pipeline messages.
Or you can use cloud logging client libraries in your pipeline directly: https://cloud.google.com/logging/docs/reference/libraries.
Please take a look at How to override Google DataFlow logging with logback? for the latest version of this answer.
I copied the current answer from there to make it easier for folks who want to look:
Dataflow relies on java.util.logging (aka JUL) as the logging backend for SLF4J and adds various bridges to ensure that logs from other libraries are output as well. With this kind of setup, we are limited to adding any additional details to the log message itself.
This also applies to any runner executing a portable job, since the container with the SDK harness has a similar logging configuration, for example Dataflow Runner V2.
To do this we want to create a custom formatter to apply to the root JUL logger. For example:
import java.util.logging.LogRecord;
import java.util.logging.SimpleFormatter;

public class CustomFormatter extends SimpleFormatter {
    @Override
    public String formatMessage(LogRecord record) {
        // implement whatever logic is needed to add details to the message portion of the log statement
        return super.formatMessage(record);
    }
}
And then during start-up of the worker we need to update the root logger to use this formatter. We can achieve this using a JvmInitializer, implementing the beforeProcessing method like so:
import com.google.auto.service.AutoService;
import java.util.logging.Handler;
import java.util.logging.LogManager;
import java.util.logging.Logger;
import org.apache.beam.sdk.harness.JvmInitializer;
import org.apache.beam.sdk.options.PipelineOptions;

@AutoService(JvmInitializer.class)
public class LoggerInitializer implements JvmInitializer {
    @Override
    public void beforeProcessing(PipelineOptions options) {
        // swap the formatter on every handler attached to the root JUL logger
        LogManager logManager = LogManager.getLogManager();
        Logger rootLogger = logManager.getLogger("");
        for (Handler handler : rootLogger.getHandlers()) {
            handler.setFormatter(new CustomFormatter());
        }
    }
}
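For reference, @AutoService comes from Google's AutoService library; its annotation processor generates the META-INF/services entry that lets the worker discover the initializer. Assuming a Gradle build (the version shown is illustrative):
// build.gradle
annotationProcessor 'com.google.auto.service:auto-service:1.1.1'
compileOnly 'com.google.auto.service:auto-service-annotations:1.1.1'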

Log4j2 (and SLF4J 2.0.0-alpha1) and JsonTemplateLayout: how to serialize Messages as JSON

I'm exploring Log4j 2.14.0 and SLF4J 2.0 and trying to generate structured messages.
I've got my Appender set up with a slightly modified LogstashJsonEventLayoutV1.json,
<JsonTemplateLayout eventTemplateUri="classpath:LogstashJsonEventLayoutV1-test.json" properties="true" />
where I've removed the timestamp and hostname (I'm doing this as part of a unit test) and modified the config for "message" like so:
"message": {
"$resolver": "message",
"fallbackKey": "formattedMessage"}
When I log something:
log4jLogger.atInfo().log(new MapMessage(Map.of("hello", "world")));
It's obviously generating JSONified log messages:
{"#version":1,"message":{"hello":"world"},"thread_name":"Test worker","level":"INFO","logger_name":"java.lang.Integer"}
In production my shop generally uses Log4J via SLF4J. I'd be willing to use the 2.0.0-alpha1 release of SLF4J to achieve this goal. How would I achieve the same thing via SLF4J's fluent API via addKeyValue?
logger.atDebug().addKeyValue("oldT", oldT).addKeyValue("newT", newT).log("Temperature changed.");
At the end of the day I just wrapped log4j; for this use case, there was no manna to be had in wrapping SLF4J when I could just target log4j.
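A minimal sketch of what that wrapper could look like (StructuredLog is a hypothetical name; StringMapMessage is the String-typed MapMessage variant from the Log4j2 API):
import java.util.Map;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.StringMapMessage;

public final class StructuredLog {
    private StructuredLog() {}

    // Log the given key/value pairs as a structured message;
    // JsonTemplateLayout serializes the MapMessage as a JSON object.
    public static void info(Class<?> source, Map<String, String> fields) {
        Logger logger = LogManager.getLogger(source);
        logger.atInfo().log(new StringMapMessage(fields));
    }
}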

How to Disable Logging from Different Modules using Log4J Such as Preventing Logs for 'Resources'

I am working on a Grails application with the resources plugin and Log4j for logging. I want to disable logging from the resources plugin, as it's not useful for me and just fills my log file with information such as:
File [file] changed. Applying changes to application.
Scheduling reload of resource files due to change of file [file]
Performing a changed file reload
Loading declared resources...
Finished changed file reload
I don't want this information in my log files; it's not useful to me and makes it harder to read the other useful info.
Is there any way to disable logging from specific modules such as resources?
I have tried the following solution:
off 'grails.app.services.org.grails.plugin.resource',
    'grails.app.taglib.org.grails.plugin.resource',
    'grails.app.resourceMappers.org.grails.plugin.resource'
But this didn't help me.
You have tried various package/class name combinations without success.
Since your sample log outputs do not include the class name, you don't know what you should target. If you modify the format used for log messages, you can see the exact name you need to block.
An example:
// time, log level, thread, class, message, newline
console name: 'myAppender', layout: pattern(conversionPattern: '%d | %p | %t | %c | %m%n')
root {
    info 'myAppender'
}
Once you do that, you should see output like this:
2015-02-17 10:53:06,536 | INFO | localhost-startStop-1 | org.hibernate.tool.hbm2ddl.SchemaUpdate | schema update complete
To suppress that line of output, you'd target
org.hibernate.tool.hbm2ddl.SchemaUpdate
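Once the exact category is visible, it can be silenced with the same DSL from the question, e.g. (using the Hibernate logger from the sample line above):
off 'org.hibernate.tool.hbm2ddl.SchemaUpdate'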

Sending log messages from Grails BootStrap.groovy and plugin descriptors

When I was introducing the Fixture module into my Grails application, I had trouble finding out how to send log messages from the application's main BootStrap.groovy and from the initialization code of my plugins.
I use the following log4j config in Config.groovy:
log4j = {
    appenders {
        console name: 'consoleAppender', layout: pattern(conversionPattern: '%d{dd-MM-yyyy HH:mm:ss,SSS} %5p %c{2} - %m%n')
    }
    root {
        // define the root logger's level and appenders; these are inherited by all other loggers
        error 'consoleAppender'
    }
    // change the default log level for classes in our app to DEBUG
    def packageRoot = 'com.example.myapp'
    def appNamespaces = [
        packageRoot,
        "grails.app.conf.$packageRoot",
        "grails.app.filters.$packageRoot",
        "grails.app.taglib.$packageRoot",
        "grails.app.services.$packageRoot",
        "grails.app.controllers.$packageRoot",
        "grails.app.domain.$packageRoot",
        "grails.app.conf.BootStrap"
    ]
    // statements from the app should be logged at DEBUG level
    appNamespaces.each { debug it }
}
The only change you should need to make is to set packageRoot to the root package of your app. The name/namespace of the logger that is assigned to BootStrap.groovy is grails.app.conf.BootStrap, so including this in appNamespaces ensures that it will log at the default level for the application (debug in the example above).
You don't have to do anything to get a logger instance in BootStrap.groovy; one is already provided by Grails under the name log, e.g.
class BootStrap {
    def init = { servletContext ->
        log.debug 'hello bootstrap'
    }
}
In Grails 2.2.4:
The "log" logger is injected into the application's main BootStrap.groovy and into each plugin's descriptor (e.g. FooGrailsPlugin.groovy).
The logger in the app's BootStrap.groovy has a name like "grails.app.BootStrap", so enabling the "grails.app" logger in the configuration will let the messages sent through this logger be displayed.
The logger in a plugin descriptor has no package prefix and is named exactly like the descriptor class without the .groovy extension, e.g. "FooGrailsPlugin", so it is not so easy to enable log messages through the default injected logger. It doesn't help to add a package declaration at the top of the plugin descriptor; it is not used in the composition of the logger's name.
Naturally, you can manually define a logger in the plugin descriptor (using a package name according to your needs) like this:
import org.apache.commons.logging.LogFactory

private static final log = LogFactory.getLog("yourapp.foo.FooGrailsPlugin")
After this, you can enable the "yourapp.foo" logger in the application and you will see the messages sent through the plugin descriptor's manually defined logger.
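A minimal sketch of that, using the Config.groovy DSL:
log4j = {
    // show everything logged through the manually defined plugin logger
    debug 'yourapp.foo'
}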

Grails conversionPattern change at runtime

Using a standard log4j configuration for my Grails app, with a custom conversion pattern like this:
log4j = {
    appenders {
        console name: 'stdout', layout: pattern(conversionPattern: '[%-7p][%d{dd/MM/yyyy HH:mm:ss,SSS}] %C %m%n')
    }
    root {
        warn 'stdout'
        additivity = true
    }
    error 'org.grails.plugins.springsecurity'
    error 'org.codehaus.groovy.grails.web.servlet' // controllers
    // ...
    warn 'org.mortbay.log',
         'org.apache.tomcat',
         'org.apache.tomcat.util.digester'
    debug 'grails.app'
}
My Grails app starts as expected, with the correct conversionPattern, but only for the first few log lines; it then falls back to the default Grails conversionPattern.
Any idea?
I don't code in Grails but I do know log4j very well.
On the surface, it seems you need to inspect those lines that aren't formatted as expected. Chances are, they are not caught by the logger that uses your stdout appender.
From what I can piece together, it looks like your warn logger may be the only one that uses your stdout appender, meaning anything other than warnings would not format as expected. Further, it's also possible that loggers present in your libraries catch some log statements but not others.
Basically, the best way to solve this is to modify your pattern to give you good information on your loggers. I would suggest replacing the %C pattern, which is very slow to execute, with %c to see the exact category that was used by the logger (a sketch follows the checklist below). Then look at the differences between what's properly formatted and everything else.
Do they have a common level (info, debug, error, etc)?
Do they stem from the same package?
Are they 3rd party library calls?
Figure out what they have in common and you will find your error.
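For example, a diagnostic variant of the pattern from the question, with %C swapped for %c as suggested:
console name: 'stdout', layout: pattern(conversionPattern: '[%-7p][%d{dd/MM/yyyy HH:mm:ss,SSS}] %c %m%n')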
