Why does Logback/Slf4j log wrong file and line numbers in Groovy? - grails

I have noticed that Logback/Slf4j sometimes logs the wrong file and line numbers in Groovy.
I have a lot of logs with bad file/line numbers in my Grails application (more than 50% of all logs).
Is there any workaround?
Simplest example:
logback.groovy
appender("STDOUT", ConsoleAppender) {
    encoder(PatternLayoutEncoder) {
        pattern = '%d{HH:mm:ss.SSS} [%-5level] %msg \\(%file:%line\\)%n'
    }
}
root(DEBUG, ["STDOUT"])
Test.groovy
@Slf4j
class Test {
    static void main(String[] args) {
        log.info("${'Wrong file and line number!'}")
    }
}
Output
23:24:23.894 [INFO ] 0 Wrong file and line number! (NativeMethodAccessorImpl.java:-2)
Example of my grails log output with problem
10:16:44.881 [DEBUG] [org.grails.plugin.resource.ResourceProcessor] -------------------------------------------------- (null:-1)

The problem occurs when a GString is logged (a plain String logs the correct line number). I have no clue why it works like this, but I have found two workarounds: either convert the GString to a String by calling toString() (ugly), or use the template format with parameters:
import groovy.util.logging.Slf4j

@Slf4j
class Test {
    static void main(String[] args) {
        def x = 1
        log.info("Does not work: ${x}")
        log.info("Works ${x}".toString())
        log.info("Works {}", x)
    }
}
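The misreported location can be reproduced without Groovy or Logback at all. Caller data for %file/%line is resolved by walking the stack trace, and Groovy's dynamic dispatch of a GString argument inserts JDK reflection frames between your code and the logger, so a reflection-internal frame gets reported (hence NativeMethodAccessorImpl.java:-2 above). Below is a small Java sketch of that mechanism; it is not Logback's actual code, and all names are made up for the demo:

```java
import java.lang.reflect.Method;

public class CallerDemo {
    // Mimics how a logging framework resolves %file/%line: take a stack
    // trace and report the frame directly above the logging call site.
    static StackTraceElement caller() {
        StackTraceElement[] st = new Throwable().getStackTrace();
        return st[1]; // st[0] is caller() itself; st[1] is its direct caller
    }

    // Plain call: the reported frame is directFrame() in this class.
    public static StackTraceElement directFrame() {
        return caller();
    }

    // Reflective call, similar to the dynamic dispatch Groovy performs when
    // a GString is passed: the frame directly above caller() is now a JDK
    // reflection internal (NativeMethodAccessorImpl or similar), which is
    // why the logger prints its file/line instead of yours.
    public static StackTraceElement reflectiveFrame() {
        try {
            Method m = CallerDemo.class.getDeclaredMethod("caller");
            return (StackTraceElement) m.invoke(null);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(directFrame());     // a frame in CallerDemo
        System.out.println(reflectiveFrame()); // a reflection-internal frame
    }
}
```

Both workarounds above (toString() and the {} template) ensure the argument handed to the logger is a plain String, so no reflective dispatch frames end up between the call site and the logger.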

Related

Jenkins scripted Pipeline: How to apply @NonCPS annotation in this specific case

I am working on a scripted Jenkins-Pipeline that needs to write a String with a certain encoding to a file as in the following example:
class Logger implements Closeable {
    private final PrintWriter writer
    [...]
    Logger() {
        FileWriter fw = new FileWriter(file, true)
        BufferedWriter bw = new BufferedWriter(fw)
        this.writer = new PrintWriter(bw)
    }
    def log(String msg) {
        try {
            writer.println(msg)
            [...]
        } catch (e) {
            [...]
        }
    }
}
The above code doesn't work, since PrintWriter is not serializable, so I know I have to prevent some of the code from being CPS-transformed. I don't have an idea of how to do that, though, since as far as I know the @NonCPS annotation can only be applied to methods.
I know that one solution would be to move all output-related code into log(msg) and annotate that method, but this way I would have to create a new writer every time the method gets called.
Does someone have an idea on how I could fix my code instead?
Thanks in advance!
Here is a way to make this work, using a log function that is defined in a shared library in vars/log.groovy:
import java.io.FileWriter
import java.io.BufferedWriter
import java.io.PrintWriter

// The annotated variable will become a private field of the script class.
@groovy.transform.Field
PrintWriter writer = null

void call( String msg ) {
    if( ! writer ) {
        def fw = new FileWriter(file, true)
        def bw = new BufferedWriter(fw)
        writer = new PrintWriter(bw)
    }
    try {
        writer.println(msg)
        [...]
    } catch (e) {
        [...]
    }
}
After all, scripts in the vars folder are instantiated as singleton classes, which is perfectly suited for a logger. This works even without the @NonCPS annotation.
Usage in pipeline is as simple as:
log 'some message'
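The core idea the answer relies on, creating the writer lazily on the first call and reusing it afterwards, can be sketched outside Jenkins in plain Java. This is a minimal illustration with made-up names; a Supplier stands in for the FileWriter/BufferedWriter chain so the sketch stays self-contained:

```java
import java.io.PrintWriter;
import java.io.Writer;
import java.util.function.Supplier;

// Minimal sketch of the lazy single-writer pattern: the writer is
// built on the first log() call and reused on every later call,
// instead of being recreated each time.
public class LazyLogger {
    private final Supplier<Writer> writerFactory;
    private PrintWriter writer;
    private int creations = 0; // exposed for the demo: how many writers were built

    public LazyLogger(Supplier<Writer> writerFactory) {
        this.writerFactory = writerFactory;
    }

    public void log(String msg) {
        if (writer == null) {  // first call only
            writer = new PrintWriter(writerFactory.get());
            creations++;
        }
        writer.println(msg);
        writer.flush(); // flush so lines are visible even if never closed
    }

    public int creations() { return creations; }
}
```

In the shared-library version the same effect comes from the @Field-annotated writer surviving between call() invocations of the singleton script.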

Getting closed before endTest call in Selenium using Extent Reports

BaseTest.java:
private static ReportService reportService; // Calling report service interface
@BeforeSuite:
reportService = new ExtentReportService(getConfig()); // New instance of ExtentReportService.
@BeforeMethod:
reportService.startTest(testname); // Starting the test and passing the name and description of the test.
@AfterMethod:
reportService.endTest(); // Ending the test
@AfterSuite:
reportService.close(); // Closing the test
ExtentReportService.java: // Contains different extent API methods. (These are designed to be generic.)
protected static ExtentReports extent; // static instance of ExtentReports
protected static ExtentTest test; // static instance of ExtentTest
@Override // StartTest method
startTest(Method method) {
    testMetaData = getTestMetaData(method);
    test = extent.startTest(testMetaData.getId(), testMetaData.getSummary());
}
@Override // End test method
endTest() {
    extent.endTest(test);
    extent.flush();
}
The above is my Selenium code.
When I am executing my suite file with parallel="methods" and thread count="3", I am getting the following error: "com.relevantcodes.extentreports.ExtentTestInterruptedException: Close was called before test could end safely using EndTest.".
While debugging, I found that AfterSuite was being called even before all the endTest() calls in AfterMethod had executed.
I tried different variations to make the code work, such as removing static, calling endTest() in the test itself rather than in AfterMethod, removing the close() call from AfterSuite, and many others, but I am still getting the same error.
I tried all the possible solutions given on the internet, but to no avail.
Attaching a hierarchy file for the ExtentReport used in my project
I also tried the following solution given on StackOverflow:
Extent report :com.relevantcodes.extentreports.ExtentTestInterruptedException: Close was called before test could end safely using EndTest
Unsynchronized output
XML file for parallel test.
ExtentReports is initialized in the ExtentManager class using a singleton:
public class ExtentManager {
    private static ExtentReports extent;
    public static ExtentReports getInstance() {
        if (extent == null) {
            extent = new ExtentReports(System.getProperty("user.dir") + "\\target\\surefire-reports\\html\\extent.html", true, DisplayOrder.OLDEST_FIRST);
            extent.loadConfig(new File(System.getProperty("user.dir") + "\\src\\test\\resources\\extentconfig\\ReportsConfig.xml"));
        }
        return extent;
    }
}
Declared in the TestBase class as globals:
public ExtentReports repo = ExtentManager.getInstance();
public static ExtentTest test;
Call startTest in public void onTestStart(ITestResult result):
test = repo.startTest(result.getName().toUpperCase());
Call endTest in the CustomListener class, both in a) public void onTestFailure(ITestResult result) and b) public void onTestSuccess(ITestResult result):
repo.endTest(test);
Call close() OR flush() in @AfterSuite in the TestBase class, but NOT both!
//repo.close();
repo.flush();
Note: I have ExtentReports ver-2.41.2 and TestNG ver-7.1.0.
After the above steps, error 'Getting closed before endTest call in Selenium using Extent Reports' got resolved.
Extent report generates each test successfully in the report.
Try it out!
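A common companion fix for parallel="methods" runs (not spelled out in the steps above) is to stop sharing one static ExtentTest across threads and keep the current test in a ThreadLocal, so each TestNG worker thread sees only its own test object. Here is a generic Java sketch of that pattern; the names are hypothetical and a String stands in for ExtentTest so the sketch has no ExtentReports dependency:

```java
// Generic per-thread holder: each thread that calls start() gets its own
// value back from current(), so parallel test methods cannot overwrite
// each other's in-flight test object.
public class PerThreadHolder {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    public static void start(String testName) { CURRENT.set(testName); }

    public static String current() { return CURRENT.get(); }

    // Call from the after-method hook to avoid leaking values
    // between pooled threads.
    public static void end() { CURRENT.remove(); }
}
```

With ExtentReports the same shape would hold the ExtentTest created in @BeforeMethod and hand it back to the logging and endTest() calls on the same thread.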

using log4j2 configuration builder to initialize logger after startup

I created a custom log4j configuration using ConfigurationBuilder and want to initialize this configuration and start using log4j afterwards; in other words, without having a configuration file when the project initializes...
According to this page, under "Reconfigure Log4j Using ConfigurationBuilder with the Configurator", it says:
An alternative to a custom ConfigurationFactory is to configure with the Configurator. Once a Configuration object has been constructed, it can be passed to one of the Configurator.initialize methods to set up the Log4j configuration. Using the Configurator in this manner allows the application control over when Log4j is initialized. However, should any logging be attempted before Configurator.initialize() is called then the default configuration will be used for those log events.
So this should be possible.
This is my code. It's almost exactly as on that page, with a few adjustments:
ConfigurationBuilder<BuiltConfiguration> builder = ConfigurationBuilderFactory.newConfigurationBuilder();
builder.setStatusLevel(Level.DEBUG);
builder.setConfigurationName("RollingBuilder");

// create a console appender
AppenderComponentBuilder appenderBuilder = builder.newAppender("Stdout", "CONSOLE")
        .addAttribute("target", ConsoleAppender.Target.SYSTEM_OUT);
appenderBuilder.add(builder.newLayout("PatternLayout")
        .addAttribute("pattern", "%d{dd/MMM/yyyy HH:mm:ss,SSS}- %c{1}: %m%n"));
builder.add(appenderBuilder);

// create a rolling file appender
LayoutComponentBuilder layoutBuilder = builder.newLayout("PatternLayout")
        .addAttribute("pattern", "%d [%t] %-5level: %msg%n");
ComponentBuilder triggeringPolicy = builder.newComponent("Policies")
        .addComponent(builder.newComponent("TimeBasedTriggeringPolicy")
                .addAttribute("interval", "1")
                .addAttribute("modulate", "true"));
ComponentBuilder rolloverStrategy = builder.newComponent("DefaultRolloverStrategy")
        .addAttribute("max", "4");
appenderBuilder = builder.newAppender("RollingFile", "RollingFile")
        .addAttribute("fileName", "logs/app-info.log")
        .addAttribute("filePattern", "logs/app-info-%d{yyyy-MM-dd}--%i.log")
        .add(layoutBuilder)
        .addComponent(triggeringPolicy)
        .addComponent(rolloverStrategy);
builder.add(appenderBuilder);

// create a new logger
builder.add(builder.newLogger("root", Level.DEBUG)
        .add(builder.newAppenderRef("RollingFile"))
        .addAttribute("additivity", false));
builder.add(builder.newRootLogger(Level.DEBUG)
        .add(builder.newAppenderRef("RollingFile")));
LoggerContext ctx = Configurator.initialize(builder.build());
However, when I call that code and then make log statements right after, I get the error:
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show Log4j2 internal initialization logging.
So obviously it thinks I don't have a configuration file... does anyone know how I can get my logger to recognize this configuration I created in code?
from Log4j 2 docs
Here is how Log4j finds the available ConfigurationFactories:

A system property named "log4j.configurationFactory" can be set with the name of the ConfigurationFactory to be used.

ConfigurationFactory.setConfigurationFactory(ConfigurationFactory) can be called with the instance of the ConfigurationFactory to be used. This must be called before any other calls to Log4j.

A ConfigurationFactory implementation can be added to the classpath and configured as a plugin in the "ConfigurationFactory" category. The Order annotation can be used to specify the relative priority when multiple applicable ConfigurationFactories are found.
In a test I've arranged this:
public class Log4j2Example {
    static {
        System.setProperty("log4j.configurationFactory", CustomConfigurationFactory.class.getName());
    }

    private static final Logger LOG = LogManager.getLogger(Log4j2Example.class);

    public static void main(String[] args) {
        LOG.debug("This Will Be Printed On Debug");
        LOG.info("This Will Be Printed On Info");
        LOG.warn("This Will Be Printed On Warn");
        LOG.error("This Will Be Printed On Error");
        LOG.fatal("This Will Be Printed On Fatal");
        LOG.info("Appending string: {}.", "Hello, World");
    }
}
The ConfigurationFactory implemented:
@Plugin(name = "CustomConfigurationFactory", category = ConfigurationFactory.CATEGORY)
@Order(50)
public class CustomConfigurationFactory extends ConfigurationFactory {
    ---8<----
}
If you set the value of the log4j.configurationFactory system property before Log4j 2 starts, you're done.
If you prefer using the Configurator instead of using a custom ConfigurationFactory, you have to make sure that your initialization is applied before any call to LogManager.getLogger(...). For instance, you could use a static initialization block
public class MyApplication {
    public static final Logger log;

    static {
        createCustomConfiguration();
        log = LogManager.getLogger(MyApplication.class);
    }

    public static void createCustomConfiguration() {
        // your initialization code
    }

    public static void main(String[] args) {
        log.info("Log and roll!");
        // do stuff
    }
}
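The static-block approach works because the JVM runs a class's static initializers exactly once, in source order, before any static member of the class is used, so anything placed before the getLogger() call is guaranteed to execute first. A minimal sketch of that ordering guarantee, with stand-in names and no Log4j involved:

```java
// Demonstrates static-initializer ordering: the "configure" step placed
// before the logger-creation step in the static block always runs first.
public class InitOrder {
    static final StringBuilder TRACE = new StringBuilder();
    static final String LOGGER;

    static {
        TRACE.append("configure;"); // stands in for createCustomConfiguration()
        LOGGER = makeLogger();      // stands in for LogManager.getLogger(...)
    }

    static String makeLogger() {
        TRACE.append("getLogger;");
        return "logger";
    }
}
```

This is also why the original code failed: the LoggerFactory/LogManager machinery had already bootstrapped with the default configuration before Configurator.initialize ran.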

log4j2 evaluation of user-created Lookup plugins is inconsistent and does not work at all when used programmatically

I'm observing some behaviour with log4j2 Plugins that I can only interpret as inconsistent.
I'm trying to write a plugin to be used with RollingFileAppender that will let me change the header of each rolled file. I've decided to use a Lookup plugin for this (call it dynamicheader). One additional complication is that the RollingFileAppender and its associated PatternLayout are created programmatically. The problems I observe are detailed below, but in summary:
The header property of the PatternLayout is never evaluated: calling getHeader() on the PatternLayout always returns "${dynamicheader:key}" instead of the value associated with the key. If I replace "${dynamicheader:key}" with "${sys:key}", the header IS evaluated and getHeader() returns whatever value I set for "key".
If I configure a FileAppender with my plugin in a configuration file ("${dynamicheader:key}", see below), calling getHeader() on the associated PatternLayout DOES return the value associated with the key, but it is only evaluated once: if I change the value (header) associated with a key, subsequent calls to getHeader() do not re-evaluate the header and return the original key. However, if I use two $ signs when I define the PatternLayout ("$${dynamicheader:key}"), the header IS evaluated on every call to getHeader(). This isn't surprising, since (I gather) it is the expected behaviour; but if I again replace my plugin with the SystemPropertiesLookup, I need only a single $ for the header to be evaluated on every call to getHeader().
My plugin, configuration file and unit test to demonstrate the behaviour follow:
Plugin
package my.custom.plugins;
#Plugin(name = "dynamicheader", category = StrLookup.CATEGORY)
public class DynamicHeader extends AbstractLookup {
private static Map<String, String> headerByAppender = Collections.synchronizedMap(new HashMap<>());
#Override
public String lookup(final LogEvent event, final String key) {
return get(key);
}
public void put(String key, String val) {
headerByAppender.put(key, val);
}
public String get(String key) {
return headerByAppender.get(key);
}
}
Note that my plugin is very very similar to the SystemPropertiesLookup plugin and I'd expect them to behave similarly.
Config:
Configuration:
  name: TestConfig
  packages: "my.custom.plugins"
  Appenders:
    File:
      name: FILE
      fileName: target/surefire-reports/unit-test.log
      append: false
      PatternLayout:
        Pattern: "%m%n%ex"
        header: ${dynamicheader:key2}
  Loggers:
    Root:
      AppenderRef:
        ref: FILE
Tests:
public void testMyPlugin1() {
    DynamicHeader dh = new DynamicHeader();
    dh.put("key1", "val1");
    PatternLayout pl = PatternLayout.newBuilder()
            .withPattern("%m%n%ex")
            .withHeader("${dynamicheader:key1}") // use my plugin here...
            .build();
    assertEquals("val1", pl.getHeader()); // <- this fails. pl.getHeader() always returns "${dynamicheader:key1}"
}

public void testSysPlugin1() {
    System.setProperty("key1", "val1");
    PatternLayout pl = PatternLayout.newBuilder()
            .withPattern("%m%n%ex")
            .withHeader("${sys:key1}") // use sys plugin here...
            .build();
    assertEquals("val1", pl.getHeader()); // <- this works.
}

public void testMyPlugin2() {
    DynamicHeader dh = new DynamicHeader();
    Layout l = ((LoggerContext) LogManager.getContext(false)).getConfiguration().getAppender("FILE").getLayout();
    dh.put("key2", "val2");
    assertEquals("val2", l.getHeader()); // <- this works.
    dh.put("key2", "val3");
    assertEquals("val3", l.getHeader()); // <- this fails. getHeader() returns "key2"
}
However, if I change the configuration file from
header: ${dynamicheader:key2}
to
header: $${dynamicheader:key2}
the above test passes. In contrast, if I change
header: ${dynamicheader:key2}
to
header: ${sys:key2}
the equivalent test passes:
public void testSysPlugin2() {
    Layout l = ((LoggerContext) LogManager.getContext(false)).getConfiguration().getAppender("FILE").getLayout();
    System.setProperty("key2", "val2");
    assertEquals("val2", l.getHeader()); // <- this works.
    System.setProperty("key2", "val3");
    assertEquals("val3", l.getHeader()); // <- this also works.
}
My questions then are:
Why is my plugin never evaluated when the PatternLayout is created programmatically? Am I doing something wrong?
Why does the SystemPropertiesLookup behave differently from a virtually identical user-authored lookup?
I'm using log4j2 v2.8.2.
Thanks in advance!
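For readers unfamiliar with the single-$ versus $$ distinction at the heart of this question: log4j2 substitutes ${...} once when the configuration is parsed, while $${...} survives parsing as ${...} and is re-resolved each time the value is used. The following toy Java sketch illustrates one such substitution pass; it is NOT log4j2's StrSubstitutor, just an illustration of the escaping rule, with made-up names:

```java
import java.util.Map;

public class SubstDemo {
    // One substitution pass: resolves ${key} from vars, and un-escapes
    // $${key} to ${key} without resolving it (so a later pass could).
    public static String substituteOnce(String s, Map<String, String> vars) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < s.length()) {
            if (s.startsWith("$${", i)) {        // escaped: defer to use time
                int end = s.indexOf('}', i);
                out.append(s, i + 1, end + 1);    // keep "${key}" literally
                i = end + 1;
            } else if (s.startsWith("${", i)) {   // resolve at parse time
                int end = s.indexOf('}', i);
                out.append(vars.get(s.substring(i + 2, end)));
                i = end + 1;
            } else {
                out.append(s.charAt(i++));
            }
        }
        return out.toString();
    }
}
```

Under this model, a header of ${dynamicheader:key2} is resolved once at parse time (freezing the then-current value), while $${dynamicheader:key2} keeps the lookup alive for every later getHeader() call, matching the behaviour described above.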

Jenkins pipeline job hangs on simple call to print simple object from shared library

I have a jenkins pipeline job that has been working fine. I have a small handful of similar pipelines, and I've been duplicating a small set of reusable utility methods into each one. So, I've started to construct a shared library to reduce that duplication.
I'm using the following page for guidance: https://jenkins.io/doc/book/pipeline/shared-libraries/ .
For each method that I move into the shared library, I create a "vars/methodname.groovy" file in the shared library, and change the method name to "call".
I've been doing these one at a time and verifying the pipeline job still works, and this is all working fine.
The original set of methods would reference several "global" variables, like "env.JOB_NAME" and "params.". In order for the method to work in the shared library, I would add references to those env vars and params as parameters to the methods. This also works fine.
However, I don't like the fact that I have to pass these "global" variables, that are essentially static from the start of the job, sometimes through a couple of levels of these methods that I've put into the shared library.
So, I've now created something like the "vars/acme.groovy" example from that doc page. I'm going to define instance variables to store all of those "global" variables, and move each of the single methods defined in the "vars/methodname.groovy" files into this new class.
I also defined a "with" method in the class for each of the instance variables (setter that returns "this" for chaining).
I initially would configure it inside my "node" block with something like the following (the file in the library is called "vars/uslutils.groovy"):
uslutils.withCurrentBuild(currentBuild).with...
And then when I need to call any of the reused methods, I would just do "uslutils.methodname(optionalparameters)".
I also added a "toString()" method to the class, just for debugging (since debugging Jenkinsfiles is so easy :) ).
What's odd is that I'm finding that if I call this toString() method from the pipeline script, the job hangs forever, and I have to manually kill it. I imagine I'm hitting some sort of non-obvious recursion in some Groovy AST, but I don't see what I'm doing wrong.
Here is my "vars/uslutils.groovy" file in the shared library:
import hudson.model.Cause
import hudson.triggers.TimerTrigger
import hudson.triggers.SCMTrigger
import hudson.plugins.git.GitStatus
class uslutils implements Serializable {
    def currentBuild
    String mechIdCredentials
    String baseStashURL
    String jobName
    String codeBranch
    String buildURL
    String pullRequestURL
    String qBotUserID
    String qBotPassword

    def getCurrentBuild() { return currentBuild }
    String getMechIdCredentials() { return mechIdCredentials }
    String getBaseStashURL() { return baseStashURL }
    String getJobName() { return jobName }
    String getCodeBranch() { return codeBranch }
    String getBuildURL() { return buildURL }
    String getPullRequestURL() { return pullRequestURL }
    String getQBotUserID() { return qBotUserID }
    String getQBotPassword() { return qBotPassword }

    def withCurrentBuild(currentBuild) { this.currentBuild = currentBuild; return this }
    def withMechIdCredentials(String mechIdCredentials) { this.mechIdCredentials = mechIdCredentials; return this }
    def withBaseStashURL(String baseStashURL) { this.baseStashURL = baseStashURL; return this }
    def withJobName(String jobName) { this.jobName = jobName; return this }
    def withCodeBranch(String codeBranch) { this.codeBranch = codeBranch; return this }
    def withBuildURL(String buildURL) { this.buildURL = buildURL; return this }
    def withPullRequestURL(String pullRequestURL) { this.pullRequestURL = pullRequestURL; return this }
    def withQBotUserID(String qBotUserID) { this.qBotUserID = qBotUserID; return this }
    def withQBotPassword(String qBotPassword) { this.qBotPassword = qBotPassword; return this }

    public String toString() {
        // return "[currentBuild[${this.currentBuild}] mechIdCredentials[${this.mechIdCredentials}] " +
        //        "baseStashURL[${this.baseStashURL}] jobName[${this.jobName}] codeBranch[${this.codeBranch}] " +
        //        "buildURL[${this.buildURL}] pullRequestURL[${this.pullRequestURL}] qBotUserID[${this.qBotUserID}] " +
        //        "qBotPassword[${this.qBotPassword}]]"
        return this.mechIdCredentials
    }
}
Note that I've simplified the toString() method temporarily until I figure out what I'm doing wrong here.
This is what I added at the top of my "node" block:
uslutils.currentBuild = currentBuild
println "uslutils[${uslutils}]"
When I run the job, it prints information from the lines that come before this, and then it just shows the spinner forever, until I kill the job. If I comment out the println, it works fine.
