My organization is migrating from Log4j1 to Log4j2. We have a custom rolling file appender that changes the filename it logs to at runtime when a certain event occurs in the application. This makes it easy to find the relevant log file in the log directory. For example, the log directory might look like this:
mylog-2021-08-02.log
mylog-2021-08-03.log
SPECIAL_EVENT_mylog-2021-08-03.log
mylog-2021-08-04.log
mylog-2021-08-05.log
Based on the research I've done, it appears that appender filenames are immutable, so I'd have to create a new appender and add it to the configuration when the event occurs, then, when the triggering policy is signaled, remove that appender and add back a new appender with the original configuration. Is there a more elegant solution than this? Do I need to write a custom appender and handle the file naming/rollover logic myself?
Update 9/2/2021
Thanks for the answer, @D.B.; it helped me learn quite a bit about Log4j2. The question that you reference is very similar to my situation. We have many devices, and each device needs to log to its own file. I do have some additional requirements, though. We have many threads in each device which need to log to the same device log file, and many devices that each need their own log file. Additionally, I need to handle the special rollover file naming requirement (original post) when a particular event occurs in the device. Finally, the name assigned to each device is not known until runtime (it's defined in another configuration file we have).
I could use markers, like you suggest, but this could quickly become difficult to maintain, since developers would need to know to pass a marker with every logging statement, and the entire existing code base would need to be updated to pass the appropriate marker. I could also use a context map, as you suggest, but the application has many threads, and again developers would need to know to set the context data appropriately before logging from any thread.
With Log4j1 these requirements were met by:
1. A custom appender class derived from RollingFileAppender that handled the special event file naming rollover logic.
2. A custom filter that accepted events that met the following criteria:
   a. The thread name the event came from included the "device name" of the device
   b. The event message included the "device name"
3. When a new device is instantiated in the system:
   a. A new custom filter is created with the "device name" string to filter on.
   b. A new custom appender is created that logs to a file named "device name".log. This appender is created with the custom filter.
   c. The appender is added as a reference to the Root logger.
This results in all log events being sent to the new appender (and to every other device appender that has been created), but the log events are filtered based on the "device name". The result is a device-specific log file.
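For reference, a minimal sketch of the kind of Log4j1 filter we used (class and field names here are illustrative, not our actual code):

import org.apache.log4j.spi.Filter;
import org.apache.log4j.spi.LoggingEvent;

// Accepts only events whose thread name or rendered message contains the device name;
// everything else is denied so it never reaches that device's appender.
public class DeviceNameFilter extends Filter {

    private final String deviceName;

    public DeviceNameFilter(String deviceName) {
        this.deviceName = deviceName;
    }

    @Override
    public int decide(LoggingEvent event) {
        String message = event.getRenderedMessage();
        if (event.getThreadName().contains(deviceName)
                || (message != null && message.contains(deviceName))) {
            return Filter.ACCEPT;
        }
        return Filter.DENY;
    }
}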
I could implement a custom filter and appender like we did with Log4j1, but I'd prefer not to be dependent upon the logging core classes. Any additional recommendations you have would be greatly appreciated.
You can achieve this by using ThreadContext together with a RoutingAppender.
My configuration file is as follows:
<!-- Routes must be nested inside a Routing appender (under Appenders); the appender name is arbitrary -->
<Routing name="Routing-Appender">
  <Routes pattern="$${ctx:logName}">
    <Route>
      <RollingRandomAccessFile name="Rolling-Random-Access-File-Appender"
                               fileName="${ctx:logName}.log"
                               filePattern="${ctx:logName}.log.%d{yyyy-MM-dd-hh-mm-ss}.gz">
        <PatternLayout pattern="%msg %n"/>
        <Policies>
          <SizeBasedTriggeringPolicy size="50 MB"/>
        </Policies>
      </RollingRandomAccessFile>
    </Route>
  </Routes>
</Routing>
Set the thread context where you write your log, like the following:
ThreadContext.put("logName", fileName);
log4j2logger.log(level, logMessage);
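For the per-device and special-event requirements from the question, each device thread could set its own logName and switch it when the event occurs. A sketch (class and names are illustrative; note that switching the key sends subsequent events to a new SPECIAL_EVENT_ file rather than renaming the existing one):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class DeviceWorker implements Runnable {

    private static final Logger LOG = LogManager.getLogger(DeviceWorker.class);

    private final String deviceName; // only known at runtime, read from your own config

    public DeviceWorker(String deviceName) {
        this.deviceName = deviceName;
    }

    @Override
    public void run() {
        // Every thread that logs for this device must set the same key,
        // so the RoutingAppender routes its events to <deviceName>.log.
        ThreadContext.put("logName", deviceName);
        LOG.info("device started");

        // When the special event occurs, switch the route so subsequent
        // events go to SPECIAL_EVENT_<deviceName>.log.
        ThreadContext.put("logName", "SPECIAL_EVENT_" + deviceName);
        LOG.info("special event occurred");
    }
}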
We are trying to switch completely from log4net to Serilog. However, this part of the functionality seems to be missing. What I need is to be able to get the location of the log files inside a library class. This is important for our desktop ClickOnce application because that location is different on different OSes and for different users. When a user needs access to the logs, we can direct them to the proper folder.
A very similar question was asked here:
Read current Serilog's configuration
But I can't believe that there is no way to get this information from Serilog. I don't need to change that configuration - just read it. In log4net we could do:
log4net.LogManager.GetAllRepositories()
and then
repository.Root.Appenders.OfType<FileAppender>()
Please tell me that there is some kind of back door to the current LoggerConfiguration, or that there is some alternative way to get the file path of the current File sink.
Serilog does not expose the list of Sinks that have been configured, so your only option at the moment would be to use Reflection if you really want to get this information from the live Serilog configuration.
You can see an example on this answer:
Unit test Serilog configuration
That said, given that all you want to do is know the path where log files are being written, that's something you can easily store at application startup, at the moment you set up your Serilog logging pipeline.
If you configure the file path in code, you can store that information in a static property somewhere your entire app can access. If you get your folder path from the appSettings.json or App.config, you can read the information from there.
If you have environment variables in your configuration, you can get the same values that Serilog gets by expanding them, e.g. Environment.ExpandEnvironmentVariables("%LogPath%\\AppName.log").
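For example, a minimal sketch of the first option (the class name, property, and path below are illustrative):

using System;
using Serilog;

// Resolve the log path once at startup, hand the same value to the File sink,
// and expose it through a static property the rest of the app can read.
public static class LoggingPaths
{
    public static string LogFilePath { get; private set; }

    public static void ConfigureSerilog()
    {
        LogFilePath = Environment.ExpandEnvironmentVariables(@"%LocalAppData%\MyApp\MyApp.log");

        Log.Logger = new LoggerConfiguration()
            .WriteTo.File(LogFilePath, rollingInterval: RollingInterval.Day)
            .CreateLogger();
    }
}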
I'm looking for ways to configure/reconfigure log4j after it may or may not have been initialized. This should work both standalone and in a web container.
The configuration may be represented by a configuration file at a particular arbitrary URI. The knowledge of the URI comes from the application, not the log4j framework. The configuration may also be done programmatically (less problematic, but still problematic).
The public API is unfortunately sorely lacking, so developers are forced to write brittle code using implementation classes from log4j core. From weeks of studying documentation and stepping through log4j code, I see two ways to accomplish reconfiguration:
Stopping the current context and re-initializing using org.apache.logging.log4j.core.config.Configurator,
similar to the following:
((LoggerContext) LogManager.getContext(false)).stop();
Configurator.initialize(buildDefaultConfiguration()); //programmatically building a configuration
or
((LoggerContext) LogManager.getContext(false)).stop();
Configurator.initialize(null, ConfigurationSource.fromUri(loggingUri)); //passing the configuration source constructed from a known URI
The first line in both examples will stop the current context if it has already been created and started (when running in a web container for example). If log4j has not been initialized (when running as a standalone app for example) it will initialize log4j with the default configuration and start the context first (as a side effect of calling getContext()), and then stop it.
If the current context is not stopped first, the call to Configurator.initialize() will have no effect. log4j will ignore your attempt to re-initialize, will not give you any indication of it, and will simply return the current context. This behavior is not mentioned in the "Reconfigure Log4j Using ConfigurationBuilder with the Configurator" section of the Manual, which simply says: "However, should any logging be attempted before Configurator.initialize() is called then the default configuration will be used for those log events." In the examples above, the default configuration will also be used for all subsequent log events, because the call to Configurator.initialize() has no effect unless the current context is stopped first.
Setting a new configuration location on the existing context, thus forcing reconfiguration,
similar to the following:
((LoggerContext) LogManager.getContext(false)).setConfigLocation(loggingUri);
This works in a similar fashion: if log4j hasn't been initialized the call to getContext() will trigger initialization and creation of the default context that will then be reconfigured; and if it has been initialized then the current context, whatever it may have been, will get reconfigured. The configuration source will be created from the URI by the log4j framework.
The difference is that with the first approach the context is replaced, and all loggers in the old (stopped) context are dead: if any code on the stack holds references to these dead loggers and tries to log to them, it is a no-op. With the second approach the context is kept, but the configuration is replaced and the existing loggers are updated with the new configuration.
Both methods use core code and are therefore brittle, but both work for the sunny day scenario (using log4j-core 2.10.0 anyway). However, neither one appears to afford the user any control over handling any exceptional events, or even inform the user that something went wrong. Log4j will "eat up" any exceptions, and make its own executive decision how to handle them.
If a problem occurs after Configurator.initialize() is called, log4j will create a default configuration that effectively cuts off all logging other than errors to the console, and happily return the new context, giving the calling code no clue that logging has effectively been stopped.
If a problem occurs after LoggerContext.setConfigLocation() is called, log4j will do essentially the same thing, except the current context will be kept. One would think that, particularly in the case of a reconfiguration failure, the most typical handling would be to revert to the old configuration. There doesn't appear to be a way to accomplish this, because log4j will simply stop logging (other than errors to the console) and give the calling code no indication of the failure.
Here's a typical scenario: several applications extend a common framework. The framework configures the same logging for all extending applications (to facilitate reuse and simplify post-processing of the logs). Some application has a unique logging need and attempts to reconfigure log4j from its own metadata (config file at a known URI). The XML parser throws an exception parsing this file. The exception gets handled internally by log4j, logging is quietly stopped, and no one knows. Well, there is an error log sent to the StatusLogger with the exception, but the calling code doesn't know.
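The closest thing to a workaround I've found is to check afterwards whether log4j fell back to its default configuration and then revert manually. A sketch (which still leans on core classes like DefaultConfiguration, exactly the kind of dependency I'd like to avoid):

import java.net.URI;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.DefaultConfiguration;

public final class LoggingReconfigurer {

    // Returns true if the new configuration appears to have been applied.
    public static boolean reconfigure(URI loggingUri) {
        LoggerContext ctx = (LoggerContext) LogManager.getContext(false);
        URI previous = ctx.getConfigLocation(); // may be null if configured programmatically

        ctx.setConfigLocation(loggingUri);

        // If the new source could not be parsed, log4j silently installs its
        // fallback configuration (errors to the console only).
        if (ctx.getConfiguration() instanceof DefaultConfiguration) {
            if (previous != null) {
                ctx.setConfigLocation(previous); // best-effort revert
            }
            return false;
        }
        return true;
    }
}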
So, with this lengthy preamble, the question is: is there a mechanism I haven't discovered yet to modify the log4j configuration in a predictable fashion and be able to handle abnormal events should they arise? That is, other than something drastic like (for the example above) extending the XML configuration class and replacing the code that handles exceptions, thus running the risk of creating undesirable side effects in the current log4j implementation, to say nothing of the even greater risk of breaking in the future if the implementation changes.
Any help would be greatly appreciated!
Is there any way to customize logging on Neo4j 3+? Something like logback.xml, where I can define the log pattern, output files, levels, rolling policy, and so on.
If you're talking about using Neo4j Server, then the logging configuration is available in the neo4j.conf file, with options prefixed by dbms.logs.: https://neo4j.com/docs/operations-manual/current/reference/configuration-settings/
These options include log level, output files, rotation policy, etc.
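For example, in neo4j.conf you might set something like the following (setting names are from the 3.x operations manual linked above; check them against your exact version):

# Debug (internal) log level and rotation
dbms.logs.debug.level=INFO
dbms.logs.debug.rotation.size=20m
dbms.logs.debug.rotation.keep_number=7

# Query logging
dbms.logs.query.enabled=true
dbms.logs.query.threshold=2s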
If you're using Neo4j embedded in another application (which you probably shouldn't), then you can use the setUserLogProvider(...) method of the GraphDatabaseFactory. If you want to route user logging to another framework, there is a Slf4jLogProvider in the neo4j-logging jar, which can be used to send logs to slf4j and on to wherever you like.
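For the embedded case, a minimal sketch (assuming the jar that provides Slf4jLogProvider is on the classpath and an slf4j backend such as logback is configured; package names may vary slightly by version):

import java.io.File;

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.logging.slf4j.Slf4jLogProvider;

public class EmbeddedWithSlf4j {
    public static void main(String[] args) {
        // Route Neo4j's user log to slf4j, so logback.xml (or any slf4j backend)
        // controls the pattern, output files, levels, and rolling policy.
        GraphDatabaseService db = new GraphDatabaseFactory()
                .setUserLogProvider(new Slf4jLogProvider())
                .newEmbeddedDatabase(new File("data/graph.db"));

        Runtime.getRuntime().addShutdownHook(new Thread(db::shutdown));
    }
}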
I am getting the below error when I try to deploy to Azure. I am building a basic hello world app. Can someone please give some insights? I have attached the logs below.
I am using version 1.1.0-beta2.
The silos start properly as per the log files, but it can't initialize the client in Azure. I am using the below config file in the client.
<?xml version="1.0" encoding="utf-8" ?>
<!--
  This is a sample client configuration file for use by an Azure web role acting as an Orleans client.
  The comments illustrate common customizations.
  Elements and attributes with no comments should not usually need to be modified.
  For a detailed reference, see "Orleans Configuration Reference.html".
-->
<ClientConfiguration xmlns="urn:orleans">
  <!--
    To turn tracing off, set DefaultTraceLevel="Off" and have no overrides. To see a minimum of messages, set DefaultTraceLevel="Error".
    For the trace log file name, {0} is the silo name and {1} is the current time.
    Setting WriteTraces to true will cause detailed performance information to be collected and logged about the individual steps
    in the message lifecycle. This may be useful when debugging performance issues.
  -->
  <Tracing DefaultTraceLevel="Off" TraceToConsole="false" TraceToFile="{0}-{1}.log" WriteTraces="false">
    <!--
      To get more detailed application logging, you can change the TraceLevel attribute value to "Verbose" or "Verbose2".
      Depending on the log levels you have used in your code, this will cause additional messages to be written to the log.
    -->
    <TraceLevelOverride LogPrefix="Application" TraceLevel="Info" />
  </Tracing>
</ClientConfiguration>
PS: I figured it out after enabling the fusion logs. I could see that one DLL was not loaded properly. The issue is fixed.
You should be using AzureClient. It handles all configuration automatically.
Please refer to https://github.com/dotnet/orleans/tree/master/Samples/AzureWebSample and follow the same setup as there. The best way is to clone that AzureWebSample and add your own grains and logic.
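For reference, a minimal sketch of initializing the client with AzureClient (this assumes Orleans 1.x with the OrleansAzureUtils package; the wrapper class and method name here are illustrative):

using Orleans.Runtime.Host;

public static class OrleansClientBootstrap
{
    // Call this once from the web role before the first grain call
    // (e.g. from Global.asax Application_Start).
    public static void EnsureInitialized()
    {
        if (!AzureClient.IsInitialized)
        {
            // AzureClient discovers the silo gateway endpoints that the worker role
            // publishes to the deployment's Azure table, so the endpoints don't have
            // to be hard-coded in a ClientConfiguration XML file.
            AzureClient.Initialize();
        }
    }
}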
What version of Orleans are you using? If you included http://www.nuget.org/packages/Microsoft.Orleans.Templates.Interfaces/ in the project, the invoker class should have been generated at compile time. Try cleaning up Properties\orleans.codegen.cs files and rebuilding the project.
I'm trying to configure a Topshelf-based Windows service to log to a custom event log using Topshelf.Log4Net and log4net. This works fine if I run the application in command-line mode. When I try to install the service with BillsTestService.exe install, I get:
INFO Topshelf v3.1.107.0, .NET Framework v4.0.30319.18052
DEBUG Attempting to install 'BillsTestService'
Running a transacted installation.
...
Service BillsTestService has been successfully installed.
Creating EventLog source BillsTestService in log Application...
An exception occurred during the Install phase.
System.ArgumentException: Source BillsTestService already exists on the local computer.
...
at System.Diagnostics.EventLog.CreateEventSource(EventSourceCreationData sourceData)
I've tried running EventLog.DeleteEventSource("BillsTestService"); in LINQPad before installing; that succeeds, but a subsequent service install still fails.
My log4net appender configuration is:
<appender name="ErrorEventLogAppender" type="log4net.Appender.EventLogAppender" >
<threshold value="ERROR" />
<logName value="MyCompanyServices" />
<applicationName value="BillsTestService" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%-5level %logger - %message%newline" />
</layout>
</appender>
What am I doing wrong?
The intent is to have multiple services log errors to the same log name (with different application names); the log would be created by Operations.
Part of the issue is that Topshelf automatically creates an event log source named after the service when you install it. Since the log4net appender's applicationName is also used as an event log source, it cannot be the actual application/service name; the source must be unique on the local computer. I added a "Source" suffix to the name in the log4net configuration.
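For example, the appender entry ends up looking something like this (the exact suffix is arbitrary; it just has to be unique on the machine):

<applicationName value="BillsTestServiceSource" />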
The other part is that the service does not have rights to create the log itself. It can create a new source, but not a new log. One way to create the log ahead of time is in code (I used LINQPad):
// Creating a source in a log that doesn't exist yet also creates the log itself.
EventLog.CreateEventSource("FOODEBUG", "MyCoSvc");
EventLog mylog = new EventLog("MyCoSvc");
mylog.Source = "FOODEBUG";
mylog.WriteEntry("This is a test.");
// Removing the temporary source afterwards leaves the log in place.
EventLog.DeleteEventSource("FOODEBUG");
I'm not positive if you actually have to write to the log to create it; after spending over two days on this, I'd rather be safe.
Also note that only the first 8 characters of a log name are significant; you can use a longer name, but the system treats log names that share the same first 8 characters as the same log.
There's no need to move the log4net initialization as Chris Patterson suggested. Simply including
configurator.DependsOnEventLog();
configurator.UseLog4Net("MyService.exe.config");
in the HostFactory.Run delegate is sufficient. (I'm using Topshelf.Log4Net.)
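For reference, a minimal sketch of the kind of HostFactory.Run delegate I mean (the BillsTestService class and its Start/Stop methods are illustrative):

using Topshelf;

public static class Program
{
    public static void Main()
    {
        HostFactory.Run(configurator =>
        {
            // Make sure the Event Log service is available before the service starts,
            // and hand log4net its configuration file.
            configurator.DependsOnEventLog();
            configurator.UseLog4Net("MyService.exe.config");

            configurator.Service<BillsTestService>(s =>
            {
                s.ConstructUsing(name => new BillsTestService());
                s.WhenStarted(svc => svc.Start());
                s.WhenStopped(svc => svc.Stop());
            });

            configurator.SetServiceName("BillsTestService");
        });
    }
}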
Finally, I'm reasonably sure that the entire Windows event logging system is flaky. Event Viewer's refresh doesn't work in all cases, at one point my Application log entries disappeared, and I believe I've seen different results after a reboot.
Move your log4net initialization to the ConstructUsing() configuration delegate for the service, instead of specifying it for use during install/uninstall, which doesn't require the service class to be instantiated.
Or, only use the event log appender when the actual service is running, by either adding the appender outside of the config file or modifying the configuration to eliminate the event log appender unless an ERROR or FATAL event occurs.
My guess is the DEBUG/INFO level events are trying to log to the appender, and the source does not exist yet.