Since Log4j 1.x is end-of-life, I want to build my appender in Log4j 2, but there aren't enough resources or examples on the net. Additionally, being able to combine it with Messages and custom log levels would be great.
Something like this:
private static final Logger logger = LogManager.getLogger();
logger.log(ACCESS_LOG, new AccessLogMessage("DateTime", "User", "IP", "Data"));
...
try {
    ...
}
catch (ArithmeticException ex) {
    logger.log(EXCEPTION, new ExceptionMessage(ex));
}
A simple custom appender that would write logs to console would be enough for me to get started.
PS: My ultimate goal is to convert the structured log data to JSON format and send it to my REST service.
An appender that writes to the console already exists. It is called the ConsoleAppender. If you want to format the data in some special way then you would create a custom Layout to do that.
Log4j provides many examples of Layouts. The most common use case is to extend AbstractStringLayout and implement the toSerializable method.
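As a minimal sketch (the plugin name JsonAccessLayout and the trivial JSON wrapping are my own illustration, not part of Log4j), a custom layout extending AbstractStringLayout could look like this:

```java
import java.nio.charset.StandardCharsets;

import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.config.plugins.PluginFactory;
import org.apache.logging.log4j.core.layout.AbstractStringLayout;

// Log4j 2 discovers the layout via the @Plugin annotation.
@Plugin(name = "JsonAccessLayout", category = "Core", elementType = "layout", printObject = true)
public class JsonAccessLayout extends AbstractStringLayout {

    protected JsonAccessLayout() {
        super(StandardCharsets.UTF_8);
    }

    @PluginFactory
    public static JsonAccessLayout createLayout() {
        return new JsonAccessLayout();
    }

    @Override
    public String toSerializable(LogEvent event) {
        // event.getMessage() is your Message implementation (e.g. an AccessLogMessage);
        // format it however you like -- here, a deliberately naive JSON wrapper.
        return "{\"message\":\"" + event.getMessage().getFormattedMessage() + "\"}\n";
    }
}
```

You would then reference the layout from a ConsoleAppender in your log4j2.xml. For real JSON output you'd want proper escaping (or a JSON library) rather than the string concatenation above.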
Related
I am using the following log configuration:
.WriteTo.Logger(lc => lc.Filter.ByIncludingOnly(Matching.FromSource<****.Application.Common.Services.Jobs.JobService>()).WriteTo.File("logs/jobs-.log", rollingInterval: RollingInterval.Day))
I am trying to make it so any classes the job service calls get logged. I can't add those classes directly to the configuration because they are used from other classes with a high transaction volume, and I don't want to muddy the log.
Is there any way to have the child classes log under the calling class that is configured?
Matching.FromSource<****.Application.Common.Services.Jobs.JobService>() means that the SourceContext must be exactly ****.Application.Common.Services.Jobs.JobService to match the filter.
If you want the filter to match anything under the ****.Application.Common.Services.Jobs namespace then you need to apply a filter based on the namespace:
.WriteTo.Logger(lc => lc.Filter
    .ByIncludingOnly(
        Matching.FromSource(
            typeof(****.Application.Common.Services.Jobs.JobService).Namespace))
    // ...
This filter will match any SourceContext that starts with ****.Application.Common.Services.Jobs.
NB: You also need to create the context logger to include the corresponding SourceContext property:
ILogger log = Log.ForContext<****.Application.Common.Services.Jobs.JobService>();
log.Information("...");
I have a number of logs like this:
Log.Information("Submitting order {#order}", order);
This log goes through RabbitMQ -> Logstash -> Elasticsearch and ends up generating a lot of fields (I assume one field for each property). Eventually I have thousands and thousands of fields in Elastic, which brings all kinds of problems.
If I specify the whole object as a parameter, it usually means I don't care much about having all its fields parsed; I would be more than happy if it was stored as a single string object (but still serialised as JSON). Is there a way to customise it in Serilog?
@flaxel's answer works well if you're happy to change the ToString() representation of your object. If you have already overridden ToString(), or you don't want it to return a JSON string, then consider one of the following options.
If you don't want to log the type as JSON all the time, consider just serializing the object when you log the message. This is the most explicit approach, and allows you to pick and choose which messages have the serialized form and which have the destructured form, but it might make your log statements quite verbose:
// Using Newtonsoft.Json to serialize.
Log.Information("Submitting order {Order}", JsonConvert.SerializeObject(order));
If you always want to serialize a type to JSON, you could register a destructuring policy for that specific type. This keeps your log statements concise and ensures that type is always serialized in the same way:
// When configuring your logger.
Log.Logger = new LoggerConfiguration()
.Destructure.ByTransforming<Order>(order => JsonConvert.SerializeObject(order))
// ...
// This will use the destructurer registered above, so will convert to a JSON string.
Log.Information("Submitting order {#Order}", order);
// These will still use the ToString() method.
Log.Information("Submitting order {Order}", order);
Log.Information("Submitting order {$Order}", order);
Another advantage of this approach is that if you want to change the way you're representing objects of that type, or if you want to revert to the default destructuring approach, you just have to change the policy used when configuring the logger (i.e. the lambda in the snippet above).
If your serialization approach is too complicated to fit in a lambda, or you want to use the same serialization approach for a large number of types, you could define your own IDestructuringPolicy and then register it in a similar way:
class SerializationPolicy : IDestructuringPolicy
{
    public bool TryDestructure(object value, ILogEventPropertyValueFactory propertyValueFactory, out LogEventPropertyValue result)
    {
        // Check type of `value` and serialize if required (Order used here as an example).
        if (value is Order order)
        {
            result = new ScalarValue(JsonConvert.SerializeObject(order));
            return true;
        }
        result = null;
        return false;
    }
}
// When configuring your logger.
Log.Logger = new LoggerConfiguration()
.Destructure.With<SerializationPolicy>()
// ...
I think you can force stringification with the $ operator, and you can then override the ToString method to adjust the output. There is also a short example in the Serilog documentation showing how to force stringification:
var unknown = new[] { 1, 2, 3 };
Log.Information("Received {$Data}", unknown);
And this is the output of the logging function:
Received "System.Int32[]"
The scenario I want is to set the global log level to Error. This is my config code, which is called in the startup class:
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Error()
    .WriteTo.Logger(p => p.Filter
        .ByIncludingOnly(evt => evt.Level == LogEventLevel.Error)
        .WriteTo.MSSqlServer(ConnectionString, "Serilogs",
            null, LogEventLevel.Error, 50,
            null, null, false, columnOptions))
    .CreateLogger();
but the thing is that I want to write some custom Information log in some of my action methods in controllers, like this:
Log.Logger.Information("{User} {RawUrl} ",
userId,actionContext.Request.RequestUri.AbsolutePath);
The problem is that Serilog does not write Information logs to the SQL table because of the global Error level set in startup.cs. Is there any solution to this problem (without setting the global log level to Information)?
The MinimumLevel.Error() construct is intended to be a coarse, high-level filter which can extremely efficiently rule out logging way before it makes it to a sink. While it's natural to want to lean on that, it's not critical - you'll be surprised how efficient Serilog will still be if you filter by whitelisting log entries later in the logging pipeline.
WriteTo.Logger and other sinks also provide an equivalent way to set the minimum level that will go to that sink. The key is thus to do the filtering only at that level (via the optional minimumLevel argument override).
Once you've removed the global filtering, which, by design, is blocking your log request from even being captured, much less submitted to the sinks, the next step is to give your Filter.ByIncluding a way to identify some Information-level LogEvents as relevant. One approach is to whitelist particular contexts (but you might also just tag the events with a property). Example:
Log.Logger = ....
var importantInfoLog = Log.Logger.ForContext<MyClass>();
importantInfoLog.Information("Warning, this is important Information!");
Then you can tell the WriteTo.Logger to
- include anything >= Error
- include Information if the SourceContext is <MyClass>
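Putting those two rules together, the filter might look like this (a sketch: MyClass stands in for your own type, and the MSSqlServer sink arguments are abbreviated from your snippet):

```csharp
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Information()   // let Information into the pipeline at all
    .WriteTo.Logger(lc => lc
        .Filter.ByIncludingOnly(evt =>
            evt.Level >= LogEventLevel.Error ||
            (evt.Level == LogEventLevel.Information &&
             Matching.FromSource<MyClass>()(evt)))   // whitelist this context
        .WriteTo.MSSqlServer(ConnectionString, "Serilogs"))
    .CreateLogger();
```

Matching.FromSource<MyClass>() returns a Func<LogEvent, bool>, which is why it can be invoked inline against evt.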
An alternate (but in my opinion inferior) solution is to configure multiple WriteTo.Logger instances:
- one with minimumLevel set to Error
- the other with minimumLevel left as the default (implying >= Information), but grabbing solely the specific Information-level messages that you want.
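That alternative would look roughly like this (again a sketch, with MyClass and the sink arguments as placeholder assumptions):

```csharp
Log.Logger = new LoggerConfiguration()
    .WriteTo.Logger(lc => lc
        .MinimumLevel.Error()                              // everything >= Error
        .WriteTo.MSSqlServer(ConnectionString, "Serilogs"))
    .WriteTo.Logger(lc => lc
        // default minimum (Information), but only the whitelisted context
        .Filter.ByIncludingOnly(Matching.FromSource<MyClass>())
        .WriteTo.MSSqlServer(ConnectionString, "Serilogs"))
    .CreateLogger();
```

Note that both sub-loggers write to the same table here, so an Error event from MyClass would be written twice; that duplication is part of why the single-filter approach reads as cleaner.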
I am using the static Logger with the following setup:
Log.Logger = new LoggerConfiguration()
.WriteTo.Seq("http://localhost:5341")
.CreateLogger();
with the following in all my micro-services:
_log = Log.ForContext<GameBase>()
.ForContext("CustomerID", CustomerID);
This code inserts a CustomerID property into each event, but not into the message body.
Question: Is there a way to enrich all logs for this context so that the MESSAGE BODY contains this information as well? Like an enricher that would prepend a string to each message body? There are some items I really want to see in the events without having to drill down on each event.
Also, I'm not finding much documentation on the Enrichers. Is there one to not display the full context path?
The message body is configured at the Sink level, usually by defining an outputTemplate (if the Sink supports it, not all of them do). By using the ForContext you are making the CustomerID property available to all messages written to this log instance, but it's on the Sink configuration that you define how this property will be used / shown.
You can see examples in Serilog's documentation under Formatting Output
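For example, with the console sink (other sinks that support outputTemplate work the same way; customerId is a placeholder for your own value, and the {CustomerID} token must match the property name you set with ForContext):

```csharp
Log.Logger = new LoggerConfiguration()
    .WriteTo.Console(outputTemplate:
        "[{Timestamp:HH:mm:ss} {Level:u3}] ({CustomerID}) {Message:lj}{NewLine}{Exception}")
    .CreateLogger();

var log = Log.ForContext<GameBase>()
             .ForContext("CustomerID", customerId);
log.Information("Game started");   // e.g. [12:34:56 INF] (42) Game started
```

Any property attached via ForContext (or an enricher) can be placed in the template this way, so the "message body" you see is really just a rendering choice at the sink.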
I'm developing an app using Grails and there are some app-wide configuration settings I'd like to store somewhere. The only way I've thought of is to create a domain class that stores the configuration values, and to use a service that queries that domain class. The problem I see is that there should be just one instance of that domain class, but I haven't found anything to enforce that restriction.
There may be other best practices to store app's own configuration that I may not be aware of, all suggestions are welcome.
Edit: the settings are supposed to be configurable from within the app.
There is a special place: /grails-app/conf/Config.groovy. You can add values there like:
my.own.x=1
and read values by:
def x = grailsApplication.config.my.own.x
See docs for more details: http://grails.org/doc/latest/guide/conf.html#config
There is a plugin for that: Settings. It allows you to create named settings like my.own.x of different types (String, date, BigDecimal and integer), and provides you with basic CRUD pages to manage them.
You can access the settings from either gsp:
<g:setting valueFor="my.own.x" default="50" encodeAs="HTML"/>
or from controllers/services/domains:
Setting.valueFor("my.own.x", 50)
I use it in several projects and think it works great.
You can enforce your single domain class db entry via custom validator:
// No more than one entry in DB
class MasterAccount {
    boolean singleEntry = true

    static constraints = {
        singleEntry nullable: false, validator: { val, obj ->
            if (val && obj.id != getMasterAccount()?.id && MasterAccount.count > 0) {
                return "Master account already exists in database"
            }
        }
    }

    static MasterAccount getMasterAccount() {
        // find() returns null instead of throwing when no entry exists yet
        MasterAccount.list()?.find()
    }
}
You can defer its configuration and persistence to BootStrap.groovy, which would achieve the same effect as Config.groovy.
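A sketch of what that bootstrap step could look like (the default values are placeholders for your own):

```groovy
// grails-app/conf/BootStrap.groovy
class BootStrap {
    def init = { servletContext ->
        // Create the single configuration entry on first run only.
        if (!MasterAccount.count()) {
            new MasterAccount(singleEntry: true).save(failOnError: true)
        }
    }
}
```

Combined with the custom validator above, this guarantees exactly one row: BootStrap creates it if missing, and the validator rejects any second one.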
If you're using 1.3.* you can try the grails dynamic-config plugin (http://www.grails.org/plugin/dynamic-config). "This plugin gives your application the ability to change the config properties without restarting the application. The values in Config.groovy are persisted in database when the application is run for the first time after installing the plugin."
I've never used it on a grails 2.0.* project.