Serilog MSSqlServer sink does not work when using ILogger interface

I am using the Serilog MSSqlServer sink and have it writing to the database with
the built-in static Log.Logger. However, when I assign it to an ILogger variable
and log through that interface instance, it no longer writes to the database.
I read somewhere that I would need to dispose the interface instance for it to write to the database.
Is there an equivalent of Log.CloseAndFlush() for an interface instance?
The reason I am using ILogger instead of Log.Logger is that I keep several ILogger instances alive at the same time, writing to different tables.
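A minimal sketch of the scenario (assuming a recent Serilog.Sinks.MSSqlServer API; the connection string and table name are placeholders): the concrete Logger returned by CreateLogger() implements IDisposable, and disposing it flushes any pending batched events - the per-instance counterpart of Log.CloseAndFlush().

using System;
using Serilog;
using Serilog.Core;
using Serilog.Sinks.MSSqlServer;

// One concrete Logger per target table; Logger implements IDisposable.
Logger ordersLogger = new LoggerConfiguration()
    .WriteTo.MSSqlServer(
        connectionString: "Server=...;Database=Logs;Trusted_Connection=True;",
        sinkOptions: new MSSqlServerSinkOptions { TableName = "OrderLogs", AutoCreateSqlTable = true })
    .CreateLogger();

ILogger logger = ordersLogger;   // log through the interface as usual
logger.Information("Written to OrderLogs");

// Per-instance equivalent of Log.CloseAndFlush():
ordersLogger.Dispose();
// or, when only the interface is in scope:
// (logger as IDisposable)?.Dispose();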

Related

How to enable Serilog minimum level overrides without particular convention of calling ForContext?

This article on Serilog minimum level overrides states:
The first argument of Override is a source context prefix, which is normally matched against the namespace-qualified type name of the class associated with the logger.
For this so-called "normal" behavior, wouldn't I need to manually set the .ForContext<>() differently for each class my logger is called from? In other words, how are namespace-specific minimum log levels supposed to work without a specific convention of how .ForContext is set?
If that makes sense, then how can I set ForContext automatically without calling it with a different argument everywhere?
For this so-called "normal" behavior, wouldn't I need to manually set the .ForContext<>() differently for each class my logger is called from?
Yes, you would. A common way of doing it is by using Log.ForContext<T>() in each class, stored in a member variable that is shared across the different methods of the class (so that all logs are written with the same context), e.g.:
public class SomeService
{
    private readonly ILogger _log = Log.ForContext<SomeService>();
    // ...
}

public class SomeRepository
{
    private readonly ILogger _log = Log.ForContext<SomeRepository>();
    // ...
}
If you are using an IoC container such as Autofac, you can have the .ForContext<>() call happen automatically when classes are resolved by the IoC container (by using constructor injection, for example).
If you are using Autofac specifically, you could use AutofacSerilogIntegration that takes care of that. There might be similar implementations for other IoC containers (or you'd have to implement your own).
If you are using Microsoft's Generic Host, then you'll need to configure it to use a custom ServiceProviderFactory which will be responsible for creating the instances and making the call to .ForContext<>()... An easy route is to integrate Autofac with Microsoft's Generic Host and then leverage the AutofacSerilogIntegration I mentioned above.
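With Autofac, for example, the wiring is a one-liner. A sketch, assuming the AutofacSerilogIntegration package and a SomeService that takes ILogger through its constructor:

using Autofac;
using AutofacSerilogIntegration;
using Serilog;

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .WriteTo.Console()          // assumes the Serilog.Sinks.Console package
    .CreateLogger();

var builder = new ContainerBuilder();
builder.RegisterLogger();       // AutofacSerilogIntegration: injects contextual loggers
builder.RegisterType<SomeService>();

using (var container = builder.Build())
{
    // SomeService receives Log.ForContext<SomeService>() automatically,
    // with no manual ForContext call in the class itself.
    var service = container.Resolve<SomeService>();
}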

Connection Pooling with Apache DBCP

I want to use Apache Commons DBCP to enable connection pooling in a Java application (there is no container-provided DataSource in this case). On many sites across the web - including the Apache site - usage of the library is based on this snippet:
BasicDataSource ds = new BasicDataSource();
ds.setDriverClassName("oracle.jdbc.driver.OracleDriver");
ds.setUsername("scott");
ds.setPassword("tiger");
ds.setUrl(connectURI);
Then you get your DB connections through the getConnection() method. But on other sites - including the Apache site - the DataSource instance is created like this:
ConnectionFactory connectionFactory = new DriverManagerConnectionFactory(connectURI, null);
PoolableConnectionFactory poolableConnectionFactory = new PoolableConnectionFactory(connectionFactory, null);
ObjectPool<PoolableConnection> objectPool = new GenericObjectPool<>(poolableConnectionFactory);
poolableConnectionFactory.setPool(objectPool); // DBCP 2.x: the factory must be linked back to its pool
PoolingDataSource<PoolableConnection> dataSource = new PoolingDataSource<>(objectPool);
What's the difference between them? Am I already getting connection pooling with BasicDataSource, or do I need an instance of PoolingDataSource for connection pooling to work? Is BasicDataSource thread-safe (can I use it as a class attribute), or do I need to synchronize access to it?
BasicDataSource is everything you need for basic use.
Internally, it creates a PoolableConnectionFactory and an ObjectPool, wrapped in a PoolingDataSource.
PoolingDataSource implements the DataSource interface using the provided ObjectPool: the PoolingDataSource takes care of handing out connections, while the ObjectPool takes care of holding and counting those objects.
I would recommend using BasicDataSource.
Only if you really need something special might you use PoolingDataSource with another implementation of ObjectPool, but that will be a very rare and specific case.
BasicDataSource is thread-safe, but you should take care to use the appropriate accessors rather than accessing protected fields directly, to ensure thread safety.
This is more of a (big) supporting comment on ivi's answer above, but I am posting it as an answer so I can walk through the details.
BasicDataSource is everything you need for basic use. Internally, it creates a PoolableConnectionFactory and an ObjectPool.
I wanted to look at the code in BasicDataSource to substantiate that statement (which turns out to be true). I hope the following walkthrough helps future readers.
The following happens the first time one calls basicDataSource.getConnection(); the underlying DataSource is created as follows:
1. The raw connectionFactory is created.
2. The Generic Object Pool (the 'connectionPool') is created; it is used in the remaining steps.
3. The above two are combined (connectionFactory + an object pool) to create a PoolableConnectionFactory. Significantly, during the creation of the PoolableConnectionFactory, the connectionPool is linked with the connectionFactory.
4. Finally, a PoolingDataSource is created from the connectionPool.

How can you log from a Neo4j Server Plugin?

I'm trying to debug a problem in the Neo4J Server plugin I'm writing. Is there a log I can output to? It's not obvious where or how to do this.
Good question. I think you could use Java Util Logging (JUL)? That should be routed into Neo4j's normal logging system.
Just inject org.neo4j.logging.Log into the class containing the implementation of your Neo4j stored procedure.
public class YourProcedures {

    @Context
    public Transaction tx;

    @Context
    public Log log;

    @Procedure(value = "yourProcedure", mode = Mode.READ)
    public Stream<YourResult> yourProcedure(@Name("input") String input) {
        log.debug("something");
        return Stream.empty();
    }
}
Logs are then written to the standard Neo4j log file (debug.log).
The level is controlled by the GraphDatabaseSettings.store_internal_log_level configuration setting.
The level can also be changed at runtime: just inject the DependencyResolver bean and define the following admin procedure. (The framework has a listener hooked to config changes which reconfigures the internal logging framework; this is the simplest solution I could find.)
@Context
public DependencyResolver dependencyResolver;

@Procedure(value = "setLogLevel", mode = Mode.DBMS)
@Description("Runtime change of logging level")
public void setLogLevel(@Name("level") String level) {
    Config config = dependencyResolver.resolveDependency(Config.class);
    config.set(GraphDatabaseSettings.store_internal_log_level, Level.valueOf(level));
}
UPDATE:
This solution works; however, it is insufficient if one wants to use logging the way it is usually done in Log4j - different loggers organized in a hierarchy, each logger at its own level. The org.neo4j.logging.Log component is just a wrapper around the Log4j logger for the GlobalProcedures class. This logger is only one of many loggers in the hierarchy, and in fact the wrapper blocks access to the richer features of the underlying framework. (Unfortunately, defining multiple @Context Log fields in the YourProcedures class, distinguished by some logger-qualifying annotation, is also impossible, because field injection is driven by a Map<Class, instance>, so there is only one possible instance to inject for any @Context-annotated field of a given type.)
Solution 1:
Use JUL as in the accepted answer. The disadvantage is that JUL redirects log events to the underlying Log4j anyway, so if the logger hierarchy is defined in JUL, Log4j must be set to the lowest possible level for the JUL levels to have any effect.
Solution 2:
Use Log4j directly (i.e. public static final Logger logger = LogManager.getLogger("some.identifier.in.hierarchy") in YourProcedures). There are some issues with redefining the configuration programmatically, though it is possible; I dropped this solution only because I had trouble deploying it in a non-Docker environment.
Solution 3: (finally chosen)
I defined a custom component, LogWithHierarchy (it can be built from your own ExtensionFactory loaded via the ServiceLoader mechanism - I was inspired by the APOC config implementation). The component provides an API of the form debug(loggerName, message), info(loggerName, message), etc. It knows the original Log, drills down into its Log4j LoggerContext, and redirects all logging requests to the requested logger in that LoggerContext. Log messages finally end up in debug.log. With this solution the original Log4j logger hierarchy is fully utilized, levels can be changed dynamically at runtime (setLogLevel must be changed to operate on the aforementioned LoggerContext), and everything is still implemented using standard Neo4j plugin support.

Using a single instance of ILog in an MVC application

I'm using Ninject for DI in my ASP.NET MVC application. I'm resolving the ILog dependency in controllers using the module below:
public override void Load()
{
    var configPath = ConfigurationManager.AppSettings["Log4NetConfigPath"];

    // Load the external config file for log4net
    XmlConfigurator.Configure(new System.IO.FileInfo(configPath));
    log4net.Util.LogLog.InternalDebugging = true;

    Bind<ILog>().ToMethod((c) => LogManager.GetLogger("AVLogger")).InSingletonScope();
}
I'm calling InSingletonScope() to provide a single ILog instance throughout the application. I have a couple of questions:
Do I really need to bother with having a single instance of ILog? Can I remove the InSingletonScope() call altogether?
Does having a single instance of ILog create performance issues?
It depends on how expensive it is to create your logger. I don't know what log4net's performance characteristics are, but if creation is not expensive you can simply create a new one each time.
When you use InSingletonScope(), the logger will exist for as long as your worker process exists (i.e. the logger is destroyed only when the process is recycled or shut down). That also means the logger hangs around when you don't need it. It's not so much a "performance" issue as a matter of managing resources.
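Worth noting (a general log4net fact, independent of the answer above): LogManager.GetLogger caches loggers by name, so the binding returns the same underlying ILog instance whether or not it is scoped as a singleton. A minimal sketch of the binding without singleton scope:

// log4net caches loggers by name internally, so every call to
// LogManager.GetLogger("AVLogger") yields the same ILog instance;
// the Ninject scoping therefore changes little here.
Bind<ILog>().ToMethod(c => LogManager.GetLogger("AVLogger"));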

Soft-code ILogger in HostManager and InternalDependencyResolver

I created my own ILogger implementation and register an instance via
ResourceSpace.Uses.Resolver.AddDependencyInstance<ILogger>(...)
inside the
using (OpenRastaConfiguration.Manual)
block.
This works fine for most log messages; however, some classes in OpenRasta try to figure out their ILogger instance before the DI container is ready, like HostManager:
static HostManager()
{
    Log = DependencyManager.IsAvailable
        ? DependencyManager.GetService<ILogger>()
        : new TraceSourceLogger();
}
In my case (and I suspect in the general case), IsAvailable is false, so it defaults to TraceSourceLogger.
Since static ILogger HostManager.Log is not a public property, I hacked it and made it public so that I can now set it.
InternalDependencyResolver, whose Log is always initialized to a new TraceSourceLogger() on construction, does have a publicly settable ILogger Log property, so I could just use that.
Now all of OpenRasta's log messages that I've encountered so far go to my custom ILogger.
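For reference, the wiring described above looks roughly like this - a sketch based only on the members mentioned in this question (it assumes the HostManager.Log hack has been applied, and MyCustomLogger stands in for the custom ILogger implementation):

// Hypothetical wiring, inside the OpenRastaConfiguration.Manual block.
var myLogger = new MyCustomLogger();   // the custom ILogger implementation

// Early messages from HostManager (requires the hack making Log public):
HostManager.Log = myLogger;

// InternalDependencyResolver has a publicly settable Log property:
if (ResourceSpace.Uses.Resolver is InternalDependencyResolver internalResolver)
    internalResolver.Log = myLogger;

// Plus the normal registration for everything resolved through DI:
ResourceSpace.Uses.Resolver.AddDependencyInstance<ILogger>(myLogger);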
Does anyone know of a way to get all of OpenRasta's classes (I did not check systematically and might have missed a class or two) to log to a custom ILogger without having to hack the sources? (It's always nice to know that upgrading OpenRasta won't require repatching and rebuilding)
As you found out, it's an ordering issue. Happy to take a patch to fix that, though.
