Creating a log backup file with a .bak extension using Serilog

I am using the Serilog framework for logging in my application, with a file size limit of 2 MB. When the file reaches 2 MB, a new file such as app_001.log is created, and the existing app.log becomes the backup file.
What I want instead is for app.log to be renamed to app.log.bak when it reaches 2 MB, with the new logs written to a freshly created app.log.
_logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .WriteTo.File(_filepath,
        restrictedToMinimumLevel: LogEventLevel.Debug,
        shared: true,
        rollOnFileSizeLimit: true,
        fileSizeLimitBytes: 2000000)
    .CreateLogger();

You can create a class that derives from FileLifecycleHooks, override OnFileOpened, and add logic that checks for the existence of app_*.log files and renames them to *.bak.
https://github.com/serilog/serilog-sinks-file#extensibility
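A sketch of such a hook, assuming a serilog-sinks-file version that exposes the OnFileOpened(path, stream, encoding) overload; the class name and the app*.log pattern here are illustrative, not part of the Serilog API:

```csharp
using System;
using System.IO;
using System.Text;
using Serilog.Sinks.File;

// Illustrative sketch: whenever the sink opens a log file, rename any
// other app*.log files in the same directory to *.log.bak.
class BakRenameHooks : FileLifecycleHooks
{
    public override Stream OnFileOpened(string path, Stream underlyingStream, Encoding encoding)
    {
        var directory = Path.GetDirectoryName(Path.GetFullPath(path));
        var current = Path.GetFileName(path);
        foreach (var file in Directory.GetFiles(directory, "app*.log"))
        {
            if (string.Equals(Path.GetFileName(file), current, StringComparison.OrdinalIgnoreCase))
                continue; // don't touch the file the sink just opened

            var backup = file + ".bak"; // e.g. app_001.log -> app_001.log.bak
            if (File.Exists(backup))
                File.Delete(backup);    // File.Move throws if the target exists
            File.Move(file, backup);
        }
        return base.OnFileOpened(path, underlyingStream, encoding);
    }
}
```

The hook is registered by passing `hooks: new BakRenameHooks()` to the `WriteTo.File(...)` call alongside the existing arguments.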

Related

Serilog - keep only 7 latest files

I use Serilog and have the date as part of my filename. This is an easy way to get to the file: currently I check nightly events and just pick the latest file in the morning.
Now I only want to keep 7 days. This is what retainedFileCountLimit is for.
However, that does not work as I want it to, as it seems to check for that specific filename.
How can I do this? (I had my own log system which deleted files older than a week.)
Where are all the Serilog properties described? I am missing an overview of them.
// Add Serilog
string logFileName = HostingEnvironment.MapPath("~/new_" + DateTime.Now.ToString("yyyyMMdd") + ".log");
Log.Logger = new LoggerConfiguration()
    .WriteTo.File(
        path: logFileName,
        retainedFileCountLimit: 7,
        shared: true,
        rollingInterval: RollingInterval.Day,
        rollOnFileSizeLimit: true,
        fileSizeLimitBytes: 123456,
        flushToDiskInterval: TimeSpan.FromSeconds(5))
    .CreateLogger();
Log.Information("Starting Serilog #1");
Log.Information("Starting Serilog #1");
The File sink automatically includes the date in the file name. Do not include DateTime.Now in the file name; let Serilog take care of that, and you should get the retention you expect.
var log = new LoggerConfiguration()
    .WriteTo.File(
        "new_.txt", // <<<<<<<<<<<<<<<<<<<<<<
        rollingInterval: RollingInterval.Day,
        retainedFileCountLimit: 7,
        // ...
    )
    .CreateLogger();
This will append the time period to the filename, creating a file set like:
new_20180630.txt
new_20180701.txt
new_20180702.txt
The documentation for the File sink is in its repository on GitHub.

Single log file

I would like to have a single log file that is rolled on the size limit, with the previous file removed so there is only one log file at a time. Example:
logs.txt reaches 10 MB --> delete logs.txt, start writing to logs_001.txt
My current code is:
Log.Logger = new LoggerConfiguration()
    .WriteTo.File(
        LogFile,
        rollOnFileSizeLimit: true,
        retainedFileCountLimit: 1,
        fileSizeLimitBytes: 10485760) // 10 MB
    .CreateLogger();
The code is from a Xamarin.Forms project and is executed every time the application is initialized.
The issue with that code is that a new log file is created on each application initialization: the previous one is deleted, but the file size limit is not respected. Even if the log file is smaller than 10 MB, it still rolls to a new file at each start of the application.
The solution was simply to remove rollOnFileSizeLimit: true.
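With that change, the configuration from the question becomes:

```csharp
Log.Logger = new LoggerConfiguration()
    .WriteTo.File(
        LogFile,
        retainedFileCountLimit: 1,
        fileSizeLimitBytes: 10485760) // 10 MB
    .CreateLogger();
```

Note the trade-off documented for the File sink: without rollOnFileSizeLimit, Serilog stops writing new events once fileSizeLimitBytes is reached rather than rolling to a new file, so this trades rolling behaviour for a single stable file.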

Ocelot not finding reroutes files

I am trying to build my own microservices architecture and am stuck at the API gateway part.
I am trying to make Ocelot v14 find all the ocelot.json or configuration.json configuration files inside the solution.
In the Program.cs file I am trying to merge the configuration files with the following code, taken from https://ocelot.readthedocs.io/en/latest/features/configuration.html#react-to-configuration-changes
builder.ConfigureServices(s => s.AddSingleton(builder))
    .ConfigureAppConfiguration((hostingContext, config) =>
    {
        config
            .SetBasePath(hostingContext.HostingEnvironment.ContentRootPath)
            .AddJsonFile("appsettings.json", true, true)
            .AddJsonFile($"appsettings.{hostingContext.HostingEnvironment.EnvironmentName}.json", true, true)
            .AddOcelot(hostingContext.HostingEnvironment)
            .AddEnvironmentVariables();
    })
    .UseStartup<Startup>();
When I run this, the application creates the following ocelot.json file inside my OcelotApiGw project:
{
  "ReRoutes": []
}
The problem is that it is empty, so the reroutes do not work. When I paste the desired reroutes into this ocelot.json file, they work, but that is not the functionality I want.
What I want is for the configuration to be merged automatically from the different .json files.
Any help would be greatly appreciated.
This is how eShopOnContainers implements it with Ocelot v12:
IWebHostBuilder builder = WebHost.CreateDefaultBuilder(args);
builder.ConfigureServices(s => s.AddSingleton(builder))
    .ConfigureAppConfiguration(ic => ic.AddJsonFile(Path.Combine("configuration", "configuration.json")))
    .UseStartup<Startup>();
In case you need more code, file structures, or anything else, just comment and ask.
Not sure if this is what you are looking for, but this is the way Ocelot merges multiple routing files together:
https://ocelot.readthedocs.io/en/latest/features/configuration.html#merging-configuration-files
We don't use this, but this is how we have our startup defined:
var builder = new ConfigurationBuilder()
    .SetBasePath(env.ContentRootPath)
    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
    .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
    .AddJsonFile(this.env.IsDevelopment() ? "ocelot.json" : "ocelot.octopus.json")
    .AddEnvironmentVariables();
So we have our standard appsettings files plus the Ocelot one; Octopus transforms the variables we want when our Ocelot instance is deployed out to our test/production environments (or just our test/local one).
This seems to be the bit that defines what to do for multiple files:
In this scenario Ocelot will look for any files that match the pattern
(?i)ocelot.([a-zA-Z0-9]*).json and then merge these together. If you
want to set the GlobalConfiguration property you must have a file
called ocelot.global.json.
Not sure if you need to explicitly define each file (unless they can be defined via a variable like {env.EnvironmentName}), but that should be easy enough to test.
Sorry if I have got the wrong end of the stick, but I hope this helps.
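One thing worth checking with that merging pattern: the route files must actually match `(?i)ocelot.([a-zA-Z0-9]*).json` (for example, ocelot.global.json plus one per service; the names are illustrative) and be present in the content root at runtime. A project-file fragment like the following (the glob is an assumption about your layout) makes sure they are copied to the build output where AddOcelot can find and merge them:

```xml
<!-- Hypothetical .csproj fragment: copy every ocelot.*.json route file
     to the output directory so AddOcelot can discover and merge them. -->
<ItemGroup>
  <None Update="ocelot.*.json" CopyToOutputDirectory="PreserveNewest" />
</ItemGroup>
```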

Using Grails to store images, but cannot store outside CATALINA_HOME in production

I'm using Grails 2.5.6 to store uploaded images in a folder on a server.
The following is my code to store the image:
mpr.multiFileMap.file.each { fileData ->
    CommonsMultipartFile file = fileData
    File convFile = new File(file.getOriginalFilename())
    file.transferTo(convFile)
    /* Processing File */
    File uploadedFile = new File("${directory}${generatedFileName}.${extension}")
    convFile.renameTo(uploadedFile)
}
I have no problem running in development (macOS High Sierra).
But when I deployed to production (an Ubuntu 14.04 server), I could not save the file outside the CATALINA_HOME directory.
I have checked the permissions and ownership of the destination directory, but still: the directory was created, yet the file was never stored.
For example, I tried to store the file in the /home/tomcat/ directory (the /home directory is on a separate partition from Tomcat, which is stored in /var); the directory was created, but the file was never stored.
When I put the destination directory within the CATALINA_HOME folder, everything works fine, but this is not the scenario I want.
You say your destination directory is on another partition, so another filesystem may be in use on that partition.
And if you look at the javadoc of the renameTo method, it says:
Many aspects of the behavior of this method are inherently platform-dependent: The rename operation might not be able to move a file from one filesystem to another, it might not be atomic, and it might not succeed if a file with the destination abstract pathname already exists. The return value should always be checked to make sure that the rename operation was successful.
...
@return true if and only if the renaming succeeded; false otherwise
Thus I think the renameTo method is not able to move the file (I don't know exactly why), but you can rewrite your code like this:
mpr.multiFileMap.file.each { fileData ->
    CommonsMultipartFile file = fileData
    File uploadedFile = new File("${directory}${generatedFileName}.${extension}")
    // String originalFilename = file.getOriginalFilename()
    // you can store originalFilename in the database, for example
    if (!uploadedFile.getParentFile().exists()) {
        uploadedFile.getParentFile().mkdirs()
        // You can set permissions on the target directory if desired, using PosixFilePermission
    }
    file.transferTo(uploadedFile)
}
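If you do need a rename-style move elsewhere, java.nio.file.Files.move is a more robust alternative to File.renameTo: where renameTo silently returns false when source and target are on different filesystems, Files.move falls back to copying and deleting. A minimal, self-contained Java sketch (the file names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SafeMove {
    // Unlike File.renameTo, Files.move either succeeds (copying and deleting
    // when the target is on a different filesystem) or throws an IOException,
    // so failures can never pass unnoticed.
    static void moveFile(Path source, Path target) throws IOException {
        Files.createDirectories(target.getParent());
        Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("upload", ".tmp");
        Files.write(src, "image bytes".getBytes());
        Path dest = Files.createTempDirectory("dest").resolve("image.jpg");
        moveFile(src, dest);
        System.out.println(Files.exists(dest) && !Files.exists(src));
    }
}
```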

How to use an input file that is already in the package

In my Hadoop MapReduce application I have one input file. I want the input file to be picked up automatically when I execute the jar of my application. To do this I wrote a class that specifies the input, the output, and the file itself, but where I reference the file I need to specify its path. For that I used this code:
QueriesTest.class.getResourceAsStream("/src/main/resources/test")
but it is not working (it cannot read the input file from the generated jar),
so I used this one:
URL url = this.getClass().getResource("/src/main/resources/test")
Here I am getting a problem with the URL. Please help me out. I am using Hadoop 0.21.
I'm not sure what you want to tell us with your resource loading, but the usual way to add an input file is this:
Configuration conf = new Configuration();
Job job = new Job(conf);
Path in = new Path("YOUR_PATH_IN_HDFS");
FileInputFormat.addInputPath(job, in);
job.setInputFormatClass(TextInputFormat.class); // could be a sequencefile also
// set the other stuff
job.waitForCompletion(true);
Make sure your file resides in HDFS then.
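As for the resource-loading part of the question: getResourceAsStream takes a classpath name, not a source-tree path. A file at src/main/resources/test is packaged at the root of the jar, so it must be loaded as "/test", never "/src/main/resources/test". A sketch of a helper (the class and method names are mine) that copies such a bundled resource to a temp file, from where it could then be uploaded to HDFS:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class ResourceExtractor {
    // Classpath resource names are relative to the jar root: a file at
    // src/main/resources/test is loaded as "/test".
    static Path extractResource(String name) throws IOException {
        try (InputStream in = ResourceExtractor.class.getResourceAsStream(name)) {
            if (in == null)
                throw new IOException("resource not found on classpath: " + name);
            Path tmp = Files.createTempFile("input", ".tmp");
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            return tmp;
        }
    }

    public static void main(String[] args) throws IOException {
        // Demonstration with a resource present on any JVM's classpath:
        Path copy = extractResource("/java/lang/Object.class");
        System.out.println(Files.size(copy) > 0);
    }
}
```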