Inside the target directory of my project, there is a file called stacktrace.log. I have realised that the size of the file has grown to more than 3 gigabytes. Is it safe for me to delete this file? Would it cause any file not found exceptions after deleting it? Thanks for your time.
---edit
If it does cause a file not found exception, how can I resolve the issue?
stacktrace.log is the default location where Grails writes the unfiltered stack traces of exceptions thrown by the app (for normal logging it filters out stack frames that are "internal" Groovy mechanics, but sometimes this filtering can be too aggressive, so it's handy to have the full traces available). You can safely delete it, and it will be recreated empty the next time the app starts.
You can control this in the log4j DSL in Config.groovy, under the appenders block. The default behaviour is equivalent to an appender definition of
file name:'stacktrace', file:'stacktrace.log'
in prod mode, and file:'target/stacktrace.log' in dev mode. You could replace it with, e.g.,
rollingFile name:'stacktrace', file:'stacktrace.log',
maxFileSize:'5MB', maxBackupIndex:2
to limit it to 15MB (the active file plus up to two rolled-over backups).
I guess that is just a log file. If you delete it, the later log entries will be missing, although it depends on which logging framework you are using.
I suggest you use logback to roll the log files. Rolling keeps the log from growing too fat.
Related
I have a Serilog implementation on a C# application. While debugging in our development environment we like to have the log file open in a text editor, like Notepad++.
As I debug and test I like to clean the log file as I go by deleting all the text and saving the file - while the application is still running.
When the application then next writes a log entry, the log is filled with 'NUL' characters.
It is as though Serilog wants to continue appending to the log at the last known index point.
I just want it to write cleanly at the start of the log file again.
Is there a way to do this?
I don't recall having this outcome when using Log4Net.
Change a config.properties file in a jar/war file at runtime and hot-deploy the changes?
My requirement is as follows: we have a config.properties file in a jar/war file. I have to open the file through a web page, and after the user has made the necessary changes to it, I have to update the config.properties in the jar/war file and hot-deploy it. Can we achieve this feat? If so, can you please point me to relevant sites/documents so that I can jump-start on this?
I strongly recommend that your architect rethink this solution. What you describe should be done through JNDI or a similar technique, not through reloading properties.
Deployments should be considered static: the fact that a given web container allows for magic trickery should not be depended on, and it WILL break some day (most likely at the most inconvenient time).
You've got a couple of problems off the top of my head:
ensuring that nothing is holding static references to a java.util.Properties that has previously loaded your config.properties file.
most servlet engines will unpack your war to a working directory, so the properties file you load won't be the one in the war; it will be the unpacked one. This means your changes will be overwritten when you restart the servlet engine, because this is typically one of the points at which the war is unpacked.
While these problems aren't insurmountable I've always found it much easier to implement this sort of behavior by storing the properties in JNDI (as Thorbjørn suggests) or a database (while being careful about the static references I mentioned in point 1).
The JNDI/database solution has the nice side effect of easing deployment into multiple environments, because each typically has its own registry/database.
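For reference, here is a minimal sketch of the JNDI approach; the entry name config/someSetting is hypothetical, and the env-entry itself would be declared in web.xml or the container's context configuration:

import javax.naming.InitialContext;
import javax.naming.NamingException;

public class ConfigLookup {
    public static String getSetting() throws NamingException {
        // Look up a value the container has bound under java:comp/env.
        // "config/someSetting" is a placeholder entry name.
        InitialContext ctx = new InitialContext();
        return (String) ctx.lookup("java:comp/env/config/someSetting");
    }
}

Because the value lives in the container's registry rather than inside the war, it can differ per environment without touching the deployment artifact.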
Even though I agree with the comments explained before, I can suggest one solution:
The Apache Commons Configuration library gives you the possibility to do something like:
config.setReloadingStrategy(new FileChangedReloadingStrategy());
That can do the trick of changing the configuration file at runtime with almost no extra code.
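A fuller sketch, assuming the Commons Configuration 1.x API (the file name and key are placeholders):

import org.apache.commons.configuration.ConfigurationException;
import org.apache.commons.configuration.PropertiesConfiguration;
import org.apache.commons.configuration.reloading.FileChangedReloadingStrategy;

public class ReloadableConfig {
    public static void main(String[] args) throws ConfigurationException {
        PropertiesConfiguration config = new PropertiesConfiguration("config.properties");
        // Check the file's timestamp on access and reload it when it changes.
        config.setReloadingStrategy(new FileChangedReloadingStrategy());
        // Reads reflect the latest file contents after a reload.
        System.out.println(config.getString("some.key"));
    }
}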
However, as with JNDI and other methods of web application configuration, security is a concern. Be careful about which parameters you can/must be able to configure.
I'm looking for ways to configure/reconfigure log4j after it may, or may not have been initialized. This should work running standalone or in a web container.
The configuration may be represented by a configuration file at a particular arbitrary URI. The knowledge of the URI comes from the application, not log4j framework. The configuration may also be done programmatically (less problematic but problematic still).
The public API is unfortunately sorely lacking so developers are forced to write brittle code using implementation classes from log4j core. From weeks of studying documentation and stepping through log4j code I see two ways to accomplish reconfiguration:
Stopping the current context and re-initializing using org.apache.logging.log4j.core.config.Configurator,
similar to the following:
((LoggerContext) LogManager.getContext(false)).stop();
Configurator.initialize(buildDefaultConfiguration()); //programmatically building a configuration
or
((LoggerContext) LogManager.getContext(false)).stop();
Configurator.initialize(null, ConfigurationSource.fromUri(loggingUri)); //passing the configuration source constructed from a known URI
The first line in both examples will stop the current context if it has already been created and started (when running in a web container for example). If log4j has not been initialized (when running as a standalone app for example) it will initialize log4j with the default configuration and start the context first (as a side effect of calling getContext()), and then stop it.
If the current context is not stopped first the call to Configurator.initialize() will have no effect. log4j will ignore your attempt to re-initialize, will not give you any indication of it, and just simply return the current context. This behavior is not mentioned in the "Reconfigure Log4j Using ConfigurationBuilder with the Configurator" section of the Manual. It simply says: "However, should any logging be attempted before Configurator.initialize() is called then the default configuration will be used for those log events." The default configuration will also be used for all subsequent log events in the provided examples because calling Configurator.initialize() will have no effect, unless the current context is stopped first.
Setting a new configuration location on the existing context, thus forcing reconfiguration,
similar to the following:
((LoggerContext) LogManager.getContext(false)).setConfigLocation(loggingUri);
This works in a similar fashion: if log4j hasn't been initialized the call to getContext() will trigger initialization and creation of the default context that will then be reconfigured; and if it has been initialized then the current context, whatever it may have been, will get reconfigured. The configuration source will be created from the URI by the log4j framework.
The difference is that in the first way the context is replaced and all loggers in the old (stopped) context will be dead. If any code on the stack holds references to these dead loggers and tries to log to them it will be a no-op. In the second way the context is kept but the configuration is replaced and existing loggers are updated with the new configuration.
Both methods use core code and are therefore brittle, but both work for the sunny day scenario (using log4j-core 2.10.0 anyway). However, neither one appears to afford the user any control over handling any exceptional events, or even inform the user that something went wrong. Log4j will "eat up" any exceptions, and make its own executive decision how to handle them.
If a problem occurs after Configurator.initialize() is called, log4j will create a default configuration that effectively cuts off all logging other than errors to the console, and happily return the new context, not giving the calling code any clue that logging has effectively been stopped.
If a problem occurs after LoggerContext.setConfigLocation() is called, log4j will do essentially the same thing, except the current context will be kept. One would think that, particularly in the case of a reconfiguration failure, the most typical handling would be to revert to the old configuration. There doesn't appear to be a way to accomplish this, because log4j will simply stop logging (other than errors to the console) and give the calling code no indication of the failure.
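One brittle workaround sketch, assuming (as observed above with log4j-core 2.10.0) that a failed reconfiguration leaves a DefaultConfiguration installed, is to inspect the context's configuration after the call; this is a heuristic against implementation classes, not a supported API:

import java.net.URI;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.DefaultConfiguration;

public final class ReconfigCheck {
    public static void reconfigure(URI loggingUri) {
        LoggerContext ctx = (LoggerContext) LogManager.getContext(false);
        ctx.setConfigLocation(loggingUri);
        // Assumption: on a parse failure log4j silently falls back to the
        // default configuration, so its presence suggests the reload failed.
        if (ctx.getConfiguration() instanceof DefaultConfiguration) {
            throw new IllegalStateException(
                    "Reconfiguration from " + loggingUri + " appears to have failed");
        }
    }
}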
Here's a typical scenario: several applications extend a common framework. The framework configures the same logging for all extending applications (to facilitate reuse and simplify post-processing of the logs). Some application has a unique logging need and attempts to reconfigure log4j from its own metadata (config file at a known URI). The XML parser throws an exception parsing this file. The exception gets handled internally by log4j, logging is quietly stopped, and no one knows. Well, there is an error log sent to the StatusLogger with the exception, but the calling code doesn't know.
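The closest thing to a notification the calling code can get is to register its own StatusListener with the StatusLogger before reconfiguring; the listener interface is public, so a sketch like the following should capture those internal errors:

import java.io.IOException;
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.status.StatusData;
import org.apache.logging.log4j.status.StatusListener;
import org.apache.logging.log4j.status.StatusLogger;

public class ErrorCapturingListener implements StatusListener {
    private volatile StatusData lastError;

    @Override
    public void log(StatusData data) {
        // Remember errors log4j reports internally, e.g. a failed
        // attempt to parse a configuration file.
        if (data.getLevel().isMoreSpecificThan(Level.ERROR)) {
            lastError = data;
        }
    }

    @Override
    public Level getStatusLevel() {
        return Level.ERROR; // only receive ERROR and more severe events
    }

    @Override
    public void close() throws IOException {
        // nothing to release
    }

    public StatusData getLastError() {
        return lastError;
    }
}

Registering an instance via StatusLogger.getLogger().registerListener(...) before calling setConfigLocation(), then checking getLastError() afterwards, at least gives the caller a coarse signal that something went wrong.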
So, with this lengthy preamble, the question is: is there a mechanism I haven't discovered yet to modify the log4j configuration in a predictable fashion and be able to handle abnormal events should they arise? That is, other than something drastic like (for the example above) extending the XML configuration class and replacing the code that handles exceptions, thus running the risk of creating undesirable side effects in the current log4j implementation, to say nothing of the even greater risk of breaking in the future if the implementation changes.
Any help would be greatly appreciated!
I'm using Microsoft's Web Deploy Remote Agent service to allow me to easily publish code to the server from within Visual Studio.
The web site I am deploying is using log4net to log messages to log files, and every time I try to deploy a new version of the code, I get this error in Visual Studio stating that the current log4net log file is in use:
An error occurred when the request was processed on the remote
computer. The file 'Web.log' is in use.
The process cannot access 'C:\inetpub\wwwroot\Logs\Web.log' because it
is being used by another process.
I can solve this by going onto the server and doing an iisreset before publishing... but that is kind of defeating the point of 'easy' publishing from Visual Studio :)
Is there some way I can get the publish task to issue an iisreset automatically, or some other way I can work round this?
I kept poking around and found some tidbits around the file being locked in a few other forums. Have you tried adding
<lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
To your <appender> element in the web.config file? From the Apache docs:
Opens the file once for each AcquireLock/ReleaseLock cycle,
thus holding the lock for the minimal amount of time. This method of
locking is considerably slower than FileAppender.ExclusiveLock but
allows other processes to move/delete the log file whilst logging
continues.
As far as the performance considerations go, I suppose you would need to test whether this affects you or not, as I assume it really depends on how often you are writing to the log file. I can't believe that acquiring/releasing a lock could take all that much time, though.
There is an MSDEPLOY provider called recycleApp which is used exactly for this. You can include this in your deployment manifest.
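For example, a one-off invocation from the command line might look roughly like this (the site and server names are placeholders; check the Web Deploy documentation for the exact recycleApp syntax and recycleMode values):

msdeploy -verb:sync -source:recycleApp="Default Web Site/MyApp",recycleMode="RecycleAppPool" -dest:auto,computerName=MyServer

Recycling (or stopping and restarting) the application pool releases the handle log4net holds on the log file, so the sync can overwrite it.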
Another option is to use the ignoreOnErrors flag, which will skip the file in use and continue with the deployment.
Just wondering if anyone might know what's happening here. I have several schema.yml files, and when I try to build model classes using symfony propel:build-model I don't get any error message; however, instead of any classes being generated, I get XML files generated in the same config folder as the schema YML files. I.e., if I have a file named logger_schema.yml in the config directory, after I run build-model I will also have a generated-logger_schema.xml file in the config directory, and no generated classes.
Any idea what could be causing this?
The XML file in question is a worker file symfony/Propel creates as part of the class generation process - it's not an "error" as such.
symfony CLI tasks require quite a lot of PHP memory, especially on Windows. If the Propel task is failing, I would recommend permanently raising the memory_limit setting in php.ini to at least 256M. I know this seems high, but you should only ever need these tasks on a development machine. As you note, you saw evidence of memory exhaustion on another related task.
If that doesn't fix it, could you add to your question all of the CLI output when you run the task? It might shed some light on the step which is failing.
After looking at this ticket, it appears the XML files are likely the result of a symfony error, despite the fact that I repeatedly got no error message using propel:build-model. After trying propel:build --model --forms, I did in fact get a "memory exhausted" error, which was solved by temporarily increasing the PHP memory limit.
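For anyone hitting the same wall, the two ways to apply the fix are raising memory_limit in php.ini permanently, or bumping it per invocation on the CLI (256M is just the figure suggested above):

; in php.ini, for a permanent increase
memory_limit = 256M

# or per invocation, without editing php.ini
php -d memory_limit=256M symfony propel:build-model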