I have a Grails 2.4.4 project configured with the default ':cache:1.1.8' plugin. It also uses the default ':asset-pipeline:1.9.9' plugin.
When running the application, I'm seeing this DEBUG message in the logs:
DEBUG simple.MemoryPageFragmentCachingFilter - No cacheable annotation found for GET:/PROJECTNAME/grails/assets/index.dispatch [controller=assets, action=index]
How do I make this message go away? I don't mean by filtering the log file, I mean by putting a cacheable annotation for the asset pipeline controller, or something like that.
UPDATE: It turns out that I was getting dozens of those DEBUG log messages instead of just one, because of a flaw in sass-asset-pipeline:1.9.0.
I updated to sass-asset-pipeline:1.9.1, because they said they fixed some caching issues in 1.9.1 here:
https://github.com/bertramdev/sass-grails-asset-pipeline/issues/11
You don't want to. Caching responses and method calls should use very different logic from caching static resources.
Typically static resources change rarely and are cached forever, but use a unique name or some other mechanism so if you do change the CSS/JS/etc. file, you can get clients to use the new version.
But caching service method calls and controller responses is typically much more short-lived, since database updates often trigger cache invalidation and flushing to ensure that the correct data is used.
The asset-pipeline plugin and its add-on plugins have great support for smart caching, and you should manage that there rather than by misusing the cache plugin(s).
Related
Change a config.properties file in a jar/war file at runtime and hot-deploy the changes?
My requirement is as follows: we have a "config.properties" in a jar/war file. I have to open the file through a webpage, and after the user has made the necessary changes to it, I have to update the "config.properties" in the jar/war file and hot-deploy it. Can we achieve this? If so, can you please point me to relevant sites/documents so that I can jumpstart on this?
I strongly recommend your architect rethink this solution. What you describe should be done through JNDI or a similar technique, not through reloading properties.
Deployments should be considered static - the fact that any given web container allows for magic trickery should not be depended on, and WILL break some day (most likely at the most inconvenient time).
You've got a couple of problems off the top of my head:
ensuring that nothing is holding static references to a java.util.Properties that has previously loaded your config.properties file.
most servlet engines will unpack your war to a working directory, so the properties file you load won't be the one in the war, it will be the unpacked one. This means your changes will be overwritten when you restart the servlet engine, because this is typically one of the points at which the war is unpacked.
While these problems aren't insurmountable, I've always found it much easier to implement this sort of behavior by storing the properties in JNDI (as Thorbjørn suggests) or a database (while being careful about the static references I mentioned in point 1).
The JNDI/database solution has the nice side effect of easing deployment into multiple environments, because each typically has its own registry/database.
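To make the JNDI option concrete, here is a minimal sketch of reading a single value from the container's JNDI environment instead of from a properties file inside the war. The entry name "config/someSetting" is a made-up placeholder, and how that entry is defined (for example a container-specific Environment element) depends on your servlet engine:

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public final class JndiConfig {

    /** Looks up a configuration value from java:comp/env, e.g. "config/someSetting". */
    public static String lookup(String name) {
        try {
            Context env = (Context) new InitialContext().lookup("java:comp/env");
            return (String) env.lookup(name);
        } catch (NamingException e) {
            throw new IllegalStateException("No JNDI entry found for " + name, e);
        }
    }

    private JndiConfig() {
    }
}

Changing the value then becomes a container configuration change rather than a redeployment of the war.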
Even though I agree with the comments above, I can suggest one solution:
The Apache Commons Configuration library gives you the possibility to do something like:
config.setReloadingStrategy(new FileChangedReloadingStrategy());
That could do the trick, letting you change the configuration file at runtime with almost no extra code.
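For context, here is a minimal sketch of that approach with Commons Configuration 1.x; the file path and property key are placeholders, and for the reasons given in the previous answer (the war gets unpacked) the file should live outside the war:

import org.apache.commons.configuration.ConfigurationException;
import org.apache.commons.configuration.PropertiesConfiguration;
import org.apache.commons.configuration.reloading.FileChangedReloadingStrategy;

public class ReloadingConfig {
    public static void main(String[] args) throws ConfigurationException {
        // Load the properties file from an external path (not from inside the war).
        PropertiesConfiguration config =
                new PropertiesConfiguration("/opt/myapp/config.properties");

        // Re-read the file automatically when its timestamp changes.
        FileChangedReloadingStrategy strategy = new FileChangedReloadingStrategy();
        strategy.setRefreshDelay(5000); // check for changes at most every 5 seconds
        config.setReloadingStrategy(strategy);

        // Each read checks whether the file changed and reloads it if needed.
        String value = config.getString("some.key");
        System.out.println("some.key = " + value);
    }
}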
However, as with JNDI and other methods of web application configuration, security is a concern. Be careful about which parameters can (or must) be configurable.
Explanation
I'm having an issue with Workbox where my website doesn't update when a file's content is changed, unless I manually clear storage/site data in my browser.
Since the v4 release this year, cleanupOutdatedCaches, which is in my code, should take care of this, but the problem persists.
Example
I created this website to exemplify. Once you access it, Workbox will install the service worker, but if I change, for example, test1 to test2, you won't see the change, unless you clear the site data in your browser and refresh.
I also tried only unregistering the sw; it shows the updated version (test2), but when refreshing twice it goes back to the old version (test1).
You can see the website's code in GitHub here.
Thanks in advance,
Luiz.
cleanupOutdatedCaches will only clean caches created by older versions of the Workbox library. In this case, since you are using the same version of Workbox, the call to this method does nothing.
https://developers.google.com/web/tools/workbox/reference-docs/latest/workbox.precaching#.cleanupOutdatedCaches
Once a particular file is precached by Workbox, it will never attempt to retrieve that file from the network, unless the revision you have specified in the precacheAndRoute call is different from what was previously cached.
Since you changed index.html but not the revision in precacheAndRoute, Workbox assumes the file is unchanged. So, what you need to do is update the precacheAndRoute call with a new hash that corresponds to the new version of index.html.
You can achieve this either by using injectManifest:
https://developers.google.com/web/tools/workbox/modules/workbox-build
or any other build tooling you use.
Edit:
You can invoke skipWaiting programmatically as well
https://developers.google.com/web/tools/workbox/modules/workbox-core#skip_waiting_and_clients_claim
But you do need to use it with caution. Here is one way to do it:
https://developers.google.com/web/tools/workbox/guides/advanced-recipes
I'm looking for ways to configure/reconfigure log4j after it may, or may not have been initialized. This should work running standalone or in a web container.
The configuration may be represented by a configuration file at a particular arbitrary URI. The knowledge of the URI comes from the application, not log4j framework. The configuration may also be done programmatically (less problematic but problematic still).
The public API is unfortunately sorely lacking so developers are forced to write brittle code using implementation classes from log4j core. From weeks of studying documentation and stepping through log4j code I see two ways to accomplish reconfiguration:
Stopping the current context and re-initializing using log4j.core.config.Configurator,
similar to the following:
((LoggerContext) LogManager.getContext(false)).stop();
Configurator.initialize(buildDefaultConfiguration()); //programmatically building a configuration
or
((LoggerContext) LogManager.getContext(false)).stop();
Configurator.initialize(null, ConfigurationSource.fromUri(loggingUri)); //passing the configuration source constructed from a known URI
The first line in both examples will stop the current context if it has already been created and started (when running in a web container for example). If log4j has not been initialized (when running as a standalone app for example) it will initialize log4j with the default configuration and start the context first (as a side effect of calling getContext()), and then stop it.
If the current context is not stopped first the call to Configurator.initialize() will have no effect. log4j will ignore your attempt to re-initialize, will not give you any indication of it, and just simply return the current context. This behavior is not mentioned in the "Reconfigure Log4j Using ConfigurationBuilder with the Configurator" section of the Manual. It simply says: "However, should any logging be attempted before Configurator.initialize() is called then the default configuration will be used for those log events." The default configuration will also be used for all subsequent log events in the provided examples because calling Configurator.initialize() will have no effect, unless the current context is stopped first.
Setting a new configuration location on the existing context, thus forcing reconfiguration,
similar to the following:
((LoggerContext) LogManager.getContext(false)).setConfigLocation(loggingUri);
This works in a similar fashion: if log4j hasn't been initialized the call to getContext() will trigger initialization and creation of the default context that will then be reconfigured; and if it has been initialized then the current context, whatever it may have been, will get reconfigured. The configuration source will be created from the URI by the log4j framework.
The difference is that in the first way the context is replaced and all loggers in the old (stopped) context will be dead. If any code on the stack holds references to these dead loggers and tries to log to them it will be a no-op. In the second way the context is kept but the configuration is replaced and existing loggers are updated with the new configuration.
Both methods use core code and are therefore brittle, but both work for the sunny day scenario (using log4j-core 2.10.0 anyway). However, neither one appears to afford the user any control over handling any exceptional events, or even inform the user that something went wrong. Log4j will "eat up" any exceptions, and make its own executive decision how to handle them.
If a problem occurs after Configurator.initialize() is called, log4j will create a default configuration that effectively cuts off all logging other than errors to the console, and happily return the new context without giving the calling code any clue that logging has effectively been stopped.
If a problem occurs after LoggerContext.setConfigLocation() is called, log4j will do essentially the same thing, except the current context will be kept. One would think that, particularly in the case of a reconfiguration failure, the most typical handling would be to revert to the old configuration. There doesn't appear to be a way to accomplish this, because log4j will simply stop logging (other than errors to the console) and give the calling code no indication of the failure.
Here's a typical scenario: several applications extend a common framework. The framework configures the same logging for all extending applications (to facilitate reuse and simplify post-processing of the logs). Some application has a unique logging need and attempts to reconfigure log4j from its own metadata (config file at a known URI). The XML parser throws an exception parsing this file. The exception gets handled internally by log4j, logging is quietly stopped, and no one knows. Well, there is an error log sent to the StatusLogger with the exception, but the calling code doesn't know.
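The closest thing I have found to a workaround is to register my own listener with the StatusLogger around the reconfiguration call, so the calling code at least learns that something went wrong. A rough sketch (the class and method names are mine, not anything log4j provides, and this only detects the failure, it does not let me revert it):

import java.io.IOException;
import java.net.URI;
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.status.StatusData;
import org.apache.logging.log4j.status.StatusListener;
import org.apache.logging.log4j.status.StatusLogger;

public class GuardedReconfiguration {

    /** Reconfigures from the given URI and reports whether log4j logged internal errors. */
    public static boolean reconfigure(URI loggingUri) {
        final AtomicBoolean errorSeen = new AtomicBoolean(false);

        StatusListener listener = new StatusListener() {
            @Override
            public void log(StatusData data) {
                if (data.getLevel().isMoreSpecificThan(Level.ERROR)) {
                    errorSeen.set(true); // e.g. the XML parser failed on the new config
                }
            }

            @Override
            public Level getStatusLevel() {
                return Level.ERROR; // only interested in errors
            }

            @Override
            public void close() throws IOException {
                // nothing to release
            }
        };

        StatusLogger.getLogger().registerListener(listener);
        try {
            ((LoggerContext) LogManager.getContext(false)).setConfigLocation(loggingUri);
        } finally {
            StatusLogger.getLogger().removeListener(listener);
        }
        return !errorSeen.get();
    }
}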
So, with this lengthy preamble, the question is: is there a mechanism I haven't discovered yet to modify the log4j configuration in a predictable fashion and be able to handle abnormal events should they arise? That is, other than something drastic like (for the example above) extending the XML configuration class and replacing the code that handles exceptions, thus running the risk of creating undesirable side effects in the current log4j implementation, to say nothing of the even greater risk of breaking in the future if the implementation changes.
Any help would be greatly appreciated!
I recently learned how to register custom grails artifacts (I need it for dynamic controllers in my application) using grailsApplication.addArtefact(java.lang.String artefactType, GrailsClass artefactGrailsClass) and it works fine, but now I realized that I also want to be able to unregister them.
Unfortunately, interface GrailsApplication provides no clear way to do so and it seems that unregistering unwanted dynamically registered grails artifacts can only be done by restarting the whole application.
Maybe I'm missing something and an artifact can be removed from an application without having to restart the app?
Thank you
You can always rebuild the GrailsApplication. That throws away all artefacts and loads the default ones. That obviously means you'd have to addArtefact the artefacts you want to keep again.
The other option is to access the loadedClasses set (which is protected), make the changes manually, and then call populateAllClasses() to make them available (this method is protected too).
I'm investigating Grails vs. other Agile web frameworks, and one key use case I'm trying to support is the ability to modify controllers and install plugins post deployment. It appears that this isn't possible with Grails, but I want to make sure before I write it off.
As far as modifying controllers goes, it would be sufficient if the Groovlet behavior existed (compile-on-demand).
As far as plugin installs go, I understand this may be a long shot, but I thought I'd check to be sure.
For your information, I need this because I work on a product that requires a little site-specific customization, such as adding validation of simple meta-data, integrating with customer security environments, and maybe even including new controllers/pages quickly.
Out of the box, no, Grails doesn't really support what you want. There may be ways to customize it, but I've never looked into it. A PHP framework might be more of your ally, since there is no real deployment process other than copying PHP files to a location.
That said, I personally would prefer a strict set of deployment policies. And honestly, deploying changes with Grails is as simple as running the 'grails war' command and copying that war to your servlet container. The site's downtime is negligible and if you have multiple web servers with a load-balancer, your customers should never see down time due to deployments.
Although it's not recommended for complex coding, you could execute Groovy code from a string that you store in a database or a file, on the fly at runtime:
Check out the Groovy template engine:
http://groovy.codehaus.org/Groovy+Templates
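As a rough illustration of both ideas (rendering a template and evaluating a script whose text could be loaded from a database or file at runtime), here is a minimal sketch; it is plain Java calling Groovy's classes, and the template text, variable names, and script body are made-up placeholders:

import groovy.lang.Binding;
import groovy.lang.GroovyShell;
import groovy.text.SimpleTemplateEngine;
import groovy.text.Template;

import java.util.HashMap;
import java.util.Map;

public class DynamicGroovyDemo {
    public static void main(String[] args) throws Exception {
        // 1. Render a template whose source could come from a database or file.
        String templateText = "Hello ${customer}, your limit is ${limit}.";
        Map<String, Object> binding = new HashMap<>();
        binding.put("customer", "Acme");
        binding.put("limit", 42);

        Template template = new SimpleTemplateEngine().createTemplate(templateText);
        System.out.println(template.make(binding)); // Hello Acme, your limit is 42.

        // 2. Evaluate an arbitrary Groovy snippet loaded at runtime.
        Binding shellBinding = new Binding();
        shellBinding.setVariable("input", 10);
        GroovyShell shell = new GroovyShell(shellBinding);
        Object result = shell.evaluate("input * 2 + 1"); // script text is a placeholder
        System.out.println(result); // 21
    }
}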
But even then, you are still limited in what you can and can't do, and debugging support will be lacking. You may want to consider an interpreted language instead; PHP, Perl, and ColdFusion are a few options.