Does Grails surround log statements with isSomethingEnabled()?

As per the Groovy 1.8 release notes (http://docs.codehaus.org/display/GROOVY/Groovy+1.8+release+notes#Groovy18releasenotes-Log), Groovy surrounds log statements with checks like isDebugEnabled() etc.
Does grails do that for the generated code?
For this log call in a grails service:
log.debug("competitors errors stage 1: ${failedCarrierRequests}")
In the decompiled .class file I see only this:
arrayOfCallSite[85].call(log, new GStringImpl(new Object[] { allCompetitorDepartmentsRows.get() }, new String[] { "All competitors: ", "" }));
It is unclear whether there is a check for log level behind the scenes or not.

As of 2.2.2: no.
Grails injects an Apache Commons Logging Log field into the artefact classes, and the log4j plugin marries that to a log4j Logger.
However, in your example you pass a GString as the only parameter. Since a GString is only converted to a java.lang.String when toString() is called, the log4j logger would hit its own internal debug-enabled check first and skip the toString() call.
If you do something expensive like parameter building, however, and you're concerned about the wasted cycles, you must call isDebugEnabled() yourself:
if (log.isDebugEnabled()) {
    log.debug("Some string concatenation with a slow method: " + slowMethod())
}
I should point out that this contrived example could be converted to use a GString to avoid the explicit debug check:
log.debug "GString with lazily initialized slow method call: ${slowMethod()}"
There was some discussion a while back on the grails-user mailing list about adding an AST transformation to add the checks, but it didn't go anywhere.

The accepted response, that GStrings are lazily evaluated, doesn't seem to hold up against my testing.
I have created a test that logs something four different ways, and the GString in a logging statement does appear to be evaluated regardless of log level.
Here is my test file as a GIST: https://gist.github.com/tgsoverly/34f9a56287291297777b
The test fails for the GString case: the method IS called when it appears in the log statement.
That also wouldn't match what the Groovy documentation says should happen, according to its Strings and GString section: http://groovy.codehaus.org/Strings+and+GString
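For what it's worth, this matches how plain GStrings behave: embedded expressions are evaluated as soon as the GString is created, and only the closure form defers the call until toString(). A minimal standalone sketch (slowMethod() is a stand-in for the expensive call):

def slowMethod() {
    println 'slowMethod ran'
    return 'result'
}

// Plain GString: slowMethod() runs immediately, before any log-level check.
def eager = "value: ${slowMethod()}"

// Closure form: slowMethod() runs only when toString() is eventually called.
def lazy = "value: ${ -> slowMethod() }"
println lazy   // slowMethod() runs here, at toString() time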

Extract info from a Groovy DSL file?

I recently switched my logback configuration file from logback.xml to logback.groovy. Using a DSL with Groovy is more versatile than XML for this sort of thing.
I need to analyse this file programmatically, just as I analysed the previous XML file (with any of innumerable XML parsing tools). I realise that this will be imperfect: a DSL config file sits on top of the object it configures and must be executed, so its results are inevitably dynamic, whereas an XML file is static.
If you want to include one Groovy file in another file, there are solutions; this one worked for me.
But I'm struggling to find what I need from the results.
If I put a function like this in the DSL file ...
def greet(){
    println "hello world"
}
... not only can I execute it (config.greet() as below), but I can also see it listed when I go
GroovyShell shell = new GroovyShell()
def config = shell.parse( logfileConfigPath.toFile() )
println "config.class.properties ${config.class.properties}"
But if I put a line like this in the DSL file...
def MY_CONSTANT = "XXX"
... I have no idea how to find it and get its value (it is absent from the confusing and copious output from config.class.properties).
PS printing out config.properties just gives this:
[class:class logback, binding:groovy.lang.Binding@564fa2b]
... and yes, I did look at config.binding.properties: there was nothing.
Further thought
My question is, more broadly, about what tools (if any) are available for analysing Groovy DSL configuration files. Given that such a file is pretty meaningless without the underlying object it configures (an object implementing org.gradle.api.Project in the case of Gradle; I don't know what class it may be in the case of logback), you would think there would need to be instrumentation to hitch up such an object and then observe the effects of the config file in a controlled, observable way. If Groovy DSL config files are to be as versatile as their XML counterparts, surely you need something along those lines? NB I have a suspicion that org.gradle.tooling.model.GradleProject or org.gradle.tooling.model.ProjectModel might serve that purpose. Unfortunately, at the current time I am unable to get GradleConnector working, as detailed here.
I presume there is nothing of this kind for logback, and at the moment I have no knowledge of its DSL or configurable object, or the latter's class or interface...
The use of def creates a local variable in the execution of the script that is not available in the binding of the script; see this. Even dropping def will not expose MY_CONSTANT in the binding because parsing the script via GroovyShell.parse() does not interpret/execute the code.
To expose MY_CONSTANT in config's binding, change def MY_CONSTANT = "XXX" to MY_CONSTANT = "XXX" and execute the config script via config.run().
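A minimal standalone sketch of that fix (the file name logback.groovy stands in for logfileConfigPath.toFile()):

// logback.groovy contains the line:  MY_CONSTANT = "XXX"   (note: no 'def')
GroovyShell shell = new GroovyShell()
def config = shell.parse(new File('logback.groovy'))

config.run()   // parse() alone is not enough; run() executes the assignments

assert config.binding.variables.MY_CONSTANT == 'XXX'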

Dataflow/Beam Templates, Productionization, Initialization, and ValueProviders

I have an Apache Beam job running on Google Cloud Dataflow, and as part of its initialization it needs to run some basic sanity/availability checks on services, pub/sub subscriptions, GCS blobs, etc. It's a streaming pipeline intended to run ad infinitum that processes hundreds of thousands of pub/sub messages.
Currently it needs a whole heap of required, variable parameters: which Google Cloud project it needs to run in, which bucket and directory prefix it's going to be storing files in, which pub/sub subscriptions it needs to read from, and so on. It does some work with these parameters before pipeline.run is called - validation, string splitting, and the like. In its current form, in order to start a job we've been passing these parameters to a PipelineOptionsFactory and issuing a new compile every single time, but it seems like there should be a better way. I've set up the parameters to be ValueProvider objects, but because they're being called outside of pipeline.run, Maven complains at compile time that ValueProvider.get() is being called outside of a runtime context (which, yes, it is.)
I've tried using NestedValueProviders as in the Google "Creating Templates" document, but my IDE complains if I try to use NestedValueProvider.of to return a string as shown in the document. The only way I've been able to get NestedValueProviders to compile is as follows:
NestedValueProvider<String, String> pid = NestedValueProvider.of(
    pipelineOptions.getDataflowProjectId(),
    (SerializableFunction<String, String>) s -> s
);
(String pid = NestedValueProvider.of(...) results in the following error: "incompatible types: no instance(s) of type variable(s) T,X exist so that org.apache.beam.sdk.options.ValueProvider.NestedValueProvider conforms to java.lang.String")
I have the following in my pipelineOptions:
ValueProvider<String> getDataflowProjectId();
void setDataflowProjectId(ValueProvider<String> value);
Because of the volume of messages we're going to be processing, adding these checks at the front of the pipeline for every message that comes through isn't really practical; we'll hit daily account administrative limits on some of these calls pretty quickly.
Are templates the right approach for what I want to do? How do I go about actually productionizing this? Should (can?) I compile with maven into a jar, then just run the jar on a local dev/qa/prod box with my parameters and just not bother with ValueProviders at all? Or is it possible to provide a default to a ValueProvider and override it as part of the options passed to the template?
Any advice on how to proceed would be most appreciated. Thanks!
The way templates are currently implemented, there is no point at which you can perform "post-template-creation" but "pre-pipeline-start" initialization/validation.
All of the existing validation executes during template creation. If the validation detects that the values aren't available (because they are ValueProviders), the validation is skipped.
In some cases it is possible to approximate validation by adding runtime checks, either as part of the initial splitting of a custom source or in the @Setup method of a DoFn. In the latter case, the @Setup method will run once for each instance of the DoFn that is created. If the pipeline is a batch pipeline, the pipeline will fail after 4 failures for a specific instance.
Another option for productionizing pipelines is to build the JAR that runs the pipeline, and have a production process that runs that JAR to initiate the pipeline.
Regarding the compile error you received: NestedValueProvider.of returns a ValueProvider, not a String, so it isn't possible to assign its result to a String. You could, however, put the validation code into the SerializableFunction that is run within the NestedValueProvider.
Although I believe this will currently re-run the validation every time the value is accessed, it wouldn't be unreasonable to have the NestedValueProvider cache the translated value.
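A sketch of that idea (in Groovy, to match the rest of this page; pipelineOptions and getDataflowProjectId() come from the question, and the emptiness check is illustrative):

import org.apache.beam.sdk.options.ValueProvider
import org.apache.beam.sdk.options.ValueProvider.NestedValueProvider
import org.apache.beam.sdk.transforms.SerializableFunction

// Wrap the option so validation runs inside the translation function,
// i.e. at runtime, once the template's actual value is available.
ValueProvider<String> projectId = NestedValueProvider.of(
    pipelineOptions.getDataflowProjectId(),
    { String id ->
        if (!id) {
            throw new IllegalArgumentException('dataflowProjectId must be set')
        }
        return id
    } as SerializableFunction<String, String>)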

Unit testing grails ConfigSlurper behavior

I'd like to write tests that would test behavior of externalized configs and assert that what gets set is what I expect. This is for the specific case where something like this is done:
Config.groovy:
a.reused.value = 'orig'
my.variable = '${a.reused.value}'
Externalized groovy file:
a.reused.value = 'new_value'
I expect that both a.reused.value and my.variable would be 'new_value'.
Now, I think I could have my unit test read in strings representing these config files (I do similar things in other unit tests to populate Holders.grailsApplication.config, for example), perhaps utilizing merge?
But what I cannot figure out is how to get the value that Grails actually gets during application run time. Instead, I get "${a.reused.value}" in my unit tests.
Is there a way to mimic this behavior of what Grails does of actually resolving this value? I did some digging around in Grails 2.4.4 source (which is what we are using) and didn't have any luck in figuring this part out. I also did try Eval.me(), but that doesn't seem to be quite right either.
While setting my.variable you are not using a GString, so the expression is treated as a literal value. Use double quotes so the expression is resolved automatically:
a.reused.value = 'orig'
my.variable = "${a.reused.value}"
Update 1:
What you want to do is not directly possible. You are assigning the value to a variable from an expression; when the config object is evaluated for the first time, my.variable is assigned a value, and from then on it no longer contains an expression. So you have two options: 1) reassign the second variable in the external config as well, or 2) use a closure to assign the value to the second variable:
my.variable = { -> "$a.reused.value" }
and while accessing it, do: grailsApplication.config.my.variable.call()
But again, in your code you would have to be sure that this variable contains a closure, not a value itself.
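To make the "evaluated once" point concrete, here is a standalone sketch with ConfigSlurper (names taken from the question; no Grails runtime involved):

def slurper = new ConfigSlurper()

def base = slurper.parse('''
    a.reused.value = 'orig'
    my.variable = "${a.reused.value}"
''')
def external = slurper.parse("a.reused.value = 'new_value'")

def merged = base.merge(external)

assert merged.a.reused.value == 'new_value'
// my.variable was resolved when the base config was parsed, so the
// merge does not re-evaluate it:
assert merged.my.variable == 'orig'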

Add logging to an external pluggable script

As described in Can't call one closure from another, I am using a pluggable script from within a Grails app.
Unfortunately, I've found that I can't use log4j from within these scripts. I am forced to use println.
I tried using
import org.apache.commons.logging.Log
import org.apache.commons.logging.LogFactory

Log log = LogFactory.getLog(getClass())
but I got no output. When I print out the result of the call to getClass(), I get something like
myscript$_run_closure5
So I'm thinking the issue is that there is no configuration in my Grails Config.groovy file for this class.
Is there a way for me to programmatically add these pluggable scripts to the log4j configuration? Keep in mind that I do not know in advance what the names of the scripts are, so this has to happen at runtime.
Consider the following code:
import org.apache.log4j.Logger
// ...
Logger log = Logger.getLogger('MyPlugin')
new File( grailsApplication.config.externalFiles ).eachFile { file ->
    Binding binding = new Binding()
    binding.variables.log = log
    GroovyShell shell = new GroovyShell(binding)
    shell.evaluate(file)
    strategies.put( binding.variables.key, binding.variables )
}
Explanation:
You don't have to pass a class name to getLogger; it can actually be any string. You just need to make sure that this string matches a logger configured in the main program's log4j.properties.
You create the log once and pass it to the plugin scripts via the binding variable "log". The plugin scripts can then use it simply as log.info('test123').
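For completeness, the script side of this arrangement might look like the following (the key and run names are purely illustrative, chosen to match the strategies map above):

// myscript.groovy -- a pluggable script evaluated by the GroovyShell above.
// 'log' is not declared here; it arrives through the shell's binding.
key = 'upperCaser'   // exposed back to the host via binding.variables.key

run = { String input ->
    log.info("processing ${input}")   // uses the injected log4j Logger
    input.toUpperCase()
}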
My personal recommendation would be to use logback instead of log4j. Both libraries were developed by the same author, and logback is positioned as the successor to log4j.

Groovy, STS, and debug info, information, or symbols

I'm trying to include the debug information or symbols in my Groovy code so that I can use the Spring Security annotations with SpEL to access an annotated method's arguments by name. For example:
@PreAuthorize("hasPermission(#id, 'View')")
public void doSomething(Integer id)
{
    ....
}
Everything works fine when I use the STS 'run-test' command, which uses the Groovy RunTest script. By that I mean I can access a method's argument by name. However, whenever I try to use the 'run-app' command, the debug information is not included.
I looked at the RunTest script and the script explicitly calls the Java Compiler with the debug option set to true.
How can I enable debug information for my development and production environments? Do I need to modify the Groovy script to call the Java compiler on the Groovy code, or is there an easier way?
I never found an elegant solution to this. Instead I just used filters, since the parameters being passed to my methods were extracted from the URL by Grails anyway.
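For the record, that filters workaround might look like this in a Grails 2.x app (SecurityFilters, the controller/action names, and permissionService are all illustrative; none of them come from the original answer):

// grails-app/conf/SecurityFilters.groovy
class SecurityFilters {

    def permissionService   // hypothetical service that does the actual check

    def filters = {
        viewCheck(controller: 'competitor', action: 'show') {
            before = {
                // params.int('id') reads the id straight from the URL, so no
                // method-argument names (and hence no debug symbols) are needed.
                if (!permissionService.hasPermission(params.int('id'), 'View')) {
                    render(status: 403, text: 'Forbidden')
                    return false
                }
                return true
            }
        }
    }
}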
