In my Grails app I've installed the Quartz plugin. I want to intercept calls to every Quartz job class' execute method in order to do something before the execute method is invoked (similar to AOP before advice).
Currently, I'm trying to do this interception from the doWithDynamicMethods closure of another plugin as shown below:
def doWithDynamicMethods = { ctx ->
// get all the job classes
application.getArtefacts("Job").each { klass ->
MetaClass jobMetaClass = klass.clazz.metaClass
// intercept the methods of the job classes
jobMetaClass.invokeMethod = { String name, Object args ->
// do something before invoking the called method
if (name == "execute") {
println "this should happen before execute()"
}
// now call the method that was originally invoked
def validMethod = jobMetaClass.getMetaMethod(name, args)
if (validMethod != null) {
validMethod.invoke(delegate, args)
} else {
jobMetaClass.invokeMissingMethod(delegate, name, args)
}
}
}
}
So, given a job such as
class TestJob {
static triggers = {
simple repeatInterval: 5000l // repeat every 5 seconds
}
def execute() {
"execute called"
}
}
It should print:
this should happen before execute()
execute called
But my attempt at method interception seems to have no effect and instead it just prints:
execute called
Perhaps the cause of the problem is this Groovy bug? Even though the Job classes don't explicitly implement the org.quartz.Job interface, I suspect that implicitly (due to some Groovy voodoo), they are instances of this interface.
If indeed this bug is the cause of my problem, is there another way that I can do "before method interception"?
Because all the job classes are Spring beans, you can solve this problem using Spring AOP. Define an aspect such as the following (adjust the pointcut definition so that it matches only your job classes; I've assumed they are all in a package named org.example.job and have a class name that ends with Job).
@Aspect
class JobExecutionAspect {
@Pointcut("execution(public * org.example.job.*Job.execute(..))")
public void executeMethods() {}
@Around("executeMethods()")
def interceptJobExecuteMethod(ProceedingJoinPoint jp) {
// do your stuff that should happen before execute() here, if you need access
// to the job object call jp.getTarget()
// now call the job's execute() method
jp.proceed()
}
}
You'll need to register this aspect as a Spring bean (it doesn't matter what name you give the bean).
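For example, a minimal resources.groovy sketch (the bean name jobExecutionAspect is arbitrary, and the aspectj-autoproxy element is an assumption; depending on your Grails/Spring setup @Aspect processing may already be enabled):
beans = {
    xmlns aop: "http://www.springframework.org/schema/aop"
    // enable @Aspect processing for annotated beans
    aop.'aspectj-autoproxy'('proxy-target-class': true)
    // register the aspect itself; the bean name is arbitrary
    jobExecutionAspect(JobExecutionAspect)
}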
You can register a customized JobListener in the application to handle logic before execute() is triggered. You can use something like:
public class MyJobListener implements JobListener {
public void jobToBeExecuted(JobExecutionContext context) {
println "Before calling Execute"
}
public void jobWasExecuted(JobExecutionContext context,
JobExecutionException jobException) {}
public void jobExecutionVetoed(JobExecutionContext context) {}
}
Register the customized JobListener with the Quartz scheduler in BootStrap:
import static org.quartz.impl.matchers.EverythingMatcher.allJobs

Scheduler scheduler = ctx.getBean("quartzScheduler") // ctx being the application context
scheduler.getListenerManager().addJobListener(myJobListener, allJobs())
resources.groovy:
beans = {
myJobListener(MyJobListener)
}
One benefit of this approach is that we no longer need the second plugin used for method interception.
Second, we can register the listener for all jobs, for specific jobs, or for jobs in a group, as sketched below. Refer to Customize Quartz JobListener and the API docs for JobListener, TriggerListener, and SchedulerListener for more insight.
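A rough BootStrap sketch of the matcher-based registration (the job name and group here are placeholders; the group your jobs actually end up in depends on the plugin's configuration):
import static org.quartz.JobKey.jobKey
import static org.quartz.impl.matchers.KeyMatcher.keyEquals
import static org.quartz.impl.matchers.GroupMatcher.jobGroupEquals

def listenerManager = scheduler.getListenerManager()
// listen to a single, specific job
listenerManager.addJobListener(myJobListener, keyEquals(jobKey("TestJob", "myGroup")))
// or listen to every job in a group
listenerManager.addJobListener(myJobListener, jobGroupEquals("myGroup"))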
Obviously, AOP is another approach if we don't want to use the Quartz API.
You are not getting the job classes that way. If you look at the Quartz plugin, you can get them by calling jobClasses:
application.jobClasses.each {GrailsJobClass tc -> ... }
see https://github.com/nebolsin/grails-quartz/blob/master/QuartzGrailsPlugin.groovy
If you actually look, you can see that they are almost doing what you are trying to achieve without the need to use AOP or anything else.
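As a rough, untested sketch (this assumes your jobs declare a no-argument execute(); whether the plugin dispatches execute() through the metaClass depends on the plugin version):
def doWithDynamicMethods = { ctx ->
    application.jobClasses.each { jobClass ->
        def mc = jobClass.clazz.metaClass
        // capture the original execute() before overriding it
        def originalExecute = mc.getMetaMethod("execute", [] as Object[])
        mc.execute = { ->
            println "this should happen before execute()"
            originalExecute.invoke(delegate, [] as Object[])
        }
    }
}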
For method interception, implement invokeMethod on the metaclass. In my case the class was not from a third party, so I could modify the implementation.
Follow this blog for more information.
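A minimal sketch of that idea, assuming you can edit the job class itself and that execute() is dispatched through Groovy:
class TestJob implements GroovyInterceptable {
    def execute() {
        println "execute called"
    }

    def invokeMethod(String name, args) {
        if (name == "execute") {
            println "this should happen before execute()"
        }
        // look up the real method and call it directly to avoid recursing into invokeMethod
        def method = metaClass.getMetaMethod(name, args)
        method ? method.invoke(this, args) : metaClass.invokeMissingMethod(this, name, args)
    }
}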
In my dataflow job, I need to initialize a Config factory and log certain messages in an audit log before actual processing begins.
I have placed the Config factory initialization code and audit logging in a parent class, PlatformInitializer, and I extend it in my main pipeline class.
public class CustomJob extends PlatformInitializer implements Serializable {
public static void main(String[] args) throws PropertyVetoException {
CustomJob myCustomjob = new CustomJob();
// Initialize config factories
myCustomjob.initialize();
// trigger dataflow job
myCustomjob.parallelRead(args);
    }
}
As a result, I also had to implement the Serializable interface in my pipeline class, because Beam was throwing an error: java.io.NotSerializableException: org.devoteam.CustomJob
Inside PlatformInitializer, I have an initialize() method that contains the config factory initialization logic and also logs some initial audit messages.
public class PlatformInitializer {
public void initialize() {
// Configfactory factory = new Configfactory()
// CustomLogger.log("JOB-BEGIN-EVENT" + Current timestamp )
}
}
My question is: is this the right way to invoke code that needs to run before the pipeline begins execution?
If you need the initialized object at runtime (not at pipeline construction time), you should move your initialization logic into a Beam DoFn. DoFn has a number of method annotations that can be used to mark methods that should be executed in different lifecycle phases. The Setup and StartBundle annotations might be useful for your use case. See here for more details.
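For example, a minimal sketch (reusing the Configfactory and CustomLogger names from the question; the String element types are just placeholders):
import org.apache.beam.sdk.transforms.DoFn;

class InitializingDoFn extends DoFn<String, String> {

    private transient Configfactory factory;

    @Setup
    public void setup() {
        // runs once per DoFn instance on each worker, before any bundle is processed
        factory = new Configfactory();
        CustomLogger.log("JOB-BEGIN-EVENT " + System.currentTimeMillis());
    }

    @ProcessElement
    public void processElement(ProcessContext c) {
        c.output(c.element());
    }
}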
My test class looks like this:
public class HandlerTest extends Specification {
Handler handler
EventBus eventBus=Mock()
def setup(){
handler=new Handler(eventBus)
}
def "constructor"(){
//How to verify two events do get added to eventBus?
}
}
and the constructor of Handler (it is a Java class):
public Handler(EventBus eventBus)
{
eventBus.add(FetchSearchWordsEvent.TYPE, this);
eventBus.add(SetSearchBoxTextEvent.TYPE, this);
}
The question is: how do I verify that the two events get registered?
I would move the call to the Handler constructor into the test itself, given that it is the function under test.
Try the following:
public class HandlerTest extends Specification {
Handler handler
def mockEventBus = Mock(EventBus)
def "constructor"(){
when:
new Handler(mockEventBus)
then:
1 * mockEventBus.add(FetchSearchWordsEvent.TYPE, _ as Handler)
1 * mockEventBus.add(SetSearchBoxTextEvent.TYPE, _ as Handler)
}
}
The functionality of EventBus.add() should be tested separately.
It depends on how registerHandler is implemented, and what exactly you want to verify. If the goal is to verify that the constructor ultimately calls some methods on eventBus, you can just use regular mocking. If the goal is to verify that the constructor calls registerHandler on itself, you can use partial mocking using Spy(), as explained in the Spock reference documentation.
PS: Note that partial mocking is considered a smell. Often it's better to change the class under test to make it easier to test. For example, you could add a method that allows you to query which handlers have been registered. Then you won't need mocking at all.
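As a rough sketch of that idea (getRegisteredTypes() is a hypothetical query method you would add to your own EventBus, and this assumes the bus can be instantiated directly in the test):
def "constructor registers both events"() {
    given:
    def eventBus = new EventBus() // a real bus instead of a mock

    when:
    new Handler(eventBus)

    then:
    // getRegisteredTypes() is hypothetical; expose whatever query suits your EventBus
    eventBus.getRegisteredTypes().containsAll([FetchSearchWordsEvent.TYPE, SetSearchBoxTextEvent.TYPE])
}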
I have some executeQuery calls in the code for a complex group by/having clause, so I need an integration test case to test it. Grails 2.1.1 was used.
However, I found several issues:
1. The setUp method is not called automatically before the test.
2. So I added the @Before annotation to the setUp method, and it is called now. But the executeQuery statement can't be used any more:
java.lang.UnsupportedOperationException: String-based queries like [executeQuery] are currently not supported in this implementation of GORM. Use criteria instead.
It seems I can't use any annotations in the integration test; otherwise it becomes a unit test case? If I don't use any annotations, the test passes.
Here is the code example.
class JustTests extends GroovyTestCase {
void setUp() {
log.warn "setup"
}
void tearDown() {
log.warn "cleanup"
}
void "test something"() {
// Here is the code to invoke a method with executeQuery
}
}
Thanks.
Is it possible to run Quartz.NET jobs in a separate AppDomain? If so, how can this be achieved?
Disclaimer: I've not tried this, it's just an idea. And none of this code has been compiled, even.
Create a custom job factory that creates a wrapper for your real jobs. Have this wrapper implement the Execute method by creating a new app domain and running the original job in that app domain.
In more detail: Create a new type of job, say IsolatedJob : IJob. Have this job take as a constructor parameter the type of a job that it should encapsulate:
internal class IsolatedJob: IJob
{
private readonly Type _jobType;
public IsolatedJob(Type jobType)
{
_jobType = jobType ?? throw new ArgumentNullException(nameof(jobType));
}
public void Execute(IJobExecutionContext context)
{
    // Create the job in a new, isolated app domain
    System.AppDomain domain = System.AppDomain.CreateDomain("Isolation");
    try
    {
        var job = (IJob)domain.CreateInstanceAndUnwrap(_jobType.Assembly.FullName, _jobType.FullName);
        job.Execute(context);
    }
    finally
    {
        // tear the domain down once the job has finished
        System.AppDomain.Unload(domain);
    }
}
}
You may need to create an implementation of IJobExecutionContext that inherits from MarshalByRefObject and proxies calls onto the original context object. Given the number of other objects that IJobExecutionContext provides access to, I'd be tempted to implement many members with a NotImplementedException as most won't be needed during job execution.
Next you need the custom job factory. This bit is easier:
internal class IsolatedJobFactory : IJobFactory
{
    public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
    {
        return NewJob(bundle.JobDetail.JobType);
    }

    // IJobFactory also requires ReturnJob; nothing to clean up for this simple wrapper
    public void ReturnJob(IJob job)
    {
    }

    private IJob NewJob(Type jobType)
    {
        return new IsolatedJob(jobType);
    }
}
Finally, you will need to instruct Quartz to use this job factory rather than the out-of-the-box one. Use the IScheduler.JobFactory property setter and provide a new instance of IsolatedJobFactory.
I am writing a custom Ant task that extends Task. I am using the log() method in the task. What I want to do is use a unit test while developing the task, but I don't know how to set up a context for the task to run in, to initialise the task as if it were running under Ant.
This is the custom Task:
public class CopyAndSetPropertiesForFiles extends Task {
public void execute() throws BuildException {
log("CopyAndSetPropertiesForFiles begin execute()");
log("CopyAndSetPropertiesForFiles end execute()");
}
}
This is the unit test code:
CopyAndSetPropertiesForFiles task = new CopyAndSetPropertiesForFiles();
task.execute();
When the code is run as a test, it throws a NullPointerException when it calls log():
java.lang.NullPointerException
at org.apache.tools.ant.Task.log(Task.java:346)
at org.apache.tools.ant.Task.log(Task.java:334)
at uk.co.tbp.ant.custom.CopyAndSetPropertiesForFiles.execute(CopyAndSetPropertiesForFiles.java:40)
at uk.co.tbp.ant.custom.test.TestCopyAndSetPropertiesForFiles.testCopyAndSetPropertiesForFiles(TestCopyAndSetPropertiesForFiles.java:22)
Does anybody know a way to provide a context or stubs or something similar to the task?
Thanks,
Rob.
Accepted answer from Abarax. I was able to call task.setProject(new Project());
The code now executes OK (though no logging appears in the console - at least I can exercise the code :-) ).
Or better yet, decouple the logic inside the task (let's call it TaskImpl) from the task object itself, so that you can pass in your own dependencies (e.g., the logger). Then, instead of testing the task object, you test TaskImpl, to which you can pass the logger and any other odd bits and pieces it might need to do its job. Unit testing then becomes a matter of mocking the dependencies.
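A rough sketch of that split (the TaskLogger interface and the Impl class name are made up for illustration):
import org.apache.tools.ant.BuildException
import org.apache.tools.ant.Task

// the dependency the logic needs, as a tiny interface
interface TaskLogger {
    void log(String msg)
}

// the real work, testable with a mocked or fake logger
class CopyAndSetPropertiesImpl {
    private final TaskLogger logger

    CopyAndSetPropertiesImpl(TaskLogger logger) { this.logger = logger }

    void run() {
        logger.log("CopyAndSetPropertiesForFiles begin execute()")
        // ... the actual copy-and-set-properties work goes here ...
        logger.log("CopyAndSetPropertiesForFiles end execute()")
    }
}

// the Ant task becomes a thin adapter around the impl
public class CopyAndSetPropertiesForFiles extends Task {
    public void execute() throws BuildException {
        new CopyAndSetPropertiesImpl({ String msg -> log(msg) } as TaskLogger).run()
    }
}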
Looking at the Ant source code, these are the two relevant classes: ProjectComponent and Task.
You are calling the log method from Task:
public void log(String msg) {
log(msg, Project.MSG_INFO);
}
Which calls:
public void log(String msg, int msgLevel) {
if (getProject() != null) {
getProject().log(this, msg, msgLevel);
} else {
super.log(msg, msgLevel);
}
}
Since you do not have a project set, it will call super.log(msg, msgLevel):
public void log(String msg, int msgLevel) {
if (getProject() != null) {
getProject().log(msg, msgLevel);
} else {
// 'reasonable' default, if the component is used without
// a Project ( for example as a standalone Bean ).
// Most ant components can be used this way.
if (msgLevel <= Project.MSG_INFO) {
System.err.println(msg);
}
}
}
It looks like this may be your problem. Your task needs a project context.
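In other words, the minimal fix in the unit test is to hand the task a Project before calling execute(); attaching a DefaultLogger as well makes the log() output visible on the console (which addresses the "no logging" note above):
import org.apache.tools.ant.DefaultLogger;
import org.apache.tools.ant.Project;

Project project = new Project();
// attach a console logger so Task.log() output actually appears
DefaultLogger consoleLogger = new DefaultLogger();
consoleLogger.setOutputPrintStream(System.out);
consoleLogger.setErrorPrintStream(System.err);
consoleLogger.setMessageOutputLevel(Project.MSG_INFO);
project.addBuildListener(consoleLogger);

CopyAndSetPropertiesForFiles task = new CopyAndSetPropertiesForFiles();
task.setProject(project);
task.execute();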
Ant has a handy class called BuildFileTest that extends the JUnit TestCase class. You can use it to test the behaviour of individual targets in a build file. Using this would take care of all the annoying context.
There's a Test The Task chapter in the Apache Ant Writing Tasks Tutorial that describes this.
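A hedged sketch of such a test, assuming a small build file at test/resources/test-build.xml that defines the task and a target exercising it (the file name and target name are placeholders):
import org.apache.tools.ant.BuildFileTest;

public class CopyAndSetPropertiesForFilesBuildTest extends BuildFileTest {

    public CopyAndSetPropertiesForFilesBuildTest(String name) {
        super(name);
    }

    protected void setUp() throws Exception {
        // parses the build file and wires up a fully initialised Project for every task
        configureProject("test/resources/test-build.xml");
    }

    public void testTaskLogsBeginAndEnd() {
        executeTarget("copy-and-set-properties");
        assertLogContaining("CopyAndSetPropertiesForFiles begin execute()");
    }
}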