I have an activity (say actN) that depends on N other activities. All N activities are executed in parallel. After all of them complete, I want to execute activity actN. I want to do this without using the @Asynchronous annotation, as @Asynchronous is not working for me.
public Promise<Integer> executeLastactivity(List<Promise<Integer>> prm){
    //TODO
}
A parameter of any type that extends Collection should be annotated with @Wait. This is necessary because the Flow Framework relies on Java reflection to determine whether the type of an argument is Promise, but due to type erasure Java doesn't expose the generic element type through reflection.
So your method signature should look like:
@Asynchronous
public Promise<Integer> executeLastactivity(@Wait List<Promise<Integer>> prm){
    //TODO
}
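For context, here is a rough sketch of how the decorated method might be driven from a workflow implementation. The activities client (client), the activity method (computeValue), and the count n are hypothetical placeholders, not part of your code:

// Hypothetical fan-out: each activity call returns immediately with a Promise
// that is filled in when the activity completes.
List<Promise<Integer>> results = new ArrayList<>();
for (int i = 0; i < n; i++) {
    results.add(client.computeValue(i));
}
// Because the collection parameter is annotated with @Wait, executeLastactivity
// is only invoked once every Promise in the list is ready.
Promise<Integer> finalResult = executeLastactivity(results);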
Whenever I need to pass data down the reactive chain I end up doing something like this:
public Mono<String> doFooAndPassDtoAsMono(Dto dto) {
    return Mono.just(dto)
        .flatMap(dtoMono -> {
            Mono<String> result = // remote call returning a Mono
            return Mono.zip(Mono.just(dtoMono), result);
        })
        .flatMap(tup2 -> {
            // do something that requires foo and result and returns a Mono
            return doSomething(tup2.getT1().getFoo(), tup2.getT2());
        });
}
Given the below sample Dto class:
class Dto {
    private String foo;

    public String getFoo() {
        return this.foo;
    }
}
Because it often gets tedious to zip the data all the time to pass it down the chain (especially a few levels down), I was wondering if it's okay to simply reference the dto directly, like so:
public Mono<String> doFooAndReferenceParam(Dto dto) {
    Mono<String> result = // remote call returning a Mono
    return result.flatMap(res -> {
        // do something that requires foo and the result and returns a Mono
        return doSomething(dto.getFoo(), res);
    });
}
My concern about the second approach is this: assuming a subscriber subscribes to this Mono on a thread pool, would I need to guarantee that Dto is thread-safe? (The example above is simple because it just carries a String, but what if it isn't?)
Also, which one is considered "best practice"?
Based on what you have shared, you can simply do the following:
public Mono<String> doFooAndPassDtoAsMono(Dto dto) {
    return Mono.just(dto.getFoo());
}
The way you are using zip in the first option doesn't serve any purpose. Similarly, the second option will not work either: once the Mono is empty, the next flatMap will not be triggered.
The case is simple if
The reference data is available from the beginning (i.e. before the creation of the chain), and
The chain is created for processing at most one event (i.e. starts with a Mono), and
The reference data is immutable.
Then you can simply refer to the reference data in a parameter or local variable, just like in your second solution. This is completely okay, and there are no concurrency issues.
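As a minimal sketch of that, assuming Dto is immutable and firstRemoteCall, secondRemoteCall, and doSomething are placeholder methods returning Mono<String>, the reference can simply be captured by the lambdas, even several operators down the chain:

public Mono<String> process(Dto dto) {
    // dto is captured by the lambdas below; since it is immutable, reading it on
    // whatever thread the downstream signals arrive on is safe, and no zip is needed.
    return firstRemoteCall()
        .flatMap(first -> secondRemoteCall(first))
        .flatMap(second -> doSomething(dto.getFoo(), second));
}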
Using mutable data in reactive flows is strongly discouraged. If you had a mutable Dto class, you might still be able to use it (assuming proper synchronization) – but this will be very surprising to readers of your code.
According to the Apache Beam documentation, the recommended way to write simple sources is by using Read transforms and ParDo. Unfortunately, the Apache Beam docs have let me down here.
I'm trying to write a simple unbounded data source which emits events using a ParDo, but the compiler keeps complaining about the input type of the DoFn object:
message: 'The method apply(PTransform<? super PBegin,OutputT>) in the type PBegin is not applicable for the arguments (ParDo.SingleOutput<PBegin,Event>)'
My attempt:
public class TestIO extends PTransform<PBegin, PCollection<Event>> {
    @Override
    public PCollection<Event> expand(PBegin input) {
        return input.apply(ParDo.of(new ReadFn()));
    }

    private static class ReadFn extends DoFn<PBegin, Event> {
        @ProcessElement
        public void process(@TimerId("poll") Timer pollTimer) {
            Event testEvent = new Event(...);
            // custom logic, this can happen infinitely
            for (...) {
                context.output(testEvent);
            }
        }
    }
}
A DoFn performs element-wise processing. As written, ParDo.of(new ReadFn()) will have type PTransform<PCollection<PBegin>, PCollection<Event>>. Specifically, the ReadFn indicates it takes an element of type PBegin and returns 0 or more elements of type Event.
Instead, you should use an actual Read operation. There are a variety provided. You can also use Create if you have a specific set of in-memory collections to use.
If you need to create a custom source you should use the Read transform. Since you're using timers, you likely want to create an Unbounded Source (a stream of elements).
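As a rough sketch of those built-in options (not a custom source), assuming the Beam Java SDK with joda-time on the classpath; the pipeline, the Event instances, and the 5-second rate are placeholders:

// Bounded, in-memory data via Create (a Coder for Event may need to be
// registered or supplied with withCoder(...)):
PCollection<Event> fromMemory = pipeline.apply(Create.of(event1, event2));

// An unbounded stream via a provided read-style transform, emitting one
// element every 5 seconds:
PCollection<Long> ticks = pipeline.apply(
    GenerateSequence.from(0).withRate(1, Duration.standardSeconds(5)));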
How do I access the elements of a side input if I have my class extend DoFn?
For example:
Say I have a ParDo transform like:
PCollection<String> data = myData.apply("Get data",
    ParDo.of(new MyClass()).withSideInputs(myDataView));
And I have a class:
static class MyClass extends DoFn<String, String>
{
    // How do I access the side input here?
}
c.sideInput() isn't working in this case.
Thanks.
In this case, the problem is that the processElement method in your DoFn does not have access to the PCollectionView instance in your main method.
You can pass the PCollectionView to the DoFn in the constructor:
class MyClass extends DoFn<String, String>
{
    private final PCollectionView<..> mySideInput;

    public MyClass(PCollectionView<..> mySideInput) {
        // List, or Map, or anything:
        this.mySideInput = mySideInput;
    }

    @ProcessElement
    public void processElement(ProcessContext c) throws IOException
    {
        // List or Map or any type you need:
        List<..> sideInputList = c.sideInput(mySideInput);
    }
}
You would then pass the side input to the class when you instantiate it, and indicate it as a side input like so:
p.apply(ParDo.of(new MyClass(mySideInput)).withSideInputs(mySideInput));
The explanation for this is that when you use an anonymous DoFn, the process method has a closure with access to all the objects within the scope that encloses the DoFn (among them is the PCollectionView). When you're not using an anonymous DoFn, there is no closure, and you need another way of passing the PCollectionView.
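For completeness, here is a sketch of how mySideInput could be created and wired up. This assumes the side input is a List<String> built from a placeholder PCollection<String> called sideData; use View.asMap or View.asSingleton instead if that matches your data:

PCollectionView<List<String>> mySideInput =
    sideData.apply("To view", View.asList());

PCollection<String> data = myData.apply("Get data",
    ParDo.of(new MyClass(mySideInput)).withSideInputs(mySideInput));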
Although the answer above is correct, it is still a little incomplete.
Once you finish implementing it, you need to execute your pipeline like this:
p.apply(ParDo.of(new MyClass(mySideInput)).withSideInputs(mySideInput));
In my Grails app I've installed the Quartz plugin. I want to intercept calls to every Quartz job class' execute method in order to do something before the execute method is invoked (similar to AOP before advice).
Currently, I'm trying to do this interception from the doWithDynamicMethods closure of another plugin as shown below:
def doWithDynamicMethods = { ctx ->
    // get all the job classes
    application.getArtefacts("Job").each { klass ->
        MetaClass jobMetaClass = klass.clazz.metaClass
        // intercept the methods of the job classes
        jobMetaClass.invokeMethod = { String name, Object args ->
            // do something before invoking the called method
            if (name == "execute") {
                println "this should happen before execute()"
            }
            // now call the method that was originally invoked
            def validMethod = jobMetaClass.getMetaMethod(name, args)
            if (validMethod != null) {
                validMethod.invoke(delegate, args)
            } else {
                jobMetaClass.invokeMissingMethod(delegate, name, args)
            }
        }
    }
}
So, given a job such as
class TestJob {
    static triggers = {
        simple repeatInterval: 5000l // execute the job every 5 seconds
    }

    def execute() {
        println "execute called"
    }
}
It should print:
this should happen before execute()
execute called
But my attempt at method interception seems to have no effect and instead it just prints:
execute called
Perhaps the cause of the problem is this Groovy bug? Even though the Job classes don't explicitly implement the org.quartz.Job interface, I suspect that implicitly (due to some Groovy voodoo), they are instances of this interface.
If indeed this bug is the cause of my problem, is there another way that I can do "before method interception"?
Because all the job classes are Spring beans, you can solve this problem using Spring AOP. Define an aspect such as the following (adjust the pointcut definition so that it matches only your job classes; I've assumed they are all in a package named org.example.job and have a class name that ends with Job).
@Aspect
class JobExecutionAspect {

    @Pointcut("execution(public * org.example.job.*Job.execute(..))")
    public void executeMethods() {}

    @Around("executeMethods()")
    def interceptJobExecuteMethod(ProceedingJoinPoint jp) {
        // do your stuff that should happen before execute() here; if you need access
        // to the job object, call jp.getTarget()

        // now call the job's execute() method
        jp.proceed()
    }
}
You'll need to register this aspect as a Spring bean, for example in resources.groovy (it doesn't matter what name you give the bean).
You can register a customized JobListener in the application to handle logic before execute() is triggered. You can use something like:
public class MyJobListener implements JobListener {

    public String getName() {
        return "myJobListener" // JobListener implementations must provide a name
    }

    public void jobToBeExecuted(JobExecutionContext context) {
        println "Before calling Execute"
    }

    public void jobWasExecuted(JobExecutionContext context,
            JobExecutionException jobException) {}

    public void jobExecutionVetoed(JobExecutionContext context) {}
}
Register the customized JobListener with the Quartz scheduler in BootStrap:
Scheduler scheduler = ctx.getBean("quartzScheduler") // ctx being the application context
scheduler.getListenerManager().addJobListener(myJobListener, allJobs()) // allJobs() is statically imported from EverythingMatcher
resources.groovy:
beans = {
    myJobListener(MyJobListener)
}
One benefit I see in using this approach is that we no longer need the second plugin used for method interception.
Second, we can register the listener to listen to all jobs, to specific jobs, or to jobs in a group, as sketched below. Refer to Customize Quartz JobListener and the API for JobListener, TriggerListener, and ScheduleListener for more insight.
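As a rough sketch of those registration options (the matcher classes live in org.quartz.impl.matchers; the job and group names are hypothetical):

// listen to every job
scheduler.getListenerManager().addJobListener(myJobListener, EverythingMatcher.allJobs())

// listen to one specific job
scheduler.getListenerManager().addJobListener(myJobListener,
    KeyMatcher.keyEquals(JobKey.jobKey("testJob", "myGroup")))

// listen to all jobs in a group
scheduler.getListenerManager().addJobListener(myJobListener,
    GroupMatcher.jobGroupEquals("myGroup"))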
Obviously, AOP is another approach if we don't want to use the Quartz API.
You won't get the job classes like that. If you refer to the Quartz plugin, you can get them by calling jobClasses:
application.jobClasses.each { GrailsJobClass tc -> ... }
see https://github.com/nebolsin/grails-quartz/blob/master/QuartzGrailsPlugin.groovy
If you actually look, you can see that they are almost doing what you are trying to achieve without the need to use AOP or anything else.
For method interception, implement invokeMethod on the metaClass. In my case the class was not from a third party, so I could modify the implementation.
Follow this blog for more information.
When writing a custom IPipelineContributor it isn't clear how to get a reference to the selected Handler. The purpose of the custom contributor is to dispose of any handlers that implement IDisposable once they have returned a result.
Given the following code sample:
public class DisposerPipelineContributor : IPipelineContributor
{
    public void Initialize(IPipeline pipelineRunner)
    {
        pipelineRunner.Notify(MyMethod).After<KnownStages.IOperationExecution>();
    }

    PipelineContinuation MyMethod(ICommunicationContext arg)
    {
        return PipelineContinuation.Continue;
    }
}
The ICommunicationContext gives us access to OpenRasta's own type system and reveals the type of selected handler: [OpenRasta.TypeSystem.ReflectionBased.ReflectionBasedType] = {CLR Type: MySelectedHandler}. However, it isn't clear how to get the instance of the handler that was actually used to satisfy the request.
Iain,
First things first: if you want features such as disposing of objects, you should use your own IoC container; most of those frameworks implement that functionality.
We're going to add disposing to the contract we have with containers in the next major version, as it is now more or less OK to do this; it wasn't when we built 2.0.
If you want to call Dispose() on a handler yourself and you cannot switch to a full-fledged IoC container, you'll find the handler instance in ICommunicationContext.PipelineData.