I'm trying to implement logic for manually deleting an AWS SQS message using spring-cloud-aws-messaging. This feature was implemented in the scope of this ticket, and I followed the example from the tests:
@SqsListener(value = "queueName", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
public void listen(SqsEventDTO message, Acknowledgment acknowledgment) {
    LOGGER.info("Received message {}", message.getFoo());
    try {
        acknowledgment.acknowledge().get();
    } catch (InterruptedException | ExecutionException e) {
        LOGGER.error("Oops", e);
    }
}
But I ran into an unexpected exception:
com.fasterxml.jackson.databind.exc.InvalidDefinitionException: Cannot construct instance of org.springframework.cloud.aws.messaging.listener.Acknowledgment (no Creators, like default constructor, exist): abstract types either need to be mapped to concrete types, have custom deserializer, or contain additional type information
A solution with SqsMessageDeletionPolicy.ON_SUCCESS works, but I want to avoid throwing an exception.
What have I missed in the configuration?
It took some fiddling around and trying different things from other SO answers.
Here is my code and I'll try to explain as best I can. I'm including everything that I'm using for my SQS consumer.
My config class is below. The only not-so-obvious thing to note is the converter and resolver objects instantiated in the queueMessageHandlerFactory method. The MappingJackson2MessageConverter class (in case it isn't obvious from the oh-so-obvious class name) handles deserialization of the payload from SQS.
It's also important that the strict content type match be set to false.
Also, the MappingJackson2MessageConverter allows you to set your own Jackson ObjectMapper; however, if you do that, you will need to configure it as follows:
objectMapper.configure(MapperFeature.DEFAULT_VIEW_INCLUSION, false);
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
You may not want to do that, so you can leave it null and it will create its own ObjectMapper.
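If you do supply your own mapper, the wiring might look roughly like this (a minimal sketch; setObjectMapper is the standard MappingJackson2MessageConverter setter):

ObjectMapper objectMapper = new ObjectMapper();
objectMapper.configure(MapperFeature.DEFAULT_VIEW_INCLUSION, false);
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

MappingJackson2MessageConverter messageConverter = new MappingJackson2MessageConverter();
// Optional: only needed when customizing; leave it unset and the converter
// creates a suitably configured ObjectMapper on its own.
messageConverter.setObjectMapper(objectMapper);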
I think the rest of the code is pretty self-explanatory...? Let me know if not.
One difference between our use cases: it looks like you're mapping your own custom object (SqsEventDTO), and I assume that's working? In that case, I don't think you will need the MappingJackson2MessageConverter, but I could be wrong.
@Configuration
public class AppConfig {

    @Bean
    @Primary
    public QueueMessageHandler queueMessageHandler(@Autowired QueueMessageHandlerFactory queueMessageHandlerFactory) {
        return queueMessageHandlerFactory.createQueueMessageHandler();
    }

    @Bean
    @Primary
    public QueueMessageHandlerFactory queueMessageHandlerFactory(@Autowired AmazonSQSAsync sqsClient) {
        QueueMessageHandlerFactory factory = new QueueMessageHandlerFactory();
        factory.setAmazonSqs(sqsClient);

        MappingJackson2MessageConverter messageConverter = new MappingJackson2MessageConverter();
        messageConverter.setSerializedPayloadClass(String.class);
        // Set strict content type match to false so the converter is applied
        // even when the message lacks the expected content-type header.
        messageConverter.setStrictContentTypeMatch(false);

        // Uses the MappingJackson2MessageConverter object to resolve/map
        // the payload against the Message/S3EventNotification argument.
        PayloadArgumentResolver payloadResolver = new PayloadArgumentResolver(messageConverter);

        // Extracts the acknowledgment data from the message headers,
        // which is then used to build the Acknowledgment argument.
        AcknowledgmentHandlerMethodArgumentResolver acknowledgmentResolver = new AcknowledgmentHandlerMethodArgumentResolver("Acknowledgment");

        // The order of the argument resolvers matters: the acknowledgment
        // resolver must come before the payload resolver, because the payload
        // resolver acts as a catch-all and would otherwise try to deserialize
        // the Acknowledgment parameter as payload (the exception in the question).
        factory.setArgumentResolvers(Arrays.asList(acknowledgmentResolver, payloadResolver));
        return factory;
    }

    @Bean("ConsumerBean")
    @Primary
    public SimpleMessageListenerContainer simpleMessageListenerContainer(@Autowired AmazonSQSAsync amazonSQSAsync,
            @Autowired QueueMessageHandler queueMessageHandler,
            @Autowired ThreadPoolTaskExecutor threadPoolExecutor) {
        SimpleMessageListenerContainer smlc = new SimpleMessageListenerContainer();
        smlc.setWaitTimeOut(20);
        smlc.setAmazonSqs(amazonSQSAsync);
        smlc.setMessageHandler(queueMessageHandler);
        smlc.setBeanName("ConsumerBean");
        smlc.setMaxNumberOfMessages(sqsMaxMessages); // sqsMaxMessages and the pool settings below are configuration fields on this class
        smlc.setTaskExecutor(threadPoolExecutor);
        return smlc;
    }

    @Bean
    @Primary
    public ThreadPoolTaskExecutor threadPoolTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(corePoolSize);
        executor.setAllowCoreThreadTimeOut(coreThreadsTimeout);
        executor.setWaitForTasksToCompleteOnShutdown(true);
        executor.setMaxPoolSize(maxPoolSize);
        executor.setKeepAliveSeconds(threadTimeoutSeconds);
        executor.setThreadNamePrefix(threadName);
        executor.initialize();
        return executor;
    }
}
My SQS consumer Service class is below.
@Service
public class RawConsumer {

    @SqsListener(deletionPolicy = SqsMessageDeletionPolicy.NEVER, value = "${input.sqs.queuename}")
    public void sqsListener(S3EventNotification event, Acknowledgment ack) throws Exception {
        // Handle the event here, then call ack.acknowledge() when you want the message deleted.
    }
}
I hope that helps, let me know if you have any issues.
What the question author did not mention is that he tried to customize the Jackson ObjectMapper. To do so, he instantiated a MappingJackson2MessageConverter, wrapped it in a PayloadArgumentResolver, and set that as the single HandlerMethodArgumentResolver via QueueMessageHandlerFactory.setArgumentResolvers(). Doing this overrides the list of default argument resolvers defined in QueueMessageHandler.initArgumentResolvers() (which is invoked when the QueueMessageHandlerFactory creates a QueueMessageHandler instance).
When only a PayloadArgumentResolver is set as the single argument resolver, the Acknowledgment argument can no longer be bound.
A better solution for customizing the Jackson message converter than overriding the list of argument resolvers is therefore to set the list of message converters on the QueueMessageHandlerFactory:
@Bean
fun queueMessageHandlerFactory(objectMapper: ObjectMapper): QueueMessageHandlerFactory {
    val factory = QueueMessageHandlerFactory()
    val messageConverter = MappingJackson2MessageConverter()
    messageConverter.objectMapper = objectMapper
    factory.setMessageConverters(listOf(messageConverter)) // <-- this is the important line.
    return factory
}
The registered MessageConverters are used as PayloadArgumentResolvers inside QueueMessageHandler.initArgumentResolvers(), so this is a much less intrusive change.
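For a Java configuration (the question uses Java), the equivalent would look roughly like this (a sketch, assuming an ObjectMapper bean is available for injection):

@Bean
public QueueMessageHandlerFactory queueMessageHandlerFactory(ObjectMapper objectMapper) {
    QueueMessageHandlerFactory factory = new QueueMessageHandlerFactory();
    MappingJackson2MessageConverter messageConverter = new MappingJackson2MessageConverter();
    messageConverter.setObjectMapper(objectMapper);
    // Setting converters instead of argument resolvers keeps the default
    // resolvers (including the Acknowledgment one) intact.
    factory.setMessageConverters(Collections.singletonList(messageConverter));
    return factory;
}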
Related
This question is a follow-up to the great answer on: Is there a way to upload jars for a dataflow job so we don't have to serialize everything?
That made me realize: what I want is injection with no serialization, so that I can mock and test.
Our current method requires our APIs/mocks to be serializable, but then I have to put static fields in the mock because it gets serialized and deserialized, creating a new instance that Dataflow uses.
My colleague pointed out that perhaps this needs to be a sink, and that a sink is treated differently? We may try that later and update, but we are not sure right now.
My desire is to replace the APIs with mocks during testing, from the top. Does someone have an example of this?
Here is our bootstrap code, which does not know whether it is running in production or inside a feature test. We test end-to-end results with no Apache Beam imports in our tests, meaning we could swap to any tech if we wanted to pivot and keep all our tests. Not only that, we catch far more integration bugs and can refactor without rewriting tests, since the contracts we test are customer ones we can't easily change.
public class App {
    private Pipeline pipeline;
    private RosterFileTransform transform;

    @Inject
    public App(Pipeline pipeline, RosterFileTransform transform) {
        this.pipeline = pipeline;
        this.transform = transform;
    }

    public void start() {
        pipeline.apply(transform);
        pipeline.run();
    }
}
Notice that everything we do is based on Guice injection, so the Pipeline may be a direct runner or not. I may need to modify this class to pass things through :( but anything that works for now would be great.
The function in which I am trying to use our API (and its mock and real implementation) with no serialization is this:
private class ValidRecordPublisher extends DoFn<Validated<PractitionerDataRecord>, String> {
    @ProcessElement
    public void processElement(@Element Validated<PractitionerDataRecord> element) {
        microServiceApi.writeRecord(element.getValue());
    }
}
I am not sure how to pass in microServiceApi in a way that avoids serialization. I would also be OK with delayed creation after deserialization, using a Guice Provider and provider.get(), if there is a solution there.
Solved in such a way that mocks no longer need statics or serialization, with one single class bridging the world of Dataflow (in prod and in test), like so.
NOTE: There is additional magic we have at our company that passes headers through from service to service and through Dataflow; some of it appears below and you can ignore it (i.e. the RouterRequest request = Current.request();). Anyone else will instead have to pass projectId into getInstance each time.
public abstract class DataflowClientFactory implements Serializable {
    private static final Logger log = LoggerFactory.getLogger(DataflowClientFactory.class);

    public static final String PROJECT_KEY = "projectKey";

    private transient static Injector injector;
    private transient static Module overrides;
    private static int counter = 0;

    public DataflowClientFactory() {
        counter++;
        log.info("creating again (usually due to deserialization). counter=" + counter);
    }

    public static void injectOverrides(Module dfOverrides) {
        overrides = dfOverrides;
    }

    private synchronized void initialize(String project) {
        if (injector != null)
            return;

        /********************************************
         * The hardest part is this piece, since it is specific to each Dataflow,
         * so each project subclasses DataflowClientFactory.
         * This solution is the best ONLY given the time crunch, and it works
         * decently for end-to-end testing without developers needing fancy
         * wrappers around mocks anymore.
         ***/
        Module module = loadProjectModule();
        Module modules = Modules.combine(module, new OrderlyDataflowModule(project));
        if (overrides != null) {
            modules = Modules.override(modules).with(overrides);
        }
        injector = Guice.createInjector(modules);
    }

    protected abstract Module loadProjectModule();

    public <T> T getInstance(Class<T> clazz) {
        if (!Current.isContextSet()) {
            throw new IllegalStateException("Someone on the stack is extending DoFn instead of OrderlyDoFn so you need to fix that first");
        }
        RouterRequest request = Current.request();
        String project = (String) request.requestState.get(PROJECT_KEY);
        initialize(project);
        return injector.getInstance(clazz);
    }
}
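For illustration, a hypothetical subclass and call site might look like this (MyProjectModule and MicroServiceApi are placeholder names, not from the original code):

public class MyProjectClientFactory extends DataflowClientFactory {
    @Override
    protected Module loadProjectModule() {
        // project-specific Guice bindings (hypothetical module)
        return new MyProjectModule();
    }
}

// ... then inside an OrderlyDoFn's processing method:
// MicroServiceApi api = clientFactory.getInstance(MicroServiceApi.class);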
I suppose this may not be what you're looking for, but your use case makes me think of using factory objects. They may depend on the pipeline options that you pass (i.e. your PipelineOptions object), or on some other configuration object.
Perhaps something like this:
class MicroserviceApiClientFactory implements Serializable {
    private final boolean useMock;
    private final String endpoint;

    MicroserviceApiClientFactory(PipelineOptions options) {
        // Extract plain serializable values up front; the PipelineOptions
        // object itself is not serializable, so we don't keep a reference to it.
        MySpecialOptions specialOpts = options.as(MySpecialOptions.class);
        this.useMock = specialOpts.getMockMicroserviceApi();
        this.endpoint = specialOpts.getMicroserviceEndpoint();
    }

    public MicroserviceApiClient getClient() {
        if (useMock) {
            return new MockedMicroserviceApiClient(...); // Or whatever
        } else {
            return new MicroserviceApiClient(endpoint); // Or whatever parameters it needs
        }
    }
}
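MySpecialOptions is assumed above; a sketch of what such a PipelineOptions extension might look like (names are illustrative):

public interface MySpecialOptions extends PipelineOptions {
    @Description("Use a mocked microservice API client")
    @Default.Boolean(false)
    boolean getMockMicroserviceApi();
    void setMockMicroserviceApi(boolean value);

    @Description("Endpoint of the real microservice API")
    String getMicroserviceEndpoint();
    void setMicroserviceEndpoint(String value);
}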
And for your DoFns and any other execution-time objects that need it, you would pass the factory:
private class ValidRecordPublisher extends DoFn<Validated<PractitionerDataRecord>, String> {
    private final MicroserviceApiClientFactory msFactory;
    private transient MicroserviceApiClient microServiceApi;

    ValidRecordPublisher(MicroserviceApiClientFactory msFactory) {
        this.msFactory = msFactory;
    }

    @ProcessElement
    public void processElement(@Element Validated<PractitionerDataRecord> element) {
        if (microServiceApi == null) microServiceApi = msFactory.getClient();
        microServiceApi.writeRecord(element.getValue());
    }
}
This should allow you to encapsulate the mocking functionality into a single class that lazily creates your mock or your client at pipeline execution time.
Let me know if this matches what you want somewhat, or if we should try to iterate further.
I have no experience with Guice, so I don't know if Guice configurations can easily pass the boundary between pipeline construction and pipeline execution (serialization / submitting JARs / etc.).
Should this be a sink? Maybe: if you have an external service and you're writing to it, you can write a PTransform that takes care of it, but the question of how you inject various dependencies will remain.
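If you do explore the sink route, a rough sketch (reusing the hypothetical factory above) could wrap the write in its own PTransform so the dependency wiring lives in one place:

import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PDone;

class WriteToMicroservice extends PTransform<PCollection<Validated<PractitionerDataRecord>>, PDone> {
    private final MicroserviceApiClientFactory factory;

    WriteToMicroservice(MicroserviceApiClientFactory factory) {
        this.factory = factory;
    }

    @Override
    public PDone expand(PCollection<Validated<PractitionerDataRecord>> input) {
        // The DoFn only carries the serializable factory; the client itself
        // is created lazily at execution time.
        input.apply(ParDo.of(new ValidRecordPublisher(factory)));
        return PDone.in(input.getPipeline());
    }
}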
I'm writing a REST controller that exposes CRUD operations based on the type of OAuth2 services beans that are found, something like this:
@Bean
@ConditionalOnBean(ClientDetailsService::class)
fun clientServiceController(
    clientDetailsService: ClientDetailsService
): ClientDetailsServiceController {
    return ClientDetailsServiceController(clientDetailsService)
}

@Bean
@ConditionalOnBean(ClientRegistrationService::class)
fun clientRegistrationServiceController(
    clientRegistrationService: ClientRegistrationService
): ClientRegistrationServiceController {
    return ClientRegistrationServiceController(clientRegistrationService)
}
I want to register a controller that exposes ClientDetailsService only if we do not have a ClientRegistrationService; if one does exist, I want to additionally register a controller for the methods in that interface.
One of our modules that registers these controllers also registers a JdbcClientDetailsService bean, which implements both interfaces. Yet @ConditionalOnBean(ClientRegistrationService::class) fails to match it, so only the first bean is created but not the second.
This is an example of how we declare the JdbcClientDetailsService:
@Bean
fun jdbcClientDetailsService(
    passwordEncoder: PasswordEncoder,
    dataSource: DataSource): JdbcClientDetailsService {
    return JdbcClientDetailsService(dataSource).apply { setPasswordEncoder(passwordEncoder) }
}
The odd thing is that @Autowired ClientRegistrationService does successfully inject JdbcClientDetailsService.
What am I missing? How can I declare a bean that implements both interfaces, and match correctly against the conditionals? Is there a work around?
I succeeded in getting around this with the following:
@Bean
@Lazy
@Scope(proxyMode = ScopedProxyMode.INTERFACES)
public ClientRegistrationService registrationDetailsService(ClientDetailsServiceConfigurer configurer)
        throws Exception {
    ClientDetailsService built = configurer.and().build();
    if (built instanceof ClientRegistrationService) {
        return (ClientRegistrationService) built;
    } else {
        throw new IllegalStateException(built + " is not instanceof " + ClientRegistrationService.class);
    }
}
It applies the same pattern as ClientDetailsServiceConfiguration and relies on the same configurer.
You might get rid of @Scope(proxyMode = ScopedProxyMode.INTERFACES) if you need to retrieve an actual JdbcClientDetailsService.
Problem: I am migrating from a MessageListener interface implementation to @RabbitListener. I had logic like this, where I was doing "pre" and "post" message processing in a MessageListener that was inherited by several classes.
example:
public abstract class AbstractMessageListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // do some pre message processing
        process(message);
        // do some post message processing
    }

    protected abstract void process(Message message);
}
Question: Is there a way I can achieve something similar using the @RabbitListener annotation, where I can inherit the pre/post message processing logic without having to re-implement or call it inside each child @RabbitListener, all while maintaining customizable method signatures for the child listeners? Or is this being too greedy?
Example desired result:
public class SomeRabbitListenerClass {

    @RabbitListener(id = "listener.mypojo", queues = "${rabbitmq.some.queue}")
    public void listen(@Valid MyPojo myPojo) {
        //...
    }
}

public class SomeOtherRabbitListenerClass {

    @RabbitListener(id = "listener.orders", queues = "${rabbitmq.some.other.queue}")
    public void listen(Order order, @Header("order_type") String orderType) {
        //...
    }
}
with both of these @RabbitListener(s) utilizing the same inherited pre/post message processing.
I see there is a 'containerFactory' argument in the @RabbitListener annotation, but I'm already declaring one in the config... and I'm not really sure how to achieve the inheritance I desire with a custom containerFactory.
Updated answer: this is what I ended up doing.
Advice definition:
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
import org.springframework.amqp.core.Message;

/**
 * AOP around-advice wrapper. Every time a message comes in we can do
 * pre/post processing with this advice by overriding the before/after methods.
 * @author sjacobs
 */
public class RabbitListenerAroundAdvice implements MethodInterceptor {

    /**
     * Place the "around advice" around each new message being processed.
     */
    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        // the container invokes the listener with (Channel, Message)
        Message message = (Message) invocation.getArguments()[1];
        before(message);
        Object result = invocation.proceed();
        after(message);
        return result;
    }

    protected void before(Message message) {
        // do some pre message processing
    }

    protected void after(Message message) {
        // do some post message processing
    }
}
Declare beans: in your RabbitMQ config, declare the advice as a Spring bean and pass it to SimpleRabbitListenerContainerFactory#setAdviceChain(...):
//...
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(cachingConnectionFactory());
    factory.setTaskExecutor(threadPoolTaskExecutor());
    factory.setMessageConverter(jackson2JsonMessageConverter());
    factory.setAdviceChain(rabbitListenerAroundAdvice());
    return factory;
}

@Bean
public RabbitListenerAroundAdvice rabbitListenerAroundAdvice() {
    return new RabbitListenerAroundAdvice();
}
// ...
Correction
You can use the advice chain in the SimpleRabbitListenerContainerFactory to apply an around advice to listeners created for @RabbitListener; the two arguments are the Channel and Message.
If you only need to take action before calling the listener, you can add MessagePostProcessor(s) to the container afterReceivePostProcessors.
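For example, a sketch of the afterReceivePostProcessors approach (assuming a Spring AMQP version where the container factory exposes this setter; on older versions, set it on the container itself):

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // Runs after the message is received but before the listener is invoked;
    // note there is no "after" hook with this approach.
    factory.setAfterReceivePostProcessors(message -> {
        // pre-processing goes here
        return message;
    });
    return factory;
}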
The inheritance isn't possible here because annotation processing on the POJO methods and a MessageListener implementation are fully different stories.
With MessageListener you have full control over the target behavior and the container.
With the annotations you deal only with POJO, framework-free code; the particular MessageListener is created in the background, and it is fully based on the annotated method.
I'd say you can achieve your requirement using the Spring AOP framework.
See the recent question and its answers on the matter: How to write an integration test for @RabbitListener annotation?
My question is really a follow-up to
RabbitMQ Integration Test and Threading
There it states to wrap "your listeners" and pass in a CountDownLatch, and eventually all the threads will merge. That answer works when manually creating and injecting the message listener, but for @RabbitListener annotations I'm not sure how to pass in a CountDownLatch; the framework automagically creates the message listener behind the scenes.
Are there any other approaches?
With the help of #Gary Russell I was able to get an answer and used the following solution.
Conclusion: I must admit I'm indifferent about this solution (it feels like a hack), but it is the only thing I could get to work, and once you get past the initial one-time setup and actually understand the workflow it is not so painful. Basically it comes down to defining two @Beans and adding them to your integration test config.
Example solution posted below with explanations. Please feel free to suggest improvements to this solution.
1. Define a ProxyListenerBPP that, during Spring initialization, will listen for a specified clazz (i.e. our test class that contains the @RabbitListener annotation) and inject our custom CountDownLatchListenerInterceptor advice defined in the next step.
import org.aopalliance.aop.Advice;
import org.springframework.aop.framework.ProxyFactoryBean;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.core.Ordered;
import org.springframework.core.PriorityOrdered;

/**
 * BeanPostProcessor that, during Spring initialization, listens for a
 * specified clazz (i.e. our @RabbitListener annotated class) and
 * injects our custom CountDownLatchListenerInterceptor advice.
 * @author sjacobs
 */
public class ProxyListenerBPP implements BeanPostProcessor, BeanFactoryAware, Ordered, PriorityOrdered {

    public static final String ADVICE_BEAN_NAME = "wasCalled";

    private BeanFactory beanFactory;
    private Class<?> clazz;

    public ProxyListenerBPP(Class<?> clazz) {
        this.clazz = clazz;
    }

    @Override
    public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
        this.beanFactory = beanFactory;
    }

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        if (clazz.isAssignableFrom(bean.getClass())) {
            ProxyFactoryBean pfb = new ProxyFactoryBean();
            pfb.setProxyTargetClass(true); // CGLIB, false for JDK proxy (interface needed)
            pfb.setTarget(bean);
            pfb.addAdvice(this.beanFactory.getBean(ADVICE_BEAN_NAME, Advice.class));
            return pfb.getObject();
        }
        else {
            return bean;
        }
    }

    @Override
    public int getOrder() {
        return Ordered.LOWEST_PRECEDENCE - 1000; // just before the @RabbitListener post processor
    }
}
2. Create the MethodInterceptor advice implementation that will hold the reference to the CountDownLatch. The CountDownLatch needs to be referenced both in the integration test thread and inside the async worker thread running the @RabbitListener, so we can release the integration test thread as soon as the @RabbitListener async thread has completed execution. No need for polling.
import java.util.concurrent.CountDownLatch;

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

/**
 * AOP MethodInterceptor that maps a <b>single</b> CountDownLatch to one method and invokes
 * CountDownLatch.countDown() after the method has completed execution. The motivation behind this
 * is integration testing of Spring RabbitMQ async worker threads: the integration test thread
 * can rejoin after an async 'worker' thread has completed its task.
 * @author sjacobs
 */
public class CountDownLatchListenerInterceptor implements MethodInterceptor {

    private CountDownLatch countDownLatch = new CountDownLatch(1);

    private final String methodNameToInvokeCDL;

    public CountDownLatchListenerInterceptor(String methodName) {
        this.methodNameToInvokeCDL = methodName;
    }

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        String methodName = invocation.getMethod().getName();
        if (this.methodNameToInvokeCDL.equals(methodName)) {
            // invoke the async work
            Object result = invocation.proceed();
            // return us back to the 'awaiting' thread inside the integration test
            this.countDownLatch.countDown();
            // "reset" the CountDownLatch for the next @Test (if testing more async work)
            this.countDownLatch = new CountDownLatch(1);
            return result;
        } else
            return invocation.proceed();
    }

    public CountDownLatch getCountDownLatch() {
        return countDownLatch;
    }
}
3. Next, add the following @Bean(s) to your integration test config:
public class SomeClassThatHasRabbitListenerAnnotationsITConfig extends BaseIntegrationTestConfig {

    // Pass the test clazz that contains the @RabbitListener annotation into the constructor.
    @Bean
    public static ProxyListenerBPP listenerProxier() { // note static
        return new ProxyListenerBPP(SomeClassThatHasRabbitListenerAnnotations.class);
    }

    // Pass the name of the method that will be invoked by the async thread in
    // SomeClassThatHasRabbitListenerAnnotations.class, i.e. the method annotated
    // with @RabbitListener or @RabbitHandler. In our example that method is 'listen'.
    @Bean(name = ProxyListenerBPP.ADVICE_BEAN_NAME)
    public static Advice wasCalled() {
        String methodName = "listen";
        return new CountDownLatchListenerInterceptor(methodName);
    }

    // This is the @RabbitListener bean we are testing.
    @Bean
    public SomeClassThatHasRabbitListenerAnnotations rabbitListener() {
        return new SomeClassThatHasRabbitListenerAnnotations();
    }
}
4. Finally, in the integration @Test, after sending a message via rabbitTemplate to trigger the async thread, call CountDownLatch#await(...) on the latch obtained from the interceptor, making sure to pass timeout args so it can time out if something goes wrong or the process runs long. Once the async work completes, the integration test thread is notified (awakened) and we can finally begin to actually test/validate/verify the results of the async work.
@ContextConfiguration(classes = { SomeClassThatHasRabbitListenerAnnotationsITConfig.class })
public class SomeClassThatHasRabbitListenerAnnotationsIT extends BaseIntegrationTest {

    @Inject
    private CountDownLatchListenerInterceptor interceptor;

    @Inject
    private RabbitTemplate rabbitTemplate;

    @Test
    public void shouldReturnBackAfterAsyncThreadIsFinished() throws Exception {
        MyObject payload = new MyObject();
        rabbitTemplate.convertAndSend("some.defined.work.queue", payload);
        CountDownLatch cdl = interceptor.getCountDownLatch();

        // wait for the async thread to finish
        cdl.await(10, TimeUnit.SECONDS); // IMPORTANT: set timeout args (await returns false on timeout)

        // Begin the actual testing of the results of the async work:
        // check the database?
        // download a msg from another queue?
        // verify email was sent...
        // etc...
    }
}
It's a bit more tricky with @RabbitListener, but the simplest way is to advise the listener.
With the custom listener container factory, just have your test case add the advice to the factory.
The advice would be a MethodInterceptor; the invocation will have two arguments: the Channel and the (unconverted) Message. The advice has to be injected before the container(s) are created.
Alternatively, get a reference to the container using the registry and add the advice later (but you'll have to call initialize() to force the new advice to be applied).
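A sketch of that registry approach (assuming the listener id "listener.mypojo" from the earlier example):

@Autowired
private RabbitListenerEndpointRegistry registry;

public void adviseListener(Advice advice) {
    SimpleMessageListenerContainer container =
            (SimpleMessageListenerContainer) registry.getListenerContainer("listener.mypojo");
    container.setAdviceChain(advice);
    container.initialize(); // re-initialize so the new advice is applied
}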
An alternative would be a simple BeanPostProcessor to proxy your listener class before it is injected into the container. That way, you will see the method argument(s) after any conversion; you will also be able to verify any result returned by the listener (for request/reply scenarios).
If you are not familiar with these techniques, I can try to find some time to spin up a quick example for you.
EDIT
I issued a pull request to add an example to EnableRabbitIntegrationTests. It adds a listener bean with two annotated listener methods and a BeanPostProcessor that proxies the listener bean before it is injected into a listener container. An Advice is added to the proxy which counts down a latch when the expected messages are received.
This line in TopLevelTransaction (neo4j-kernel-2.1.2) throws a NullPointerException every time I call next() on an iterator obtained via GraphRepository#findAll():
protected void markAsRollbackOnly()
{
    try
    {
        transactionManager.getTransaction().setRollbackOnly(); // NPE here
    }
    catch ( Exception e )
    {
        throw new TransactionFailureException(
                "Failed to mark transaction as rollback only.", e );
    }
}
I found some threads about similar crashes with slightly different stack traces. The accepted solution on this question is to use "proxy" transaction management, but that seems like a band-aid solution. This question also mentions "proxy" transaction management and suggests that there might be something wrong with the #Transactional annotation when using AspectJ.
Is this legitimately a bug, or have I just set up my project incorrectly? My code is essentially the same as in my standalone hello world, with a slightly more complex main class:
@Component
public class Test2 {

    @Autowired
    FooRepository repo;

    public static void main(String[] args) {
        AbstractApplicationContext context = new AnnotationConfigApplicationContext("test2");
        Test2 test2 = context.getBean(Test2.class);
        test2.doStuff();
    }

    public void doStuff() {
        createFoo();
        printFoos();
    }

    @Transactional
    public Foo createFoo() {
        Foo foo = new Foo();
        foo.setName("Derp" + System.currentTimeMillis());
        repo.save(foo);
        System.out.println("saved " + foo.toString());
        return foo;
    }

    @Transactional
    public void printFoos() {
        Iterable<Foo> foos = repo.findAll();
        System.out.println("findAll() returned instance of " + foos.getClass().getName());
        Iterator<Foo> iter = foos.iterator();
        System.out.println("iterator is instance of " + iter.getClass().getName());
        if (iter.hasNext()) {
            iter.next(); // CRASHES HERE
        }
    }
}
I can post my POM if needed.
I didn't find a bug. Two or three things are required to make this work, depending on whether you want to use proxy or AspectJ transaction management.
First, transaction management must be enabled. Since I'm using annotation-based configuration, I did this by annotating my @Configuration class with @EnableTransactionManagement. Contrary to the docs, the default mode now seems to be AdviceMode.ASPECTJ, not AdviceMode.PROXY.
Next, you need to ensure that the Iterator is used within a transaction. In my example, if I use AdviceMode.PROXY the entire bean containing the @Autowired repository has to be annotated @Transactional. If I use AdviceMode.ASPECTJ I can annotate just the method. This is because the call to the method using the iterator is a self-call from within the bean, and proxy transaction management cannot intercept and manage internal calls.
Finally, if you're using AdviceMode.ASPECTJ you must set up weaving as discussed here.
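Putting the two proxy-mode requirements together, a minimal sketch looks like this (class names reuse my example above):

@Configuration
@EnableTransactionManagement(mode = AdviceMode.PROXY) // enable TX management explicitly
public class AppConfig {
    // ... repository and database beans ...
}

@Component
@Transactional // class-level: the external call to doStuff() opens the transaction,
               // since the internal createFoo()/printFoos() self-calls bypass the proxy
public class Test2 {
    // ... same as above ...
}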