@ConditionalOnBean(ClientRegistrationService::class) fails to match JdbcClientDetailsService - spring-security

I'm writing a REST controller that exposes CRUD operations based on the types of OAuth2 service beans that are found, something like this:
@Bean
@ConditionalOnBean(ClientDetailsService::class)
fun clientServiceController(
    clientDetailsService: ClientDetailsService
): ClientDetailsServiceController {
    return ClientDetailsServiceController(clientDetailsService)
}

@Bean
@ConditionalOnBean(ClientRegistrationService::class)
fun clientRegistrationServiceController(
    clientRegistrationService: ClientRegistrationService
): ClientRegistrationServiceController {
    return ClientRegistrationServiceController(clientRegistrationService)
}
I want to register a controller that exposes ClientDetailsService only if we do not have a ClientRegistrationService; if a ClientRegistrationService does exist, I want to additionally register a controller for the methods in that interface.
One of our modules that registers these controllers also registers a JdbcClientDetailsService bean, which implements both interfaces. Yet @ConditionalOnBean(ClientRegistrationService::class) fails to match it, so only the first bean is created but not the second.
This is an example of how we declare the JdbcClientDetailsService:
@Bean
fun jdbcClientDetailsService(
    passwordEncoder: PasswordEncoder,
    dataSource: DataSource
): JdbcClientDetailsService {
    return JdbcClientDetailsService(dataSource).apply { setPasswordEncoder(passwordEncoder) }
}
The odd thing is that an @Autowired ClientRegistrationService does successfully inject the JdbcClientDetailsService.
What am I missing? How can I declare a bean that implements both interfaces and match correctly against the conditionals? Is there a workaround?

I managed to get around this with the following:
@Bean
@Lazy
@Scope(proxyMode = ScopedProxyMode.INTERFACES)
public ClientRegistrationService registrationDetailsService(ClientDetailsServiceConfigurer configurer)
        throws Exception {
    ClientDetailsService built = configurer.and().build();
    if (built instanceof ClientRegistrationService) {
        return (ClientRegistrationService) built;
    } else {
        throw new IllegalStateException(built + " is not instanceof " + ClientRegistrationService.class);
    }
}
It applies the same pattern as ClientDetailsServiceConfiguration and relies on the same configurer.
You might get rid of '@Scope(proxyMode = ScopedProxyMode.INTERFACES)' if you want to retrieve an actual JdbcClientDetailsService.

Related

Can't inject a Guice dependency into a Jersey filter

While setting up a bridge between Guice and Jersey, I ran into a problem.
When trying to create a Jersey filter, I was unable to inject Guice dependencies into it.
I found a duplicate question, but there is no solution to the problem there.
Everything is exactly the same.
The only difference is that I don't get a startup error. The filter works, but my dependencies are null.
Interestingly, Filter and HttpFilter work fine, but that approach doesn't really suit my case.
Another interesting thing: in the resource, which as I understand is managed by HK2, I can inject a Guice bean.
@ApplicationPath("/test")
private static class TestApplicationConfig extends ResourceConfig
{
    public TestApplicationConfig()
    {
        register(JacksonFeature.class);
        register(AuthFilter.class);
        register(new ContainerLifecycleListener()
        {
            public void onStartup(Container container)
            {
                ServletContainer servletContainer = (ServletContainer) container;
                ServiceLocator serviceLocator = container.getApplicationHandler().getServiceLocator();
                GuiceBridge.getGuiceBridge().initializeGuiceBridge(serviceLocator);
                GuiceIntoHK2Bridge guiceBridge = serviceLocator.getService(GuiceIntoHK2Bridge.class);
                Injector injector = (Injector) servletContainer
                        .getServletContext()
                        .getAttribute(Injector.class.getName());
                guiceBridge.bridgeGuiceInjector(injector);
            }

            public void onReload(Container container)
            {
            }

            public void onShutdown(Container container)
            {
            }
        });
    }
}
In a ServletModule subclass:
serve(path).with(ServletContainer.class, ImmutableMap.of(
        "javax.ws.rs.Application", TestApplicationConfig.class.getName(),
        "jersey.config.server.provider.packages", sb.toString()));
I tried both register(AuthFilter.class) and @Provider:
@Singleton
@Provider
public class AuthFilter implements ContainerRequestFilter
{
    @Inject
    private SomeInjectedService someInjectedService; // null here

    @Context
    private ResourceInfo resourceInfo;

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException
    {
        // some code
    }
}
I register SomeInjectedService with Guice:
bind(SomeInjectedService.class).asEagerSingleton();
Where can I start diagnosing and what can I do?
UPD:
I noticed different behavior when using different annotations.
If I use javax.inject.Inject, I get the following error message.
org.glassfish.hk2.api.MultiException: A MultiException has 3 exceptions. They are:
1. org.glassfish.hk2.api.UnsatisfiedDependencyException: There was no object available for injection at SystemInjecteeImpl(requiredType=SomeInjectedService,parent=AuthFilter,qualifiers={},position=-1,optional=false,self=false,unqualified=null,1496814489)
2. java.lang.IllegalArgumentException: While attempting to resolve the dependencies of some.package.AuthFilter errors were found
3. java.lang.IllegalStateException: Unable to perform operation: resolve on some.package.AuthFilter
With com.google.inject.Inject, the field is just null. As I understand it, that approach is not correct anyway.
Considering that the javax @Inject tries to inject the service but can't find it, can we conclude that the bridge is not working correctly? But if it isn't working correctly, why can I inject this service into my resource?
#Path("/test")
#Produces(MediaType.APPLICATION_JSON)
#Consumes(MediaType.APPLICATION_JSON)
public class SomeResource
{
private final SomeInjectedService someInjectedResource;
#Inject // here I use javax annotation and this code working correctry
public SomeResource(SomeInjectedService someInjectedResource)
{
this.someInjectedResource = someInjectedResource;
}
#GET
#Path("/{user}")
public Response returnSomeResponse(#PathParam("user") String user) throws Exception
{
// some code
}
}

Getting an injected object using CDI Produces

I have a class (OmeletteMaker) that contains an injected field (Vegetable). I would like to write a producer that instantiates an injected object of this class. If I use 'new', the result will not use injection. If I try to use a WeldContainer, I get an exception, since OmeletteMaker is @Alternative. Is there a third way to achieve this?
Here is my code:
@Alternative
public class OmeletteMaker implements EggMaker {
    @Inject
    Vegetable vegetable;

    @Override
    public String toString() {
        return "Omelette: " + vegetable;
    }
}
a vegetable for injection:
public class Tomato implements Vegetable {
    @Override
    public String toString() {
        return "Tomato";
    }
}
main file
public class CafeteriaMainApp {
    public static WeldContainer container = new Weld().initialize();

    public static void main(String[] args) {
        Restaurant restaurant = (Restaurant) container.instance().select(Restaurant.class).get();
        System.out.println(restaurant);
    }

    @Produces
    public EggMaker eggMakerGenerator() {
        return new OmeletteMaker();
    }
}
The result I get is "Restaurant: Omelette: null", while I'd like to get "Restaurant: Omelette: Tomato".
If you provide OmeletteMaker yourself, its fields will not be injected by the CDI container. To use @Alternative, don't forget to specify it in beans.xml and let the container instantiate the EggMaker instance:
<alternatives>
    <class>your.package.path.OmeletteMaker</class>
</alternatives>
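With the alternative enabled and the producer method removed, injection can be left entirely to the container; here is a minimal sketch (the Restaurant internals below are hypothetical, since the question doesn't show them):
public class Restaurant {
    @Inject
    EggMaker eggMaker; // container-created OmeletteMaker, so its Vegetable field is injected too

    @Override
    public String toString() {
        return "Restaurant: " + eggMaker;
    }
}
Because Tomato is the only Vegetable implementation in the example, the container resolves it and the output becomes "Restaurant: Omelette: Tomato".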
If you only want to implement this with a producer method, then my answer may be inappropriate. I don't think it is possible (with standard CDI). The docs say: producer methods provide a way to inject objects that are not beans, objects whose values may vary at runtime, and objects that require custom initialization.
Thanks to Kukeltje for pointing to the other CDI question in a comment:
With CDI extensions like DeltaSpike, it is possible to inject the fields into an object created with new, simply with BeanProvider#injectFields. I tested this myself:
@Produces
public EggMaker eggMakerProducer() {
    EggMaker eggMaker = new OmeletteMaker();
    BeanProvider.injectFields(eggMaker);
    return eggMaker;
}

Spring Cloud AWS issue with setting manual acknowledgement of an SQS message

I'm trying to implement manual deletion of AWS SQS messages using spring-cloud-aws-messaging. This feature was implemented in the scope of this ticket; from the example in the tests:
@SqsListener(value = "queueName", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
public void listen(SqsEventDTO message, Acknowledgment acknowledgment) {
    LOGGER.info("Received message {}", message.getFoo());
    try {
        acknowledgment.acknowledge().get();
    } catch (InterruptedException e) {
        LOGGER.error("Opps", e);
    } catch (ExecutionException e) {
        LOGGER.error("Opps", e);
    }
}
But I'm faced with an unexpected exception:
com.fasterxml.jackson.databind.exc.InvalidDefinitionException: Cannot construct instance of org.springframework.cloud.aws.messaging.listener.Acknowledgment (no Creators, like default constructor, exist): abstract types either need to be mapped to concrete types, have custom deserializer, or contain additional type information
The solution with SqsMessageDeletionPolicy.ON_SUCCESS works, but I want to avoid throwing an exception.
What have I missed in the configuration?
It took some fiddling around and trying different things from other SO answers.
Here is my code and I'll try to explain as best I can. I'm including everything that I'm using for my SQS consumer.
My config class is below. The only not-so-obvious thing to note is the converter and resolver objects instantiated in the queueMessageHandlerFactory method. The MappingJackson2MessageConverter class (in case it isn't obvious from the oh-so-obvious class name) handles the deserialization of the payload from SQS.
It's also important that the strict content type match be set to false.
Also, the MappingJackson2MessageConverter allows you to set your own Jackson ObjectMapper; however, if you do that, you will need to configure it as follows:
objectMapper.configure(MapperFeature.DEFAULT_VIEW_INCLUSION, false);
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
You may not want to do that, so you can leave it null and it will create its own ObjectMapper.
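For illustration, a small sketch of wiring such a customized ObjectMapper into the converter (only needed if you really want your own mapper; the setters shown here mirror the ones used in the config class below):
// Sketch: supplying a custom, correctly configured ObjectMapper to the converter.
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.configure(MapperFeature.DEFAULT_VIEW_INCLUSION, false);
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

MappingJackson2MessageConverter messageConverter = new MappingJackson2MessageConverter();
messageConverter.setSerializedPayloadClass(String.class);
messageConverter.setStrictContentTypeMatch(false);
messageConverter.setObjectMapper(objectMapper);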
I think the rest of the code is pretty self-explanatory...? Let me know if not.
One difference between our use cases: it looks like you're mapping your own custom object (SqsEventDTO), and I assume that's working? In that case, I don't think you will need the MappingJackson2MessageConverter, but I could be wrong.
@Configuration
public class AppConfig {

    @Bean
    @Primary
    public QueueMessageHandler queueMessageHandler(@Autowired QueueMessageHandlerFactory queueMessageHandlerFactory) {
        return queueMessageHandlerFactory.createQueueMessageHandler();
    }

    @Bean
    @Primary
    public QueueMessageHandlerFactory queueMessageHandlerFactory(@Autowired AmazonSQSAsync sqsClient) {
        QueueMessageHandlerFactory factory = new QueueMessageHandlerFactory();
        factory.setAmazonSqs(sqsClient);

        MappingJackson2MessageConverter messageConverter = new MappingJackson2MessageConverter();
        messageConverter.setSerializedPayloadClass(String.class);
        // Set strict content type match to false.
        messageConverter.setStrictContentTypeMatch(false);

        // Uses the MappingJackson2MessageConverter object to resolve/map
        // the payload against the Message/S3EventNotification argument.
        PayloadArgumentResolver payloadResolver = new PayloadArgumentResolver(messageConverter);

        // Extract the acknowledgment data from the payload's headers,
        // which then gets deserialized into the Acknowledgment object.
        AcknowledgmentHandlerMethodArgumentResolver acknowledgmentResolver = new AcknowledgmentHandlerMethodArgumentResolver("Acknowledgment");

        // I don't remember the specifics of WHY, however there is
        // something important about the order of the argument resolvers
        // in the list.
        factory.setArgumentResolvers(Arrays.asList(acknowledgmentResolver, payloadResolver));
        return factory;
    }

    @Bean("ConsumerBean")
    @Primary
    public SimpleMessageListenerContainer simpleMessageListenerContainer(@Autowired AmazonSQSAsync amazonSQSAsync,
            @Autowired QueueMessageHandler queueMessageHandler,
            @Autowired ThreadPoolTaskExecutor threadPoolExecutor) {
        SimpleMessageListenerContainer smlc = new SimpleMessageListenerContainer();
        smlc.setWaitTimeOut(20);
        smlc.setAmazonSqs(amazonSQSAsync);
        smlc.setMessageHandler(queueMessageHandler);
        smlc.setBeanName("ConsumerBean");
        smlc.setMaxNumberOfMessages(sqsMaxMessages);
        smlc.setTaskExecutor(threadPoolExecutor);
        return smlc;
    }

    @Bean
    @Primary
    public ThreadPoolTaskExecutor threadPoolTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(corePoolSize);
        executor.setAllowCoreThreadTimeOut(coreThreadsTimeout);
        executor.setWaitForTasksToCompleteOnShutdown(true);
        executor.setMaxPoolSize(maxPoolSize);
        executor.setKeepAliveSeconds(threadTimeoutSeconds);
        executor.setThreadNamePrefix(threadName);
        executor.initialize();
        return executor;
    }
}
My SQS consumer Service class is below.
@Service
public class RawConsumer {

    @SqsListener(deletionPolicy = SqsMessageDeletionPolicy.NEVER, value = "${input.sqs.queuename}")
    public void sqsListener(S3EventNotification event, Acknowledgment ack) throws Exception {
        // Handle event here
    }
}
I hope that helps, let me know if you have any issues.
What the question author did not mention is that he tried to customize the Jackson ObjectMapper. Therefore, he instantiated a MappingJackson2MessageConverter, wrapped that in a PayloadArgumentResolver and set this as the single HandlerMethodArgumentResolver on QueueMessageHandlerFactory.setArgumentResolvers(). Doing this overrides the list of default argument resolvers defined in QueueMessageHandler.initArgumentResolvers() (which is invoked when creating an instance of QueueMessageHandler inside the QueueMessageHandlerFactory).
When, for example, only a PayloadArgumentResolver is set as the single argument resolver, the Acknowledgment argument can no longer be bound.
A better solution for customizing the Jackson message converter than overriding the list of argument resolvers is thus to set the list of message converters on the QueueMessageHandlerFactory:
@Bean
fun queueMessageHandlerFactory(objectMapper: ObjectMapper): QueueMessageHandlerFactory {
    val factory = QueueMessageHandlerFactory()
    val messageConverter = MappingJackson2MessageConverter()
    messageConverter.objectMapper = objectMapper
    factory.setMessageConverters(listOf(messageConverter)) // <-- this is the important line.
    return factory
}
The registered MessageConverters are used as PayloadArgumentResolvers inside QueueMessageHandler.initArgumentResolvers().
Thus, this is a less intrusive change.

How to inject a list with different implementations of the same interface in a nested module scenario via Guice?

There is an interface DCE, which is implemented by a class DCEImpl which has a dependency, say, string S, which it gets via its constructor.
The universe of S is limited, say S can only take values {'A','B','C'}.
There is an already existing Guice module that accepts the value of S in its constructor, and then binds the interface DCE to the correctly initialized version of DCEImpl.
public class DCEModule extends AbstractModule {
    private final String s;

    public DCEModule(String s) {
        this.s = s;
    }

    protected void configure() {
        bind(DCE.class).toInstance(new DCEImpl(s));
    }
}
Now I have a class C which needs a List<DCE> with all 3 implementations (actually a lot more than 3; I'm using 3 for example purposes).
I want to inject this list via Guice into C. To do that, I created a new module DCEPModule, which will provide a List<DCE> in this way:
@Provides
List<DCE> getDCE() {
    List<DCE> listDomains = new ArrayList<>();
    for (String s : S) {
        Module m = new DCEModule(s);
        install(m);
        Injector injector = Guice.createInjector(m);
        listDomains.add(injector.getInstance(DCE.class));
    }
    return listDomains;
}
My problem is that I don't want to call a new injector in this module, because DCEPModule will be installed by a different module.
public class NewModule extends AbstractModule {
    protected void configure() {
        install(new DCEPModule());
    }
}
I want a way to get the List<DCE> without explicitly creating a new injector in DCEPModule.
You can achieve this by using a Multibinder (javadoc, wiki).
Here’s an example:
public class SnacksModule extends AbstractModule {
    protected void configure() {
        Multibinder<Snack> multibinder = Multibinder.newSetBinder(binder(), Snack.class);
        multibinder.addBinding().toInstance(new Twix());
        multibinder.addBinding().toProvider(SnickersProvider.class);
        multibinder.addBinding().to(Skittles.class);
    }
}
Now, the multibinder will provide a Set<Snack>. If you absolutely need a List instead of a Set, then you can add a method to your module like this:
@Provides
public List<Snack> getSnackList(Set<Snack> snackSet) {
    return new ArrayList<>(snackSet);
}
You can add implementations to the same Multibinding from more than one module. When you call Multibinder.newSetBinder(binder, type) it doesn't necessarily create a new Multibinding. If a Multibinding already exists for that type, you will get the existing one.
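Applied to the DCE scenario from the question, a rough sketch of what DCEPModule could look like with a Multibinder (this assumes each DCEImpl can simply be constructed with its value of S, as DCEModule does; no extra Injector is needed):
public class DCEPModule extends AbstractModule {

    private static final List<String> S_VALUES = Arrays.asList("A", "B", "C");

    @Override
    protected void configure() {
        // Contribute one DCEImpl per value of S to a shared Set<DCE>.
        Multibinder<DCE> multibinder = Multibinder.newSetBinder(binder(), DCE.class);
        for (String s : S_VALUES) {
            multibinder.addBinding().toInstance(new DCEImpl(s));
        }
    }

    @Provides
    public List<DCE> getDCE(Set<DCE> dces) {
        // Adapt the injected Set<DCE> to the List<DCE> that class C expects.
        return new ArrayList<>(dces);
    }
}
NewModule can then keep its install(new DCEPModule()) call as before, and C can have the List<DCE> injected directly.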

Crashes related to GraphRepository#findAll() when using AspectJ

This line in TopLevelTransaction (neo4j-kernel-2.1.2) throws a NullPointerException every time I call next() on an iterator obtained via GraphRepository#findAll():
protected void markAsRollbackOnly()
{
    try
    {
        transactionManager.getTransaction().setRollbackOnly(); // NPE here
    }
    catch ( Exception e )
    {
        throw new TransactionFailureException(
                "Failed to mark transaction as rollback only.", e );
    }
}
I found some threads about similar crashes with slightly different stack traces. The accepted solution on this question is to use "proxy" transaction management, but that seems like a band-aid solution. This question also mentions "proxy" transaction management and suggests that there might be something wrong with the @Transactional annotation when using AspectJ.
Is this legitimately a bug, or have I just set up my project incorrectly? My code is essentially the same as in my standalone hello world, with a slightly more complex main class:
@Component
public class Test2 {
    @Autowired
    FooRepository repo;

    public static void main(String[] args) {
        AbstractApplicationContext context = new AnnotationConfigApplicationContext("test2");
        Test2 test2 = context.getBean(Test2.class);
        test2.doStuff();
    }

    public void doStuff() {
        createFoo();
        printFoos();
    }

    @Transactional
    public Foo createFoo() {
        Foo foo = new Foo();
        foo.setName("Derp" + System.currentTimeMillis());
        repo.save(foo);
        System.out.println("saved " + foo.toString());
        return foo;
    }

    @Transactional
    public void printFoos() {
        Iterable<Foo> foos = repo.findAll();
        System.out.println("findAll() returned instance of " + foos.getClass().getName());
        Iterator<Foo> iter = foos.iterator();
        System.out.println("iterator is instance of " + iter.getClass().getName());
        if (iter.hasNext()) {
            iter.next(); // CRASHES HERE
        }
    }
}
I can post my POM if needed.
I didn't find a bug. Two or three things are required to make this work, depending on whether you want to use proxy or AspectJ transaction management.
First, transaction management must be enabled. Since I'm using annotation-based configuration, I did this by annotating my @Configuration class with @EnableTransactionManagement. Contrary to the docs, the default mode now seems to be AdviceMode.ASPECTJ, not AdviceMode.PROXY.
Next, you need to ensure that the Iterator is used within a transaction. In my example, if I use AdviceMode.PROXY the entire bean containing the @Autowired repository has to be annotated @Transactional. If I use AdviceMode.ASPECTJ I can annotate just the method. This is because the call to the method using the iterator is a self-call from within the bean, and proxy transaction management cannot intercept and manage internal calls.
Finally, if you're using AdviceMode.ASPECTJ, you must set up weaving as discussed here.
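As a rough sketch of that setup (the class name is illustrative; the exact weaving mechanism, load-time vs. compile-time, depends on your build):
@Configuration
@EnableTransactionManagement(mode = AdviceMode.ASPECTJ)
// Load-time weaving also needs a weaving agent such as spring-instrument on the JVM command line;
// alternatively, weave spring-aspects at compile time (e.g. with the AspectJ Maven plugin).
@EnableLoadTimeWeaving
public class TransactionConfig {
    // PlatformTransactionManager, Neo4j/repository configuration, etc. go here
}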
