Spring AmqpAdmin create exchange & queues in vHost - spring-amqp

This is similar to Send message to arbitrary vhost / exchange with RabbitMQ / Spring AMQP, but here I'm trying to have the AmqpAdmin create an exchange under a specific vHost.
I tried something like:
SimpleResourceHolder.bind(((RabbitAdmin) amqpAdmin).getRabbitTemplate().getConnectionFactory(), vhost);
...
amqpAdmin.declareExchange(exchange);
...
amqpAdmin.declareQueue(queue);
amqpAdmin.declareBinding(BindingBuilder.bind(queue).to(exchange).with(routingKey));
SimpleResourceHolder.unbind(((RabbitAdmin) amqpAdmin).getRabbitTemplate().getConnectionFactory());
However, AmqpAdmin keeps using "/".
Is there a way to tell it to use a specific vHost programmatically at runtime?
Update 1:
Based on @artem-bilan's answer I've had (partial) success with:
public void sendToTopic(String domain, String topic, String routingKey, Object payload) {
    bindToVirtualHost(template, domain);
    try {
        template.setUsePublisherConnection(true);
        template.convertAndSend(topic, routingKey, payload);
    } finally {
        unbindFromVirtualHost(template);
        template.setUsePublisherConnection(false);
    }
}

private void bindToVirtualHost(RabbitTemplate rabbitTemplate, String vHost) {
    AbstractConnectionFactory factory = (AbstractConnectionFactory) rabbitTemplate.getConnectionFactory();
    LOG.debug("binding {} to {}", factory, vHost);
    factory.setVirtualHost(vHost);
}

private void unbindFromVirtualHost(RabbitTemplate rabbitTemplate) {
    AbstractConnectionFactory factory = (AbstractConnectionFactory) rabbitTemplate.getConnectionFactory();
    LOG.debug("unbinding {} back to default {}", factory, DEFAULT_VHOST);
    factory.setVirtualHost(DEFAULT_VHOST);
}
I say (partial) because if I do:
// pre: manually create vHost foo
sendToTopic("bar","myTopic","key","The payload");  // connection error; protocol method: #method<connection.close>(reply-code=530, reply-text=NOT_ALLOWED - vhost bar not found) -- as expected
sendToTopic("foo","myTopic","key","The payload2"); // success, as expected
sendToTopic("bar","myTopic","key","The payload3"); // success, NOT EXPECTED!
and the message with payload3 ends up in vHost foo.

The RabbitAdmin cannot do more than its ConnectionFactory allows. The vHost is similar to the host/port: it is a property of the connection and cannot be changed from the end-user perspective.
See:
/**
 * Create a new CachingConnectionFactory given a host name
 * and port.
 * @param hostNameArg the host name to connect to
 * @param port the port number
 */
public CachingConnectionFactory(@Nullable String hostNameArg, int port) {
and its:
public void setVirtualHost(String virtualHost) {
The RabbitAdmin, in turn, looks like this:
/**
 * Construct an instance using the provided {@link ConnectionFactory}.
 * @param connectionFactory the connection factory - must not be null.
 */
public RabbitAdmin(ConnectionFactory connectionFactory) {
So, to deal with a different vHost, you need a dedicated ConnectionFactory and RabbitAdmin for it.
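For example, a minimal sketch of declaring into a specific vHost this way (host, port and guest/guest credentials are illustrative; the vHost must already exist on the broker):

import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.TopicExchange;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;

public class PerVhostDeclarations {

    /**
     * Declares an exchange, queue and binding inside the given vHost by using
     * a dedicated ConnectionFactory and RabbitAdmin for that vHost.
     */
    public void declareIn(String vHost, String exchangeName, String queueName, String routingKey) {
        CachingConnectionFactory cf = new CachingConnectionFactory("localhost", 5672); // illustrative host/port
        cf.setUsername("guest");   // illustrative credentials
        cf.setPassword("guest");
        cf.setVirtualHost(vHost);  // the target vHost
        try {
            RabbitAdmin admin = new RabbitAdmin(cf);
            TopicExchange exchange = new TopicExchange(exchangeName);
            Queue queue = new Queue(queueName);
            admin.declareExchange(exchange);
            admin.declareQueue(queue);
            admin.declareBinding(BindingBuilder.bind(queue).to(exchange).with(routingKey));
        } finally {
            cf.destroy(); // close the per-vHost connection when done
        }
    }
}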
And no, AmqpAdmin cannot create a vHost for you: that is not an AMQP protocol operation.
See https://docs.spring.io/spring-amqp/docs/2.2.7.RELEASE/reference/html/#management-rest-api for more info.
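If you do need to create the vHost itself programmatically, that has to go through the RabbitMQ Management HTTP API rather than AMQP. A rough sketch using RestTemplate (host, port and guest/guest credentials are assumptions; requires Spring Framework 5.1+ for setBasicAuth):

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.web.client.RestTemplate;

public class VhostCreator {

    // Creates a vHost via PUT /api/vhosts/{name} on the management plugin.
    public void createVhost(String vHost) {
        RestTemplate rest = new RestTemplate();
        HttpHeaders headers = new HttpHeaders();
        headers.setBasicAuth("guest", "guest"); // illustrative credentials
        HttpEntity<Void> request = new HttpEntity<>(headers);
        rest.exchange("http://localhost:15672/api/vhosts/{name}", HttpMethod.PUT, request, Void.class, vHost);
    }
}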

Related

Avro Deserialization exception handling with Spring Cloud Stream

I have an application using Spring Cloud Stream and Spring Kafka, which processes Avro messages. The application works fine, but now I'd like to add some error handling.
The Goal: I would like to catch deserialization exceptions, build a new object with the exception details + original Kafka message + custom context info, and push this object to a dedicated Kafka topic. Basically a DLQ, but the original message will be intercepted and decorated.
The Problem: While I can intercept the exception, I can't figure out how to acquire the original message from Kafka (TODO 1, below). I've looked all through the data object passed to ConsumerAwareErrorHandler.handle and I don't see it there.
Below is the code I have:
@EnableBinding(EventStream.class)
@SpringBootApplication
@Slf4j
public class SpringcloudApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringcloudApplication.class, args);
    }

    /* Configure custom exception handler */
    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> cust() {
        return (container, destination, group) -> {
            container.setErrorHandler(new ConsumerAwareErrorHandler() {
                @Override
                public void handle(Exception thrownException, ConsumerRecord<?, ?> data, Consumer<?, ?> consumer) {
                    log.info("Got error with data: {}", data);
                    // TODO 1 - How to get original message?
                    // TODO 2 - Send to dedicated (DLQ) topic
                }
            });
        };
    }

    @StreamListener(EventStream.INBOUND)
    public void consumeEvent(@Payload Message message) {
        log.info("Consuming event --> {}", message.toString());
        produceEvent(message);
    }

    @Autowired private EventStream eventStream;

    public Boolean produceEvent(Message message) {
        log.info("Producing event --> {}", message.toString());
        return eventStream
                .producer()
                .send(MessageBuilder.withPayload(message)
                        .setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON)
                        .build());
    }
}
And the properties files:
spring:
  cloud:
    stream:
      default-binder: kafka
      default:
        consumer:
          useNativeEncoding: true
        producer:
          useNativeEncoding: true
      kafka:
        binder:
          brokers: localhost:9092
          producer-properties:
            key.serializer: org.apache.kafka.common.serialization.StringSerializer
            value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
            schema.registry.url: "http://localhost:8081"
          consumer-properties:
            key.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            value.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            schema.registry.url: "http://localhost:8081"
            specific.avro.reader: true
            spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
            spring.deserializer.value.delegate.class: io.confluent.kafka.serializers.KafkaAvroDeserializer
      bindings:
        event-consumer:
          destination: data_stream_in # incoming topic
          contentType: application/*+avro
          group: data_stream_consumer
        event-producer:
          destination: data_stream_out
          contentType: application/*+avro
I am using the following versions:
Spring Boot 2.3.2.RELEASE
Spring Cloud: Hoxton.SR8
spring-cloud-stream-binder-kafka 3.0.8.RELEASE
spring-kafka 2.5.12
Any help is appreciated!
The second argument of the handle method is the ConsumerRecord, which is the original Kafka record. If you want the record to be automatically sent to a DLQ, you can do the following.
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer(SeekToCurrentErrorHandler errorHandler) {
    return (container, dest, group) -> {
        container.setErrorHandler(errorHandler);
    };
}

@Bean
public SeekToCurrentErrorHandler errorHandler(DeadLetterPublishingRecoverer deadLetterPublishingRecoverer) {
    return new SeekToCurrentErrorHandler(deadLetterPublishingRecoverer);
}

@Bean
public DeadLetterPublishingRecoverer publisher(KafkaOperations bytesTemplate) {
    return new DeadLetterPublishingRecoverer(bytesTemplate);
}
Essentially, you are setting up a SeekToCurrentErrorHandler which is capable of sending the failed record to the DLQ. See the Spring for Apache Kafka reference docs for more details on how DeadLetterPublishingRecoverer works: https://docs.spring.io/spring-kafka/docs/current/reference/html/#dead-letters
More info on SeekToCurrentErrorHandler: https://docs.spring.io/spring-kafka/docs/current/reference/html/#seek-to-current
You also need to configure an ErrorHandlingDeserializer:
spring.cloud.stream.kafka.binder.configuration.value.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
spring.cloud.stream.kafka.binder.configuration.spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
...
similar for the value class.
More info on ErrorHandlingDeserializer: https://docs.spring.io/spring-kafka/docs/current/reference/html/#error-handling-deserializer
If you want to modify the record and add custom information before it goes to the DLQ, you can do that by overriding the handle method to gain access to the ConsumerRecord, and then calling the superclass method, as sketched below.
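An untested sketch of that idea against the spring-kafka 2.5 API used above (the class name is illustrative):

import java.util.List;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;

// Untested sketch: inspect/decorate the failed record, then let the standard
// SeekToCurrentErrorHandler logic (and its DeadLetterPublishingRecoverer) run.
public class DecoratingErrorHandler extends SeekToCurrentErrorHandler {

    public DecoratingErrorHandler(DeadLetterPublishingRecoverer recoverer) {
        super(recoverer);
    }

    @Override
    public void handle(Exception thrownException, List<ConsumerRecord<?, ?>> records,
            Consumer<?, ?> consumer, MessageListenerContainer container) {
        if (!records.isEmpty()) {
            ConsumerRecord<?, ?> failed = records.get(0); // the record that caused the failure
            // build your custom error object here from 'failed' + thrownException,
            // e.g. stash it in a header or publish it yourself before delegating
        }
        super.handle(thrownException, records, consumer, container);
    }
}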

Using Spring AMQP consumer in spring-webflux

I have an app that's using Boot 2.0 with webflux, and has an endpoint returning a Flux of ServerSentEvent. The events are created by leveraging spring-amqp to consume messages off a RabbitMQ queue. My question is: How do I best bridge the MessageListener's configured listener method to a Flux that can be passed up to my controller?
Project Reactor's create section mentions that it "can be very useful to bridge an existing API with the reactive world - such as an asynchronous API based on listeners", but I'm unsure how to hook into the message listener directly since it's wrapped in the DirectMessageListenerContainer and MessageListenerAdapter. Their example from the create section:
Flux<String> bridge = Flux.create(sink -> {
    myEventProcessor.register(
        new MyEventListener<String>() {

            public void onDataChunk(List<String> chunk) {
                for (String s : chunk) {
                    sink.next(s);
                }
            }

            public void processComplete() {
                sink.complete();
            }
        });
});
So far, the best option I have is to create a Processor and simply call onNext() each time in the RabbitMQ listener method to manually produce an event.
I have something like this:
@SpringBootApplication
@RestController
public class AmqpToWebfluxApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext applicationContext = SpringApplication.run(AmqpToWebfluxApplication.class, args);

        RabbitTemplate rabbitTemplate = applicationContext.getBean(RabbitTemplate.class);
        for (int i = 0; i < 100; i++) {
            rabbitTemplate.convertAndSend("foo", "event-" + i);
        }
    }

    private TopicProcessor<String> sseFluxProcessor = TopicProcessor.share("sseFromAmqp", Queues.SMALL_BUFFER_SIZE);

    @GetMapping(value = "/sseFromAmqp", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> getSeeFromAmqp() {
        return this.sseFluxProcessor;
    }

    @RabbitListener(id = "fooListener", queues = "foo")
    public void handleAmqpMessages(String message) {
        this.sseFluxProcessor.onNext(message);
    }
}
The TopicProcessor.share() allows many concurrent subscribers, which is what we get when we return this TopicProcessor as a Flux from our /sseFromAmqp REST endpoint via WebFlux.
The @RabbitListener just delegates its received messages to that TopicProcessor.
In main() I have code to confirm that I can publish to the TopicProcessor even when there are no subscribers.
Tested with two separate curl sessions and messages published to the queue via the RabbitMQ Management Plugin.
By the way, I use share() because of: https://projectreactor.io/docs/core/release/reference/#_topicprocessor
"from multiple upstream Publishers when created in the shared configuration"
That's because the @RabbitListener really can be called from different listener container threads, concurrently.
UPDATE
Also I moved this sample to my Sandbox: https://github.com/artembilan/sendbox/tree/master/amqp-to-webflux
Let's suppose you want to have a single RabbitMQ listener that somehow puts messages into one or more Fluxes. Flux.create is indeed a good way to create such a Flux.
Let's start with the Messaging with RabbitMQ Spring guide and try to adapt it.
The original Receiver has to be modified so that it can put received messages into a FluxSink.
@Component
public class Receiver {

    /**
     * Collection of sinks enables more than one subscriber.
     * Keep in mind that the FluxSink instance the emitter works with is provided per-subscriber.
     * (A CopyOnWriteArrayList is used so a sink can be removed safely while iterating.)
     */
    private final List<FluxSink<String>> sinks = new CopyOnWriteArrayList<>();

    /**
     * Adds a sink to the collection. From now on, new messages will be put to the sink.
     * Called when a new Flux is created by the Flux.create method.
     */
    public void addSink(FluxSink<String> sink) {
        sinks.add(sink);
    }

    public void receiveMessage(String message) {
        sinks.forEach(sink -> {
            if (!sink.isCancelled()) {
                sink.next(message);
            } else {
                // If cancelled, don't put any new messages to the sink.
                // A sink is cancelled when a subscriber cancels its subscription.
                sinks.remove(sink);
            }
        });
    }
}
Now we have a receiver that puts RabbitMQ messages into the sinks. Creating a Flux is then rather simple.
@Component
public class FluxFactory {

    private final Receiver receiver;

    public FluxFactory(Receiver receiver) {
        this.receiver = receiver;
    }

    public Flux<String> createFlux() {
        return Flux.create(receiver::addSink);
    }
}
The Receiver bean is autowired into the factory. Of course, you don't have to create a special factory; this only demonstrates how to use the Receiver to create the Flux.
The rest of the application from the Messaging with RabbitMQ guide may stay the same, including the bean instantiation.
@SpringBootApplication
public class Application {

    ...

    @Bean
    SimpleMessageListenerContainer container(ConnectionFactory connectionFactory,
            MessageListenerAdapter listenerAdapter) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setQueueNames(queueName);
        container.setMessageListener(listenerAdapter);
        return container;
    }

    @Bean
    MessageListenerAdapter listenerAdapter(Receiver receiver) {
        return new MessageListenerAdapter(receiver, "receiveMessage");
    }

    ...
}
I used a similar design to adapt the Twitter streaming API successfully, though there may be a nicer way to do it.
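To tie this back to the original question, the Flux produced by such a factory can be exposed from a WebFlux controller as a server-sent event stream. A minimal sketch (class and path names are illustrative):

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import reactor.core.publisher.Flux;

// Minimal sketch: expose the bridged RabbitMQ messages as a server-sent event stream.
@RestController
public class SseController {

    private final FluxFactory fluxFactory;

    public SseController(FluxFactory fluxFactory) {
        this.fluxFactory = fluxFactory;
    }

    @GetMapping(value = "/sse", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> sse() {
        // A new Flux (and FluxSink) is created per subscriber.
        return fluxFactory.createFlux();
    }
}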

Access to HttpConduit before loading WSDL on CXF Dynamic Client

Before loading the WSDL from an https URL for my dynamic client, I need to set the appropriate configuration on the HTTPConduit to avoid SSL errors. According to the docs we could hardcode the conduit, but I'm not sure how to do it programmatically. Is there a way I could get hold of the HTTPConduit before creating the Client object on the DynamicClientFactory?
JaxWsDynamicClientFactory dcf = JaxWsDynamicClientFactory.newInstance();
//Need to get HttpConduit here before the client is created, how?
Client client = dcf.createClient(wsdlUri);
// Can access http conduit only after client is created
HTTPConduit conduit = (HTTPConduit) client.getConduit();
One way to get hold of the HTTPConduit and customize the http(s) configuration is through an HTTPConduitConfigurer. The code snippet below shows how it can be done.
Bus bus = CXFBusFactory.getThreadDefaultBus();
bus.setExtension(new HTTPConduitConfigurer() {
    @Override
    public void configure(String name, String address, HTTPConduit conduit) {
        // set conduit parameters ...
        // e.g. disable host name verification
        TLSClientParameters clientParameters = new TLSClientParameters();
        clientParameters.setHostnameVerifier(new HostnameVerifier() {
            @Override
            public boolean verify(String hostname, SSLSession session) {
                return true;
            }
        });
        conduit.setTlsClientParameters(clientParameters);
    }
}, HTTPConduitConfigurer.class);

JaxWsDynamicClientFactory dcf = JaxWsDynamicClientFactory.newInstance(bus);
Client client = dcf.createClient(wsdlUri);

How to secure reactor netServer with spring security?

I'm trying to develop a "hybrid" server using a Spring Boot web application with embedded Tomcat and a NetServer from Reactor to scale up my REST API.
There are no Spring controllers; all incoming requests are handled by the NetServer.
Nevertheless, I'd like to have a login page using Spring Security's remember-me facilities to authenticate the user, and use this authentication to secure incoming requests on the Reactor NetServer.
I started by implementing the NetServer, following this tutorial: reactor thumbmailer.
Here is my NetServer:
NetServer<FullHttpRequest, FullHttpResponse> server = new TcpServerSpec<FullHttpRequest, FullHttpResponse>(NettyTcpServer.class)
    .env(env)
    .dispatcher("sync")
    .listen(8080)
    .options(opts)
    .consume(ch -> {
        // attach an error handler
        ch.when(Throwable.class, UserController.errorHandler(ch));

        // filter requests by URI
        Stream<FullHttpRequest> in = ch.in();

        // serve image thumbnail to browser
        in.filter((FullHttpRequest req) -> req.getUri().startsWith(UserController.GET_USER_PROFILE))
          .consume(UserController.getUserProfile(ch));
    })
    .get();
So when a user tries to load his profile, the incoming request is handled by the UserController:
public static Consumer<FullHttpRequest> getUserProfile(NetChannel<FullHttpRequest, FullHttpResponse> channel) {
    UserService userService = StaticContextAccessor.getBean(UserService.class);
    return req -> {
        try {
            LoginDTO login = RestApiUtils.parseJson(LoginDTO.class, RestApiUtils.getJsonContent(req));
            DefaultFullHttpResponse resp = new DefaultFullHttpResponse(HTTP_1_1, OK);
            String result = userService.loadUserProfile(login);
            resp.headers().set(CONTENT_TYPE, "application/json");
            resp.headers().set(CONTENT_LENGTH, result.length());
            resp.content().writeBytes(result.getBytes());
            channel.send(resp);
        } catch (Exception e) {
            channel.send(badRequest(e.getMessage()));
        }
    };
}
Here is the hack: getUserProfile is a static method, so I can't use GlobalMethodSecurity to secure it.
I then inject a UserService into this controller using a StaticContextAccessor:
@Component
public class StaticContextAccessor {

    private static StaticContextAccessor instance;

    @Autowired
    private ApplicationContext applicationContext;

    @PostConstruct
    public void registerInstance() {
        instance = this;
    }

    public static <T> T getBean(Class<T> clazz) {
        return instance.applicationContext.getBean(clazz);
    }
}
UserService:
@Service
@PreAuthorize("true")
public class UserServiceImpl implements UserService {

    public String loadUserProfile(LoginDTO login) {
        // TODO load profile in mongo
        return new GsonBuilder().create().toJson(login);
    }
}
The service is managed by Spring, so I guess I could use Spring's GlobalMethodSecurity on it (I'm still developing this part, but I'm not sure this is the best way to secure my NetServer).
Is there an easier way to use Spring Security with a Reactor NetServer?
My first web site version was developed with Node.js to handle many concurrent users, and I'm trying to refactor it using a JVM NIO solution.
So is Spring / Reactor / Netty a good solution for a highly scalable server, or should I use something like Play! or Vert.x?
Thank you so much.
Have you tried bootstrapping your NetServer from within a JavaConfig @Bean method? Something like:
@Configuration
@EnableReactor
class AppConfig {

    @Bean // registered as a bean so it can be injected into netServer() below
    public Function<NetChannel, UserController> users() {
        return new UserControllerFactory();
    }

    @Bean
    public NetServer netServer(Environment env, Function<NetChannel, UserController> users) {
        return new TcpServerSpec(NettyTcpServer.class)
            .env(env)
            .dispatcher("sync")
            .listen(8080)
            .options(opts)
            .consume(ch -> {
                // attach an error handler
                ch.when(Throwable.class, UserController.errorHandler(ch));

                // filter requests by URI
                Stream<FullHttpRequest> in = ch.in();

                // serve image thumbnail to browser
                in.filter((FullHttpRequest req) -> req.getUri().startsWith(UserController.GET_USER_PROFILE))
                  .consume(users.apply(ch));
            })
            .get();
    }
}
This should preserve your Spring Security support and enable you to share handlers as beans rather than as return values from static methods. In general, just about everything you need to do in a Reactor TCP app can be done using beans and injection and by returning the NetServer as a bean itself.

How to propagate spring security context to JMS?

I have a web application which sets a Spring Security context through a Spring filter. Services are protected with Spring annotations based on users' roles. This works.
Asynchronous tasks are executed in JMS listeners (which implement javax.jms.MessageListener). The setup of these listeners is done with Spring.
Messages are sent from the web application while a user is authenticated. I need the same authentication in the JMS thread (user and roles) during message processing.
Today this is done by putting the spring authentication in the JMS ObjectMessage:
SecurityContext context = SecurityContextHolder.getContext();
Authentication auth = context.getAuthentication();
... put the auth object in jms message object
Then inside the JMS listener the authentication object is extracted and set in the context:
SecurityContext context = new SecurityContextImpl();
context.setAuthentication(auth);
SecurityContextHolder.setContext(context);
This works most of the time. But when there is a delay before the processing of a message, the message will never be processed. I couldn't yet determine the cause of this message loss, but I'm not sure the way we propagate authentication is good, even if it works in a cluster when the message is processed on another server.
Is this the right way to propagate a Spring authentication?
Regards,
Mickaël
I did not find a better solution, but this one works just fine for me.
When sending a JMS message I store the Authentication as a header, and on receiving I recreate the security context from it. In order to store the Authentication as a header you have to serialize it as Base64:
class AuthenticationSerializer {

    static String serialize(Authentication authentication) {
        byte[] bytes = SerializationUtils.serialize(authentication);
        return DatatypeConverter.printBase64Binary(bytes);
    }

    static Authentication deserialize(String authentication) {
        byte[] decoded = DatatypeConverter.parseBase64Binary(authentication);
        Authentication auth = (Authentication) SerializationUtils.deserialize(decoded);
        return auth;
    }
}
When sending, just set the message header; you can create a decorator for the message template so that it happens automatically. In your decorator, just call a method like this:
private void attachAuthenticationContext(Message message) {
    Authentication auth = SecurityContextHolder.getContext().getAuthentication();
    String serialized = AuthenticationSerializer.serialize(auth);
    message.setStringProperty("authcontext", serialized);
}
Receiving gets more complicated, but it can also be done automatically. Instead of applying @EnableJms, use the following configuration:
@Configuration
class JmsBootstrapConfiguration {

    @Bean(name = JmsListenerConfigUtils.JMS_LISTENER_ANNOTATION_PROCESSOR_BEAN_NAME)
    @Role(BeanDefinition.ROLE_INFRASTRUCTURE)
    public JmsListenerAnnotationBeanPostProcessor jmsListenerAnnotationProcessor() {
        return new JmsListenerPostProcessor();
    }

    @Bean(name = JmsListenerConfigUtils.JMS_LISTENER_ENDPOINT_REGISTRY_BEAN_NAME)
    public JmsListenerEndpointRegistry defaultJmsListenerEndpointRegistry() {
        return new JmsListenerEndpointRegistry();
    }
}

class JmsListenerPostProcessor extends JmsListenerAnnotationBeanPostProcessor {

    @Override
    protected MethodJmsListenerEndpoint createMethodJmsListenerEndpoint() {
        return new ListenerEndpoint();
    }
}

class ListenerEndpoint extends MethodJmsListenerEndpoint {

    @Override
    protected MessagingMessageListenerAdapter createMessageListenerInstance() {
        return new ListenerAdapter();
    }
}

class ListenerAdapter extends MessagingMessageListenerAdapter {

    @Override
    public void onMessage(Message jmsMessage, Session session) throws JMSException {
        propagateSecurityContext(jmsMessage);
        super.onMessage(jmsMessage, session);
    }

    private void propagateSecurityContext(Message jmsMessage) throws JMSException {
        String authStr = jmsMessage.getStringProperty("authcontext");
        Authentication auth = AuthenticationSerializer.deserialize(authStr);
        SecurityContextHolder.getContext().setAuthentication(auth);
    }
}
I have implemented a different solution for myself, which seems easier to me.
I already have a message converter (the standard Jackson JSON message converter), which I need to configure on the JmsTemplate and the listeners.
So I created a MessageConverter implementation which wraps around another message converter and propagates the security context via the JMS message properties.
(In my case, the propagated context is a JWT token which I can extract from the current context and apply to the security context of the listening thread.)
This way the entire responsibility for propagating the security context is elegantly implemented in a single class, and requires only a little bit of configuration.
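A rough sketch of such a delegating converter (untested; it reuses the AuthenticationSerializer shown above instead of the JWT token this answer actually used, and the class and property names are illustrative):

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;

import org.springframework.jms.support.converter.MessageConversionException;
import org.springframework.jms.support.converter.MessageConverter;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

// Rough sketch of the delegating-converter idea described above.
public class SecurityPropagatingMessageConverter implements MessageConverter {

    private static final String AUTH_PROPERTY = "authcontext";

    private final MessageConverter delegate;

    public SecurityPropagatingMessageConverter(MessageConverter delegate) {
        this.delegate = delegate;
    }

    @Override
    public Message toMessage(Object object, Session session) throws JMSException, MessageConversionException {
        Message message = this.delegate.toMessage(object, session);
        // propagate the current credential as a message property
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        if (auth != null) {
            message.setStringProperty(AUTH_PROPERTY, AuthenticationSerializer.serialize(auth));
        }
        return message;
    }

    @Override
    public Object fromMessage(Message message) throws JMSException, MessageConversionException {
        // restore the credential on the listener thread before converting the payload
        String authStr = message.getStringProperty(AUTH_PROPERTY);
        if (authStr != null) {
            SecurityContextHolder.getContext().setAuthentication(AuthenticationSerializer.deserialize(authStr));
        }
        return this.delegate.fromMessage(message);
    }
}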
Thanks, that's great, but I am handling this in an easier way: I put it all in one util class and it's solved.
public class AuthenticationSerializerUtil {

    public static final String AUTH_CONTEXT = "authContext";

    public static String serialize(Authentication authentication) {
        byte[] bytes = SerializationUtils.serialize(authentication);
        return DatatypeConverter.printBase64Binary(bytes);
    }

    public static Authentication deserialize(String authentication) {
        byte[] decoded = DatatypeConverter.parseBase64Binary(authentication);
        Authentication auth = (Authentication) SerializationUtils.deserialize(decoded);
        return auth;
    }

    /**
     * Takes the message, sets the current security context from its header,
     * and returns the message body as a JSON string.
     * @param message the incoming message
     * @return the message body as JSON
     */
    public static String jsonAndSetContext(Message message) {
        LongString authContext = (LongString) message.getMessageProperties().getHeaders().get(AUTH_CONTEXT);
        Authentication auth = deserialize(authContext.toString());
        SecurityContextHolder.getContext().setAuthentication(auth);
        byte[] json = message.getBody();
        return new String(json);
    }
}
