How to configure exclusive consumer with Grails and JMS / ActiveMQ? - grails

I have a Grails app that subscribes to a given ActiveMQ topic using the JMS plugin. How can I make the TestService class an exclusive consumer? Details of exclusive consumers are here.
The use case is that I am running the consumer on AWS EC2, the ActiveMQ feed has a durability of 5 minutes, and it takes longer than that to replace the instance if it dies. I can't afford to lose messages, and message order must be preserved. Hence I wish to run multiple instances, where the first instance to connect is the one the broker sends every message to, and the others sit in reserve. If the first instance dies, the AMQ broker sends the messages to one of the other instances.
Also, what criteria are used by JMS to determine when an exclusive consumer has died or gone away?
// resources.groovy
beans = {
    jmsConnectionFactory(org.apache.activemq.ActiveMQConnectionFactory) {
        brokerURL = 'tcp://example.com:1234'
        userName = 'user'
        password = 'password'
    }
}
class TestService {
    static exposes = ["jms"]
    static destination = "SOME_TOPIC_NAME"
    static isTopic = true

    def onMessage(msg) {
        // handle message
        // explicitly return null to prevent unwanted replyTo attempt
        return null
    }
}

First of all, your example uses topics; that won't work for exclusive consumers. You want queues:
class TestService {
    static exposes = ["jms"]
    static destination = "MYQUEUE"
    ...
}
Configuring exclusive consumers in ActiveMQ is straightforward:
queue = new ActiveMQQueue("MYQUEUE?consumer.exclusive=true");
...but it may be tricky with the Grails plugin; you can try these:
class TestService {
    static exposes = ["jms"]
    static destination = "MYQUEUE?consumer.exclusive=true"

    def onMessage(msg) { ... }
}

class TestService {
    static exposes = ["jms"]

    @Queue(
        name = "MYQUEUE?consumer.exclusive=true"
    )
    def handleMessage(msg) { ... }
}
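If neither plugin variant picks up the destination option, a plain consumer built directly against the ActiveMQ client API can confirm the exclusive-consumer behaviour on the broker side. This is only a sketch outside the Grails plugin, written in Java, reusing the placeholder broker URL, credentials and queue name from above:

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.command.ActiveMQQueue;

public class ExclusiveConsumerCheck {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://example.com:1234");
        Connection connection = factory.createConnection("user", "password");
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // The destination option marks this consumer as exclusive on the broker;
        // a second copy of this program should receive nothing until the first one dies.
        MessageConsumer consumer = session.createConsumer(new ActiveMQQueue("MYQUEUE?consumer.exclusive=true"));
        Message message = consumer.receive(5000);
        System.out.println("received: " + message);
        connection.close();
    }
}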
Regarding your question on how the broker determines whether a consumer has died: I'm not sure exactly how it's done in ActiveMQ, but in most JMS implementations TCP failures trigger an exception on the connection; the peer (the broker in this case) handles the exception and fails over to the next available consumer.
Hope that helps

How can I activate JMX for Caffeine cache

I was looking at this SO question, but here I only want Caffeine to start reporting to JMX.
I have added an application.conf file as follows and referenced it via -Dconfig.file:
caffeine.jcache {
  # A named cache is configured by nesting a new definition under the caffeine.jcache namespace.
  # The per-cache configuration is overlaid on top of the default configuration.
  default {
    # The monitoring configuration
    monitoring {
      # If JCache statistics should be recorded and externalized via JMX
      statistics = true
      # If the configuration should be externalized via JMX
      management = true
    }
  }
}
It is not working. I suspect it might be related to JCache, but I am not sure what the expected way is to implement this basic monitoring.
The cache instance is registered with the MBean server when it is instantiated by the CacheManager. The following test uses the programmatic API for simplicity.
// imports added for completeness; the test framework (JUnit 5) and the
// assertion library (Google Truth) are assumptions
import static com.google.common.truth.Truth.assertThat;

import java.lang.management.ManagementFactory;
import javax.cache.Caching;
import javax.cache.management.CacheStatisticsMXBean;
import javax.management.JMX;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import org.junit.jupiter.api.Test;
import com.github.benmanes.caffeine.jcache.configuration.CaffeineConfiguration;

public final class JmxTest {

    @Test
    public void jmx() throws MalformedObjectNameException {
        var config = new CaffeineConfiguration<>();
        config.setManagementEnabled(true);
        config.setStatisticsEnabled(true);
        var manager = Caching.getCachingProvider().getCacheManager();
        var cache = manager.createCache("jmx", config);
        cache.put(1, 2);
        var server = ManagementFactory.getPlatformMBeanServer();
        String name = String.format("javax.cache:type=%s,CacheManager=%s,Cache=%s",
            "CacheStatistics", manager.getURI().toString(), cache.getName());
        var stats = JMX.newMBeanProxy(server, new ObjectName(name), CacheStatisticsMXBean.class);
        assertThat(stats.getCachePuts()).isEqualTo(1);
    }
}
If you do not need JCache for an integration, then you will likely prefer to use the native APIs and a metrics library. That is supported by Micrometer, Dropwizard Metrics, and the Prometheus client. While JCache is great for framework integrations, its API is rigid and can cause surprising performance issues.
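For example, a minimal sketch of the native route using Micrometer's JMX registry (not from the answer above; it assumes the micrometer-core and micrometer-registry-jmx dependencies, and the cache name "books" is arbitrary):

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.binder.cache.CaffeineCacheMetrics;
import io.micrometer.jmx.JmxConfig;
import io.micrometer.jmx.JmxMeterRegistry;

public class NativeCaffeineJmx {
    public static void main(String[] args) {
        // Micrometer registry that publishes meters as JMX MBeans
        var registry = new JmxMeterRegistry(JmxConfig.DEFAULT, Clock.SYSTEM);
        Cache<Integer, Integer> cache = Caffeine.newBuilder()
                .recordStats()          // required so hit/miss/eviction counters are collected
                .maximumSize(10_000)
                .build();
        CaffeineCacheMetrics.monitor(registry, cache, "books");
        cache.put(1, 2);                // cache metrics are now visible in a JMX console
    }
}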

MassTransit configured with StructureMap - ContainerScoped not working

I've configured a class X as ContainerScoped in my StructureMap configuration, but it isn't behaving as I expect. When the app initially starts up and the MassTransit consumer consumes the first message, the instance of X is created; on subsequent messages for that consumer, the consumer is recreated but object X is not (I would expect a new instance per message received). I know that if I configure it as transient it will work, but I want a single instance of that class for the entirety of the processing of each message.
Any help with this would be greatly appreciated.
When using MassTransit, creating a new consumer instance for each message is the preferred behavior. It is recommended that any state or behavior that needs to be maintained as a single instance across messages is held in a dependency of that consumer (which can be configured in the container by the application developer).
I realize that you are asking how to configure your consumer to be a singleton, and you can probably figure that out, but MassTransit will reconfigure the container to make it scoped per message if you're using AddMassTransit/AddConsumer.
A better approach is to have your state configured:
public interface IConsumerState
{
}
public class ConsumerState :
IConsumerState
{
}
x.For<IConsumerState>().Use<ConsumerState>().Singleton();
Then configure your consumer for MassTransit so that it depends upon that interface:
public class Consumer :
    IConsumer<Message>
{
    readonly IConsumerState _state;

    public Consumer(IConsumerState state)
    {
        _state = state;
    }

    public async Task Consume(ConsumeContext<Message> context)
    {
    }
}
x.AddMassTransit(m =>
{
    m.AddConsumer<Consumer>();

    m.AddBus(provider => Bus.Factory.CreateUsingInMemory(cfg =>
    {
        cfg.ConfigureEndpoints(provider);
    }));
});
Using this approach, a new consumer is created for each message and the state is maintained/shared by all consumer instances.

Inject OSGi Services in a non-component class

Usually in OSGi development I have seen one service bind to another service. However, I am trying to inject an OSGi service into a non-service class.
Scenario I am trying to achieve: I have implemented a MessageBusListener, which is an OSGi service and binds to a couple more services such as QueueExecutor.
Now one of the tasks of the MessageBusListener is to create a FlowListener (a non-service class) which would invoke flows based on the message content. This FlowListener requires OSGi services like QueueExecutor to invoke the flow.
One approach I tried was to pass the references of the services while creating the instance of FlowListener from MessageBusListener. However, when the passed-in services are deactivated and activated again, I think OSGi would create a new instance of the service and bind it to MessageBusListener, but FlowListener would still hold a stale reference.
@Component
public class MessageBusListener
{
    private final AtomicReference<QueueExecutor> queueExecutor = new AtomicReference<>();

    @Activate
    protected void activate(Map<String, Object> osgiMap)
    {
        FlowListener f1 = new FlowListener(queueExecutor.get());
    }

    @Reference(service = QueueExecutor.class, cardinality = ReferenceCardinality.MANDATORY, policy = ReferencePolicy.STATIC)
    protected void bindQueueExecutor(QueueExecutor queueExecutor)
    {
        this.queueExecutor.set(queueExecutor);
    }
}

public class FlowListener
{
    private final QueueExecutor queueExecutor;

    FlowListener(QueueExecutor queueExecutor)
    {
        this.queueExecutor = queueExecutor;
    }

    void invokeFlow()
    {
        queueExecutor.doSomething(); // This would fail in case the QueueExecutor
                                     // service was deactivated and activated again
    }
}
Looking forward to other approaches that could satisfy my requirement.
Your approach is correct; you just need to also handle deactivation if necessary.
If the QueueExecutor disappears, the MessageBusListener will be shut down. You can handle this using a @Deactivate method, in which you can also call a shutdown method of FlowListener.
If a new QueueExecutor service comes up, then DS will create a new MessageBusListener, so all should be fine.
By the way, you can simply inject the QueueExecutor using:
@Reference
QueueExecutor queueExecutor;
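A minimal sketch of that wiring (the FlowListener shutdown method is an assumption, not something from the question): because the reference is static and mandatory, DS deactivates and re-creates MessageBusListener whenever QueueExecutor is replaced, so FlowListener is rebuilt with the fresh service and never holds a stale reference.

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Reference;

@Component
public class MessageBusListener {

    @Reference
    QueueExecutor queueExecutor;

    private FlowListener flowListener;

    @Activate
    protected void activate() {
        // Built with the currently bound service; rebuilt on every reactivation.
        flowListener = new FlowListener(queueExecutor);
    }

    @Deactivate
    protected void deactivate() {
        // Hypothetical shutdown hook so in-flight flows can stop cleanly.
        flowListener.shutdown();
    }
}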

Fetch message details in Spring RecoveryCallback

I'm publishing messages into RabbitMQ and I would like to track the errors when RabbitMQ is down. For this I added a RetryTemplate with a recovery callback, but the recovery callback only provides getLastThrowable(), and I'm not sure how to get the details of the messages that failed while RabbitMQ was down. (As per the documentation: "The RecoveryCallback is somewhat limited in that the retry context only contains the lastThrowable field. For more sophisticated use cases, you should use an external RetryTemplate so that you can convey additional information to the RecoveryCallback via the context's attributes.") But I don't know how to do that; if anyone could help me with an example, that would be awesome.
Rabbit Template
public RabbitTemplate rabbitMqTemplate(RecoveryCallback publisherRecoveryCallback) {
    RabbitTemplate r = new RabbitTemplate(rabbitConnectionFactory);
    r.setExchange(exchangeName);
    r.setRoutingKey(routingKey);
    r.setConnectionFactory(rabbitConnectionFactory);
    r.setMessageConverter(jsonMessageConverter());

    RetryTemplate retryTemplate = new RetryTemplate();
    ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setInitialInterval(500);
    backOffPolicy.setMultiplier(10.0);
    backOffPolicy.setMaxInterval(10000);
    retryTemplate.setBackOffPolicy(backOffPolicy);

    r.setRetryTemplate(retryTemplate);
    r.setRecoveryCallback(publisherRecoveryCallback);
    return r;
}
Recovery Callback
@Component
public class PublisherRecoveryCallback implements RecoveryCallback<AssortmentEvent> {

    @Override
    public AssortmentEvent recover(RetryContext context) throws Exception {
        log.error("Error publishing event", context.getLastThrowable());
        // how to get message details here??
        return null;
    }
}
AMQP Outbound Adapter
return IntegrationFlows.from("eventsChannel")
        .split()
        .handle(Amqp.outboundAdapter(rabbitMqTemplate)
                .exchangeName(exchangeName)
                .confirmCorrelationExpression("payload")
                .confirmAckChannel(ackChannel)
                .confirmNackChannel(nackChannel)
        )
        .get();
This isn't possible because RabbitTemplate.execute() is not aware of the message you send; it may be invoked from any other method where there is no message to deal with:
return this.retryTemplate.execute(
(RetryCallback<T, Exception>) context -> RabbitTemplate.this.doExecute(action, connectionFactory),
(RecoveryCallback<T>) this.recoveryCallback);
What I suggest is to store the message in a ThreadLocal before the send and retrieve it from there in your custom RecoveryCallback.
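A sketch of that idea, expanding the PublisherRecoveryCallback shown above (the static remember/clear helpers and the publishing-site snippet are assumptions about where you trigger the send): it relies on the retries and the recovery callback running on the same thread that initiated the send.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.retry.RecoveryCallback;
import org.springframework.retry.RetryContext;
import org.springframework.stereotype.Component;

@Component
public class PublisherRecoveryCallback implements RecoveryCallback<AssortmentEvent> {

    private static final Logger log = LoggerFactory.getLogger(PublisherRecoveryCallback.class);

    // Set just before the send on the publishing thread, read in recover()
    private static final ThreadLocal<AssortmentEvent> CURRENT = new ThreadLocal<>();

    public static void remember(AssortmentEvent event) {
        CURRENT.set(event);
    }

    public static void clear() {
        CURRENT.remove();
    }

    @Override
    public AssortmentEvent recover(RetryContext context) throws Exception {
        AssortmentEvent failed = CURRENT.get();   // the message whose retries were exhausted
        log.error("Error publishing event {}", failed, context.getLastThrowable());
        return failed;
    }
}

// At the point where the message enters the flow (hypothetical usage):
// PublisherRecoveryCallback.remember(event);
// try { /* send the event into "eventsChannel" */ } finally { PublisherRecoveryCallback.clear(); }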

Grails 2.5: how to propagate the IP address down to the service layer?

The controller layer can get the IP using request.getRemoteAddr() and/or request.getHeader("Client-IP") etc.
However, down in the bowels of the service layer, we might want to log some detected or suspected fraudulent activity by the user, along with the IP address of the user. However, the IP is not available to the service layer, nor is the request.
Obviously, every call from every controller method to every single service method could also pass in the IP or the request, but as we have thousands of these calls and lots of chains of them, it is not really practical.
Can anyone think of a better way?
As we are not in charge of instantiation of the services (these just get magically injected), we can't even pass the IP in when each service is created for the current HTTP call.
UPDATE 1
As suggested, I tried the MDC route. Unfortunately, this does not seem to work.
in the filter:
import org.apache.log4j.MDC

class IpFilters {

    def filters = {
        all() {
            before = {
                MDC.put "IP", "1.1.1.1"
                println "MDC.put:" + MDC.get("IP")
            }
            afterView = { Exception e ->
                println "MDC.remove:" + MDC.get("IP")
                MDC.remove 'IP'
            }
        }
    }
}
in the service:
import org.apache.log4j.MDC
// ...
def someMethod() {
    String ip = MDC.get("IP")
    println("someMethod: IP = $ip")
}
The result is always:
MDC.put:1.1.1.1
MDC.remove:1.1.1.1
someMethod: IP = null
So the service can't access MDC variables put on the thread in the filter, which is a real shame. Possibly the problem is that someMethod is actually called by Spring Security.
Well, it is highly recommended to keep the business logic unaware of the controller logic. But given your situation, you have to do this, and it is absolutely possible. In your service method, write this to log the IP address of the current request:
import org.springframework.web.context.request.RequestContextHolder
// ... your code and class
def request = RequestContextHolder.currentRequestAttributes().getRequest()
println request.getRemoteAddr()
Just make sure you handle whatever exception is thrown from that line when the same service method is invoked from outside a Grails request context, for example from a job.
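For instance, a small helper along these lines (a sketch; the class name and the "unknown" fallback are my own, and it uses getRequestAttributes(), which returns null instead of throwing, so service code degrades gracefully when called from a job):

import javax.servlet.http.HttpServletRequest;
import org.springframework.web.context.request.RequestAttributes;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;

public class ClientIpResolver {

    public static String currentIp() {
        RequestAttributes attributes = RequestContextHolder.getRequestAttributes();
        if (attributes instanceof ServletRequestAttributes) {
            HttpServletRequest request = ((ServletRequestAttributes) attributes).getRequest();
            String forwarded = request.getHeader("X-Forwarded-For");
            return (forwarded != null && !forwarded.isEmpty()) ? forwarded : request.getRemoteAddr();
        }
        return "unknown"; // not inside a web request, e.g. invoked from a scheduled job
    }
}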
My two pence worth: I have basically been using the above, and it works perfectly fine when a request is directed through standard Grails practices.
In this scenario, the user triggers a websockets connection, which is then injected into the websockets listener using Holders.applicationContext.
The issue arises when you are outside of the web request.
The fix was painful, but it may come in handy for anyone else in this situation:
private static String userIp

String getIp() {
    String i
    new Thread({
        // to bypass:
        // Are you referring to request attributes outside of an actual web request, or processing a
        // request outside of the originally receiving thread? If you are actually operating within a web
        // request and still receive this message, your code is probably running outside of
        // DispatcherServlet/DispatcherPortlet: in this case, use RequestContextListener or
        // RequestContextFilter to expose the current request.
        def webRequest = RequestContextHolder.getRequestAttributes()
        if (!webRequest) {
            def servletContext = ServletContextHolder.getServletContext()
            def applicationContext = WebApplicationContextUtils.getRequiredWebApplicationContext(servletContext)
            webRequest = grails.util.GrailsWebMockUtil.bindMockWebRequest(applicationContext)
        }
        // def request = RequestContextHolder.currentRequestAttributes().request
        def request = WebUtils.retrieveGrailsWebRequest().currentRequest
        i = request.getRemoteAddr()
        if (!i || i == '127.0.0.1') {
            i = request.getHeader("X-Forwarded-For")
        }
        if (!i || i == '127.0.0.1') {
            i = request.getHeader("Client-IP")
        }
        if (!i) { i = "127.0.0.1" }
        this.userIp = i
    } as Runnable).start()
    return i
}
Now when calling this, some sleep time is required since it runs as a Runnable:
def aa = getIp()
sleep(300)
println "$aa is aa"
println "---- ip ${userIp}"
I also provided an alternative way of getting the request: def request = WebUtils.retrieveGrailsWebRequest().currentRequest. In Grails 3, the commented-out line's .request comes up as unrecognised in the IDE (even though it works).
The new Thread({ was still needed: even though it returned the IP, after getting it the code attempted to save to a DB and another bizarre issue appeared:
java.lang.RuntimeException: org.springframework.mock.web.MockHttpServletRequest.getServletContext()Ljavax/servlet/ServletContext;
    at org.apache.tomcat.websocket.pojo.PojoMessageHandlerBase.handlePojoMethodException(PojoMessageHandlerBase.java:119)
    at org.apache.tomcat.websocket.pojo.PojoMessageHandlerWholeBase.onMessage(PojoMessageHandlerWholeBase.java:82)
So the fix for getting hold of the request attributes in this scenario is the above.
For the mock libraries you will require this in build.gradle:
compile 'org.springframework:spring-test:2.5'
So the saga continued: the above did not actually appear to work in my case, since the request originated from the user, but when it was sent to websockets, the session trying to retrieve the request (IP/session) was not the actual real user's.
In the end this had to be done a very different way, so it's steeply off topic, but when the above method of getting the IP does not work, the only way left is through SessionListeners:
in src/main/groovy/{packageName}:

class SessionListener implements HttpSessionListener {

    private static List activeUsers = Collections.synchronizedList(new ArrayList())
    static Map sessions = [:].asSynchronized()

    void sessionCreated(HttpSessionEvent se) {
        sessions.put(se.session.id, se.session)
    }

    void sessionDestroyed(HttpSessionEvent se) {
        sessions.remove(se.session.id)
    }
}
in grails-app/init/Application.groovy:

Closure doWithSpring() {
    { ->
        websocketConfig WebSocketConfig
    }
}

// this already exists
static void main(String[] args) {
    GrailsApp.run(Application, args)
}
in that same init folder:

class WebSocketConfig {

    @Bean
    public ServletContextInitializer myInitializer() {
        return new ServletContextInitializer() {
            @Override
            public void onStartup(ServletContext servletContext) throws ServletException {
                servletContext.addListener(SessionListener)
            }
        }
    }
}
Now to get the user IP: when the socket initially connects, it sends the user's session to the sockets, and the socket registers that user's session within the websockets user session details.
When attempting to get the user IP (I have registered the user's IP as session.ip in the controller/page that opens the sockets):
def aa = SessionListener.sessions.find{it.key==sessionId}?.value
println "aa $aa.ip"
