I modified the spring-websocket-portfolio sample application (https://github.com/rstoyanchev/spring-websocket-portfolio) to add a channel interceptor. Whenever the client disconnects, the channel interceptor is invoked twice. I have a similar implementation in my production application, and because it is invoked twice, the second invocation has unwanted results. I have put a workaround in place for the time being, but I am wondering why my channel interceptor is invoked twice. Any help would be highly appreciated.
modified items: WebSocketConfig.java:
@Override
public void configureClientInboundChannel(ChannelRegistration registration) {
    registration.setInterceptors(channelInterceptor());
}

@Bean
public ChannelInterceptor channelInterceptor() {
    return new ChannelInterceptor();
}
ChannelInterceptor :
package org.springframework.samples.portfolio.config;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.simp.stomp.StompHeaderAccessor;
import org.springframework.messaging.support.ChannelInterceptorAdapter;
public class ChannelInterceptor extends ChannelInterceptorAdapter {

    @Override
    public void postSend(Message<?> message, MessageChannel channel, boolean sent) {
        StompHeaderAccessor sha = StompHeaderAccessor.wrap(message);
        System.out.println(sha.getCommand() + " " + sha);
        switch (sha.getCommand()) {
            case CONNECT: {
                System.out.println("connected:" + sha.getSessionId());
                break;
            }
            case DISCONNECT: {
                System.out.println("disconnected:" + sha.getSessionId());
                break;
            }
            default:
                System.out.println("default:" + sha.getCommand());
                break;
        }
    }
}
logs:
**disconnected**:9k1hvln6
**disconnected**:9k1hvln6
Disconnect events may happen more than once for the same session, so your interceptor should be idempotent and ignore duplicate events.
You may also consider using application events (SessionConnectEvent, SessionDisconnectEvent...) instead of a channel interceptor. Here's an example of an idempotent event listener: https://github.com/salmar/spring-websocket-chat/blob/master/src/main/java/com/sergialmar/wschat/event/PresenceEventListener.java
Generally a DISCONNECT frame comes from the client side, is processed in the StompSubProtocolHandler, and is then propagated to the broker. However, a connection can also be closed or lost without a DISCONNECT frame. Regardless of how a connection is closed, the StompSubProtocolHandler generates a DISCONNECT frame. So there is some redundancy on the server side to ensure the broker is aware that the client connection is gone.
As Sergi mentioned, you can either subscribe to listen for SessionDisconnectEvent (of which there should be only one) and the other AbstractSubProtocolEvent types, or ensure your code is idempotent.
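For reference, a minimal sketch of an idempotent event listener along those lines (assuming Spring 4.2+ for @EventListener; the class name and the Set-based session tracking are illustrative, not taken from the sample application):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.context.event.EventListener;
import org.springframework.messaging.simp.stomp.StompHeaderAccessor;
import org.springframework.stereotype.Component;
import org.springframework.web.socket.messaging.SessionConnectedEvent;
import org.springframework.web.socket.messaging.SessionDisconnectEvent;

@Component
public class PresenceListener {

    // sessions we have already seen; duplicate DISCONNECT events are ignored
    private final Set<String> activeSessions = ConcurrentHashMap.newKeySet();

    @EventListener
    public void onConnected(SessionConnectedEvent event) {
        String sessionId = StompHeaderAccessor.wrap(event.getMessage()).getSessionId();
        activeSessions.add(sessionId);
        System.out.println("connected:" + sessionId);
    }

    @EventListener
    public void onDisconnect(SessionDisconnectEvent event) {
        // remove() returns true only for the first event per session id,
        // so a second DISCONNECT for the same session is silently ignored
        if (activeSessions.remove(event.getSessionId())) {
            System.out.println("disconnected:" + event.getSessionId());
        }
    }
}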
Related
I have a requirement where a workflow should start execution but then wait for subsequent events before continuing. Events are generated based on API calls that the service receives from clients, and the event semantics follow a state machine. Please let me know how this can be implemented in Java, and please provide sample code for reference.
If you are using the AWS Flow Framework, an external event should use the Signal API to notify a workflow instance. Inside a workflow it becomes a handler method like:
@Workflow
@WorkflowRegistrationOptions(
        defaultExecutionStartToCloseTimeoutSeconds = 60,
        defaultTaskStartToCloseTimeoutSeconds = 10)
public interface MyWorkflow
{
    @Execute(version = "1.0")
    void startMyWF();

    @Signal
    void signal1();
}
public class MyWFImpl implements MyWorkflow
{
    MyActivitiesClient client = new MyActivitiesClientImpl();

    // Used to block the workflow until a signal is received.
    Settable<Void> signal1Called = new Settable<Void>();

    @Override
    public void startMyWF(){
        Promise<Integer> result = client.activity1();
        // Continues only when both result and signal1Called are ready.
        client.activity2(result, signal1Called);
    }

    @Override
    public void signal1() {
        // Process the signal
        signal1Called.set(null);
    }
}
http://docs.aws.amazon.com/amazonswf/latest/awsflowguide/features.workflow.html describes how to write a workflow interface.
http://docs.aws.amazon.com/amazonswf/latest/awsflowguide/workflowimpl.html describes how to implement workflows.
http://docs.aws.amazon.com/amazonswf/latest/awsflowguide/clients.html describes how to send signals using external client.
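As a rough illustration of the last link, sending the signal from outside the workflow goes through the framework-generated external client. The factory and client type names below are generated by the AWS Flow Framework from the @Workflow interface above (they follow the <Interface>ClientExternal naming convention and may vary by framework version); the domain and workflow id are placeholders:

import com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflow;
import com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflowClientBuilder;

public class SignalSender {

    public static void main(String[] args) {
        // SWF client picks up credentials/region from the default provider chain
        AmazonSimpleWorkflow swf = AmazonSimpleWorkflowClientBuilder.defaultClient();

        // Generated from the @Workflow interface; "MyDomain" and the workflow id are placeholders
        MyWorkflowClientExternalFactory factory =
                new MyWorkflowClientExternalFactoryImpl(swf, "MyDomain");
        MyWorkflowClientExternal workflow = factory.getClient("my-workflow-id");

        // Delivers the signal; inside the workflow, signal1() sets the Settable
        // and unblocks activity2.
        workflow.signal1();
    }
}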
I'm publishing messages into RabbitMQ and I would like to track the errors when RabbitMQ is down. For this I added a RetryTemplate with a recovery callback, but the RecoveryCallback only provides getLastThrowable(), and I'm not sure how to get the details of the message that failed while RabbitMQ was down. Per the documentation: "The RecoveryCallback is somewhat limited in that the retry context only contains the lastThrowable field. For more sophisticated use cases, you should use an external RetryTemplate so that you can convey additional information to the RecoveryCallback via the context's attributes." But I don't know how to do that; if anyone could help me with an example, that would be awesome.
Rabbit Template
public RabbitTemplate rabbitMqTemplate(RecoveryCallback publisherRecoveryCallback) {
    RabbitTemplate r = new RabbitTemplate(rabbitConnectionFactory);
    r.setExchange(exchangeName);
    r.setRoutingKey(routingKey);
    r.setConnectionFactory(rabbitConnectionFactory);
    r.setMessageConverter(jsonMessageConverter());

    RetryTemplate retryTemplate = new RetryTemplate();
    ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setInitialInterval(500);
    backOffPolicy.setMultiplier(10.0);
    backOffPolicy.setMaxInterval(10000);
    retryTemplate.setBackOffPolicy(backOffPolicy);

    r.setRetryTemplate(retryTemplate);
    r.setRecoveryCallback(publisherRecoveryCallback);
    return r;
}
Recovery Callback
@Component
public class PublisherRecoveryCallback implements RecoveryCallback<AssortmentEvent> {

    @Override
    public AssortmentEvent recover(RetryContext context) throws Exception {
        log.error("Error publishing event", context.getLastThrowable());
        // how to get the message details here??
        return null;
    }
}
AMQP Outbound Adapter
return IntegrationFlows.from("eventsChannel")
        .split()
        .handle(Amqp.outboundAdapter(rabbitMqTemplate)
                .exchangeName(exchangeName)
                .confirmCorrelationExpression("payload")
                .confirmAckChannel(ackChannel)
                .confirmNackChannel(nackChannel)
        )
        .get();
This isn't possible, because RabbitTemplate.execute() is not aware of the message you send; it can be called from any other method where there is no message to deal with at all:
return this.retryTemplate.execute(
(RetryCallback<T, Exception>) context -> RabbitTemplate.this.doExecute(action, connectionFactory),
(RecoveryCallback<T>) this.recoveryCallback);
What I suggest is to store the message in a ThreadLocal before sending and read it back from there in your custom RecoveryCallback.
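A hedged sketch of that idea, reworking the PublisherRecoveryCallback from the question and assuming the publish and the recovery run on the same thread (the static helper methods are illustrative, not an existing Spring API):

import org.springframework.retry.RecoveryCallback;
import org.springframework.retry.RetryContext;

public class PublisherRecoveryCallback implements RecoveryCallback<Object> {

    // Holds the payload currently being published on this thread.
    private static final ThreadLocal<Object> CURRENT_PAYLOAD = new ThreadLocal<>();

    public static void rememberPayload(Object payload) {
        CURRENT_PAYLOAD.set(payload);
    }

    public static void clearPayload() {
        CURRENT_PAYLOAD.remove();
    }

    @Override
    public Object recover(RetryContext context) {
        Object failedPayload = CURRENT_PAYLOAD.get();
        // Log (or re-route) the failed payload together with the cause of the failure.
        System.err.println("Failed to publish " + failedPayload + ": " + context.getLastThrowable());
        return null;
    }
}

The publishing side would call PublisherRecoveryCallback.rememberPayload(payload) just before the send (for example in a .handle() step placed ahead of the outbound adapter in the flow above) and clearPayload() afterwards in a finally block.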
The controller layer can get the IP using request.getRemoteAddr() and/or request.getHeader("Client-IP") etc.
However, down in the bowels of the service layer, we might want to log some detected or suspected fraudulent activity by the user, along with the IP address of the user. However, the IP is not available to the service layer, nor is the request.
Obviously, every call from every controller method to every single service method could also pass in the IP or the request, but as we have thousands of these calls and lots of chains of them, it is not really practical.
Can anyone think of a better way?
As we are not in charge of instantiation of the services (these just get magically injected), we can't even pass the IP in when each service is created for the current HTTP call.
UPDATE 1
As suggested, I tried the MDC route. Unfortunately, this does not seem to work.
in filter:
import org.apache.log4j.MDC
class IpFilters {

    def filters = {
        all() {
            before = {
                MDC.put "IP", "1.1.1.1"
                println "MDC.put:" + MDC.get("IP")
            }
            afterView = { Exception e ->
                println "MDC.remove:" + MDC.get("IP")
                MDC.remove 'IP'
            }
        }
    }
}
in service:
import org.apache.log4j.MDC
:
def someMethod() {
    String ip = MDC.get("IP")
    println("someMethod: IP = $ip")
}
The result is always:
MDC.put:1.1.1.1
MDC.remove:1.1.1.1
someMethod: IP = null
So the service can't access MDC variables put on the thread in the filter, which is a real shame. Possibly the problem is that someMethod is actually called by Spring Security.
Well, it is generally recommended to keep the business logic unaware of the controller layer. But keeping your situation in mind, it is absolutely possible. In your service method, write this to log the IP address of the current request:
import org.springframework.web.context.request.RequestContextHolder
// ... your code and class
def request = RequestContextHolder.currentRequestAttributes().getRequest()
println request.getRemoteAddr()
Just make sure you handle whatever exception is thrown from that line when the same service method is invoked outside of a Grails request context, for example from a job.
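A hedged sketch of that guard, written in plain Java since the same Spring class backs the Grails call above (the "unknown" fallback value is just illustrative):

import javax.servlet.http.HttpServletRequest;

import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;

public final class ClientIpResolver {

    private ClientIpResolver() {
    }

    public static String currentClientIp() {
        try {
            ServletRequestAttributes attributes =
                    (ServletRequestAttributes) RequestContextHolder.currentRequestAttributes();
            HttpServletRequest request = attributes.getRequest();
            return request.getRemoteAddr();
        } catch (IllegalStateException e) {
            // Thrown when no request is bound to the current thread,
            // e.g. when the service is called from a scheduled job.
            return "unknown";
        }
    }
}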
My two pence worth: I have basically been using the above, and it works perfectly fine when a request comes through standard Grails routes.
In my scenario, the user triggers a websockets connection, which is then injected into a websockets listener using Holders.applicationContext.
The issue arises when you are outside of the web request.
The fix was painful, but it may come in handy for anyone else in this situation:
private static String userIp

String getIp() {
    String i
    new Thread({
        // to bypass:
        // "Are you referring to request attributes outside of an actual web request, or processing a
        //  request outside of the originally receiving thread? If you are actually operating within a web request
        //  and still receive this message, your code is probably running outside of DispatcherServlet/DispatcherPortlet:
        //  In this case, use RequestContextListener or RequestContextFilter to expose the current request."
        def webRequest = RequestContextHolder.getRequestAttributes()
        if (!webRequest) {
            def servletContext = ServletContextHolder.getServletContext()
            def applicationContext = WebApplicationContextUtils.getRequiredWebApplicationContext(servletContext)
            webRequest = grails.util.GrailsWebMockUtil.bindMockWebRequest(applicationContext)
        }
        // def request = RequestContextHolder.currentRequestAttributes().request
        def request = WebUtils.retrieveGrailsWebRequest().currentRequest
        i = request.getRemoteAddr()
        if (!i || i == '127.0.0.1') {
            i = request.getHeader("X-Forwarded-For")
        }
        if (!i || i == '127.0.0.1') {
            i = request.getHeader("Client-IP")
        }
        if (!i) { i = "127.0.0.1" }
        this.userIp = i
    } as Runnable).start()
    return i
}
Now, when calling this, some sleep time is required because it runs as a Runnable:
def aa = getIp()
sleep(300)
println "$aa is aa"
println "---- ip ${userIp}"
I also provided an alternative way of getting the request (def request = WebUtils.retrieveGrailsWebRequest().currentRequest); in Grails 3 the commented-out .request line comes up as unrecognised in the IDE (even though it works).
The new Thread({...}) was still needed: even though it returned the IP, after getting the IP it was attempting to save to a db and another bizarre issue appeared around
java.lang.RuntimeException: org.springframework.mock.web.MockHttpServletRequest.getServletContext()Ljavax/servlet/ServletContext;
at org.apache.tomcat.websocket.pojo.PojoMessageHandlerBase.handlePojoMethodException(PojoMessageHandlerBase.java:119)
at org.apache.tomcat.websocket.pojo.PojoMessageHandlerWholeBase.onMessage(PojoMessageHandlerWholeBase.java:82)
So the fix for getting hold of the request attributes in this scenario is above.
For the mock libraries you will need this in build.gradle:
compile 'org.springframework:spring-test:2.5'
So the saga continued: the above did not actually work in my case, since the request originated from the user, but when it was sent to websockets, the session attempting to retrieve the request (ip/session) was not that of the actual user.
In the end this had to be done in a very different way, so this is steeply off topic, but when the above method of getting the IP does not work, the only way left is through SessionListeners:
in src/main/groovy/{packageName}
class SessionListener implements HttpSessionListener {

    private static List activeUsers = Collections.synchronizedList(new ArrayList())
    static Map sessions = [:].asSynchronized()

    void sessionCreated(HttpSessionEvent se) {
        sessions.put(se.session.id, se.session)
    }

    void sessionDestroyed(HttpSessionEvent se) {
        sessions.remove(se.session.id)
    }
}
in grails-app/init/Application.groovy
Closure doWithSpring() {
    { ->
        websocketConfig WebSocketConfig
    }
}

// this already exists
static void main(String[] args) {
    GrailsApp.run(Application, args)
}
in that same init folder:
class WebSocketConfig {

    @Bean
    public ServletContextInitializer myInitializer() {
        return new ServletContextInitializer() {
            @Override
            public void onStartup(ServletContext servletContext) throws ServletException {
                servletContext.addListener(SessionListener)
            }
        }
    }
}
Now, to get the user IP: when the socket initially connects, it sends the user's session to sockets, and the socket registers that user's session within the websockets user-session details.
When attempting to get the user IP (I have registered the user's IP as session.ip in the controller/page that opens the sockets):
def aa = SessionListener.sessions.find{it.key==sessionId}?.value
println "aa $aa.ip"
I have a Struts2 Action class that places a JMS Fetch request for a list of Trades on a JMS queue. This JMS Fetch message is processed by an external process and can take anywhere from a few seconds to a few minutes, depending on the number of Trade files to be processed by the external task-processing app.
I want to know how to handle this HTTP request with an appropriate response. Does the client wait until the list of Trades is returned? (The client (UI) has to act on it and has nothing else to do in the meantime.)
The way I approached it is
HTTP Request -->
Struts2 Action -->
Invokes a Runnable to run in a separate Thread (separate from Action class)
UI waits
Action class thread sleeps till runnable does it's job
When Task completed, return list of Trades to UI
Flow is as follows:
Place JMS Fetch Request on Queue1
ExecutorService for Runnable
CClass cclass = new CClass();
final ExecutorService execSvc = Executors.newFixedThreadPool(1);
execSvc.execute(cclass);
Where CClass implements Runnable, building up a list of Trades:
List<Trade> tradesList = new ArrayList<Trade>();

@Override
public void run() {
    while (true) {
        try {
            Message message = msgConsumer.receive(); // SYNCHRONOUS / NO MDB
            if (message == null) {
                break;
            }
            if (message instanceof TextMessage) {
                TextMessage txtMessage = (TextMessage) message;
                Trade trade = TradeBuilder.buildTradeFromInputXML(txtMessage);
                if (trade != null) {
                    tradesList.add(trade); // tradesList is a CClass class variable
                }
            }
        } catch (JMSException e) {
            logger.error("JMSException occurred ", e);
        }
    }
    closeConnection();
}
And while this Runnable is executing, I do a Thread.sleep in the Action class (to let the Runnable execute in the separate thread):
// In Action class
try {
    Thread.sleep(5000); // some time till when the runnable will get executed
} catch (InterruptedException e) {
    e.printStackTrace();
}
execSvc.shutdown();
The problem is: if I use a Callable with a FutureTask and call get(), that blocks until a result is returned. If I use a Runnable, I have to put the Action class thread to sleep until the Runnable has executed and tradesList is available.
Using the Runnable approach, I am able to get a couple of hundred records back to the UI with a 5 second Thread.sleep() in the main Action class, but only a partially constructed tradesList when thousands of records have to be fetched and shown in the UI.
This is clearly not a fail-proof approach.
Is there a better approach to suggest? Please elucidate the steps for processing in one complete request-response flow.
Yes, there is a much better approach when making a standard HTTP request (with Ajax you can do other things).
You want to look at the Struts2 Execute and Wait Interceptor, which has most of the functionality you've already implemented. Also look at the token interceptor... which could be useful (it prevents duplicate requests, but doesn't provide a friendly wait screen the way execAndWait does).
I am trying to make a kind of polling service against an ActiveMQ queue using Camel routes.
I am using the routing and routing-jsm plugins for Grails.
I have my route configuration set up like this:
class QueueRoute {

    def configure = {
        from("activemq:daemon").routeId("daemonRoute")
                .noAutoStartup()
                .shutdownRunningTask(ShutdownRunningTask.CompleteCurrentTaskOnly)
                .to('bean:daemonCamelService?method=receive')
                .end()
    }
}
and I am basically trying to do .suspendRoute("daemonRoute") and .resumeRoute("daemonRoute") with some time in between. However, after issuing suspendRoute the route is not stopped.
Has anyone tried this? I have read something about needing to kill the exchange in progress or something similar.
If you are just trying to periodically process all messages in a queue, then another option (instead of starting and stopping the route) is to use a timer and a polling consumer bean to retrieve all the messages in the queue...
from("timer://processQueueTimer?fixedRate=true&period=30000")
.to("bean:myBean?method=poll");
public class MyBean {

    // Camel templates used to pull from the queue and forward messages
    private ConsumerTemplate consumer;
    private ProducerTemplate producer;

    public void poll() {
        // loop to empty the queue
        while (true) {
            // receive a message from the queue, wait at most 3 sec
            Object msg = consumer.receiveBody("activemq:queue:daemon", 3000);
            if (msg == null) {
                // no more messages in queue
                break;
            }
            // send it to the next endpoint
            producer.sendBody("bean:daemonCamelService?method=receive", msg);
        }
    }
}
See this FAQ on how to stop/suspend a route from within a route:
http://camel.apache.org/how-can-i-stop-a-route-from-a-route.html
An alternative is to use a route policy:
http://camel.apache.org/routepolicy
For example, the throttling route policy provided out of the box works this way; take a look at how it is implemented and you can do something similar for your route.
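A minimal sketch of such a custom policy, assuming Camel 2.x (where RoutePolicySupport lives in org.apache.camel.impl; in Camel 3.x it moved to org.apache.camel.support); the suspend condition here is only a placeholder:

import org.apache.camel.Exchange;
import org.apache.camel.Route;
import org.apache.camel.impl.RoutePolicySupport;

// Stops the route's consumer after each exchange completes; a timer or scheduler
// would resume the route again later (e.g. via CamelContext.resumeRoute / startRoute).
public class PollingWindowRoutePolicy extends RoutePolicySupport {

    @Override
    public void onExchangeDone(Route route, Exchange exchange) {
        try {
            // placeholder condition: stop consuming once the current message is processed
            stopConsumer(route.getConsumer());
        } catch (Exception e) {
            throw new RuntimeException("Failed to stop route consumer", e);
        }
    }
}

The policy would then be attached in the route definition with .routePolicy(new PollingWindowRoutePolicy()) on the from("activemq:daemon") route.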