MobileFirst 7.1 - Idle session timeout for session-independent mode - JNDI

How do I configure the idle session timeout when the application is configured in session-independent mode?

The idle session timeout can be configured using the "serverSessionTimeout" JNDI property. You can see the list of available JNDI properties here:
List of JNDI properties
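For reference, on a WebSphere Liberty server a JNDI property is usually declared with a jndiEntry element in server.xml. This is only a hedged sketch: it assumes the runtime's context root is "worklight" (the prefix must match your own runtime's context root) and uses a 10-minute value picked purely for illustration; check the linked list of JNDI properties for the exact name and value format.

<!-- server.xml (WebSphere Liberty); "worklight" is an assumed context root -->
<jndiEntry jndiName="worklight/serverSessionTimeout" value="10"/>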

Related

Guacamole Disconnect Session on User Idle

I have googled a lot on this question but haven't found an answer. I have a Guacamole server that connects to a local VNC session, and I would like it to disconnect the user session if it detects no activity for an hour.
What I've tried:
Used xprintidle to show the user's idle time. This works, so it can be put in a script later to terminate an idle session.
I looked at Guacamole's api-session-timeout and set it to one minute, hoping it would not reconnect to the same session once the VNC server stops abruptly. It seems this did not work.
I tried using Guacamole's user-mapping.xml to specify the parameter "autoretry" and set it to "1". Restarted, and this did not work; I'm thinking it's not used this way.
I went to the Guacamole PostgreSQL database and manually inserted an entry for autoretry into the parameter table. Restarted, but this did not work.
I went to the VNC side (I use TurboVNC) and looked at the -idletimeout flag and the max-idle-timeout configuration option. These terminate the TurboVNC service when there are no active connections, which is not what I am after; I'm trying to disconnect the session only when the user is idle.
I figured that the VNC side would not work anyway, because even if the VNC session is terminated, Guacamole would keep retrying forever to reconnect to it.
From some posts on the Guacamole mailing list it seems that disabling auto-reconnect is not possible without recompiling from source.
Is there a way to disconnect an active session after an idle timeout period? Or maybe a way to stop Guacamole from reconnecting?

Solace - Implementing DR for Solace Java JMS publisher

I have an existing application that was running on the Solace jar v7.1.2 in pub/sub mode. Now we have upgraded to v10.1.1 and, as part of implementing a DR (Disaster Recovery) setup, I have added one more host to the configuration, comma-separated.
The application could connect to the primary host successfully, but during the switch-over (i.e. from primary to DR) the application failed to connect and I received the error below. It connects to the DR host if I restart my application.
com.solacesystems.jcsmp.JCSMPErrorResponseException: 400: Unknown Flow Name [Subcode:55]
at com.solacesystems.jcsmp.impl.flow.PubFlowManager.doPubAssuredCtrl(PubFlowManager.java:266)
at com.solacesystems.jcsmp.impl.flow.PubFlowManager.notifyReconnected(PubFlowManager.java:452)
at com.solacesystems.jcsmp.protocol.impl.TcpClientChannel$ClientChannelReconnect.call(TcpClientChannel.java:2097)
... 5 more
|EAI-000376|||ERROR| |EAI-000376 JMS Exception occurred, Description: `Error sending message - unknown flow name ((JCSMPTransportException)
I need help understanding whether some configuration is required for the application to reconnect to the DR host for a smooth switch-over.
In Solace JMS API versions earlier than 7.1.2.226, any sessions on which the clients have published Guaranteed messages will be destroyed after a DR switch-over. To indicate the disconnect and loss of publisher flow, the JMS API will generate this exception. Upon receiving these exceptions, the client application should create a new session. After a new session is established, the client application can republish any Guaranteed messages that had been sent but not acknowledged on the previous session, as these messages might not have been persisted and replicated.
However, this behavior was improved in version 7.1.2.226 and later so that the API handles it transparently. It is no longer necessary to implement code to catch this exception. Can you please verify that the application is not using an API version earlier than 7.1.2.226? This can be done by enabling debug-level logs.
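As a hedged illustration of both points, the sketch below shows a comma-separated host list (primary first, DR second) on a Solace JMS connection factory, and an ExceptionListener that is only relevant for API versions earlier than 7.1.2.226, where the application itself must rebuild the session and republish unacknowledged messages. Host names, VPN, credentials, and the queue name are placeholders, and the republish bookkeeping is only outlined.

import com.solacesystems.jms.SolConnectionFactory;
import com.solacesystems.jms.SolJmsUtility;
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class DrAwarePublisher {
    public static void main(String[] args) throws Exception {
        SolConnectionFactory cf = SolJmsUtility.createConnectionFactory();
        // Primary host first, DR host second, comma-separated as in the question.
        cf.setHost("tcps://primary.example.com:55443,tcps://dr.example.com:55443");
        cf.setVPN("myVpn");
        cf.setUsername("myUser");
        cf.setPassword("myPassword");

        Connection connection = cf.createConnection();
        // Only needed for sol-jms versions earlier than 7.1.2.226: on
        // "Unknown Flow Name" the old publisher flow is gone, so the
        // application must create a new connection/session here and
        // republish any Guaranteed messages sent but not yet acknowledged
        // (tracking those messages is application-specific).
        connection.setExceptionListener(ex -> ex.printStackTrace());
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("myQueue"));
        producer.setDeliveryMode(DeliveryMode.PERSISTENT); // Guaranteed messaging
        producer.send(session.createTextMessage("hello"));

        producer.close();
        session.close();
        connection.close();
    }
}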
As Alexandra pointed out, when using guaranteed messaging, as of version 7.1.2 the Solace JMS API guarantees delivery even in the case of failover. It is normal to receive INFO-level log messages that say "Error Response (400) - Unknown Flow Name"; this does not indicate a problem. Exceptions (with stack traces), however, are a problem and indicate that delivery is not guaranteed.
Background: if the connection between the client and the broker (on the Solace server) is terminated unexpectedly, the broker maintains the flow state — but only for three minutes. The state is also copied to the HA mate broker to support failover (but not to the replication mate). If the client reconnects within three minutes, it can resume where it left off. If it reconnects after three minutes, the server will respond with the following (which will be echoed to the logs):
2019-01-04 10:00:59,999 INFO [com.solacesystems.jcsmp.impl.flow.PubFlowManager] (Context_2_Thread_reconnect_service) Error Response (400) - Unknown Flow Name
2019-01-04 10:00:59,999 INFO [com.solacesystems.jcsmp.impl.PubADManager] (Context_2_Thread_reconnect_service) Unknown Publisher Flow (flowId=36) recovered: 1 messages renumbered and resent (lastMessageIdSent =0)
That's okay: the client JMS library will automatically resend whatever messages are necessary, so guaranteed messaging is still guaranteed.
Also, just to confirm, the jar name indicates the version, so sol-jms-10.1.1.jar uses version 10.1.1.

MQTT Clean session

Assume that I connect to the broker with "clean session=false" and start receiving events; in case of a disconnection, ideally my application will still receive the data when it reconnects. But if the application crashes, I want to have a fresh start and clear my session.
Can I clear my session on the MQTT broker and have a fresh start?
From the documentation I concluded that if I wanted to do that I would need to do the following:
application start
connect using "clean session=true" // this will cause any current session to be removed along with its data
everything related to the session is purged from the server
disconnect
connect using "clean session=false" and start getting the data.
I got the idea from http://www.hivemq.com/blog/mqtt-essentials-part-3-client-broker-connection-establishment:
"If clean session is set to true, the broker won’t store anything for the client and will also purge all information from a previous persistent session."
Is this the correct way to clear a previous session?
Yes, that is the only way to clear a session for a client.
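A minimal sketch of those steps with the Eclipse Paho Java client; the question does not name a client library, so Paho, the broker URL, client ID, and topic are all assumptions. The client ID must be identical in both connects so that the broker purges and then recreates state for the same client.

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;

public class FreshStart {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://broker.example.com:1883", "my-client-id");

        // 1) Connect with cleanSession=true: the broker drops any stored
        //    session state (subscriptions, queued messages) for this client ID.
        MqttConnectOptions wipe = new MqttConnectOptions();
        wipe.setCleanSession(true);
        client.connect(wipe);

        // 2) Disconnect, then reconnect with cleanSession=false to start a
        //    fresh persistent session and begin receiving data.
        client.disconnect();
        MqttConnectOptions persistent = new MqttConnectOptions();
        persistent.setCleanSession(false);
        client.connect(persistent);
        client.subscribe("events/#", 1);
        // Messages are delivered to a callback registered with client.setCallback(...).
    }
}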

Wildfly HornetQ Remote HTTP connection memory leak

I am running a Wildfly 8.2 instance with HornetQ messaging remotely accessible via HTTPS on port 8185.
To test the connection I am running a client on the same machine, connecting via https-remoting://localhost:8185.
From the client's point of view everything works fine: connecting, sending/receiving messages, and closing the connection.
On the server side everything works fine at first, too. However, after the period set in "connection-ttl" of the RemoteConnectionFactory has passed, the server logs the following lines:
2015-09-03 17:05:49,152 WARN [org.hornetq.core.client] (hornetq-failure-check-thread) HQ212037: Connection failure has been detected: HQ119014: Did not receive data from /192.168.160.83:63937. It is likely the client has exited or crashed without closing its connection, or the network between the server and client has failed. You also might have configured connection-ttl and client-failure-check-period incorrectly. Please check user manual for more information. The connection will now be closed. [code=CONNECTION_TIMEDOUT]
2015-09-03 17:05:49,154 INFO [org.hornetq.core.server] (hornetq-failure-check-thread) HQ221021: failed to remove connection
Final result after testing for a longer time (every 1-2 seconds clients are connecting, sending/receiving messages, and closing the connection):
Wildfly consumes more and more heap memory and finally stops working with an OutOfMemoryError ...
As mentioned, the connections are always explicitly closed by the client, and at close time no error is logged on either the client or the server side. It seems that the "hornetq-failure-check-thread" just isn't informed that the connection has already been closed.
Any help for this issue is appreciated!
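For context, a client loop of the kind described (connect, send/receive, explicitly close, every 1-2 seconds) might look roughly like the sketch below. It is an assumption of what the test client does, following the usual Wildfly 8 remote-JMS lookup pattern; the credentials and queue name are placeholders.

import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class RemoteJmsLoop {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
        env.put(Context.PROVIDER_URL, "https-remoting://localhost:8185");
        env.put(Context.SECURITY_PRINCIPAL, "appuser");   // placeholder
        env.put(Context.SECURITY_CREDENTIALS, "secret");  // placeholder
        Context ctx = new InitialContext(env);

        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
        Destination queue = (Destination) ctx.lookup("jms/queue/testQueue"); // placeholder queue

        while (true) {
            Connection connection = cf.createConnection("appuser", "secret");
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage("ping"));
            } finally {
                // Explicit close, as described; closing the connection also
                // closes its sessions and producers.
                connection.close();
            }
            Thread.sleep(2000); // roughly the 1-2 second cadence from the test
        }
    }
}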

Check Worklight Client connection state

I'm working on a native iOS app that uses IBM Worklight server adapters. In my code, every time I want to invoke a procedure I call WLClient().wlConnectWithDelegate(self) and then call the adapter. Is there a way to check the connection status of the client before I invoke the adapter procedure?
There is no such API provided by the Worklight framework.
The idea behind the connect API is to establish a session between the client and the server, preventing, for example, race conditions (such as two adapter requests to the server each getting its own session, potentially causing trouble), in addition to delivering data in headers that is not available in an adapter request compared to a connect request.
I think that instead of making a connect request before every invocation, you can do it at an early stage in the app's lifecycle, as well as whenever the app returns to the foreground, to ensure that a session is established. Couple this with an appropriate session timeout set in worklight.properties on the server side.
More here: https://developer.ibm.com/mobilefirstplatform/documentation/getting-started-7-0/hello-world/connecting-to-the-mobilefirst-server/
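The session timeout mentioned above is likely the serverSessionTimeout property discussed at the top of this page; a hedged example of how it might appear in worklight.properties (the value, in minutes, is made up for illustration):

# worklight.properties (server side); value in minutes, chosen only as an example
serverSessionTimeout=10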
