I have a timer task that closes a connection when it fires. The problem is that sometimes it fires before the connection has actually opened, like this:
try {
    HttpConnection conn = getMyConnection(); // Assume this returns a valid connection object
    // ... At this moment the timer triggers the worker which closes the connection:
    conn.close(); // This is done by the timer task before conn.getResponseCode()
    int mCode = conn.getResponseCode(); // BOOOMMMM!!!! Explosion!!!!
    // ... Rest of my code here.
} catch (Throwable e) {
    System.out.println("oops..."); // This never gets called... Why?
}
When I try conn.getResponseCode(), an exception is thrown but isn't caught. Why?
I get this error: ClientProtocol(HttpProtocolBase).transitionToState(int) line: 484, and a "source not found" message.
The connection lives in a different thread, and has its own lifecycle. You are trying to access it from the timer thread in a synchronous way.
To begin with, a connection is a state machine. It starts in the "setup" state, then changes to the "connected" state when certain methods are called on it (any method that requires contacting the server), and finally it changes to the "closed" state when the connection has been terminated by either the server or the client. The getResponseCode method is one of those that can cause the connection to transition from the so-called "setup" state to the "connected" state, if it wasn't already connected. You are trying to get the response code immediately, without even knowing whether the connection was established or not. You are not even giving the connection time to connect or close itself properly. Even if you could, have a look at what the javadocs say about the close method:
When a connection has been closed, access to any of its methods except this close() will cause an IOException to be thrown.
If you really need to do something after it has been closed, pass a "listener" object to the connection so that it can call back when the connection has been closed, and pass back the response code (if the connection with the server was ever established).
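A minimal sketch of that callback idea (the ConnectionListener interface, the ConnectionWorker class and their method names are hypothetical, not part of any connection API):

import java.io.IOException;
import javax.microedition.io.HttpConnection;

/** Hypothetical callback interface -- implemented by whoever needs the result. */
interface ConnectionListener {
    /** Called after the connection is closed; responseCode is -1 if it never connected. */
    void connectionClosed(int responseCode);
}

/** Worker that owns the connection and is the only code allowed to touch it. */
class ConnectionWorker {
    private final HttpConnection conn;
    private final ConnectionListener listener;
    private volatile int responseCode = -1;

    ConnectionWorker(HttpConnection conn, ConnectionListener listener) {
        this.conn = conn;
        this.listener = listener;
    }

    /** Runs on the worker thread: connects and records the response code. */
    void run() throws IOException {
        responseCode = conn.getResponseCode(); // transitions "setup" -> "connected"
        // ... read and process the response here ...
    }

    /** Called by the timer task instead of closing the connection directly. */
    void closeFromTimer() {
        try {
            conn.close();
        } catch (IOException ignored) {
            // already closed or never opened; nothing more to do
        }
        listener.connectionClosed(responseCode); // report whatever we got, or -1
    }
}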
I followed the Vaadin tutorial (Creating Collaborative Views) for broadcasting events and registering listeners for them.
Registration eventRegistration;

@Override
protected void onAttach(AttachEvent attachEvent) {
    log.debug("In attach...");
    UI ui = attachEvent.getUI();
    eventRegistration = Broadcaster.register(
            "eventName",
            message -> ui.access(() -> {
                log.debug("Request to refresh grid...");
                presenter.refreshGrid();
                ui.push();
            }));
}

@Override
protected void onDetach(DetachEvent detachEvent) {
    log.debug("In detach...");
    if (eventRegistration != null) {
        eventRegistration.remove();
        eventRegistration = null;
    }
}
Everything works except that when the page is refreshed, the logic in onDetach() is not executed. After the refresh, however, you do enter the onAttach() method again. Because of this you end up registering several of 'the same' listeners without removing the previous one, so you effectively get a doubling of listeners. The onDetach() method is only reached if you navigate to another menu item, for example.
You can find an example log below.
What is the Vaadin recommended way to remove these listeners before/during refresh?
The onDetach method should be called eventually.
No event is sent to the server when you close or refresh a tab, and as such the server is not aware that the old UI should be detached.
This is where the heartbeat requests come in. The UIs send heartbeat requests every 5 minutes by default, and if the server notices that the old UI has missed three heartbeats, it will be detached. Alternatively, it will be detached when the session expires.
In other words, the onDetach method should be called after about 20 minutes.
The reason no event is sent to the server when the tab is closed or refreshed is that this could prevent the tab from refreshing/closing while the request is being handled, which is bad user experience. Also, this wouldn't cover the cases where the computer is turned off or the network disconnected.
There is something called the Beacon API that could be used to notify the server when a tab is refreshed or closed without causing a delay in the browser. There is an issue for using this to immediately detach UIs.
I'd recommend using the Unload Beacon add-on: https://vaadin.com/directory/component/unload-beacon-for-vaadin-flow or a similar approach, which is demonstrated in the Cookbook: https://cookbook.vaadin.com/notice-closed - essentially, it executes a JavaScript snippet that adds an event listener for the window's unload event:
ui.getElement().executeJs(
"window.addEventListener('unload', function() {navigator.sendBeacon && navigator.sendBeacon($0)})", relativeBeaconPath);
and the beacon is sent to a custom SynchronizedRequestHandler.
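For orientation, here is a minimal sketch of such a handler, assuming a hypothetical /beacon path that matches whatever relativeBeaconPath points to; the Cookbook recipe linked above shows the complete version, including how the corresponding UI is looked up and cleaned up. It would typically be registered via ServiceInitEvent.addRequestHandler in a VaadinServiceInitListener.

import java.io.IOException;

import com.vaadin.flow.server.SynchronizedRequestHandler;
import com.vaadin.flow.server.VaadinRequest;
import com.vaadin.flow.server.VaadinResponse;
import com.vaadin.flow.server.VaadinSession;

public class BeaconHandler extends SynchronizedRequestHandler {

    static final String BEACON_PATH = "/beacon"; // assumption: same path passed to executeJs above

    @Override
    protected boolean canHandleRequest(VaadinRequest request) {
        // Only react to the beacon URL; all other requests go to the normal handlers.
        return BEACON_PATH.equals(request.getPathInfo());
    }

    @Override
    public boolean synchronizedHandleRequest(VaadinSession session,
            VaadinRequest request, VaadinResponse response) throws IOException {
        // Runs with the session locked: remove broadcaster registrations / detach the UI here.
        return true; // the beacon request is fully handled
    }
}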
The simplest way to work around the problem would be to check whether eventRegistration is null before adding one.
@Override
protected void onAttach(AttachEvent attachEvent) {
    log.debug("In attach...");
    UI ui = attachEvent.getUI();
    if (eventRegistration == null) {
        eventRegistration = Broadcaster.register(
                "eventName",
                message -> ui.access(() -> {
                    log.debug("Request to refresh grid...");
                    presenter.refreshGrid();
                    ui.push();
                }));
    }
}
See the other answer by Erik for why the call to onDetach is delayed.
We have a microservice which consumes a message using @RabbitListener, persists data into a database, generates a response on successful processing of the message, and sends it using @SendTo to a different queue for auditing.
When running RabbitMQ in HA failover, if the connection is lost while the response is being sent, the message currently being processed is correctly returned to the queue, but the database transaction (a JPA transaction in our case) is not rolled back, and the response is never sent.
I read in this issue (https://github.com/spring-projects/spring-amqp/issues/696) that this is "best effort 1PC" transaction synchronization; RabbitMQ does not support XA transactions. The Rabbit tx is committed after the DB tx, so there is a possibility that the DB tx commits and the Rabbit tx is rolled back; you have to deal with the small possibility of duplicate messages.
But in our case, when the request is retried we treat it as a duplicate message, so a response is never created for it. Is there a way to retry only the sending of the response message in the case of connection-lost exceptions, rather than reprocessing the request again? I looked at ConditionalRejectingErrorHandler.DefaultExceptionStrategy, but it only has access to the original request; there is no way to access the response that was lost during the connection failure. Please suggest the best way to handle this.
Our code looks like this:
@SpringBootApplication
@EnableJpaRepositories("com.***")
@EnableJpaAuditing
@EnableTransactionManagement
@EnableEncryptableProperties
public class PcaClinicalValidationApplication {

    @RabbitListener(queues = "myqueue")
    @SendTo("exchange/routingKey")
    @Timed(description = "Time taken to process a request")
    public Message receivemessage(HashMap<String, Object> myMap, Message requestMessage)
            throws Exception {
        // business logic goes here
        Message message = MessageBuilder.fromMessage(requestMessage)
                // add some headers
                .build();
        return message;
    }
    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory,
            SimpleRabbitListenerContainerFactoryConfigurer configurer) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        configurer.configure(factory, connectionFactory);
        factory.setRetryTemplate(new RetryTemplate());
        factory.setReplyRecoveryCallback(ctx -> {
            Message failed = SendRetryContextAccessor.getMessage(ctx);
            Address replyTo = SendRetryContextAccessor.getAddress(ctx);
            Throwable t = ctx.getLastThrowable();
            // write the failed reply to a file
            serializer.serialize(failed);
            return null;
        });
        return factory;
    }
The listener container factory uses a RabbitTemplate in its replyTemplate property - this is used to send the reply.
You can configure a RetryTemplate into that RabbitTemplate to retry sending the reply.
When retries are exhausted, you can add a RecoveryCallback which will get the failed reply and you can save it off someplace and use it when the redelivery occurs.
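A sketch of that wiring, building on the factory bean from the question (the retry policy values and the recoverReply helper are illustrative assumptions, and imports are as in the snippet above):

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory,
        SimpleRabbitListenerContainerFactoryConfigurer configurer) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    configurer.configure(factory, connectionFactory);

    // Retry sending the reply a few times before giving up (example values only).
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(5));
    ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
    backOff.setInitialInterval(1000);
    backOff.setMultiplier(2.0);
    backOff.setMaxInterval(10000);
    retryTemplate.setBackOffPolicy(backOff);
    factory.setRetryTemplate(retryTemplate);

    // Called only after the retries above are exhausted: stash the reply somewhere durable
    // so it can be sent (instead of reprocessing the request) when the redelivery occurs.
    factory.setReplyRecoveryCallback(ctx -> {
        Message failedReply = SendRetryContextAccessor.getMessage(ctx);
        Address replyTo = SendRetryContextAccessor.getAddress(ctx);
        recoverReply(failedReply, replyTo, ctx.getLastThrowable()); // hypothetical helper
        return null;
    });
    return factory;
}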
I use mysql-native. This driver supports vibe.d's connection pool. On the dlang newsgroup, mysql-native developer Nick Sabalausky wrote:
"If you're using a connection pool, you shouldn't need to worry about closing the connection. The whole point is that the connections stay open until you need to use one again. When your program ends, then connections will close by themselves."
"You create the pool once (wherever/whenever you want to). Then, every time you want to use the database you obtain a connection by calling MySqlPool.lockConnection."
"Calling 'close' will always close the connection. If you got you connection from the pool, then it will automatically return to the pool when you're no longer using it. No need to do anything special for that."
The question is how the pool should be set up. I have read about the singleton pattern and can't understand whether this is such a case.
I wrote the following code:
database class:
import std.stdio;
import std.string;
import mysql;
import vibe.d;
import config;
import user;

class Database
{
    Config config;
    MySqlPool mydb;
    Connection connection;

    this(Config config)
    {
        this.config = config;
        mydb = new MySqlPool(config.dbhost, config.dbuser, config.dbpassword, config.dbname, config.dbport);
    }

    void connect()
    {
        if(connection is null)
        {
            connection = mydb.lockConnection();
        }
        scope(exit) connection.close();
    }
}
users class/struct:
module user;

import mysql;
import vibe.d;

struct User
{
    int id;
    string login;
    string password;
    string usergroup;
}

void getUserByName(string login)
{
    User user;
    Prepared prepared = prepare(connection, `SELECT id, login, password, usergroup from users WHERE login=?`); // need to get connection accessible here to make request to DB
    prepared.setArgs(login);
    ResultRange result = prepared.query();
    if (result.empty)
        logWarn(`user: "%s" do not exists`, login);
    else
    {
        Row row = result.front;
        user.id = row[0].coerce!(int);
        user.login = row[1].coerce!string;
        user.password = row[2].coerce!string;
        user.usergroup = row[3].coerce!string;
        logInfo(`user: "%s" is exists`, login);
    }
}
The problem is that I can't understand the proper way to get access to the connection instance. It seems like a very bad idea to create a new database connection class inside the user structure every time. But how do I do it in a better way? Should I make Connection connection global? Is that good? Or is there a more correct way?
scope(exit) connection.close();
Delete that line. It's closing the connection you just received from the pool before the connect function returns. All you're doing there is opening a connection just to immediately close it again.
Change getUserByName to take a connection as an argument (typically as the first argument). Whatever code needs to call getUserByName should either open a connection, or get a connection from the pool via lockConnection, and then pass that connection to getUserByName and whatever other DB-related functions it needs to use. Then, after your code is done calling getUserByName (and whatever other DB functions it needs to call), you either just don't worry about the connection anymore and let your vibe.d fiber finish (if you're using vibe.d and got the connection from a pool) or you close the connection (if you did NOT get the connection from a vibe.d pool).
One way to do it is to pass the connection to the functions that need it. So you would refactor your getUserByName() to take a connection as an argument.
Another alternative is to use the DAO pattern; see the sketch below. The constructor of your DAO class would take the connection as one of its main parameters, and all the methods would use it to do the DB operations.
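To illustrate only the shape of that DAO (sketched in Java/JDBC terms rather than D, with a hypothetical UserDao and column names; the same structure applies with mysql-native's Connection and Prepared types):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

record User(int id, String login, String password, String usergroup) {}

// The caller obtains a connection (e.g. from a pool) and hands it to the DAO.
public class UserDao {
    private final Connection connection;

    public UserDao(Connection connection) {
        this.connection = connection;
    }

    public User getUserByName(String login) throws SQLException {
        String sql = "SELECT id, login, password, usergroup FROM users WHERE login = ?";
        try (PreparedStatement stmt = connection.prepareStatement(sql)) {
            stmt.setString(1, login);
            try (ResultSet rs = stmt.executeQuery()) {
                if (!rs.next()) {
                    return null; // the caller decides how to handle "not found"
                }
                return new User(rs.getInt("id"), rs.getString("login"),
                        rs.getString("password"), rs.getString("usergroup"));
            }
        }
    }
}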
Problem description:
Let's have a service method which is called from a controller:
class PaymentService {
    static transactional = false

    public void pay(long id) {
        Member member = Member.get(id)
        // long running task executing HTTP request
        requestPayment(member)
    }
}
The problem is that if 8 users hit the same service at the same time and the requestPayment(member) method takes 30 seconds to execute, the whole application gets stuck for 30 seconds.
The problem is even bigger than it seems, because as long as the HTTP request performs well, nobody notices any trouble. The serious problem is that the availability of our web service depends on the availability of our external partner/component (in our use case a payment gateway). So when your partner starts to have performance issues, you will have them as well, and even worse, they will affect all parts of your app.
Evaluation:
The cause of the problem is that Member.get(id) reserves a DB connection from the pool and keeps it for further use, even though the requestPayment(member) method never needs to access the DB. When the next (9th) request hits any other part of the application that requires a DB connection (a transactional service, a DB select, ...), it keeps waiting (or times out if maxWait is set to a lower duration) until the pool has an available connection, which can take up to 30 seconds in our use case.
The stacktrace for the waiting thread is:
at java.lang.Object.wait(Object.java:-1)
at java.lang.Object.wait(Object.java:485)
at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1115)
at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:111)
Or for timeout:
JDBC begin failed
org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Timeout waiting for idle object
at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:114)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
Caused by: java.util.NoSuchElementException: Timeout waiting for idle object
at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1167)
at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
... 7 more
Obviously the same issue happens with a transactional service; however, there it makes much more sense, since the connection is reserved for the transaction.
As a temporary solution it's possible to increase the pool size with the maxActive property on the datasource; however, that doesn't solve the real problem of holding an unused connection.
As a permanent solution it's possible to enclose all DB operations in transactional behavior (withTransaction{..}, @Transactional), which returns the connection to the pool after commit (or, to my surprise, withNewSession{..} works as well). But we need to be sure that the whole call chain from the controller up to the requestPayment(member) method doesn't leak the connection.
I'd like to be able to throw an exception in the requestPayment(member) method if the connection is "leaked" (similar to the Propagation.NEVER transactional behavior), so I can reveal the issue early during the test phase.
After digging in the source code I've found the solution:
class PaymentService {
    static transactional = false

    def sessionFactory

    public void pay(long id) {
        Member member = Member.get(id)
        sessionFactory.currentSession.disconnect()
        // long running task executing HTTP request
        requestPayment(member)
    }
}
The above statement releases the connection back to the pool.
If it is executed from a transactional context, an exception is thrown (org.hibernate.HibernateException: connnection proxy not usable after transaction completion), since such a connection can't be released (which is exactly what I needed).
Javadoc:
Disconnect the Session from the current JDBC connection. If the
connection was obtained by Hibernate close it and return it to the
connection pool; otherwise, return it to the application.
This is used by applications which supply JDBC connections to
Hibernate and which require long-sessions (or long-conversations)
I have modified the spring-websocket-portfolio sample application (https://github.com/rstoyanchev/spring-websocket-portfolio) to implement a channel interceptor. Whenever the client disconnects, the channel interceptor is processed twice. I have a similar implementation in my production application. Since it is invoked twice, the second invocation has unwanted results. I have put a workaround in place for the time being, but I am wondering why my channel interceptor is invoked twice. Any help would be highly appreciated.
Modified items: WebSocketConfig.java:
@Override
public void configureClientInboundChannel(ChannelRegistration registration) {
    registration.setInterceptors(channelInterceptor());
}

@Bean
public ChannelInterceptor channelInterceptor() {
    return new ChannelInterceptor();
}
ChannelInterceptor:

package org.springframework.samples.portfolio.config;

import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.simp.stomp.StompHeaderAccessor;
import org.springframework.messaging.support.ChannelInterceptorAdapter;

public class ChannelInterceptor extends ChannelInterceptorAdapter {

    @Override
    public void postSend(Message<?> message, MessageChannel channel, boolean sent) {
        StompHeaderAccessor sha = StompHeaderAccessor.wrap(message);
        System.out.println(sha.getCommand() + " " + sha);
        switch (sha.getCommand()) {
            case CONNECT: {
                System.out.println("connected:" + sha.getSessionId());
                break;
            }
            case DISCONNECT: {
                System.out.println("disconnected:" + sha.getSessionId());
                break;
            }
            default:
                System.out.println("default:" + sha.getCommand());
                break;
        }
    }
}
logs:
disconnected:9k1hvln6
disconnected:9k1hvln6
Disconnect events may happen more than once for the same session, so your interceptor should be idempotent and ignore duplicate events.
You may also consider using application events (SessionConnectEvent, SessionDisconnectEvent...) instead of a channel interceptor. Here's an example of an idempotent event listener: https://github.com/salmar/spring-websocket-chat/blob/master/src/main/java/com/sergialmar/wschat/event/PresenceEventListener.java
Generally a DISCONNECT frame comes from the client side, is processed in the StompSubProtocolHandler, and is then propagated to the broker. However, a connection can also be closed or lost without a DISCONNECT frame. Regardless of how a connection is closed, the StompSubProtocolHandler generates a DISCONNECT frame. So there is some redundancy on the server side to ensure the broker is aware the client connection is gone.
As Sergi mentioned, you can either subscribe to listen for SessionDisconnectEvent (of which there should be only one) and the other AbstractSubProtocolEvent types, or ensure your code is idempotent.
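For reference, a minimal sketch of an idempotent listener based on application events (the class name, the println bodies and the use of @EventListener are illustrative; the PresenceEventListener linked above is a fuller example):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.context.event.EventListener;
import org.springframework.messaging.simp.SimpMessageHeaderAccessor;
import org.springframework.stereotype.Component;
import org.springframework.web.socket.messaging.SessionConnectedEvent;
import org.springframework.web.socket.messaging.SessionDisconnectEvent;

@Component
public class PresenceListener {

    // Track live session ids so a duplicate DISCONNECT for the same session is ignored.
    private final Set<String> sessions = ConcurrentHashMap.newKeySet();

    @EventListener
    public void onConnected(SessionConnectedEvent event) {
        String sessionId = SimpMessageHeaderAccessor.wrap(event.getMessage()).getSessionId();
        sessions.add(sessionId);
        System.out.println("connected: " + sessionId);
    }

    @EventListener
    public void onDisconnect(SessionDisconnectEvent event) {
        // remove() returns false on the second event for the same session, making this idempotent.
        if (sessions.remove(event.getSessionId())) {
            System.out.println("disconnected: " + event.getSessionId());
        }
    }
}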