MDB transaction does not roll back email on timeout

I have an MDB that listens on a queue. Whenever it receives a message, it forwards execution to a stateless session bean which has a lot of logic, updates, etc. Here is the flow of logic/call chain:
queue->mdb->session bean->session bean->email->logging
The end result is an email and subsequent logging.
By default, the MDB's transaction is container-managed and its timeout is 30 seconds.
However, whenever the timeout is reached, it throws a timeout exception and retries the message, but the nested work from the session beans is not cleanly undone. As a result, multiple emails go out because of the retry, while all the logging from the session beans is rolled back EXCEPT for what is logged from the MDB itself.
Shouldn't everything in the transaction started by the MDB roll back, including the MDB's logging and especially the emails?
The session beans all have the default transaction attribute, REQUIRED.
I also explicitly set the transaction management type to CONTAINER with the transaction attribute REQUIRED. Emails still go out; logging from the session beans rolls back, and the retry occurs.
I then set the transaction attribute to REQUIRES_NEW. Emails still go out; logging from the session beans rolls back, but the retry DOESN'T occur.
What setting do I need so that the ENTIRE transaction started by the MDB, and any transactions called from it, gets rolled back AND the retry occurs?
I do not want to use bean-managed transactions because I want the retry on failure to occur.
My application server is WebLogic 10.3 with the EJB 3 spec.
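For reference, the configuration I'm describing boils down to annotations along these lines (a sketch; the class name is made up):

```java
import javax.ejb.MessageDriven;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.jms.Message;
import javax.jms.MessageListener;

// Container-managed transactions with the REQUIRED attribute, as described
// above; the session beans it calls use the same REQUIRED default.
@MessageDriven
@TransactionManagement(TransactionManagementType.CONTAINER)
@TransactionAttribute(TransactionAttributeType.REQUIRED)
public class MyMdb implements MessageListener {
    @Override
    public void onMessage(Message message) {
        // forward to the session bean chain: session bean -> email -> logging
    }
}
```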

Your email resource is not transactional, so ditch the MDB timeout and have your email sender rely on the email transport's timeout; when it fires, call setRollbackOnly() on the transaction. The transaction will roll back, the message will be redelivered, and your email may only be sent on a successful retry. Note that the outcome of an email transport timeout may not be deterministic.
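A minimal sketch of that approach, assuming a container-managed MDB and a hypothetical sendEmail helper whose SMTP transport timeout is configured through the standard JavaMail mail.smtp.connectiontimeout / mail.smtp.timeout properties:

```java
import javax.annotation.Resource;
import javax.ejb.MessageDriven;
import javax.ejb.MessageDrivenContext;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven
public class EmailingMdb implements MessageListener {

    @Resource
    private MessageDrivenContext ctx;

    @Override
    public void onMessage(Message message) {
        try {
            // ... business logic and DB updates via the session beans ...
            sendEmail(message); // relies on the mail transport's own timeout
        } catch (Exception e) {
            // Mark the container-managed transaction rollback-only: the DB
            // work rolls back and the JMS message is redelivered (retried).
            ctx.setRollbackOnly();
        }
    }

    private void sendEmail(Message message) throws Exception {
        // Hypothetical helper: send via JavaMail with
        // mail.smtp.connectiontimeout / mail.smtp.timeout set, so a hung
        // transport surfaces here as an exception instead of an MDB timeout.
    }
}
```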

Related

Confirming Deliveries with Spring AMQP

We're using Spring AMQP in the style of Spring Remoting with AMQP. I'm setting x-message-ttl on every message so that it expires immediately if it cannot be delivered immediately to a consumer. This works great; however, it leaves the producer waiting for the specified value of replyTimeout before failing with RemoteProxyFailureException (if I recall correctly). Is there any way I can make the producer fail immediately if the message cannot be delivered (only waiting for the timeout if the message is actually received)?
The loose coupling of the architecture means there's no indication to the producer of the expiry.
There used to be an immediate flag, but it was removed in RabbitMQ 3.0.
One possible solution would be to configure a DLX/DLQ so the expired message can be consumed by another consumer, which can return an exception to the client.
EDIT:
Simply have the fallback consumer implement the same interface and have it throw an exception, as sketched below.
See this updated test case.
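In other words, something along these lines (QuoteService and its method are hypothetical, and the DLX/DLQ binding configuration is omitted):

```java
// The same service interface the Spring Remoting proxy is built against.
interface QuoteService {
    String getQuote(String symbol);
}

// Fallback consumer listening on the dead-letter queue. Because it
// implements the same interface, the remoting machinery serializes the
// exception into the AMQP reply and re-throws it on the producer side,
// failing the call immediately instead of waiting out replyTimeout.
class DeadLetterQuoteService implements QuoteService {
    @Override
    public String getQuote(String symbol) {
        throw new IllegalStateException(
                "Message expired before any consumer could handle it");
    }
}
```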

Connection to db lost after some time

After logging in to my application and waiting some time, say half an hour, the connection to the database through Entity Framework is somehow lost and I get this message:
You must call the "WebSecurity.InitializeDatabaseConnection" method before you call any other method
Is there anything I can do?
Two things:
Configure the timeouts of your database. For example, if you use MySQL, you can configure wait_timeout, interactive_timeout, etc. Other databases have similar configuration options.
Your application should handle the timeout and reconnect, as sketched below. It is right for the database to time out idle sessions, so that resources can be released and used by active sessions.
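The reconnect logic is language-agnostic; here is a rough sketch of the check-and-reopen pattern in Java/JDBC (the URL, credentials, and table are hypothetical):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ReconnectingDao {
    private static final String URL = "jdbc:mysql://localhost/app"; // hypothetical
    private Connection conn;

    private Connection connection() throws SQLException {
        // isValid() pings the server; reopen if the idle session was
        // timed out (e.g. by MySQL's wait_timeout).
        if (conn == null || !conn.isValid(2)) {
            conn = DriverManager.getConnection(URL, "user", "secret");
        }
        return conn;
    }

    public int countUsers() throws SQLException {
        try (Statement stmt = connection().createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM users")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}
```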

Indy TCP Server Freezes, no idea why

I have a Server and a client (Delphi).
The client collects login details, connects to the server, and sends them to be validated; the server receives the data, validates it, and returns whether the given details are correct.
If the details were correct, the client continues to the next window, where the user enters some data into the corresponding fields and sends it to the server. When the server receives the data it stores it, then replies to the client that it was stored successfully. Once the client has been notified that the data was stored successfully, it displays a message notifying the user and then terminates.
While testing this with the client running on four different computers (each computer would have opened and closed the client about 6 times), the server suddenly stopped replying to the clients (the clients display a message saying "Connection closed gracefully").
This is the error the server is returning (screenshot omitted). The error appears to occur when the ADOQuery opens its connection to execute the SQL; why would it cause an exception only after about 30 executions?
Any suggestions as to what my problem is? I have no idea what it might be.
Thanks for your help :)
If a client receives a "Connection closed gracefully" error, it means the server closed that client's connection on the server side. If your server code is not explicitly doing that, then it usually means an uncaught exception was raised in one of the server's event handlers, which would cause the server to close the socket (if the exception is raised after the OnConnect event and before the OnDisconnect event, OnDisconnect is triggered before the socket is closed). TIdTCPServer has an OnException event to report that condition.
TIdTCPClient closes the socket during destruction if it is still open.
Update: TIdTCPServer is a multi-threaded component. Each client connection runs in its own thread. ADO uses apartment-threaded COM objects that are tied to the thread that creates them, and can only be used within that thread context unless marshaled across thread boundaries using CoMarshalInterThreadInterfaceInStream() + CoGetInterfaceAndReleaseStream(), or the IGlobalInterfaceTable interface.
In this situation, you should do one of the following:
1. Give each client its own ADO connection and query objects (a generic sketch of this per-thread idea follows after this answer). You could either:
   A. create them in the OnConnect event and store them within the TIdContext for use in the OnExecute event, then free them in the OnDisconnect event; or just create and free them in the OnExecute event on an as-needed basis.
   B. derive a new class from TIdThreadWithTask and override its virtual BeforeExecute() and AfterExecute() methods to create and free the ADO objects, then assign one of the TIdSchedulerOfThread... components to the TIdTCPServer.Scheduler property and assign your thread class to the TIdSchedulerOfThread.ThreadClass property. Then, in the server events, you can use TMyThreadClass(TIdYarnOfThread(TIdContext.Yarn).Thread) to access the calling thread's ADO objects.
2. Create a separate pool of ADO objects. When a client needs to access the database, have it marshal the appropriate ADO objects into the calling thread's context, and put the objects back in the pool when finished.
Either way, since ADO is COM-based, don't forget to call CoInitialize/Ex() and CoUninitialize() for each client thread that needs to access ADO objects, either in the OnConnect and OnDisconnect events, or in the TIdThreadWithTask.BeforeExecute() and TIdThreadWithTask.AfterExecute() methods.
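The ADO and COM details are Delphi-specific, but the per-thread-resource idea behind option 1 is general. Purely as an illustration of the shape, not Indy/ADO code, the same pattern in Java terms (hypothetical JDBC URL):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Each worker thread owns its own Connection, created on first use in that
// thread and never shared across threads: the analogue of creating the ADO
// objects in OnConnect/BeforeExecute and freeing them in
// OnDisconnect/AfterExecute.
public class PerThreadConnections {
    private static final ThreadLocal<Connection> CONN =
            ThreadLocal.withInitial(() -> {
                try {
                    return DriverManager.getConnection("jdbc:h2:mem:demo"); // hypothetical
                } catch (SQLException e) {
                    throw new IllegalStateException(e);
                }
            });

    public static Connection current() {
        return CONN.get();
    }

    // Call when the worker thread is done with the database.
    public static void release() throws SQLException {
        current().close();
        CONN.remove();
    }
}
```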

How do I return trigger error back to breezejs client side?

Validation can happen on the client side and the server side, but what if it happens on the database side? If I want to stop an insert/update by rolling back in a trigger, how do I notify the client side? Right now it seems BreezeJS just ignores the error I raise in the trigger.
If you are using an Entity Framework or NHibernate backed server, then throwing any exception on the server should fail the entire transaction and turn into a failed save on the client (with all changes placed back into their 'presave' state). For this to occur, the Breeze server must detect an exception, so you may need to have your trigger raise one.
If you are using some other server, the behavior depends on whether the database supports transactional semantics (MongoDB, for example, does not).
Found that it does return; I just needed to raise the error with a higher severity and parse the error message out of the HTTP response data.

HTTP requests in transactions?

I have a model which sends an HTTP request to an external web service on creation, in order to find out some information to add before it is saved.
Currently I'm doing this in a before_create callback. I recently learned that before/after callbacks happen within database transactions.
Am I opening myself up to any issues such as limiting DB throughput by doing this? Would it be better to commit the record before sending the http request and then update the record when it returns?
As long as you keep a transaction open, all the locks it acquired remain active. If you have a call to an external source that may stall you for a long period of time, be sure not to hold any unrelated locks in the same transaction.
In other words: don't put anything else into the same transaction.
If you don't mind the new row being visible before you look up the additional information, you might just commit and later update the row.
Or you fetch the information from the external web service before you even start the transaction; that would be the cleanest and fastest solution for the database (sketched below).
PostgreSQL lock types.
How to view locks.
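A sketch of that last ordering, the HTTP call first and then a short transaction, written in Java (the endpoint and SQL are hypothetical):

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class CreateWithLookup {
    public void create(Connection conn, String name) throws Exception {
        // 1. Call the external service first, with no transaction open,
        //    so no locks are held while we wait on the network.
        HttpClient http = HttpClient.newHttpClient();
        String query = URLEncoder.encode(name, StandardCharsets.UTF_8);
        HttpRequest req = HttpRequest.newBuilder(
                URI.create("https://api.example.com/info?name=" + query)) // hypothetical
                .build();
        String info = http.send(req, HttpResponse.BodyHandlers.ofString()).body();

        // 2. Only now open a short transaction for the insert.
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO things(name, info) VALUES (?, ?)")) {
            ps.setString(1, name);
            ps.setString(2, info);
            ps.executeUpdate();
            conn.commit();
        } catch (Exception e) {
            conn.rollback();
            throw e;
        }
    }
}
```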
