WCF Request Channel Timeout Error

I have an app that calls a WCF service at high frequency. The app starts out working fine, and then after a few minutes every call starts generating this error:
System.TimeoutException: The request channel timed out attempting to send after 00:02:00. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding.
I've tried everything I can think of to get around this error, such as:
Setting ConcurrencyMode.Multiple on the service:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
public class ListingService : IListingService
Setting higher limits for max concurrent calls/sessions/instances in the service's web.config:
<serviceThrottling maxConcurrentCalls="100000" maxConcurrentSessions="100000" maxConcurrentInstances="100000" />
Setting a higher ServicePointManager.DefaultConnectionLimit in Application_Start in the service's Global.asax:
protected void Application_Start(object sender, EventArgs e)
{
    System.Net.ServicePointManager.DefaultConnectionLimit = 100000;
}
Making sure I'm closing client connections:
using (var client = new ListingServiceClient())
{
    client.SaveListing(listing);
    client.Close();
}
Here is the service's web.config: http://pastebin.com/d9qtZUKN
However, I'm still getting the error. I'm sure that the service call does not take that long. Any ideas?

The issue turned out to be the database timing out, which in turn caused WCF to time out.
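For completeness, the stopgap the exception message itself suggests is raising SendTimeout on the client binding; that only buys time while you track down the slow call (here, the database). A minimal sketch of setting it in code, assuming a BasicHttpBinding client (the endpoint address below is hypothetical):
// Hedged sketch: raise the client-side timeouts while diagnosing the slow call.
// BasicHttpBinding and the address are assumptions, not from the original post.
var binding = new BasicHttpBinding
{
    SendTimeout = TimeSpan.FromMinutes(5),   // WCF's default SendTimeout is one minute
    ReceiveTimeout = TimeSpan.FromMinutes(5)
};
var address = new EndpointAddress("http://localhost/ListingService.svc");
using (var factory = new ChannelFactory<IListingService>(binding, address))
{
    IListingService channel = factory.CreateChannel();
    channel.SaveListing(listing);
}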

Does calling ActiveMQConnection cleanup() and changeUserInfo() impact a DefaultMessageListenerContainer with CACHE_CONSUMER?

Here's some context: I've set up an ActiveMQ broker (version 5.13.1) with a BrokerPlugin that takes a requestId (just a UUID for tracking requests across different servers) and an OAuth token (more about the OAuth token below) in the 'username' and 'password' fields, respectively, of the org.apache.activemq.command.ConnectionInfo; it's based on this great post. On the client/consumer side, I'm wrapping the ActiveMQConnection inside a Spring DefaultMessageListenerContainer (version 4.2.4.RELEASE) with cacheLevel=CACHE_CONSUMER (so it also caches the Session and Consumer along with the Connection).
The snag is that the client's OAuth token expires every 20 minutes, so I've set up a ScheduledExecutorService on the client side to refresh the OAuth token every 18 minutes.
My question: if my scheduled task on the client side calls ActiveMQConnection#cleanup() followed by ActiveMQConnection#changeUserInfo(newRequestId, newAuthToken), will that negatively impact Spring's DefaultMessageListenerContainer that is holding onto that same ActiveMQConnection? Or, to ask it another way, is there a "right" way for my client code to set the new "username" and "password" fields on the ActiveMQConnection without breaking anything in the DefaultMessageListenerContainer? I'm especially concerned about multi-threading issues, since the DefaultMessageListenerContainer has several consumer threads, and my ScheduledExecutorService is, of course, running its own thread to update the OAuth token on the ActiveMQConnection.
Would it be enough to extend DefaultMessageListenerContainer and wrap my update of the OAuth token inside a block synchronized on 'sharedConnectionMonitor'? E.g., would something like the following be necessary and/or sufficient:
public class OAuthMessageListenerContainer extends DefaultMessageListenerContainer {
    public void updateOAuthToken(String requestId, String authToken) throws JMSException {
        synchronized (sharedConnectionMonitor) {
            ActiveMQConnection amqConn = (ActiveMQConnection) getSharedConnection();
            amqConn.doCleanup(true);
            amqConn.changeUserInfo(requestId, authToken);
        }
    }
}
FWIW here's how I solved the issue:
1) I first had to upgrade ActiveMQ from 5.13.0 to 5.13.3, because 5.13.0 had a thread-deadlock issue in the FailoverTransport when trying to reconnect to the broker after the transport was interrupted, e.g. by a network glitch, a server restart, or ZooKeeper electing a new "master" broker (we're using replicated LevelDB on the server with 3 brokers).
2) After working past the thread deadlock in ActiveMQ, I realized that changing the username and password on the ActiveMQConnection basically resets the whole connection anyway, which means the DMLC is reset too. So I made a subclass of DMLC that I could reset from a TransportListener on the connection when the transport is interrupted, but then I ran into another thread deadlock in the DMLC, and found this answer that helped me past that.
The subclass of DMLC just needs this method:
public void restart() {
    try {
        shutdown();
        initialize();
        start();
    }
    catch (Exception e) {
        LOG.error("Could not restart DMLC", e);
    }
}
and the transport listener needs to do this:
@Override
public void transportInterrupted() {
    dmlc.restart();
}

Prevent keeping unused DB connection

Problem description:
Let's have a service method which is called from a controller:
class PaymentService {
    static transactional = false

    public void pay(long id) {
        Member member = Member.get(id)
        // long-running task executing an HTTP request
        requestPayment(member)
    }
}
The problem is that if 8 users hit the same service at the same time and requestPayment(member) takes 30 seconds to execute, the whole application gets stuck for 30 seconds.
The problem is even bigger than it seems, because if the HTTP request is performing well, nobody notices any trouble. The serious problem is that the availability of our web service depends on the availability of our external partner/component (in our use case, a payment gateway). So when your partner starts to have performance issues, you will have them as well, and even worse, they will affect all parts of your app.
Evaluation:
The cause of the problem is that Member.get(id) reserves a DB connection from the pool and keeps it for further use, even though requestPayment(member) never needs to access the DB. When the next (9th) request hits any other part of the application that requires a DB connection (a transactional service, a DB select, ...), it keeps waiting (or times out, if maxWait is set to a lower duration) until the pool has an available connection, which can take as long as 30 seconds in our use case.
The stacktrace for the waiting thread is:
at java.lang.Object.wait(Object.java:-1)
at java.lang.Object.wait(Object.java:485)
at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1115)
at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:111)
Or for timeout:
JDBC begin failed
org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Timeout waiting for idle object
at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:114)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
Caused by: java.util.NoSuchElementException: Timeout waiting for idle object
at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1167)
at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
... 7 more
Obviously the same issue happens with a transactional service; however, there it makes much more sense, since the connection is reserved for the transaction.
As a temporary solution, it's possible to increase the pool size with the maxActive property on the datasource; however, that doesn't solve the real problem of holding an unused connection.
As a permanent solution, it's possible to enclose all DB operations in transactional behavior (withTransaction {..}, @Transactional), which returns the connection to the pool after commit (or, to my surprise, withNewSession {..} works as well). But we need to be sure that the whole call chain from the controller up to the requestPayment(member) method doesn't leak the connection.
I'd like to be able to throw an exception in the requestPayment(member) method if the connection is "leaked" (similar to the Propagation.NEVER transactional behavior), so I can reveal the issue early, during the test phase.
After digging into the source code, I've found the solution:
class PaymentService {
    static transactional = false

    def sessionFactory

    public void pay(long id) {
        Member member = Member.get(id)
        sessionFactory.currentSession.disconnect()
        // long-running task executing an HTTP request
        requestPayment(member)
    }
}
The above statement releases the connection back to the pool.
If executed from a transactional context, an exception is thrown (org.hibernate.HibernateException: connection proxy not usable after transaction completion), since we can't release such a connection (which is exactly what I needed).
Javadoc:
Disconnect the Session from the current JDBC connection. If the connection was obtained by Hibernate close it and return it to the connection pool; otherwise, return it to the application.
This is used by applications which supply JDBC connections to Hibernate and which require long-sessions (or long-conversations)

WebAPI OData update SaveChanges issue - Unable to connect to remote server

In my MVC web application, I am using WebAPI to connect to my database through OData.
The MVC WebApp and the OData WebApi are on different ports of the Azure cloud service web role endpoints:
MVC WebApp - 80
Odata WebApi - 23900
When I do an OData proxy UpdateObject and call SaveChanges, like
odataProxy.UpdateObject(xxx);
odataProxy.SaveChanges(System.Data.Services.Client.SaveChangesOptions.PatchOnUpdate);
I am getting a weird exception on the SaveChanges method call - unable to connect to remote server.
When I look into the inner exceptions, it says: No connection could be made because the target machine actively refused it 127.0.0.1:23901
If you observe the port number in the exception, it shows 23901, even though the request is supposed to hit 23900.
I am facing this exception only when running on the Azure cloud solution. Whenever I do an update request, it fails by hitting the wrong port (incremented by 1).
Another thing: apart from this UpdateObject -> SaveChanges, everything else works, like fetching data and adding data.
FWIW, I've just run across this same thing. It's darn annoying, and I really hope it doesn't happen in production. I'm surprised no other people have come across this, though.
The idea of creating a new context, attaching the object(s) and calling SaveChanges really repulsed me, because not only does it practically break all forms of testing, it also causes debug code and production code to be fundamentally different.
I was however able to work around this problem in another way, by intercepting the request just before it goes out and using reflection to poke at some private fields in memory to "fix" the port number.
UPDATE: It's actually easier than this. We can intercept the request generation process with the BuildingRequest event. It goes something like this:
var context = new Context(baseUri);
context.BuildingRequest += (o, e) =>
{
    FixPort(e);
};
Then the FixPort method just needs to test the port number, build a new Uri, and attach it back to the event args. (Since it's marked [Conditional("DEBUG")], calls to it are compiled away entirely in release builds, so the workaround can't leak into production.)
[Conditional("DEBUG")]
private static void FixPort(BuildingRequestEventArgs eventArgs)
{
int localPort = int.Parse(LOCAL_PORT);
if (eventArgs.RequestUri.Port != localPort)
{
var builder = new UriBuilder(eventArgs.RequestUri);
builder.Port = localPort;
eventArgs.RequestUri = builder.Uri;
}
}
Here's the original method using reflection and SendingRequest2, in case anyone is still interested.
First we create a context and attach a handler to the SendingRequest2 event:
var context = new Context(baseUri);
context.SendingRequest2 += (o, e) =>
{
    FixPort(e.RequestMessage);
};
The FixPort method then handles rewriting the URL of the internal request, where LOCAL_PORT is the port you expect, in your case 23900:
[Conditional("DEBUG")]
private static void FixPort(IODataRequestMessage requestMessage)
{
var httpWebRequestMessage = requestMessage as HttpWebRequestMessage;
if (httpWebRequestMessage == null) return;
int localPort = int.Parse(LOCAL_PORT);
if (httpWebRequestMessage.HttpWebRequest.RequestUri.Port != localPort)
{
var builder = new UriBuilder(requestMessage.Url);
builder.Port = localPort;
var uriField = typeof (HttpWebRequest).GetField("_Uri",
BindingFlags.Instance | BindingFlags.NonPublic);
uriField.SetValue(httpWebRequestMessage.HttpWebRequest, builder.Uri);
}
}
I have found the root cause and a temporary workaround.
Cause:
When you hit the WebApi through some port (:23900) in the Azure compute emulator and do an update or delete operation, somehow the last request blocks the port, and because of the port-walking feature of the Azure emulator, the next request jumps to the next port, where no service is available, which causes the issue.
This issue occurs only in the development emulator.
Temp Workaround:
Use a different proxy to attach the updated object to a fresh context, and then save from that other proxy object.
var odataProxy1 = xxx;
var obj = odataProxy1.xyz.FirstOrDefault();
obj.property1 = "abcd";
... // other update assignments
var odataProxy2 = xxx;
odataProxy2.AttachTo("objEntitySet", obj);
odataProxy2.UpdateObject(obj);
odataProxy2.SaveChanges(SaveChangesOptions.ReplaceOnUpdate);

What is the timeout for a SignalR persistent connection?

I created a chat application using SignalR and ASP.NET MVC. If a user is idle for some time, he doesn't receive any messages from the server.
Is there a timeout for the SignalR persistent connection?
If yes, how do I modify it, or how do I reactivate my connection?
This has now been updated on the wiki: https://github.com/SignalR/SignalR/wiki/Configuring-SignalR
Code
using System;
using SignalR;

namespace WebApplication1
{
    public class Global : System.Web.HttpApplication
    {
        void Application_Start(object sender, EventArgs e)
        {
            // Make connections wait 50s maximum for any response. After
            // 50s are up, trigger a timeout command and make the client reconnect.
            GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(50);
        }
    }
}
SignalR's default timeout is 110 seconds for any connection. It will reconnect automatically, and you shouldn't miss any messages, so it sounds like you have some other problem. If you want to tweak the timeout in 0.4, do it like this (assuming ASP.NET):
// Make idle connections reconnect every 60 seconds
AspNetHost.DependencyResolver.Resolve<IConfigurationManager>().ReconnectTimeout = TimeSpan.FromSeconds(60);
Make sure you add using SignalR.Infrastructure to get the Resolve extension method (we're fixing this in the next version).

RIA Services: Is there a limit to the JSON deserialization?

I'm using RIA Services in one of my Silverlight applications. I can return about 500 entities (or about 500 KB of JSON) from my service successfully, but anything much over that fails on the client side - the browser crashes (both IE and Firefox).
I can hit the following link and get the JSON successfully:
http://localhost:52878/ClientBin/DataService.axd/AgingReportPortal2-Web-Services-AgingDataService/GetAgingReportItems
... so I wonder what the deal is.
Is there a limit to how much can be deserialized? If so, is there a way to increase it? I remember having a similar problem while I was using WCF for this - I needed to set maxItemsInObjectGraph in the web.config to a higher number - perhaps I need to do something similar?
This is the code I'm using to fetch the entities:
// Executes when the user navigates to this page.
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    AgingDataContext context = new AgingDataContext();
    var query = context.GetAgingReportItemsQuery();
    var loadOperation = context.Load(query);
    loadOperation.Completed += new EventHandler(loadOperation_Completed);
}
void loadOperation_Completed(object sender, EventArgs e)
{
    // I placed a break point here - it was never hit
    var operation = (LoadOperation<AgingReportItem>)sender;
    reportDatagrid.ItemsSource = operation.Entities;
}
Any help would be appreciated - I've spent hours trying to figure this out, and haven't found anyone with the same problem.
Thanks,
Charles
Maybe try adding/increasing this as well; the default for maxArrayLength is 16384:
<readerQuotas maxArrayLength="5000000" />
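For placement, readerQuotas goes inside the binding configuration that the service endpoint references. A sketch of the relevant web.config fragment, assuming a basicHttpBinding endpoint (the binding and behavior names are hypothetical), with the maxItemsInObjectGraph setting from the question included as well:
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding name="LargeGraphBinding" maxReceivedMessageSize="5000000">
        <readerQuotas maxArrayLength="5000000" />
      </binding>
    </basicHttpBinding>
  </bindings>
  <behaviors>
    <endpointBehaviors>
      <behavior name="LargeGraphBehavior">
        <dataContractSerializer maxItemsInObjectGraph="2147483647" />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>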
