Detecting lost connection in SignalR from the client side

I am connecting a simple application to a server that hosts my web application. My web application uses SignalR 2. Everything goes smoothly and my little application can sync with the web application and receive messages sent from it. But when the web page is updated or the server restarts and loses its connections, the application cannot detect that the connection to the server has been lost. The following is my code:
// initializing connection
HubConnection connection;
IHubProxy hub;
connection = new HubConnection(serverAddress);
hub = connection.CreateHubProxy("MyPanel");
hub.On<string>("ReciveText", (msg) => recieveFromServer(msg));
A thread checks the connection every minute, but every time it checks, the state of the connection is "Connected" even though the connection on the server side has been lost. Is there anything I am missing here?
if (connection.State == ConnectionState.Disconnected)
{
// try to reconnect to server or do something
}

You can try something like this; it comes from the official SignalR examples.
connection = new HubConnection(serverAddress);
connection.Closed += Connection_Closed;
/// <summary>
/// If the server is stopped, the connection will time out after 30 seconds
/// (the default), and the `Closed` event will fire.
/// </summary>
void Connection_Closed()
{
// do something, e.g. schedule a reconnect attempt
}
You can also use the StateChanged event, like this:
connection.StateChanged += Connection_StateChanged;
private void Connection_StateChanged(StateChange obj)
{
MessageBox.Show(obj.NewState.ToString());
}
EDIT
You can also schedule reconnection attempts with something like this (here the first attempt runs 30 seconds after the disconnect and then repeats every 10 seconds):
private void Connection_StateChanged(StateChange obj)
{
if (obj.NewState == ConnectionState.Disconnected)
{
var current = DateTime.Now.TimeOfDay;
SetTimer(current.Add(TimeSpan.FromSeconds(30)), TimeSpan.FromSeconds(10), StartCon);
}
else
{
if (_timer != null)
_timer.Dispose();
}
}
private async Task StartCon()
{
await connection.Start();
}
private Timer _timer;
private void SetTimer(TimeSpan startTime, TimeSpan every, Func<Task> action)
{
var current = DateTime.Now;
var timeToGo = startTime - current.TimeOfDay;
if (timeToGo < TimeSpan.Zero)
{
return;
}
_timer = new Timer(x =>
{
action.Invoke();
}, null, timeToGo, every);
}

Related

Cloud Dataflow - how does Dataflow do parallelism?

My question is: behind the scenes, for an element-wise Beam DoFn (ParDo), how does Cloud Dataflow parallelize the workload? For example, in my ParDo I send out one HTTP request to an external server per element, and I use 30 workers, each with 4 vCPUs.
Does that mean there will be at most 4 threads on each worker?
Does that mean only 4 HTTP connections from each worker are necessary, or can be established, to get the best performance if I keep them alive?
How can I adjust the level of parallelism other than using more cores or more workers?
With my current setting (30 workers x 4 vCPUs), I can establish around 120 HTTP connections to the HTTP server, but both the server and the workers have very low resource usage. Basically, I want to make them work much harder by sending out more requests per second. What should I do?
Code Snippet to illustrate my work:
public class NewCallServerDoFn extends DoFn<PreparedRequest,KV<PreparedRequest,String>> {
private static final Logger Logger = LoggerFactory.getLogger(ProcessReponseDoFn.class);
private static PoolingHttpClientConnectionManager _ConnManager = null;
private static CloseableHttpClient _HttpClient = null;
private static HttpRequestRetryHandler _RetryHandler = null;
private static String[] _MapServers = MapServerBatchBeamApplication.CONFIG.getString("mapserver.client.config.server_host").split(",");
@Setup
public void setupHttpClient(){
Logger.info("Setting up HttpClient");
//Question: the value of maxConnection below is actually 10, but with 30 worker machines, I can only see 115 TCP connections established on the server side. So this setting doesn't really take effect as I expected.....
int maxConnection = MapServerBatchBeamApplication.CONFIG.getInt("mapserver.client.config.max_connection");
int timeout = MapServerBatchBeamApplication.CONFIG.getInt("mapserver.client.config.timeout");
_ConnManager = new PoolingHttpClientConnectionManager();
for (String mapServer : _MapServers) {
HttpHost serverHost = new HttpHost(mapServer,80);
_ConnManager.setMaxPerRoute(new HttpRoute(serverHost),maxConnection);
}
// config timeout
RequestConfig requestConfig = RequestConfig.custom()
.setConnectTimeout(timeout)
.setConnectionRequestTimeout(timeout)
.setSocketTimeout(timeout).build();
// config retry
_RetryHandler = new HttpRequestRetryHandler() {
public boolean retryRequest(
IOException exception,
int executionCount,
HttpContext context) {
Logger.info(exception.toString());
Logger.info("try request: " + executionCount);
if (executionCount >= 5) {
// Do not retry if over max retry count
return false;
}
if (exception instanceof InterruptedIOException) {
// Timeout
return false;
}
if (exception instanceof UnknownHostException) {
// Unknown host
return false;
}
if (exception instanceof ConnectTimeoutException) {
// Connection refused
return false;
}
if (exception instanceof SSLException) {
// SSL handshake exception
return false;
}
return true;
}
};
_HttpClient = HttpClients.custom()
.setConnectionManager(_ConnManager)
.setDefaultRequestConfig(requestConfig)
.setRetryHandler(_RetryHandler)
.build();
Logger.info("Setting up HttpClient is done.");
}
@Teardown
public void tearDown(){
Logger.info("Tearing down HttpClient and Connection Manager.");
try {
_HttpClient.close();
_ConnManager.close();
}catch (Exception e){
Logger.warn(e.toString());
}
Logger.info("HttpClient and Connection Manager have been torn down.");
}
@ProcessElement
public void processElement(ProcessContext c) {
PreparedRequest request = c.element();
if(request == null)
return;
String response="{\"my_error\":\"failed to get response from map server with retries\"}";
String chosenServer = _MapServers[request.getHardwareId() % _MapServers.length];
String parameter;
try {
parameter = URLEncoder.encode(request.getRequest(),"UTF-8");
} catch (UnsupportedEncodingException e) {
Logger.error(e.toString());
return;
}
StringBuilder sb = new StringBuilder().append(MapServerBatchBeamApplication.CONFIG.getString("mapserver.client.config.api_path"))
.append("?coordinates=")
.append(parameter);
HttpGet getRequest = new HttpGet(sb.toString());
HttpHost host = new HttpHost(chosenServer,80,"http");
CloseableHttpResponse httpRes;
try {
httpRes = _HttpClient.execute(host,getRequest);
HttpEntity entity = httpRes.getEntity();
if(entity != null){
try
{
response = EntityUtils.toString(entity);
}finally{
EntityUtils.consume(entity);
httpRes.close();
}
}
}catch(Exception e){
Logger.warn("failed by get response from map server with retries for " + request.getRequest());
}
c.output(KV.of(request, response));
}
}
Yes, based on this answer.
No, you can establish more connections. Based on my answer, you can use an async HTTP client to have more concurrent requests. As this answer also describes, you need to collect the results from these asynchronous calls and output them synchronously in any @ProcessElement or @FinishBundle.
See 2.
Since your resource usage is low, it indicates that the worker spends most of its time waiting for a response. I think that with the approach described above you can utilize your resources far better and achieve the same performance with far fewer workers; a sketch of the async approach follows.
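To illustrate, here is a minimal, hypothetical sketch of that approach (not the poster's actual pipeline): it assumes Java 11's java.net.http.HttpClient and an illustrative endpoint, fires requests without blocking in @ProcessElement, buffers the futures, and emits the results in @FinishBundle with their original timestamps and windows. The class name, URL, and error payload are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
import org.apache.beam.sdk.values.KV;
import org.joda.time.Instant;

public class AsyncCallServerDoFn extends DoFn<String, KV<String, String>> {

    // One pending call per buffered element, plus the metadata needed to re-emit it.
    private static class Pending {
        final String element;
        final CompletableFuture<String> response;
        final Instant timestamp;
        final BoundedWindow window;
        Pending(String e, CompletableFuture<String> r, Instant t, BoundedWindow w) {
            element = e; response = r; timestamp = t; window = w;
        }
    }

    private transient HttpClient httpClient;
    private transient List<Pending> pending;

    @Setup
    public void setup() {
        httpClient = HttpClient.newHttpClient();
    }

    @StartBundle
    public void startBundle() {
        pending = new ArrayList<>();
    }

    @ProcessElement
    public void processElement(ProcessContext c, BoundedWindow window) {
        String element = c.element();
        // Illustrative endpoint; URL encoding of the parameter is omitted for brevity.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://mapserver.example.com/api?coordinates=" + element))
                .GET()
                .build();
        // Fire the request without blocking; many calls can be in flight per worker thread.
        CompletableFuture<String> future = httpClient
                .sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .thenApply(HttpResponse::body);
        pending.add(new Pending(element, future, c.timestamp(), window));
    }

    @FinishBundle
    public void finishBundle(FinishBundleContext c) {
        // Block once per bundle, then emit every result with its original timestamp/window.
        for (Pending p : pending) {
            String body;
            try {
                body = p.response.join();
            } catch (Exception e) {
                body = "{\"my_error\":\"request failed\"}";
            }
            c.output(KV.of(p.element, body), p.timestamp, p.window);
        }
        pending.clear();
    }
}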

SignalR in MVC skewing Application Insights

We just started using SignalR in an MVC application and now we're getting a bunch of alerts due to high average response time. I suspect this is misleading, as the application isn't experiencing performance degradation. It appears that SignalR uses this URL to make a connection; the URL is not a controller/action of the project, just the built-in SignalR code in the jquery.signalR-2.2.1.js file. I suspect that it is just leaving the websocket connection open while users are on this page and that this is skewing our numbers. Is this accurate? If so, is there a way to filter it out of Application Insights?
Here is the counter. Is this the expected behavior?
Here is the SignalR jQuery code where it builds its URL:
// BUG #2953: The url needs to be same otherwise it will cause a memory leak
getUrl: function (connection, transport, reconnecting, poll, ajaxPost) {
/// <summary>Gets the url for making a GET based connect request</summary>
var baseUrl = transport === "webSockets" ? "" : connection.baseUrl,
url = baseUrl + connection.appRelativeUrl,
qs = "transport=" + transport;
if (!ajaxPost && connection.groupsToken) {
qs += "&groupsToken=" + window.encodeURIComponent(connection.groupsToken);
}
if (!reconnecting) {
url += "/connect";
} else {
if (poll) {
// longPolling transport specific
url += "/poll";
} else {
url += "/reconnect";
}
if (!ajaxPost && connection.messageId) {
qs += "&messageId=" + window.encodeURIComponent(connection.messageId);
}
}
url += "?" + qs;
url = transportLogic.prepareQueryString(connection, url);
if (!ajaxPost) {
url += "&tid=" + Math.floor(Math.random() * 11);
}
return url;
},
I fixed this by following the instructions on https://learn.microsoft.com/en-us/azure/application-insights/app-insights-api-filtering-sampling:
Update your ApplicationInsights NuGet package to 2.0.0 or later.
Create a class implementing ITelemetryProcessor:
public class UnwantedTelemetryFilter : ITelemetryProcessor
{
private ITelemetryProcessor Next { get; set; }
public UnwantedTelemetryFilter(ITelemetryProcessor next)
{
this.Next = next;
}
public void Process(ITelemetry item)
{
var request = item as RequestTelemetry;
if (request != null && request.Name != null && request.Name.Contains("signalr"))
return;
// Send everything else:
this.Next.Process(item);
}
}
Add the processor to your Application_Start() in Global.asax.cs:
var builder = TelemetryConfiguration.Active.TelemetryProcessorChainBuilder;
builder.Use((next) => new UnwantedTelemetryFilter(next));
builder.Build();
If the calls are coming from the C# part of the app, the easiest way is to write a custom telemetry processor:
https://learn.microsoft.com/en-us/azure/application-insights/app-insights-api-filtering-sampling
public void Process(ITelemetry item)
{
var request = item as RequestTelemetry;
if (request != null && request.[some field here].Equals("[some signalr specific check here]", StringComparison.OrdinalIgnoreCase))
{
// To filter out an item, just terminate the chain:
return;
}
// Send everything else:
this.Next.Process(item);
}
and use that to explicitly filter out the SignalR calls from being sent.
Or, if the calls are coming from the JS side, the telemetry initializer there does a similar thing: it filters out telemetry if you return false from the initializer.

FlumeRpcClient multithreading

I'm trying to understand the correct way to use the Flume RpcClient in a multithreaded application. Information I have found so far indicates that the components are thread safe, but the example in the Flume documentation clouds the issue when it comes to error handling. This code:
public void sendDataToFlume(String data) {
// Create a Flume Event object that encapsulates the sample data
Event event = EventBuilder.withBody(data, Charset.forName("UTF-8"));
// Send the event
try {
client.append(event);
} catch (EventDeliveryException e) {
// clean up and recreate the client
client.close();
client = null;
client = RpcClientFactory.getDefaultInstance(hostname, port);
// Use the following method to create a thrift client (instead of the above line):
// this.client = RpcClientFactory.getThriftInstance(hostname, port);
}
}
If more than one thread calls this method and the exception is thrown, there will be a problem as multiple threads try to recreate the client in the exception handler.
Is the intent of the SDK that it should only be used by a single thread? Should this method be synchronized, as it appears to be in the log4jappender that is part of the Flume source? Should I put this code in its own worker and pass it events via a queue?
Does anyone have an example of RpcClient being used by more than one thread (including the error condition)?
Would I be better off using the "embedded agent"? Is that multithread friendly?
With the embedded agent, you get the same case except you don't know what to do:
try {
agent.put(event);
} catch (EventDeliveryException e) {
// ???
}
You could stop the agent and restart it, but you would need a synchronized block (or a ReentrantReadWriteLock, to avoid blocking threads while "reading" the client field). Since I'm not a Flume expert, I can't tell you which one is better; a sketch of the simpler synchronized variant is included after the example below.
Example:
class MyClass {
private final ReentrantReadWriteLock lock;
private final Lock readLock;
private final Lock writeLock;
private RpcClient client;
private final String hostname;
private final Integer port;
// Constructor
MyClass(String hostname, Integer port) {
this.hostname = Objects.requireNonNull(hostname, "hostname");
this.port = Objects.requireNonNull(port, "port");
this.lock = new ReentrantReadWriteLock();
this.readLock = this.lock.readLock();
this.writeLock = this.lock.writeLock();
this.client = buildClient();
}
private RpcClient buildClient() {
return RpcClientFactory.getDefaultInstance(hostname, port);
}
public void sendDataToFlume(String data) {
// Create a Flume Event object that encapsulates the sample data
Event event = EventBuilder.withBody(data, Charset.forName("UTF-8"));
// Send the event
readLock.lock(); // lock for reading 'client'
try {
try {
client.append(event);
} catch (EventDeliveryException e) {
writeLock.lock(); // lock for reading/writing client
try {
// clean up and recreate the client
client.close();
client = null;
client = buildClient();
} finally {
writeLock.unlock();
}
}
} finally {
readLock.unlock();
}
}
}
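For comparison, here is a minimal, hypothetical sketch of the synchronized variant mentioned above (the class name is illustrative). It serializes the whole send-or-rebuild sequence, which is simpler to reason about but blocks all senders whenever one thread is appending or rebuilding the client.
import java.nio.charset.Charset;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

class SynchronizedFlumeSender {
    private final String hostname;
    private final Integer port;
    private RpcClient client;

    SynchronizedFlumeSender(String hostname, Integer port) {
        this.hostname = hostname;
        this.port = port;
        this.client = RpcClientFactory.getDefaultInstance(hostname, port);
    }

    public synchronized void sendDataToFlume(String data) {
        Event event = EventBuilder.withBody(data, Charset.forName("UTF-8"));
        try {
            client.append(event);
        } catch (EventDeliveryException e) {
            // Only one thread can be in this method at a time, so the rebuild is safe.
            client.close();
            client = RpcClientFactory.getDefaultInstance(hostname, port);
        }
    }
}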
Besides, the original example will lose the event because it is not re-sent. Some kind of loop plus a max retry count would probably do the trick:
int i = 0;
for (; i < maxRetry; ++i) {
try {
client.append(event);
break;
} catch (EventDeliveryException e) {
// clean up and recreate the client
client.close();
client = null;
client = RpcClientFactory.getDefaultInstance(hostname, port);
// Use the following method to create a thrift client (instead of the above line):
// this.client = RpcClientFactory.getThriftInstance(hostname, port);
}
}
if (i == maxRetry) {
logger.error("flume client is offline, loosing events {}", event);
}
That's the idea, but I don't think this should be the task of the user (i.e., us); it should be an option in the client or the agent to store events that could not be processed due to such errors.

commons.net FTPSClient.storeFile doesn't throw IOException if connection with server is lost

Background:
I'm attempting to add some level of fault tolerance to an application that uses the Apache Commons Net FTPSClient to transfer files. If the connection between the client and server fails, I'd like to capture the produced exception/return code, log the details, and attempt to reconnect/retry the transfer.
What works:
The retrieveFile() method. If the connection fails (i.e. I disable the server's public interface), I receive a CopyStreamException caused by a SocketTimeoutException after the amount of time I specified as the timeout.
What doesn't work:
The storeFile() method. If I initiate a transfer via storeFile() and disable the server's public interface, storeFile() blocks/hangs indefinitely without throwing any exceptions.
Here is a simple app that hangs if the connection is terminated:
public class SmallTest {
private static Logger log = Logger.getLogger(SmallTest.class);
/**
* @param args
* @throws IOException
*/
public static void main(String[] args) throws IOException {
FTPSClient client = new FTPSClient(true);
FTPSCredentials creds = new FTPSCredentials("host", "usr", "pass",
"/keystore/ftpclient.jks", "pass",
"/keystore/rootca.jks");
String file = "/file/jdk-7u21-linux-x64.rpm";
String destinationFile = "/jdk-7u21-linux-x64.rpm";
client.setTrustManager(TrustManagerUtils.getValidateServerCertificateTrustManager());
client.setKeyManager(creds.getKeystoreManager());
client.addProtocolCommandListener(new PrintCommandListener(new PrintWriter(System.out), true));
client.setCopyStreamListener(createListener());
client.setConnectTimeout(5000);
client.setDefaultTimeout(5000);
client.connect(creds.getHost(), 990);
client.setSoTimeout(5000);
client.setDataTimeout(5000);
if (!FTPReply.isPositiveCompletion(client.getReplyCode())) {
client.disconnect();
log.error("ERROR: " + creds.getHost() + " refused the connection");
} else {
if (client.login(creds.getUser(), creds.getPass())) {
log.debug("Logged in as " + creds.getUser());
client.enterLocalPassiveMode();
client.setFileTransferMode(FTP.BLOCK_TRANSFER_MODE);
client.setFileType(FTP.BINARY_FILE_TYPE);
InputStream inputStream = new FileInputStream(file);
log.debug("Invoking storeFile()");
if (!client.storeFile(destinationFile, inputStream)) {
log.error("ERROR: Failed to store " + file
+ " on remote host. Last reply code: "
+ client.getReplyCode());
} else {
log.debug("Stored the file...");
}
inputStream.close();
client.logout();
client.disconnect();
} else {
log.error("Could not log into " + creds.getHost());
}
}
}
private static CopyStreamListener createListener(){
return new CopyStreamListener(){
private long megsTotal = 0;
@Override
public void bytesTransferred(CopyStreamEvent event) {
bytesTransferred(event.getTotalBytesTransferred(), event.getBytesTransferred(), event.getStreamSize());
}
@Override
public void bytesTransferred(long totalBytesTransferred,
int bytesTransferred, long streamSize) {
long megs = totalBytesTransferred / 1000000;
for (long l = megsTotal; l < megs; l++) {
System.out.print("#");
}
megsTotal = megs;
}
};
}
Is there any way to make the connection ACTUALLY time out?
SW Versions:
Commons.net v3.3
Java 7
CentOS 6.3
Thanks in advance,
Joe
I ran into this same problem, and I think I was able to get something that shows the desired timeout behavior when I unplug the ethernet cable on my laptop.
I use storeFileStream() instead of storeFile(), and then use completePendingCommand() to finish the transfer. You can check the Apache Commons Net docs for completePendingCommand() to see an example of this kind of transfer; a sketch is also included below. It took about 15 minutes to time out for me. One other thing: the aforementioned docs include calling isPositiveIntermediate() to check for an error, but this wasn't working for me. I replaced it with isPositivePreliminary() and now it seems to work. I'm not sure if that's actually correct, but it's the best I've found so far.
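Here is a minimal sketch of that approach under the same Commons Net 3.3 setup as the question. The host, credentials, and paths are placeholders, and the TLS/keystore configuration from the original code is omitted; the reply-code check follows the note above about isPositivePreliminary() rather than isPositiveIntermediate().
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPReply;
import org.apache.commons.net.ftp.FTPSClient;
import org.apache.commons.net.io.Util;

public class StoreStreamTest {
    public static void main(String[] args) throws Exception {
        FTPSClient client = new FTPSClient(true);
        client.setConnectTimeout(5000);
        client.setDefaultTimeout(5000);
        client.connect("host", 990);
        client.setSoTimeout(5000);
        client.setDataTimeout(5000);
        client.login("usr", "pass");
        client.enterLocalPassiveMode();
        client.setFileType(FTP.BINARY_FILE_TYPE);

        try (InputStream input = new FileInputStream("/file/jdk-7u21-linux-x64.rpm")) {
            OutputStream output = client.storeFileStream("/jdk-7u21-linux-x64.rpm");
            // The commons-net docs check isPositiveIntermediate() here; the answer above
            // reports that isPositivePreliminary() was what actually worked.
            if (output == null || !FTPReply.isPositivePreliminary(client.getReplyCode())) {
                System.err.println("Could not open data stream: " + client.getReplyCode());
            } else {
                Util.copyStream(input, output);   // may block until the socket times out
                output.close();
                if (!client.completePendingCommand()) {
                    System.err.println("Transfer failed: " + client.getReplyCode());
                }
            }
        } finally {
            client.logout();
            client.disconnect();
        }
    }
}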

calling a webservice from scheduled task agent class in windows phone 7.1

Firstly, can we call a web service from the scheduled periodic task class? If yes:
I am trying to call a web service method with parameters in a scheduled periodic task agent class in Windows Phone 7.1. I am getting a null reference exception while calling the method, even though I am passing the expected values for the web method's parameters.
I am retrieving the id from isolated storage.
The following is my code:
protected override void OnInvoke(ScheduledTask task)
{
if (task is PeriodicTask)
{
string Name = IName;
string Desc = IDesc;
updateinfo(Name, Desc);
}
}
public void updateinfo(string name, string desc)
{
AppSettings tmpSettings = Tr.AppSettings.Load();
id = tmpSettings.myString;
if (name == "" && desc == "")
{
name = "No Data";
desc = "No Data";
}
tservice.UpdateLogAsync(id, name,desc);
tservice.UpdateLogCompleted += new EventHandler<STservice.UpdateLogCompletedEventArgs>(t_UpdateLogCompleted);
}
Someone please help me resolve the above issue.
I've done this before without a problem. The one thing you need to make sure of is that you wait until your async read processes have completed before you call NotifyComplete().
Here's an example from one of my apps. I had to remove much of the logic, but it should show you how the flow goes. It uses a slightly modified version of WebClient where I added a Timeout, but the principles are the same for the service that you're calling: don't call NotifyComplete() until the end of t_UpdateLogCompleted.
Here's the example code:
private void UpdateTiles(ShellTile appTile)
{
try
{
var wc = new WebClientWithTimeout(new Uri("URI Removed")) { Timeout = TimeSpan.FromSeconds(30) };
wc.DownloadAsyncCompleted += (src, e) =>
{
try
{
//process response
}
catch (Exception ex)
{
// Handle exception
}
finally
{
FinishUp();
}
};
wc.StartReadRequestAsync();
}
catch (Exception ex)
{
// Much of the original logic was removed; at minimum, make sure the task still completes.
System.Diagnostics.Debug.WriteLine(ex.ToString());
FinishUp();
}
}
private void FinishUp()
{
#if DEBUG
try
{
ScheduledActionService.LaunchForTest(_taskName, TimeSpan.FromSeconds(30));
System.Diagnostics.Debug.WriteLine("relaunching in 30 seconds");
}
catch (Exception ex)
{
System.Diagnostics.Debug.WriteLine(ex.ToString());
}
#endif
NotifyComplete();
}
