SignalR Connections Won't Close? (IIS 10 + ASP.NET MVC)

We have a single-page web application running on 6 servers behind a load balancer. Recently we have been experiencing occasional server crashes, and the stack traces suggest they may be related to OwinWebSocketHandler+<>c__DisplayClass5_0+<b__0>d;0 [AWAIT#0].
When I looked at the current requests on one of the servers in IIS, I noticed a large number of WebSocket requests sitting in the EndRequest state.
Is this the normal way that IIS represents WebSocket connections (State = EndRequest), or could they be hanging requests that are needlessly using up resources? If that's the case, how can we close those connections sooner?
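If those entries turn out to be orphaned connections, one lever is SignalR's own timeout configuration, which controls how long the server waits before giving up on a silent client. A minimal sketch, assuming SignalR 2.x with OWIN startup (the values are illustrative, not taken from the question):

using System;
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Fire OnDisconnected sooner when a client silently disappears.
        // Note: DisconnectTimeout must be set before KeepAlive.
        GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(30);
        // Server-to-client ping interval; must be at most 1/3 of DisconnectTimeout.
        GlobalHost.Configuration.KeepAlive = TimeSpan.FromSeconds(10);
        app.MapSignalR();
    }
}

Shortening DisconnectTimeout makes the server fire OnDisconnected and release resources sooner, at the price of more reconnect churn for clients on flaky networks.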

Related

ASP MVC: long initial load of the site

The first time the site is loaded, it takes a very long time; after that, pages load quickly. How can this be fixed?
The IIS server and the MS SQL server are on the same subnet, but on different virtual machines.
Each server has 8 GB of RAM and 4 processor cores.
There are no users; these are test machines.
Browser: Chrome.
This is a very common issue with ASP.NET applications hosted on IIS. You can check the following things to improve first-load performance:
Make sure the application pool is configured as "Always Running"
Configure the IIS warm-up (Application Initialization) service for your website
Precompile your views
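As a code-level complement to the IIS warm-up module, here is a sketch of a best-effort warm-up in Global.asax (the host URL and paths are placeholders; assumes .NET 4.5+):

using System.Net.Http;
using System.Threading.Tasks;
using System.Web;

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        // ... existing area/route/bundle registration goes here ...

        // Best-effort warm-up on a background thread: request the slowest
        // pages once so Razor views are compiled before the first visitor.
        Task.Run(async () =>
        {
            using (var http = new HttpClient())
            {
                // Placeholder paths; list whatever pages are slow on first hit.
                foreach (var path in new[] { "/", "/Home/Index" })
                {
                    try { await http.GetAsync("http://localhost" + path); }
                    catch { /* warm-up must never take the app down */ }
                }
            }
        });
    }
}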

What are the possible scalability options for an application supporting ONLY a single TCP socket connection?

There is a legacy implementation (to an extent company-proprietary, in Pascal and C with some Java macros) which processes TCP socket-based requests from TCP client applications. It supports multiple client applications (around 5K) connecting over TCP sockets; however, it only supports a single socket connection with the backend (database). There are two instances of the server, so in total it supports 10K client applications over two TCP socket connections with the database. All database-related communication happens synchronously over the single socket connection. There are massive issues in this application, especially high RTT (round-trip time) and occasional outages due to back-pressure. We have an ops team for such issues; they mostly resolve them by restarting the server. Hardly anyone on our team knows the coding details of this application, and there is not much documentation. As this is a critical application, we cannot afford to mess with it; we don't want to touch the code, at least for now. This becomes even more critical due to a shift in business priorities: there is a need to add another 30K client applications from another business to this setup.
The task before us is to integrate it with another application based on a microservice architecture, with middleware using RabbitMQ. This is a customer-facing application sensitive to QoS; we cannot afford outages and downtime in it. As part of this integration, request messages coming from the above legacy application over its TCP socket need to be processed before being passed to the database. In other words, we want to introduce a component which would process the legacy application's requests before handing them over to the database. This additional processing is part of our client's request. Some of the processing requirements are very intensive and resource-hungry in terms of CPU cycles, memory, and socket I/O. As a result, there is a chance such processing may lead to server downtime and higher RTT. This layer of ours is very flexible: we can easily add more servers or replace faulty ones. But this doesn't sound very efficient in this integration, as we are limited by the legacy application's single socket connection, so in total we can have at most 2 (+6 for the new 30K client applications) servers. This is our cause of concern.
I want to know what options are available to address the high-availability, scalability, and latency issues of such an integration. Especially with the limitation of a single TCP socket connection, how can we make this integration efficient, handle back-pressure, improve application uptime, etc.?
We were thinking of leveraging RabbitMQ, a Layer 4 load balancer (like HAProxy or Nginx), IPVS, NAT, etc. But all of these lead toward making changes to the legacy code (or are not very efficient techniques), which we don't want.
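For what it's worth, here is a sketch of the usual pattern when the backend allows only one connection: a gateway component owns the single socket, and everything else funnels through a bounded queue, which is what turns overload into back-pressure instead of an outage. Everything below is hypothetical C#; the placeholder round-trip stands in for the real legacy protocol framing:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class SingleSocketGateway
{
    // Bounded queue: when it is full, callers wait or fail fast instead of
    // piling work onto the one socket the legacy backend allows.
    private readonly BlockingCollection<(byte[] Request, TaskCompletionSource<byte[]> Reply)> _queue;

    public SingleSocketGateway(int boundedCapacity = 1000)
    {
        _queue = new BlockingCollection<(byte[] Request, TaskCompletionSource<byte[]> Reply)>(boundedCapacity);
        new Thread(PumpLoop) { IsBackground = true }.Start();
    }

    // Front-end workers call this from any thread/server.
    public Task<byte[]> SendAsync(byte[] request, TimeSpan enqueueTimeout)
    {
        var tcs = new TaskCompletionSource<byte[]>(TaskCreationOptions.RunContinuationsAsynchronously);
        if (!_queue.TryAdd((request, tcs), enqueueTimeout))
            tcs.SetException(new TimeoutException("Backend queue full (back-pressure)."));
        return tcs.Task;
    }

    // Exactly one pump thread talks to the single legacy socket.
    private void PumpLoop()
    {
        foreach (var (request, reply) in _queue.GetConsumingEnumerable())
        {
            try { reply.SetResult(RoundTripOverLegacySocket(request)); }
            catch (Exception ex) { reply.SetException(ex); }
        }
    }

    // Placeholder for the synchronous write/read on the one allowed socket.
    private byte[] RoundTripOverLegacySocket(byte[] request) => request;
}

RabbitMQ can play the role of the bounded queue here; the essential point is that exactly one consumer owns the legacy socket, so additional front-end servers add processing capacity without requiring additional backend connections.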

Using ASP MVC as Client for MS Orleans

I want to use MS Orleans with an ASP MVC client. In this setup, I want to use the MVC app as an Orleans client observer. Will I possibly run into problems with thread lifetime, app pool recycling, etc.?
The Orleans documentation says:
The client part, usually a web front-end, ... For example, an ASP.NET application running on a web server can be a client part of an Orleans application. The client part executes on top of the .NET thread pool, and is not subject to scheduling restrictions and guarantees of the Orleans Runtime.
But I am not quite sure how to interpret this.
It simply means that your 'client' code (client being from the perspective of Orleans; it would actually be running on a web server in your case) follows the normal rules you would expect in an application in terms of thread dispatchers etc. I don't remember the specifics as it's been a while since I delved into the documentation, but I believe they guarantee certain things such as single-threaded execution per actor using some special scheduler on top of the thread pool.
Most likely your web app should not run an Orleans silo per se; as an Orleans client, it should merely serve as a gateway for talking to a silo running in a separate application. That way, app pool recycles would not affect operation of the silo.
See also: Developing a Client
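To make the gateway pattern concrete, here is a minimal sketch using the Orleans 2.x+ ClientBuilder API (Orleans 1.x used GrainClient.Initialize instead; IMyGrain and the clustering setup are placeholders). The key point is one shared client per web app, not one per request:

using System.Threading.Tasks;
using Orleans;
using Orleans.Hosting;

public static class OrleansClientHolder
{
    // One client for the whole web app; recreated only when the app pool recycles.
    public static IClusterClient Client { get; private set; }

    public static async Task ConnectAsync()
    {
        var client = new ClientBuilder()
            .UseLocalhostClustering()   // placeholder; point at your real cluster
            .Build();
        await client.Connect();
        Client = client;
    }
}

// In a controller: var grain = OrleansClientHolder.Client.GetGrain<IMyGrain>(0);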

Socket, HTTP query or Long Polling for 5000 connected clients? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 8 years ago.
I have about 5000 Windows clients written in Delphi, residing in office LANs, which need to access new data updated to a "Cloud": basically a PHP (IIS) + replicated MySQL website hosted on 2 x Windows 2003 VPS machines with 1 GB RAM each (I can upgrade to 2 GB).
End users can access the site via the Internet, and data updated by these users needs to be used by the Windows clients residing behind office firewalls.
Note: if you are asking why the clients are behind the firewall: they contain critical company information.
Since the clients are located behind firewalls, they must connect to the VPS directly to download data updates.
There are several different connection methods that I can think of:
1). Sockets: Run a socket server on the Windows VPS and have each of the 5000 clients connect to the socket server constantly.
Pros: No third-party code.
Cons: Low level. Unknown scalability and stability for a large number of clients connecting at the same time. Stuck on the Windows platform for the time being, unless I use Lazarus, which is not stable yet.
2). RabbitMQ: Run RabbitMQ (or equivalent) on the VPS and have each of the 5000 clients connect to the RabbitMQ server via AMQP. On the Windows VPS, create a Delphi application that connects to RabbitMQ to send out data inserted by PHP into MySQL.
Pros: Send data and forget; no need to manage a queue using MySQL.
Cons: Complexity of managing RabbitMQ, and possible bugs (especially with replication) while only using simple queues. Queues may use a lot of memory.
3). HTTP Query: Program the 5000 clients to send an HTTP GET to the VPS every 5 seconds or so. The HTTP server will return data if there are updates and send a "no data" response if there are none.
Firstly, IIS is definitely out: my existing IIS hangs even if 5 users are downloading files, and IIS resets itself after a couple of minutes; I'm not sure whether it's IIS or the VPS.
I may use Apache (or Nginx) + PHP, or create a custom Delphi HTTP server if that improves performance. If I were to use PHP, I would create a flag file (or use Memcached?) for clients that have unread data, to prevent excessive MySQL queries on the queue table. For a custom Delphi HTTP server, I could query MySQL to load all changes (for all clients) into memory every second.
Pros: Foolproof, easiest to implement, and works with Apache/PHP, so I could even switch to Linux in the future. Security is easy to implement using SSL.
Cons: Scalability issue - whether 5000 clients can query every 5 seconds without hanging the server.
4). Long Polling? I'm not familiar with long polling. Similar to HTTP Query but with delayed response?
Pros: Would be promising if there's a web server that's built for long polling.
Cons: Scalability unknown.
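To make option 4 concrete: a long-polling server holds each request open until data arrives or a timeout fires, so idle clients cost an open connection rather than a query every 5 seconds. The stack in the question is PHP + Delphi; purely to illustrate the pattern, here is a self-contained C# sketch (port and payloads are made up, and a real server would need per-client signaling rather than one shared semaphore):

using System;
using System.Net;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class LongPollServer
{
    static readonly SemaphoreSlim DataSignal = new SemaphoreSlim(0);

    static async Task Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/poll/");
        listener.Start();
        while (true)
        {
            var ctx = await listener.GetContextAsync();
            _ = HandleAsync(ctx); // one pending task per waiting client
        }
    }

    static async Task HandleAsync(HttpListenerContext ctx)
    {
        // Hold the request up to 30 s; the client re-polls right after the reply.
        bool hasData = await DataSignal.WaitAsync(TimeSpan.FromSeconds(30));
        byte[] body = Encoding.UTF8.GetBytes(hasData ? "{\"update\":true}" : "{\"update\":false}");
        ctx.Response.ContentType = "application/json";
        await ctx.Response.OutputStream.WriteAsync(body, 0, body.Length);
        ctx.Response.Close();
    }

    // Called by whatever ingests updates (e.g., after a MySQL insert).
    public static void PublishUpdate() => DataSignal.Release();
}

The practical limit then becomes how many held connections the server can keep open at once, which is essentially the same constraint as option 1.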
I've read dozens of articles comparing HTTP vs sockets vs long polling, but I'm still unsure which method I should use, given the very limited server resources, manpower, and technical expertise available.
Which would you use if you were me, and why?
Note: I also just read about Memcached, but it doesn't support replication on Windows. Most highly scalable web platforms and servers are for Unix, so my options are limited in this respect.

Delphi 7 ADO connection pooling outside of IIS

We have a Delphi 7 application that runs as an ISAPI extension in IIS 6. The code uses ADO to connect to a MS SQL 2000 database and performs many reads on the database (no writes). If I watch the audit login and logout events in SQL Profiler, I can see that numerous requests to the app result in only 1 audit login event. However, if I run that same code from outside IIS (i.e. a test app calling the same method in the DLL), I see many login and logout events. My guess is that IIS is performing some automatic connection pooling without my doing anything. I'd like to see the same behavior when I run the DLL outside of IIS, for performance reasons: the app is almost 100% slower in this situation. How can I get ADO connection pooling when the DLL runs outside of IIS?
EDIT: I'm actually using the SQL OLE DB provider. The connection string looks like this:
Provider=SQLOLEDB.1;Initial Catalog=%s;Data Source=%s;Password=%s;User ID=%s;Pooling=True;Min Pool Size=5;Max Pool Size=50;Connection Lifetime=120
I tried adding the Pooling=True attribute, but this doesn't change things. Also, I learned that audit login and logout events don't necessarily reflect connection pooling, so I started using the Logins/sec, Logouts/sec and User Connections performance counters (SQLServer:General Statistics) to determine whether connection pooling occurs. Inside IIS I see many logins/sec and no logouts/sec. Outside of IIS I see many logins and logouts per second, and the user connections count fluctuates (it holds steady under IIS).
It's hard to say based on the info given, but connection pooling is definitely based off of the connection string - if the connection string is exactly the same then the connection can be pooled... it sounds like your outside application may be altering the connection string?
IIS isn't pooling ADO connections. It is likely caching the ISAPI dll though. Are you starting/stopping your outside application continuously? Or is it a single run causing multiple login events?
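One detail worth checking, as general OLE DB background rather than something from the question: Pooling, Min Pool Size and Max Pool Size are ADO.NET SqlClient keywords, and the SQLOLEDB provider ignores them. OLE DB session pooling is controlled by the OLE DB Services attribute instead, along the lines of:

Provider=SQLOLEDB.1;Initial Catalog=%s;Data Source=%s;Password=%s;User ID=%s;OLE DB Services=-1

OLE DB Services=-1 explicitly enables all provider services, including resource pooling (this is usually the default under ADO, but being explicit rules it out), and pooling also requires the connection string to be byte-for-byte identical between opens.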
