I'm a little confused by the memory use of my WCF service. Brief overview: my WCF service is an OData provider that lets my iPad application talk to our SQL Server database.
The problem is that when a client (an iPad device using the Objective-C OData library) calls for a simple set of data (say, all customers from the database), the memory of the w3wp process goes up by a few MB and never really comes back down. Given that all the client does is one-off calls (retrieve a data set, update a data set, delete a data set), once it has finished a call, the memory used to perform the action should be relinquished. This is not the case at all. I gather there is some caching happening, or maybe the calling instance is not being disposed.
Can anybody steer me in the right direction, so that w3wp stays lean and releases the memory once the call has completed?
Thanks in advance
Does your database reside on the same machine as your web server? If your indexes are not properly applied you will end up consuming a lot of resources. If you are using MS SQL Server, check the minimum memory setting for the server; once it has reached the minimum memory limit, MS SQL Server will probably not free the memory until it is restarted. You should also take a look at your binding configuration: if you use a stateful (session) binding and don't close the session, the service instance stays in memory for ten minutes (the default), waiting for new requests from the same proxy object.
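If the suspicion about instances not being disposed is right, the service's instancing mode is also worth checking. A minimal sketch, assuming a plain WCF service (the contract and service names here are invented, not from the question):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    string[] GetAllCustomers();
}

// PerCall: a fresh service instance is created for each request and disposed
// as soon as the call completes, so no per-client state lingers in w3wp.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class CustomerService : ICustomerService
{
    public string[] GetAllCustomers()
    {
        return new[] { "Alice", "Bob" }; // stand-in for the real database query
    }
}
```

With a sessionful binding, also make sure the client closes its proxy (Close(), or Abort() on error), so the instance doesn't sit out the ten-minute inactivity timeout.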
I built a Web API in Delphi using FireDAC and an HTTP server component. The application uses a DBMS powered by Firebird.
Everything was working fine until the moment I started to simulate multiple requests to the same API endpoint. This causes internal server exceptions reporting that a second transaction is being opened when there is already an open transaction.
I know that all connections are closed after use and that objects are destroyed to prevent memory leaks, but I can't understand why the application triggers the exception.
Any help or thoughts that might lead me to a solution?
Multiple requests will be processed concurrently by the HTTP server.
So if two clients try to access the same resource (URL) at the same time, the server will need two sets of database connections and data access components.
If your application uses distinct objects - one set per client - and does this in a thread-safe way, both connections should work fine.
However, if you use only one datamodule to serve all incoming HTTP requests, proper serialization is required. It does not help to close connections after use; a connection must only be used by one thread at a time.
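The question is about Delphi/FireDAC, but the rule is language-agnostic. A minimal C# sketch of the two safe shapes (the connection string, query, and driver are made up for illustration):

```csharp
using System.Data.SqlClient; // stand-in for the actual Firebird driver

static class Handlers
{
    // Option 1: one set of data-access objects per request; thread-safe by construction.
    public static int CountCustomersPerRequest(string connString)
    {
        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM customers", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }

    // Option 2: one shared connection, serialized so only one request touches it at a time.
    static readonly object Gate = new object();
    static SqlConnection Shared; // assume opened once at startup

    public static int CountCustomersShared()
    {
        lock (Gate) // without this, two concurrent requests collide on one transaction
        {
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM customers", Shared))
                return (int)cmd.ExecuteScalar();
        }
    }
}
```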
So to understand the potential reason for the error, more information about the actual design of your server is needed.
Cowboy is a web server written in Erlang. It spawns a new process for each request and then uses that process for subsequent requests if the client uses HTTP pipelining (sending multiple requests on the same socket, one after another, without waiting for the responses, and assuming the responses will come back in the same order the requests were sent).
This is fine, but if you want to use the web server to build a realtime web app, it has one problem: when the socket is closed, for instance because of client network problems, the process representing that socket on the server is terminated. That means you can't use that process for storing session data, because in a realtime web app you probably want to live beyond the end of the HTTP request (if long polling is used, for instance), keep some state associated with the connected client, and think of him as "online" even after the HTTP request has ended.
In SockJS, this is solved by spawning one more process for each client (each session id).
So if you have 2,000 clients using WebSockets, you will have around 4,000 processes: one Cowboy process representing the socket, and one more keeping the session state alive in case the Cowboy process is terminated (for instance because of network problems).
THE QUESTION IS: I am relatively new to Erlang, so I don't know whether it makes much sense as a performance improvement, but I am thinking about rewriting the Cowboy web server a bit so that the process representing the realtime connection does not end until I want it to (the process stays alive even when the underlying WebSocket is terminated).
This would eliminate the need for an extra session process per client, so instead of 4,000 processes you would have just 2,000. Could that be a big performance boost in Erlang?
Erlang is pretty good with processes, but too much of anything ain't good. Using processes as direct mappings to sessions is not a good idea. Why not do it logically? You can have some in-memory storage, say ETS, or even Mnesia.
If you are using WebSockets to communicate, each user is connected via one such process; you simply map a random, unique session key to each individual process, and hence to each individual user.
-record(client, {web_sock_pid, session_key, username}).
If the process exits and the client end has a way of reconnecting, then once it re-identifies itself as the same user, the session key still holds; only the pid of the attached process has changed, and that does not matter.
If it is NOT WebSockets, and it is just HTTP REST/JSON/JSONP/XML services, then it is even easier. Use ETS tables in RAM: a new session is stored with the parameters that define it, and on each request the session key comes along with the other parameters. Message delivery is by Comet or by frequent checks from the client end.
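The record above is the Erlang shape; as a rough illustration of the same idea in C# (with a ConcurrentDictionary standing in for ETS, and all names invented):

```csharp
using System.Collections.Concurrent;

// Session state is keyed by a random session key, not by the socket handler,
// so it survives a dropped connection.
class ClientSession
{
    public string SessionKey;
    public string UserName;
    public object Connection; // whatever represents the live socket right now
}

static class SessionStore
{
    static readonly ConcurrentDictionary<string, ClientSession> Sessions =
        new ConcurrentDictionary<string, ClientSession>();

    public static void Attach(string key, string user, object connection)
    {
        Sessions.AddOrUpdate(
            key,
            _ => new ClientSession { SessionKey = key, UserName = user, Connection = connection },
            (_, s) => { s.Connection = connection; return s; }); // reconnect: same session, new socket
    }
}
```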
Sounds like you are doing some premature optimization, if you ask me.
Erlang processes are very inexpensive. You shouldn't really have to worry about spawning too many processes.
Write it with two processes per WebSocket, then do some measurements to see where it is using the most memory and wasting the most CPU cycles.
I have an ISAPI DataSnap server and a client application which communicate over the web. I have been looking for a way to show data transmission progress when the client application is retrieving data or applying updates, but so far I haven't found anything except setting ClientDataSet.PacketRecords to a small number and running a loop to retrieve packets. Since my data contains BLOB fields, this method isn't very practical, as each record might grow beyond 1024 KB.
Is it possible to throw a TIdHTTPProxyServer on my client application and monitor data transmission using it?
Update:
Even if that's possible, I'm concerned about the send/receive routine being executed in the main thread and thus blocking any GUI activity. I read somewhere that I could execute these calls (Refresh and ApplyUpdates) in separate threads, but I haven't got a clue how to do that.
I need to add a "real-time" element to my web application. Basically, I need to detect "changes" which are stored in a SQL Server table, and update various parts of the UI when a change has occurred.
I'm currently doing this by polling. I send an ajax request to the server every 3 seconds asking for any new changes - these are then returned and processed. It works, but I don't like it - it means that for each browser I'll be issuing these requests frequently, and the server will always be busy processing them. In short, it doesn't scale well.
Is there any clever alternative that avoids polling overhead?
Edit
In the interests of completeness, I'm updating this to mention the solution we eventually went with: SignalR. It's open source and comes from Microsoft. It has risen in popularity, and I can heartily recommend it, or indeed WebSync, which we also looked at.
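For anyone landing here later, the shape of the SignalR solution is roughly this (a sketch against SignalR 2.x; the hub and method names are mine, not part of the library):

```csharp
using Microsoft.AspNet.SignalR;

// An empty hub is enough for server-to-browser broadcasts.
public class ChangesHub : Hub { }

static class ChangeNotifier
{
    // Call this whenever a change is detected on the server.
    public static void Broadcast(object change)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<ChangesHub>();
        hub.Clients.All.changeReceived(change); // invokes "changeReceived" on every connected client
    }
}
```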
Check out WebSync, a comet server designed for ASP.NET/IIS.
In particular, what I would do is use the SqlDependency class, and when you detect a change, use RequestHandler.Publish("/channel", data); to send the info out to the appropriate listening clients.
Should work pretty nicely.
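A rough sketch of the SqlDependency half (the table, columns, and connection string are placeholders; the database needs Service Broker enabled, and the query has to follow the notification rules, e.g. two-part table names and no SELECT *):

```csharp
using System.Data.SqlClient;

static class ChangeWatcher
{
    const string ConnString = "<your connection string>";

    public static void Watch()
    {
        SqlDependency.Start(ConnString); // idempotent; once per app is enough

        using (var conn = new SqlConnection(ConnString))
        using (var cmd = new SqlCommand("SELECT ChangeId, Payload FROM dbo.Changes", conn))
        {
            var dep = new SqlDependency(cmd);
            dep.OnChange += (sender, e) =>
            {
                Watch(); // a subscription fires only once, so re-subscribe first
                // ...then push to the listening clients, e.g.
                // RequestHandler.Publish("/channel", data);  (WebSync, as above)
            };

            conn.Open();
            using (cmd.ExecuteReader()) { } // executing the command registers the subscription
        }
    }
}
```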
Taken directly from the link referenced by Jakub:
Reverse AJAX with IIS/ASP.NET
PokeIn on CodePlex gives you enhanced JSON functionality to make your server-side objects available on the client side. Simply put, it is a Reverse Ajax library which makes it easy to call JavaScript functions from C#/VB.NET and to call C#/VB.NET functions from JavaScript. It has numerous features such as event ordering, resource management, exception handling, marshaling, an Ajax upload control, Mono compatibility, WCF & .NET Remoting integration, and scalable server push.
There is a free community license option for this library, and the licensing is quite cost-effective in comparison to others.
I've actually used this, and the community edition is pretty special. Well worth a look, as this type of tech will begin to dominate the landscape in the coming months/years. The CodePlex site comes complete with ASP.NET MVC samples.
No matter what, you will always be limited by the fact that HTTP is (mostly) a one-way street. Unless you implement some sensible code on the client (i.e. to listen for incoming network requests), anything else will involve polling the server for updates, no matter what others tell you.
We had a similar requirement: very fast response times in one of our real-time web applications, serving about 400-500 clients per web server. The server needed to notify the clients within about 0.1 of a second (telephony & VoIP).
In the end we implemented an async handler. On each polling request we put the request to sleep for up to 5 seconds, waiting for a semaphore pulse signal before responding to the client. If the 5 seconds are up, we respond with a "no event" and the client posts the request again (immediately). This resulted in very fast response times, and we never had any problems with up to 500 clients per machine; no idea how many more we could add before the polling requests became a problem.
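A minimal sketch of that wait-with-timeout, using SemaphoreSlim (the names are invented; the real thing was wired into an ASP.NET async handler):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class EventGate
{
    static readonly SemaphoreSlim Signal = new SemaphoreSlim(0);

    // Called by whatever detects a new event (telephony/VoIP in our case).
    public static void Pulse() => Signal.Release();

    // Called per polling request; the wait does not block a thread.
    public static async Task<string> WaitForEventAsync()
    {
        bool pulsed = await Signal.WaitAsync(TimeSpan.FromSeconds(5));
        return pulsed ? "event" : "no event"; // the client re-polls immediately either way
    }
}
```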
Take a look at this article.
I've read somewhere (I don't remember where) that using this WCF feature makes the host process handle requests in a way that doesn't consume blocked threads.
Depending on the restrictions on your application, you can use Silverlight to make this connection. You don't need any UI for Silverlight, but you can use its sockets to hold a connection that accepts server-side pushes of data.
Is it possible to check, programmatically, the number of open connections a WCF service has?
I want to see if connections to the web service were closed properly in the ASP.NET code.
thanks
You could check out something like Windows Server AppFabric for that purpose.
In WCF, most of the time, the "connections" are only open very briefly anyway - for as long as a service call lasts. So you can't really go check if there are any connections around - they'll be gone when the call terminates.
You can also look into the WCF performance counters available on the server side to keep an eye on the number of concurrent sessions; you can definitely query performance counters from .NET code. The service performance counters offer, for example, the number of instances of your service class in memory at any given time; that's the number of requests being handled at that moment (which is probably what you could call a "connection" to a WCF service).
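Reading one of those counters from code is straightforward with System.Diagnostics (the counters must be enabled in the service's config first, and the category, counter, and instance names below are examples to adjust from what perfmon shows, not exact values):

```csharp
using System.Diagnostics;

static class WcfCounters
{
    public static float ReadServiceInstances()
    {
        using (var counter = new PerformanceCounter(
            "ServiceModelService 4.0.0.0",                 // WCF per-service category
            "Instances",                                   // live service-class instances
            "MyService@http|localhost|myservice.svc"))     // copy the instance name from perfmon
        {
            return counter.NextValue();
        }
    }
}
```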