Two sections are highlighted above:
1st - Mini-Profiler showing how long execution of a Controller/Action takes (called via AJAX): 87 ms
2nd - Chrome Web Inspector showing how long the same AJAX request takes to complete: 535 ms
Using Glimpse, I found that execution of the other lifecycle events (base controller / filters) took ~22 ms.
I'm looking for guidance on figuring out where the rest of the time is going.
Thanks.
Edit
This is fairly consistent (the variance is ~10-20 ms in both values, Mini-Profiler's and Chrome Inspector's).
These results are for an online request against a production server (a VPS) running IIS 7.5. When the same numbers are measured on a dev machine (localhost running IIS Express), the difference between the Mini-Profiler and Chrome Inspector results isn't as significant.
Since these requests are against an online resource, you need to account for latency.
For example, take this:
Server time is merely 118 ms; however, the DNS lookup takes 598 ms, connecting takes another 205 ms, and the response only comes back 1,173 ms after I visited the page. Finally, the DOM only starts rendering 1.27 seconds in.
The server bits only account for time spent on the server inside your app. To that you must add:
Time it takes to resolve DNS.
Time it takes to connect (if no keepalive is in place)
[waiting time]
Time it takes to send the TCP packet asking for the resource
Overhead on the web server / proxy front end
Server time (the bright red number)
Time it takes for the first TCP packet to find its way back to you.
[/waiting time]
Time it takes the rest of the packets to find their way back to you (read about TCP congestion windows).
Time it takes the browser to parse the stuff it gets back
Time it takes to render.
(and then there is the interdependency of JavaScript and CSS that I am not going to touch on here)
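To make the "server bits" point concrete, here is a minimal MVC sketch (the controller, action and step names are hypothetical, not taken from the question): MiniProfiler only times what actually executes on the server inside your app, so none of the network or rendering items in the list above ever show up in its total.

```csharp
using System.Web.Mvc;
using StackExchange.Profiling;

public class ReportsController : Controller
{
    // Hypothetical action: the MiniProfiler number (the 87 ms) covers only what
    // runs on the server, e.g. inside steps like these. DNS, connect, transfer
    // and browser parse/render time are all added on top of it.
    public ActionResult Summary()
    {
        var profiler = MiniProfiler.Current; // assumes profiling was started for this request

        using (profiler.Step("Load data"))
        {
            // database / service calls measured here
        }

        using (profiler.Step("Render view"))
        {
            return View();
        }
    }
}
```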
One page in my Flask web application returns a list of entries based on some filter supplied by the user. When there are too many items (more specifically, when the size of the HTML page exceeds 150 KB), the browser takes forever (over 2 minutes) to load the page. During the wait, a partial list is displayed in the browser, which confuses some users. On the other hand, smaller pages (<= 100 KB) load almost instantly. See this link for a Firefox Profiler recording made on Firefox Developer Edition.
I first suspected a web/application server misconfiguration, but the issue cannot be reproduced with the curl command copied and pasted from Firefox's developer tools (I can see the HTML response right away). Then perhaps it's due to my browser's quirks? Well, no: besides Firefox, such long pages also take forever to load on Firefox Developer Edition, Chrome, and Microsoft Edge.
Additional information:
The application is deployed with Docker and meinheld-gunicorn-flask-docker.
The problem does not manifest itself when I launch the application with Flask's debug server outside of Docker.
The network latency should be extremely low since I'm sitting on the same floor as the server.
I have root access to the server, which is Linux mwfsh 3.10.0-1160.11.1.el7.x86_64 #1 SMP Fri Dec 18 16:34:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux.
How should I address the ridiculously long HTTP response time? Which commands should I run to get a better idea of the underlying issue?
I'm working on a project that uses HERE's geolocation service.
The project is basically a feature in our system that will route a list of addresses. This routing will happen every day and will involve at least around 7,000 points.
Today we use the HERE service to geolocate these addresses and send them to our routing service. However, we are facing a huge bottleneck in this implementation: of the 7,000 points we use for testing, we were able to geolocate only about 200; if we send a larger number of points, we simply stop receiving responses, without even a timeout or an error.
About the implementation: we do not send all points in the same request; each point to be geocoded is sent in its own request. We adjusted our software to send only four requests per second, thinking there could be a QPS limit, but that did not solve the problem. We also thought about implementing a message queue, but this could end up increasing the total time of geolocation + routing, which for us makes that solution unfeasible.
In the code, we have an array that stores the addresses to be geocoded, and for each position in the array we execute a GET request to the following URL: https://geocoder.ls.hereapi.com/6.2/geocode.json?apiKey=TOKEN&searchtext=ADDRESS
I'd appreciate any help finding a solution.
For a large number of geocodes you may wish to consider the Batch Geocoder API:
https://developer.here.com/documentation/batch-geocoder/dev_guide/topics/quick-start-batch-geocode.html
I cannot replicate a problem with more than 200 Geocoder requests in a row, so we may need to see some code before we can help further.
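As a point of comparison, a throttled client along these lines (a C# sketch; the class, names and error handling are illustrative, not your actual code) should be able to work through thousands of single-address requests against the URL you showed without exhausting outbound connections:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Illustrative sketch: reuse one HttpClient and cap the number of in-flight
// requests with a SemaphoreSlim so a large address list doesn't pile up
// simultaneous connections.
public static class Geocoder
{
    private static readonly HttpClient Http = new HttpClient();
    private static readonly SemaphoreSlim Throttle = new SemaphoreSlim(4); // max 4 in flight

    public static async Task<List<string>> GeocodeAllAsync(IEnumerable<string> addresses, string apiKey)
    {
        var tasks = addresses.Select(async address =>
        {
            await Throttle.WaitAsync();
            try
            {
                var url = "https://geocoder.ls.hereapi.com/6.2/geocode.json" +
                          $"?apiKey={apiKey}&searchtext={Uri.EscapeDataString(address)}";
                using (var response = await Http.GetAsync(url))
                {
                    response.EnsureSuccessStatusCode();
                    return await response.Content.ReadAsStringAsync();
                }
            }
            finally
            {
                Throttle.Release();
            }
        });

        return (await Task.WhenAll(tasks)).ToList();
    }
}
```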
Are you using our freemium service? Just to let you know, version 6.2 of the Geocoder API no longer supports new feature development, so if you are still implementing this use case, please try to switch to V7. Do you mean that you are not able to send the entire 7,000 addresses and fetch responses even in chunks? It could also be that your Linux system restricts the number of pooled network connections open at the same moment. Try sending requests from an endpoint that is not behind a firewall, and from a Windows system.
I'm using Fig and Docker to containerise a sample Rails app. Currently it works fine: the database and server start up, and when I have an active Internet connection it all works perfectly. However, when I don't have an Internet connection, it takes a long time (20 seconds from the browser requesting the localhost page) to connect to the Rails/WEBrick server.
I've looked into the logs and nothing is out of the ordinary. It just takes a long time for the container to receive the initial connection and furthermore a long time to transmit the data.
Okay, I tested it, and it was because of DNS resolution. When you "disable" the typical Google DNS and instead use localhost, the latency goes away. This is probably because, without doing this, Docker assumes that 127.0.0.1 is some address that needs to be looked up via a name server, and spends a lot of time waiting for a response (presumably because the query was sent via UDP, it waits longer because of lost/dropped packets). This is also why the request wasn't recorded immediately, as DNS sits at a lower level of the network stack.
I need to add a "real-time" element to my web application. Basically, I need to detect "changes" which are stored in a SQL Server table, and update various parts of the UI when a change has occurred.
I'm currently doing this by polling. I send an ajax request to the server every 3 seconds asking for any new changes - these are then returned and processed. It works, but I don't like it - it means that for each browser I'll be issuing these requests frequently, and the server will always be busy processing them. In short, it doesn't scale well.
Is there any clever alternative that avoids polling overhead?
Edit
In the interests of completeness, I'm updating this to mention the solution we eventually went with - SignalR. It's open source and comes from Microsoft. It has risen in popularity, and I can heartily recommend it, or indeed WebSync, which we also looked at.
Check out WebSync, a comet server designed for ASP.NET/IIS.
In particular, what I would do is use the SqlDependency class, and when you detect a change, use RequestHandler.Publish("/channel", data); to send the info out to the appropriate listening clients.
Should work pretty nicely.
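For reference, a minimal sketch of the SqlDependency wiring might look like this (the connection string, table and query are placeholders; the commented Publish call is the WebSync piece mentioned above):

```csharp
using System.Data.SqlClient;

// SqlDependency fires OnChange when the result set of the monitored query changes;
// from that handler you can push to connected clients.
public static class ChangeNotifier
{
    private const string ConnectionString =
        "Data Source=.;Initial Catalog=MyDb;Integrated Security=True"; // placeholder

    public static void Start()
    {
        SqlDependency.Start(ConnectionString); // requires Service Broker enabled on the database
        Subscribe();
    }

    private static void Subscribe()
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(
            "SELECT ChangeId, Payload FROM dbo.Changes", connection)) // plain, schema-qualified query (no SELECT *)
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += (sender, e) =>
            {
                // A dependency only fires once, so re-subscribe, then notify clients.
                Subscribe();
                // RequestHandler.Publish("/channel", data);  // WebSync publish, as suggested above
            };

            connection.Open();
            command.ExecuteReader().Dispose(); // executing the command registers the subscription
        }
    }
}
```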
Taken directly from the link referenced by Jakub (i.e.):
Reverse AJAX with IIS/ASP.NET
PokeIn on CodePlex gives you enhanced JSON functionality to make your server-side objects available on the client side. Simply put, it is a Reverse Ajax library which makes it easy to call JavaScript functions from C#/VB.NET and to call C#/VB.NET functions from JavaScript. It has numerous features like event ordering, resource management, exception handling, marshaling, an Ajax upload control, Mono compatibility, WCF & .NET Remoting integration, and scalable server push.
There is a free community license option for this library and the licensing option is quite cost effective in comparison to others.
I've actually used this and the community edition is pretty special. Well worth a look, as this type of tech will begin to dominate the landscape in the coming months/years. The CodePlex site comes complete with ASP.NET MVC samples.
No matter what: you will always be limited by the fact that HTTP is (mostly) a one-way street. Unless you implement some sensible code on the client (i.e. to listen for incoming network requests), anything else will involve polling the server for updates, no matter what others tell you.
We had a similar requirement: to have very fast response times in one of our real-time web applications, serving about 400-500 clients per web server. The server needed to notify the clients within about 0.1 of a second (telephony & VoIP).
In the end we implemented an async handler. On each polling request we put the request to sleep for 5 seconds, waiting for a semaphore pulse signal before responding to the client. If the 5 seconds are up, we respond with a "no event" and the client posts the request again (immediately). This resulted in very fast response times, and we never had any problems with up to 500 clients per machine; no idea how many more we could add before the polling requests might create a problem.
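The same idea in rough, modern form (an illustrative sketch, not the original async handler; it uses an async MVC action and a SemaphoreSlim in place of the semaphore pulse described above):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Mvc;

// Each poll waits up to 5 seconds for a "change" signal; if nothing happens,
// it returns a "no event" response and the client immediately polls again.
public class EventsController : Controller
{
    // Released by whatever detects a change (e.g. a SqlDependency handler).
    // Note: Release() wakes one waiting poll; to broadcast to all waiting
    // clients you would pulse/replace a shared TaskCompletionSource instead.
    private static readonly SemaphoreSlim ChangeSignal = new SemaphoreSlim(0);

    public static void NotifyChange() => ChangeSignal.Release();

    public async Task<ActionResult> Poll()
    {
        bool signalled = await ChangeSignal.WaitAsync(TimeSpan.FromSeconds(5));
        return Json(new { changed = signalled }, JsonRequestBehavior.AllowGet);
    }
}
```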
Take a look at this article.
I've read somewhere (I don't remember where) that using this WCF feature makes the host process handle requests in a way that doesn't consume blocked threads.
Depending on the restrictions on your application, you can use Silverlight to make this connection. You don't need any UI for Silverlight; you can use sockets to hold a connection that accepts server-side pushes of data.
Hi,
We have a device in the field which sends TCP packets to our server once a day. I have a Windows service which constantly listens for those packets. The code in the service is pretty much a carbon copy of the MSDN example (Asynchronous Server Socket Example) - the only difference being that our implementation doesn't send anything back. It just receives, processes the data and closes the socket. The service simply starts a thread which immediately runs the code linked above.
The problem is that when I go to the Task Manager of the server on which it is running, the service seems to be using all of the CPU (it says 99) all the time. I was notified of this by IT. But I don't understand what those CPU cycles are being used for; the thread just blocks on allDone.WaitOne(), doesn't it?
I also made a console application with the same code, and that works just fine, i.e. using CPU only when data is being processed. The task in each case completes successfully each time, but the service implementation, from the looks of it, seems very inefficient. What could I be doing wrong here?
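For reference, the core of the pattern we copied looks roughly like this (simplified, with the receive/processing code omitted):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

// Simplified outline of the MSDN "Asynchronous Server Socket Example" accept loop.
public static class AsyncListener
{
    private static readonly ManualResetEvent allDone = new ManualResetEvent(false);

    public static void StartListening()
    {
        var endPoint = new IPEndPoint(IPAddress.Any, 11000);
        var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(endPoint);
        listener.Listen(100);

        while (true)
        {
            allDone.Reset();
            listener.BeginAccept(AcceptCallback, listener);
            allDone.WaitOne(); // the listening thread is expected to block here until a client connects
        }
    }

    private static void AcceptCallback(IAsyncResult ar)
    {
        allDone.Set(); // lets the accept loop arm the next BeginAccept
        var listener = (Socket)ar.AsyncState;
        Socket handler = listener.EndAccept(ar);
        // ... receive, process the data, then handler.Close();
    }
}
```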
Thanks.
Use a profiler to find out where your CPU time is spent. That should have been your first thought - profilers are one of the main tools for programmers.
It will pretty much exactly tell you what part of the code is burning the CPU.
The code, btw, looks terrible - like an example of how to use async sockets, not like a good architecture for a multi-connection server. Sorry to say, you may have to rewrite this.