I'm looking for a way to find out how long a server has been up based only on the page it sends back. Like, if I went to www.google.com, is there some sort of response header variable that tells how long the server I connected to has been up? I'm doubting there is, but never hurts to ask...
No, because HTTP, as a protocol, doesn't really care. In any case, 50 different requests to google.com may end up at 50 different servers.
If you want that information, you need to build it into the application, something like "http://google.com/uptime" which will deliver Google's view of how long it's been up - Google will probably do this as a static page showing the date of their inception :-).
Not from HTTP.
It is possible, however, to discover uptime on many OSs by interrogating the TCP packets received. Look to RFC 1323 for more information. I believe a timestamp value in the TCP header is incremented steadily while the machine is up, and reset to zero on reboot.
Caveats: it doesn't work with all OSs, and you've got to track servers over time to get accurate uptime data.
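A rough sketch of the idea in Python with scapy (needs root for raw sockets; the target host and tick rate are assumptions, since the timestamp frequency varies by OS, so treat the result as an estimate at best):

from scapy.all import IP, TCP, sr1

target = "example.com"   # illustrative target
# Send a SYN carrying the TCP timestamp option and read TSval from the SYN-ACK.
syn = IP(dst=target) / TCP(dport=80, flags="S",
                           options=[("Timestamp", (0, 0)), ("NOP", None), ("NOP", None)])
resp = sr1(syn, timeout=2, verbose=False)

if resp is not None and resp.haslayer(TCP):
    for name, value in resp[TCP].options:
        if name == "Timestamp":
            tsval, _ = value
            hz = 1000   # assumed tick rate; common values are 100, 250 or 1000 Hz
            print(f"TSval={tsval} -> roughly {tsval / hz / 3600:.1f} hours since the counter last reset")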
Netcraft does this: see here for a vague description:
The 'uptime' as presented in these reports is the "time since last reboot" of the front end computer or computers that are hosting a site. We can detect this by looking at the data that we record when we sample a site. We can detect how long the responding computer(s) hosting a web site has been running, and by recording these samples over a long period of time we can plot graphs that show this as a line. Note that this is not the same as the availability of a site.
Unfortunately there really isn't. You can check this for yourself by requesting the HTTP headers from the server in question. For example, from google.com you will get:
HTTP/1.0 200 OK
Cache-Control: private, max-age=0
Date: Mon, 08 Jun 2009 03:08:11 GMT
Expires: -1
Content-Type: text/html; charset=UTF-8
Server: gws
Online tool to check HTTP headers:
http://network-tools.com/default.asp?prog=httphead&host=www.google.com
Now if it's your own server, you can create a script that will report the uptime, but I don't think that's what you were looking for.
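If you do go that route, a tiny endpoint along these lines would do it (just a sketch, assuming a Linux host and a Flask app; the route name is made up):

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/uptime")
def uptime():
    # /proc/uptime holds seconds since boot (Linux only)
    with open("/proc/uptime") as f:
        seconds = float(f.read().split()[0])
    return jsonify(uptime_seconds=seconds)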
To add to what Pax said, there are a number of third party services that monitor site up-time by attempting to access server resources at specific intervals. They maintain some amount of history in their own databases, and then report back to you when those resources are inaccessible.
I use Uptime Party for monitoring a server.
We have a few clients in Asia and the USA, and we're seeing this strange behavior when calling request.data while handling their POST requests:
The Singapore client is super fast (< 10 ms)
The USA clients are not as fast (50 - 100 ms)
The Chinese client is the slowest (200+ ms)
We got the above data by using cProfile, so it should be accurate (I think?). The payload from each client varies between 50 - 700 bytes but does not seem to exhibit any pattern (the Singapore client sends a medium-sized POST payload and the Chinese one a small one).
After looking at this question, I suspect we're facing something similar: the request is handed off as soon as the headers are received, so calling request.data blocks until the full POST payload has arrived. I am guessing that the Chinese clients are the slowest because the GFW slows down transmission of the POST payload.
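For what it's worth, a crude way to confirm this per request (assuming a Flask-style app; the route and names are illustrative) is to time the body read directly rather than relying on cProfile:

import time
from flask import Flask, request

app = Flask(__name__)

@app.route("/ingest", methods=["POST"])
def ingest():
    start = time.perf_counter()
    body = request.get_data()   # blocks until the full payload has arrived
    elapsed_ms = (time.perf_counter() - start) * 1000
    app.logger.info("read %d bytes in %.1f ms", len(body), elapsed_ms)
    return "ok"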
I have two questions:
Does the analysis make sense?
How can I fix this? The above behavior seems quite inefficient, since my API instance is blocked for extra time and wastes CPU cycles. It seems like it would work better if the request were fully received before being handed to the API instance.
FWIW, I inherited this code base and there may be some gaps in my understanding, but our DC/OS architecture is similar to the image below. I tried looking for configuration options in the external Marathon-LB to increase buffering or to forward only fully received requests, but I didn't find any such options.
Looks like I figured this one out!
Apparently Marathon-LB is a wrapper around HAProxy, and HAProxy has a mechanism to receive the full HTTP request payload before forwarding it on to the backend. Adding the http-buffer-request option to the Marathon-LB configuration seems to have done the trick!
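For reference, the HAProxy side of it is a single directive (available in HAProxy 1.6+); how it gets injected through Marathon-LB's templates or labels depends on your setup, so this is just a plain haproxy.cfg sketch:

defaults
  mode http
  option http-buffer-request   # wait for the full request body before passing it to a backend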
I am trying to obtain statistics for an app which is hosted on Pivotal Cloud Foundry, without using any 3rd party applications like "AppDynamics" (or others).
Specifically, I want to find out the 'Requests per second' and 'Response Time'.
I know that it is possible to access memory, disk space and cpu utilization by an app because Pivotal provides these statistics. So does Pivotal also provide 'Requests per second' and 'Response Time'?
Yes, this is pretty easy to do. You can use Logstash as the ingesting engine; you just need the correct parser. Check out http://scottfrederick.cfapps.io/blog/2014/02/20/cloud-foundry-and-logstash for the parser and config for ingesting Cloud Foundry logs. I was playing around with this before and it worked quite well. Let me know if you have any issues.
The best way is to log the requests to an external logging provider. Check out http://docs.cloudfoundry.org/devguide/services/log-management-thirdparty-svc.html. You can actually log to any HTTP endpoint that supports POST. You can use Splunk to calculate your response time and requests per second. The logs are streamed to your logging endpoint in real time, and they contain information about the requests as well as log messages from your app.
ex.
2014-11-10T11:12:47.97-0500 [App/0] OUT GET / 304 5ms
2014-11-10T11:12:48.36-0500 [App/0] OUT GET /favicon.ico 404 0ms
There are basic stats available when you run cf app <app-name>. These include the memory, CPU and disk utilization of your app. You can also access these via the REST API, documented here:
https://s3.amazonaws.com/cc-api-docs/41397913/apps/get_detailed_stats_for_a_started_app.html
That's not going to help with requests per second or response time though. @jsloyer's solution would work for that, or you could use an APM like New Relic, which will give you a wealth of data for virtually nothing.
I am working on an internal app to do host/service discovery. The type of data I am storing looks like:
IP Address: 10.40.10.6
DNS Name: wiki-internal.domain.com
1st open port:
port 80 open|close
open port banner:
HTTP/1.1 200 OK
Date: Tue, 07 Jan 2014 08:58:45 GMT
Server: Apache/2.2.15 (CentOS)
X-Powered-By: PHP/5.3.3
Content-Length: 0
Connection: close
Content-Type: text/html; charset=UTF-8
And so on. My first thought was to just put it all in one document with a string that identifies what the data is, like "port", "80". After initial data collection I realized that there was a lot of data duplication, because web server banners and the like often get reused. Also, out of 8400 machines with SSH there are only 6 different banners.
Is there a better way to design the database with references so certain banners only get created once? Performance is a big issue since the database size will double in the next year. If possible I would like to keep historical banner information for trending.
MongoDB's flexible schema allows you to match the needs of your application. While we often talk about denormalizing for speed, you certainly can normalize to reduce redundancy and storage costs. From your initial analysis and concern over database size, it seems clear that factoring out the redundancy fits your application: in this case, store banners separately and reference them with small ints for _ids, etc.
So do what you need for your application, and store your data in MongoDB in the form that matches the needs of your application.
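As a rough sketch of that normalized shape with pymongo (collection and field names are made up; the id allocation here is simplistic and a counters collection would be safer in production):

from pymongo import MongoClient

db = MongoClient()["discovery"]

def banner_id(banner_text):
    """Store each distinct banner once and return its small integer _id."""
    existing = db.banners.find_one({"text": banner_text})
    if existing:
        return existing["_id"]
    new_id = db.banners.count_documents({}) + 1
    db.banners.insert_one({"_id": new_id, "text": banner_text})
    return new_id

db.hosts.insert_one({
    "ip": "10.40.10.6",
    "dns": "wiki-internal.domain.com",
    "ports": [{"port": 80, "state": "open", "banner_id": banner_id("HTTP/1.1 200 OK ...")}],
})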
Two sections highlighted above.
1st - Mini-Profiler telling me how much time execution of a Controller/Action is taking (called via ajax)
87ms
2nd - Chrome Web Inspector telling me how much time the same ajax request is taking to complete
535 ms
Using glimpse, I figured that execution of the other lifecycle events (base controller / filters) took ~22ms.
Looking for guidance to figure out where the rest of the time is going.
Thanks.
Edit
This is almost consistent (variance is ~10 - 20 ms in both values - Mini-Profiler's and Chrome Inspector's).
These results are for an online request against a production server (VPS) running IIS 7.5. When these numbers are measured on a dev machine (localhost running IIS express), difference in Mini-Profiler and Chrome Inspector results isn't as significant.
Since these requests are against an online resource you need to account for the latency.
For example, take this:
Server time is merely 118 ms, but the DNS lookup takes 598 ms, connecting takes another 205 ms, and the response only comes back 1173 ms after I visited the page. Finally, the DOM only starts rendering 1.27 seconds in.
The server bits only account for time spent on the server inside your app.
You must add to that.
Time it takes to resolve DNS.
Time it takes to connect (if no keepalive is in place)
[waiting time]
Time it takes to send the TCP packet asking for the resource
Overhead on the web server / proxy front end
Server time (the bright red number)
Time it takes for the first TCP packet to find its way back to you.
[/waiting time]
Time it takes the rest of the packets to find the way back to you. (read about TCP congestion windows)
Time it takes the browser to parse the stuff it gets back
Time it takes to render
(and then there is the interdependency of JavaScript and CSS that I am not going to touch on here)
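If you want to see roughly where a single request's time goes, something along these lines (plain Python sockets against an illustrative host, no TLS or keep-alive) splits out a few of the buckets above:

import socket
import time

host, path = "example.com", "/"              # illustrative target

t0 = time.perf_counter()
addr = socket.gethostbyname(host)            # DNS lookup
t_dns = time.perf_counter()

sock = socket.create_connection((addr, 80))  # TCP connect
t_conn = time.perf_counter()

sock.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
first = sock.recv(1)                         # server time + first packet back
t_first = time.perf_counter()

while sock.recv(4096):                       # remaining packets
    pass
t_done = time.perf_counter()
sock.close()

print(f"dns={t_dns - t0:.3f}s connect={t_conn - t_dns:.3f}s "
      f"ttfb={t_first - t_conn:.3f}s download={t_done - t_first:.3f}s")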
I need to add a "real-time" element to my web application. Basically, I need to detect "changes" which are stored in a SQL Server table, and update various parts of the UI when a change has occurred.
I'm currently doing this by polling. I send an ajax request to the server every 3 seconds asking for any new changes - these are then returned and processed. It works, but I don't like it - it means that for each browser I'll be issuing these requests frequently, and the server will always be busy processing them. In short, it doesn't scale well.
Is there any clever alternative that avoids polling overhead?
Edit
In the interests of completeness, I'm updating this to mention the solution we eventually went with - SignalR. It's open source and comes from Microsoft. It has risen in popularity, and I can heartily recommend it, or indeed WebSync, which we also looked at.
Check out WebSync, a comet server designed for ASP.NET/IIS.
In particular, what I would do is use the SqlDependency class, and when you detect a change, use RequestHandler.Publish("/channel", data); to send out the info to the appropriate listening clients.
Should work pretty nicely.
Taken directly from the link referenced by Jakub:
Reverse AJAX with IIS/ASP.NET
PokeIn on CodePlex gives you enhanced JSON functionality to make your server-side objects available on the client side. Simply put, it is a Reverse Ajax library which makes it easy to call JavaScript functions from C#/VB.NET and to call C#/VB.NET functions from JavaScript. It has numerous features like event ordering, resource management, exception handling, marshaling, an Ajax upload control, Mono compatibility, WCF & .NET Remoting integration and scalable server push.
There is a free community license option for this library and the licensing option is quite cost effective in comparison to others.
I've actually used this and the community edition is pretty special. Well worth a look, as this type of tech will begin to dominate the landscape in the coming months/years. The CodePlex site comes complete with ASP.NET MVC samples.
No matter what: you will always be limited by the fact that HTTP is (mostly) a one-way street. Unless you implement some sensible code on the client (i.e. to listen for incoming network requests), anything else will involve polling the server for updates, no matter what others tell you.
We had a similar requirement: very fast response times in one of our real-time web applications, serving about 400 - 500 clients per web server. The server needed to notify the clients within roughly 0.1 of a second (telephony & VoIP).
In the end we implemented an async handler. On each polling request we put the request to sleep for 5 seconds, waiting for a semaphore pulse signal to respond to the client. If the 5 seconds are up, we respond with a "no event" and the client posts the request again (immediately). This resulted in very fast response times, and we never had any problems with up to 500 clients per machine... no idea how many more we could add before the polling requests became a problem.
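Roughly the same long-poll idea, sketched in Python instead of an ASP.NET async handler (names are made up; note that with a plain threaded server each waiting client still ties up a thread, which is exactly what the async handler avoids):

import threading
from flask import Flask, jsonify

app = Flask(__name__)
change_event = threading.Event()   # pulsed whenever new data is available
latest = {"value": None}

@app.route("/poll")
def poll():
    # Hold the request for up to 5 seconds waiting for a change signal.
    if change_event.wait(timeout=5):
        change_event.clear()
        return jsonify(event=True, data=latest)
    return jsonify(event=False)    # "no event": the client re-polls immediately

@app.route("/publish/<value>", methods=["POST"])
def publish(value):
    latest["value"] = value
    change_event.set()             # wake any waiting pollers
    return "", 204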
Take a look at this article.
I've read somewhere (I don't remember where) that using this WCF feature makes the host process handle requests in a way that doesn't consume blocked threads.
Depending on the restrictions on your application, you can use Silverlight to make this connection. You don't need any UI for Silverlight, but you can use sockets to keep a connection open that accepts server-side pushes of data.