Measuring elapsed time remotely - network-programming

I have a program which measures the time a student takes to perform a task.
The program is connected to the Internet and the student with the lowest time gets a prize on the website.
Given that the PC could measure time incorrectly (I've seen it happen) or the student could preload a shared library to intercept the OS calls and give phony times, the timing needs to be performed on the server side.
Currently my logic goes like this:
Client signals server to start.
Server records time and sends ACK back to client.
Client starts task after receiving ACK.
Client finishes task and sends notification to server.
Server records time and calculates elapsed time.
The problem is, this includes the latency from the time the Server sends the ACK to the time the Client receives it + the time the Client finishes to the time the Server receives the message. If the latency is 400ms and the task is 6s long, it makes up a rather large percentage of the total time.
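For reference, here is a minimal sketch of the current server-side flow (the socket details are illustrative, not my actual code):

    import socket
    import time

    # Minimal sketch of the current flow; socket details are illustrative.
    srv = socket.create_server(("", 9000))
    conn, _ = srv.accept()

    conn.recv(16)                    # client signals "start"
    t0 = time.monotonic()            # server records start time
    conn.sendall(b"ACK")             # client begins the task on receipt

    conn.recv(16)                    # client signals "done"
    elapsed = time.monotonic() - t0
    # elapsed includes the server->client ACK latency plus the
    # client->server "done" latency, e.g. ~400ms on top of a 6s task.
    print(f"measured: {elapsed:.3f}s")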
Is there any better way to measure the time of the task when the student or the PC may be untrustworthy?

Some competitions require the student or competitor to upload their binary program (or perhaps the source code) to your server, which you do control. You could do that. In particular, since it is student code, you can probably require the source code to be uploaded, restrict it to some API, and e.g. run it in a chroot-ed environment.
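For example, here is a minimal sketch of timing a submission entirely on the server (Linux-only; the command and CPU limit are illustrative assumptions):

    import resource
    import subprocess
    import time

    def run_timed(command, cpu_seconds=30):
        """Run an uploaded program and time it entirely server-side."""
        def limit():
            # Cap CPU time so a hostile submission cannot run forever.
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        start = time.monotonic()
        subprocess.run(command, preexec_fn=limit,
                       timeout=cpu_seconds * 2, check=True)
        return time.monotonic() - start

    elapsed = run_timed(["./submission"])  # hypothetical student binary
    print(f"task took {elapsed:.3f}s")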

Related

How to know whether the server is in good health or not?

How can I know that my server is working fine, i.e. that it is in good health?
My requirement: users are complaining that they cannot access the web application (web site); requests take a long time, and sometimes they never complete.
I want to know whether my web site is in good condition before users notice a problem, and to get an alert message when it is not.
I want to know how to measure whether the server is responsive, or whether users are facing problems. Sometimes my site takes a long time because millions of data records have to be retrieved; in that case I cannot depend on response time alone.
Please help me with this.
Monitoring response time without any third-party software can be done with scripts like WebInject. WebInject is a Perl script that executes a browsing scenario and tells you whether the response is acceptable.
Run a script at a regular interval, say every 10 minutes, that starts a WebInject scenario. If the scenario fails (check the return code of your WebInject call), your script can send you an email, an SMS, start a sound alarm... whatever is relevant to you.
You can also add some complexity by running a diagnostic script (check the network by pinging relevant hardware, check CPU/RAM usage of your servers, check the number of sessions in your database, ...) and send the diagnostics by email. You can also save the response times in a database (such as an RRD database) to get a graphical view and be able to do some problem analysis on it.
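If you'd rather not depend on WebInject, the same idea fits in a few lines of Python (the URL, threshold and addresses below are placeholders); run it from cron every 10 minutes:

    import smtplib
    import time
    from email.message import EmailMessage
    from urllib.request import urlopen
    from urllib.error import URLError

    CHECK_URL = "https://example.com/health"   # placeholder endpoint
    MAX_SECONDS = 5.0                          # acceptable response time
    ALERT_TO = "ops@example.com"               # placeholder address

    def check():
        start = time.monotonic()
        try:
            with urlopen(CHECK_URL, timeout=MAX_SECONDS) as resp:
                ok = resp.status == 200
        except URLError:
            ok = False
        elapsed = time.monotonic() - start
        return ok and elapsed <= MAX_SECONDS, elapsed

    def alert(elapsed):
        msg = EmailMessage()
        msg["Subject"] = "Site health check failed"
        msg["From"] = "monitor@example.com"
        msg["To"] = ALERT_TO
        msg.set_content(f"Check failed or too slow ({elapsed:.1f}s).")
        with smtplib.SMTP("localhost") as s:  # assumes a local SMTP server
            s.send_message(msg)

    if __name__ == "__main__":
        ok, elapsed = check()
        if not ok:
            alert(elapsed)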

Cost of continuous replications vs one-shot replications (using TouchDB and Cloudant)

We have an app that uses Cloudant as a remote server. However, from previous experience, Cloudant is not completely compatible with TouchDB's continuous replications. So our alternative for now is to manually trigger one-shot replications at a fixed frequency. We would like to know whether that approach is going to cost us more money than continuous replications, since continuous replications use longpoll and don't need to query the server often. In other words, does each one-shot pull replication from Cloudant cost us a GET request?
Thank you,
Paul
I think the issue you refer to is [1].
Cloudant's replication is 100% compatible with CouchDB. In this
instance, TouchDB's logs indicate the iOS network stack passed
on incomplete JSON to TouchDB. It's not clear who was to blame
in this case for the replication failure.
[1] https://github.com/couchbaselabs/TouchDB-iOS/issues/241
For the cost question, a one-shot pull replication will result in a GET to the _changes
feed each time it happens, plus the other requests required to
replicate. This _changes request will be counted as a light
HTTP request against your Cloudant account.
However, whether this works out as more or fewer requests overall
depends on the number of changes coming down from the remote server.
It's also important to remember that the number of _changes calls is very small
relative to the number of other calls involved (e.g., getting the
content of the changes themselves, particularly if there are many
attachments).
While this question is specific to TouchDB, and I mention specific
behaviours of that codebase, this answer deals with the requests involved
in replication between any two systems speaking the CouchDB replication
protocol[2].
[2] http://www.dataprotocols.org/en/latest/couchdb_replication.html
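For illustration, a one-shot pull replication can be triggered on any server speaking that protocol with a single POST to _replicate (TouchDB exposes the equivalent through its native API; the URLs below are placeholders). A rough Python sketch:

    import json
    from urllib.request import Request, urlopen

    body = json.dumps({
        "source": "https://account.cloudant.com/remote-db",  # placeholder
        "target": "local-db",
        "continuous": False,  # one-shot: the call drives a single replication
    }).encode()

    req = Request("http://localhost:5984/_replicate", data=body,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        print(json.load(resp))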
Let's take a contrived example: 1 update per 10 second window to
the source database for the replication, where a TouchDB database
is the target. Let's take a 5 minute poll vs. a continuous replication.
For simplicity of call-counting, let's also take attachments out of the
picture. We'll also assume the device has a constant network connection.
For the continuous case, every 10s TouchDB will receive an update in
the _changes feed. This causes the longpoll connection to close.
TouchDB then runs through the changes, requesting the updates from the
source database; one or more GET requests on the remote server. While
this is happening, TouchDB has to open up another longpoll request
to _changes. So in a five minute period, you'd end up with perhaps
30 calls to _changes, plus all the calls to get documents and record
checkpoints.
Compare this with a one-shot replication every five minutes. You'd
receive notification of the 30 updates in one _changes feed call.
TouchDB implements an optimisation[3] whereby it will call _all_docs
to get updated documents for 1- revs, so you might end up with a single
call to get all 30 documents (not possible in the continuous case,
as each change arrives on its own). Then you have the checkpoint
documents to record. At best that's fewer than 5 HTTP calls; at worst
about a third of the continuous case, since you've avoided the extra
_changes requests.
[3] https://github.com/couchbaselabs/TouchDB-iOS/wiki/Replication-Algorithm#performance
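To make the comparison concrete, here is a back-of-envelope tally of the example above (these counts are approximations, not measured figures):

    # Contrived example: 1 update per 10s, a 5-minute window, no attachments.
    window_s = 5 * 60
    updates = window_s // 10               # 30 updates in the window

    # Continuous: each update closes the longpoll, so roughly one
    # _changes call per update, plus at least one GET per document.
    continuous_calls = updates + updates   # ~60, plus checkpoints

    # One-shot every 5 minutes: one _changes call reports all 30 updates,
    # and the _all_docs optimisation can batch the document fetches.
    oneshot_calls = 1 + 1                  # ~2, plus checkpoints

    print(continuous_calls, oneshot_calls)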
It comes down to the frequency of updates you expect to the source
database. One-shot replication is likely to provide a smoother price
curve as you're in better control of the number of requests you make.
A further question is how often connections will drop because of the
network disconnects which happen regularly with mobile devices.
TouchDB's continuous replications will fire back up each time the
user comes on line (if added via the _replicator database). This is a
further source of unpredictable costs.
However, the benefit of more immediate visibility of changes may
well be worth the uncertainty.

Web2py server problem

I am running a web2py server which handles some requests that may take from a few seconds to a few minutes to complete. Once a connection is made to the server and it is processing a request that takes about 2-3 minutes, new connections to the server have to wait until the former request is completed.
I don't know if we can tweak some parameters in web2py for this. Is there any way out of this problem?
web2py does not lock the server when busy with a connection, but it does lock the user session, on purpose. That means other users can connect, but not the one that started the original request. In the action that takes time you can do:
session._unlock(response)
and this problem (if the diagnosis is correct) will go away.
Anyway, it is not a good idea to have requests that take so long. The web server may kill your process, and it is bad for usability. You should have a db table where you queue such tasks and handle them in a background process (explained in the manual), then use Ajax or HTML5 websockets (web2py/gluon/contrib/comet_messaging.py) to check progress on the long-running task.
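A rough sketch of that pattern (table and field names are illustrative, and error handling is omitted):

    # In a model file -- a minimal queue table (names are illustrative):
    db.define_table('task_queue',
                    Field('payload', 'text'),
                    Field('status', default='pending'))

    # In a controller -- enqueue instead of doing the slow work inline:
    def start_task():
        task_id = db.task_queue.insert(payload=request.vars.data)
        return dict(task_id=task_id)

    # In a background script, run with model access, e.g.:
    #   python web2py.py -S yourapp -M -R applications/yourapp/private/worker.py
    import time
    while True:
        task = db(db.task_queue.status == 'pending').select().first()
        if task:
            # ... perform the long-running work here ...
            task.update_record(status='done')
            db.commit()
        time.sleep(5)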
Please bring this up on the web2py mailing list and we will help with more concrete examples.

Which would be the right approach to take: SQS or SNS?

I am going to build a Rails application which integrates Amazon's cloud services.
I have explored Amazon's SNS service, which offers public subscriptions; that is not what I want, since I want to notify only a particular subscriber.
For example, if I have 5 subscribers on one topic, then a notification should go to one particular subscriber only.
I have also explored Amazon's SQS, in which I have to write a poller that monitors the queue for messages. SQS also has a lock mechanism, but the problem is that the queue is distributed, so there is a chance of the same message being delivered again from another copy of the queue and processed twice.
I want to know which would be the right approach to take.
SQS sounds like what you want.
You can run multiple "worker" processes that compete over messages in the queue. Each message is only consumed once. The logic behind the "lock" / timeout that you mention is as follows: if one of your workers were to die after downloading a message, but before processing it, then you want that message to eventually time out and be re-downloaded for processing on another node.
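Here is a minimal sketch of that consume-then-delete loop in Python with boto3 (the queue URL and handler are placeholders; the question is about Rails, but the pattern is the same in any SDK):

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

    def process(body):
        print("handling", body)  # hypothetical handler

    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1,
                               WaitTimeSeconds=20)  # long poll, see below
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        # Delete only after successful processing; if the worker dies
        # first, the visibility timeout expires and the message is
        # redelivered to another worker.
        sqs.delete_message(QueueUrl=QUEUE_URL,
                           ReceiptHandle=msg["ReceiptHandle"])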
Yes, SQS is built on a polling model. For example, I have a number of use cases in which I use a minutely cron job to poll for new messages in the queue and take action on any messages found. This pattern is stupid simple to build and works wonders for a bunch of use cases -- a handy little "client" script that pushes a message into the queue, and the cron activated script that will process that message within a minute or so.
If your message pattern is extremely sparse -- eg, only a few messages a day -- it may seem wasteful to poll constantly while the queue is empty. It hardly matters.
My original calculation was that a minutely cron job would cost $0.04 (now $0.02) per month. Since then, SQS added a "Long-Polling" feature that lets you achieve sub-second latency on processing new messages by sending 1 "long-poll" message every 20 seconds to poll an idle queue. Plus, they dropped the price 50%. So per month, that's 131k messages (~$0.06), a little bit more expensive, but with near realtime request processing.
Keep in mind that a minutely cron job I described only costs ~$0.04 / month in request load (30d*24h*60m * 1c / 10k msgs). So at a minutely clip, cost shouldn't really be a concern here. Even polling every second, the price rises only to $2.59 / mo, not exactly a bank buster.
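As a sanity check on those numbers (prices as quoted above; a 30-day month assumed):

    OLD_PRICE = 0.01 / 10_000   # $0.01 per 10k requests
    NEW_PRICE = OLD_PRICE / 2   # after the 50% price drop

    minutely = 30 * 24 * 60             # 43,200 requests / month
    long_poll = 30 * 24 * 60 * 3        # one poll per 20s: ~130k / month
    per_second = 30 * 24 * 60 * 60      # 2,592,000 requests / month

    print(f"${minutely * OLD_PRICE:.2f}")    # ~$0.04 (~$0.02 at the new price)
    print(f"${long_poll * NEW_PRICE:.2f}")   # ~$0.06
    print(f"${per_second * OLD_PRICE:.2f}")  # ~$2.59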
However, it is possible to avoid frequent polling using a webservice that takes an SNS HTTP message. Such an architecture would work as follows: client pushes message to SNS, which pushes message to SQS and routes an HTTP request to your webservice, triggering it to drain the queue. You'd still want to poll the queue hourly or daily, just in case an HTTP request was dropped. In the end though, I'm not sure I can think of any scenario which really justifies such complexity. I'd much rather pay $0.04 a month to have a dirt simple cron job polling my queue.

Load-testing web-app

When load testing a basic web application, what sanity checks do you do other than expected response time?
Is it fair to ask for peak memory usage?
What other checks do you make?
On the server
Requests per second the application can withstand
Requests per second that hit the database (if any, related to the number above, but it's useful to have them as separate figures)
Transferred bandwidth (separated by media type, if possible)
CPU utilization
Memory utilization
On the client
Response time
Weight of the average page
Whether CPU usage is high at any time
Run something like YSlow to see what you can optimize in the output to make it quick for users
Stress-testing tools usually come with most of these measures (except for memory, CPU and database usage), as YSlow or Firebug do on the client.
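If you want to collect the client-side numbers yourself rather than rely on a tool, a minimal load generator is only a few lines of Python (the URL, worker count and request count are illustrative):

    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://localhost:8080/"   # hypothetical target
    WORKERS = 10
    REQUESTS = 200

    def timed_get(_):
        start = time.monotonic()
        with urlopen(URL) as resp:
            resp.read()
        return time.monotonic() - start

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        latencies = sorted(pool.map(timed_get, range(REQUESTS)))
    wall = time.monotonic() - start

    print(f"requests/sec: {REQUESTS / wall:.1f}")
    print(f"median: {statistics.median(latencies)*1000:.0f} ms")
    print(f"p95:    {latencies[int(len(latencies)*0.95)]*1000:.0f} ms")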
We look at a pretty wide variety of metrics when analyzing the results of a load test.
On the server, we start with these main 4 categories:
CPU (% utilization, context switches/sec, process queue length)
Memory (% use, page reads/sec, page writes/sec)
Bandwidth (incoming, outgoing, send & receive errors, # connections, connection failures, segment retransmits/sec)
Disk (Disk I/O Time %, avg service time, queue length, reads and writes/sec)
We also like to look at metrics specific to the webserver and application server in use. For example, in IIS we look at IIS connection counts, cache hit rates and turnover frequency, etc. In .NET, we would be looking at ASP.NET Requests/sec, ASP.NET Last Request Execution Time, ASP.NET Current Requests, ASP.NET Queued Requests, ASP.NET Request Wait Time, ASP.NET Errors/sec and many others.
On the client side, we primarily look at total load time for the pages, duration and TTFB (time to first byte) for critical transactions, bandwidth usage, average page size and failure rate. We also find two metrics very useful; we call them Waiting Users and Average Wait Time. Not many tools have these: they tell you, at each sample period, exactly how many simulated users are in the process of retrieving a resource from the server and how long, on average, they have been waiting for the resource to arrive (a rough sketch of tracking these follows the list below). We find these very useful for
determining when the server has reached its capacity
discovering that the server has stopped responding to certain types of requests (typically for certain resources, such as those requiring a database query)
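Here is a rough sketch of how such tracking could work in a homegrown load driver (the names are illustrative; real tools wire this into their request loop):

    import threading
    import time

    in_flight = {}                 # request id -> time the request started
    lock = threading.Lock()

    def begin(request_id):
        with lock:
            in_flight[request_id] = time.monotonic()

    def finish(request_id):
        with lock:
            in_flight.pop(request_id, None)

    def sample():
        # Waiting Users: how many requests are currently outstanding.
        # Average Wait Time: how long, on average, they have waited so far.
        now = time.monotonic()
        with lock:
            waits = [now - t for t in in_flight.values()]
        waiting_users = len(waits)
        avg_wait = sum(waits) / waiting_users if waiting_users else 0.0
        return waiting_users, avg_wait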
Another good sanity check is to run the tests for at least 24 hours. We do that because one app ran nicely for a few hours and then degraded; that uncovered some issues with scheduled tasks as well as db connection pooling.
There are a number of services online that can do this type of testing for you as well. Of course, one of the downsides to this approach is that it's harder to correlate the data from the service (which is what can be observed externally) with your own internal data about disk I/O, DB ops, etc. If you end up going this route, I would suggest finding a vendor that will give you programmatic access to the raw test result data.
