I need to verify the duration spent on the client between two pings to a server.
What is the best way to do this while achieving high accuracy?
Simply keeping the time on the server is not good enough since a lot of time can be spent in network lag on the client.
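For illustration, here is a minimal sketch of one way to do it (in Ruby; the endpoint, field names and the server-side sanity check are my assumptions, not anything prescribed): the client measures the interval with a monotonic clock, so the measurement is unaffected by network lag or wall-clock adjustments, and reports it alongside the second ping.

```ruby
require 'net/http'
require 'json'
require 'securerandom'

# Hypothetical endpoint; substitute your own server URL.
PING_URI = URI('https://example.com/ping')

session_id = SecureRandom.uuid

# A monotonic clock is immune to wall-clock jumps (NTP corrections, DST),
# so the elapsed value reflects only time actually spent on the client.
started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
Net::HTTP.post(PING_URI, { session: session_id, event: 'start' }.to_json,
               'Content-Type' => 'application/json')

# ... client-side work happens here ...

elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
Net::HTTP.post(PING_URI, { session: session_id, event: 'stop',
                           elapsed: elapsed }.to_json,
               'Content-Type' => 'application/json')

# The server can sanity-check the reported `elapsed` against the wall-clock
# gap it saw between the two pings: a value larger than that gap (plus some
# tolerance for network lag) indicates a bogus report.
```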
This is my situation:
I'm using sendAsynchronousRequest. I quickly realized that it has a default timeout of 60 seconds. My app is designed to wait for an opponent to start the game (it's a word game).
Actually, it could take hours before the opponent starts the game, which means the async request could be waiting for hours.
Is that bad? I mean, I can probably change the default timeout, but the question is whether this is a bad design. The thing is that I wanted to avoid polling the server at intervals to know whether the opponent has started the match or not.
If this is a bad design: can somebody suggest an alternative way?
Your best bet is to poll the server if you want to be up and running quickly and don't have a lot of resources (time/money).
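The original app is on iOS, but as a language-agnostic illustration of the polling approach, here is a rough Ruby sketch (the endpoint URL, poll interval and response field are all made up):

```ruby
require 'net/http'
require 'json'

# Hypothetical status endpoint for a match; adjust to your own API.
STATUS_URI = URI('https://example.com/matches/42/status')
POLL_INTERVAL = 30 # seconds; long enough to keep server load modest

loop do
  response = Net::HTTP.get_response(STATUS_URI)
  status   = JSON.parse(response.body)

  if status['opponent_started']
    puts 'Opponent has started the match - proceed with the game.'
    break
  end

  sleep POLL_INTERVAL
end
```

On iOS the equivalent would be a repeating timer firing a short request every so often, rather than one request left open for hours.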
If for some reason you need something closer to real time, there is a significant amount of complexity involved in keeping an open socket to your server for communication, and you are best off using an existing framework like Pusher ($), PubNub ($) or socket.io (free, but you will have to handle the server side). If you want to create your own client/server notification system then you may want to check out SocketRocket from Square, which provides a client-side WebSocket implementation for iOS.
I know the whole point of Bcrypt is to be time-expensive when hashing, to limit the practicality of a brute-force attack. But doesn't that make it inherently unscalable? For instance, one site I read was claiming that the cost factor in Bcrypt should be adjusted so that a hash takes 0.25 seconds on current hardware. If you had a moderately successful site with even a few hundred people logging in at any time, does that 0.25 seconds compound, potentially making every user wait several seconds? Does the server's CPU max out while hashing for those 0.25 seconds, causing constantly high resource usage?
As a secondary question, assuming it is scalable (which I'm sure it is), what is an appropriate cost factor these days?
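For concreteness, here is a rough Ruby sketch using the bcrypt gem to time a single hash at different cost factors on your own hardware (the cost range and test password are arbitrary, and the timings are entirely hardware-dependent):

```ruby
require 'bcrypt'     # gem install bcrypt
require 'benchmark'

# Time one hash at several cost factors; each +1 in cost roughly doubles
# the work, so pick the highest cost that stays within your latency budget
# (e.g. ~0.25 s per login) on production hardware.
(10..14).each do |cost|
  seconds = Benchmark.realtime do
    BCrypt::Password.create('correct horse battery staple', cost: cost)
  end
  puts format('cost %2d: %.3f s per hash', cost, seconds)
end
```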
This is probably more appropriate for one of the other Stack Overflow sites, as it will promote discussion and isn't a specific programming question.
That said, the first issue you raise would most likely be addressed by using a service that could be scaled as required to serve concurrent authentication requests to provide an acceptable response time for the end user. One way to do this would be to have multiple servers behind a load balancer, with authentication requests routed to the load balancer for hand-off to the authentication servers. Individual authentication requests would run on essentially random servers, and provided your system architecture was correct it would be seamless from the point of view of the client.
I've currently got a Ruby on Rails app hosted on Heroku that I'm monitoring with New Relic. My app is somewhat laggy in use, and my New Relic monitor shows me the following:
Given that the majority of the time is spent in Request Queuing, does this mean my app would scale better if I used extra worker dynos? Or is this something that I can fix by optimizing my code? Sorry if this is a silly question, but I'm a complete newbie, and I appreciate all the help. Thanks!
== EDIT ==
Just wanted to make sure I was crystal clear on this before having to shell out additional moolah. So New Relic also gave me the following statistics on the browser side as you can see here:
This graph shows that the majority of the time spent by the user is in waiting for the web application. Can I attribute this to the fact that my app is spending the majority of its time in a request queue? In other words, is the 1.3 second response time that the end user is experiencing currently something that code optimization alone will do little to cut down? (Basically I'm asking whether I have to spend money or not.) Thanks!
Request Queueing basically means 'waiting for a web instance to be available to process a request'.
So the easiest and fastest way to gain some speed in response time would be to increase the number of web instances to allow your app to process more requests faster.
It might be possible to optimize your code to speed up each individual request to the point where your application can process more requests per minute -- which would pull requests off the queue faster and reduce the overall request queueing problem.
Over time, it would still be a good idea to do everything you can to optimize the code anyway. But to begin with, add more web dynos and your request queueing issue will more than likely be reduced or disappear.
edit
with your additional information, in general I believe the story is still the same -- though nice work in getting to a deep understanding prior to spending the money.
When you have request queuing it's because requests are waiting for web instances to become available to service their request. Adding more web instances directly impacts this by making more instances available.
It's possible that you could optimize the app so well that you significantly reduce the time to process each request. If this happened, then it would reduce request queueing as well by making requests wait a shorter period of time to be serviced.
I'd recommend adding more web instances for now to immediately address the queueing problem, then working on optimizing the code as much as you can (assuming it's your biggest priority). And regardless of how fast you get your app to respond, if your user base grows you'll need more web instances to keep up -- which, by the way, is a good problem to have, since it means your users are growing too.
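If the app is on Heroku, adding web dynos is a one-liner with the standard Heroku CLI (the dyno count and app name below are placeholders):

```sh
# Scale up the web process to reduce request queueing (count is illustrative).
heroku ps:scale web=3 --app your-app-name

# Confirm what is running afterwards.
heroku ps --app your-app-name
```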
Best of luck!
I just want to throw this in, even though this particular question seems answered. I found this blog post from New Relic and the guys over at Engine Yard: Blog Post.
The tl;dr here is that Request Queuing in New Relic is not necessarily requests actually lining up in the queue and not being able to get processed. Due to how New Relic calculates this metric, it essentially reads a time stamp set in a header by nginx and subtracts it from Time.now when the New Relic method gets a hold of it. However, New Relic gets run after any of your code's before_filter hooks get called. So, if you have a bunch of computationally intensive or database intensive code being run in these before_filters, it's possible that what you're seeing is actually request latency, not queuing.
You can actually examine the queue to see what's in there. If you're using Passenger, this is really easy -- just type passenger-status on the command line. This will show you a ton of information about each of your Passenger workers, including how many requests are sitting in the queue. If you run it preceded with watch, the command will execute every 2 seconds so you can see how the queue changes over time (so just execute watch passenger-status).
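For reference, that looks like:

```sh
# Show per-worker details, including how many requests are queued.
passenger-status

# Re-run it every 2 seconds to watch the queue change over time.
watch -n 2 passenger-status
```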
For Unicorn servers, it's a little bit more difficult, but there's a ruby script you can run, available here. This script actually examines how many requests are sitting in the unicorn socket, waiting to be picked up by workers. Because it's examining the socket itself, you shouldn't run this command any more frequently than ~3 seconds or so. The example on GitHub uses 10.
If you see a high number of queued requests, then adding horizontal scaling (via more web workers on Heroku) is probably an appropriate measure. If, however, the queue is low, yet New Relic reports high request queuing, what you're actually seeing is request latency, and you should examine your before_filters, and either scope them to only those methods that absolutely need them, or work on optimizing the code those filters are executing.
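As a sketch of that last point, in a Rails 3-era app the scoping might look something like this (the controller, filter names and query are hypothetical):

```ruby
class GamesController < ApplicationController
  # Anything expensive that runs in a before_filter executes before New
  # Relic starts timing the request, so it shows up as apparent "request
  # queuing". Scope heavy filters to only the actions that need them.
  before_filter :load_dashboard_statistics, only: [:dashboard]
  before_filter :require_login               # cheap filters can stay global

  private

  def load_dashboard_statistics
    # Expensive aggregation only the dashboard view actually uses.
    @stats = Game.group(:state).count
  end

  def require_login
    redirect_to login_path unless session[:user_id]
  end
end
```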
I hope this helps anyone coming to this thread in the future!
While testing website loading speed, I found that the website sometimes loads very quickly and sometimes takes a long time to start loading. When I checked in detail, I found that on some requests the wait time was just a few hundred milliseconds, while on other, slower requests the wait time was actually 5 to 30 seconds.
What could be the cause of this kind of deviation, from a few milliseconds to 30 or more seconds, and how can I improve it?
The site is built on ASP.NET MVC 3 and a Microsoft SQL Server database.
What patterns are there? I.e., are the same URLs always slow and other URLs always fast, or does it just appear to be random?
Look at what else is running on the server: is it a dedicated server or a VPS?
Look at the DB performance: is it consistent, and which queries are taking the longest time, the most CPU, the most IO, etc.?
How busy is the site? Do the slowdowns coincide with the app pool being recycled or started up?
I am having problems with some oauth_token's returning as invalid from Twitter. After doing some research I think this is due to a difference in system times between my server and Twitter's servers. I was able to get Twitter's system time by curl'ing 'https://api.twitter.com/1/help/test.json' and checking the 'filetime'. The result: My server is 8 seconds ahead of Twitter's server.
Could this 8-second difference cause Twitter to return empty oauth_tokens? And if this is the problem, how do I synchronize my server's time (a CentOS server) with Twitter's time?
Any and all help is much appreciated. Thanks.
The Twitter API allows a time difference of up to 5 minutes, so being 8 seconds off really shouldn't be the problem. Of course it is important that the time is correct, and it is therefore recommended that you run something like ntpd. However, I'm pretty sure the issue you're having isn't related to the time. And to be honest I don't even have a clue what you mean by empty oauth_tokens. Note that once you have working OAuth access keys they'll remain valid forever unless you revoke them manually.
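To double-check the actual offset yourself, a simple way is to compare the Date header Twitter sends back with your own UTC clock:

```sh
# Twitter's idea of the current time, from the HTTP response headers.
curl -sI https://api.twitter.com/1/help/test.json | grep -i '^date:'

# Your server's idea of the current time, in UTC for easy comparison.
date -u
```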
Synchronize your server with an official time server using ntpdate. Do not try to adjust your clock to match another random server. If the problem still exists after your server has the correct time, contact the people at Twitter about it.
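On CentOS that typically looks like the following (package and service names are for CentOS 5/6; newer releases use chrony and systemctl instead):

```sh
# One-off correction against a public NTP pool.
sudo ntpdate pool.ntp.org

# Keep the clock in sync from now on.
sudo yum install ntp
sudo chkconfig ntpd on
sudo service ntpd start
```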