Is there a way to determine if a user is using broadband or dial-up?

We have a requirement from a customer to provide a "lite" version for dial-up and all the bells-and-whistles for a broadband user.
The solution will use Flex / Flash / Java EJB and some JSP.
Is there a way for the web server to distinguish between the two?

You don't care about the user's connection type, you care about the download speed.
Have a tiny Flash app that downloads the rest of the Flash content and times how long it takes. Or an HTML page that times how long an Ajax download takes.
If the download of the rich-featured app takes too long, have the initially downloaded stub page/flash redirect to the slow download page (or download the bare-bones flash app, or whatever).
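A minimal browser-side sketch of that timing stub (the probe file, its size, and the /lite and /full URLs are illustrative placeholders, not anything from the original setup):

```typescript
// Rough bandwidth probe: fetch a file of known size, time it, then pick a version.
// Assumes a small test file (hypothetical /probe.bin, ~100 KB) served with caching off.
const PROBE_URL = "/probe.bin";   // hypothetical test file
const PROBE_BYTES = 100 * 1024;   // its known size in bytes
const MIN_KBPS = 100;             // below this, treat the user as "dial-up"

async function estimateKbps(): Promise<number> {
  const start = performance.now();
  const res = await fetch(`${PROBE_URL}?nocache=${Date.now()}`, { cache: "no-store" });
  await res.arrayBuffer();                       // force the full download
  const seconds = (performance.now() - start) / 1000;
  return PROBE_BYTES / 1024 / seconds;           // KB per second
}

estimateKbps().then((kbps) => {
  // Redirect the stub page to whichever version fits the measured speed.
  window.location.href = kbps < MIN_KBPS ? "/lite" : "/full";
});
```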

The simplest and most reliable mechanism is probably to get the user to select their connection type from a drop-down. Simple, I know, but it may save you a world of grief!

There's no way to distinguish between broadband and dial-up as connection types, but you can make an educated guess from the connection speed.
Gmail does this and provides a link to a basic HTML version of its service if it detects a slow connection.
My guess is that there is some client-side JavaScript polling done on AJAX requests. If the turnaround time surpasses a threshold, the option to switch to "lite" appears.
The best part about this option is that you allow the user to choose if they want to use the lite version instead of forcing them.
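A hedged sketch of what that polling might look like client-side (the /ping endpoint, the threshold, and the lite-link element id are all made up for illustration):

```typescript
// Poll a lightweight endpoint and reveal a "switch to lite" link if round trips
// are consistently slow. Endpoint, threshold, and element id are illustrative.
const PING_URL = "/ping";        // hypothetical tiny endpoint
const SLOW_MS = 2000;            // round trips slower than this count as "slow"
const STRIKES_NEEDED = 3;        // how many slow polls before suggesting lite mode

let slowStrikes = 0;

async function pollOnce(): Promise<void> {
  const start = performance.now();
  let failed = false;
  try {
    await fetch(PING_URL, { cache: "no-store" });
  } catch {
    failed = true;               // a failed request counts as a slow one
  }
  const slow = failed || performance.now() - start > SLOW_MS;
  slowStrikes = slow ? slowStrikes + 1 : 0;   // reset on a fast round trip
  if (slowStrikes >= STRIKES_NEEDED) {
    // Show the option; let the user decide rather than forcing the switch.
    document.getElementById("lite-link")?.removeAttribute("hidden");
  }
}

setInterval(pollOnce, 15_000);   // poll every 15 seconds
```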

Here's a short code snippet from someone who attempted something similar. It's in C#, but it's pretty short and it's just the concept that's of interest.
Determine the Connection Speed of your client
Of course, there could be a temporary speed problem that has nothing to do with the user's connection at the time you test, etc.
I had a similar problem a couple of years ago and just let the user choose between the high- and low-bandwidth sites. The very first thing I loaded on the page was this option, so they could move on quickly.

I think the typical approach to this is just to ask the user. If you don't feel confident that your users will provide an accurate answer, I suspect you'll have to write an application that runs a speed test on the client. Typically these record how long it takes the client to receive a given number of bytes, and use that to determine bandwidth.
Actionscript 3 has a library to help you with this task, but I believe it requires you to deploy your flex/flash app on Flash Media Server. See ActionScript 3.0 native bandwidth detection for details.

#Apphacker (I'd comment instead of answering if I had enough reputation...):
Can't guarantee the reverse, either--I have Earthlink dial-up, soon to upgrade to Earthlink DSL (it's what's available here...).

You could check their IP and see if it resolves to / is assigned to a dial-up provider, such as AOL, Earthlink, or NetZero. That wouldn't guarantee that those that don't resolve to such a provider are broadband users.
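A rough server-side sketch of that idea using Node's dns module (the provider list is illustrative, and as noted above, a miss tells you nothing about broadband):

```typescript
// Reverse-resolve the client IP and look for hostnames of well-known
// dial-up providers. The list of provider domains is illustrative only.
import { promises as dns } from "dns";

const DIALUP_HINTS = ["aol.com", "earthlink.net", "netzero.net"]; // illustrative

async function looksLikeDialup(ip: string): Promise<boolean> {
  try {
    const hostnames = await dns.reverse(ip);
    return hostnames.some((h) =>
      DIALUP_HINTS.some((hint) => h.toLowerCase().endsWith(hint))
    );
  } catch {
    return false; // no PTR record: can't tell either way
  }
}
```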

You could:
ask the user
perform a speed test and ask the user if the result you found is correct
perform a speed test and hope that the result found is correct
I think a speed test should be enough.
If you only have a small, well-known user group it is sometimes possible to determine the connection speed from the IP. (Some providers assign different subnets to dial-up and broadband connections.)
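If you do know which subnets a provider uses, a plain IPv4 CIDR membership check is enough; a sketch, with made-up example ranges:

```typescript
// IPv4 CIDR check, assuming you maintain a list of subnets a known provider
// uses for its dial-up pools (the example ranges here are made up).
const DIALUP_SUBNETS = ["203.0.113.0/24", "198.51.100.0/25"]; // illustrative

function ipToInt(ip: string): number {
  return ip.split(".").reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;
}

function inSubnet(ip: string, cidr: string): boolean {
  const [net, bits] = cidr.split("/");
  const mask = bits === "0" ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(net) & mask);
}

const isDialupRange = (ip: string) => DIALUP_SUBNETS.some((s) => inSubnet(ip, s));
```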

Related

WoW Lua - Get Data from URL (Vanilla)

In a World of Warcraft Vanilla Lua Addon Development, how can I issue an HTTP call to receive data back? If not, how can I get data from a web source into the game while playing?
I have a feeling the answer is tragically short, but would like the question asked and answered on Stack Overflow. My research came up lacking, and I recall doing some Lua in ~2007 and was disappointed.
Well, tragically short is an understatement. You simply can't. There were never any APIs that interacted directly with connections, let alone created any, let alone to arbitrary URLs.
Most of the APIs just broadcast game events that arrive over the game's connection, and the closest thing you can get to a "data stream" is addon chat channels. But since bots violate the ToS, you wouldn't be able to make an account that responds to your addon's inquiries.
The closest thing you can get is building an "asynchronous mesh network", but that's only good if your addon has a considerable user base, and it's not guaranteed you'll get information in a timely manner.
The general idea is that your addon will have a public key (as in encryption), and you (and only you) will hold the private key. Your addon emits a message to any connected peers, which store it in cross-realm SavedVariables, and you hope that someone has characters on more than one realm. Upon login, the client addon broadcasts its latest packet (still encrypted) to that realm's addon channel, and hopefully within a week or so the updated information reaches all clients.
A disadvantage is that you'll only get "push" notifications, the client won't be able to send any data back to you*.
That, or you could release a patch for the addon on Curse :P
BUT WAIT!
You mention vanilla, so I can presume you're developing this for a private server. Private servers often have one or a very small number of realms, making the above mesh network much simpler. Instead of a mesh, just use encryption and manually log in and broadcast on each realm every time you want to update the retrieved information.
Plus, you might even be able to contact the server devs to see if they'll give you an API that sends messages to the appropriate in-game addon channel (you'd have to ask).
Of course, if you intend to make your addon server-agnostic, instead of tailored to a specific server, you're back to square one.
 * Unless you are really dedicated to making that happen, because it's a ton of work.
There is no web API in vanilla WoW. There is a web browser widget in the game currently though, albeit very limited in usage.
If you have access to the server software's code, you may be able to hook in and listen on specific game channels for user messages in a defined format, and have the server respond in a way the addon can parse.

Is there any way to log all data emitted by an iPhone over a day?

This is for a visualisation project on what data gets recorded about us from our phones.
The idea would be to log as much detail as is reasonable to an internal location (probably) on the phone for later analysis, e.g. HTTP requests. It doesn’t need to be secret at all – the subject will be aware they are participating – and it doesn’t have to be 100% automatic; if the phone owner needs to perform some action regularly that’s okay too, although they need to be able to use their phone approximately normally throughout the day.
I can’t find any Apple APIs that look suitable, but that’s hardly surprising. I can find some approaches that would potentially work on OSX (tcpdump, netstat), so perhaps a jailbroken iOS device would support one of those?
Alternatively, running a custom proxy server would open up a bunch more options, but is there any way to get a mobile device to reliably route through a proxy server?
It appears this question provides a viable proxy-server-based approach:
https://apple.stackexchange.com/questions/81102/proxy-settings-for-iphone-3g-connection
Basically, it seems it is possible to route all requests through a proxy server, even over cellular.

opening and closing streaming clients for specific durations

I'd like to infrequently open a Twitter streaming connection with TweetStream and listen for new statuses for about an hour.
How should I go about opening the connection, keeping it open for an hour, and then closing it gracefully?
Normally for background processes I would use Resque or Sidekiq, but from my understanding those are for completing tasks as quickly as possible, not chilling and keeping a connection open.
I thought about using a global variable like $twitter_client but that wouldn't horizontally scale.
I also thought about building a second application that runs on one box to handle this functionality, but that seems excessive if it can be integrated into the main app somehow.
To clarify, I have no trouble starting a process, capturing tweets, and using them appropriately. I'm just not sure what I should be starting. A new app? A daemon of some sort?
I've never encountered a problem like this, and am completely lost. Any direction would be much appreciated!
Although not a direct fix, this is what I would look at:
Time
You're working with time, so I'd look at what time-centric processes could be used to open and hold the connection for an hour.
Specifically, I'd look at running some sort of job on the server, which you could fire at specific times (programmatically if required), to open & close the connection. I only have experience with Resque, but as you say, it's probably not up to the job. If I find any better solutions, I'll certainly update the answer.
Storage
Once you've connected to TweetStream, you'll want to look at how you can capture the tweets for that time period. It seems a waste to create a data table just for the job, so I'd be inclined to use something like Redis to store the tweets that you need
This can then be used to output the tweets you need, allowing you to simulate storing / capturing them, but then delete them after the hour-window has passed
Delivery
I don't know what context you're using this feature in, so I'll just give you as generic a process idea as possible.
To display the tweets, I'd personally create some sort of record in the DB to show the time you're pinging TweetStream that day (if it changes; if it's constant, just set a constant in an initializer), and then just include some logic to try and get the tweets from Redis. If you're able to collect them, show them as you wish, else don't print anything
Hope that gives you a broader spectrum of ideas?

Is using a Web API as dataprovider for a website efficient?

I was thinking about setting up a project with Web API. Basically build the API first and program the web site using this API.
Although it sounds promising, I was wondering:
If I separate the logic in a nice way, I might end up retrieving data on a web page through multiple API calls, which in turn are multiple connections with the server, with all the overhead etc.
For example, if I use, let's say, 8 different API calls on one page, I can't imagine it won't have an impact on the web page's performance.
So, have I misunderstood something? Or is this kind of overhead negligible - or does the need for multiple calls indicate that the design is wrong?
Thanks in advance.
Well, we did it. A Web API server provides REST access to all the data. Independent UI clients consume it as the only access point to the underlying persistence.
The first request takes some time. It is significantly longer: it must initialize all the UI client stuff and get the minimum needed data from the server (menu, user, access rights, metadata, list-view data).
The point, the real advantage, is hidden in the second, the third... request. A lot of stuff is already there on the UI client. And even if something is requested again, caching (server, client, or both) can be introduced.
So, this means more requests (at least during UI client start-up)... but it does not imply a slower application.
The maintenance benefit is hidden (maybe it is not hidden, it should be obvious) in the separation of concerns. On the server, we are no longer solving the issue of where to place the user-data handling: in the base controller or a child controller... should it be in the master page, the layout controller...
Solved. We take care of single, specific pieces of functionality, published via REST. One method, one business operation. And that's the dream if we'd like to keep the application alive and be its repairman and extender.
One aspect is that you can display the page to the end user very, very fast. Once the page is loaded, use jQuery async calls and any JavaScript template tool (like AngularJS or Mustache.js) to call the Web API simultaneously to build the client page views.
I have used this approach in multiple projects and the user experience is tremendous.
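A rough sketch of that pattern using plain fetch and Promise.all instead of jQuery (endpoint paths and element ids are invented for illustration):

```typescript
// Load the page shell first, then fill each view from the API in parallel.
// Endpoint paths and element ids are illustrative, not from the original post.
interface MenuItem { id: number; title: string; }

async function getJson<T>(url: string): Promise<T> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`${url} -> ${res.status}`);
  return res.json() as Promise<T>;
}

async function hydratePage(): Promise<void> {
  // One round trip per widget, but all of them in flight at once.
  const [menu, user, items] = await Promise.all([
    getJson<MenuItem[]>("/api/menu"),
    getJson<{ name: string }>("/api/user"),
    getJson<MenuItem[]>("/api/items"),
  ]);

  document.getElementById("user")!.textContent = user.name;
  document.getElementById("menu")!.innerHTML =
    menu.map((m) => `<li>${m.title}</li>`).join("");
  document.getElementById("items")!.innerHTML =
    items.map((i) => `<li>${i.title}</li>`).join("");
}

hydratePage();
```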
Most modern browsers support 6-8 parallel connections to the same site. So you do have to be careful about that. Unless you are connecting to that many separate systems, I would try to reduce the number of connections. Or ensure the calls are called asynchronously by different events to reduce the chance of parallel connections.
Making a series of HTTP calls to obtain data for your page will have an overhead. Only testing will tell you how that might impact in your scenario.
There is little point using Web API just because you can. You should have a legitimate reason for building a RESTful API. Even then, if it is primarily for your own consumption, design it to deliver a ViewModel for each page in one call.

How can I determine the quality of a connection in iOS?

I'm familiar with using Reachability to determine the type of internet connection (if any) being used on an iOS device. Unfortunately that's not a decent indicator of connection quality. Wifi with low signal strength is pretty sketchy and 3G with anything less than 3 bars is a disaster (not to mention networks that only allow EDGE connections).
How can I determine the quality of my connection so I can help my users decide if they should be downloading larger files on their current connection?
A pragmatic approach would be to download one moderately large-sized file hosted on a reliable, worldwide CDN, at the start of your application. You know the filesize beforehand, you just have to measure the time it takes, make a simple computation and then you've got your estimate of the quality of the connection.
For example, jQuery UI source code, unminified, gzipped weighs roughly 90kB. Downloading it from http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.14/jquery-ui.js takes 327ms here on my Mac. So one can assume I have at least a decent connection that can handle approximately 300kB/s (and in fact, it can handle much more).
The trick is to find the good balance between the original file size and the latency of the network, as the full download speed is never reached on a small file like this. On the other hand, downloading 1MB right after launching your application will surely penalize most of your users, even if it will allow you to measure more precisely the speed of the connection.
Cyrille's answer is a good pragmatic answer, but in the end it is not really a great solution in the mobile context, for these reasons:
It involves doing a test "at the start of your application", by which I assume he means when your app launches. But your app may execute for a long while, may go into the background and then back into the foreground, and all the while the user is changing network contexts with changes in underlying network performance - so that initial test result may bear no relationship to the "current" performance of the network connection.
For the reason he rightly points out, that it is "penalizing" your user by making them download a test file over what may already be constrained network conditions.
You also suggest in your original post that you want your user to decide if they should download based on information you present to them. But I would suggest that this is not a good way to approach interacting with mobile users - you should not be asking them to make complicated decisions. If absolutely necessary, only ask if they want to download the file if you think it may present a problem, but keep it that simple - "Do you want to download XYZ file (100 MB)?" I personally would avoid even that.
Instead of downloading a test file, the better solution is to monitor and adapt. Measure the performance of the connection as you go along, keep track of the "freshness" of that information you have about how well the connection is performing, and only present your user with a decision to make if based on the on-going performance of the connection it seems necessary.
EDIT: For example, if you determine a patience threshold that in your opinion represents tolerable download performance, keep track of each download that the user does in order to determine if that threshold is being reached. That way, instead of clogging up the users connection with test downloads, you're using the real world activity as the determining factor for "quality of the connection", which is ultimately about the end-user experience of the quality of the connection. If you decide to provide the user with the ability to cancel downloads, then you have an excellent "input" about the user's actual patience threshold, and can adapt your functionality to that situation, by subsequently giving them the choice before they start the download. If you've flipped into this type of "confirmation" mode, but then find that files are starting to download faster, you could dynamically exit the confirmation mode.
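A platform-neutral sketch of that monitor-and-adapt bookkeeping (TypeScript here purely for consistency with the other sketches in this thread; on iOS the same logic would live in your download code, and the window size and threshold are arbitrary):

```typescript
// "Monitor and adapt": record the throughput of real downloads as they happen,
// and only ask for confirmation before large downloads when recent throughput
// has been poor. Window size and threshold are arbitrary placeholders.
const WINDOW = 5;              // how many recent downloads to remember
const SLOW_KBPS = 50;          // below this average, switch to "confirm first" mode

const samples: number[] = [];  // recent throughput samples, in KB/s

function recordDownload(bytes: number, millis: number): void {
  samples.push((bytes / 1024) / (millis / 1000));
  if (samples.length > WINDOW) samples.shift();   // keep only the freshest samples
}

function shouldConfirmLargeDownloads(): boolean {
  if (samples.length === 0) return false;         // no data yet: don't nag the user
  const avg = samples.reduce((a, b) => a + b, 0) / samples.length;
  return avg < SLOW_KBPS;
}
```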
Rob's answer is very good, but for a more specific implementation, start with Apple's SimplePing example source code: https://developer.apple.com/library/archive/samplecode/SimplePing/Introduction/Intro.html#//apple_ref/doc/uid/DTS10000716
Target the domain for the server that you want to monitor connection quality to. Use the ping library to "ping" it on a regular basis (say 1 or 10 seconds depending upon your UI needs). Measure how long it takes to get a response to your ping (or if it never returns) to develop an estimate of the connection quality to communicate to your user.
