Does the BlackBerry API provide any methods to determine which of GPS or Geolocation is better in the current situation (according to signal level, network bandwidth, and other environment properties)?
There are many, many different algorithms you could use to determine which is the optimal location mode to use.
A well-tuned algorithm would have to account for things like:
how fast does your user need a location fix?
how accurate does the fix need to be? is it just being used to find nearby movie theatres, or is the fix used for navigation (which needs to be really accurate)?
which mobile carrier are you on? GPS results may be independent of the carrier, but other geolocation technologies will depend on the carrier, and their infrastructure (assuming you're using the cellular network, and not Wi-Fi)
is there any reason to need to limit network transmissions (e.g. for a metered data plan, where you are frequently updating the location)?
how important is battery usage?
which BlackBerry OS versions are you targeting?
I'm sure I'm missing some other factors, but hopefully you can see that it's not a simple problem that can be solved without knowing something about your app and network deployment.
Also, this kind of algorithm for BlackBerry (Java) apps has traditionally taken a lot of work to optimize. As such, many developers (or clients) would consider this a closely-guarded business secret. So, it might be hard to find someone to publish their algorithm (but it doesn't hurt to ask, right?).
That said, you might at least take a look at the BlackBerry Simple Location API, which is an open source implementation of a basic algorithm that selects between GPS and Geolocation modes (if you allow it to use both). From the Javadocs (for the Optimal mode):
Operates in both Geolocation and GPS mode based on availability. First fix is computed in Geolocation mode, subsequent fixes in Standalone mode. However, if Standalone mode fails, falls back to Geolocation mode temporarily until a retry is attempted in Standalone mode after a certain waiting period has passed. See setRetryFactor(int).
For single fix calls to getLocation(int), Geolocation mode is used first, with a fallback to Standalone mode if Geolocation mode fails.
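Usage, as far as I can tell from that open source project's samples, looks roughly like the sketch below. I'm writing the class and constant names (SimpleLocationProvider, MODE_OPTIMAL) from memory, so verify them against the Javadocs of the version you download:

```java
import javax.microedition.location.Location;

public class LocationExample {
    public void getSingleFix() throws Exception {
        // MODE_OPTIMAL lets the provider choose between Geolocation and GPS,
        // per the Javadoc quoted above.
        SimpleLocationProvider provider =
                new SimpleLocationProvider(SimpleLocationProvider.MODE_OPTIMAL);

        // Single-fix call: Geolocation first, Standalone GPS as the fallback.
        // The timeout unit here is an assumption; check the API's Javadocs.
        Location fix = provider.getLocation(120);
        // ... use fix.getQualifiedCoordinates() from here ...
    }
}
```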
I see you're in Belarus, but I don't know where your clients or users are. If they're in the US, you may also need to consider something like Nav Builder Inside for geolocation if your app will support the Verizon network.
Anyway, I know this probably isn't the answer you wanted, but maybe it's a start?
I want to create a network intrusion detection system for an iOS application. The main function is to allow the user to select a home network (maybe prompt them to simply enter the IP address only) and to monitor the packets; if there is anything suspicious, we need to alert the user via push notification or email. I wanted to use the features and functions of Snort, an open source network intrusion detection system.
Any suggestions or sample code? Where do I start?
VMs do not have native hardware access, which is necessary for monitor mode. Maybe IOMMU PCI passthrough or bridged devices might work. It is probably possible to compile the iOS kernel with a module that works for the wireless NIC. I don't think it's a proprietary chip specific to Apple, because a multi-technology RF chip wouldn't be cost-effective at all. I'm just not sure whether the filesystem blocks access in the OS framework. I have tried to compile Linux/iOS ARM packages natively in the shell with the aircrack-ng source, but have not had any luck. Maybe someone would have better luck cross-compiling a package and sideloading it somehow.
I don't think this is possible for multiple reasons:
You wouldn't be able to compile Snort for iOS.
In order to run Snort you have to have the interface (NIC) in promiscuous mode, which I really don't think you can do on an iOS device (iPhone, iPad, etc.). I have never really looked into it, but Apple probably locks this down and restricts it for security purposes, so if you could do it at all you'd likely have to jailbreak the device first. It's not even possible to put the Wi-Fi card in an Apple laptop into monitor mode, which is a similar restriction.
There are a lot of dependencies for Snort, most importantly the DAQ (its packet acquisition library). You would probably only be able to monitor the Wi-Fi interface (even this might not be possible), not the interface used for the cellular network, as that would probably require a different DAQ than standard Ethernet NICs.
This very likely is not possible on iOS; if it is, it would be VERY difficult to pull off, and even if you managed it, the use case isn't really good. Even if you could get a DAQ for the cellular card, I don't know whether promiscuous mode even exists there, and all of the traffic on the cellular network is encrypted, so inspecting it with Snort would be pointless. Even for the Wi-Fi traffic it's probably not worth the effort, honestly: almost all traffic nowadays is encrypted, and you'd have to decrypt it first, which certainly isn't possible.
In view of Johnjg12's comments, I am wondering about your goal. If you want to make a NIDS, you can make it OS-independent anyway. If you only want a HIDS that monitors packets destined to the device itself, you don't need the interface to be in promiscuous mode (a comment to Johnjg12's response). So, now it is something to do with Snort on iOS: I am wondering if we could do it in a VM and then turn on its promiscuous mode? Having said that, I came across this link: https://www.securemac.com/macosxsnort.php
I'm considering using two NSStreams for the up/down channels. However, it looks somewhat complex. If you know a simpler way (or have recommendations), please let me know!
-- edit --
This is a kind of fast prototyping of an internal/in-house remote controller. Low latency is best, but not required.
Binary formatted data, but not so heavy. Most messages are short control messages; occasionally there are big chunks.
On Cocoa / Cocoa Touch. Platforms are limited to these.
The two peers are on a LAN, or at least on a Wi-Fi network, so I can assume the connection is basically fast.
Compatibility with unknown hosts, high efficiency/performance/reliability, and such things don't need to be considered. Simplicity is the most important thing right now.
Without knowing acceptable latency, amount of data, type of data, and/or network topology (same LAN? routing over WAN?), it is impossible to say.
For most purposes, HTTP provides an awfully big and versatile hammer. And HTTP is supported by just about everything.
You want simple? Nothing is as simple as HTTP, simply because it is a ubiquitous high-level protocol that everyone and their brother has implemented, anywhere from high-level APIs (like NSHTTP*/NSURL*) down to less-than-$1 embedded chips.
If the devices you want to control have an option for an HTTP server, go for that. It'll be dead simple, and debugging is much, much easier when working with a high-level protocol like HTTP.
At this point, it is hard to buy a device with a LAN/wLAN port that doesn't also have an HTTP server in it (off the top of my head, my home theater receiver, solar controller, bbq, printer, security camera, PS3, VOIP box, and U-verse router all have HTTP servers).
However, the requirements on your non-Cocoa Touch side may dictate otherwise.
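To show how little code a bare-bones HTTP control channel needs, here is a minimal sketch using the JDK's built-in com.sun.net.httpserver package (the port and the /command path are invented for the example; on the Cocoa side you'd pair this with the NSHTTP*/NSURL* classes mentioned above):

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class ControlServer {
    public static void main(String[] args) throws Exception {
        // Listen on port 8080; "/command" is an arbitrary example path.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/command", (HttpExchange exchange) -> {
            // Read the short control message sent by the client.
            byte[] body = exchange.getRequestBody().readAllBytes();
            // ... dispatch the command here ...
            byte[] reply = "OK".getBytes();
            exchange.sendResponseHeaders(200, reply.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(reply);
            }
        });
        server.start();
    }
}
```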
Hi members! I am Boniface M, a beginner in Android (university student).
My question is: I am planning to develop an Android app/middleware that will act as a grid service, i.e. an app for grid computing. The application needs to be installed on 1...n devices. In the connection, one device must act as a server for all the others. Communication between the devices is via Wi-Fi, under the permission of the server device, which is determined by a certain algorithm [no problem here].
The problem is: should I use a database that keeps track of all the services a device is running which are accessible to other devices, or is there any way I can directly keep all this information and then retrieve it on request from another app installed on another device?
And also, how can I share files via Wi-Fi, like Bluetooth does?
Thanks....
You're asking many questions in one, and I'm actually unsure what you mean overall. Here are a few links that are sure to be of some use...
http://developer.android.com/reference/android/os/Build.html - This class is good for finding out information about the device you're running on.
http://developer.android.com/reference/android/location/Criteria.html - Criteria might be useful; it lets you know what location-based services you have running.
Other than that, if you're looking to see if particular things are running check out this question: How to check if a service is running on Android?
If you're looking to keep a central hub of what devices have what available, etc., you're going to need a middle man for what you want to do, I suspect. If it were me, I'd make HTTP requests to a server, to PHP scripts I had written, which would then read/write from a MySQL database to get information about the other devices.
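A minimal sketch of that approach from the device side, assuming a hypothetical register.php script on your own server (the URL and the query parameters are made up for illustration):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class DeviceRegistry {
    // Reports this device's services to the central server. "example.com",
    // "register.php", and the parameter names are placeholders for your own
    // PHP script, which would write the data into MySQL.
    public static String registerDevice(String deviceId, String services) throws Exception {
        URL url = new URL("http://example.com/register.php?id="
                + URLEncoder.encode(deviceId, "UTF-8")
                + "&services=" + URLEncoder.encode(services, "UTF-8"));
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            return in.readLine(); // e.g. the server's acknowledgement
        } finally {
            conn.disconnect();
        }
    }
}
```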
If you want to share files via Wi-Fi, you're going to need an FTP server on the phone. There's an app, SwiFTP, which does this to some degree (phone -> PC), but the concept should be the same. Take a look at it; it's a starting point! http://www.tested.com/news/how-to-transfer-files-wirelessly-to-your-android-phone/53/
Again, I'm unsure EXACTLY what you're looking to do, but hopefully all of that is of some help. If it's not, leave me a comment and I'll try to assist you further.
Hope it helps!
Right, this is confusing me quite a bit. I'm not sure if any of you have noticed or used the "my location" feature on Google Maps from your desktop (or any non-GPS/non-mobile device). If you have a browser with Google Gears (the easiest to use is Google Chrome), you will see a blue circle above the zoom control in Google Maps. When clicked (without being logged into my Google account), using standard Wi-Fi to my own personal router and a normal internet connection to my ISP, it somehow manages to pinpoint my exact location with 100% accuracy (at this moment in time).
How does it do it? They briefly mention it here, but it doesn't quite explain it; it just says that my browser knows where I am...
...I am baffled. How?
I am intrigued, because I would love to integrate it into future programming projects. I'd just like some background understanding; it doesn't seem too well documented at the moment.
I am currently in Tokyo, and I used to be in Switzerland. Yet until some days ago my location was not pinpointed exactly, only to the broad Tokyo area. Today I tried, and I appear to be in Switzerland. How?
Well, the secret is that I am now connected through wireless, and my wireless router had been identified (thanks to its association with the other Wi-Fis around me at that time) to a very accurate area in Switzerland. Now my router has moved to Tokyo, but the queried system still thinks it is in Switzerland, because either it has no information about the additional Wi-Fis surrounding me right now, or it cannot sort out the conflicting info (namely, the specific info about my router versus my IP geolocation, which pinpoints me in the Far East).
So, to answer your question: Google, or someone on their behalf, did "wardriving" around, mapping the Wi-Fi presence. Every time a query is performed against the system (probably in compliance with the W3C draft Geolocation API), your computer sends the Wi-Fi identifiers it sees, and the system does two things:
queries its database for the geolocation of the Wi-Fi networks you reported, and returns the "wardrived" position if found, possibly refined by triangulation if signal intensities are present. The more Wi-Fi networks around, the higher the accuracy of the positioning.
adds any networks you see that are not currently in the database to the database, so they can be reused later.
As you see, the system builds itself up. The only thing you need is good seeding. After that, it extends in "50 meter chunks" (the range of a newly found Wi-Fi access point).
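A hedged sketch of what such a query might look like on the wire: the client POSTs the BSSIDs and signal strengths it can see, and the service answers with coordinates. The endpoint URL and JSON field names below are invented for illustration; the real W3C Geolocation API hides all of this behind navigator.geolocation in the browser.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class WifiLocate {
    public static void main(String[] args) throws Exception {
        // BSSIDs and signal levels as scanned by the client (values invented).
        String payload = "{ \"wifiAccessPoints\": ["
                + "{ \"macAddress\": \"01:23:45:67:89:ab\", \"signalStrength\": -65 },"
                + "{ \"macAddress\": \"01:23:45:67:89:ac\", \"signalStrength\": -71 }"
                + "] }";
        // Hypothetical endpoint; real services differ in URL and schema.
        URL url = new URL("https://geolocation.example.com/v1/locate");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(payload.getBytes("UTF-8"));
        }
        // The response would carry something like { "lat": ..., "lng": ..., "accuracy": ... }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
```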
Of course, if you really want to make the system go bananas, you can start exchanging Wi-Fi routers around the globe with fellow revolutionaries of the no-global-positioning movement.
It's a lot simpler than you think. You've signed into both your mobile and Chrome on your desktop using the same Google account. Google simply expects you will have your mobile with you most of the time. They take the location data from your phone and assume the location of your current desktop session is the same.
I proved this by RDPing into my Windows machine at home from work and checking Google Maps remotely. It showed my location as the same as Chrome on Linux at work.
If you don't have a mobile that is signed into Google, then all they can do is look up GeoIP data for the IP address assigned by your ISP. That will typically be wildly inaccurate.
They use a combination of IP geolocation and a comparison of the results of a scan for nearby wireless networks against a database on their side (which is built by collecting GPS coordinates alongside Wi-Fi scan data whenever Android phone users use their GPS).
I've finally worked it out. The biggest issue was how they managed to work out what wireless networks were around me, and how they know where those networks are.
It "seems" to be something similar to this:
skyhookwireless.com [or a similar company] has mapped the location of many wireless access points; I assume by means similar to how Google Street View went around and picked up all the photos.
Using Google Gears, my browser can report which wireless networks I see around me.
These wireless points are compared against their geolocation database, and my position is triangulated.
Reference: Slashdot
According to Google Maps' own help:
Rejecting the WiFi networks idea!
Sorry folks... I don't see it. Using the Wi-Fi networks around you seems a highly inaccurate and ineffective method of collecting data. Wi-Fi networks these days simply don't stay in one place for long.
Think about it: Wi-Fi networks change every day. Not to mention MiFi and ad-hoc networks, which are "designed" to be mobile and travel with their users. Equipment breaks, network settings change, people move... Relying on the "Wi-Fi networks" in your area seems highly inaccurate and in the end may not even offer a significant improvement in granularity over IP lookup.
The idea that iPhone users are "scanning and sending" Wi-Fi survey data back to Google, and the wardriving, perhaps in conjunction with the Google Maps "Street View" mapping, might seem like a very plausible method of collecting this data; in practice, however, it does not work as a business model.
Oh, and by the way, I forgot to mention in my prior post: the time I was pinpointed "precisely" on the map, I was connecting to the router from my desktop over an Ethernet connection. I don't have a Wi-Fi card in my desktop.
So if that "nearby Wi-Fi networks" theory were true, then I shouldn't have been able to pinpoint my location with such precision.
I'll call my ISP, SKyrim, and ask them whether they share their network topology to enable geolocation on their networks.
I know you can look up IP address to get approximate location, but it's not always accurate. Perhaps they're using that?
update:
Typically, your browser uses information about the Wi-Fi access points around you to estimate your location. If no Wi-Fi access points are in range, or your computer doesn't have Wi-Fi, it may resort to using your computer's IP address to get an approximate location.
It is possible to get your approximate location based on your IP address (wireless or fixed).
See, for example, hostip.info or MaxMind, which basically provide a mapping from IP address to geographical coordinates. They probably use many kinds of heuristics and data sources. This kind of system probably has enough accuracy to put you in the right major city, in most cases.
Google probably uses a somewhat similar approach in addition to the Wi-Fi tricks.
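For instance, here is a sketch of a lookup against MaxMind's GeoIP2 database with their Java client, assuming you have downloaded the free GeoLite2-City database file and added the com.maxmind.geoip2 dependency (the IP address is just an example):

```java
import com.maxmind.geoip2.DatabaseReader;
import com.maxmind.geoip2.model.CityResponse;
import java.io.File;
import java.net.InetAddress;

public class GeoIpLookup {
    public static void main(String[] args) throws Exception {
        // Path to the downloaded GeoLite2 database (adjust to your setup).
        DatabaseReader reader = new DatabaseReader.Builder(
                new File("GeoLite2-City.mmdb")).build();
        CityResponse response = reader.city(InetAddress.getByName("8.8.8.8"));
        // City-level accuracy at best; often only the right metro area.
        System.out.println(response.getCity().getName());
        System.out.println(response.getLocation().getLatitude() + ", "
                + response.getLocation().getLongitude());
        reader.close();
    }
}
```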
So Google keeps records of Wi-Fi router locations by using the GPS of any cellphone that connects to that router while using Google Maps or location services on the phone. Google then knows that every device connected to that Wi-Fi router is at the same location.
When GPS is off, or no cellphone is connected to the router, Google uses IP geolocation.
What, at a minimum, should an application health-monitoring system do for you (the developer) and/or your boss (the IT Manager) and/or the operations (on-call) staff?
What else should it do above the minimum requirements?
Is monitoring the 'infrastructure' applications (MS Exchange, Apache, etc.) sufficient, or do individual user applications, web sites, and databases also need to be monitored?
If the latter, what do you need to know about them?
ADDENDUM: thanks for the input. I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.
Whether the application is running.
Unusual cpu/memory/network usage.
Report any unhandled exceptions.
Status of various modules (if applicable).
Status of external components (databases, webservices, fileservers, etc.)
Number of pending background tasks (if applicable).
Maybe track usage of the application and report statistics on the most/least used functionalities, so you know where optimizations are most beneficial. (A small code sketch covering a couple of these checks follows below.)
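A minimal sketch of a health check covering two items from the list, the external-database status and the pending-task count. The JDBC URL, the credentials, and the pendingTasks counter are placeholders for whatever your application actually uses:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

public class HealthCheck {
    // Placeholder for the application's real background-task queue.
    static final AtomicInteger pendingTasks = new AtomicInteger();

    // Collects a status line per item; the monitor polls this periodically.
    public static Map<String, String> run() {
        Map<String, String> status = new LinkedHashMap<>();
        status.put("database", checkDatabase("jdbc:mysql://localhost/app"));
        status.put("pendingTasks", String.valueOf(pendingTasks.get()));
        return status;
    }

    private static String checkDatabase(String jdbcUrl) {
        try (Connection c = DriverManager.getConnection(jdbcUrl, "monitor", "secret")) {
            return c.isValid(2) ? "OK" : "DEGRADED"; // 2-second validity probe
        } catch (Exception e) {
            return "DOWN: " + e.getMessage(); // report the failure, don't crash the monitor
        }
    }
}
```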
The answer is 'it depends'. Why do you need to monitor? How large is your operations staff? Do you need reporting? What is the application environment? Who cares if the application fails? Who cares if an exception happens? Are any of the errors recoverable? I could ask questions like these for a long time.
Great question.
We were looking for an application-level monitoring solution for our needs some time ago, without any luck. Popular monitoring solutions are mostly aimed at monitoring infrastructure and, in my opinion, are too complicated for the requirements of most small and mid-sized companies.
We required (mainly) the following features:
alerts - we wanted to know about incidents as fast as possible
painless management - a hosted service would be the best
visualizations - it's good to know what is going on and draw some knowledge from the data
Because we didn't find a suitable solution, we started to write our own. We finally ended up with an up-and-running service called AlertGrid. (You can check it out for free, of course.)
The idea behind it is to provide an easy way to handle custom monitoring scenarios. The integration API is very simple (one function with two required parameters). At the moment, we and others are using it to:
monitor scheduled tasks (cron jobs)
monitor entire application logic execution
alert on errors in applications
we are also working on examples of basic infrastructure monitoring using AlertGrid
This is such an open-ended question, but I would start with physical measurements (a small sketch of a couple of these checks follows the list below).
1. Are all the machines I think are hosting this site pingable?
2. Are all the machines which should be serving content actually serving some content? (Ideally this would be hit from an external network.)
3. Is each expected service on each machine running?
3a. Have those services run recently?
4. Does each machine have hard drive space left? (Don't forget the db)
5. Have these machines been backed up? When was the last time?
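A hedged sketch of checks 1 and 4 in plain Java. The host names are examples; note that InetAddress.isReachable uses ICMP or a TCP echo depending on privileges, so a real monitor might prefer probing the actual service port instead:

```java
import java.io.File;
import java.net.InetAddress;

public class PhysicalChecks {
    public static void main(String[] args) throws Exception {
        // Check 1: is each host pingable? (host names are placeholders)
        for (String host : new String[] { "web01.example.com", "db01.example.com" }) {
            boolean up = InetAddress.getByName(host).isReachable(3000); // 3s timeout
            System.out.println(host + (up ? " is reachable" : " is NOT reachable"));
        }
        // Check 4: hard drive space left on this machine.
        File root = new File("/");
        long freeGb = root.getUsableSpace() / (1024L * 1024 * 1024);
        System.out.println("Free space on / : " + freeGb + " GB");
    }
}
```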
Once one lays out the physical monitoring of the systems, one can address the checks specific to a system:
1. Can an automated script log in? How long did it take?
2. How many users are live? Have there been a million fake accounts added?
...
These sorts of questions get more nebulous and can be very system-specific. They can also usually be derived reactively when responding to physical measurements. The hard drive filled up? Maybe the web server logs got filled because a bunch of agents created too many fake users. That kind of thing.
While plan A shouldn't necessarily be reactive, that is the way many a site sets up its monitoring system.
Minimum: make sure it is running :)
However, some other stuff would be very useful. For example: the CPU load, RAM usage, and (in multi-user systems) which user is running what. Also, for applications that access the network, a list of network connections for each app. And (if you have access to the client computer(s)) it would be cool to be able to see the 'window title' of the app - maybe check every 2-3 minutes whether it changed, and save it. Also, a list of files open by the application could be very useful, but it is not a must.
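If the monitored application happens to be a Java process, a minimal self-reporting sketch for the CPU-load and memory items might look like this (what counts as "unusual" is left as an assumption for your thresholds):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class UsageProbe {
    public static void main(String[] args) {
        // System load average over the last minute (-1.0 if unavailable).
        double load = ManagementFactory.getOperatingSystemMXBean().getSystemLoadAverage();
        // Heap usage of this JVM.
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("load=%.2f heapUsed=%dMB heapMax=%dMB%n",
                load, heap.getUsed() >> 20, heap.getMax() >> 20);
        // An external monitor would poll figures like these and alert on
        // whatever thresholds it considers "unusual" (your call).
    }
}
```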
I think this is fairly simple - monitor so that you can be warned early enough, before something goes wrong. That means monitoring the dependencies and the application itself.
It's really hard to provide specifics if you're not going to give details on the application you're monitoring, so I'd say use that as a general rule.
At a minimum you want to know that the system is healthy. What counts as 'healthy' is subjective for your system: is it that the computers are up, that the needed resources exist, that data is flowing through the system, that the data is properly producing results, etc.?
In my project we monitor most of this and then some. It really comes down to the highest level at which you can verify that everything is working. In our case we need to know down to the data output. If you only need to know down to the 'are these machines up' level, it saves you trying to show an inexperienced end user what is wrong.
There are also "off the shelf" tools that will do a lot of the hard work for you if you are not looking too deep into the data results. I particularly liked Nagios when I was looking around, but we needed more than it could easily show, so I wrote our own monitoring system. Basically, we also watch for "peculiarities" in the system: memory/CPU spikes, etc.
Thanks everyone for the input. I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.
the difference is:
infrastructure monitoring would be servers plus MS Exchange Server, Apache, IIS, and so forth
application monitoring would be user machines and the specific programs that they use to do their jobs, and/or servers plus the data-moving/backend applications that they run to keep the data flowing
sometimes it's hard to draw the line - an oversimplified definition might be "if your team wrote it, it's an application; if you bought it, it's infrastructure"
I think in practice it is best to monitor both.
What you need to do is break down the business process of the application and then have the software emit events at the major business components. In addition, you'll need to create end-to-end synthetic transactions (e.g. emulating end users clicking on a website). All that data would be fed into a monitoring tool. In the past, I've used JMX for applications, which flowed into Tivoli Monitoring's JMX Adapter, and I've written scripts that implement a "fake user" and pipe the results into Tivoli Monitoring's Script Adapter. Tivoli Monitoring takes the data and creates application health and performance charts from that raw data.
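A minimal sketch of the JMX half of that approach: registering an MBean that exposes a business-level counter which a JMX-capable monitor (Tivoli's JMX adapter, JConsole, etc.) can poll. The MBean name, domain, and attributes are invented for the example:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxExample {
    // Standard MBean pattern: the interface name must be <Impl>MBean.
    public interface OrdersHealthMBean {
        long getOrdersProcessed();  // business event counter, polled by the monitor
        boolean isHealthy();
    }

    public static class OrdersHealth implements OrdersHealthMBean {
        private volatile long ordersProcessed;
        public void recordOrder() { ordersProcessed++; } // called from business code
        public long getOrdersProcessed() { return ordersProcessed; }
        public boolean isHealthy() { return true; }      // real logic is app-specific
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // The domain and key properties here are placeholders.
        ObjectName name = new ObjectName("com.example.app:type=OrdersHealth");
        server.registerMBean(new OrdersHealth(), name);
        // The application now runs; the monitoring tool polls the attributes.
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive for the demo
    }
}
```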