I have seen from multiple sources that it is possible to access an iPhone's infrared proximity sensor, but I can only find a way to access "close" and "not close" values (binary: https://developer.apple.com/library/ios/documentation/UIKit/Reference/UIDevice_Class/#//apple_ref/occ/instp/UIDevice/proximityState). I was wondering (hoping) if there is some way to access the raw data values, e.g. a range.
Any and all feedback greatly appreciated!
I'm looking for some advice.
I'm developing a system with geographic triggers; these enable my device to perform certain actions depending on where it is. The triggers are contained within polygons that are stored in my database. I've explored multiple options to get this working; however, I'm not very familiar with geospatial systems.
One option would be to use the current location of the device and query the DB directly for all the polygons that contain that point, and thus all the triggers, since they are linked together. A potential problem with this approach, I think, is the potentially large number of polygons stored and the frequency of the queries, since this system serves multiple devices simultaneously and each one polls every few seconds.
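For the first option, the lookup itself is a standard point-in-polygon query. A minimal sketch, assuming Shapely 2.x (the polygon coordinates below are made up); in a real deployment the equivalent query would more likely live in the database, e.g. something like PostGIS's containment test:

```python
from shapely.geometry import Point, Polygon
from shapely.strtree import STRtree

# Hypothetical trigger polygons, as they might be loaded from the database (lon, lat).
polygons = [
    Polygon([(-71.06, 42.35), (-71.05, 42.35), (-71.05, 42.36), (-71.06, 42.36)]),
    Polygon([(-71.10, 42.33), (-71.09, 42.33), (-71.09, 42.34), (-71.10, 42.34)]),
]

# Spatial index so each lookup stays cheap even with many polygons.
index = STRtree(polygons)

def triggers_at(lon, lat):
    """Return the indices of all polygons containing the device's position."""
    point = Point(lon, lat)
    # In Shapely 2.x, query() returns indices of bounding-box matches;
    # filter those candidates with an exact containment test.
    return [int(i) for i in index.query(point) if polygons[int(i)].contains(point)]

print(triggers_at(-71.055, 42.355))  # e.g. [0]
```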
Another option I'm exploring is to encode each polygon as an array of geohashes and then attach the trigger to each of them.
Green marks the geohashes that the trigger will be attached to; yellow marks areas that need to be recalculated at a higher precision. The idea is to encode the polygon in the most efficient way down to precision X.
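A rough sketch of that encoding step, assuming Shapely for the geometry and a hand-rolled geohash bounding-box decoder: cells fully inside the polygon are kept at whatever coarse precision they reach, and boundary cells are subdivided down to the maximum precision.

```python
from shapely.geometry import Polygon, box

_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_bbox(gh):
    """Return (min_lon, min_lat, max_lon, max_lat) for a geohash string."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    even = True  # geohash interleaves bits, starting with longitude
    for ch in gh:
        idx = _BASE32.index(ch)
        for bit in (16, 8, 4, 2, 1):
            if even:
                mid = (lon_lo + lon_hi) / 2
                lon_lo, lon_hi = (mid, lon_hi) if idx & bit else (lon_lo, mid)
            else:
                mid = (lat_lo + lat_hi) / 2
                lat_lo, lat_hi = (mid, lat_hi) if idx & bit else (lat_lo, mid)
            even = not even
    return lon_lo, lat_lo, lon_hi, lat_hi

def cover_polygon(poly, max_precision=7):
    """Cover `poly` with geohashes: coarse cells where possible, finer cells on the boundary."""
    cells = []

    def recurse(prefix):
        cell = box(*geohash_bbox(prefix))
        if not poly.intersects(cell):
            return
        if poly.contains(cell) or len(prefix) >= max_precision:
            cells.append(prefix)      # fully inside ("green"), or a boundary cell at max precision
            return
        for ch in _BASE32:            # otherwise split into the 32 child cells ("yellow" areas)
            recurse(prefix + ch)

    for ch in _BASE32:                # seed with the 32 top-level cells
        recurse(ch)
    return cells

trigger_zone = Polygon([(-71.06, 42.35), (-71.04, 42.35), (-71.04, 42.37), (-71.06, 42.37)])
print(cover_polygon(trigger_zone, max_precision=6))
```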
Another optimization I came up with is to only store the intersection of the polygons with roads, since these devices are only used in motor vehicles.
Doing this enables the device to work offline, performing its own encoding and lookup, with the potential disadvantage that the device will have to implement logic to stay up to date with triggers being added or removed (potentially every 24 hours).
I'm looking for the most efficient way to implement this given some constraints such as:
Potentially unreliable networks (the device has LTE connectivity).
Limited processing power: the devices for now are based on a Raspberry Pi 3 Compute Module, and they also perform other tasks such as image processing.
Limited storage, since they store videos and images.
A potentially large number of triggers/polygons.
A potentially large number of devices.
Any thoughts are greatly appreciated.
I have a project in which I want to export measured sensor signals into an HDF5 file.
The data I have consists of (time, value) pairs per sensor. The sensor data is sparse, meaning each sensor sends its data at different times and intervals; there can be a sensor that sends data every second as well as one that sends data every minute. All data is 64-bit floating point for now.
What is the best format for this? Should I create an Nx2 signal, store it in a group called sensors, and keep the timestamp and value next to each other? Or should I create a group per sensor and store the values and timestamps in separate arrays?
I'm looking for best practices here. I would like to be able to plot the signals easily in Python.
In case anyone is wondering what I'm doing, this is the project in question: https://github.com/windelbouwman/lognplot
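For what it's worth, a minimal sketch of the group-per-sensor layout using h5py (the sensor names, rates, and file name are just placeholders). Separate time and value arrays per sensor avoid any padding when the rates differ, and chunked, resizable datasets can be extended as new samples arrive:

```python
import numpy as np
import h5py

# Placeholder streams: each sensor has its own (time, value) arrays at its own rate.
streams = {
    "pressure":    (np.arange(0.0, 600.0, 1.0),  np.random.rand(600)),  # every second
    "temperature": (np.arange(0.0, 600.0, 60.0), np.random.rand(10)),   # every minute
}

with h5py.File("signals.h5", "w") as f:
    sensors = f.create_group("sensors")
    for name, (t, v) in streams.items():
        g = sensors.create_group(name)
        # Resizable, chunked, compressed 1-D datasets so streams can grow over time.
        g.create_dataset("time", data=t, maxshape=(None,), chunks=True, compression="gzip")
        g.create_dataset("value", data=v, maxshape=(None,), chunks=True, compression="gzip")
        g["time"].attrs["units"] = "s"

# Reading back for plotting is then a simple slice per sensor:
with h5py.File("signals.h5", "r") as f:
    t = f["sensors/pressure/time"][:]
    v = f["sensors/pressure/value"][:]
```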
I'm currently searching for an application to visualize data from different sensors. The idea is that a sensor picks up movement and sends the angle at which the object is located to the application. With about 4 of these sensors, the application should display movement in their vicinity.
An example would be a car driving on a street. On both sides of the street there are sensors which pick up the angle (if the car were right in front of the sensor, the angle would be 90 degrees) and send it to the application. The application should then be able to take the input from the sensors and plot a moving car/object on a canvas.
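For reference, the core transformation (angles from sensors at known positions to an object position on the canvas) is just intersecting two bearing rays. A toy sketch with made-up coordinates, taking angles in degrees measured counter-clockwise from the positive x-axis; real data would also need smoothing:

```python
import math

def intersect_bearings(p1, angle1, p2, angle2):
    """Intersect two bearing rays from sensors at p1 and p2.

    Angles are in degrees, counter-clockwise from the +x axis.
    Returns None when the bearings are (nearly) parallel.
    """
    d1 = (math.cos(math.radians(angle1)), math.sin(math.radians(angle1)))
    d2 = (math.cos(math.radians(angle2)), math.sin(math.radians(angle2)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]          # 2-D cross product of the directions
    if abs(denom) < 1e-9:
        return None
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two sensors 10 m apart along the street, both reporting an angle to the car:
print(intersect_bearings((0.0, 0.0), 45.0, (10.0, 0.0), 135.0))  # approx. (5.0, 5.0)
```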
So far I've found Power BI / Azure IoT Hub and Cumulocity, which gather sensor data but do not have a way to transform it into the form specified above.
Is there anything which is capable of this?
I found this Sample Code at Apple's Developer Site:
https://developer.apple.com/library/ios/samplecode/footprint/Introduction/Intro.html
The description says:
Use Core Location to take a Latitude/Longitude position and project it
onto a flat floorplan. Demonstrates how to do the conversions between
a Geographic coordinate system (Latitude/Longitude), a floorplan PDF
coordinate system (x, y), and MapKit.
I have tried it and it works really well.
Basically, you provide a map image for a building and specify two coordinates manually. Then, using CoreLocation, it converts latitude/longitude into an (x, y) position.
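This is not how the sample implements it internally, but conceptually the latitude/longitude to floorplan (x, y) projection from two hand-picked anchor points boils down to a similarity transform. A rough sketch (all names and coordinates are mine, and a y-axis flip may be needed if pixel y grows downwards):

```python
import math

EARTH_RADIUS_M = 6371000.0

def to_local_meters(lat, lon, lat0, lon0):
    """Equirectangular approximation: fine over the extent of a single building."""
    x = math.radians(lon - lon0) * EARTH_RADIUS_M * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * EARTH_RADIUS_M
    return complex(x, y)

def make_geo_to_floorplan(anchor_a, anchor_b):
    """anchor_a / anchor_b are ((lat, lon), (px, py)) pairs picked by hand on the floorplan."""
    (geo_a, pix_a), (geo_b, pix_b) = anchor_a, anchor_b
    lat0, lon0 = geo_a                                       # anchor A becomes the local origin
    g_b = to_local_meters(geo_b[0], geo_b[1], lat0, lon0)
    p_a, p_b = complex(*pix_a), complex(*pix_b)
    scale_rot = (p_b - p_a) / g_b                            # one complex factor = scale + rotation

    def convert(lat, lon):
        p = p_a + scale_rot * to_local_meters(lat, lon, lat0, lon0)
        return p.real, p.imag
    return convert

convert = make_geo_to_floorplan(((37.3318, -122.0312), (100.0, 150.0)),
                                ((37.3321, -122.0308), (420.0, 260.0)))
print(convert(37.3319, -122.0310))   # somewhere between the two anchor pixels
```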
My question is - how is it possible to grab latitude/longitude while indoors?
I have watched some of Apple's videos in which they said they vastly improved CoreLocation, but how is my iPhone getting correct information?
TL;DR: It works. I am just wondering how.
Big companies, especially map providers such as Apple, Google, etc., gather information about all Wi-Fi access points (APs). They use so-called crowdsourcing technology to estimate the position of each AP by combining GPS coordinates with the received signal strength (RSS) from all visible APs.
When a user requests a fix on their location, the device sends the server a list of all the MAC (media access control) addresses of the wireless hotspots within range, to be checked against a database of those addresses. Trilateration is then used, fused with positional data from the smartphone's internal sensors (accelerometer, gyroscope, magnetometer, barometer). But this approach still suffers from a lack of accuracy, currently around 7-20 meters, depending on the number of visible APs and the quality of the sensors.
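As a toy illustration of that trilateration step (not what Apple's service actually runs; the 1 m reference RSS and path-loss exponent below are assumptions): convert each AP's RSS into a rough distance with a log-distance path-loss model, then solve for the position by least squares.

```python
import numpy as np
from scipy.optimize import least_squares

def rssi_to_distance(rssi, rssi_at_1m=-40.0, path_loss_exponent=3.0):
    """Log-distance path-loss model; both parameters are environment-dependent guesses."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exponent))

def trilaterate(ap_positions, distances):
    """Least-squares position estimate from known AP positions and estimated distances."""
    def residuals(p):
        return np.linalg.norm(ap_positions - p, axis=1) - distances
    return least_squares(residuals, x0=ap_positions.mean(axis=0)).x

aps = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0]])   # surveyed AP coordinates (m)
rssi = np.array([-55.0, -67.0, -61.0])                   # measured RSS per AP (dBm)
print(trilaterate(aps, rssi_to_distance(rssi)))
```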
Learn more here, or here.
In order to have 1-5 meter accuracy, additional correcting information is required. The state of the art is to use Bluetooth beacons: given their coordinates, it is possible to estimate the user's position. Nowadays there are plenty of companies who develop this technology, e.g. Navigine, indoors, nextome.
CoreLocation uses GPS when outdoors, and WiFi access points (APs) when indoors (when mapped, otherwise you're getting GPS which isn't very good when indoors). CoreLocation uses iBeacons for proximity positioning, not for giving you a lat/lon. That is, you can use CoreLocation to say "When I get close to this iBeacon let me know". For WiFi positioning to work, you must upload floor plans to Apple, get them converted to their IMDF format, then use a surveying tool to fingerprint your indoor location. Only then will CoreLocation actually leverage the WiFi APs to give you an accurate indoor location (3-5 meter accuracy).
I know that BLE RSSI values are decibel-based, but I was wondering if there is a way to convert them into a more meaningful value that I could use (even a float would be fine).
I've looked at Kalman filters but I'm struggling to understand them. Any help with this would be appreciated.
With the limitations of iPhone and iPad hardware, I don't believe there is a method for getting more precise data from RSSI values. Perhaps if you say what you are using it for, I can help advise an alternative route to solve your problem. I understand that it is difficult to gain qualitative information from RSSI values from Bluetooth LE; however, there are other methods, such as coupling different systems with Bluetooth LE, that may produce the data you are seeking.
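On the Kalman-filter part of the question: for a single noisy scalar like RSSI, a one-dimensional filter (which here behaves much like adaptive exponential smoothing) is enough to reduce the jitter, though it won't turn RSSI into a reliable distance. A minimal sketch, with made-up noise variances:

```python
def kalman_1d(measurements, process_var=0.005, measurement_var=4.0):
    """Smooth a noisy scalar stream (e.g. RSSI in dBm) with a 1-D Kalman filter."""
    estimate = measurements[0]
    variance = 1.0
    smoothed = []
    for z in measurements:
        variance += process_var                      # predict: assume the value drifts slowly
        gain = variance / (variance + measurement_var)
        estimate += gain * (z - estimate)            # update: blend in the new reading
        variance *= (1.0 - gain)
        smoothed.append(estimate)
    return smoothed

print(kalman_1d([-63.0, -71.0, -58.0, -66.0, -69.0, -61.0]))
```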