Application for gathering real-time data from sensors and composing multiple data sources into a stream

I'm currently searching for an application to visualize data from different sensors. The idea is that each sensor picks up movement and sends the angle at which the object sits to the application. With about four of these sensors, the application should display movement in their vicinity.
An example would be a car driving on a street. On both sides of the street there are sensors which pick up the angle (if the car were right in front of you, the angle would be 90 degrees) and send it to the application. The application should then take the input from the sensors and plot a moving car/object on a canvas.
So far I've found Power BI / Azure IoT Hub and Cumulocity, which gather sensor data but do not offer a way to transform it into the form described above.
Is there anything which is capable of this?
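
For what it's worth, the transform itself is ordinary triangulation: two sensors at known positions each report a bearing to the car, and the car's position is the intersection of the two rays, which can then be drawn on a canvas. A minimal Python sketch under those assumptions (the sensor layout, the simulated readings and the matplotlib "canvas" are illustrative, not tied to any of the products above):

import numpy as np
import matplotlib.pyplot as plt

def intersect_bearings(p1, theta1, p2, theta2):
    # Object position seen from two sensors at known positions p1, p2,
    # each reporting a bearing angle theta (radians, shared world frame).
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
    t = np.linalg.solve(np.column_stack((d1, -d2)), np.subtract(p2, p1))
    return np.asarray(p1) + t[0] * d1

# Two sensors on opposite sides of the street (hypothetical layout, metres).
s1, s2 = np.array([0.0, -3.0]), np.array([10.0, 3.0])

# Simulate a car driving down the middle of the street and the bearing each
# sensor would report; in a real setup these angles come from the sensors.
car_path = [np.array([x, 0.0]) for x in (2.0, 5.0, 8.0)]
readings = [(np.arctan2(c[1] - s1[1], c[0] - s1[0]),
             np.arctan2(c[1] - s2[1], c[0] - s2[0])) for c in car_path]

track = [intersect_bearings(s1, a1, s2, a2) for a1, a2 in readings]
xs, ys = zip(*track)
plt.plot(xs, ys, "o-")        # reconstructed trajectory
plt.scatter(*zip(s1, s2))     # sensor positions
plt.show()

Any dashboarding tool that accepts a computed (x, y) stream rather than raw angles could animate the result; the hard part is doing this small geometric step somewhere in the pipeline.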

Related

Radio Frequency Hardware

I have a project that consists of an autonomous robot which needs to communicate with a base station; the base station has a GSM module to send data to a website. My problem is the transmission of data between the robot and the base station, which in this case will be over radio frequency. I am looking for hardware that allows me to send data from the robot to the base station, such as video frames and sensor data. The base station must also send data to the robot, so bidirectional communication is required. The distance is about 5-6 km and we are currently assuming line of sight, i.e. very few obstacles. Can anyone help me with the hardware for radio frequency communication?

Point in polygon based search vs geo hash based search

I'm looking for some advice.
I'm developing a system with geographic triggers that enable my device to perform certain actions depending on where it is. The triggers are contained within polygons stored in my database. I've explored multiple options to get this working; however, I'm not very familiar with geo-spatial systems.
One option would be to use the current location of the device and query the DB directly for all the polygons that contain that point, and therefore all the triggers, since they are linked together. A potential problem with this approach, I think, is the possible number of polygons stored and the frequency of the queries, since this system serves multiple devices simultaneously and each of them polls every few seconds.
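
For context, this first option usually boils down to a single spatial query per fix. A rough sketch, assuming the polygons live in PostGIS with a GiST index on the geometry column (the table, column names and connection string here are made up):

import psycopg2  # assumes PostGIS with a GiST index on trigger_zones.geom

conn = psycopg2.connect("dbname=triggers")  # placeholder connection string

def triggers_containing(lat, lon):
    # Option 1: ask the database which trigger polygons contain the
    # device's current position (note: lon/lat order for ST_MakePoint).
    with conn.cursor() as cur:
        cur.execute(
            "SELECT trigger_id FROM trigger_zones "
            "WHERE ST_Contains(geom, ST_SetSRID(ST_MakePoint(%s, %s), 4326))",
            (lon, lat),
        )
        return [row[0] for row in cur.fetchall()]

With a spatial index each individual lookup is typically cheap; the scaling concern is really the polling rate multiplied by the device count hitting one database.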
Another option I'm exploring is to encode the polygons into an array of geohashes and then attach the trigger to each one of them.
In the figure, green marks the geohashes that the trigger will be attached to, and yellow marks areas that need to be recalculated at a higher precision. The idea is to encode the polygon as efficiently as possible down to precision X.
Another optimization I came up with is to store only the intersection of the polygons with roads, since these devices are only used in motor vehicles.
Doing this enables the device to work offline, performing its own encoding and lookup, with the potential disadvantage that the device has to implement logic to stay up to date with triggers being added or removed (potentially every 24 hours).
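
To illustrate the offline side of this second option, here is a rough sketch of the on-device lookup, assuming the pygeohash package and a locally cached cell-to-trigger index synced from the server (the cell values and trigger names are made up):

import pygeohash  # assumed dependency: pip install pygeohash

# Local index synced from the server (e.g. every 24 hours):
# geohash cell at precision 7 (~150 m) -> trigger IDs attached to it.
trigger_index = {
    "9q8yyk8": ["speed_zone_12"],
    "9q8yyk9": ["speed_zone_12", "no_idling_3"],
}

def triggers_at(lat, lon, precision=7):
    # Encode the current GPS fix to a geohash cell and look it up locally,
    # so the device keeps working when the LTE link is down.
    cell = pygeohash.encode(lat, lon, precision=precision)
    return trigger_index.get(cell, [])

print(triggers_at(37.7749, -122.4194))

Covering each polygon with cells (the green/yellow picture above) can be done once on the server, so the device only ever does the cheap encode-and-lookup step.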
I'm looking for the most efficient way to implement this given some constraints, such as:
Potentially unreliable networks (the device has LTE connectivity).
Limited processing power: the devices are currently based on a Raspberry Pi 3 Compute Module and also perform other tasks such as image processing.
Limited storage, since they also store videos and images.
A potentially large number of triggers/polygons.
A potentially large number of devices.
Any thoughts are greatly appreciated.

ROS Human-Robot mapping (Baxter)

I'm having some difficulty understanding the concept of teleoperation in ROS, so I'm hoping someone can clear some things up.
I am trying to control a Baxter robot (in simulation) using an HTC Vive. I have a node (publisher) which successfully extracts PoseStamped data (pose data referenced to the lighthouse base stations) from the controllers and publishes it on separate topics for the right and left controllers.
So now I wish to create the subscribers which receive the pose data from the controllers and convert it into a pose for the robot. What I'm confused about is the mapping: after reading documentation on Baxter and robotics transformations, I don't really understand how to map human poses to Baxter.
I know I need to use IK services, which essentially calculate the joint coordinates required to achieve a pose (given the desired location of the end effector). But it isn't as simple as just plugging the PoseStamped data from the node publishing controller data into the ik_service, right?
Human and robot anatomy are quite different, so I'm not sure if I'm missing a vital step here.
Looking at other people's example code for doing the same thing, I see that some have created a 'base'/'human' pose which hard-codes coordinates for the limbs to mimic a human. Is this essentially what I need?
Sorry if my question is quite broad but I've been having trouble finding an explanation that I understand... Any insight is very much appreciated!
You might find my former student's work on motion mapping using a Kinect sensor with a PR2 informative. It shows two methods:
Direct joint angle mapping (e.g. if the human holds their arm at a right angle, then the robot should also hold its arm at a right angle).
An IK method that controls the robot's end effector based on the human's hand position.
I know I need to use IK services which essentially calculate the co-ordinates required to achieve a pose (given the desired location of the end effector). But it isn't as simple as just plugging in the PoseStamped data from the node publishing controller data to the ik_service right?
Yes, indeed, this is a fairly involved process! In both cases, we took advantage of the Kinect's API to access the human's joint angle values and the position of the hand. You can read about how Microsoft Research implemented the human skeleton tracking algorithm here:
https://www.microsoft.com/en-us/research/publication/real-time-human-pose-recognition-in-parts-from-a-single-depth-image/?from=http%3A%2F%2Fresearch.microsoft.com%2Fapps%2Fpubs%2F%3Fid%3D145347
I am not familiar with the Vive device. You should see if it offers a similar API for accessing skeleton-tracking information, since reverse-engineering Microsoft's algorithm would be challenging.
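
To make the IK route concrete, here is a rough sketch of the subscriber side for the right arm, assuming the standard Baxter SDK IK service and a placeholder topic name for your controller publisher. A real mapping also needs a transform from the lighthouse frame into Baxter's base frame, plus some workspace scaling:

#!/usr/bin/env python
# Sketch only: forward a Vive controller pose to Baxter's IK service and
# stream the resulting joint angles to the arm.
import rospy
import baxter_interface
from geometry_msgs.msg import PoseStamped
from baxter_core_msgs.srv import SolvePositionIK, SolvePositionIKRequest

limb = None
ik_service = None

def controller_cb(msg):
    req = SolvePositionIKRequest()
    req.pose_stamp.append(msg)            # pose must already be in Baxter's 'base' frame
    resp = ik_service(req)
    if resp.isValid[0]:
        joints = dict(zip(resp.joints[0].name, resp.joints[0].position))
        limb.set_joint_positions(joints)  # stream joint targets to the right arm

if __name__ == "__main__":
    rospy.init_node("vive_to_baxter_right")
    limb = baxter_interface.Limb("right")
    ns = "/ExternalTools/right/PositionKinematicsNode/IKService"
    rospy.wait_for_service(ns)
    ik_service = rospy.ServiceProxy(ns, SolvePositionIK)
    # "/vive/right_controller_pose" is a placeholder for your publisher's topic.
    rospy.Subscriber("/vive/right_controller_pose", PoseStamped, controller_cb)
    rospy.spin()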

How to recognize the number of people in a photo taken from above?

I have a 3-person sofa in a room and a camera storing images to Azure Blob Storage.
I have tested that the Azure Cognitive Services Emotion API can count the number of faces if the camera is located in front of the users; it can tell whether 1, 2 or 3 people are sitting. I'm using blob-triggered Azure Functions to send the data to the Emotion API.
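
For context, the current pipeline looks roughly like this: a sketch of a Python blob-triggered function posting each new image to the Emotion API and counting the returned faces (the endpoint region, key variable and trigger binding are placeholders for whatever your function app uses):

import os
import requests
import azure.functions as func

def main(myblob: func.InputStream):
    # Blob trigger: runs for every new camera image written to the container.
    resp = requests.post(
        "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize",
        headers={
            "Ocp-Apim-Subscription-Key": os.environ["EMOTION_API_KEY"],
            "Content-Type": "application/octet-stream",
        },
        data=myblob.read(),
    )
    faces = resp.json()  # one entry per detected (frontal) face
    print("people on sofa:", len(faces))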
I now realize that the camera must be mounted on the ceiling because of the physical constraints of the room; the ceiling is the only place where the camera can be installed.
The Emotion API cannot be used, since a camera looking down from above cannot see faces; it only sees the tops of heads.
1) Do you know of any methods for counting the number of people from above?
2) Do you know if I'm able to determine which position is occupied? Say a person sits on the left side and in the middle, while the right side is unused.

How is Apple able to provide indoor location using CoreLocation?

I found this Sample Code at Apple's Developer Site:
https://developer.apple.com/library/ios/samplecode/footprint/Introduction/Intro.html
The description says:
Use Core Location to take a Latitude/Longitude position and project it onto a flat floorplan. Demonstrates how to do the conversions between a Geographic coordinate system (Latitude/Longitude), a floorplan PDF coordinate system (x, y), and MapKit.
I have tried it and it works really well.
Basically, you provide a map image for a building and specify two coordinates manually. Then, using CoreLocation, it converts latitude/longitude into an (x, y) position.
My question is: how is it possible to grab latitude/longitude while indoors?
I have watched some of Apple's videos and they say they have vastly improved CoreLocation, but how is my iPhone getting correct information?
TL;DR: It works. I am just wondering how.
Big companies, especially map providers such as Apple and Google, gather information about Wi-Fi access points (APs). They use so-called crowdsourcing to estimate the position of each AP by combining GPS coordinates with the received signal strength (RSS) from all visible APs.
When a user requests a fix on their location, the device sends the server a list of the MAC (media access control) addresses of the wireless hotspots within range, to be checked against a database of those addresses. A trilateration technique is then used, fused with positional data from the smartphone's internal sensors (accelerometer, gyroscope, magnetometer, barometer). This approach still suffers from limited accuracy, roughly 7-20 meters so far, depending on the number of visible APs and the quality of the sensors.
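
To give a feel for the trilateration step, here is a small numerical sketch: each AP with a known surveyed position contributes one range estimate derived from its RSS via a log-distance path-loss model (the positions, RSS values and model constants below are purely illustrative):

import numpy as np

def rss_to_distance(rss_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    # Log-distance path-loss model; very rough and environment-dependent,
    # which is one reason Wi-Fi fixes end up in the 7-20 m range.
    return 10 ** ((tx_power_dbm - rss_dbm) / (10 * path_loss_exp))

def trilaterate(ap_positions, distances):
    # Linearised least-squares fix from >= 3 APs with known (x, y) positions.
    p = np.asarray(ap_positions, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtracting the first range equation from the others removes the
    # quadratic terms in the unknown position, leaving a linear system.
    A = 2 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(p[1:] ** 2 - p[0] ** 2, axis=1)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

aps = [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0)]           # surveyed AP positions (m)
dists = [rss_to_distance(r) for r in (-55, -63, -60)]  # from the Wi-Fi scan
print(trilaterate(aps, dists))

In practice this raw fix is then fused with the inertial sensor data, as described above.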
To achieve 1-5 meter accuracy, additional correcting information is required. The state of the art is to use Bluetooth beacons: given their coordinates, it is possible to estimate the user's position. Nowadays there are plenty of companies developing this technology, e.g. Navigine, indoors, nextome.
CoreLocation uses GPS when outdoors and WiFi access points (APs) when indoors (when the venue has been mapped; otherwise you're getting GPS, which isn't very good indoors). CoreLocation uses iBeacons for proximity positioning, not for giving you a lat/lon; that is, you can use CoreLocation to say "when I get close to this iBeacon, let me know". For WiFi positioning to work, you must upload floor plans to Apple, have them converted to their IMDF format, and then use a surveying tool to fingerprint your indoor location. Only then will CoreLocation actually leverage the WiFi APs to give you an accurate indoor position (3-5 meter accuracy).
