Number of access points for WiFi fingerprinting

I'm making a project for positioning using WiFi fingerprinting, but the building I'm testing the project in has only 3 access points covering the whole building (the building is 5 floors).
Are 3 access points enough, and if not, how many access points do I need to do WiFi fingerprinting?

The short answer is "it depends". Two major considerations:
How well does RF propagate through this building? (lots of wiring, steel & concrete, or mainly wood frame & drywall)
How precise do your positioning tools need to be?
Typically, additional access points reduce the likelihood of dead spots and of variation due to changes in environmental factors, and they improve precision. If you are able to test your fingerprinting system before it is deployed, you should be able to determine whether 3 access points will be sufficient. Likely, for a steel/concrete commercial building with 5 floors, you'll want additional access points for better coverage. How many will you need? It depends on your answers to #1 and #2.
For further discussion, see the paper "Indoor Location Using Trilateration Characteristics"
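To make the tradeoff concrete, here is a minimal, illustrative sketch of the matching step in RSSI fingerprinting (the names Fingerprint, euclideanDistance, and estimateLocation are made up for this example, not taken from any library): each calibration point stores one RSSI value per visible access point, and a live reading is matched to the nearest stored fingerprint. With only 3 APs every fingerprint is a 3-dimensional vector, so distinct locations (especially on different floors) are more likely to look alike than with a denser deployment.

import Foundation

// Illustrative nearest-neighbour matching over RSSI fingerprints.
struct Fingerprint {
    let location: String   // e.g. "Floor 3, Room 12"
    let rssi: [Double]     // one RSSI value (dBm) per access point, in a fixed order
}

func euclideanDistance(_ a: [Double], _ b: [Double]) -> Double {
    var sum = 0.0
    for (x, y) in zip(a, b) { sum += (x - y) * (x - y) }
    return sum.squareRoot()
}

func estimateLocation(reading: [Double], database: [Fingerprint]) -> String? {
    // Pick the stored fingerprint whose RSSI vector is closest to the live reading.
    return database.min { euclideanDistance($0.rssi, reading) < euclideanDistance($1.rssi, reading) }?.location
}

// With 3 APs each vector has only 3 components; more APs add dimensions
// and make distinct locations easier to tell apart.
let db = [
    Fingerprint(location: "Floor 1 lobby", rssi: [-45, -70, -80]),
    Fingerprint(location: "Floor 5 office", rssi: [-78, -52, -60]),
]
print(estimateLocation(reading: [-47, -69, -82], database: db) ?? "unknown")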

ARKit with multiplayer experience to share same planes

What is the best way, if any, to use Apple's new ARKit with multiple users/devices?
It seems that each device gets its own scene understanding individually. My best guess so far is to use the raw feature points' positions and try to match them across devices to glue together the different points of view, since ARKit doesn't offer any absolute frame of reference.
=== Edit 1: Things I've tried ===
1) Feature points
I've played around with the exposed raw feature points, and I'm now convinced that in their current state they are a dead end:
they are not really raw feature points: they only expose positions, but none of the attributes typically found in tracked feature points
their instantiation doesn't carry over from frame to frame, nor are the positions exactly the same
the reported feature points often change a lot even when the camera input is barely changing, with either many appearing or many disappearing
So overall I think it's unreasonable to try to use them in any meaningful way, since it's not possible to do any kind of good point matching within one device, let alone across several.
An alternative would be to implement my own feature point detection and matching, but that would be more replacing ARKit than leveraging it.
2) QR code
As @Rickster suggested, I've also tried identifying an easily recognizable object like a QR code and getting the relative transform from that fixed point (see this question). It's a bit difficult and required me to use some OpenCV to estimate the camera pose. But more importantly, it's very limiting.
As some newer answers have added, multiuser AR is a headline feature of ARKit 2 (aka ARKit on iOS 12). The WWDC18 talk on ARKit 2 has a nice overview, and Apple has two developer sample code projects to help you get started: a basic example that just gets 2+ devices into a shared experience, and SwiftShot, a real multiplayer game built for AR.
The major points:
ARWorldMap wraps up everything ARKit knows about the local environment into a serializable object, so you can save it for later or send it to another device. In the latter case, "relocalizing" to a world map saved by another device in the same local environment gives both devices the same frame of reference (world coordinate system).
Use the networking technology of your choice to send the ARWorldMap between devices: AirDrop, cloud shares, carrier pigeon, etc all work, but Apple's Multipeer Connectivity framework is one good, easy, and secure option, so it's what Apple uses in their example projects.
All of this gives you only the basis for creating a shared experience — multiple copies of your app on multiple devices all using a world coordinate system that lines up with the same real-world environment. That's all you need to get multiple users experiencing the same static AR content, but if you want them to interact in AR, you'll need to use your favorite networking technology some more.
Apple's basic multiuser AR demo shows encoding an ARAnchor and sending it to peers, so that one user can tap to place a 3D model in the world and all others can see it. The SwiftShot game example builds a whole networking protocol so that all users get the same gameplay actions (like firing slingshots at each other) and synchronized physics results (like blocks falling down after being struck). Both use Multipeer Connectivity.
(BTW, the second and third points above are where you get the "2 to 6" figure from @andy's answer — there's no limit on the ARKit side, because ARKit has no idea how many people may have received the world map you saved. However, Multipeer Connectivity has an 8-peer limit. And whatever game / app / experience you build on top of this may have latency / performance scaling issues as you add more peers, but that depends on your technology and design.)
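For illustration, here's a minimal sketch (not Apple's sample code) of both halves of that flow: archiving the ARWorldMap and sending it over Multipeer Connectivity, then relocalizing on the receiving side. The names shareWorldMap, runSession, arSession, and mcSession are placeholders; the session objects are assumed to be set up elsewhere.

import ARKit
import MultipeerConnectivity

// Sender: serialize the current world map and push it to connected peers.
func shareWorldMap(arSession: ARSession, mcSession: MCSession) {
    arSession.getCurrentWorldMap { worldMap, error in
        guard let worldMap = worldMap else {
            print("World map not available yet: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        do {
            // ARWorldMap conforms to NSSecureCoding, so it can travel as plain Data.
            let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                        requiringSecureCoding: true)
            try mcSession.send(data, toPeers: mcSession.connectedPeers, with: .reliable)
        } catch {
            print("Failed to share world map: \(error)")
        }
    }
}

// Receiver: decode the map and run a session that relocalizes into it.
func runSession(with data: Data, on arSession: ARSession) {
    guard let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                 from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    arSession.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}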
Original answer below for historical interest...
This seems to be an area of active research in the iOS developer community — I met several teams trying to figure it out at WWDC last week, and nobody had even begun to crack it yet. So I'm not sure there's a "best way" yet, if even a feasible way at all.
Feature points are positioned relative to the session, and aren't individually identified, so I'd imagine correlating them between multiple users would be tricky.
The session alignment mode gravityAndHeading might prove helpful: that fixes all the directions to a (presumed/estimated to be) absolute reference frame, but positions are still relative to where the device was when the session started. If you could find a way to relate that position to something absolute — a lat/long, or an iBeacon maybe — and do so reliably, with enough precision... Well, then you'd not only have a reference frame that could be shared by multiple users, you'd also have the main ingredients for location based AR. (You know, like a floating virtual arrow that says turn right there to get to Gate A113 at the airport, or whatever.)
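For reference, opting into that alignment is a one-line configuration change; the sketch below assumes sceneView is an ARSCNView already in your view hierarchy.

import ARKit

let configuration = ARWorldTrackingConfiguration()
// Gravity fixes the y-axis and the compass fixes x/z, so multiple devices end up
// with parallel axes, even though each one's origin is still where its session began.
configuration.worldAlignment = .gravityAndHeading
sceneView.session.run(configuration)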
Another avenue I've heard discussed is image analysis. If you could place some real markers — easily machine recognizable things like QR codes — in view of multiple users, you could maybe use some form of object recognition or tracking (an ML model, perhaps?) to precisely identify the markers' positions and orientations relative to each user, and work back from there to calculate a shared frame of reference. Dunno how feasible that might be. (But if you go that route, or similar, note that ARKit exposes a pixel buffer for each captured camera frame.)
Good luck!
Now, after the release of ARKit 2.0 at WWDC 2018, it's possible to make games for 2 to 6 users.
For this, you need to use the ARWorldMap class. By saving world maps and using them to start new sessions, your iOS application can now add new Augmented Reality capabilities: multiuser and persistent AR experiences.
AR Multiuser experiences. Now you may create a shared frame of reference by sending archived ARWorldMap objects to a nearby iPhone or iPad. With several devices simultaneously tracking the same world map, you may build an experience where all users (up to 6) can share and see the same virtual 3D content (use Pixar's USDZ file format for 3D in Xcode 10+ and iOS 12+).
session.getCurrentWorldMap { worldMap, error in
    guard let worldMap = worldMap else {
        showAlert(error)
        return
    }
    // Use the retrieved map inside the completion handler, where it is in scope.
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    session.run(configuration)
}
AR Persistent experiences. If you save a world map and your iOS application later becomes inactive, you can easily restore it on the next launch of the app in the same physical environment. You can use ARAnchors from the resumed world map to place the same virtual 3D content (in USDZ or DAE format) at the same positions as in the previously saved session.
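A persistence sketch along those lines, assuming sceneView is your ARSCNView; mapURL, saveWorldMap, and restoreWorldMap are arbitrary names, and the archiving mirrors the multipeer sketch above, just written to and read from disk.

import ARKit

let mapURL = FileManager.default
    .urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("session.worldmap")

func saveWorldMap() {
    sceneView.session.getCurrentWorldMap { worldMap, _ in
        guard let worldMap = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                           requiringSecureCoding: true)
        else { return }
        try? data.write(to: mapURL)
    }
}

func restoreWorldMap() {
    guard let data = try? Data(contentsOf: mapURL),
          let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                 from: data)
    else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap   // the saved ARAnchors come back with the map
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}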
These are not bulletproof answers, more like workarounds, but maybe you'll find them helpful.
All assume the players are in the same place.
DIY: ARKit sets up its world coordinate system quickly after the AR session has been started. So if you can have all players, one after another, put and align their devices at the same physical location and let them start the session there, there you go. Imagine the inside edges of an L-square ruler fixed to whatever is available. Or any flat surface with a hole: hold the phone against the surface looking through the hole with the camera, then (re)init the session.
Medium: Save the player from aligning the phone manually; instead, detect a real-world marker with image analysis, just like @Rickster described.
Involved: Train a Core ML model to recognize iPhones and iPads and their camera location, like it's done with human faces and eyes. Aggregate data on a server, then turn off ML to save power. Note: make sure your model is cover-proof. :)
I'm in the process of updating my game controller framework (https://github.com/robreuss/VirtualGameController) to support a shared controller capability, so all devices would receive input from the control elements on the screens of all devices. The purpose of this enhancement is to support ARKit-based multiplayer functionality. I'm assuming developers will use the first approach mentioned by diviaki, where the general positioning of the virtual space is defined by starting the session on each device from a common point in physical space, a shared reference, and specifically I have in mind being on opposite sides of a table. All the devices would launch the game at the same time and utilize a common coordinate space relative to physical size, and using the inputs from all the controllers, the game would remain theoretically in sync on all devices. Still testing. The obvious potential problem is latency or disruption in the network and the sync falls apart, and it would be difficult to recover except by restarting the game. The approach and framework may work for some types of games fairly well - for example, straightforward arcade-style games, but certainly not for many others - for example, any game with significant randomness that cannot be coordinated across devices.
This is a hugely difficult problem - the most prominent startup that is working on it is 6D.ai.
"Multiplayer AR" is the same problem as persistent SLAM, where you need to position yourself in a map that you may not have built yourself. It is the problem that most self driving car companies are actively working on.

Is the HoloLens VR-ready?

The question is already quite direct and short:
Can the HoloLens be used as virtual reality glasses?
Sorry in advance if the answer is obvious to those who have tried them out, but I have not yet had the chance.
From what I've read, I know they have been designed to be a very good augmented reality tool. That approach is clear to everybody.
I'm just thinking that there may be applications where you simply don't want the user to have any spatial contact with reality for some moments, or others where you want the user to be so immersed in the experience that they forget where they are; in those cases a complete environment should be shown, as we are used to with virtual reality glasses.
Is the HoloLens ready for this? I think there are two key sub-questions to answer:
How solid are the holograms?
Does the screen where holograms can be placed cover the complete view?
As others already pointed out, this is a solid No due to the limited viewing window.
Apart from that, the current hardware of the HoloLens is not capable of providing a fully immersive experience. You can check the specifications here.
As of now, when the environment is populated with more than a few holograms (depending on the triangle count of each hologram), the device's fps count drops and a certain lag is visible. I'm sure more processing power will be added to the device in future versions, but as of right now, with the current power of the device, I seriously doubt its capability to populate an entire environment and give a fully immersive experience.
1) The hologram quality is defined by the following specs:
- Holographic Resolution: 2.3M total light points
- Holographic Density: 2.5k light points per radian
It is worth saying that Microsoft holograms disappear under a certain distance, indicated here as 0.85 m.
Personal note: in the past I also worked on Google Project Tango, and I can tell you from that personal experience that the stability of Microsoft holograms is absolutely superior. Also, the holograms are kept once the device is turned off, so if you place something and reboot the device you will find it again where you left it, without the need to restart from scratch.
2) Absolutely not: "[The field of view] amounts to the size of a monitor in front of you – equivalent to 15 inches" as stated here. And it will not be addressed, as also reported here. So if a hologram's size exceeds this space it will be shown only partially [i.e. cut]. Moreover, the surrounding environment is always visible, because the device's purpose is to interact with the real environment by adding another layer on top.
HoloLens is not intended to be a VR rig. There is no complete immersion that I am aware of; yes, you can have solid holograms, but you can always see the real world.
VR is about substituting the real world, which is why VR goggles are always opaque. HoloLens is a see-through type, so you can see both the hologram and the real world. It is made for augmented reality, where you augment the real world. That is why you can't use HoloLens for VR purposes.
Actually, my initial question is: can the HoloLens be used for VR applications as well?
No is the answer, because of the small window (equivalent to a 15'' screen) where the holograms can be placed.
I am sure this will evolve sooner or later in order to improve the AR experience. As long as the screen does not cover the complete view, VR won't be possible with the HoloLens.
The small FOV is a problem for total immersion, but there is an app for HoloLens called HoloTour, which is VR (with a few AR scenes in the beginning). In the game, the user can travel to Rome and Peru. While you can still see through the holograms, in my personal experience, people playing it will really get into it and will forget about the limitations. After a scene or two, they feel immersed. So while it certainly isn't as good at VR as a machine designed for that, it is capable, and it is still typically enjoyable to the users. There are quite a few measures to prevent nausea in the users (I can use mine for hours at a time with no problem) so I would actually prefer it to poorer VR implementations, such as a GearVR (which made me sick after 10 minutes of use!). Surely a larger FOV is in the works, so this will be less of a limitation in future releases.

iOS / C: Algorithm to detect phonemes

I am searching for an algorithm to determine whether realtime audio input matches one of 144 given (and comfortably distinct) phoneme-pairs.
Preferably the lowest level that does the job.
I'm developing radical / experimental musical training software for iPhone / iPad.
My musical system comprises 12 consonant phonemes and 12 vowel phonemes, demonstrated here. That makes 144 possible phoneme pairs. The student has to sing the correct phoneme pair 'laa duu bee' etc in response to visual stimulus.
I have done a lot of research into this, and it looks like my best bet may be to use one of the iOS Sphinx wrappers (iPhone App › Add voice recognition? is the best source of information I have found). However, I can't see how I would adapt such a package; can anyone with experience using one of these technologies give a basic rundown of the steps that would be required?
Would training be necessary by the user? I would have thought not, as it is such an elementary task, compared with full language models of thousands of words and far greater and more subtle phoneme base. However, it would be acceptable (not ideal) to have the user train 12 phoneme pairs: { consonant1+vowel1, consonant2+vowel2, ..., consonant12+vowel12 }. The full 144 would be too burdensome.
Is there a simpler approach? I feel like using a fully featured continuous speech recogniser is using a sledgehammer to crack a nut. It would be far more elegant to use the minimum technology that would solve the problem.
So really I'm hunting for any open source software that recognises phonemes.
PS: I need a solution which runs pretty much in real time, so even as they are singing the note, it first blinks to show that it picked up the phoneme pair that was sung, and then it glows to show whether they are singing the correct note pitch.
If you are looking for a phone-level open source recogniser, then I would recommend HTK. Very good documentation is available with this tool in the form of the HTK Book. It also contains an entire chapter dedicated to building a phone level real-time speech recogniser. From your problem statement above, it seems to me like you might be able to re-work that example into your own solution. Possible pitfalls:
Since you want to build a phone-level recogniser, the amount of data needed to train the phone models would be very large. Also, your training database should be balanced in terms of the distribution of the phones.
Building a speaker-independent system would require data from more than one speaker. And lots of that too.
Since this is open source, you should also check the licensing info for any additional details about shipping the code. A good alternative would be to use the on-phone recorder and then have the recorded waveform sent over a data channel to a server for the recognition, pretty much like what Google does.
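For the capture side of that server approach, here is a minimal sketch in Swift: it taps the microphone with AVAudioEngine and hands PCM buffers to sendToRecognizer, a hypothetical hook you would implement to stream the audio to a server (running HTK or similar) or to an on-device recognizer. It does no recognition itself.

import AVFoundation

// Capture-only sketch: no recognition happens here. Keep a reference to the
// returned engine, or it will be deallocated and the tap will stop.
// (A real app also needs microphone permission and an audio session setup.)
func startListening(sendToRecognizer: @escaping (AVAudioPCMBuffer) -> Void) throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)

    // Each callback delivers roughly 1024 frames of PCM audio in near real time.
    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        sendToRecognizer(buffer)   // e.g. stream to a server, or analyse on-device
    }
    try engine.start()
    return engine
}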
I have a little bit of experience with this type of signal processing, and I would say that this is probably not the type of finite question that can be answered definitively.
One thing worth noting is that although you may restrict the phonemes you are interested in, the possibility space remains the same (i.e. infinite-ish). User training might help the algorithms along a bit, but useful training takes quite a bit of time and it seems you are averse to too much of that.
Using Sphinx is probably a great start on this problem. I haven't gotten very far in the library myself, but my guess is that you'll be working with its source code yourself to get exactly what you want. (Hooray for open source!)
...using a sledgehammer to crack a nut.
I wouldn't label your problem a nut, I'd say it's more like a beast. It may be a different beast than natural language speech recognition, but it is still a beast.
All the best with your problem solving.
Not sure if this would help: check out OpenEars' LanguageModelGenerator. OpenEars uses Sphinx and other libraries.
http://www.hfink.eu/matchbox
This page links to both a YouTube video demo and GitHub source.
I'm guessing it would still be a lot of work to mould it into the shape I'm after, but it definitely does do a lot of the work.

Computer vision application for automotive telematics application

What sort of application can be considered the real business winner for automotive telematics related to image processing/computer vision?
Here are the criteria:
1. Innovative
2. Social
3. Fun.
Have you read the articles from the DARPA grand challenge winners?
DARPA site
Google Scholar
I believe the "DARPA Grand Challenge" style of automation meets your #1 requirement, as there is plenty of innovation on that front.
But I still think that we are a good decade away from a fully autonomous vehicle, even though the technology is almost there. The main reason is that people are still very afraid of relinquishing control to the computer, even though it might be the safest choice.
The transition will be slow. More and more models will bring small chunks of automation, such as smarter cruise control systems (that's a big winner right now), autonomous parking (in the market for a while now) and anti collision systems.
Which brings us to your #2 and #3.
The above-mentioned systems are not fun; they are necessary [for increased safety]. Nowadays, social media and fun don't really mix with driving because they distract the driver from the main task. In the future, when you're on the freeway in auto-pilot mode, you will be able to open your laptop and be free to do whatever you want, since computers will always be connected to the internet. So I don't believe the car itself needs to provide that aspect of entertainment.
What I do believe is a killer functionality for cars is the enhancement of intelligent comfort systems integrated with biometrics. Nowadays, cars already have things like personal keys that make the car adjust things like seat height according to your preferences, but it would be much nicer if it could automatically identify who the driver is by some biometric feature (iris, etc.) and adjust multiple parameters automatically. That's the end of the key. I'm not talking just about seat and pedal adjustment, but about transmission style (husband likes a more aggressive transmission) and performance limiters (daughter cannot exceed 90% of the posted limit... the car knows what the limit is according to where it is).
In my opinion, if you implement biometric recognition + autonomous navigation, the possibilities are endless.
Although none of the applications here use computer vision, they are probably the best ones out there yet. They have received quite a bit of media hype.

Custom robotics for building an auto CD-loading arm

Where would you recommend that I find a company to develop or buy a CD/DVD loading arm similar to: http://www.dextimus.com/
Preferably programmable via USB, but if I can only get one with a serial interface, that would be fine. Drivers don't matter - I can interface directly with the unit, as my situation is very unique.
If you have some experience with electronics, you can give it a shot and build it yourself, like this or this.
I should add that the schematics and the source code are included, and in more detail in the first project.
I suppose I might just shorten this by giving a list of resources first:
http://www.embedinc.com/ I trust this company to do good work. Expensive (actually, they are reasonably priced in the design community, but would be considered expensive by most hobbyists and individuals). Not great at people skills, but very very very good at what they do.
You should check out the various microcontroller communities and forums for hobbyists and professionals that can do this. Search for microchip, atmel, msp430, arm, powerpc, etc.
Sparkfun is a supplier to the electronics community - they have great forums where you can post your request, and you'll find people who might do it for fun with only the cost of materials. Might take longer, might not be as 'professional' or well packaged and delivered, but it might be your best low cost option.
There are many electronic design companies that could do this (for instance, I can do this sort of thing).
But there are many questions you haven't answered (and may not have researched) that could prevent success:
Is this patented?
What CD loading/unloading methods are not patented, are out of patent, or otherwise available?
What is your design goal - a one off just for you, or a device that can be built in the hundreds for industrial use, or a device meant for general office workers/consumers that is built in the millions?
Do you realize that this design would surely cost more than simply buying one, if one is all you need?
As an example, assuming you don't need the nice enclosure and don't mind a 'prototype' look, just the mechanicals, electronics, and firmware design (no software on the PC) would likely be 100-250 billable hours for a design firm. At a cheap $90/hr, that's $9k to nearly $25k for one prototype. Add PC software and the nice enclosure, etc and you'll double that.
If you can find a local 'Make' group (techshop, GoTech, or similar) then you might be able to find a hobbyist that is willing to play with this idea for the cost of materials.
But if you define what your goal is, and give us an idea of your resources you may find a better answer.
-Adam
You can create a very nice, simple solution using radio control servos. They come in many sizes, but even the small ones have enough torque to move a big arm that moves a CD.
The real bonus with servos is that they normally have 180 degrees of rotation and internally have a variable resistor (rheostat) for positioning feedback. Positioning accuracy is normally within 1 degree of rotation, which should be fine for a CD loader.
For picking up the CDs, nothing will beat a vacuum. I recommend a small battery powered vacuum cleaner. Funnel the suction into a 1/4 inch pipe. At the other end of the pipe a one inch diameter cup should provide more than enough lift from the small amount of suction.
As for the pile of blank CDs to be burnt, I would advise moving the pile up rather than moving an arm down to it, probably keeping the top blank CD about 1/4 inch higher than the CD tray. By doing this, the arm only needs to rotate in one axis, and the vacuum should be enough to suck the CD back out of the tray.
Now, for the electronics. For the servo control I suggest an RS-232 serial servo controller. I've used the one from http://www.basicx.com/Products/servo/servo8t.htm as it also gives back torque information from the current draw.
For the low-sample-rate digital I/O, I suggest (for Windows) inpout32.dll, which is a DLL that gives you direct access to the bits of a parallel port. This will allow you to turn on the vacuum at the correct time and possibly sense when CDs have run out. Note that a parallel port can sink more current than it sources, so for outputs you should connect to the 5V line and set the output pin to 0 to turn the output on and 1 to turn it off.
The other nice option, which is very, very simple to interface and very cheap, is to get hold of a PICAXE from http://www.rev-ed.co.uk/picaxe/. These use a very simple programming language (a BASIC spin-off) allowing you to read serial data in and control the servos and digital I/O on one chip. Last time I used one, the language was a bit simple - if statements had to jump to labels, and else didn't exist.
If you do use a microcontroller and servos, it is best to use a dual voltage power supply as servos are noisy and can cause the microcontrollers to reset.
As for switching loads such as the vacuum on, you'll need to use a MOSFET or (if money is no object) the simpler option, a solid-state relay.
All digital inputs you use on the microcontroller should be pulled either to +V or ground with say a 5k resistor so they never float.
I cannot stress enough how simple and cheap the PICAXEs are. They have a built-in interpreter, so although code space is minimal on the small 8-pin units, they are programmable via a simple serial lead.
Good luck. Once you get into automation control, you'll never be able to stop. I'm in the middle of building a 3 axis CNC router so I can cut parts for other projects (I tell my girlfriend it's so she can cut out xmas decorations!).
You might want to contact Aaron Shephard about his Florian project.
I've found that really easy boards for controlling stepper motors or servos are produced by Phidgets - the API is incredibly easy, and available for a vast array of platforms.
