I am looking for the SDK for the Perceptive Pixel touch drivers. It looks like Microsoft took over the company, but there seems to be very little documentation on where I can find the SDK for the drivers.
After finishing the project, here is what we found out: Microsoft purchased PPI, and after the purchase the SDK became very locked down. We were unable to obtain a copy; however, that is not to say somebody in the future won't be more fortunate.
One thing we lost because of this: with two touch-enabled monitors, we could not touch both screens simultaneously unless we carefully placed e.Handled on our touch events (a minimal sketch of that pattern follows below).
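For anyone running into the same thing, here is a minimal WPF sketch of that pattern in C# (the window class and handler names are hypothetical; this is only the general shape of what we did, not anything from the PPI SDK):

    using System.Windows;
    using System.Windows.Input;

    // Hypothetical window class; the point is that marking the touch event as
    // handled stops it from routing any further up the visual tree.
    public class TouchWallWindow : Window
    {
        public TouchWallWindow()
        {
            TouchDown += OnTouchDown;
        }

        private void OnTouchDown(object sender, TouchEventArgs e)
        {
            TouchPoint contact = e.GetTouchPoint(this);
            // ... react to this contact for this window only ...
            e.Handled = true;   // keep the event from being processed again elsewhere
        }
    }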
A-Frame's immersive-ar functionality will work on some Android devices I've tested with, but I haven't had success with iOS.
It is possible to use an A-Frame scene for markerless AR on iOS using a commercial external library. Example: this demo from Zapworks using their A-Frame SDK. https://zappar-xr.github.io/aframe-example-instant-tracking-3d-model/
The tracking seems to be nowhere near as good as A-Frame's hit test demo (https://github.com/stspanho/aframe-hit-test), but it does seem to work on virtually any device and browser I've tried, and it is good enough for the intended purpose.
I would be more than happy to fall back to a lower-quality AR mode in order to have AR at all on devices that don't support immersive-ar in the browser. I have not been able to find an A-Frame-compatible solution for doing this using only free/open source components, only commercial products like Zapworks and 8th Wall.
Is there a free / open source plugin for A-Frame that allows a scene to be rendered with markerless AR across a very broad range of devices, similar to Zapworks?
I ended up rolling my own solution, which wasn't complete but was good enough for the project. Strictly speaking, there are three problems to overcome to get a markerless AR experience on mobile without relying on WebXR:
Webcam display
Orientation
Position
Webcam display is fairly trivial to implement in HTML5 without any libraries.
Orientation is already handled nicely by A-Frame's "magic window" functionality, including on iOS.
Position was tricky and I wasn't able to solve it. I attempted to use the FULLTILT library's accelerometer functions, and even using the readings with gravity filtered out I wasn't able to get a high enough level of accuracy. (As it happened, this particular project did not need it.)
Does the release of ARCore mean that there will be no updates to the Tango SDK?
There has been no Tango SDK update for a few months, but on the other hand, if I unzip the Google Constructor APK, I see it is built with an SDK that nobody other than Google has.
The answer is a clear no because ARCore really is Tango. As in, installing the ARCore preview apk gives you a 'Tango Core' service which you can see on your non-Tango phone (I use a Pixel XL).
Clay Bavor has even confirmed this in an interview: “There’s a lot of things that need to happen to make it successful though,” Bavor admits. “We’ve always known that it’s got to work at scale, so we’ve been investing in software-only solutions like ARCore, building on all of the Tango technology, just without the additional sensors. ...”
However, if you're asking whether the (previously required) hardware stack for Tango (fisheye cam & IR depth sensor) is 'dead', we're in the realm of speculation. My guess is that ARCore might actually save the hardware stack. With ARCore, >100 million devices will soon run Tango, which means that there will finally be a strong incentive for developers to release high quality apps and games. Then there's a really good reason for device manufacturers to offer specialized Tango hardware, because such hardware will result in a better AR experience (better tracking, additional features, etc.). But this hardware will probably be more varied than the previous Tango hardware stack.
To help confirm Wendelin's answer: I found that if you forget to install the arcore-preview.apk on your device, you will see an error like the following:
E/Tango: Java version of Tango Service not found, falling back to tangoservice_d.
E/art: No implementation found for int com.google.atap.tango.TangoJNINative.Initialize(android.content.Context) (tried Java_com_google_atap_tango_TangoJNINative_Initialize and Java_com_google_atap_tango_TangoJNINative_Initialize__Landroid_content_Context_2)
com.google.ar.core.examples.java.helloar D/AndroidRuntime: Shutting down VM
com.google.ar.core.examples.java.helloar E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.google.ar.core.examples.java.helloar, PID: 21663
java.lang.UnsatisfiedLinkError: No implementation found for int com.google.atap.tango.TangoJNINative.Initialize(android.content.Context) (tried Java_com_google_atap_tango_TangoJNINative_Initialize and Java_com_google_atap_tango_TangoJNINative_Initialize__Landroid_content_Context_2)
Which I feel shows that they just took the Tango software stack and integrated it into a form that doesn't require the depth camera. I mean, a lot of the Tango SDK revolved around getting you point clouds, finding planes with those points, creating area description files, etc. I feel "Tango" is only dead if OEMs just stop trying to add more hardware to phones and stick with the RGB camera. Also, I speculate the reason for no new Tango release is Apple dropping ARKit and Google needing to make a move as well.
Google announced that they will stop supporting Tango on March 1, 2018. They will be focusing all their AR efforts into ARCore.
https://twitter.com/projecttango/status/941730801791549440
The question is already quite direct and short:
Can the HoloLens be used as virtual reality glasses?
Apologies in advance if the answer is obvious to those who have tried them out, but I have not yet had the chance.
From what I have read, I know they were designed to be a very good augmented reality tool. That approach is clear to everybody.
I am just thinking that there may be applications where you simply don't want the user to have any spatial contact with reality for some moments, or others where you want the user to become so immersed in the experience that they forget where they are; in those cases a complete environment should be shown, as we are used to with virtual reality glasses.
Is the HoloLens ready for this? I think there are two key sub-questions to answer:
How solid are the holograms?
Does the screen where holograms can be placed cover the complete view?
As others already pointed out, this is a solid No due to the limited viewing window.
Apart from that, the current hardware of the HoloLens is not capable of providing a fully immersive experience. You can check the specifications here.
As of now, when the environment is populated with more than a few holograms (depending on the triangle count of each hologram), the device's fps drops and a certain lag is visible. I'm sure more processing power will be added to the device in future versions, but as of right now, with the current power of the device, I seriously doubt its ability to populate an entire environment and give a fully immersive experience.
1) The hologram quality is defined by the following specs:
- Holographic Resolution: 2.3M total light points
- Holographic Density: 2.5k light points per radian
It is worth mentioning that Microsoft's holograms disappear under a certain distance, indicated here as 0.85 m.
Personal note: in the past I also worked on Google's Project Tango, and I can tell you from that personal experience that the stability of Microsoft's holograms is absolutely superior. Also, the holograms are kept once the device is turned off, so if you place something and reboot the device you will find it again where you left it, without needing to start from scratch.
2) Absolutely not: "[The field of view] amounts to the size of a monitor in front of you – equivalent to 15 inches", as stated here, and it will not be addressed, as also reported here. So if a hologram's size exceeds this space it will be shown only partially [i.e. cut]. Moreover, the surrounding environment is always visible, because the device's purpose is to interact with the real environment by adding another layer on top.
The HoloLens is not intended to be a VR rig; there is no complete immersion that I am aware of. Yes, you can have solid holograms, but you can always see the real world.
VR is about substituting the real world; that is why VR goggles are always opaque. The HoloLens is a see-through type of device, so you can see the holograms and the real world. It was created for augmented reality, where you augment the real world. That is why you can't use the HoloLens for VR purposes.
Actually, my initial question is: can the HoloLens be used for VR applications as well?
The answer is no, because of the small window (equivalent to a 15'' screen) in which holograms can be placed.
I am sure this will evolve sooner or later in order to improve the AR experience. As long as the screen does not cover the complete view, VR won't be possible with the HoloLens.
The small FOV is a problem for total immersion, but there is an app for HoloLens called HoloTour, which is VR (with a few AR scenes in the beginning). In the game, the user can travel to Rome and Peru. While you can still see through the holograms, in my personal experience, people playing it will really get into it and will forget about the limitations. After a scene or two, they feel immersed. So while it certainly isn't as good at VR as a machine designed for that, it is capable, and it is still typically enjoyable to the users. There are quite a few measures to prevent nausea in the users (I can use mine for hours at a time with no problem) so I would actually prefer it to poorer VR implementations, such as a GearVR (which made me sick after 10 minutes of use!). Surely a larger FOV is in the works, so this will be less of a limitation in future releases.
We are working with cloud recognition. For this, we have to restrict recognition of a particular image target to no more than 2 recognitions on each device.
We know we have to use the VWS API for that. But our question is how we can restrict recognition of an image target only on a particular device, while it still gets recognized on other devices that have not exceeded 2 recognitions.
How can we achieve this?
I thought this was impossible, but after updating to Vuforia 4, I noticed that their prefab scripts use RequireComponent, and this has a lot of interesting applications.
Vuforia basically uses it to make sure the device has a camera, so you can notice that in their prefab scripts you can see RequireComponent(typeof(Camera))
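For reference, here is a minimal sketch of how the RequireComponent attribute is normally declared on a Unity script (the class name below is hypothetical; it just shows the documented usage that appears in the prefab scripts):

    using UnityEngine;

    // Unity automatically adds a Camera when this script is attached and
    // prevents the Camera from being removed while the script is present.
    [RequireComponent(typeof(Camera))]
    public class CameraDependentBehaviour : MonoBehaviour
    {
        void Start()
        {
            // Safe to assume the Camera exists because of RequireComponent.
            Camera cam = GetComponent<Camera>();
            Debug.Log("Camera found on " + cam.name);
        }
    }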
With respect to your problem, you could try something like RequireComponent(iPhone), because while playing with it I noticed that was an option offered for the brackets.
Check it out and let us all know. I haven't been able to try it out myself, so I can't confirm it.
I'm trying to get the position of a finger on a laptop touchpad in Delphi (not the position of the cursor on the screen), so I can use it for drawing purposes. Is this possible? How can I do this? Is there any Windows API or any component for this?
Thanks for your help.
Update
I found software for a Lenovo touchpad that does exactly this. It shows only the position of fingers on the touchpad, and PEiD says it was written in Visual C++. So I guess it is possible, but as David Heffernan said, it depends on the manufacturer of the touchpad and it is hardware-specific.
Coincidentally, I've just spent the last 30 minutes researching this very thing.
Windows supports this through the touch and gestures APIs. These were introduced in Windows 7 but touchpad drivers didn't tend to offer the necessary support until Windows 8 arrived and made it a logo requirement.
Synaptics and Alps seem to be the principal touchpad manufacturers and they have both released updated drivers for Windows 8 which also work on Windows 7. "Multitouch" is the keyword to search for. This is touchpad-model dependent though; I can't find an update for older Alps devices.
In short, this should work on a "Designed for Windows 8" laptop. It may work on Windows 7 and if it doesn't you may be able to get an updated driver.
The short answer is generally no, this is not possible. Touchpad drivers present to the operating system such that they appear and behave like a mouse does. Absolute coordinates are not available. For this application you need a proper touchscreen device or tablet, at least if you are looking for a general solution that is supported by the operating system.
Some touchpads may provide this information through a hardware-specific driver, of course, but you would need to support each device independently, where that is even an option. Synaptics, for example, provides an SDK and drivers that can expose absolute coordinate information.
For tablets or other full-screen digitizers that are supported as "Pen and Touch" inputs, this information is usually obtained through the WM_TOUCH message. Some advanced touchpads may support this; you can always query to discover what features are supported. For those that do, you have to register your application's window to receive touch messages as detailed here:
Getting Started with Windows Touch Messages
Upon receiving a WM_TOUCH message you can obtain detailed information by immediately passing the touch handle to GetTouchInputInfo, which returns an array of TOUCHINPUT structures, each carrying information about an active touch point on the digitizer surface. A minimal sketch of that flow is shown below.
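This is not Delphi, but here is a minimal C#/WinForms sketch of that sequence (RegisterTouchWindow, then WM_TOUCH, then GetTouchInputInfo); a Delphi implementation would import the same user32 calls. The form class is hypothetical and only a few TOUCHINPUT fields are used:

    using System;
    using System.Drawing;
    using System.Runtime.InteropServices;
    using System.Windows.Forms;

    public class TouchProbeForm : Form   // hypothetical example form
    {
        const int WM_TOUCH = 0x0240;

        // Matches the Win32 TOUCHINPUT structure; x/y arrive in hundredths
        // of a pixel of physical screen coordinates.
        [StructLayout(LayoutKind.Sequential)]
        struct TOUCHINPUT
        {
            public int x;
            public int y;
            public IntPtr hSource;
            public int dwID;
            public int dwFlags;
            public int dwMask;
            public int dwTime;
            public IntPtr dwExtraInfo;
            public int cxContact;
            public int cyContact;
        }

        [DllImport("user32.dll")]
        static extern bool RegisterTouchWindow(IntPtr hWnd, uint ulFlags);

        [DllImport("user32.dll")]
        static extern bool GetTouchInputInfo(IntPtr hTouchInput, int cInputs,
            [Out] TOUCHINPUT[] pInputs, int cbSize);

        [DllImport("user32.dll")]
        static extern bool CloseTouchInputHandle(IntPtr hTouchInput);

        protected override void OnHandleCreated(EventArgs e)
        {
            base.OnHandleCreated(e);
            // Opt in to raw WM_TOUCH messages instead of the default gesture handling.
            RegisterTouchWindow(Handle, 0);
        }

        protected override void WndProc(ref Message m)
        {
            if (m.Msg == WM_TOUCH)
            {
                int count = (int)(m.WParam.ToInt64() & 0xFFFF);  // low word = touch point count
                var inputs = new TOUCHINPUT[count];
                if (GetTouchInputInfo(m.LParam, count, inputs, Marshal.SizeOf(typeof(TOUCHINPUT))))
                {
                    foreach (var ti in inputs)
                    {
                        // Convert from hundredths of a screen pixel to client coordinates.
                        Point client = PointToClient(new Point(ti.x / 100, ti.y / 100));
                        Console.WriteLine("touch {0}: {1}", ti.dwID, client);
                    }
                    CloseTouchInputHandle(m.LParam);
                    return;  // message handled
                }
            }
            base.WndProc(ref m);
        }
    }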