Is Google Tango dead?

Does the release of ARCore mean that there will be no more updates to the Tango SDK?
There has been no Tango SDK update for a few months, but on the other hand, if I unzip the Google Constructor APK, I can see it is built with an SDK that no one other than Google has.

The answer is a clear no, because ARCore really is Tango. As in, installing the ARCore preview APK gives you a 'Tango Core' service, which you can see on a non-Tango phone (I use a Pixel XL).
Clay Bavor has even confirmed this in an interview: “There's a lot of things that need to happen to make it successful though,” Bavor admits. “We've always known that it's got to work at scale, so we've been investing in software-only solutions like ARCore, building on all of the Tango technology, just without the additional sensors. ...”
However, if you're asking whether the (previously required) hardware stack for Tango (fisheye camera and IR depth sensor) is 'dead', we're in the realm of speculation. My guess is that ARCore might actually save the hardware stack. With ARCore, more than 100 million devices will soon run Tango, which means there will finally be a strong incentive for developers to release high-quality apps and games. That in turn gives device manufacturers a really good reason to offer specialized Tango hardware, because such hardware will result in a better AR experience (better tracking, additional features, etc.). But this hardware will probably be more varied than the previous Tango hardware stack.

To help confirm Wendelin's answer: I found that if you forget to install arcore-preview.apk on your device, you will see an error like
E/Tango: Java version of Tango Service not found, falling back to tangoservice_d.
E/art: No implementation found for int com.google.atap.tango.TangoJNINative.Initialize(android.content.Context) (tried Java_com_google_atap_tango_TangoJNINative_Initialize and Java_com_google_atap_tango_TangoJNINative_Initialize__Landroid_content_Context_2)
com.google.ar.core.examples.java.helloar D/AndroidRuntime: Shutting down VM
com.google.ar.core.examples.java.helloar E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.google.ar.core.examples.java.helloar, PID: 21663
java.lang.UnsatisfiedLinkError: No implementation found for int com.google.atap.tango.TangoJNINative.Initialize(android.content.Context) (tried Java_com_google_atap_tango_TangoJNINative_Initialize and Java_com_google_atap_tango_TangoJNINative_Initialize__Landroid_content_Context_2)
This, I feel, shows that they just took the Tango software stack and integrated it into a form that doesn't require the depth camera. After all, a lot of the Tango SDK revolved around getting you point clouds, finding planes from those points, creating area description files, and so on. I feel "Tango" is only dead if OEMs stop trying to add more hardware to phones and stick with the RGB camera. I also speculate that the reason there has been no new Tango release is Apple shipping ARKit and Google needing to make a move as well.

Google announced that they will stop supporting Tango on March 1, 2018, and will be focusing all their AR efforts on ARCore.
https://twitter.com/projecttango/status/941730801791549440

Related

A-Frame: FOSS Options for widely supported, markerless AR?

A-Frame's immersive-ar functionality will work on some Android devices I've tested with, but I haven't had success with iOS.
It is possible to use an A-Frame scene for markerless AR on iOS using a commercial external library; for example, this demo from Zapworks using their A-Frame SDK: https://zappar-xr.github.io/aframe-example-instant-tracking-3d-model/
The tracking seems to be nowhere near as good as A-Frame's hit-test demo (https://github.com/stspanho/aframe-hit-test), but it does seem to work on virtually any device and browser I've tried, and it is good enough for the intended purpose.
I would be more than happy to fall back to a lower-quality AR mode in order to have AR at all on devices that don't support immersive-ar in the browser. I have not been able to find an A-Frame-compatible solution that uses only free/open-source components for doing this, only commercial products like Zapworks and 8th Wall.
Is there a free / open source plugin for A-Frame that allows a scene to be rendered with markerless AR across a very broad range of devices, similar to Zapworks?
I ended up rolling my own solution, which wasn't complete but was good enough for the project. Strictly speaking, there are three problems to overcome to get a markerless AR experience on mobile without relying on WebXR:
Webcam display
Orientation
Position
Webcam display is fairly trivial to implement in HTML5 without any libraries.
Orientation is already handled nicely by A-Frame's "magic window" functionality, including on iOS.
Position was tricky, and I wasn't able to solve it. I attempted to use the FULLTILT library's accelerometer functions, but even with gravity filtered out of the readings I wasn't able to get a high enough level of accuracy. (It happened that this particular project did not need it.)

ARCore on 96Boards HiKey board

I want to run ARCore on a HiKey board; I am currently running Android 9.0 AOSP on it. Is it possible to run an AR application on a HiKey board with an external USB camera? Do I need a specific camera, or is there anything else I need in order to run AR applications on the HiKey board? Does the HiKey even support ARCore? If you could answer this, it would really help me.
Thank you.
ARCore has some minimum system requirements, including sensors, cameras, etc.
I don't believe there is an official set of minimum requirements published openly at this point, but there is a list of supported devices: https://developers.google.com/ar/discover/supported-devices
Google actually tests and certifies these devices, so I don't think you will find 'official' support for your setup - they say:
To certify each device, we check the quality of the camera, motion sensors, and the design architecture to ensure it performs as expected. Also, the device needs to have a powerful enough CPU that integrates with the hardware design to ensure good performance and effective real-time calculations.

Is the HoloLens VR-ready?

The question is already quite direct and short:
Can the HoloLens be used as virtual reality glasses?
Apologies in advance if the answer is obvious to those who have tried them out, but I have not yet had the chance.
From what I have read, I know that they have been designed to be a very good augmented reality tool. That much is clear to everybody.
But there may be applications where you simply don't want the user to have any spatial contact with reality for a few moments, or others where you want the user to become so immersed in the experience that they forget where they are. In those cases, a complete environment should be shown, as we are used to with virtual reality glasses.
Is the HoloLens ready for this? I think there are two key sub-questions to answer:
How solid are the holograms?
Does the screen where holograms can be placed cover the complete view?
As others have already pointed out, this is a solid no, due to the limited viewing window.
Apart from that, the current hardware of the HoloLens is not capable of providing a fully immersive experience. You can check the specifications here.
As of now, when the environment is populated with more than a few holograms (depending on the triangle count of each hologram), the device's frame rate drops and a certain lag is visible. I'm sure more processing power will be added to the device in future versions, but with its current power, I seriously doubt its ability to populate an entire environment to give a fully immersive experience.
1) The hologram quality is defined by the following specs:
- Holographic Resolution: 2.3M total light points
- Holographic Density: 2.5k light points per radian
It is worth noting that Microsoft holograms disappear below a certain distance, indicated here as 0.85 m.
Personal note: in the past I also worked on Google Project Tango, and I can tell you from personal experience that the stability of Microsoft's holograms is absolutely superior. Also, the holograms are kept once the device is turned off, so if you place something and reboot the device, you will find it again where you left it, without the need to start from scratch.
2) Absolutely not: "[The field of view] amounts to the size of a monitor in front of you – equivalent to 15 inches", as stated here, and it will not be addressed, as also reported here. So if a hologram's size exceeds this space, it will be shown partially [i.e. cut off]. Moreover, the surrounding environment is always visible, because the device's purpose is to interact with the real environment by adding another layer on top.
The HoloLens is not intended to be a VR rig; there is no complete immersion that I am aware of. Yes, you can have solid holograms, but you can always see the real world.
VR is about substituting the real world; that is why VR goggles are always opaque. The HoloLens is a see-through type, so you can see the hologram and the real world at the same time. It was created for augmented reality, where you augment the real world. That is why you can't use the HoloLens for VR purposes.
Actually, my initial question is: can the HoloLens be used for VR applications as well?
The answer is no, because of the small window (equivalent to a 15'' screen) where the holograms can be placed.
I am sure this will evolve sooner or later in order to improve the AR experience, but as long as the screen does not cover the complete view, VR won't be possible with the HoloLens.
The small FOV is a problem for total immersion, but there is an app for HoloLens called HoloTour, which is VR (with a few AR scenes in the beginning). In the game, the user can travel to Rome and Peru. While you can still see through the holograms, in my personal experience, people playing it will really get into it and will forget about the limitations. After a scene or two, they feel immersed. So while it certainly isn't as good at VR as a machine designed for that, it is capable, and it is still typically enjoyable to the users. There are quite a few measures to prevent nausea in the users (I can use mine for hours at a time with no problem) so I would actually prefer it to poorer VR implementations, such as a GearVR (which made me sick after 10 minutes of use!). Surely a larger FOV is in the works, so this will be less of a limitation in future releases.

General GPU programming on iPhone [duplicate]

With the push towards multimedia-enabled mobile devices, this seems like a logical way to boost performance on these platforms while keeping general-purpose software power-efficient. I've been interested in the iPad hardware as a development platform for UI and data display/entry usage, but I am curious how much processing capability the device itself offers. OpenCL would make it a juicy hardware platform to develop on, even though the licensing seems like it kind of stinks.
OpenCL is not yet part of iOS.
However, the newer iPhones, iPod touches, and the iPad all have GPUs that support OpenGL ES 2.0, which lets you create your own programmable shaders to run on the GPU. That would let you do high-performance parallel calculations; while not as elegant as OpenCL, you might be able to solve many of the same problems.
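To make the shader route concrete, here is a minimal sketch of the "compile a compute-style fragment shader" step, using only standard OpenGL ES 2.0 C API calls. It assumes a GL context has already been created and made current (on iOS that part is done with EAGL in Objective-C), and the kernel itself is a made-up example that just squares its input:

#include <stdio.h>
#include <OpenGLES/ES2/gl.h>   /* iOS header; on other platforms use <GLES2/gl2.h> */

/* Hypothetical per-pixel "kernel": squares the red channel of each texel.
   For GPGPU work you would encode input data in a texture, render a
   full-screen quad into a framebuffer object, and read the result back
   with glReadPixels. */
static const char *kernel_src =
    "precision highp float;\n"
    "uniform sampler2D u_input;\n"
    "varying vec2 v_texcoord;\n"
    "void main() {\n"
    "    float x = texture2D(u_input, v_texcoord).r;\n"
    "    gl_FragColor = vec4(x * x, 0.0, 0.0, 1.0);\n"
    "}\n";

/* Compile one shader stage; returns 0 and prints the log on failure.
   Assumes an OpenGL ES 2.0 context is current on this thread. */
static GLuint compile_shader(GLenum type, const char *src)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[512];
        glGetShaderInfoLog(shader, sizeof log, NULL, log);
        fprintf(stderr, "shader compile failed: %s\n", log);
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}

The usual GPGPU pattern on ES 2.0 is then: upload the input data as a texture, attach this "kernel" (plus a pass-through vertex shader) to a program with glCreateProgram / glAttachShader / glLinkProgram, draw a full-screen quad into a framebuffer object, and read the results back with glReadPixels.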
Additionally, iOS 4.0 brought with it the Accelerate framework which gives you access to many common vector-based operations for high-performance computing on the CPU. See Session 202 - The Accelerate framework for iPhone OS in the WWDC 2010 videos for more on this.
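As a small illustration of the Accelerate route (a generic vDSP sketch of my own, not code from that session), the vDSP functions give you SIMD-accelerated vector primitives on the CPU:

#include <stdio.h>
#include <Accelerate/Accelerate.h>   /* link with -framework Accelerate */

int main(void)
{
    /* Element-wise result = a * b + c, computed on the CPU's vector unit
       with two vDSP primitives (the data here is just a toy example). */
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {0.5f, 0.5f, 0.5f, 0.5f};
    float c[4] = {1.0f, 1.0f, 1.0f, 1.0f};
    float tmp[4], result[4];

    vDSP_vmul(a, 1, b, 1, tmp, 1, 4);      /* tmp    = a * b  */
    vDSP_vadd(tmp, 1, c, 1, result, 1, 4); /* result = tmp + c */

    for (int i = 0; i < 4; i++)
        printf("%f\n", result[i]);         /* 1.5, 2.0, 2.5, 3.0 */
    return 0;
}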
Caution! This question is ranked as the 2nd result by Google; however, most answers here (including mine) are out of date. People interested in OpenCL on iOS should visit more up-to-date entries like this one: https://stackoverflow.com/a/18847804/443016.
http://www.macrumors.com/2011/01/14/ios-4-3-beta-hints-at-opencl-capable-sgx543-gpu-in-future-devices/
The iPad 2's GPU, the PowerVR SGX543, is capable of OpenCL.
Let's wait and see which iOS release will bring OpenCL APIs to us. :)
Following from nacho4d:
There is indeed an OpenCL.framework in iOS 5's private frameworks directory, so I would suppose iOS 6 is the one to watch for OpenCL.
Actually, I've seen it in OpenGL-related crash logs for my iPad 1, although that could just be the CPU (implementing parts of the graphics stack, perhaps, like on OS X).
You can compile and run OpenCL code on iOS using the private OpenCL framework, but you probably won't get such a project into the App Store (Apple doesn't want you to use private frameworks).
Here is how to do it:
https://github.com/linusyang/opencl-test-ios
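For orientation, OpenCL's host API is plain C, so if the private framework exposes the standard headers, the flow looks like any desktop OpenCL program. This is a generic OpenCL 1.1 sketch (with error checking omitted for brevity), not code from the repository above; see that repo for the actual iOS recipe:

#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>   /* on iOS this would be the private framework */
#else
#include <CL/cl.h>
#endif

/* Toy kernel: double every element of a float buffer. */
static const char *kernel_src =
    "__kernel void twice(__global float *buf) {\n"
    "    size_t i = get_global_id(0);\n"
    "    buf[i] = 2.0f * buf[i];\n"
    "}\n";

int main(void)
{
    float data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    size_t n = 8;

    /* Standard host-side setup: platform -> device -> context -> queue. */
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Build the kernel from source and bind its single buffer argument. */
    cl_program program = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(program, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(program, "twice", NULL);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof data, data, NULL);
    clSetKernelArg(kernel, 0, sizeof buf, &buf);

    /* Run 8 work items, then block on reading the results back. */
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof data, data, 0, NULL, NULL);

    for (int i = 0; i < 8; i++)
        printf("%g ", data[i]);   /* expect: 2 4 6 8 10 12 14 16 */
    printf("\n");
    return 0;
}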
OpenCL? Not yet.
A good way of guessing the next public frameworks in iOS is by looking at the private frameworks directory.
If you see what you are looking for there, then there is a chance it is coming.
If not, then wait for the next release and look again in the private stuff.
I guess Core Image is coming first, because OpenCL is too low-level ;)
Anyway, this is just a guess.

3rd-party iOS SDKs?

My team and I were just starting to get the evaluation version of the AirPlay SDK up and running when their pricing structure changed dramatically, along with a change of name to Marmalade. I don't think we can afford them at this time, since we just purchased a MacBook Pro and still need to pay for the Apple Developer Program and local business licensing fees.
Can you point me in the direction of any other inexpensive 3rd-party SDKs that might provide similar features? Right now we don't care so much about compiling for other platforms - I feel that when we are ready for that, we will also be ready to license Marmalade or some other SDK. I am aware of GameSalad, but I come from a programming background; I am also aware of cocos2d, but was hoping for the option of 3D graphics libraries.
Depending on your 3D requirements, I would recommend cocos2d, because there is an additional library in fairly early development called cocos3d which, as you'd expect, adds 3D capability to cocos2d.
Maybe ShiVa3D: http://www.stonetrip.com/.
Perhaps they will change their pricing, since they cooperated with AirPlay.
