Using three.js, can I place a viewport in a scene between the near field and far field? (WebGL)

I am unsure whether what I am trying to achieve is possible using three.js.
What I aim to do is place the viewport within the frustum.
I want to create a 3D environment in which objects can appear between the viewport and the camera.
So if I move the camera, it should appear that the object is rendered in 3D in front of the screen - i.e. the generated parallax gives the impression that it is projected out of the screen when things move.
I just can't find a way to do it. I am certain I did this a number of years ago in Flash, so I thought I would try three.js and WebGL.

Related

Detecting a real-world object using ARKit with iOS

I am currently playing a bit with ARKit. My goal is to detect a shelf and draw stuff onto it.
I already found ARReferenceImage, and that basically works for a very, very simple prototype, but it seems the image needs to be quite complex? Xcode always complains if I try to use something much simpler (like a QR-code-like image). With that marker I would know the position of an edge, and from there the physical size of my shelf and how to place stuff onto it. So that would be OK, but I think small and simple markers will not work, right?
But ideally I would not need a marker at all.
I know that I can detect e.g. planes, but I want to detect the shelf itself. But as my shelf is open, it's not really a plane. Are there other possibilities to find an object using ARKit?
I know that my question is very vague, but maybe somebody could point me in the right direction. Or tell me if that's even possible with ARKit or if I need other tools? Like Unity?
There are several different possibilities for positioning content in augmented reality. They are called content anchors, and they are all subclasses of the ARAnchor class.
Image anchor
Using an image anchor, you would stick your reference image on a pre-determined spot on the shelf and position your 3D content relative to it.
"the image needs to be quite complex it seems? Xcode always complains if I try to use something a lot simpler (like a QR-code-like image)"
That's correct. The image needs to have enough visual detail for ARKit to track it. Something like a simple black and white checkerboard pattern doesn't work very well. A complex image does.
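For reference, here is a minimal sketch of what that setup might look like with ARKit and SceneKit. This is not from the original answer: the "ShelfMarkers" resource group name and the box content are placeholders you would replace with your own.

```swift
import UIKit
import ARKit
import SceneKit

// Sketch: detect a reference image stuck to the shelf and place content next to it.
// "ShelfMarkers" is an assumed AR Resource Group in the asset catalog.
class ShelfViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.delegate = self

        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages = ARReferenceImage.referenceImages(
            inGroupNamed: "ShelfMarkers", bundle: nil) ?? []
        sceneView.session.run(configuration)
    }

    // ARKit adds an ARImageAnchor when it recognises the reference image.
    // The node it hands you tracks the marker, so child nodes are positioned
    // relative to the marker (units are metres).
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }

        // physicalSize is the real-world size you entered for the reference image,
        // so you can offset content relative to the marker's edge.
        let markerWidth = Float(imageAnchor.referenceImage.physicalSize.width)
        let box = SCNNode(geometry: SCNBox(width: 0.05, height: 0.05,
                                           length: 0.05, chamferRadius: 0))
        box.position = SCNVector3(markerWidth / 2 + 0.05, 0, 0)  // just beside the marker
        node.addChildNode(box)
    }
}
```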
Object anchor
Using object anchors, you scan the shape of a 3D object ahead of time and bundle this data file with your app. When a user uses the app, ARKit will try to recognise this object and if it does, you can position your 3D content relative to it. Apple has some sample code for this if you want to try it out quickly.
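As a rough sketch (assumed names, not from the answer), the detection setup looks very similar to image detection; "ShelfScan" stands in for the AR Resource Group containing the .arobject file produced by Apple's scanning sample.

```swift
import ARKit

// Sketch: run a world-tracking session that looks for a previously scanned object.
func runObjectDetection(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionObjects = ARReferenceObject.referenceObjects(
        inGroupNamed: "ShelfScan", bundle: nil) ?? []
    sceneView.session.run(configuration)
    // When ARKit recognises the object it adds an ARObjectAnchor to the session;
    // attach your 3D content to the corresponding node in the ARSCNViewDelegate,
    // just like with the image anchor above.
}
```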
Manually creating an anchor
Another option would be to enable ARKit plane detection, and have the user tap a point on the horizontal shelf. Then you perform a raycast to get the 3D coordinate of this point.
You can create an ARAnchor object using this coordinate, and add it to the ARSession.
Then you can again position your content relative to the anchor.
You could also implement a drag gesture to let the user fine-tune the position along the shelf's plane.
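A sketch of that tap-to-place flow might look like the following, extending the hypothetical ShelfViewController from the image-anchor sketch above; the handleTap(_:) method and the "shelfAnchor" name are placeholders, and it assumes the session configuration had planeDetection enabled.

```swift
import UIKit
import ARKit

extension ShelfViewController {
    // Register somewhere in viewDidLoad, e.g.:
    // sceneView.addGestureRecognizer(
    //     UITapGestureRecognizer(target: self, action: #selector(handleTap(_:))))
    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)

        // Raycast from the tapped screen point onto detected horizontal planes.
        // Assumes the configuration was run with planeDetection = [.horizontal].
        guard let query = sceneView.raycastQuery(from: point,
                                                 allowing: .existingPlaneGeometry,
                                                 alignment: .horizontal),
              let result = sceneView.session.raycast(query).first else { return }

        // Create an anchor at the hit location and add it to the session.
        // Content attached to this anchor's node will stay fixed on the shelf.
        let anchor = ARAnchor(name: "shelfAnchor", transform: result.worldTransform)
        sceneView.session.add(anchor: anchor)
    }
}
```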
Conclusion
Which one of these placement options is best for you depends on the use case of your app. I hope this answer was useful :)
References
There are a lot of informative WWDC videos about ARKit. You could start off by watching this one: https://developer.apple.com/videos/play/wwdc2018/610
It is absolutely possible. Whether you do this in Swift or Unity depends entirely on what you are comfortable working in.
ARKit calls these object anchors (https://developer.apple.com/documentation/arkit/arobjectanchor); in other implementations they are often called mesh or model targets.
This YouTube video shows what you want to do in Swift.
But objects like a shelf might be hard to recognize since their content often changes.

Marker-based augmented reality not aligning properly to the marker on mobile devices only

I'm working on creating a marker-based AR game using A-Frame 1.2.0 and AR.js 3.3.3. The display shows 2D images of animals that the user has to "find". The whole game functions well now, but I was running into an issue of photos appearing distorted or warped. I figured out that the issue is that the marker's plane is not being read correctly by mobile devices. The pictures below include a red cube to show the issue better. The top one is from a PC's webcam and correctly shows the box mounted to the marker. The bottom one shows the box not mounted to the marker.
I figure that the issue is either the mobile device's gyroscope or that the screen dimensions are affecting the aspect ratio.
I've tried a few properties on A-Frame's a-entity, such as look-controls="enabled: false" and look-controls="magicWindowTrackingEnabled: false". Neither of those made a difference. I haven't found properties within AR.js to use. Just wondering if anyone has come across this issue and found a fix.
[Image: box aligned correctly with the marker]
[Image: box not aligned correctly with the marker]
AR.js comes in two different, mutually exclusive builds: one for image and location-based tracking, and one for marker tracking.
Importing the wrong one can cause incorrect behavior like the one you are experiencing.

Placing objects below the ground in AR Quick Look on iOS

I am working on a project that will display objects below the ground using AR Quick Look. However, the AR mode seems to bring everything above the ground based on the bounding box of the objects in the scene.
I have tried using the USDZ directly and composing a simple scene in Reality Composer with the object or with a simple cube, with the exact same result. The AR preview mode in Reality Composer shows the object below the ground, or below an image anchor, correctly. However, if I export the scene as a .reality file and open it using AR Quick Look, it brings the object above the ground as well.
Is there a way to achieve showing an object below the detected horizontal plane or image (horizontal) using AR Quick Look?
This is still an issue a year later. I have submitted feedback to Apple. I suggest you do too. I have suggested adding a checkbox to keep Y axis persistent. My assumption is this behaves this way to prevent the object from colliding with the ground, but I don't think it's necessary. It's just a limitation right now.

How to turn AR on/off in an augmented reality app using ARKit?

I'm starting to learn how to use ARKit, and I would like to add a button like the one in the Pokémon GO application where you can switch between AR ON (with a model in the real world) and AR OFF (without using the camera, having just the 3D model on a fixed background). Is there an easy way to do it?
Another good example of what you're asking about is the AR Quick Look feature in iOS 12 (see WWDC video or this article): when you quick look a USDZ file you get a generic white-background preview where you can spin the object around with touch gestures, and you can seamlessly switch back and forth between that and a real-world AR camera view.
You've asked about ARKit but not said anything about which renderer you're using. Remember, ARKit itself only tells you about the real world and provides live camera imagery, but it's up to you to display that image and whatever 3D overlay content you want — either by using a 3D graphics framework like SceneKit, Unity, or Unreal, or by creating your own renderer with Metal. So the rest of this answer is renderer-agnostic.
There are two main differences between an AR view and a non-AR 3D view of the same content:
An AR view displays the live camera feed in the background; a non-AR view doesn't.
3D graphics frameworks typically involve some notion of a virtual camera that determines your view of the 3D scene — by moving the camera, you change what part of the scene you see and what angle you see it from. In AR, the virtual camera is made to match the movement of the real device.
Hence, to switch between AR and non-AR 3D views of the same content, you just need to manipulate those differences in whatever way your renderer allows:
Hide the live camera feed. If your renderer lets you directly turn it off, do that. Otherwise you can put some foreground content in front of it, like an opaque skybox and/or a plane for your 3D models to rest on.
Directly control the camera yourself and/or provide touch/gesture controls for the user to manipulate the camera. If your renderer supports multiple cameras in the scene and choosing which one is currently used for rendering, you can keep and switch between the ARKit-managed camera and your own.
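As a rough, hedged sketch of what that toggle could look like with SceneKit and an ARSCNView: the property names below are placeholders, and ARSCNView manages its own background and point of view while a session is running, so treat this as a starting point rather than the definitive approach.

```swift
import UIKit
import ARKit
import SceneKit

// Sketch only. `fixedCameraNode` and `arCameraNode` are hypothetical helpers for
// remembering which camera to use in each mode.
class ARToggleViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!

    private var arCameraNode: SCNNode?          // the camera ARKit was driving
    private lazy var fixedCameraNode: SCNNode = {
        let node = SCNNode()
        node.camera = SCNCamera()
        node.position = SCNVector3(0, 0.3, 1)   // arbitrary non-AR vantage point
        sceneView.scene.rootNode.addChildNode(node)
        return node
    }()

    func setARMode(_ arEnabled: Bool) {
        if arEnabled {
            // AR on: hand the point of view back to ARKit and restart the session,
            // which also restores the live camera feed as the background.
            sceneView.allowsCameraControl = false
            if let camera = arCameraNode { sceneView.pointOfView = camera }
            let configuration = ARWorldTrackingConfiguration()
            configuration.planeDetection = [.horizontal]
            sceneView.session.run(configuration,
                                  options: [.resetTracking, .removeExistingAnchors])
        } else {
            // AR off: remember ARKit's camera, pause the session, replace the
            // camera feed with a plain background, and let the user drive the camera.
            arCameraNode = sceneView.pointOfView
            sceneView.session.pause()
            sceneView.scene.background.contents = UIColor.white
            sceneView.pointOfView = fixedCameraNode
            sceneView.allowsCameraControl = true  // built-in orbit/pan/zoom gestures
        }
    }
}
```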

DeepEnd (Cydia tweak)-like 3D background for iOS

I am trying to emulate a 3D background in one of the applications we are developing.
Check this video to see what I am trying to do: http://www.youtube.com/watch?v=429kM-yXGz8
Here is what I am doing to emulate this 3D illusion in our application for iPad.
I have a root view with 3 rounded buttons centered on the screen which animate in a circular motion.
At the bottom of the screen I have some banners of size 600×200 which keep rotating with a flip animation.
I also have some graphical text that is part of the background which contains the "Welcome message".
All elements are individual graphics, so when the user moves the iPad we only move the background, based on the position of the iPad using the x, y, z coordinates of the accelerometer.
The background moves accordingly; however, this is not enough to create the 3D illusion, so we decided to add shadows to the graphical elements (buttons, banners, text) and move the shadows according to the iPad's position.
However, the result is not convincing, and the accelerometer does not update its values if the user moves the iPad left and right while standing up and facing the iPad straight on.
I was wondering if anyone has tried to achieve something similar with success, or knows of any resources on how to achieve this? I am just confused about whether using only the accelerometer will work, or whether I should go with the gyroscope.
Using face detection to simulate a 3D effect has already been done (by me). You can download a complete sample from http://evict.nl/code/face-tracking See the video on that page for a quick demo.
You should definitely use both: the accelerometer (movement) and the gyroscope (device angle). But for a true 3D effect you probably need to use the camera plus face detection.
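Here is a minimal sketch (not from either answer) of the accelerometer + gyroscope part, using CMDeviceMotion, which fuses both sensors; the `backgroundView` parameter and the 40-point travel range are arbitrary placeholders.

```swift
import CoreMotion
import UIKit

// Sketch: read the fused device attitude and nudge a background view to fake parallax.
class ParallaxController {
    private let motionManager = CMMotionManager()

    func start(moving backgroundView: UIView) {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
            guard let attitude = motion?.attitude else { return }
            // Device motion combines accelerometer and gyroscope, so roll/pitch
            // keep responding to rotations that a raw accelerometer barely registers.
            let maxOffset: CGFloat = 40
            backgroundView.transform = CGAffineTransform(
                translationX: CGFloat(attitude.roll) * maxOffset,
                y: CGFloat(attitude.pitch) * maxOffset)
        }
    }

    func stop() {
        motionManager.stopDeviceMotionUpdates()
    }
}
```

Because the attitude comes from sensor fusion, it keeps updating when the iPad is turned left and right while held upright, which is exactly the case where the raw accelerometer in the question stopped reporting useful values.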
