I want to create a website homepage made up of only a 360° viewer showing a photorealistic 3D model which you can navigate around and interact with (WebVR style). What I am trying to identify is whether it is possible to embed a second 360 viewer into this, so that when you navigate to a specific point, you can view another 360 image / video within the current one.
I'm looking at using A-Frame for the first 360 image, with an embedded viewer inside that. A solution has been proposed using advanced WebGL techniques, but I don't know enough about it to make this decision.
The site needs to be responsive and compatible with standard browsers, tablets and mobile, and (fingers crossed) offer the option to use a headset.
Any advice / suggestions would be greatly appreciated!
IMO the biggest part here is the design.
As for compatibility, keep in mind that PC (mouse) / mobile (gaze) / HMD (controllers) need different UIs.
As for 360-in-360: there is no need to embed anything.
1) Have your model in the middle
2) When you navigate somewhere, show a sphere around the camera with the image, and keep some controls (like a "back" button)
You can achieve it like this:
HTML
<a-scene>
  <a-entity id="model"></a-entity>
  <a-camera><a-cursor></a-cursor></a-camera> <!-- a cursor is needed for click events -->
  <a-entity id="button" geometry="primitive: plane" position="0 1.6 -2" show="src: myPic.png"></a-entity>
  <a-sphere id="photosphere" radius="5" material="side: back" visible="false"></a-sphere>
</a-scene>
JS
AFRAME.registerComponent("show", {
  schema: { src: { type: "string" } },
  init: function () {
    let sphere = document.querySelector("#photosphere");
    this.el.addEventListener("click", (e) => {
      // center the sphere on the camera and show the 360 image on its inside
      let pos = document.querySelector("[camera]").getAttribute("position");
      sphere.setAttribute("position", pos);
      sphere.setAttribute("material", "src", this.data.src);
      sphere.setAttribute("visible", true);
    });
  }
});
Check it out in my fiddle.
You can hide the model when the photosphere + UI is shown; the exact execution is up to you.
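For instance, a minimal sketch of a hypothetical "back" control (the component name and the visibility toggling are mine, not part of the original fiddle):
AFRAME.registerComponent("back-button", {
  init: function () {
    let model = document.querySelector("#model");
    let sphere = document.querySelector("#photosphere");
    this.el.addEventListener("click", () => {
      sphere.setAttribute("visible", false); // leave the 360 view
      model.setAttribute("visible", true);   // bring the model back
    });
  }
});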
Is there any proof of concept of how to implement multiple AR markers w/ A-Frame?
Ex. Something like this: https://www.youtube.com/watch?v=Y8WEGGbLWlA
The first video in this post from Alexandra Etienne is the effect I’m aiming for (multiple distinct AR "markers" with distinct content): https://medium.com/arjs/area-learning-with-multi-markers-in-ar-js-1ff03a2f9fbe
I'm a bit unclear on whether, when using multiple markers, they need to be close to each other / exist in the same camera view.
This example from the AR.js repo uses multiple markers, but they're all of different types (i.e. one is a Hiro marker, one is a Kanji marker, etc.): https://github.com/jeromeetienne/AR.js/blob/master/aframe/examples/multiple-independent-markers.html
TL;DR: working Glitch here. Learn the area (the image is in the assets), click the accept button, and toggle the marker helpers.
Now, a bit of detail:
1) Loading saved area data
Upon initialisation, when AR.js detects that you want to use the marker-area preset, it tries to grab a localStorage reference:
localStorage.getItem("ARjsMultiMarkerFile")
The most important data there is an array of pairs (a marker preset plus a .patt file URL) which will be used to create the area.
Note: by default it's just the Hiro marker.
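If you want to check what was stored, a minimal sketch (the key is the one above; the exact shape of the entries is an assumption, so inspect your own localStorage to confirm):
// Read the multi-marker area file that the learner page saved.
const raw = localStorage.getItem("ARjsMultiMarkerFile");
if (raw !== null) {
  const areaFile = JSON.parse(raw); // the file is stored as JSON
  console.log(areaFile);            // e.g. entries pairing a marker preset with a .patt URL
} else {
  console.log("No learned area yet - run the learner page first.");
}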
2) Creating an area data file
When you have debugUIEnabled set to true:
<a-scene embedded arjs='sourceType: webcam; debugUIEnabled: true'>
A button "Learn-new-marker-area" shows up.
Clicking it redirects you to a screen where you can save the area file.
There is one catch: by default the loaded learner site is on another domain.
Strictly speaking: ARjs.Context.baseURL = 'https://jeromeetienne.github.io/AR.js/three.js/'
Any data saved there won't be loaded on our website, since localStorage is isolated per origin.
To save and use the marker area, you have to create your own learner.html. It can be identical to the original; just keep in mind you have to host it on the same domain.
To make the debugUI button redirect the user to your learner html file, you need to set
ARjs.AnchorDebugUI.MarkersAreaLearnerURL = "myLearnerUrl.html"
before the <a-marker>s are initialized. Just do it in the <head>.
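For example, a minimal sketch of that head setup (the script file names and the learner URL are placeholders for your own):
<head>
  <script src="aframe.min.js"></script>
  <script src="aframe-ar.js"></script>
  <script>
    // Point the debug UI's "learn" button at our own same-origin learner page,
    // before any <a-marker> initializes.
    ARjs.AnchorDebugUI.MarkersAreaLearnerURL = "myLearnerUrl.html";
  </script>
</head>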
Once on the learner site, make sure the camera sees all the markers, and approve the learning.
Once approved, you will be redirected back to your website, the area file will be loaded, and the data will be used.
As @mnutsch stated, AR.js does what you want.
You can display two different models on two different markers. If the camera doesn't see one of the markers, the model vanishes (or stays where it was last, depending on your implementation).
The camera doesn't need to see both.
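A minimal sketch of two independent markers (the model URLs are placeholders):
<a-scene embedded arjs="sourceType: webcam;">
  <!-- each marker is tracked independently; the camera need not see both -->
  <a-marker preset="hiro">
    <a-entity gltf-model="url(model1.glb)"></a-entity>
  </a-marker>
  <a-marker preset="kanji">
    <a-entity gltf-model="url(model2.glb)"></a-entity>
  </a-marker>
  <a-entity camera></a-entity>
</a-scene>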
Screenshot:
https://www.dropbox.com/s/i21xt76ijrsv1jh/Screenshot%202018-08-20%2011.25.22.png?dl=0
Project:
https://curious-electric.com/w/experiments/aframe/ar-generic/
Also, unlike Vuforia, there is no 'extended tracking': once the marker is out of sight, you can't track anymore.
I have a requirement to show text on a piece of paper when a mobile device is pointed at the paper; this is basically an augmented reality feature.
There will be a marker on the paper, and the app itself will recognise the marker and should place dynamic text received from a server. So this text will change.
As a start I decided to go with the Vuforia SDK, since it has more support than any other SDK available. I managed to run their sample apps and show their "tea pot" on my own marker.
But now comes the hardest part: I need to render text on the marker instead of the teapot. So it seems I have two options:
1) Using Unity, create a 2D text object
2) Using OpenGL, render the text the way the teapot is rendered
So my question is: what is the appropriate way to do this? I know OpenGL is not easy to implement, and Unity will produce multiple unnecessary files; both have pros and cons.
What is the best way to do this?
You get your text from the server, let's say as a JSON file. Parse it and apply the result to a TextMesh object or a Text object in a world-space canvas.
https://docs.unity3d.com/Manual/class-TextMesh.html
https://unity3d.com/fr/learn/tutorials/topics/user-interface-ui/ui-text
You can place those objects in place of the teapot under the image target, or modify the DefaultTrackableEventHandler so that, instead of toggling the Collider and Renderer components on child objects, it performs your own action on any object.
Look at the OnTrackingFound/OnTrackingLost methods.
So after reading a few articles I managed to put a GUI.Label on top of a real-world object. It was all about using WorldToScreenPoint and setting it as the position:
void OnGUI() {
    Camera cam = Camera.current;
    GUIStyle textStyle = new GUIStyle();
    textStyle.fontSize = 40;
    textStyle.normal.textColor = Color.yellow;
    // project the tracked target's world position into screen space
    Vector3 pos = cam.WorldToScreenPoint(targetPosition);
    // GUI coordinates start at the top-left, screen coordinates at the bottom-left, hence the y flip
    GUI.Label(new Rect(pos.x - 40, Screen.height - pos.y, 250, 200), "Product Price", textStyle);
}
I am interested in VR and am trying to get a bit more information. I want to create an experience on iOS where I can take a 360 image and view it on an iOS device by tilting the phone around, using the device's gyroscope: as I tilt the phone, it pans around the 360 image (like the tilt gesture in Google Street View).
And something similar to this app: http://bubb.li/
Can anybody give a brief overview of how this would be doable, plus any sources that could help me achieve this, APIs, etc.?
Much appreciated.
Two options here: You can use a dedicated device to capture the image for you, or you can write some code to stitch together multiple images taken from the iOS device as you move it around a standing point.
I've used the Ricoh Theta for this (no affiliation). They have a 360 viewer in the SDK for mapping 360 images to a sphere that works exactly as you've asked.
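(The Theta's viewer is native code, but for illustration, here is the core sphere-mapping idea in three.js terms, matching the other snippets on this page; the image path is a placeholder:)
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 100);
// An equirectangular 360 image textured on the inside of a sphere, camera at the center.
const geometry = new THREE.SphereGeometry(50, 60, 40);
geometry.scale(-1, 1, 1); // invert the sphere so the texture faces inward
const texture = new THREE.TextureLoader().load("pano.jpg");
scene.add(new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture })));
// Gyroscope input then rotates the camera each frame to pan around the image.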
Assuming you've figured out how to create 360 photospheres, you can use Unity, Unreal, and probably other development platforms to create navigation between the locations you captured.
Here is a tutorial that looks pretty detailed for doing this in Unity:
https://tutorialsforvr.com/creating-virtual-tour-app-in-vr-using-unity/
One pro of doing this in something like Unity or Unreal is that once you have navigation between multiple photospheres working, it's fairly easy to add animation or other interactive elements. I've seen interactive stories done with 360 video using this method.
(I see that the question is from a fairly long time ago, but it was one of the top results when I looked for this same topic)
I have a simple form that loads and plays a video using the media player and media player control components. Is there a way to resize the video played, using a scrollbar or something similar, in RAD Studio XE6?
Resizing is usually done by resizing the form hosting the viewport, not with scrollbars. Try putting the media player in a sizable form or panel with anchors on all four sides so it expands and contracts as you adjust the corners of the form.
You can implement a sort of zoom-in or zoom-out functionality by adjusting the DisplayRect property:
http://docwiki.embarcadero.com/Libraries/XE6/en/Vcl.MPlayer.TMediaPlayer.DisplayRect
First you would probably want to set DisplayRect to the size of the control/component which you have selected as the rendering target using the Display property:
http://docwiki.embarcadero.com/Libraries/XE6/en/Vcl.MPlayer.TMediaPlayer.Display
In order to avoid uneven video stretching, I recommend you add the necessary code to calculate proper DisplayRect dimensions while maintaining the aspect ratio, as sketched below.
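The proportion itself is language-agnostic; here is a quick sketch of the aspect-fit math (a hypothetical helper, written in JavaScript like the other snippets on this page):
// Fit a video of (videoW x videoH) inside a target of (boxW x boxH)
// while preserving the aspect ratio; returns the DisplayRect size to use.
function aspectFit(videoW, videoH, boxW, boxH) {
  const scale = Math.min(boxW / videoW, boxH / videoH);
  return { width: Math.round(videoW * scale), height: Math.round(videoH * scale) };
}
// e.g. aspectFit(640, 480, 800, 450) -> { width: 600, height: 450 }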
If you want, you can make the DisplayRect even larger than your rendering control. By doing this you effectively achieve that zoom-in effect.
Do note that this only stretches the video content, so you may experience a loss of quality, because TMediaPlayer doesn't use any special filters for scaling the way many commercial media players do.
EDIT: I don't have experience with using TMediaPlayer on the FireMonkey platform, but after looking at the documentation it seems that things have changed quite a bit.
For instance, on FMX there is a special component called TMediaPlayerControl which is required for rendering video:
http://docwiki.embarcadero.com/Libraries/XE7/en/FMX.Media.TMediaPlayerControl
But looking at the documentation I can't find any special properties or methods to control the video size. So I guess implementing zoom-in or zoom-out functionality would use the same approach as with normal FireMonkey controls.
I'm trying to make a very simple 3D model viewer in a PhoneGap app for use on an iPhone 4. I'm using three.js, which works fine when I make a simple website. The problem is that when I try it out on the phone, the 3D object doesn't appear. Simple geometric shapes like a cube and a cylinder will load on the canvas, but .obj files won't.
I use an OBJLoader to bring in the .obj file and have all relevant files in the same directory in the app, just in case. I think the problem might lie with using WebGL on iOS, but I'm not really sure.
Thanks very much for your help. If anyone has any suggestions for building a model viewer in PhoneGap for display on iOS, I'd be delighted to hear them.
As very few mobile browsers support WebGL at the moment, I opted to use the canvas to render the 3D models. I used a simple web 3D object viewer called JSC3D to create a model viewer in PhoneGap on iOS. It can use WebGL, but I just went with rendering via the 2D canvas.
I tested my app on an iPhone 4, and the result was that the model took between 2 and 5 seconds to load, and when you go to rotate the object it takes some time to redraw, depending on how complex it is. While not the most satisfactory result, it did do the job. I'm going to try it out on a more advanced Android phone and I'll let you know the result.
I suggest you try the Intel XDK if you are packaging for iPhone, but for Android use AIDE PhoneGap. Make sure you use only var renderer = new THREE.CanvasRenderer(); and avoid anything that has to do with WebGL, as it's not supported on most devices (the BB PlayBook and BB10 being exceptions).
I think iOS's rendering ability is better than Android's; some scenes render well on iOS but not on Android.
Usually the mobile browser has more rendering ability than a mobile app's WebView. I use Firefox to render the three.js OBJ demo and it works well, but when I use the WebView in the app, it renders nothing.
I've made an Android app to render STL models. First, when I used the mobile browser to render the scene, it didn't render the full scene; when I removed the shadow effect, it rendered. Then I tried to use a WebView with the WebGLRenderer or the CanvasRenderer; neither worked. Finally I switched to the XWalkView from Crosswalk, a web engine which can be used as an add-in to the app, in place of the WebView; I also disabled the shadow effect, and it renders well.
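For reference, a minimal sketch of disabling shadows in three.js (shadowMapEnabled is the older pre-r70 property name that CanvasRenderer-era builds used; newer builds use renderer.shadowMap.enabled):
// Turn off all shadow work so weaker mobile GPUs / WebViews can cope.
renderer.shadowMapEnabled = false; // pre-r70 name; later: renderer.shadowMap.enabled
scene.traverse(function (obj) {
  obj.castShadow = false;          // stop lights and meshes from casting shadows
  obj.receiveShadow = false;       // and meshes from receiving them
});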
You can refer to this answer for more info.
You definitely should not use the WebGL renderer of three.js, as it's not supported on iOS. Try the Canvas or SVG renderer.
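If it helps, a minimal sketch of picking a renderer at runtime (assuming an older three.js build that still ships THREE.CanvasRenderer):
// Fall back to the 2D-canvas renderer where WebGL is unavailable.
function createRenderer() {
  var canvas = document.createElement("canvas");
  var hasWebGL = !!(window.WebGLRenderingContext &&
      (canvas.getContext("webgl") || canvas.getContext("experimental-webgl")));
  return hasWebGL ? new THREE.WebGLRenderer() : new THREE.CanvasRenderer();
}
var renderer = createRenderer();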