I know an architects' firm that creates walk-through videos of their designs using SketchUp. Since 3D models of the designs are already in place, is there a way in SketchUp to create a 360-degree video that we can upload to YouTube?
The intention is for them to be able to show clients around their design using something like Google Cardboard. The camera would probably follow a slow track, but you could move your head and look around wherever you want.
This type of thing:
https://www.youtube.com/watch?v=6uG9vtckp1U
Just not Star Wars.
The University of Minnesota VR lab had something called SketchUpVR that mirrored SketchUp through a running OpenGL renderer. They published it here:
http://www-users.cs.umn.edu/~interran/acadia06.pdf
It was later extended to run using Elumenati's OmniMap spherical camera library, which does 360x180 rendering.
Without being an expert in SketchUp, I'm pretty sure it can't do what you ask, since it doesn't have the rendering capabilities.
I'm surprised nobody has corrected this in the past year. SketchUp does have rendering capabilities. It can render and export video of models in a few formats. See Lost In The Woods - https://vimeo.com/125412870
However, SketchUp does not give you a way to move the subject or objects in the scene. My way around that has been to insert JPG images, either as an overlay or by actually inserting them into the SketchUp model.
Without being an expert in SketchUp, I'm pretty sure it can't do what you ask, since it doesn't have the specific rendering capabilities. More likely, the model is created in SketchUp and imported into a modelling/rendering package where the video is rendered.
If you don't have a commercial package, you can do it in Blender (free, open source), for example.
What you want to do is render with an Equirectangular camera:
https://www.blender.org/manual/render/cycles/camera.html
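To make that concrete, here is a minimal Blender Python sketch for setting up such a camera with Cycles (run from Blender's scripting workspace; the camera name and resolution are just example values, and property names can differ slightly between Blender versions, so treat it as a starting point rather than a drop-in script):

    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'  # equirectangular panoramas are a Cycles feature

    # Create a panoramic camera set to equirectangular projection.
    cam_data = bpy.data.cameras.new("PanoCam")
    cam_data.type = 'PANO'
    cam_data.cycles.panorama_type = 'EQUIRECTANGULAR'

    cam_obj = bpy.data.objects.new("PanoCam", cam_data)
    scene.collection.objects.link(cam_obj)  # in Blender 2.7x: scene.objects.link(cam_obj)
    scene.camera = cam_obj

    # 360x180 output is normally rendered at a 2:1 aspect ratio.
    scene.render.resolution_x = 4096
    scene.render.resolution_y = 2048

Animate the camera along the slow track you mentioned, render the animation as usual, and the result is a frame sequence suitable for a 360-degree video.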
I'd like to know how to create a target for large-scale architectural AR on a real site. In other words, I need Google Tango to superimpose my 3D model on a specific place.
I have tried the Google Tango Area Learning tutorials (https://developers.google.com/tango/apis/unity/unity-codelab-area-learning), but after showing the message WALK AROUND TO RELOCALIZE the tablet does nothing, even though I walk around to detect the real space. Then, after a few minutes, the message "Unity project has stopped" appears on the Google Tango tablet screen.
Could an ADF file be used instead of relocalizing the environment?
I've captured some interior scenes with the Tango Explorer and saved them, but I'm not able to use them for environment recognition purposes.
I work with Unity and a Google Tango tablet.
Thank you in advance for your response.
For anyone else facing this problem - the likely cause is not having a recent ADF file already on the device.
You need to first create an Area Description File (ADF) by scanning, and then you can separately localise to that ADF - so you cannot "use an ADF instead of relocalising."
The tutorial you link above needs you to have separately created an ADF for your location - it simply chooses the most recent one you have.
You can use the Area Learning example to create your ADFs, and try localising to them. It also shows superimposing 3D models.
Also, look at the augmented reality one to see how to have objects load already in a specific place.
I'm currently doing some research on augmented reality technology. In particular, I would like to match a 2D image to a 3D model, so that the 3D model is displayed when the 2D image is scanned. I know there are a lot of SDKs (like Metaio and Wikitude) and other software that can do this in a mobile app. However, what I want is to achieve this in a website. I'd like people not to need to download a particular mobile app, but to just open a website and then scan a picture.
So, as the title asks, I'd like to know: can AR be realized in a website? If yes, how can I do it, or is there any software like Metaio Creator for this? If no, why not?
Thank you to anyone who would like to answer my naive question.
May I recommend our completely web-based AR & VR tool holobuilder.com by bitstars.com?
It supports 360-degree photospheres that can be enhanced with custom 3D models and then directly embedded into your website as an iframe. It has native support for a stereoscopic view mode and much more.
For your use case, you could have a look at the lower part of this blog post, where you will find information and an embedded example presentation with photosphere imagery containing 3D elements:
http://heyholo.com/google-pushes-vr-great-for-tools-like-holobuilder/
If you want to start creating I recommend the beginners guide:
https://medium.com/#maxspeicher/the-definite-guide-to-holobuilder-3b62a54d303e
The CV feature tracking you asked about cannot yet be realized without an app or special browser support. But what you can do is display 3D elements perspective-correctly over the camera image and move them using the device sensors. That should be as performant as it is within the player app.
We hope this can help push your research forward, and we would love to read your feedback. If you have any questions, please do not hesitate to ask, here or on any other contact channel!
I'm looking into a solution that will allow me to use OpenStreetMap data to render a 2D top-down vector-based map in iOS, instead of using pre-rendered tiles from a server - similar to Apple and Google Maps in iOS 6+.
I've done extensive research on this matter, but haven't found much information.
There are a number of iOS apps that do this, but there's no information on how they implement it. A few of these apps are:
ForeverMap 2 by skobbler
Galileo Offline Maps
OffMaps 2
The first two apps work similarly to Apple and Google Maps: the map is drawn in real time whenever the zoom changes.
The last one appears to use a slightly different approach. It renders the vector data at specific zoom levels and creates tiles, which are then used like normal tiles downloaded from a tile server. So the rendering engine could actually be a tile source for the Route-Me library, but instead of downloading the tiles it renders them on the fly.
The first method is preferred.
[Q] I guess one could switch between methods fairly easily once the OpenGL ES renderer is in place. I mean, you could use the renderer as a source for Route-Me to create tiles, or you could use it as a real-time drawer, similar to a game. Am I right?
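For reference, whichever approach is used, the renderer first has to map a tile's x/y/zoom indices to geographic bounds before it can fetch or draw the OSM data for that area. A quick sketch of the standard slippy-map tile math (Python purely for illustration; the real implementation would be in Objective-C with OpenGL ES):

    import math

    def tile_bounds(x, y, zoom):
        """Geographic bounds (lon_min, lat_min, lon_max, lat_max) of a slippy-map tile."""
        n = 2 ** zoom
        lon = lambda tx: tx / n * 360.0 - 180.0
        lat = lambda ty: math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * ty / n))))
        # Tile y grows southwards, so y gives the north edge and y+1 the south edge.
        return lon(x), lat(y + 1), lon(x + 1), lat(y)

    # Example: the single whole-world tile at zoom 0.
    print(tile_bounds(0, 0, 0))  # (-180.0, -85.05..., 180.0, 85.05...)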
The closest solution I found is OpenStreetPad. However, it is using Core Graphics instead of OpenGL ES, so the rendering is not hardware accelerated.
Mapbox have stated that they are working on vector tiles and will probably provide an iOS solution for rendering; however, it may use Mapnik, so I am not sure how efficient that will be. And there has been no ETA since mid-2013.
[Q] Do you know of any other libraries, papers, guides, examples, or some other useful information on how to approach this? Basically how to handle the OSM data and how to actually use OpenGL ES / GLKit to draw that data on the device. Maybe some of the people who have done it can share a few things?
Old question, but there's a new answer.
WhirlyGlobe-Maply will render tile based vector maps on iOS. http://mousebirdconsulting.blogspot.com/2014/03/vector-maps-introduction.html
The technology that powered skobbler's ForeverMap 2 and their current GPS Nav & Maps app is now available on a pay-per-use basis. See their developer platform.
Note: they also have a free tier that can be used to develop/launch small apps.
They render the map using OpenGL and "vector data tiles". These vector data tiles contain information about road geometry (so you can have routing), POI data and other map features (e.g. boundary limits).
There is a list of OSM-based applications for iOS. It also includes a few open source projects, for example Navit. Navit seems to render the map using SDL/OpenGL. See the Navit iOS wiki page for more information.
I'm searching for a face tracking system to use in an augmented reality project. I'm trying to find an open-source, multi-platform application for it. The goal is to return the direction the face is looking in order to interact with the virtual environment (something like this video).
I've downloaded the sources of Johnny Lee's application mentioned above and tried to use FreeTrack too, making my own headset (some kind of monster, hehe). But it's not good to be limited to infrared points on your head.
Recently I downloaded FaceTrackNoIR, but when I launch the program I get "No DLL was found in the Waterfall procedure.", which I'm currently trying to solve.
Does anyone know of a good application, library, code, lecture, or anything else that could help me find a good path for this?
Thank you all!
I'll try to post results someday :-)
I would take a look at OpenCV. It is a general-purpose machine learning and computer vision C++ library. One of the examples in the download is a real-time face tracker that connects to a video camera attached to your computer and draws squares around any faces in the camera view.
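As a rough illustration of what that sample does, here is a minimal sketch of OpenCV's stock Haar-cascade face detector using the Python bindings (the shipped example is C++, but the API is the same; the cascade path assumes a standard opencv-python install). Note this only locates faces; estimating the direction the head is facing would need an extra step, such as fitting facial landmarks and solving for head pose:

    import cv2

    # Load the pre-trained frontal-face Haar cascade shipped with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # default webcam

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("faces", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()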
I want to create a 3D model from a set of 2D images on Windows, which can then be sent through a web service to an iPhone and displayed there.
I know it can be done through OpenGL, but I don't know how to start. Also, if I succeed in creating it, will it be compatible with the iPhone, since the iPhone uses OpenGL ES?
Thanks in advance.
What kind of transformation do you have in mind to create the 3D models? I once worked on an application using such a concept to create a model from three images of an object. It didn't really work well. The models that could be created were very limited.
OpenGL does not have built-in functionality to do such stuff. Is there any reason why you do not want to use a real 3D model? It sounds as if you are looking for a quick solution to your problem, but I'm afraid that if you do not have any OpenGL experience, you should prepare for a lot to learn.
If you want to create 3D models automatically from 2D photos, you're going to have a fair bit of work to do. AFAIK, this is not something where you can get a cheap pre-packaged solution. Autodesk charge a small fortune for ImageModeler.
MeshLab may be a good starting point, but even that can't automatically convert photos to a 3D model AFAIK.
Take a look at David Lowe's site. I found the "Distinctive image features from scale-invariant keypoints" paper quite interesting, though I haven't re-read it in a while. If nothing else, this should give you some idea why this is far from a trivial problem.
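To give a feel for what the first step of going from photos to geometry involves, here is a minimal sketch of detecting and matching SIFT keypoints between two photos with OpenCV's Python bindings (assumes OpenCV 4.4+, where SIFT lives in the main module; the file names are placeholders). A full photo-to-model pipeline would then estimate camera poses from such matches and triangulate 3D points:

    import cv2

    # Two overlapping photos of the same object (placeholder file names).
    img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors and keep only matches that pass Lowe's ratio test.
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    print(len(good), "good matches out of", len(matches))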