I have a scenario where I want to render a product image on markers, but all the details of that particular product (like its .obj and .mtl files) will come from a web service. Is it possible to render the product's image at runtime using ARToolKit? Is dynamic image rendering possible using ARToolKit native?
In short, it depends on your tool, not on ARToolKit. ARToolKit provides pose information for markers; what you do with that depends on the tool you're using.
I assume you are using Unity; in that case you can have a plane with a texture and set the texture dynamically based on the downloaded file.
I haven't done that in Unity myself, but I'd be very surprised if it weren't possible.
Related
How can I download a 3d model from web and then augment it on user defined target image in Vuforia at run time?
Using a UserDefinedTarget image in Vuforia, we can only augment a predefined 3D model.
But what I want to do is augment any desired 3D model, downloaded from the web, onto the user-defined target image at run time.
Your question is not really related to Vuforia or AR specifically - UserDefinedTarget only lets you know where to draw what you want. This is a pure OpenGL question.
Basically, this should not be a big deal - download the 3D model file, then load and parse it from that file (rather than loading the model from a hard-coded file) to get all the model data.
Here you can find an example using the Rajawali library (Android), but you should be able to find other ways quite easily:
how to add 3d models dynamically
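To make the "download, then parse" step concrete: the linked example is Android/Java, but the idea is the same on any platform. Here is a rough, illustrative sketch in Swift (the URL is a placeholder, and a real loader would also handle faces, normals, and the accompanying .mtl file) - you fetch the model file and read the geometry out of it yourself:

    import Foundation

    // Illustrative only: fetch an .obj file and extract its vertex positions.
    // A real loader would also parse faces, normals, texture coordinates and
    // the accompanying .mtl material file.
    func downloadVertices(from url: URL,
                          completion: @escaping ([SIMD3<Float>]) -> Void) {
        URLSession.shared.dataTask(with: url) { data, _, _ in
            guard let data = data, let text = String(data: data, encoding: .utf8) else {
                completion([])
                return
            }
            var vertices: [SIMD3<Float>] = []
            for line in text.split(separator: "\n") where line.hasPrefix("v ") {
                // An .obj vertex line looks like "v 1.0 2.0 3.0".
                let parts = line.split(separator: " ").compactMap { Float(String($0)) }
                if parts.count >= 3 {
                    vertices.append(SIMD3<Float>(parts[0], parts[1], parts[2]))
                }
            }
            completion(vertices)   // hand the parsed geometry to your renderer
        }.resume()
    }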
I know an architects' firm that creates walk-through videos of their designs using SketchUp. Since 3D models of the designs are already in place, is there a way in SketchUp to create a 360-degree video that we can upload to YouTube?
The intention is for them to be able to show clients around their designs using something like Google Cardboard. The camera would probably follow a slow track, but you could move your head and look around wherever you want.
This type of thing:
https://www.youtube.com/watch?v=6uG9vtckp1U
Just not Star Wars.
The University of Minnesota VR lab had something called SketchUpVR that mirrored SketchUp through an OpenGL renderer. They published it here:
http://www-users.cs.umn.edu/~interran/acadia06.pdf
It was later extended to run using Elumenati's OmniMap spherical camera library, which does 360x180 rendering.
Without being an expert in SketchUp, I'm pretty sure it can't do what you ask, since it doesn't have the rendering capabilities.
I'm surprised nobody has corrected this in the past year. SketchUp does have rendering capabilities. It can render and export video of models in a few formats. See Lost In The Woods - https://vimeo.com/125412870
However, SU does not give you a way to move the subject or objects in the scene. My way around that has been to insert JPG images, either as an overlay or by actually inserting them into the SU model.
Without being an expert in SketchUp, I'm pretty sure it can't do what you ask, since it doesn't have the specific rendering capabilities. More likely, the model is created in SketchUp and imported into a modelling/rendering package where the video is rendered.
If you don't have a commercial package, you can do it in Blender (free, open source), for example.
What you want to do is render with an equirectangular camera:
https://www.blender.org/manual/render/cycles/camera.html
I am a student and my major project is about augmented reality. I have a good background in programming, and I plan to make a very large augmented reality project.
I have downloaded the Vuforia SDK and I have made some samples using Unity.
My question is: does the Vuforia SDK support 3D tracking?
I have seen the "Sesame Street Augmented Reality Dolls" on YouTube, but I couldn't find under which section it was made.
Please tell me how to start doing this.
This is the video: http://www.youtube.com/watch?v=U2jSzmvm_WA/
According to the moderators on the Vuforia forum:
You cannot detect arbitrary 3D objects, but you can detect 3D objects made up of planar image targets (e.g. a cereal box). Look for the MultiImageTargets section of the Developer Guide and the AR Extension for Unity 3 documentation (https://ar.qualcomm.com/qdevnet/sdk/unity/ar). You can create simple cube objects using the My Trackables system, or you can edit the config.xml file by hand to arrange image targets into the desired configuration.
A recent update allows Vuforia to track 3D cylinder targets.
The Sesame Street example was experimental and research is ongoing. There are no official plans to release it as a standard component of Vuforia yet.
3D object tracking has now been officially added to the Vuforia 4.0 SDK:
https://developer.vuforia.com/library/articles/training/object-recognition
Note that it only works with small objects and the objects must be scanned using an Android app.
I'm looking into a solution that will allow me to use OpenStreetMap data to render a 2D top-view vector-based map on iOS, instead of using pre-rendered tiles from a server - similar to Apple and Google Maps on iOS 6+.
I've done extensive research on this matter but didn't find much information.
There are a number of iOS apps that do this, but no information on how they implement it. A few of these apps are:
ForeverMap 2 by skobbler
Galileo Offline Maps
OffMaps 2
The first two apps work similarly to Apple and Google Maps: the map is drawn in real time whenever the zoom level changes.
The last one appears to use a slightly different approach. It renders the vector data at specific zoom levels and creates tiles, which are then used like normal tiles downloaded from a tile server. So the rendering engine could actually act as a tile source for the Route-Me library, but instead of downloading the tiles it renders them on the fly.
The first method is preferred.
[Q] I guess one could switch between methods fairly easily once the OpenGL ES renderer is in place. I mean, you could use the renderer as a source for Route-Me to create tiles, or you could use it as a real-time drawer, similar to a game. Am I right?
The closest solution I found is OpenStreetPad. However, it is using Core Graphics instead of OpenGL ES, so the rendering is not hardware accelerated.
Mapbox stated they are working on vector tiles and will probably provide an iOS solution for rendering; however, it may use Mapnik, so I am not sure how efficient that will be. And there has been no ETA since mid-2013.
[Q] Do you know of any other libraries, papers, guides, examples, or some other useful information on how to approach this? Basically how to handle the OSM data and how to actually use OpenGL ES / GLKit to draw that data on the device. Maybe some of the people who have done it can share a few things?
Old question, but there's a new answer.
WhirlyGlobe-Maply will render tile based vector maps on iOS. http://mousebirdconsulting.blogspot.com/2014/03/vector-maps-introduction.html
The technology that powered skobbler's ForeverMap 2 and their current GPS Nav & Maps app is now available on a pay-per-use basis. See their developer platform.
Note: they also have a free tier that can be used to develop/launch small apps.
They render the map using OpenGL and "vector data tiles". These vector data tiles contain information regarding road geometry (so you can have routing), POI data, and other map features (e.g. boundary limits).
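As generic background (this is the standard slippy-map / Web Mercator tiling scheme used by most OSM-based stacks, not anything specific to skobbler's format), the mapping from a coordinate and zoom level to a tile index is simple enough to sketch:

    import Foundation

    // Standard slippy-map tile addressing (z/x/y). Generic background math,
    // not any vendor's specific tile format.
    func tileIndex(latitude: Double, longitude: Double, zoom: Int) -> (x: Int, y: Int) {
        let n = pow(2.0, Double(zoom))
        let x = Int(floor((longitude + 180.0) / 360.0 * n))
        let latRad = latitude * .pi / 180.0
        let y = Int(floor((1.0 - log(tan(latRad) + 1.0 / cos(latRad)) / .pi) / 2.0 * n))
        return (x, y)
    }

    // Example: the tile covering central London at zoom 14 is roughly (8186, 5448).
    let londonTile = tileIndex(latitude: 51.5074, longitude: -0.1278, zoom: 14)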
There is a list of OSM-based applications for iOS. It also includes a few open source projects, for example Navit. Navit seems to render the map using SDL/OpenGL. See the Navit iOS wiki page for more information.
I am new to iOS programming and am interested in working with images. Basically, I want to be able to obtain the RGB values (0-255) of every pixel in a given image. What would be the best way of doing this? Would I need to use OpenGL, or something similar?
Thanks
If you want to work with images, get a copy of Apple's 'Quartz 2D Programming Guide'. If you want an even more detailed how-to, get a copy of the "Programming with Quartz" book on Amazon (it says Mac in the title as it predates iOS).
Essentially you are going to take images, draw them into bitmap contexts, and then determine the RGBA values by querying the bitmap data.
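For illustration, here is a minimal Swift sketch of that approach (the function name is just a placeholder, not an Apple API): draw the CGImage into an RGBA bitmap context with a known layout, then index into the raw buffer.

    import UIKit

    // Illustrative helper: draws a UIImage into an RGBA bitmap context and
    // reads back the 0-255 channel values of every pixel, row by row.
    func pixelValues(of image: UIImage) -> [[UInt8]]? {
        guard let cgImage = image.cgImage else { return nil }
        let width = cgImage.width
        let height = cgImage.height

        // Ask Core Graphics for a context with a known layout:
        // 8 bits per channel, RGBA, premultiplied alpha.
        guard let context = CGContext(data: nil,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: 0,   // let Core Graphics pick the row stride
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return nil }

        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        guard let data = context.data else { return nil }

        let bytesPerRow = context.bytesPerRow
        let bytes = data.bindMemory(to: UInt8.self, capacity: bytesPerRow * height)

        var result: [[UInt8]] = []
        result.reserveCapacity(width * height)
        for y in 0..<height {
            for x in 0..<width {
                let i = y * bytesPerRow + x * 4
                result.append([bytes[i], bytes[i + 1], bytes[i + 2], bytes[i + 3]])  // R, G, B, A
            }
        }
        return result
    }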
If you want to use system resources to assist you in making certain types of changes to images, there is an OS X framework recently brought to iOS called the Accelerate framework, and it has a lot of functions for image manipulation (vImage).
For reading and writing images to the file system, look at Apple's 'Image I/O Guide'. For advanced filtering there is Core Image, which allows you to apply filters to images.
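As a small taste of the Core Image route (the sepia filter and its intensity here are just an example; any built-in CIFilter works the same way):

    import UIKit
    import CoreImage

    // Rough sketch: apply a built-in Core Image filter to a UIImage.
    func sepiaVersion(of image: UIImage, intensity: Double = 0.8) -> UIImage? {
        guard let input = CIImage(image: image),
              let filter = CIFilter(name: "CISepiaTone") else { return nil }

        filter.setValue(input, forKey: kCIInputImageKey)
        filter.setValue(intensity, forKey: kCIInputIntensityKey)

        guard let output = filter.outputImage else { return nil }
        let context = CIContext()   // uses the GPU where available
        guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }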
EDIT: If you have any interest in really fast accelerated code that uses the GPU to perform sophisticated filtering, you can check out Brad Larson's GPUImage project on GitHub.