How can I load a 3D model into an iOS app, and scale/transform/rotate it to place on an image? - ios

I need to load a 3D model (let's say a fridge) and scale or rotate it to place it on a kitchen photo (just as an example).
I have seen several SDKs, but they were all for 3D games. What I need is to place and manipulate my object in a native iOS app.
Where should I start? Is GLKit the answer for that?

3D rendering on iOS is done with OpenGL ES, which is a pretty difficult topic. Apple provides GLKit to help developers integrate OpenGL ES, but it includes neither a model parser nor a scene manager, so it is hard work either way. I suggest using a 3D engine; I have looked at two:
1. NinevehGL
2. Irrlicht
The first integrates absolutely fine into iOS projects and exposes an Objective-C API. It uses only OpenGL ES 2.0 shaders. The problem is that it is still a beta version.
The latter is written in C++ and supports both OpenGL ES 1.0 and 2.0, but is pretty hard to integrate.
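Whichever engine you choose, placing the model over your photo ultimately comes down to composing a model matrix from a scale, a rotation, and a translation, plus a projection that roughly matches the photo's perspective. A minimal sketch using GLKit's math helpers in Swift (the angle, offsets, and field of view are made-up values for illustration):

```swift
import GLKit

// Compose a model matrix: translate, then rotate, then scale.
// GLKit post-multiplies, so vertices are scaled first, then rotated, then moved.
var model = GLKMatrix4MakeTranslation(0.0, -0.5, -4.0)                 // position in front of the camera
model = GLKMatrix4Rotate(model, GLKMathDegreesToRadians(30), 0, 1, 0)  // 30 degrees around Y
model = GLKMatrix4Scale(model, 0.5, 0.5, 0.5)                          // shrink to half size

// A perspective projection to (roughly) match the photo's field of view.
let aspect: Float = 320.0 / 480.0
let projection = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65), aspect, 0.1, 100)

// Pass both matrices to your shader (or a GLKBaseEffect) before drawing the mesh;
// the kitchen photo can sit in an image view behind a transparent GL view.
```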

Related

How to render a Navisworks (*.nwd) file on iOS

I want to export Navisworks 3D navigation models to my iPhone. Is there any API available to achieve this? I want to create my own app to read models on iOS, similar to the Navisworks Freedom viewer for iOS.
I have searched a lot on the internet but couldn't find anything useful.
There is no Navisworks viewer for iOS, but there is a WebGL viewer that can be embedded in mobile apps (or web or desktop too).
There is a live sample at https://360.autodesk.com/viewer
See the API at http://developer.autodesk.com
iOS sample at https://github.com/Developer-Autodesk/workflow-ios-view.and.data.api
I recommend developing your own native or web app to build a mobile 3D model viewer.
Web app - you could use Unity3D or Three.js. These communities are strong and there are plenty of resources available. The benefit here is that it would work on desktop too.
Native app - you could make a model viewer in Swift using Apple's Metal library; see the sketch below. I am not familiar with Android 3D shader libraries.
Both of these endeavours are huge amounts of work. I hope you will keep an eye out for code you can copyright (or open source), perhaps even patent if you develop a new, complex algorithm for converting/displaying 3D data.
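To give a feel for the scale of the native route: a minimal sketch of a Metal-backed viewer shell in Swift. It only clears the screen each frame; the real work (parsing the model, building pipeline states, issuing draw calls) would go where the comments indicate, and the class name is made up:

```swift
import UIKit
import MetalKit

// A bare-bones Metal viewer shell: sets up an MTKView and clears it each frame.
class ModelViewerController: UIViewController, MTKViewDelegate {
    private var commandQueue: MTLCommandQueue!

    override func viewDidLoad() {
        super.viewDidLoad()
        guard let device = MTLCreateSystemDefaultDevice() else { return }
        commandQueue = device.makeCommandQueue()

        let mtkView = MTKView(frame: view.bounds, device: device)
        mtkView.clearColor = MTLClearColor(red: 0.1, green: 0.1, blue: 0.12, alpha: 1)
        mtkView.delegate = self
        view.addSubview(mtkView)
    }

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

    func draw(in view: MTKView) {
        guard let drawable = view.currentDrawable,
              let descriptor = view.currentRenderPassDescriptor,
              let commandBuffer = commandQueue.makeCommandBuffer(),
              let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor)
        else { return }
        // Bind a pipeline state and vertex buffers, then issue draw calls here.
        encoder.endEncoding()
        commandBuffer.present(drawable)
        commandBuffer.commit()
    }
}
```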

GLKit.framework vs OpenGLES.framework vs QuartzCore.framework

I am adding a GLKView to my app, and along the way I have learned that I need to add the GLKit framework.
So when I was adding this framework, I realised there is a very similarly named framework called OpenGLES.framework.
I tried to look up the difference between these two frameworks on Google, and I landed on this page.
That article seems to suggest I need an extra framework called QuartzCore.framework, which confused me even more.
I have the following questions:
1) How do these frameworks relate to each other?
2) It seems like the GLKit framework alone will enable a GLKView to work. When am I required to use the other two frameworks?
OpenGL ES is a cross-platform C API for GPU-accelerated drawing, particularly useful for 3D graphics and image processing. On iOS, you link against OpenGLES.framework, providing access to the cross-platform API and to the most basic iOS-specific APIs (EAGLContext and CAEAGLLayer) for using OpenGL ES in your app.
GLKit is an Apple-specific framework that adds extra features for making development of OpenGL ES-based apps easier, as nicely summed up in the tutorial you linked to:
GLKView/GLKViewController. These classes abstract out much of the boilerplate code it used to take to set up a basic OpenGL ES project.
GLKEffects. These classes implement common shading behaviors used in OpenGL ES 1.0, to make transitioning to OpenGL ES 2.0 easier. They’re also a handy way to get some basic lighting and texturing working.
GLKMath. Prior to iOS 5, pretty much every game needed its own math library with common vector and matrix manipulation routines. Now with GLKMath, most of the common math routines are there for you!
GLKTextureLoader. This class makes it much easier to load images as textures to be used in OpenGL. Rather than having to write a complicated method dealing with tons of different image formats, loading a texture is now a single method call!
If you link GLKit.framework, you get OpenGLES.framework for free — likewise if you import the GLKit headers, the OpenGL ES headers come along for the ride.
QuartzCore is for working directly with Core Animation layers. Before GLKit was introduced, you had to set up your own layers for getting OpenGL content onscreen — now GLKView does this on your behalf, so there's no need for QuartzCore unless you want to do extra fun stuff with Core Animation.
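To make that division of labor concrete, here is a minimal sketch of a GLKit view controller in Swift: GLKViewController drives the render loop, GLKBaseEffect (GLKEffects) provides the shading, GLKMath builds the projection matrix, and GLKTextureLoader loads an image in one call. The texture filename is made up, and the actual draw calls are elided:

```swift
import GLKit

class CubeViewController: GLKViewController {
    private var context: EAGLContext!
    private let effect = GLKBaseEffect()   // GLKEffects: fixed-function-style shading

    override func viewDidLoad() {
        super.viewDidLoad()
        context = EAGLContext(api: .openGLES2)
        let glkView = view as! GLKView     // the controller's view is a GLKView
        glkView.context = context
        EAGLContext.setCurrent(context)

        // GLKTextureLoader: one call instead of hand-rolled image decoding.
        if let url = Bundle.main.url(forResource: "texture", withExtension: "png"),
           let info = try? GLKTextureLoader.texture(withContentsOf: url, options: nil) {
            effect.texture2d0.name = info.name
            effect.texture2d0.enabled = GLboolean(GL_TRUE)
        }

        // GLKMath: matrix helpers instead of a hand-written math library.
        effect.transform.projectionMatrix =
            GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65), 1.0, 0.1, 100)
    }

    // GLKView calls this each frame. The raw GL symbols (glClear and friends)
    // come from OpenGLES.framework, which GLKit pulls in for you.
    override func glkView(_ view: GLKView, drawIn rect: CGRect) {
        glClearColor(0.2, 0.2, 0.2, 1.0)
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
        effect.prepareToDraw()
        // glDrawArrays / glDrawElements calls for your geometry go here.
    }
}
```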

Real time vector-based OSM renderer in iOS (using OpenGL ES)

I'm looking into a solution that will allow to use OpenStreetMap data to render a 2D top-view vector-based map in iOS, instead of using pre-rendered tiles from a server. Similar to Apple and Google Maps in iOS6+.
I've done extensive research on this matter but haven't found much information.
There are a number of iOS apps that do this, but no information on how they implement it. A couple of these apps are:
ForeverMap 2 by skobbler
Galileo Offline Maps
OffMaps 2
The first two apps work similarly to Apple and Google Maps: the map is drawn in real time whenever the zoom changes.
The last one appears to use a slightly different approach. It renders the vector data at specific zoom levels and creates tiles, which are then used like normal tiles downloaded from a tile server. So the rendering engine could actually be a tile source for the Route-Me library, but instead of downloading the tiles it renders them on the fly.
The first method is preferred.
[Q] I guess one could switch between methods fairly easily once the OpenGL ES renderer is in place. I mean, you could use the renderer as a source for Route-Me to create tiles, or you could use it as a real-time drawer, similar to a game. Am I right?
The closest solution I found is OpenStreetPad. However, it uses Core Graphics instead of OpenGL ES, so the rendering is not hardware accelerated.
Mapbox stated they are working on vector tiles and will probably provide an iOS solution for rendering; however, it may use Mapnik, so I am not sure how efficient that will be. And there has been no ETA since mid-2013.
[Q] Do you know of any other libraries, papers, guides, examples, or some other useful information on how to approach this? Basically how to handle the OSM data and how to actually use OpenGL ES / GLKit to draw that data on the device. Maybe some of the people who have done it can share a few things?
Old question, but there's a new answer.
WhirlyGlobe-Maply will render tile based vector maps on iOS. http://mousebirdconsulting.blogspot.com/2014/03/vector-maps-introduction.html
The technology that powered skobbler's ForeverMap 2 and their current GPS Nav & Maps app is now available on a pay-per-use basis. See their developer platform.
Note: they also have a free tier that can be used to develop/launch small apps.
They render the map using OpenGL and "vector data tiles". These vector data tiles contain information regarding road geometry (so you can have routing), POI data, and other map features (e.g. boundary limits).
There is a list of OSM-based applications for iOS. It also includes a few open source projects, for example Navit. Navit seems to render the map using SDL/OpenGL. See the Navit iOS wiki page for more information.
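Whichever of these routes you take, the heart of a vector renderer is the same step: project OSM latitude/longitude into Web Mercator and pack the result into vertex arrays that OpenGL ES can draw. A minimal sketch of that step in Swift (the function names and the [0, 1]-normalized tile scheme are illustrative assumptions):

```swift
import Foundation

/// Project a WGS84 coordinate into Web Mercator, normalized to [0, 1].
/// Multiplying by 2^zoom yields slippy-map tile coordinates.
func webMercator(latitude: Double, longitude: Double) -> (x: Double, y: Double) {
    let x = (longitude + 180.0) / 360.0
    let latRad = latitude * .pi / 180.0
    let y = (1.0 - log(tan(latRad) + 1.0 / cos(latRad)) / .pi) / 2.0
    return (x, y)
}

/// Flatten a road polyline (a way's node coordinates) into an interleaved
/// x,y vertex array, ready to upload to a GL buffer and draw with
/// GL_LINE_STRIP (or to tessellate into quads for wide roads).
func vertexData(for way: [(lat: Double, lon: Double)], zoom: Double) -> [Float] {
    let scale = pow(2.0, zoom)
    var vertices: [Float] = []
    vertices.reserveCapacity(way.count * 2)
    for node in way {
        let (x, y) = webMercator(latitude: node.lat, longitude: node.lon)
        vertices.append(Float(x * scale))
        vertices.append(Float(y * scale))
    }
    return vertices
}
```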

NURBS surfaces in OpenGL 3.2 core profile

Is it possible to draw NURBS (Non-uniform rational B-spline) surfaces in OpenGL 3.2 core profile?
I assume that NURBS rendering via the GLU library is not supported in the core profile.
Are there any open source libraries that implement the same functionality as GLU?
Using GLU with core profiles from OpenGL 3.1 onwards won't really work. GLU is layered on top of many deprecated OpenGL functions, so your application most likely will either fail to link or fail to work correctly.
As for the NURBS implementation in GLU, the source code is available as open source from SGI at http://oss.sgi.com/projects/ogl-sample/. You could probably port the library to use more modern OpenGL methods.
More details are given in this post.
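If you only need GLU's evaluators rather than the whole library, another practical route is to tessellate the NURBS yourself on the CPU and draw the resulting primitives with the core profile. As a starting point, here is a sketch of de Boor's algorithm for a single NURBS curve in Swift; a surface applies the same scheme in both parameter directions. It assumes a clamped, non-degenerate knot vector with knots.count == controlPoints.count + degree + 1 and degree >= 1:

```swift
import Foundation

struct Point { var x, y, z, w: Double }

/// Evaluate a NURBS curve at parameter u via de Boor's algorithm,
/// run in homogeneous coordinates and projected back at the end.
func nurbsPoint(u: Double, degree p: Int, knots: [Double],
                controlPoints: [Point], weights: [Double]) -> (x: Double, y: Double, z: Double) {
    precondition(p >= 1 && knots.count == controlPoints.count + p + 1)

    // Find the knot span k with knots[k] <= u < knots[k + 1].
    var k = p
    while k < knots.count - p - 2 && u >= knots[k + 1] { k += 1 }

    // Lift the relevant control points to homogeneous space: (wx, wy, wz, w).
    var d = (0...p).map { i -> Point in
        let c = controlPoints[k - p + i], w = weights[k - p + i]
        return Point(x: c.x * w, y: c.y * w, z: c.z * w, w: w)
    }

    // Triangular scheme of de Boor: repeated affine combinations.
    for r in 1...p {
        for j in stride(from: p, through: r, by: -1) {
            let i = k - p + j
            let alpha = (u - knots[i]) / (knots[i + p - r + 1] - knots[i])
            d[j].x = (1 - alpha) * d[j - 1].x + alpha * d[j].x
            d[j].y = (1 - alpha) * d[j - 1].y + alpha * d[j].y
            d[j].z = (1 - alpha) * d[j - 1].z + alpha * d[j].z
            d[j].w = (1 - alpha) * d[j - 1].w + alpha * d[j].w
        }
    }
    // Project back from homogeneous coordinates.
    return (d[p].x / d[p].w, d[p].y / d[p].w, d[p].z / d[p].w)
}
```

Sampling u densely and feeding the results into a vertex buffer reproduces the visual effect of the GLU evaluators without touching any deprecated state.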

Open source augmented reality framework for BlackBerry

Does anyone know of any open source framework for augmented reality on BlackBerry, or a good tutorial for creating an augmented reality application from scratch?
Here is an interface prototype for the free LayarPlayer for third-party BlackBerry 7 apps: https://gist.github.com/1219438. Not sure if Wikitude will have a lib or not.
If you want to roll your own AR lib (not recommended, unless you have tons of time and energy), OpenGL ES is platform independent; just use ComponentCanvas for overlaying it on top of the camera view.
BlackBerry OS 7 SDK apparently includes APIs to assist in developing augmented reality applications.
I am working on an OpenGL application for BlackBerry, and I too have realised there are not many OpenGL tutorials for it. But you can always use Android ones; they are not really very different.
And I think we should take advantage of the new BlackBerry graphics card and CPU to create some exciting 3D applications for the platform.
You can find OpenGL basic samples on the BlackBerry website and in the BlackBerry SDK.
Note: all BlackBerry devices that run OS 7 have a dedicated graphics card and a 1.2 GHz CPU.