Creating a 3D model from a set of 2D images on Windows - opengl-to-opengles

I want to create a 3D model from a set of 2D images on Windows, which can then be sent through a web service to an iPhone and displayed there.
I know it can be done with OpenGL, but I don't know how to start. Also, if I succeed in creating it, will it be compatible with the iPhone, since the iPhone uses OpenGL ES?
Thanks in advance.

What kind of transformation do you have in mind to create the 3D models? I once worked on an application using such a concept to create a model from three images of an object. It didn't really work well. The models that could be created were very limited.
OpenGL does not have built-in functionality for this. Is there any reason why you do not want to use a real 3D model? It sounds as if you are looking for a fast solution to your problem, but I'm afraid that if you do not have any OpenGL experience, you should prepare for a lot of stuff to learn.

If you want to create 3D models automatically from 2D photos, you're going to have a fair bit of work to do. AFAIK, this is not something where you can get a cheap pre-packaged solution. Autodesk charge a small fortune for ImageModeler.
MeshLab may be a good starting point, but even that can't automatically convert photos to a 3D model AFAIK.
Take a look at David Lowe's site. I found the "Distinctive image features from scale-invariant keypoints" paper quite interesting, though I haven't re-read it in a while. If nothing else, this should give you some idea why this is far from a trivial problem.

Related

Photo editing app iOS

I am trying to make a photo editing application for iOS, but am not sure where to start looking. I have attached an image made in Word... that hopefully simply depicts what I am trying to achieve. It will involve manipulating individual pixels of a shape/image and masking/clipping. How should I start, and what resources are available to me other than the developer docs?
Cheers
If you are not new to programming, I would suggest a trial-and-error kind of approach. If it were me, I would follow an approach like this:
Figuring out what to do / what not to do
Do I need to develop the tech I want from scratch, or can I use some pods?
What are the good reads and example apps - (Try this)
Development approach
Build a photo gallery to pick images from
Build an EDIT mode screen
Get a set of template overlay images
Figure out how to overlay them on top of each other
Export the final picture as one picture (a minimal sketch of these last two steps follows below)
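For those last two steps (overlaying and exporting), here is a minimal Swift sketch of one possible approach using UIGraphicsImageRenderer; the function name and the assumption that every overlay shares the base photo's size are illustrative, not from the original answer.

```swift
import UIKit

// Hypothetical helper: stack template overlays on top of a base photo and
// flatten everything into a single UIImage that can then be exported.
// Assumes each overlay is meant to cover the whole base image.
func flatten(base: UIImage, overlays: [UIImage]) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: base.size)
    return renderer.image { _ in
        let frame = CGRect(origin: .zero, size: base.size)
        // Draw the user's photo first, then each overlay in order.
        base.draw(in: frame)
        for overlay in overlays {
            overlay.draw(in: frame, blendMode: .normal, alpha: 1.0)
        }
    }
}

// Exporting "as one picture" is then just encoding the flattened image:
// let finalImage = flatten(base: photo, overlays: [frameImage, stickerImage])
// let jpegData = finalImage.jpegData(compressionQuality: 0.9)
```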
The developer documentation is essential when it comes to learning new APIs, but sometimes it can be a little overwhelming. You can try reading raywenderlich.com tutorials on Core Image first to get an idea (link here) or find a book on computer graphics. It is essential to understand at least the underlying techniques to efficiently program image processing code. In many cases you'll find there is a more elegant technique than just looping on pixels and modifying one-by-one.
Then you can continue with reading on image compositing using Core Image, for example.
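To illustrate the point above about there usually being a more elegant technique than looping over pixels, here is a hedged Core Image sketch that applies a whole-image filter in one pass; the function name and the fixed intensity value are assumptions for the example.

```swift
import CoreImage
import UIKit

// Apply a sepia tone to an entire image without touching individual pixels.
// CISepiaTone and the input keys are standard Core Image names; everything
// else here is illustrative.
func sepiaVersion(of image: UIImage) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CISepiaTone") else { return nil }

    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(0.8, forKey: kCIInputIntensityKey)

    guard let output = filter.outputImage else { return nil }

    // Rendering happens once here, on the GPU where available.
    let context = CIContext()
    guard let rendered = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: rendered)
}
```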

Do we have a way to implement face detection and recognition offline in the browser?

I need to find a way to implement face detection and recognition completely offline using a browser. A trained model specific to each user may be loaded initially. We only need to recognize one face per device. What is the best way to implement this?
I tried tracking.js to implement face detection. It works, but I couldn't find a solution for recognition. I tried face-recognition.js, but it needs a Node server.
Take a look at face-api.js: it can both detect and recognize faces in real time, completely in the browser! It's made by Vincent Mühler, the same creator as face-recognition.js.
(Face-api.js Github)
Things to note:
It's real time; my machine gets ~50ms (using the MTCNN model)
It's JavaScript, but it uses WebGL GPU acceleration under the hood, which is why it performs so well
It can also work on mobile! (tested on my S8+)
I recommend looking at the included examples as well, these helped me a lot
I have used the package to create a working project; it was easier than I thought it would be, and this is coming from a student who just started web development (I used it in a ReactJS app).
Just like you, I was searching and trying things such as tracking.js, but to be honest they didn't work well.

webgl viewer/game engine for low poly turntables?

I am looking for the best tool to achieve something like this (this is Blender's game engine, no real reflections, etc.) in a WebGL viewer.
http://youtu.be/9-n12ZH5O6k
The idea is to prepare several basic scenes like this and then for the user to upload his design and have it previewed on a car (or other far more basic objects).
While p3d is nice, I don't think it does the job; there's no API for these cases yet. What are some options to pull this off? The requirement would be a library that doesn't have too large a footprint, since the feature/product is planned for the Asian market and internet speed has to be considered.
You should look into three.js or babylon.js, maybe? You surely won't achieve that app with just a finger snap, so read the tutorials as well, but these libraries will ease your task considerably.

Can augmented reality be realized in a website?

I want to do some research on augmented reality technology. In particular, I would like to match a 2D image to a 3D model, and then see the 3D model when scanning the 2D image. I know there are a lot of SDKs (like Metaio and Wikitude) and software that can realize this in a mobile app. However, what I want to do is realize this in a website. I hope the people who use this don't need to download a particular mobile app, but can just open a website and then scan a picture.
So I'd like to know, as the title asks: can AR be realized in a website? If yes, how can I do it, or is there any software like Metaio Creator to do this? If not, why?
Thanks to anyone who would like to answer my naive question.
May I recommend our completely web-based AR & VR tool holobuilder.com by bitstars.com?
It supports 360-degree photospheres that can be enhanced with custom 3D models and then directly embedded into your website as an iframe; it has native support for stereoscopic view mode and much more.
For your use case you could have a look at the lower part of this blog post where you find information and an embedded example presentation with photosphere imagery containing 3D elements:
http://heyholo.com/google-pushes-vr-great-for-tools-like-holobuilder/
If you want to start creating, I recommend the beginner's guide:
https://medium.com/#maxspeicher/the-definite-guide-to-holobuilder-3b62a54d303e
The CV feature tracking you requested cannot yet be realized in the browser without an app. But what you can do is display perspectively correct 3D elements over the camera image and move them with the device sensors. That should be as performant as within the player app.
We hope that it can somehow help you in pushing your research and we would love to read your feedback. In case of any questions please do not hesitate to ask, here or on any other contact channel!

iOS graphics engines

I am new to iOS programming and am interested in working with images. Basically, I want to be able to obtain the RGB tuples (values in the 0-255 range) of every pixel in a given image. What would be the best way of doing this? Would I need to use OpenGL, or something similar?
Thanks
If you want to work with images, get a copy of Apple's 'Quartz 2D Programming Guide'. If you want an even more detailed how-to, get a copy of the "Programming with Quartz" book on Amazon (it says Mac in the title as it predates iOS).
Essentially you are going to take images, draw them into bitmap contexts, then determine the RGBA layout by querying the image.
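Here is a minimal Swift sketch of that bitmap-context approach; the function name, the premultiplied-last RGBA layout, and the device RGB color space are assumptions for illustration, so confirm the actual layout against the context you create.

```swift
import UIKit

// Draw a UIImage into a bitmap context with a known layout, then read the
// raw bytes back: four bytes per pixel (R, G, B, A), each in 0...255.
func rgbaPixels(of image: UIImage) -> [UInt8]? {
    guard let cgImage = image.cgImage else { return nil }

    let width = cgImage.width
    let height = cgImage.height
    let bytesPerPixel = 4
    var pixels = [UInt8](repeating: 0, count: width * height * bytesPerPixel)

    let drawn: Bool = pixels.withUnsafeMutableBytes { buffer in
        guard let context = CGContext(
            data: buffer.baseAddress,
            width: width,
            height: height,
            bitsPerComponent: 8,
            bytesPerRow: width * bytesPerPixel,
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
        ) else { return false }

        // Drawing fills the buffer row by row with the pixel data.
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }

    return drawn ? pixels : nil
}

// Usage: the RGB tuple of the very first pixel.
// if let px = rgbaPixels(of: someImage) { print(px[0], px[1], px[2]) }
```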
If you want to use system resources to assist you in making certain types of changes to images, there is an OS X framework recently brought to iOS called the Accelerate framework, and it has a lot of functions for image manipulation (vImage).
For reading and writing images to the file system look at Apple's 'Image I/O Guide'. For advanced filtering there is Core Image, which allows you to apply filters to images.
EDIT: If you have any interest in really fast, accelerated code that uses the GPU to perform sophisticated filtering, you can check out Brad Larson's GPUImage project on GitHub.
