2D to 3D face reconstruction using OpenGL [closed] - image-processing

Can anyone tell me how to implement 2D to 3D face reconstruction in OpenGL?
I am new to 3D graphics and rendering. How do I do 2D to 3D face reconstruction from a 2D image? What algorithms are used, and how can it be done using OpenGL? Which functions are relevant?
Input: 2D frontal face image
Output: 3D reconstruction of the input

Can anyone tell me how to implement 2D to 3D face reconstruction in OpenGL?
You can't. OpenGL is a 3D rasterizing drawing API. It is used to turn 3D geometry into 2D pictures, i.e. it goes in the opposite direction of what you want to do.
Input: 2D frontal face image
Output: 3D reconstruction of the input
This is the real world, not CSI!
You need at least some additional input to turn this into 3D data. Each pixel of the image is a point in 2D space, i.e. it contributes two knowns to an equation, but you want to recover three unknowns. In other words, this is an underdetermined system of equations.
Additional input could be:
Time: if you have a movie, the movements of the visible objects over time let you infer their positions in space (structure from motion).
Images from multiple angles (see the stereo sketch after this list).
Taking the picture with special illumination, so that the pixel's color provides additional information about the depth.
Research on this has been done at the University of Washington. I recommend looking at all their papers!
A trained observer model, i.e. mimicking what our brain does to infer an object's depth from a 2D picture, which is very unreliable. Just look at any perspective illusion to understand why. Our brain is much better at interpreting images than any computer program, and it can still be fooled.
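To make the multiple-angles option concrete, here is a minimal sketch of computing a disparity (inverse depth) map from two views with OpenCV's block-matching stereo. The file names and matcher parameters are placeholder assumptions, and a real setup needs a calibrated, rectified stereo pair first.

```python
import cv2

# Placeholder file names: a rectified stereo pair of the same scene
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize must be odd
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # larger disparity = closer point

# Scale to 0-255 for viewing; the raw values encode relative depth
view = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("disparity.png", view.astype("uint8"))
```

The disparity map, together with the camera intrinsics, yields the third coordinate per pixel that a single image cannot provide.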

Related

straight line detection from an image [closed]

I am doing a project on detecting holes in roads. I am using a laser to emit a beam onto the road and a camera to take an image of it. The image may look like this.
Now I want to process this image and determine whether the line is straight or not, and if it is curved, how big the curve is.
I don't understand how to do this. I have searched a lot but can't find an appropriate answer. Can anyone help me with that?
This is rather complicated and your question is very broad, but let's have a try:
1. You first have to identify the dots in the pixel image. There are several options for this, but I'd smooth the image with a blur filter and then find the reddest pixels (which are assumed to be the centers of the dots). Store these coordinates in an array of (x, y) pairs.
2. I'd use spline interpolation between the dots. This way one can simply get the local derivative of a curve touching each point. If the maximum of the first derivative is small, the dots lie roughly on a line. If you believe the dots belong to a single curve, the second derivative is your curvature.
For step 1 you may also rely on libraries specialized in image processing (this is the image-processing part of your challenge); one such library is OpenCV.
For step 2 I'd use some math toolkit, either Octave or a math library for a native language; a sketch follows below.
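As a minimal sketch of step 2, assuming the dot centers have already been extracted into arrays sorted by x (the coordinate values here are made up for illustration), SciPy's cubic spline gives the local derivatives directly:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Placeholder dot centers; in practice these come from step 1
xs = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
ys = np.array([1.0, 1.2, 0.9, 1.1, 1.0])

spline = CubicSpline(xs, ys)
slope = spline(xs, 1)      # first derivative at each dot
curvature = spline(xs, 2)  # second derivative ~ curvature for flat curves

print("max |slope|:", np.abs(slope).max())  # small => dots nearly on a line
```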
There are several different ways of measuring the straightness of a line. Since your question is rather vague, it's impossible to say what will work best for you.
But here's my suggestion:
Use linear regression to calculate the best-fit straight line through your points, then calculate the mean-squared distance of each point from this line (straighter lines will give smaller results).
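As a minimal sketch of that suggestion (with made-up point coordinates), using NumPy's least-squares polynomial fit:

```python
import numpy as np

# Placeholder dot centers from the image
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([0.1, 0.9, 2.2, 2.8, 4.1])

m, b = np.polyfit(xs, ys, 1)  # best-fit line y = m*x + b
# Perpendicular distance from each point to the line m*x - y + b = 0
dist = (m * xs - ys + b) / np.hypot(m, 1.0)
print("mean squared distance:", np.mean(dist ** 2))  # smaller = straighter
```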
You may want to read this paper; it is an interesting one for solving your problem.
As @urzeit suggested, you should first find the points as accurately as possible. There's really no way to give good advice on that without seeing real pictures, except maybe: make the task as easy as possible for yourself. For example, if you can set the camera to a very short shutter time (microseconds, if possible) and concentrate the laser energy into that same window, the background will contribute less energy to the image brightness, and the laser spots will simply be bright spots on a dark background.
Measuring the linearity should be straightforward, though: "linearity" is just a different word for "linear correlation". So you can simply calculate the correlation between the X and Y values. As the pictures on the linked Wikipedia page show, a correlation of 1 means all points are on a line.
If you want the actual line, you can simply use Total Least Squares.
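A minimal sketch of both ideas, again with made-up coordinates: the Pearson correlation as a straightness score, and a total-least-squares line via SVD (which measures errors perpendicular to the line):

```python
import numpy as np

# Placeholder dot centers
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([0.1, 0.9, 2.2, 2.8, 4.1])

r = np.corrcoef(xs, ys)[0, 1]  # |r| close to 1 => points nearly on a line

pts = np.column_stack([xs, ys])
centered = pts - pts.mean(axis=0)
_, _, vt = np.linalg.svd(centered)
normal = vt[-1]  # last right-singular vector = normal of the TLS line
print("correlation:", r, "TLS line normal:", normal)
```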

Clear path detection using edge detection [closed]

I want to create an application where a user can detect an unobstructed path using an Android mobile phone. I used a Gaussian filter for smoothing and then used canny edge detection to detect edges.
I thought if I could find the concentration of edges in the video I can direct the user towards an area with a low concentration of edges, hence a clear path.
Is there a way to find the concentration of the edges? I'm new to image processing so any help would be appreciated.
I'm using the OpenCV Android library for my implementation.
Thank you.
The edge detection filter will leave you with a grey-scale image showing where edges are, so you could just throw another smoothing/averaging filter on this to get an idea of the concentration of edges in the neighbourhood of each pixel.
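A minimal sketch of that idea using the OpenCV Python bindings (the question uses the Android bindings, whose calls are analogous; the file name and kernel sizes are illustrative assumptions):

```python
import cv2

frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder input

# Smooth, then detect edges as in the question
edges = cv2.Canny(cv2.GaussianBlur(frame, (5, 5), 0), 50, 150)

# Averaging the binary edge map yields a per-pixel edge density:
# bright areas are cluttered, dark areas are candidate clear paths
density = cv2.blur(edges, (51, 51))
cv2.imwrite("edge_density.png", density)
```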
This is a pretty big topic though, and there are a lot of better and more complex alternatives. You should probably look up academic papers, textbooks and tutorials which talk about 'image segmentation', 'region detection' and mention texture or edges.
Also, consider cases where edges are not indicative of a clear path, such as plain white walls or brick footpaths.

Augmented Reality SDK with OpenCV [closed]

I am developing an Augmented Reality SDK on top of OpenCV. I had some trouble finding tutorials on the topic: which steps to follow, possible algorithms, fast and efficient coding for real-time performance, etc.
So far I have gathered the following information and useful links.
OpenCV installation
Download latest release version.
You can find installation guides here (platforms: linux, mac, windows, java, android, iOS).
Online documentation.
Augmented Reality
For beginners, here is a simple augmented reality example in OpenCV. It is a good start.
For anyone searching for a well-designed, state-of-the-art SDK, I found some general steps that every marker-tracking-based augmented reality SDK should have, considering OpenCV functions.
Main program: creates all classes, initialization, capture frames from video.
AR_Engine class: Controls the parts of an augmented reality application. There should be 2 main states:
detection: tries to detect the marker in the scene
tracking: once the marker is detected, use computationally cheaper techniques to track it in upcoming frames.
There should also be algorithms for finding the position and orientation of the camera in every frame. This is achieved by detecting the homography transformation between the marker detected in the scene and a 2D image of the marker processed offline. The explanation of this method is here (page 18). The main steps for pose estimation are:
Load the camera's intrinsic parameters, previously extracted offline through calibration.
Load the pattern (marker) to track: this is an image of the planar marker we are going to track. It is necessary to extract keypoints and generate descriptors for this pattern so we can later compare them with features from the scene. Algorithms for this task:
SIFT
FAST
SURF
For every frame, run a detection algorithm to extract features from the scene and generate descriptors. Again, we have several options:
SIFT
FAST
SURF
FREAK: a new method (2012) supposed to be the fastest.
ORB
Find matches between the pattern and scene descriptors.
FLANN matcher
Find the homography matrix from those matches. RANSAC can be used beforehand to separate inliers from outliers in the set of matches.
Extract the camera pose from the homography (a sketch of these steps follows the links below).
Sample code on Pose from Homography.
Sample code on Homography from Pose.
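Here is a minimal sketch of the detect-match-homography steps using OpenCV's Python bindings. The pattern file name and the camera matrix K are placeholder assumptions (K must come from your own calibration), and error handling is reduced to early returns:

```python
import cv2
import numpy as np

# Assumed intrinsics; replace with your calibrated camera matrix
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

pattern = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)  # planar marker
orb = cv2.ORB_create(nfeatures=1000)
kp_pat, des_pat = orb.detectAndCompute(pattern, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def estimate_pose(frame_gray):
    kp_scene, des_scene = orb.detectAndCompute(frame_gray, None)
    if des_scene is None:
        return None
    matches = matcher.match(des_pat, des_scene)
    if len(matches) < 10:
        return None
    src = np.float32([kp_pat[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_scene[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects outlier matches while estimating the homography
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    # Decompose H into candidate rotations/translations w.r.t. the plane
    _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return H, rotations, translations
```

Note that decomposeHomographyMat returns up to four candidate solutions; you have to pick the physically plausible one (e.g. the one that places the marker in front of the camera).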
Complete examples:
aruco
Mastering OpenCV samples
Since AR applications often run on mobile devices, you could also consider other feature detectors/descriptors:
FREAK
ORB
Generally, if you can choose the markers, you first detect a square target using an edge detector followed by either a Hough transform or simply contours, and then identify the particular marker from its internal design, rather than using a general point matcher.
Take a look at Aruco for well written example code.

Photoshop-like Curves tool in Objective-C [closed]

I want to adjust an image like the Curves tool in Photoshop does. It changes the image's color, contrast, etc. in each R, G, B channel or in all RGB channels at once.
Any idea how to do this task in Objective-C?
I found this link http://www.planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=68577&lngWId=-1 but it only adjusts curves for the whole image (in VB) and does not support individual color channels like Photoshop.
The Curves tool in Photoshop works with histogramming methods. Essentially, one builds the histogram by counting how many pixels take each value (the possible values run along the X axis of the histogram) throughout the whole image. This can be done per color channel.
Look here for image histogramming
http://en.wikipedia.org/wiki/Image_histogram
After one has the histogram, a curve can be applied (to each color channel if you like). The standard curve is a one-to-one, or linear, curve. This means that when the actual pixel value is 10, the value assigned in your edited image is also 10.
One could imagine any curve, or even a random distribution. While there are many methods, a standard one is the log-based histogram method: it looks at the image histogram and applies the steepest transform-curve slopes to the histogram areas with the highest input pixel counts, thus providing good contrast for the largest number of pixels.
In terms of a curve, the curve you place on top of the histogram simply defines the mapping function from input pixel value to edited pixel value. You can apply a curve without computing a histogram, but the histogram is a good reference for your users so they know where to edit the curve for the best effect.
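A minimal sketch of such a mapping in Python (the question asks for Objective-C, but the idea is language-independent): the control points and file name are illustrative assumptions, and np.interp stands in for the smoother spline Photoshop uses.

```python
import cv2
import numpy as np

def curve_lut(points):
    """Build a 256-entry lookup table from (input, output) control points."""
    xs, ys = zip(*sorted(points))
    return np.interp(np.arange(256), xs, ys).astype(np.uint8)

img = cv2.imread("photo.jpg")  # placeholder file name; loaded as BGR
b, g, r = cv2.split(img)
# Example: lift the midtones of the red channel only
r = cv2.LUT(r, curve_lut([(0, 0), (128, 160), (255, 255)]))
out = cv2.merge([b, g, r])
cv2.imwrite("adjusted.jpg", out)
```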

Detecting broken defective biscuits in a binary image [closed]

I am new to OpenCV and I'm trying to count and locate broken or cracked biscuits
(those which are not perfect circles as they should be) in the image.
What kind of strategy should I follow?
Any suggestion that helps open my mind is welcome. Regards.
Your question is very abstract (it would be better if you provided some pictures), but I can try to answer it.
First of all you have to detect all the biscuits in your image. To do this you have to find the biscuit color (the HSV color space may suit this better) in your picture and convert the input image to a single-channel image (or matrix), in which each element is:
1 (or 255) if the pixel belongs to a biscuit
0 if not.
[The OpenCV function inRange may help you perform such a conversion.]
When the biscuits are detected you can:
Use HoughCircles to detect normal biscuits (if a normal biscuit is round).
Find the contours of each biscuit (take a look at findContours) and compare each contour with a normal biscuit's (if it's not round) using cross-correlation or other methods (Euclidean distance, etc.); a sketch of a contour-based check follows below.
Also look at the HoughCircles tutorial to detect just circles, if your image doesn't contain other circles (besides the biscuits).
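A minimal sketch of the mask-then-contours route, using a simple circularity score instead of cross-correlation; the HSV bounds, area cutoff, and circularity threshold are illustrative guesses that need tuning (OpenCV 4.x API assumed):

```python
import cv2
import numpy as np

img = cv2.imread("biscuits.jpg")  # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (5, 50, 50), (30, 255, 255))  # assumed biscuit hues

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
broken = []
for c in contours:
    area = cv2.contourArea(c)
    if area < 500:  # skip small noise blobs
        continue
    perimeter = cv2.arcLength(c, True)
    # A perfect circle scores 1; cracks and missing pieces score lower
    circularity = 4 * np.pi * area / (perimeter * perimeter)
    if circularity < 0.8:
        broken.append(c)

print(f"Found {len(broken)} broken biscuit(s)")
```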
