Direction on image pattern description and representation

I have a basic question regarding pattern learning, or pattern representation. Assume I have a complex pattern of this form. Could you please provide me with some research directions or concepts that I can follow to learn how to represent (mathematically describe) these kinds of patterns? In general the pattern does not have a closed contour, nor can it be represented with analytical objects like boxes, circles, etc.

By "mathematically describe" I'm assuming you mean deriving from the image a vector of values that represents the content of the image. In computer vision/image processing we call this an "image descriptor".
There are several image descriptors that could be applied to pixel-based data of the form you showed, which appears to be one value per pixel, i.e. greyscale images.
One approach is to perform "spatial gridding", where you divide the image up into a regular grid of constant size, e.g. a 4x4 grid. You then average the pixel values within each cell of the grid and concatenate these values to form a 16-element vector; this coarsely describes the pixel distribution of the image.
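A rough sketch of this gridding idea (assuming OpenCV; gridDescriptor is just an illustrative name, and cv::resize with INTER_AREA is used as a shortcut for per-cell averaging):

#include <opencv2/opencv.hpp>
#include <vector>

// Average the pixels in each cell of a gridRows x gridCols grid and
// concatenate the cell means into one descriptor vector.
std::vector<float> gridDescriptor(const cv::Mat& grey, int gridRows = 4, int gridCols = 4) {
    cv::Mat cells;
    // INTER_AREA resizing computes (approximately) the mean of the pixels
    // falling into each output cell.
    cv::resize(grey, cells, cv::Size(gridCols, gridRows), 0, 0, cv::INTER_AREA);
    cells.convertTo(cells, CV_32F);
    // Flatten row by row into a gridRows*gridCols element vector.
    return std::vector<float>(cells.begin<float>(), cells.end<float>());
}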
Another approach would be to use "image moments", which are 2D statistical moments of the pixel values. The central moment of order (i, j) is:
mu_ij = sum over x = 1..W and y = 1..H of (x - mu_x)^i * (y - mu_y)^j * f(x,y)
where f(x,y) is the pixel value at coordinates (x,y), W and H are the image width and height, and mu_x and mu_y are the mean x and y coordinates. The values i and j select the order of moment you want to compute. Various orders of moment can be combined in different ways; for example, the "Hu moments" are 7 numbers computed from combinations of the (normalized) central moments.
The cool thing about the Hu moments is that you can translate, scale or rotate the image and still get the same 7 values (the seventh only changes sign if you flip the image), which makes this a robust, similarity-invariant image descriptor.
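As a minimal sketch (assuming OpenCV; the file path is just a placeholder), the moments and Hu moments of a greyscale image can be computed like this:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Load the pattern as a greyscale image (placeholder path).
    cv::Mat img = cv::imread("pattern.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;

    // Raw/central moments of the intensity image.
    cv::Moments m = cv::moments(img, /*binaryImage=*/false);

    // The 7 Hu invariants derived from the normalized central moments.
    double hu[7];
    cv::HuMoments(m, hu);

    for (int i = 0; i < 7; ++i)
        std::cout << "hu[" << i << "] = " << hu[i] << "\n";
    return 0;
}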
Hope this helps as a general direction to read more in.

Related

Is there a known algorithm to find groups of adjacent pixels with similar color?

I'd like to know if this is a known algorithm with a name.
I've never done any image processing, but I'm picturing an image as a 2-d matrix of 3-d vectors (ignore transparency).
The only input parameter is a distance threshold. Every pixel is tested against its neighbors: if they are closer than the threshold, they join a group and their values are averaged. As groups grow by gaining new pixels, every pixel in a group takes on the group's average value.
For your typical selfie the result might resemble quantizing or posterizing, but unlike quantizing or posterizing, there is no fixed count of output colors. If absolutely no pixels are close enough to their neighbors, the result is a 1:1 mapping of every pixel to its own group.
Is there a name for this?
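For what it's worth, a literal sketch of the procedure described above, which is essentially region growing (flood fill with a colour tolerance), might look like this; all names are illustrative and the incremental re-averaging of group colours is left out:

#include <opencv2/opencv.hpp>
#include <cmath>
#include <queue>

// Grow 4-connected groups of pixels whose colour distance is below a
// threshold and label each group. The per-group mean colour can then be
// computed in a second pass over the labels.
cv::Mat groupSimilarPixels(const cv::Mat& bgr, double threshold) {
    auto dist = [](const cv::Vec3b& a, const cv::Vec3b& b) {
        cv::Vec3d d(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
        return std::sqrt(d.dot(d));
    };
    cv::Mat labels = cv::Mat::zeros(bgr.size(), CV_32S);
    int nextLabel = 0;
    for (int y = 0; y < bgr.rows; ++y)
        for (int x = 0; x < bgr.cols; ++x) {
            if (labels.at<int>(y, x) != 0) continue;   // already grouped
            ++nextLabel;
            std::queue<cv::Point> q;
            q.push(cv::Point(x, y));
            labels.at<int>(y, x) = nextLabel;
            while (!q.empty()) {
                cv::Point p = q.front(); q.pop();
                const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
                for (int k = 0; k < 4; ++k) {
                    cv::Point n(p.x + dx[k], p.y + dy[k]);
                    if (n.x < 0 || n.y < 0 || n.x >= bgr.cols || n.y >= bgr.rows)
                        continue;
                    if (labels.at<int>(n) != 0) continue;
                    if (dist(bgr.at<cv::Vec3b>(p), bgr.at<cv::Vec3b>(n)) < threshold) {
                        labels.at<int>(n) = nextLabel;
                        q.push(n);
                    }
                }
            }
        }
    return labels;
}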

Map one camera's colour profile to another

I have two cameras (A & B) with which I've taken photos of a calibration scene, then corrected distortion and used feature mapping to get a pixel-precise registration, resulting in the following:
As you can see, the colour response is quite different. What I would like to do now is take a new photo with A and answer the question: what would it look like if instead I had used camera B?
Is there some existing technique or algorithm to convert between the colour spaces/profiles of two cameras like this?
From the image you provided it is not too hard to segment it into small squares. After that, take the mean (or even better, the median) of each square in both images. Now you have 2*m*n values, as follows: MeansReference_(m*n) and MeansQuery_(m*n). The linear color correction matrix is C = [a11 a12 a13; a21 a22 a23; a31 a32 a33].
You can construct this linear system:
MeansReference[i][j]= C * MeansQuery[i][j]
Where:
MeansReference[i][j] is a vector (3*1) of the color (R,G,B) of the square [i,j] in the Reference image.
MeansQuery[i][j] is a vector (3*1) of the color (R,G,B) of the square [i,j] in the Query image.
C is the 3*3 matrix (a11, a12, ..., a33)
Now, for each i,j you get 3 linear equations (one each for R, G and B). Since there are 9 unknowns (a11...a33) you need at least 9 equations, which means at least 3 squares (each square provides 3 equations). However, the more equations you construct, the more accurate the result.
How do you solve a linear system with more equations than unknowns? Use batch least-squares estimation (batch LSE). You can find the details in the book Neuro-Fuzzy and Soft Computing by Jang, Sun and Mizutani, or in any online source.
After you find the 9 unknowns you have a color correction matrix. Just apply it to any image from the new camera and you will get an image that looks like it was taken by the old camera. If you want the opposite, apply C^-1 instead.
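A minimal sketch of that fitting step (assuming OpenCV, with the per-square means already collected into N x 3 floating-point matrices; fitColorMatrix and applyColorMatrix are illustrative names):

#include <opencv2/opencv.hpp>

// Solve MeansReference = C * MeansQuery for the 3x3 matrix C in a
// least-squares sense, given N matched patch colors (N >= 3).
// 'query' and 'reference' are N x 3 CV_64F matrices, one (R,G,B) row per patch.
cv::Mat fitColorMatrix(const cv::Mat& query, const cv::Mat& reference) {
    cv::Mat X;                                   // solves query * X = reference
    cv::solve(query, reference, X, cv::DECOMP_SVD);
    return X.t();                                // so that ref^T = C * query^T
}

// Apply C to every pixel of an image; cv::transform does exactly
// dst pixel = C * src pixel for a 3-channel image and a 3x3 matrix.
void applyColorMatrix(const cv::Mat& src, cv::Mat& dst, const cv::Mat& C) {
    cv::transform(src, dst, C);
}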
Good Luck!

How to apply box filter on integral image? (SURF)

Assume that I have a grayscale (8-bit) image and an integral image created from that same image.
The image resolution is 720x576. According to the SURF algorithm, each octave is composed of 4 box filters, which are defined by the number of pixels on their side. The first octave uses filters of 9x9, 15x15, 21x21 and 27x27 pixels. The second octave uses filters of 15x15, 27x27, 39x39 and 51x51 pixels. The third octave uses filters of 27x27, 51x51, 75x75 and 99x99 pixels. If the image is sufficiently large (and I guess 720x576 is big enough, right?), a fourth octave is added: 51x51, 99x99, 147x147 and 195x195. These octaves partially overlap one another to improve the quality of the interpolated results.
// so, we have:
//
// 9x9 15x15 21x21 27x27
// 15x15 27x27 39x39 51x51
// 27x27 51x51 75x75 99x99
// 51x51 99x99 147x147 195x195
The questions are: What are the values in each of these filters? Should I hardcode these values, or should I calculate them? How exactly (numerically) do I apply the filters to the integral image?
Also, for calculating the Hessian determinant I found two approximations:
det(HessianApprox) = DxxDyy − (0.9Dxy)^2 and det(HessianApprox) = DxxDyy − (0.81Dxy)^2. Which one is correct?
(Dxx, Dyy, and Dxy are Gaussian second order derivatives).
I had to go back to the original paper to find the precise answers to your questions.
Some background first
SURF leverages a common Image Analysis approach for regions-of-interest detection that is called blob detection.
The typical approach for blob detection is a difference of Gaussians.
There are several reasons for this, the first one being to mimic what happens in the visual cortex of the human brain.
The drawback of the difference of Gaussians (DoG) is its computation time, which is too expensive to be applied to large image areas.
In order to bypass this issue, SURF takes a simple approach. A DoG is simply the computation of two Gaussian averages (or equivalently, applying two Gaussian blurs) followed by taking their difference.
A quick-and-dirty approximation (not so dirty for small regions) is to approximate the Gaussian blur by a box blur.
A box blur is the average value of all the image values in a given rectangle. It can be computed efficiently via integral images.
Using integral images
Inside an integral image, each pixel value is the sum of all the pixels that were above it and on its left in the original image.
The top-left pixel value in the integral image is thus 0, and the bottom-right pixel of the integral image thus holds the sum of all the original pixels (in practice the integral image is often stored with one extra row and column of zeros, as OpenCV's cv::integral does).
Then you just need to note that the box blur is, up to normalization by its area, the sum of all the pixels inside a given rectangle (not necessarily anchored at the top-leftmost pixel of the image), and apply the following simple geometric reasoning.
If you have a rectangle with corners ABCD (top left, top right, bottom left, bottom right), then the value of the box filter is given by:
boxFilter(ABCD) = A + D - B - C,
where A, B, C, D is a shortcut for IntegralImagePixelAt(A) (B, C, D respectively).
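As a small sketch (assuming OpenCV's cv::integral, which stores the integral image with one extra row and column of zeros), the box sum of any rectangle then costs four lookups:

#include <opencv2/opencv.hpp>

// Sum of the original pixels inside the rectangle [x, x+w) x [y, y+h),
// using an integral image produced by cv::integral(src, sum, CV_64F),
// whose size is (W+1) x (H+1).
double boxSum(const cv::Mat& integral, int x, int y, int w, int h) {
    // A, B, C, D are the integral-image values at the rectangle's
    // top-left, top-right, bottom-left and bottom-right corners.
    double A = integral.at<double>(y,     x);
    double B = integral.at<double>(y,     x + w);
    double C = integral.at<double>(y + h, x);
    double D = integral.at<double>(y + h, x + w);
    return A + D - B - C;
}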
Integral images in SURF
SURF is not using box blurs of sizes 9x9, etc. directly.
What it uses instead is several orders of Gaussian derivatives, or Haar-like features.
Let's take an example. Suppose you are to compute the 9x9 filters output. This corresponds to a given sigma, hence a fixed scale/octave.
The sigma being fixed, you center your 9x9 window on the pixel of interest. Then you compute the output of the 2nd-order Gaussian derivative in each direction (horizontal, vertical, diagonal). Fig. 1 in the paper illustrates the vertical and diagonal filters.
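To make that concrete, here is a hedged sketch of how the vertical second-derivative (Dyy) response of the 9x9 filter could be evaluated from the integral image, reusing the boxSum() helper sketched earlier; the three-band layout with weights +1, -2, +1 follows the figures in the SURF paper, but treat the exact offsets as illustrative:

// Unnormalized Dyy response of the 9x9 SURF-style box filter centered at
// (cx, cy): three 5-wide x 3-tall bands stacked vertically, weighted
// +1, -2, +1. No border checking is done here, and SURF additionally
// normalizes responses by the filter area (omitted).
double dyy9x9(const cv::Mat& integral, int cx, int cy) {
    int x = cx - 2;                              // bands are 5 wide, centered on cx
    double top    = boxSum(integral, x, cy - 4, 5, 3);
    double middle = boxSum(integral, x, cy - 1, 5, 3);
    double bottom = boxSum(integral, x, cy + 2, 5, 3);
    return top - 2.0 * middle + bottom;
}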
The Hessian determinant
There is a relative weighting factor that balances the approximated filter responses. Let's believe the paper that the determinant is equal to:
Det = DxxDyy - (0.9 * Dxy)^2.
Expanding the square, this is exactly the same as Det = DxxDyy - 0.81*Dxy^2, so the two expressions you found are equivalent.
Look at page 17 of this document
http://www.sci.utah.edu/~fletcher/CS7960/slides/Scott.pdf
If you have written code for a normal 2D Gaussian convolution, you can simply use the box filter as the kernel and convolve the original image (not the integral image) with it. The results from this method will be the same as those from the integral-image approach.

SIFT matches and recognition?

I am developing an application where I am using SIFT + RANSAC and a homography to find an object (OpenCV, C++/Java). The problem I am facing is that when there are many outliers, RANSAC performs poorly.
For this reason I would like to try what the author of SIFT said works pretty well: voting.
I have read that we should vote in a 4-dimensional feature space, where the 4 dimensions are:
Location [x, y] (some say translation)
Scale
Orientation
While with OpenCV it is easy to get a match's scale and orientation with:
cv::KeyPoint::octave
cv::KeyPoint::angle
I am having a hard time understanding how I can calculate the location.
I have found an interesting slide showing that with only one match we are able to draw a bounding box:
But I don't get how I could draw that bounding box with just one match. Any help?
You are looking for the largest set of matched features that fit a geometric transformation from image 1 to image 2. In this case, it is the similarity transformation, which has 4 parameters: translation (dx, dy), scale change ds, and rotation d_theta.
Let's say you have matched two features: f1 from image 1 and f2 from image 2. Let (x1,y1) be the location of f1 in image 1, let s1 be its scale, and let theta1 be its orientation. Similarly you have (x2,y2), s2, and theta2 for f2.
The translation between two features is (dx,dy) = (x2-x1, y2-y1).
The scale change between two features is ds = s2 / s1.
The rotation between two features is d_theta = theta2 - theta1.
So, dx, dy, ds, and d_theta are the dimensions of your Hough space. Each bin corresponds to a similarity transformation.
Once you have performed Hough voting, and found the maximum bin, that bin gives you a transformation from image 1 to image 2. One thing you can do is take the bounding box of image 1 and transform it using that transformation: apply the corresponding translation, rotation and scaling to the corners of the image. Typically, you pack the parameters into a transformation matrix, and use homogeneous coordinates. This will give you the bounding box in image 2 corresponding to the object you've detected.
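A small sketch of that last step (assuming OpenCV; dx, dy, ds and d_theta are the parameters defined above, and transformCorners is an illustrative name):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Transform the corners of image 1 into image 2 under the similarity
// transform recovered from the winning Hough bin.
std::vector<cv::Point2f> transformCorners(cv::Size img1Size,
                                          double dx, double dy,
                                          double ds, double d_theta) {
    // 3x3 similarity matrix in homogeneous coordinates:
    // scaled rotation followed by translation.
    double c = ds * std::cos(d_theta), s = ds * std::sin(d_theta);
    cv::Mat M = (cv::Mat_<double>(3, 3) <<  c, -s, dx,
                                            s,  c, dy,
                                            0,  0,  1);

    std::vector<cv::Point2f> corners = {
        {0.f, 0.f},
        {(float)img1Size.width, 0.f},
        {(float)img1Size.width, (float)img1Size.height},
        {0.f, (float)img1Size.height}
    };
    std::vector<cv::Point2f> out;
    cv::perspectiveTransform(corners, out, M);   // bounding box corners in image 2
    return out;
}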
When using the Hough transform, you create a signature storing the displacement vectors of every feature from the template centroid (either (w/2,h/2) or with the help of central moments).
E.g. for 10 SIFT features found on the template, their relative positions with respect to the template's centroid form a list of displacement vectors {a, b}. Now, let's search for this object in a query image: every SIFT feature found in the query image and matched with one of the template's 10 casts a vote for its corresponding centroid:
votemap(feature.x - a, feature.y - b) += 1, where (a, b) is the displacement vector stored for this particular matched feature.
If some of those features cast successfully at the same point (clustering is essential), you have found an object instance.
Signature building and voting are inverse procedures. Let's assume V = (-20, -10). During the search in the novel image, when a match is found, we read off its relative orientation and scale and cast the vote accordingly. E.g. if the object appears at half size and rotated by -10 degrees, the predicted centroid is at the feature location minus 0.5 * R(-10°) * V, i.e. V scaled by 0.5 and rotated by -10 degrees before being subtracted.
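A rough sketch of this voting loop (hypothetical structures; it assumes each match carries the query keypoint location plus the relative scale and rotation to the template, and the stored displacement of the matched template feature from the template centroid):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

struct Match {
    cv::Point2f queryPt;      // keypoint location in the query image
    cv::Point2f templOffset;  // (a, b): template feature minus template centroid
    float relScale;           // query scale / template scale
    float relAngle;           // query angle - template angle, in radians
};

// Accumulate centroid votes; a strong, clustered peak indicates an instance.
cv::Mat voteForCentroid(const std::vector<Match>& matches, cv::Size querySize) {
    cv::Mat votemap = cv::Mat::zeros(querySize, CV_32F);
    for (const Match& m : matches) {
        // Scale and rotate the stored displacement, then subtract it
        // from the feature location to predict the centroid.
        float c = std::cos(m.relAngle), s = std::sin(m.relAngle);
        cv::Point2f d(m.relScale * (c * m.templOffset.x - s * m.templOffset.y),
                      m.relScale * (s * m.templOffset.x + c * m.templOffset.y));
        cv::Point2f centroid = m.queryPt - d;
        int x = cvRound(centroid.x), y = cvRound(centroid.y);
        if (x >= 0 && x < votemap.cols && y >= 0 && y < votemap.rows)
            votemap.at<float>(y, x) += 1.0f;     // in practice, use coarse bins
    }
    return votemap;
}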
To complete Dima's answer, one needs to add that the 4D Hough space is quantized into a (possibly small) number of 4D boxes, where each box corresponds to the similarity transform given by its center.
Then, for each possible similarity obtained via a tentative matching of features, add 1 to the corresponding box (or cell) in the 4D space. The output similarity is given by the cell with the most votes.
In order to compute the transform from 1 match, just use Dima's formulas in his answer. For several pairs of matches, you may need to use a least-squares fit.
Finally, the transform can be applied with the function cv::warpPerspective(), where the third row of the perspective matrix is set to [0,0,1].
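A minimal sketch of that final step (assuming OpenCV and the dx, dy, ds, d_theta parameters from Dima's answer):

#include <opencv2/opencv.hpp>
#include <cmath>

// Warp image 1 into image 2's frame under the recovered similarity transform.
// The 3x3 matrix holds the similarity in its top 2x3 block and [0,0,1] as
// its third row, as described above.
cv::Mat warpBySimilarity(const cv::Mat& img1, cv::Size outSize,
                         double dx, double dy, double ds, double d_theta) {
    double c = ds * std::cos(d_theta), s = ds * std::sin(d_theta);
    cv::Mat M = (cv::Mat_<double>(3, 3) <<  c, -s, dx,
                                            s,  c, dy,
                                            0,  0,  1);
    cv::Mat out;
    cv::warpPerspective(img1, out, M, outSize);
    return out;
}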

Generating Filter for Spatial Filtering

I read on Wikipedia that if we need to perform spatial filtering on an image, we have to have a filter, for example 3x3. What I don't understand is how we choose the values for the filter. Let's say that the original image is greyscale, so its intensity goes from 0 to 255 (8 bits).
Another question: if the image is 9x9, how can we apply the filter to the boundary pixels of that image? If we choose to pad the image so the filter can work with all boundary pixels, what would be the values of the new padded pixels?
Thank you very much
The values of the filter depend on what you want to achieve by filtering. There are many filter designs, each intended to perform a specific task. For example, one of the simplest filters, f = [-1 0 1], performs first-order differencing on each pixel in the horizontal direction (an x-derivative), while its transpose f' does the same in the vertical direction (a y-derivative). The values are chosen for exactly that purpose, and the same reasoning applies to 3x3 filters. In general, the choice of values comes from the design of 2D (two-directional) finite impulse response (FIR) and infinite impulse response (IIR) filters.
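As a small sketch (assuming OpenCV), applying such a derivative kernel to an image with cv::filter2D looks like this (the file path is a placeholder):

#include <opencv2/opencv.hpp>

int main() {
    // Load as 8-bit greyscale.
    cv::Mat img = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;

    // 1x3 central-difference kernel: approximates the horizontal derivative.
    cv::Mat kx = (cv::Mat_<float>(1, 3) << -1.f, 0.f, 1.f);

    // Output in CV_32F so negative responses are not clipped.
    cv::Mat dx, dy;
    cv::filter2D(img, dx, CV_32F, kx);
    cv::filter2D(img, dy, CV_32F, kx.t());   // transposed kernel: vertical derivative

    return 0;
}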
You should keep in mind that the result of the filter operation at the borders is not as accurate. Filtering of border pixels is done by interpolating out-of-range pixels, in a process called border interpolation. In OpenCV and similar image processing/computer vision libraries there are several ways to do it. For example, OpenCV defines the following:
Various border types, image boundaries are denoted with '|'
BORDER_REPLICATE: aaaaaa|abcdefgh|hhhhhhh
BORDER_REFLECT: fedcba|abcdefgh|hgfedcb
BORDER_REFLECT_101: gfedcb|abcdefgh|gfedcba
BORDER_WRAP: cdefgh|abcdefgh|abcdefg
BORDER_CONSTANT: iiiiii|abcdefgh|iiiiiii with some specified 'i'
Thus, depending on the border type you choose, the border pixels are padded accordingly.
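For example (a minimal sketch, assuming OpenCV), you can either pad explicitly with cv::copyMakeBorder or pass a border type straight to the filtering call:

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("input.png", cv::IMREAD_GRAYSCALE);  // placeholder path
    if (img.empty()) return 1;

    // Pad by 1 pixel on each side for a 3x3 filter, replicating the edge values.
    cv::Mat padded;
    cv::copyMakeBorder(img, padded, 1, 1, 1, 1, cv::BORDER_REPLICATE);

    // Or let the filtering function handle the border itself.
    cv::Mat kernel = cv::Mat::ones(3, 3, CV_32F) / 9.f;   // simple 3x3 box filter
    cv::Mat smoothed;
    cv::filter2D(img, smoothed, -1, kernel, cv::Point(-1, -1), 0, cv::BORDER_REFLECT_101);

    return 0;
}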
