Array Conditional Filtering (ArrayFire)

I have two arrays, A (a 2D image) and B (1D intensity values), and I am trying to make a third 2D array that is the size of A but contains only the values of A that also appear in B. What would be the right way to do that without moving data to the host?
PS: I am trying to find a way to work with the result of the 'regions' function (mask out certain regions and keep others).
Thanks.
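
One way to keep everything on the device is to build a membership mask by comparing every element of A against every value of B and reducing over B. Here is a minimal ArrayFire C++ sketch, assuming B holds the region labels (or intensity values) you want to keep; keepValues is a made-up name, not an ArrayFire function:

    #include <arrayfire.h>

    // Hypothetical helper: zero out every element of A whose value does
    // not occur in B. Everything stays on the device.
    af::array keepValues(const af::array& A, const af::array& B) {
        af::array Aflat = af::flat(A);             // N x 1 column of values
        dim_t N = Aflat.elements();
        dim_t n = B.elements();

        // Compare every element of A against every value of B (N x n),
        // then reduce along dim 1: an element survives if it matches
        // ANY value in B.
        af::array matches = af::tile(Aflat, 1, (unsigned)n)
                         == af::tile(af::moddims(B, 1, n), (unsigned)N);
        af::array mask = af::anyTrue(matches, 1);  // N x 1 boolean mask

        // Reshape back to A's shape and keep A where the mask is true.
        mask = af::moddims(mask, A.dims());
        return af::select(mask, A, 0.0);
    }

For the regions use case, n (the number of labels to keep) is small, so the N x n intermediate is cheap; for a very large B you would process B in batches instead.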

Related

How to multiply two pixels together

I'm reading a paper that involves finding the mean squared error of blocks of pixels. It uses the formula below. I is one image, I' is another image, and x and y are the pixel coordinates in each image.
What is confusing me is exactly how to do this math. Right now I have my images in RGB values. But how do I do this image math properly?
What is the correct way to square my resulting difference image? Is it by squaring the individual RGB channels alone, or should I be converting this to an int representation first?
Ideally I want to be able to compare several MSEs of different images, so keeping all of this data in individual channels doesn't seem to make sense. Is my intuition correct that I should just convert everything to an int representation, then square the differences, divide by N^2, and find the smallest resulting value?
Formula: MSE = (1/N^2) * sum_x sum_y (I(x,y) - I'(x,y))^2
From this answer to a related question.
It really depends on what you want to detect. For example, do you just want a single metric for how different two substantially similar images are? Or do you want to compare discolorations between two images that are not substantially the same spatially?
You could use any of a variety of approaches to determine what the value of I actually is. For example, it could be the R value, or G, or B, or something like the sum R+G+B.
I would try a few of these and see how your results turn out, in addition to doing more research on color image differentiation.
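
As an illustration of one of those choices, here is a minimal C++/OpenCV sketch that treats I as the per-pixel channel sum R+G+B, squares the scalar difference, and divides by the pixel count; the function name and the choice of R+G+B are assumptions, not the paper's definition:

    #include <opencv2/opencv.hpp>

    // Hypothetical example: MSE between two same-sized 8-bit BGR images,
    // using the channel sum as the scalar intensity I.
    double mse(const cv::Mat& a, const cv::Mat& b) {
        CV_Assert(a.size() == b.size() && a.type() == CV_8UC3
                  && b.type() == CV_8UC3);
        double sum = 0.0;
        for (int y = 0; y < a.rows; ++y) {
            for (int x = 0; x < a.cols; ++x) {
                cv::Vec3b pa = a.at<cv::Vec3b>(y, x);
                cv::Vec3b pb = b.at<cv::Vec3b>(y, x);
                double ia = pa[0] + pa[1] + pa[2];   // I  = B+G+R
                double ib = pb[0] + pb[1] + pb[2];   // I' = B+G+R
                double d = ia - ib;
                sum += d * d;  // square the scalar difference, not channels
            }
        }
        return sum / (double)(a.rows * a.cols);      // divide by pixel count
    }

Squaring each RGB channel separately and summing is another valid convention; the important thing is to use the same convention for every image you compare.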

OpenCV: Matching closest image for large set of images using divide and conquer

Say I have an image A and another set of images C={B(1)...B(n)}.
I want to find the closest matching image to A in the image set C.
I found this can be done as explained in the sample "matching_to_many_images.cpp"
https://github.com/kipr/opencv/blob/master/samples/cpp/matching_to_many_images.cpp
Now I want to scale this to the case where n is a large number.
I was hoping to divide the C set into sub-sets, find the closest image to A in each sub-set separately (in parallel on different nodes), and finally aggregate the per-sub-set winners to find the best match over the entire C set.
But I found out that the "DMatch" count for each image changes depending on the sub-set it belongs to. In other words, the DMatch count is a relative value, not an absolute one.
How can I solve this issue using divide and conquer?
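
The relative-count problem goes away if each A-vs-B(i) pair is scored by something that depends only on those two images, so that scores from different sub-sets are directly comparable. A hedged C++/OpenCV sketch of one such absolute score (the 0.75 ratio threshold and the function name are assumptions, not taken from the sample):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Score a candidate with an absolute measure: the number of
    // ratio-test survivors of a brute-force match. descA and descB are
    // precomputed descriptor matrices (e.g. from ORB or SIFT); pass
    // cv::NORM_HAMMING for binary descriptors, cv::NORM_L2 for float.
    int absoluteScore(const cv::Mat& descA, const cv::Mat& descB,
                      int normType) {
        cv::BFMatcher matcher(normType);
        std::vector<std::vector<cv::DMatch>> knn;
        matcher.knnMatch(descA, descB, knn, 2);   // 2 nearest neighbours
        int good = 0;
        for (const auto& m : knn)                 // Lowe's ratio test
            if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
                ++good;
        return good;  // depends only on the pair (A, B(i))
    }

Each node can then return its best (score, index) pair, and the aggregation step simply takes the maximum over all sub-sets.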

Implementation of image dilation and erosion

I am trying to figure out an efficient way of implementing image dilation and erosion for binary images. As far as I understand it, the naive way would be:
for each pixel in the image:
    if the pixel is 1:
        loop over the neighborhood given by the structuring element's
        height and width:
            (dilate) substitute each pixel of the image with the value
                     at the corresponding location of the SE
            (erode)  check whether the whole neighborhood equals the SE;
                     if so, keep the pixels, else delete the centre
so for each pixel I also have to loop through the SE, making this O(N*M*W*H) for an N x M image and a W x H structuring element.
Is there a more elegant way of doing this?
Yes, there are!
First you want to decompose your structuring element (if possible) into segments; a square, for example, decomposes into a vertical and a horizontal segment. Then you perform erosion/dilation only on segments, which already decreases the complexity, as the sketch below shows.
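
Here is a minimal C++ sketch of that decomposition for binary erosion by a W x H rectangle (helper names are made up; out-of-bounds pixels are treated as 0). Two segment passes cost O(N*M*(W+H)) instead of O(N*M*W*H):

    #include <cstdint>
    #include <vector>

    // Erode a row-major 0/1 image by a centred 1-D segment, either
    // horizontal (1 x len) or vertical (len x 1).
    std::vector<uint8_t> erodeSegment(const std::vector<uint8_t>& in,
                                      int rows, int cols,
                                      int len, bool horizontal) {
        std::vector<uint8_t> out(in.size(), 0);
        int r = len / 2;                        // radius of the segment
        for (int y = 0; y < rows; ++y)
            for (int x = 0; x < cols; ++x) {
                uint8_t v = 1;                  // all-ones test
                for (int k = -r; k <= r && v; ++k) {
                    int yy = horizontal ? y : y + k;
                    int xx = horizontal ? x + k : x;
                    v = (yy >= 0 && yy < rows && xx >= 0 && xx < cols)
                            ? in[yy * cols + xx] : 0;
                }
                out[y * cols + x] = v;
            }
        return out;
    }

    // Rectangle erosion = horizontal segment pass, then vertical pass.
    std::vector<uint8_t> erodeRect(const std::vector<uint8_t>& img,
                                   int rows, int cols, int w, int h) {
        return erodeSegment(erodeSegment(img, rows, cols, w, true),
                            rows, cols, h, false);
    }

Dilation is the dual: replace the all-ones test with an any-one test, or erode the complement of the image.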
Now for the erosion/dilation passes themselves, you have different solutions:
If you work only on 8-bit images and do not use C/C++, you can use an implementation based on histograms in order to keep track of the minimum/maximum value. See this remarkable work here. The author even adds "landmarks" in order to reduce the number of operations.
If you use C/C++ and work on different image encodings, then you can use fast comparisons (SSE2, SSE4 and auto-vectorization), as the SMIL library does. In this case you compare row with row, instead of working pixel by pixel, using hardware acceleration. It seems to be the fastest library around.
A last way, slower but working for all types of encoding, is the Lemmonier algorithm. It is implemented in the fulguro library.
For disk-shaped structuring elements there is nothing "fast"; you have to use the basic algorithm. For hexagonal structuring elements, you can work row by row, but that cannot be parallelized.

Detect the two highest Peaks from Histogram

I was trying to understand how to detect the two peaks of a histogram. There can be multiple peaks, but I need to pick the two highest. These peaks may be shifted left or right, their spread can vary, and their peak values might change, so I have to find a robust way to get hold of both of them in Matlab.
What I have done so far is to create a 5-value window. This window is populated with values from the histogram and a scan is performed: each time I move 5 steps ahead to the next values and compare the previous window's value with the current one; whichever is greater is kept.
Is there a better way of doing this?
The simplest way to do this would be to first smooth the data using a Gaussian kernel to remove the high-frequency variations,
then use the function localmax to find the local maxima.
Return the data from the hist (or histc) function to a variable (y = hist(x,bin);) and use the PEAKFINDER FileExchange submission to find the local maxima.
I have also used the PEAKDET function from Eli Billauer. It works great. You can check my answer here with a code example.
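
The smooth-then-scan idea is language-agnostic; here is a self-contained C++ sketch (the kernel truncation at 3 sigma and the sigma = 2 default are arbitrary choices) that smooths the histogram and keeps the indices of the two tallest local maxima:

    #include <cmath>
    #include <utility>
    #include <vector>

    // Convolve the histogram with a small Gaussian kernel to suppress
    // high-frequency variations.
    std::vector<double> smoothHist(const std::vector<double>& h,
                                   double sigma = 2.0) {
        int r = (int)std::ceil(3.0 * sigma);
        std::vector<double> k(2 * r + 1), out(h.size(), 0.0);
        double s = 0.0;
        for (int i = -r; i <= r; ++i)
            s += (k[i + r] = std::exp(-0.5 * i * i / (sigma * sigma)));
        for (double& v : k) v /= s;                // normalize the kernel
        for (int i = 0; i < (int)h.size(); ++i)
            for (int j = -r; j <= r; ++j) {
                int idx = i + j;
                if (idx >= 0 && idx < (int)h.size())
                    out[i] += h[idx] * k[j + r];
            }
        return out;
    }

    // Indices of the two highest local maxima of the smoothed histogram
    // (-1 if fewer than two exist).
    std::pair<int, int> twoHighestPeaks(const std::vector<double>& hist) {
        std::vector<double> s = smoothHist(hist);
        int p1 = -1, p2 = -1;
        for (int i = 1; i + 1 < (int)s.size(); ++i)
            if (s[i] > s[i - 1] && s[i] >= s[i + 1]) {  // local maximum
                if (p1 < 0 || s[i] > s[p1]) { p2 = p1; p1 = i; }
                else if (p2 < 0 || s[i] > s[p2]) { p2 = i; }
            }
        return {p1, p2};
    }

Because the scan keeps the two best local maxima directly, it is robust to the peaks shifting left or right between histograms.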

Working with decision trees

I know: tl;dr. I'll try to explain my problem without bothering you with tons of crappy code. I'm working on a school assignment. We have pictures of smurfs and we have to find them with foreground/background analysis. I have a decision tree in Java that starts with all the data (HSV histograms) in one single node. It then tries to find the best attribute (from the histogram data) to split the tree on, executes the split, and creates a left and a right sub-tree with the data divided over both nodes. All the data is still kept in the main tree to be able to calculate the Gini index.
So after 26 minutes of analysing smurfs, my PC has a giant tree with splits and other data. Now my question is: can anyone give me a global idea of how to analyse a new picture and determine which pixels could be "smurf pixels"? I know I have to generate a new array of data points with the HSV histograms of the new picture, and then I need to use the generated tree to determine which pixels belong to a smurf.
Can anyone give me a pointer on how to do this?
Some additional information: every decision-tree object has a Split object that holds the best attribute to split on, the value to split on, and a Gini index.
If I need to provide any additional information, I'd like to hear it.
OK. Basically, in unoptimized pseudo-code, to label pixels in a new image:

for each pixel in the new image:
    calculate the pixel's HSV features
    then, recursively, starting from the tree's root:
        if this is a leaf, give the pixel the dominant label of the node
        otherwise, check the splitting criterion against the pixel's
        features, and go to the right or left child accordingly
I hope this makes sense in your context.
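
For concreteness, a minimal C++ sketch of that traversal (the Node layout mirrors the Split object described above, but all names here are made up):

    #include <vector>

    // Hypothetical node layout: a leaf stores the dominant class label,
    // an internal node stores the attribute index and split value.
    struct Node {
        bool isLeaf;
        int label;         // dominant label at a leaf (smurf / not smurf)
        int attribute;     // index of the feature to split on
        double threshold;  // split value chosen during training
        Node* left;        // taken when features[attribute] <  threshold
        Node* right;       // taken when features[attribute] >= threshold
    };

    // Walk the trained tree with one pixel's feature vector and return
    // the label of the leaf it falls into.
    int classify(const Node* node, const std::vector<double>& features) {
        while (!node->isLeaf)
            node = (features[node->attribute] < node->threshold)
                       ? node->left : node->right;
        return node->label;
    }

Running classify once per pixel gives a binary smurf mask; a small morphological opening on that mask usually cleans up isolated misclassified pixels.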
