How to use block processing for an image? - image-processing

I am kind of a newbie in MATLAB. I am trying to write code which divides an image into non-overlapping blocks of size 3x3, and then performs an operation on each specific block, like getting the value of the block's center pixel and doing some computation with it. But I don't know where to start. Using a command like blockproc won't help. Can anyone suggest where to start?

You could easily use blockproc for this:
http://www.mathworks.com/help/toolbox/images/ref/blockproc.html
But if that isn't working for you, what errors do you get?
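For reference, a minimal blockproc sketch that extracts the center pixel of each 3x3 block could look like this (a sketch only; it assumes the Image Processing Toolbox, and the handle name is arbitrary):
img = imread('image.png');
getCenter = @(block_struct) block_struct.data(2, 2);  % center pixel of one 3x3 block
centers = blockproc(img, [3 3], getCenter);           % assembles one value per block
% if the image dimensions are not multiples of 3, also pass
% 'PadPartialBlocks', true so every block really is 3x3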
If you want to do it manually (like extracting the value of the center pixel of each block), you could simply use two loops. But be aware that this is a rather inelegant and not particularly fast way to do it:
img = imread('image.png');  % use 'img', since 'image' would shadow the built-in function
s = size(img);
for i = 2:3:s(1)-1      % row of each 3x3 block's center
    for j = 2:3:s(2)-1  % column of each 3x3 block's center
        %% here you have the midpoint (i,j) of each 3x3 block...
        %% you could then easily crop the image around it, e.g.
        %% img(i-1:i+1, j-1:j+1), if you really need separated blocks...
    end
end
This isn't a really fast way though, but it works. Hope that helps!

Related

Create/Apply grunge-vintage-worn-old-scratchy filters in iOS

Does anybody know how to create/apply grunge or vintage-worn filters? I'm creating an iOS app to apply filters to photos, just for fun and to learn more about CIImage. Right now, I'm using Core Image to apply CIGaussianBlur, CIGloom, and the like through calls such as ciFilter.setValue(value, forKey:key).
So far, Core Image filters such as blur, color adjustment, sharpen, and stylize work OK. But I'd like to learn how to apply one of those grunge, vintage-worn effects available in other photo editing apps.
Does anybody know how to create/apply that kind of filter?
Thanks!!!
You have two options.
(1) Use "canned" filters in a chain. If the output of one filter is the input of the next, code things that way. It won't waste any resources until you actually call for output.
(2) Write your own kernel code. It can be a color kernel that mutates a single pixel independently, a warp kernel that checks the values of a pixel and its surrounding ones to generate the output pixel, or a general kernel that isn't optimized like the last two. Either way, you can pretty much use GLSL for the code (it's essentially C for the GPU).
Okay, there's a third option: a combination of the two options above. Also, in iOS 11 and above, you can write kernels using Metal 2.
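As a rough Swift sketch of option (1) (the function name and parameter values are mine; CIGaussianBlur and CISepiaTone are standard built-in filters):
import CoreImage

// Chain two built-in filters: the blur's output feeds the sepia filter's
// input. Nothing is rendered until the returned image is actually drawn.
func vintageLook(_ input: CIImage) -> CIImage? {
    let blur = CIFilter(name: "CIGaussianBlur")!
    blur.setValue(input, forKey: kCIInputImageKey)
    blur.setValue(1.5, forKey: kCIInputRadiusKey)

    let sepia = CIFilter(name: "CISepiaTone")!
    sepia.setValue(blur.outputImage, forKey: kCIInputImageKey)
    sepia.setValue(0.8, forKey: kCIInputIntensityKey)
    return sepia.outputImage
}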

What should I do for multiple histograms?

I'm working with OpenCV and I'm a newbie in this field. I'm researching Camshift. I want to extend this method by using multiple histograms. That is, when a tracked object has more than one appearance (e.g. a Rubik's cube with six faces), Camshift will most likely fail if we use only one histogram.
I know the calcHist function in OpenCV (http://docs.opencv.org/modules/imgproc/doc/histograms.html#calchist) has an "accumulate" parameter, but I don't know how and when to use it (applied to camshiftdemo.cpp in the OpenCV samples folder). Can this function help me solve this problem, or do I have to use a different solution?
I have an idea: create an array of histograms for the object, and for every appearance that strongly varies in color, pre-compute a histogram and store it in this array. But when should we compute a new histogram? That is, what should the precondition for computing a new histogram be?
And what happens if I have to track multiple objects with the same color?
Please help me. Thank you so much!

Can someone explain the parameters of OpenCV Stitcher?

I'm trying to reduce the calculation time of my stitching algorithm. I have some images which I want to stitch in a defined order, but it seems like the cv::stitcher.stitch() function tries to stitch every image with every other image.
I feel like I might find the solution in the parameters of the OpenCV Stitcher. If not, maybe I have to modify the function or try something else to reduce calculation time. But since I'm pretty much a beginner, I don't know how. I know that using the GPU might be a possibility, but I just can't get CUDA running on Ubuntu at the moment.
It would be great if you could give me some advice!
Parameters for OpenCV Stitcher module:
Stitcher Stitcher::createDefault(bool try_use_gpu) {
    Stitcher stitcher;
    stitcher.setRegistrationResol(0.6);
    stitcher.setSeamEstimationResol(0.1);
    stitcher.setCompositingResol(ORIG_RESOL);
    stitcher.setPanoConfidenceThresh(1);
    stitcher.setWaveCorrection(true);
    stitcher.setWaveCorrectKind(detail::WAVE_CORRECT_HORIZ);
    stitcher.setFeaturesMatcher(new detail::BestOf2NearestMatcher(try_use_gpu));
    stitcher.setBundleAdjuster(new detail::BundleAdjusterRay());
    ...
}
from stitcher.cpp:
https://code.ros.org/trac/opencv/browser/trunk/opencv/modules/stitching/src/stitcher.cpp?rev=7244
I want to stitch in a defined order but it seems like cv::stitcher.stitch() function tries to stitch every image with every other image.
cv::stitcher does not have a parameter to fulfil your requirement.
However, in the stitching_detailed.cpp sample you have the --rangewidth parameter. By setting it to 1, the algorithm will only consider adjacent image pairs (e.g. matches would be computed for pair 1-2 but not for pair 1-3).
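For example, invoking the sample with that flag (the image names are placeholders):
./stitching_detailed img1.jpg img2.jpg img3.jpg img4.jpg --rangewidth 1
Restricting matching to neighbouring images in the given input order also cuts down the overall calculation time, which is what you were after.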

Improve code with NEON on iOS - use VCEQ then VBIT

I am writing a histogram-like function which looks at vector data and then puts the elements into predefined "histogram" buckets based on which range they are closest to.
I could obviously do this using if conditions, but I am trying to improve it using NEON because these are image buffers.
One way to do this would be with VCEQ then VBIT, but sadly I could not find VBIT in the NEON header. Alternatively, I figured I could take the VCEQ results, do an exclusive OR with a vector of ones, and then use VBIF :-) but VBIF is not there either!
Any thoughts?
Thanks
VBIT, VBIF, and VBSL all do the same operation up to permutation of the sources; you can use the vbsl* intrinsics to get any of the three operations.
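A minimal sketch in C of the VCEQ + select idea (the function and variable names are mine; vceqq_u8 and vbslq_u8 are the standard intrinsics from arm_neon.h):
#include <arm_neon.h>

/* Compare 16 pixels against a bucket value with VCEQ, then use VBSL
   (bitwise select) to choose between two candidate results per lane.
   Since VBIT/VBIF/VBSL differ only in operand order, vbslq_u8 covers all. */
static uint8x16_t select_bucket(uint8x16_t pixels, uint8x16_t bucket_value,
                                uint8x16_t if_equal, uint8x16_t otherwise)
{
    uint8x16_t mask = vceqq_u8(pixels, bucket_value); /* 0xFF where equal */
    return vbslq_u8(mask, if_equal, otherwise);       /* mask ? a : b, per bit */
}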

HTML5 canvas. Get points from path. beginPath / ClosePath

Is there a way to get the points of a path created with beginPath-closePath, as XY coordinates?
Something like Context.getPath -> Array of(x,y).
As the path is implemented in native code, it would be way faster than using a bezier function written in JavaScript.
There is no trivial way to do what you are suggesting. You'll likely need to come up with a solution of your own. If you can describe your problem better, we'll probably be able to provide something more concrete.
In the worst case you'll need to implement the bezier algorithm yourself and accumulate the data you need. Alternatively, you could render using the native method and then redo the math without rendering to get the data you are after (that might even be faster, not sure).
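That worst case might look like this in JavaScript (the function name is mine; it just evaluates the standard cubic bezier formula at fixed steps to collect the (x, y) points the canvas API never exposes):
function sampleCubicBezier(p0, p1, p2, p3, steps) {
    var points = [];
    for (var i = 0; i <= steps; i++) {
        var t = i / steps, u = 1 - t;
        // cubic bezier: B(t) = u^3*p0 + 3u^2t*p1 + 3ut^2*p2 + t^3*p3
        points.push({
            x: u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x,
            y: u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y
        });
    }
    return points;
}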
