Is there an INTER_AREA texture minFilter in three.js, like in OpenCV? - image-processing

I have a three.js canvas to which I uploaded a 2730x4096 image. I provide options to download the canvas contents at different resolutions. When I download a lower-resolution image, say 960x1440, I get a very blurry image with jagged edges.
I tried to increase the sharpness by setting anisotropy to the maximum (16 in my case) and also tried map.minFilter = THREE.LinearFilter, which sharpened the image further, but the edges are still jagged.
I tried to run it through an FXAA anti-aliasing composer, but the anti-aliasing is not great. Maybe I'm not giving it the correct parameters.
All the while, antialiasing on the renderer is active (renderer.antialias = true).
When I do the same in OpenCV, using cv2.INTER_AREA interpolation to downsize the 2730x4096 image, I get very sharp images with absolutely no jagged edges.
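For reference, the OpenCV side of the comparison is roughly this (a minimal sketch; the file names are placeholders):
import cv2

# Downscale with area averaging, as described above; file names are placeholders.
img = cv2.imread("original.png")                                     # 2730x4096 source
small = cv2.resize(img, (960, 1440), interpolation=cv2.INTER_AREA)   # dsize is (width, height)
cv2.imwrite("downsized.png", small)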
So I was wondering whether implementing INTER_AREA interpolation for the minFilter, instead of THREE.LinearFilter, might yield better results. Is there something in three.js that I'm not utilizing already, or, if I have to implement this interpolation method myself, how would I go about it?
Illustration:
Attached are two files: one downloaded directly from the three.js canvas at 960x1440 (bottom), and one downsized from 2730x4096 to 960x1440 using OpenCV (top). In the OpenCV image, the details are sharper and the edges are cleaner than in the three.js image. I'm starting to believe this is because of OpenCV's INTER_AREA interpolation during downsizing. Is that replicable in three.js?
The original high resolution image can be downloaded from here

Not sure what your setup is, but by setting minFilter to THREE.LinearFilter, you're no longer using mipmapping, which is very effective at downsampling large textures. The default is THREE.LinearMipMapLinearFilter, but you have several other options, as outlined in the texture constants docs page. I've found that using anything but the default gives you crunchy textures.
Also, make sure you're requesting antialiasing when you create the renderer (antialias: true in the WebGLRenderer constructor) if you need it. Not sure if you're already doing this step, given the limited scope of the code in your question.

Related

Edge Detection in a particular frame of entire image

I am using GPUImageSobelEdgeDetectionFilter from project GPUImage for edge detection.
My requirement is to detect edges in an image, but only within a centre frame of 200 x 200; the rest of the image should not be touched.
There is no direct API in the framework to provide a CGRect with the edge-detection coordinates. I do have an alternative approach: crop the original image, pass the crop through edge detection, and finally superimpose the result on the original. But this sounds like a hack to me.
Any idea if there is a direct way to do it?
The only way to do that is as you suggest: do a crop and work with the cropped image.
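For reference, the crop / filter / composite idea looks roughly like this in OpenCV (a sketch only, with cv2.Sobel standing in for GPUImageSobelEdgeDetectionFilter; file names are placeholders):
import cv2

img = cv2.imread("photo.png")                        # placeholder input image
h, w = img.shape[:2]
x0, y0 = (w - 200) // 2, (h - 200) // 2              # top-left of the 200x200 centre frame
roi = img[y0:y0 + 200, x0:x0 + 200]

gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)               # horizontal gradient
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)               # vertical gradient
edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))   # 8-bit edge magnitude

out = img.copy()                                     # superimpose the filtered crop
out[y0:y0 + 200, x0:x0 + 200] = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
cv2.imwrite("edges_in_centre.png", out)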
If you're willing to switch over to the newer GPUImage 2, this is one of the core new features in that version of the framework. Filters can be partially applied to any region of an image, leaving the remainder of the image untouched. This includes the Sobel edge detection, and the masking of the image can be done using arbitrary shapes:
To partially apply a Sobel edge detection filter, you'd set up the filter chain as normal, then set a mask on the filter. In the example below, the mask is a circle, generated to match a 480x640 image:
let circleGenerator = CircleGenerator(size:Size(width:480, height:640))
edgeDetectionFilter.mask = circleGenerator
circleGenerator.renderCircleOfRadius(0.25, center:Position.center, circleColor:Color.white, backgroundColor:Color.transparent)
The area within the circle will have the filter applied, and the area outside will simply pass through the previous pixel colors.
This uses a stencil mask to perform this partial rendering, so it doesn't slow rendering by much. Unfortunately, I've pretty much ceased my work on the Objective-C version of GPUImage, so this won't be getting backported to that older version of the framework.

How to stitch images with differing orientations

I have a set of images collected from a drone that I now want to stitch together. The approach I started with is to rotate all the images to the proper orientation and then stitch those together. However, since the rotated images are no longer rectangular, I have large empty areas with no image data. As expected, these dark areas have caused poor stitching results.
I have seen the API for stitch:
Status stitch (InputArrayOfArrays images, const std::vector< std::vector< Rect > > &rois, OutputArray pano)
I like that there is an ROI parameter; however, the data type is a Rect rather than something like a RotatedRect or a mask. So this looks like it won't work either.
The only other approach I can think of is to further crop the image to remove the no-data areas of the image (which would require more images to make up for the lost data).
I am not an expert in OpenCV nor in image stitching, so I'm looking for some awesome ideas. Is there a better way to approach this?

Blending artifacts in OpenCV image stitching

I am using OpenCV to blend a set of pre-warped images. As input I have some 4-channel images (*.png or *.tif) from which I can extract a BGR image and an alpha mask marking the region belonging to the image (white) and the background (black). Both the image and the mask are the inputs of the blender module cv::detail::Blender::blend.
When I use feather (alpha) blending the result is OK; however, I would like to avoid ghosting effects. When I use multi-band blending, some artifacts appear on the edges of the images:
The problem is similar to the one raised here, and solved here. The thing is, if the solution is creating a binary mask (which I already extract from the alpha channel), it does not work for me. If I add padding to the overlap between both images, it takes pixels from the background and messes up the result even more.
My guess is that it has to do with the pyrUp and pyrDown functions: the blurring used to build the Gaussian and Laplacian pyramids may be applied to the whole image, and not only to the positive-alpha region. In any case, I don't know how to fix the problem using these functions, and I cannot find another efficient solution.
When I use another implementation of multiresolution blending, it works perfectly; however, I am very interested in integrating the multi-band implementation of OpenCV. Any idea how to fix this issue?
The issue has already been reported and solved here:
http://answers.opencv.org/question/89028/blending-artifacts-in-opencv-image-stitching/
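For context, the multi-band setup being discussed corresponds roughly to the sketch below, using OpenCV's Python bindings; warped_imgs, masks and corners are placeholders for the pre-warped images, their binary masks and their top-left corners:
import cv2
import numpy as np

def blend_multiband(warped_imgs, masks, corners, num_bands=5):
    # warped_imgs: pre-warped BGR images; masks: 8-bit binary masks (255 = keep);
    # corners: (x, y) top-left position of each image in the panorama plane.
    sizes = [(img.shape[1], img.shape[0]) for img in warped_imgs]
    dst_roi = cv2.detail.resultRoi(corners=corners, sizes=sizes)

    blender = cv2.detail_MultiBandBlender()
    blender.setNumBands(num_bands)          # number of pyramid levels
    blender.prepare(dst_roi)
    for img, mask, corner in zip(warped_imgs, masks, corners):
        blender.feed(img.astype(np.int16), mask, corner)   # feed expects CV_16S pixels
    result, result_mask = blender.blend(None, None)
    return cv2.convertScaleAbs(result), result_mask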

UIImage nonlinear resizing

I have a unicolor image and I need to resize some parts of it at different scales. The desired result is shown in the image.
I've looked at applying a grid mesh in OpenGL ES, but I could not find any sample code or a more detailed tutorial.
I've also looked at imgwrap, but as far as I can see that library requires the Qt framework. Any ideas, sample code or links for further reading will be appreciated, thanks.
The problem you are facing is called "image warping" in computer graphics. First you have to define some control points in the original image and corresponding points in a sample destination image. Then you have to calculate a dense displacement field (in this application also called a warping grid) and simply apply this field to the original image.
More practically: your best bet on iOS will be to create a 2D grid of vertices in OpenGL. Map your original image as a texture over this grid and deform the grid by displacing some of its points. Then you simply take a screenshot of the resulting image with glReadPixels.
I do not know any CIFilter that would implement displacement field mapping of this kind.
UPDATE: I also found example code that uses 8 control points to morph images with OpenCV. http://engineeering.blogspot.it/2008/07/image-morphing-with-opencv.html
OpenCV has working ports to iOS, so you could simply experiment with the code in the link above on a target device as well.
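If you want to experiment with the displacement-field idea before writing any OpenGL, OpenCV's remap applies exactly such a field on the CPU. A minimal sketch (the displacement used here is an arbitrary placeholder; this is where a field derived from your control points would go):
import cv2
import numpy as np

img = cv2.imread("source.png")                    # placeholder file name
h, w = img.shape[:2]

# map_x/map_y say, for every destination pixel, where to sample in the source image.
map_x, map_y = np.meshgrid(np.arange(w, dtype=np.float32),
                           np.arange(h, dtype=np.float32))

# Arbitrary example displacement; replace with the field derived from your control points.
map_y += 10.0 * np.sin(map_x / 30.0)

warped = cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("warped.png", warped)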
I am not sure, but I suggest that if you want to do this type of work, you crop the relevant part of the image, apply your resize functionality to that cropped part, and then put it back in its original position. This is just my opinion; I'm not sure whether it applies to your case.
Here is a link to a question that might be helpful in your case; please read it:
How to scale only specific parts of image in iPhone app?
A few ideas:
Copy parts of the UIImage into different CGContexts using CGBitmapContextCreateImage(), move the parts around, scale them individually, and put them back together.
Use CIFilter effects on parts of your image, masking the parts you want to scale. (Core Image Programming Guide)
I suggest you check out Brad Larson's GPUImage project on GitHub. Under Visual effects you will find filters such as GPUImageBulgeDistortionFilter, which you should be able to adapt to your needs.
You might want to try this example using thin plate splines and OpenCV. This, in my opinion, is the easiest-to-try solution that is online.
You'll probably want to look at OpenGL shaders. What I'd do is load the image as a texture, apply a fragment shader (a small program that lets you distort the image), render the result back to a texture, and either display that texture or save it as a bitmap.
It's not going to be simple, but there is some sample code out there for other distortions. Here's a swirl in shaders:
http://www.geeks3d.com/20110428/shader-library-swirl-post-processing-filter-in-glsl/
I don't think there is an easier way to do this without involving OpenGL, and you probably wouldn't find great performance in doing this outside of the GPU either.

Using OpenCV to correct stereo images

I intend to make a program which will take stereo pair images, taken by a single camera, and then correct and crop them so that when the images are viewed side by side with the parallel or cross-eyed method, the best 3D effect is achieved. The left image will be the reference image; the right image will be modified for corrections. I believe OpenCV will be the best software for these purposes. So far I believe the processing will occur something like this:
Correct for rotation between images.
Correct for y axis shift.
Doing so will, I imagine, result in irregular black borders above and below the right image, so:
Crop both images to the same height to remove borders.
Compute stereo-correspondence/disparity
Compute optimal disparity
Correct images for optimal disparity
Okay, so that's my take on what needs doing and in what order. What I'm asking is: does that seem right? Is there anything I've missed, or anything in the wrong order? Also, which specific OpenCV functions would I need for all the necessary steps of this project? Or is OpenCV not the way to go? Many thanks.
OpenCV is great for this.
There is a whole chapter on this in the OpenCV book, and all the sample code for it ships with the OpenCV distribution.
edit: Roughly the steps are:
Remap each image to remove lens distortions and rotate/translate views to image center.
Crop pixels that don't appear in both views (optional)
Find matching features in each view (stereo block matching) and create a disparity map.
Reproject the disparity map into a 3D model.
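In OpenCV's Python API those steps map roughly onto the sketch below; it assumes the calibration data K1, D1, K2, D2, R, T is already known (e.g. from cv2.stereoCalibrate), and all variable names are placeholders:
import cv2

def stereo_pipeline(left, right, K1, D1, K2, D2, R, T):
    # left/right: grayscale images; K*, D*, R, T: intrinsics, distortion coefficients
    # and the rotation/translation between the two views, assumed already calibrated.
    size = (left.shape[1], left.shape[0])

    # 1. Rectify: undistort and rotate/translate both views so that matching points
    #    end up on the same scan line.
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    m1l, m2l = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    m1r, m2r = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    left_r = cv2.remap(left, m1l, m2l, cv2.INTER_LINEAR)
    right_r = cv2.remap(right, m1r, m2r, cv2.INTER_LINEAR)

    # 2. (Optionally crop both images to the valid regions roi1/roi2 here.)

    # 3. Stereo block matching -> disparity map (returned as fixed point, scaled by 16).
    matcher = cv2.StereoBM_create(numDisparities=96, blockSize=15)
    disparity = matcher.compute(left_r, right_r).astype('float32') / 16.0

    # 4. Reproject the disparity map into a 3D point cloud.
    return cv2.reprojectImageTo3D(disparity, Q)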
