I am stuck on reading, processing and displaying sample.png, an image that
contains RGB channels plus an additional alpha layer.
I manually removed the background in this image, so only the foreground appears
in the Windows image slideshow program. I couldn't find any useful information
anywhere. When I read the file in OpenCV using imread or cvLoadImage, a white
background gets created. I have read the highgui documentation, which states
that these functions only deal with RGB, not RGBA. Any help or idea would be
appreciated.
Thanks
Saleh
AFAIK the only current solution is to load the alpha channel as a separate image and then join the two. You can use cvtColor() to add an alpha channel to the Mat holding the image, and e.g. mixChannels() to mix in the separately loaded alpha image.
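A minimal sketch of that approach (file names are placeholders; both images are assumed to be the same size):

#include <opencv2/opencv.hpp>

// Load the color data and the alpha mask as two separate images.
cv::Mat bgr   = cv::imread("sample_rgb.png");                         // 3-channel BGR
cv::Mat alpha = cv::imread("sample_alpha.png", cv::IMREAD_GRAYSCALE); // 1-channel mask

// Add a 4th channel to the color image, then copy the mask into it.
cv::Mat bgra;
cv::cvtColor(bgr, bgra, cv::COLOR_BGR2BGRA);

int fromTo[] = { 0, 3 };                  // source channel 0 -> destination channel 3
cv::mixChannels(&alpha, 1, &bgra, 1, fromTo, 1);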
You can use cv::imread() with the IMREAD_UNCHANGED flag to read the data into a cv::Mat. If you still need an IplImage to work with, it is possible to convert from cv::Mat to IplImage without losing the alpha channel.
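For example (a short sketch; the IplImage conversion shown is the OpenCV 2.x implicit one):

#include <opencv2/opencv.hpp>

// Read the PNG with the alpha channel intact.
cv::Mat rgba = cv::imread("sample.png", cv::IMREAD_UNCHANGED);
CV_Assert(rgba.channels() == 4);   // BGRA channel order in OpenCV

// Legacy wrapper: shares the data, valid while 'rgba' is alive (OpenCV 2.x).
IplImage ipl = rgba;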
Related
I am using OpenCV to blend a set of pre-warped images. As input I have some 4-channel images (*.png or *.tif) from which I can extract a BGR image and an alpha mask marking the region belonging to the image (white) and the background (black). Both the image and the mask are inputs to the Blender module cv::detail::Blender::blend.
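For reference, I extract them roughly like this (simplified sketch; the file name is a placeholder):

#include <opencv2/opencv.hpp>

cv::Mat rgba = cv::imread("warped.png", cv::IMREAD_UNCHANGED);   // 4 channels

std::vector<cv::Mat> ch;
cv::split(rgba, ch);                       // ch[0..2] = B,G,R; ch[3] = alpha

cv::Mat bgr;
cv::merge(std::vector<cv::Mat>(ch.begin(), ch.begin() + 3), bgr);

cv::Mat mask;
cv::threshold(ch[3], mask, 0, 255, cv::THRESH_BINARY);   // white = image region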
When I use feather (alpha) blending the result is fine; however, I would like to avoid ghosting effects. When I use multi-band blending, artifacts appear on the edges of the images:
The problem is similar to the one raised here, and solved here. The thing is, if the solution is to create a binary mask (which I already extract from the alpha channel), it does not work for me. If I add padding to the overlap between the two images, the blender takes pixels from the background and messes up the result even more.
My guess is that it has to do with the functions pyrUp and pyrDown, because the blurring used to build the Gaussian and Laplacian pyramids may be applied to the whole image rather than only to the positive-alpha region. In any case, I don't know how to fix the problem using these functions, and I cannot find another efficient solution.
When I use another implementation of multiresolution blending it works perfectly; however, I am very interested in integrating the multi-band implementation of OpenCV. Any idea how to fix this issue?
The issue has already been reported and solved here:
http://answers.opencv.org/question/89028/blending-artifacts-in-opencv-image-stitching/
I have 3 CIImage objects that are gray 8-bpp images, meant to be the 8-bit R, G, and B channels of a new image. Aside from low-level image pixel data operations, is there a way to construct the combined CIImage (from filters or some other easier way)?
I realize I can do this by looping through the pixels of a new RGB image and setting them from the gray channels I have -- I was wondering if there is a more idiomatic way to work with channels.
For example, in Pillow for Python, it's Image.merge([rChannel, gChannel, bChannel]) -- I know how to code the pixel-access way if there is no built-in way.
The book, Core Image for Swift, covers how to do this and provides the code to do it here:
https://github.com/FlexMonkey/Filterpedia/blob/master/Filterpedia/customFilters/RGBChannelCompositing.swift
The basic idea is that you need to provide a color kernel function in GPU shader language and wrap it in a CIFilter subclass.
NOTE: The code is not copied here because it's under GPL, which is an incompatible license with StackOverflow answers. You can follow the link if you want to see how it's done, and use it if it's compatible with your license.
Hi, I am a complete novice in image processing, especially with OpenCV. I want to write a blob detection program that takes an image as input and returns the color and centroid of each blob. My image consists purely of regular polygons on a black background. For example, it might contain a green (equilateral) triangle or a red square on a black background. I want to use the SimpleBlobDetector class in OpenCV and its detect function for this purpose. Since I'm a novice, a full program would be a big help.
I suggest you use the complementary OpenCV library cvblob. It includes an example that automatically obtains the blobs in an image along with their centroids, contours, etc.
Here is the source code; I tried it on OS X and it works really well.
Link: https://code.google.com/p/cvblob/
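If you prefer to stay with the built-in SimpleBlobDetector the question mentions, a rough starting point could look like this (OpenCV 2.4-era API; file name and parameter values are illustrative and will need tuning):

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat color = cv::imread("shapes.png");
    cv::Mat gray;
    cv::cvtColor(color, gray, cv::COLOR_BGR2GRAY);

    cv::SimpleBlobDetector::Params params;
    params.filterByColor = true;
    params.blobColor = 255;               // bright blobs on the black background
    params.filterByArea = true;
    params.minArea = 100;

    cv::SimpleBlobDetector detector(params);   // 2.4-style construction
    std::vector<cv::KeyPoint> keypoints;
    detector.detect(gray, keypoints);

    // Each keypoint's pt is a blob centroid; sample the original image there.
    for (size_t i = 0; i < keypoints.size(); ++i)
    {
        cv::Point c = keypoints[i].pt;
        cv::Vec3b px = color.at<cv::Vec3b>(c);
        std::cout << "centroid (" << c.x << ", " << c.y << ")  BGR "
                  << (int)px[0] << "," << (int)px[1] << "," << (int)px[2]
                  << std::endl;
    }
    return 0;
}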
Using OpenCV 2.4.2 C/C++
I am trying to use the copyTo function to add a binary CV_8UC1 image to an RGB CV_8UC3 image. However, the program crashes whenever I do this. I'm assuming the difference in the number of channels prevents the copy. Is there some type of conversion that would allow me to use copyTo? I'm placing a camera feed and its thresholded image side by side.
I'm using src.copyTo(dst(Rect(x,y,w,h))); as the copying code, and inRange(src,Scalar(#,#,#),Scalar(#,#,#),dst) as the thresholding operation.
I've tried to use the convertTo function but am not having much luck with it. Can anyone give some advice?
Thanks
You should use the cv::cvtColor function, which can convert from one color space to another, including expanding a single-channel grayscale image to three channels (COLOR_GRAY2BGR). Look here for details.
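A minimal sketch of that fix applied to the side-by-side layout (function and variable names are made up for illustration; both inputs are assumed to have the same size):

#include <opencv2/opencv.hpp>

// Place a CV_8UC3 frame and its CV_8UC1 threshold result side by side.
cv::Mat sideBySide(const cv::Mat& bgr, const cv::Mat& binary)
{
    cv::Mat binary3;
    cv::cvtColor(binary, binary3, cv::COLOR_GRAY2BGR);   // CV_8UC1 -> CV_8UC3

    cv::Mat canvas(bgr.rows, bgr.cols * 2, CV_8UC3);
    bgr.copyTo(canvas(cv::Rect(0, 0, bgr.cols, bgr.rows)));
    binary3.copyTo(canvas(cv::Rect(bgr.cols, 0, binary3.cols, binary3.rows)));
    return canvas;                                       // types now match
}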
I have a program that loads an image from the hard disk. The program is written using Emgu CV, and the image is a Bgr image. I want to allow the user to increase/decrease the brightness/contrast of the image. How can I do this? Some sample code would be appreciated (because I am still a newbie). Thanks.
It depends on your image adjustment requirements.
You can start with some basic techniques already wrapped in Emgu CV, such as histogram equalization and gamma correction. You can also combine them to achieve a better result. For example:
// Load the image from disk (path is just an example), then adjust in place.
Image<Bgr, byte> inputImage = new Image<Bgr, byte>("photo.jpg");
inputImage._EqualizeHist();      // histogram equalization (contrast)
inputImage._GammaCorrect(1.8d);  // gamma correction with gamma = 1.8