I need to do the following:
1. Extract the R, G, and B planes from a color image.
2. After doing all the calculations, superimpose the planes and get the color image back.
Thank you so much for your time.
I'm using an app for face redaction that doesn't allow access to the source code, but only lets me pass pixel values for the red, green and blue channels. It then creates a matrix with those same RGB values for every pixel of the ROI. For example, if I give Red=32, Blue=123 and Green=233, it assigns these RGB values to every pixel of the ROI, which draws a colored patch on the face.
So I was wondering: is there a general combination of RGB values for a pixel that distorts it and makes it look blurred? I can also set the opacity value in the app.
Thanks.
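Since the app paints the whole ROI with a single colour anyway, one option (an assumption on my part, not something the app documents) is to pass it the ROI's own per-channel averages; a flat patch of the region's mean colour tends to look closer to a heavy blur than an arbitrary colour would. A NumPy sketch, where roi is a hypothetical stand-in for the face region's pixels:

```python
import numpy as np

# Hypothetical ROI: the face region's pixels, shape (height, width, 3), R/G/B order.
rng = np.random.default_rng(0)
roi = rng.integers(0, 256, size=(50, 40, 3), dtype=np.uint8)

# Per-channel averages to pass to the app as the Red, Green, Blue values.
red, green, blue = (int(round(v)) for v in roi.reshape(-1, 3).mean(axis=0))
```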
Given an image with, say, just two coloured points in it: is it possible to crop the image from the coordinates of the first coloured point to the coordinates of the second coloured point?
A sample image where I have to crop between two green points.
This is possible if the colored points have a distinct range of color when compared with the rest of the image.
Algorithm:
1. Convert the image to HSV color space
2. Scan the image, looking for pixels in the hue and saturation range of the points' color(s).
3. Record the minimum and maximum X,Y coordinates of the matching pixels.
4. Calculate the bounding box of the region from those coordinates.
5. Crop the image using the bounding box.
You can try to follow these steps and edit the question with code if/when you come up with errors. Uploading a sample image somewhere and linking to it will help us provide better answers.
I want to do the following within my iOS app:
1. The user can draw something on white background paper.
2. My app lets the user capture the drawn image. The image is captured together with the white background.
3. Finally, from the captured image I need to mask out the white background color and get the drawing alone into a UIImage object.
I have completed steps 1 and 2, but I have no idea how to do the last step. Is there anything in the OpenCV library that I can use in my iOS app?
Any help would be really appreciated.
Well, since OpenCV itself is THE library, I guess that you are looking for a way to do that with OpenCV.
First, convert the input image to a Mat, which is the data type OpenCV uses to represent an image.
Then, assuming the background is white, threshold the Mat to separate the background from whatever the user drew. This operation makes the background black, and every pixel that is not black represents something the user has drawn.
Finally, convert the resulting Mat to a UIImage: iterate over the Mat and copy every pixel that is not black into the UIImage, so that the UIImage contains only what the user drew.
A better idea is to iterate over the thresholded Mat, figure out which pixels are not black, and instead of copying them directly to the new UIImage, copy those (x,y) pixels from the original UIImage. That way you end up with colored pixels, which gives a more realistic result.
I am working on a project where I read an image and load the data into the Mat data type. Then I do some operations on it.
All my operations assume the color space is RGB (stored as BGR, since that is how OpenCV stores it), and everything works fine. I then experimented with converting the output image to the YUV format. But when I transform the output image from BGR to YUV using the following command, the resulting image's colors change completely:
cvtColor(img,out,CV_RGB2YCrCb);
For example, my output RGB image is green. When I convert it to YUV format and show the resulting image, it appears blue, NOT green.
I want a way to convert it so that the output remains green.
How can I change the color space from RGB to YUV without changing the colors in the image?
The colors of the image have not changed, just the coding.
If you convert the color to YUV, then use imshow(), it assumes the color is still RGB so it displays it incorrectly.
If you have a YUV image and you want to display it, you first have to convert it back to RGB.
When you ask "How can I change the color space from RGB to YUV without changing the colors in the image?" you are essentially saying "how can I do color conversion without doing color conversion?" which of course is impossible.
I have an RGB image and a binary mask (1 channel), and I want to create contours for the RGB image based on the connected pixels of the binary mask. After that, I want to compare pixel values (e.g. check whether each pixel within the contours has a blue value > 150). How can I implement this using OpenCV?
Thanks a lot!
Assuming the images are the same size and shape, simply scan over the pixels in the binary image looking for the contours, and check the pixel values at the same row/col in the color image.
See "Fastest way to extract individual pixel data?" for details.