I've tried -noise radius and -noise geometry and they don't seem to do what I want at all. I have some b&w images (TIFF G4 Fax compression) with lots of noise around the characters. This noise takes the form of pixel blobs that are 1 pixel wide in most cases.
My desire is to do the following 3 steps (in this order):
1. Whiteout all black pixels that are 1 pixel wide (white pixels to the left and right)
2. Whiteout all black pixels that are 1 pixel tall (white pixels above and below)
3. Whiteout all black pixels that are 1 pixel wide (white pixels to the left and right)
Do I have to write code to do this, or can ImageMagick pull it off? If it can, how do you specify the geometry to do it?
Lacking a lot of good answers here, I put this one to the ImageMagick forum and their response was really good. You can read it here: ImageMagick Forum.
Morphology proved to be the best answer.
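A minimal sketch of the morphology approach (this is not the exact command from the forum thread, and the kernel shape and size will need experimenting). ImageMagick's -morphology operator treats white as the foreground, so a Close removes small dark specks on a white background:
convert input.tif -morphology Close Diamond:1 output.tif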
Blur then sharpen would be the normal technique for speckle noise.
ImageMagick can do both of these - you might have to play with the amount of blurring.
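For example (a rough starting point, not tuned for your images; the -threshold at the end is my addition to re-binarise the result, since blurring introduces grey levels):
convert input.tif -blur 0x1 -sharpen 0x1 -threshold 50% output.tif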
My aim is to draw a set of textures (128x128 pixels) as (gap-less) tiles without filtering artifacts in XNA.
Currently, I use for example 25 x 15 fully opaque tiles (alpha is always 255) in x-y to create a background image in a game, or a similar number of semi-transparent tiles to create the game "terrain" (foreground). In both cases, the tiles are scaled and drawn using floating-point positions. As it is known, to avoid filtering artifacts (like small but visible gaps, or unwanted color overlaps at the tile borders) one has to do "edge padding" which is described as adding an additional fringe of a width of one pixel and using the color of adjacent pixels for the added pixels. Discussions about this issue can be found for example here. An example image of this issue from our game can be found below.
However, I do not really understand how to do this - technically, and specifically in XNA.
(1) When adding a fringe of one pixel width, my tiles would then be 129 x 129 and the overlapping fringes would create quite visible artifacts of their own.
(2) Alternatively, one could add the padding pixels but then not draw the full 129x129 pixel texture but only its "center" (without the fringe), e.g. by choosing the source rectangle of this texture to be (1,1,128,128). But are the padding pixels then simply ignored, or does the filtering hardware really use this information?
So basically, I wonder how this is done properly? :-)
Example image of filtering issue from game: Unwanted vertical gap in brown foreground tiles.
I have an image with a gentle gradient background and sharp foreground features that I want to detect. I wish to find green arcs in the image. However, the green arcs are only green relative to the background (they are semitransparent). The green arcs are also only one or two pixels wide.
As a result, for each pixel of the image, I want to subtract the average color of the surrounding pixels. The surrounding 5x5 or 10x10 pixels would be sufficient. This should leave the foreground features relatively untouched but remove the soft background, and I can then select the green channel for further processing.
Is there any way to do this efficiently? It doesn't have to be particularly accurate, but I want to run it at 30Hz on a 1080p image.
You can achieve this by doing a simple box blur on the image and then subtracting that from the original image:
Mat blurred, diff;                  // img is the input frame (cv::Mat)
blur(img, blurred, Size(15,15));    // box blur approximates the local average colour
diff = img - blurred;               // sharp features remain, the smooth background cancels out
I tried this on a 1920x1080 image and the blur took 25ms and the subtract took 15ms on a decent spec iMac.
If your background is not changing fast, you could calculate the blur over the space of a few frames in a second thread and keep re-using it until you recalculate it a few frames later. Then you would only have the 15 ms subtraction to do for each of your 30 fps frames, rather than 40 ms of processing.
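A rough sketch of that caching idea (without the second thread, for brevity; grabNextFrame is a hypothetical capture function and the 10-frame refresh interval is an assumption to tune):
Mat blurred, diff;
int frameIdx = 0;
for (;;) {
    Mat img = grabNextFrame();                 // hypothetical: fetch the next 1080p frame
    if (img.empty()) break;
    if (frameIdx++ % 10 == 0)
        blur(img, blurred, Size(15, 15));      // the expensive ~25 ms blur, done only occasionally
    diff = img - blurred;                      // the cheap ~15 ms subtraction, done every frame
    // ... green-channel processing on diff ...
}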
Depending on how smooth the background gradient is, edge detection followed by dilation might work well for you.
Edge detection will output the 3 lines that you've shown in the sample and discard the background gradient (again, depending on smoothness). Dilating the image then will thicken the edges so that your line contours are more significant.
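A possible sketch of those two steps (using Canny; the detector choice and threshold values are my guesses, not something the answer specifies):
Mat gray, edges;
cvtColor(src, gray, COLOR_BGR2GRAY);            // src is the input BGR image
Canny(gray, edges, 50, 150);                    // sharp 1-2 px features survive, the smooth gradient does not
dilate(edges, edges, Mat(), Point(-1, -1), 2);  // thicken the contours (default 3x3 kernel, 2 iterations)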
You can superimpose this edge map on your original image as:
Mat src, edges;                            // src: input BGR image, edges: single-channel edge mask
// perform edge detection on src and output in edges (e.g. as sketched above)
Mat result;
cvtColor(edges, edges, COLOR_GRAY2BGR);    // change the mask to a 3-channel image
subtract(edges, src, result);              // white edge pixels minus the image
subtract(edges, result, result);           // keeps the original colours where edges are white, black elsewhere
The result should contain only the 3 lines with 3 colours. From here on you can apply color filtering to select the green one.
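One way to do that final colour filtering (the HSV bounds below are rough guesses for "green" and will need tuning):
Mat hsv, greenMask;
cvtColor(result, hsv, COLOR_BGR2HSV);
inRange(hsv, Scalar(40, 40, 40), Scalar(80, 255, 255), greenMask);   // keep only greenish pixels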
I have binarized an image and counted its black and white pixels. After that, I calculate the ratio of black pixels, i.e. R = (number of black pixels / (number of white pixels + number of black pixels)) * 100. I then use this result to decide whether an eye is open or closed: if R > 20% the eye is open, otherwise it is closed. But when I calculate this ratio it does not come out as I expect. I think there might be an error in the black and white pixel counts, due to an unwanted black region in the image, or there might be a problem with the thresholding. I am using Otsu's method for thresholding the image.
While researching this topic I also tried `openInput = bwareaopen(bw, 80)` but it does not work well for removing the unwanted black area. Kindly help me out in removing the unwanted area.
close all
clear all
I=imread('op.jpg');
I=rgb2gray(I);
thres_level=graythresh(I); % find the threshold level of image
bw=im2bw(I,thres_level); % converts an image into binary
figure, imshow(bw);
totnumpix=numel(bw); % calculate total no of pixels in image
nwhite_open=sum(bw(:)); % count the white pixels (ones) in the binary image
nblack_open=totnumpix-nwhite_open; % count the black pixels (zeros) in the binary image
R=(nblack_open/(nblack_open+nwhite_open))*100
Area opening erases regions whose area is smaller than the parameter you pass. In your case it will not likely help.
I think that the black/white ratio is not the solution. I would either:
Detect the iris (there are multiple effective papers/posts on the subject, mainly using the Hough transform); if the detection fails, the eye is closed.
If you have the original color image, use skin detection (simple color thresholding in the HSV color space), because the proportion of skin-colored pixels in the eye region will be higher when the eye is closed; see the sketch below.
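A rough OpenCV-style illustration of that second idea (eyeRegionBGR is a hypothetical crop of the eye area, and the threshold values are assumptions that need tuning for your camera and lighting):
Mat hsv, skinMask;
cvtColor(eyeRegionBGR, hsv, COLOR_BGR2HSV);
inRange(hsv, Scalar(0, 30, 60), Scalar(25, 180, 255), skinMask);      // crude skin-tone range
double skinRatio = countNonZero(skinMask) / (double)skinMask.total();
// a high skinRatio inside the eye region suggests the eye is closed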
I am resizing png images which might have alpha channel.
Everything works well, with one exception:
I get some gray pixels around the transparent areas.
The original image doesn't have any drop shadows.
Is there a way to fix this / work it around?
I am using SmoothResize by Gustavo Daud (See the first answer to this question), to resize the png image.
I cannot provide the code that I am using as I did not write it and do not have the author's permission to publish it.
I suspect that is caused by 2 things: funny RGBA values in PNG and naive resizing code.
You need to check your PNG contents. You are looking for the RGB values in transparent areas. Despite transparent areas having alpha at 0, they still carry RGB information. In your case I would expect that the transparent areas are filled with a black RGB value, and that is what can cause the grey outline after a resize if the resize is done naively. Example: what happens if the code resizes two adjacent pixels (0,0,0,0) and (255,255,255,255) into one? Both pixels contribute 50%, so the result is (128,128,128,128), which is semi-transparent grey. The same thing happens when you upscale by e.g. x1.5: the added pixel in between the original two will be grey. Usually that does not happen because image-editing software is smart enough to fill those invisible pixels with the color of the nearest visible pixel.
You can try to "fix" the PNG by filling transparent areas with white (or another color that is on border of your images).
Another approach is to use more advanced resizing code (write it or find a library) that ignores the RGB values of transparent pixels (e.g. by taking RGB from the closest non-transparent pixel).
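One standard way to make the resample ignore fully transparent pixels is to weight by alpha: premultiply, resize, then un-premultiply. The sketch below uses OpenCV purely for illustration (it is not the SmoothResize code from the question) and assumes the usual OpenCV includes and namespace:
Mat resizeRGBA(const Mat& src, Size dstSize)       // src: 8-bit, 4-channel image (colour + alpha)
{
    Mat f;
    src.convertTo(f, CV_32FC4, 1.0 / 255.0);

    std::vector<Mat> ch;
    split(f, ch);                                  // ch[0..2] = colour, ch[3] = alpha
    for (int i = 0; i < 3; ++i)
        ch[i] = ch[i].mul(ch[3]);                  // premultiply: transparent pixels contribute nothing

    Mat pre, resized;
    merge(ch, pre);
    resize(pre, resized, dstSize, 0, 0, INTER_AREA);

    split(resized, ch);
    Mat safeAlpha = max(ch[3], 1e-6);              // avoid dividing by zero where alpha is 0
    for (int i = 0; i < 3; ++i)
        divide(ch[i], safeAlpha, ch[i]);           // un-premultiply to restore the colour values

    Mat out;
    merge(ch, out);
    out.convertTo(out, CV_8UC4, 255.0);
    return out;
}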
When we look at a photo of a group of trees, we are able to identify that the photo is predominantly green and brown, or for a picture of the sea we are able to identify that it is mostly blue.
Does anyone know of an algorithm that can be used to detect the prominent color or colours in a photo?
I can envisage a 3D clustering algorithm in RGB space or something similar. I was wondering if someone knows of an existing technique.
Convert the image from RGB to a color space with brightness and saturation separated (HSL/HSV)
http://en.wikipedia.org/wiki/HSL_and_HSV
Then find the dominating values for the hue component of each pixel. Make a histogram of the hue values of all pixels and analyze in which angle region the peaks fall. A large peak in the quadrant between 180 and 270 degrees means there is a large portion of blue in the image, for example.
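A sketch of that histogram step in OpenCV (for illustration only; note that OpenCV stores hue as 0-179, i.e. half the usual 0-359 degree scale):
Mat hsv;
cvtColor(img, hsv, COLOR_BGR2HSV);
int channels[] = {0};                         // hue channel
int histSize = 36;                            // 36 bins over 0-179, i.e. 10 real degrees per bin
float hueRange[] = {0, 180};
const float* ranges[] = {hueRange};
Mat hueHist;
calcHist(&hsv, 1, channels, Mat(), hueHist, 1, &histSize, ranges);
// the bins with the largest counts correspond to the dominant hue(s)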
There can be several difficulties in determining one dominant color. Pathological example: an image whose left half is blue and right half is red. Also, the hue will not deal very well with grayscales obviously. So a chessboard image with 50% white and 50% black will suffer from two problems: the hue is arbitrary for a black/white image, and there are two colors that are exactly 50% of the image.
It sounds like you want to start by computing an image histogram or color histogram of the image. The predominant color(s) will be related to the peak(s) in the histogram.
You might want to change the image from RGB to indexed; then you could use a regular histogram and detect the peaks (MATLAB does this with rgb2ind(), as you probably already know), and then the problem would be reduced to your regular "finding peaks in an array".
Then, per the MATLAB documentation:
n = hist(Y,nbins) bins the elements in vector Y into nbins equally spaced containers and returns the number of elements in each container as a row vector.
The values in n tell you how many elements fall in each bin. Then it's just a matter of fiddling with the number of bins to make them wide enough, deciding how many elements in a bin make you count it as a predominant color, taking the bins that contain that many elements, calculating the index that corresponds to their middle, and converting it back to RGB.
Whatever you're using for your processing probably has similar functions to those.
1. Average all pixels in the image.
2. Remove all pixels that are farther away from the average color than one standard deviation.
3. GOTO 1 with the remaining pixels until arbitrarily few are left (1, or maybe 1%); see the sketch below.
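A minimal sketch of that loop (OpenCV again, for illustration; the Euclidean RGB distance and the 1% stopping point are assumptions, and the usual OpenCV and standard headers are assumed):
Vec3f dominantColor(const Mat& bgr)             // bgr: 8-bit, 3-channel image
{
    // collect every pixel as a float triple
    std::vector<Vec3f> pts;
    for (int y = 0; y < bgr.rows; ++y)
        for (int x = 0; x < bgr.cols; ++x) {
            Vec3b p = bgr.at<Vec3b>(y, x);
            pts.push_back(Vec3f(p[0], p[1], p[2]));
        }

    size_t stopAt = pts.size() / 100;           // iterate until ~1% of the pixels remain
    if (stopAt < 1) stopAt = 1;

    Vec3f mean(0, 0, 0);
    while (pts.size() > stopAt) {
        // 1. average the remaining pixels
        mean = Vec3f(0, 0, 0);
        for (const Vec3f& p : pts) mean += p * (1.0f / pts.size());

        // 2. standard deviation of the distance to the average
        double var = 0;
        for (const Vec3f& p : pts) var += norm(p - mean) * norm(p - mean) / pts.size();
        double sd = std::sqrt(var);

        // 3. drop pixels farther from the average than one standard deviation
        std::vector<Vec3f> kept;
        for (const Vec3f& p : pts)
            if (norm(p - mean) <= sd) kept.push_back(p);
        if (kept.size() == pts.size()) break;   // nothing removed: converged early
        pts.swap(kept);
    }
    return mean;                                // approximate dominant colour
}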
You might also want to pre-process the image, for example by applying a high-pass filter (removing only very low frequencies) to even out lighting in the photo; see http://en.wikipedia.org/wiki/Checker_shadow_illusion