I'm trying to extract the colors from a urinalysis strip in order to analyze them, and I need to segment the color areas to get a robust solution.
Currently, I'm using a hardcoded distance-from-top approximation.
I already tried adaptive thresholding, but I can't segment the colors correctly without picking up background noise, merging multiple colors, or missing some colors entirely.
I think you are overcomplicating this a bit: your problem is essentially a 1D problem. You can look at the average color per row of your image, and this should give you a clean and more robust version to work on:
img = imread('http://i.imgur.com/mhGA3hp.jpg');
img = im2double(img);
avg = mean(img,2);
imshow(bsxfun(@times, avg, ones(1,50,3)));   % replicate the per-row average into a 50-pixel-wide strip for display
Which results in:
I believe you will find it easier to work with the 1D clean version of your image.
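If that 1D profile works for you, here is a rough sketch of how it could then be split into individual pads. It continues from the avg variable above; the 0.85 brightness threshold and the assumption that the pads are darker than the white strip background are mine, so treat it as a starting point rather than a finished solution.

% Sketch: segment the 1D row-average profile into color bands.
% Assumes the pads are darker than the white strip background.
lum = mean(avg, 3);                 % per-row luminance of the averaged column
isPad = lum < 0.85;                 % threshold is an assumption; tune per image
edges = diff([0; isPad(:); 0]);     % +1 at band starts, -1 just after band ends
bandStart = find(edges == 1);
bandEnd   = find(edges == -1) - 1;
for k = 1:numel(bandStart)
    rows = bandStart(k):bandEnd(k);
    padColor = squeeze(mean(avg(rows, 1, :), 1));   % average RGB of band k
    fprintf('band %d: rows %d-%d, RGB = [%.2f %.2f %.2f]\n', ...
        k, bandStart(k), bandEnd(k), padColor);
end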
Some details:
I'm making a small prototype in Framer, some kind of wallpaper app. I use vibrant.js to automatically pick colors from the images to add a bit of a tint to my interface. I use two vibrant color profiles: "DarkMuted" for the backgrounds and "Vibrant" for active controls / accents, etc.
Unfortunately, the color combination sometimes looks dull and desaturated, and the active elements don't stand out as much as I want them to.
So my first decision was to
Blindly edit colors.
I convert them to HSL and explicitly set the s and l values.
s: .2, l: .2 # DarkMuted
s: .6, l: .8 # Vibrant
This creates enough contrast between the two, but also has a drawback: sometimes colors look a bit oversaturated and distorted (compared to the input).
At this link you can find pairs of screenshots showing the difference between the "original" color pair returned by vibrant.js and the colors with adjusted s and l values.
I've already asked on another forum whether it's possible to apply automated adjustments to the colors to normalize the perceived bias for some color ranges. The answer was "almost impossible".
I would say the rate of subjectively acceptable colors is ~65%, but the result is too unpredictable. Since it's an automatic solution, I can't rely on that too much.
So I decided to approach it another way:
Generate a bunch of colors and filter one
The problem here is:
I haven't found a way to generate more than one color per profile with vibrant.js.
Also, I've tried the color-thief.js library to generate a palette of dominant colors and then filter out what I call a "vibrant" color.
# Threshold values I used
thr = {minL: .4, maxL: .8, minS: .6, maxS: .8}
But here another problem occurs: not every image has a set of colors that falls within my thresholds. Some images have a pastel palette or are b/w and don't return anything.
So,
Can I overcome the vibrant.js limitation of 1 color per profile to have a bunch of "Vibrant" colors and then pick one that suits my requirements?
Or, maybe, there is another / better way of doing this?
There is a specification about minimum contrast between colors (WCAG); you can find it here. So a possible strategy would be extracting the colors with vibrant.js, and after that you could check the contrast with a function. You can find a guide to building a function that checks color contrast here. The last step would probably be to generate color variations with good contrast, based on the results from the color-contrast function. You can generate variations using this lib.
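For reference, WCAG defines the contrast ratio in terms of the relative luminance of the two colors. A minimal sketch of such a check (shown in MATLAB-style pseudocode only for illustration; the formula ports directly to JavaScript) could be:

% Sketch of the WCAG 2.x contrast-ratio check for two sRGB colors in [0,1].
function ratio = wcagContrast(rgb1, rgb2)
    L1 = relLuminance(rgb1);
    L2 = relLuminance(rgb2);
    ratio = (max(L1, L2) + 0.05) / (min(L1, L2) + 0.05);   % >= 4.5 passes AA for normal text
end

function L = relLuminance(rgb)
    % Linearize sRGB, then weight the channels per the WCAG definition.
    lin = zeros(1, 3);
    for c = 1:3
        if rgb(c) <= 0.03928
            lin(c) = rgb(c) / 12.92;
        else
            lin(c) = ((rgb(c) + 0.055) / 1.055) ^ 2.4;
        end
    end
    L = 0.2126 * lin(1) + 0.7152 * lin(2) + 0.0722 * lin(3);
end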
Example Image
I want to remove the lines (shown in red) as they are out of order. The lines shown in black repeat at approximately the same period. The period is not known beforehand. Is there any way of deleting the non-periodic lines (shown in red) automatically?
NOTE: The image is binary (black & white); the lines are shown in red only for illustration.
Of course there is a way. There is almost always some way to do something.
Unfortunately you have not provided any particular problem. The entire thing is too broad to be answered here.
To help you get started: (I highly recommend you start with pen, paper and your brain)
Detect the lines -> google or think; there are many standard ways to detect lines in an image. If you don't have noise in your binary image, it's trivial.
Find any equidistant sets -> think
Delete the rest -> think (you know what is good, so everything else has to go away)
I assume your lines are (almost) vertical.
The following should work:
turn the image into a column-sum histogram
try a Fourier transform on the signal (potentially padding the image appropriately)
pick the maximum/peak of the Fourier spectrum as your base period
If you need the lines themselves rather than just their positions, generate a mask with lines at the appropriate intervals (as determined by your analysis above) and apply it to the image (see the sketch below).
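A minimal sketch of that pipeline, assuming the lines are roughly vertical and that BW is the binary image with line pixels set to true (the variable names and the note about phase estimation are my own additions):

% Sketch: estimate the horizontal period of vertical lines via the FFT.
colSum = sum(BW, 1);                        % column-sum "histogram"
colSum = colSum - mean(colSum);             % remove the DC component
spec   = abs(fft(colSum));
n      = numel(colSum);
[~, k] = max(spec(2:floor(n/2)));           % strongest non-DC frequency: k cycles over n columns
period = n / k;                             % dominant spacing in pixels, approximately
% Build a mask of vertical stripes at that spacing and keep only the line pixels
% inside it; the phase/offset still has to be estimated, e.g. by cross-correlating
% the candidate mask with colSum.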
I am currently working on an algorithm to detect the playing area of a pool table. For this purpose, I captured an image, transformed it to grayscale, and used a Sobel operator on it. Now I want to detect the playing area as a box with 4 corners located in the 4 corners of the table.
Detecting the edges of the table is quite straightforward; however, it turns out that detecting the 4 corners is not so easy, because there are pockets in the pool table. Now I just want to fit a line to each of the side edges, and from those lines I can compute the intersections, which are the corners of my table.
I am stuck here, because I have not yet come up with a good solution for finding these lines in my image. I can see them very easily when I use the Sobel operator. But what would be a good way of detecting them and computing the positions of the corners?
EDIT: I added some sample images
Basic Image:
Grayscale Image
Sobel Filter (horizontal only)
For a general solution, there will be many sources of noise: problems with cloth around the rails, wood texture (or no texture) on the rails, varying lighting, shadows, stains on the cloth, chalk on the rails, and so on.
When color and lighting aren't dependable, and when you want to find the edges of geometric objects, then it's best to think in terms of edge pixels rather than gray/color pixels.
A while back I was thinking of making a phone-based app to save ball positions for later review, including online, so I've thought a bit about this problem. Although I can provide some guidance for your current question, it occurs to me that you'll run into new problems at each step of the way, so I'll try to provide a more complete answer.
Convert the image to grayscale. If we can't get an algorithm to work in grayscale, we'll inevitably run into problems with color. (See below)
[TBD] Do some preprocessing to reduce noise.
Find edge points using Sobel or (if you must) Canny.
Run Hough line detection, but with a few caveats and parameterizations as described below (a sketch of the grayscale/edge/Hough steps follows this list).
Find the lines that describe a keystone-shaped quadrilateral. (This will likely be the inner of two quadrilaterals: one inside the rail on the bed, and the other, slightly larger quadrilateral at the cloth/wood rail edge at top.)
(Optional) Use the side pockets to help determine the orientation of the quadrilateral.
Use a perspective (projective) transform to map the perspective-distorted table bed to a rectangle of [thankfully] known relative dimensions (an affine transform alone cannot undo the keystone distortion). We know the bed sizes in advance, so you can remap the distorted quadrilateral to a proper rectangle. (We'll ignore some optical effects for now.)
Remap the color image to the perspective-corrected rectangle. You'll probably need to tweak the positions of some balls.
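A rough MATLAB sketch of the grayscale/edge/Hough steps above; the file name and all parameter values are placeholders that will need tuning for real photos:

% Sketch: grayscale -> edges -> Hough lines.
img  = imread('pool_table.jpg');            % placeholder file name
gray = rgb2gray(im2double(img));
gray = imgaussfilt(gray, 2);                % mild smoothing to reduce noise
BW   = edge(gray, 'canny');
[H, theta, rho] = hough(BW);
peaks = houghpeaks(H, 8, 'Threshold', 0.3 * max(H(:)));   % a few strongest lines
lines = houghlines(BW, theta, rho, peaks, 'FillGap', 20, 'MinLength', 60);
% 'lines' is a struct array with point1/point2 endpoints; the four sides of the
% keystone quadrilateral should be among the longest, most-voted candidates.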
General notes:
Filtering by color in the general sense can be difficult. It's tempting to think of the cloth as being simply green, blue, or red (or some other color), but when you look at the actual RGB values and try to separate colors you'll begin to appreciate what a nightmare working in color can be.
Optical distortion might throw off some edges.
The far short rail may be difficult to detect, BUT you can do this: find the inside lines of the two long rails, then search vertically between them for the first strong, roughly horizontal edge at the far side of the image. That'll be the far short rail.
Although you probably want to use your phone camera for convenience, using a Kinect camera or similar (preferably smaller) device would make the problem easier. Not only would you have both color data and 3D data, but you would eliminate some problems with lighting since the depth data wouldn't depend on visible lighting.
For your app, consider limiting the search region for rail edges to a perspective-distorted rectangle. The user might be able to adjust the search region. This could greatly simplify the processing, and could help you work around problems if the table isn't lit well (as can be the case).
If color segmentation (as suggested by @Dima) works, get the outline of the blob using contour following. Then simplify the outline to a quadrilateral (or a polygon with few sides) using the Douglas-Peucker algorithm. You should find the four table edges this way.
For more accuracy, you can refine the edge location by local search of transitions across it and perform line fitting. Then intersect the lines to get the corners.
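A minimal sketch of the contour-following and Douglas-Peucker step, assuming a logical mask of the cloth is already available and that reducepoly (available in newer Image Processing Toolbox releases) can be used for the simplification:

% Sketch: blob outline -> simplified polygon (Douglas-Peucker).
% 'mask' is assumed to be a logical image where the table cloth is true.
B = bwboundaries(mask, 'noholes');
[~, idx] = max(cellfun(@numel, B));          % take the largest blob's boundary
outline = B{idx};                            % N-by-2 [row, col] points
corners = reducepoly(outline, 0.02);         % tolerance is a guess; tune as needed
% Ideally 'corners' ends up with ~4 vertices, one per table corner; if not,
% adjust the tolerance, or fit lines to the four longest sides instead.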
The following answer assumes you have already found the positions of the lines in the image. This, however, can be done "easily" by directly looking at the pixels and seeing whether they form a "line". It is usually easier to detect this if the image has been deskewed first, i.e. rotated so that the rectangle (pool table) looks more like this: [] than like /=/. Then it is just a case of scanning the pixels and, if there are ones of similar colour alongside, assuming a line runs between them.
The code works by looping over the lines found in the image. Whenever the end points of a horizontal and a vertical line fall within a tolerance of each other in both the x and y coordinates, they are marked as a corner. Once a corner is found, I take the average of the two positions to find where the corner lies. For example:
A horizontal line ending at 10, 10 and a vertical line starting at 12, 12 will be found to be a corner if there is a tolerance of 2 or more. The corner found will be at: 11, 11
NOTE: This only finds top-left corners, but it can easily be adapted to find all of them. The reason it is done like this is that, in the application where I use it, it is faster to sort each array first into an order where the relevant values are found early; see: Why is processing a sorted array faster than an unsorted array?.
Also note that my code finds only the first corner for each line, which might not be applicable for you; this is mainly for performance reasons. However, the code can easily be adapted to find all the corners for all the lines and then either select the "most likely" corner or average over them all.
Also note my answer is written in C#.
private IEnumerable<Point> FindTopLeftCorners(IEnumerable<Line> horizontalLines, IEnumerable<Line> verticalLines)
{
    List<Point> TopLeftCorners = new List<Point>();
    // Sort so that likely matches are found early (see the linked question on sorted arrays).
    Line[] laHorizontalLines = horizontalLines.OrderBy(l => l.StartPoint.X).ThenBy(l => l.StartPoint.Y).ToArray();
    Line[] laVerticalLines = verticalLines.OrderBy(l => l.StartPoint.X).ThenBy(l => l.StartPoint.Y).ToArray();

    foreach (Line verticalLine in laVerticalLines)
    {
        foreach (Line horizontalLine in laHorizontalLines)
        {
            // The X coordinates of the two start points agree within the tolerance...
            if (verticalLine.StartPoint.X <= (horizontalLine.StartPoint.X + _nCornerTolerance) && verticalLine.StartPoint.X >= (horizontalLine.StartPoint.X - _nCornerTolerance))
            {
                // ...and so do the Y coordinates: treat the midpoint as a corner.
                if (horizontalLine.StartPoint.Y <= (verticalLine.StartPoint.Y + _nCornerTolerance) && horizontalLine.StartPoint.Y >= (verticalLine.StartPoint.Y - _nCornerTolerance))
                {
                    int nX = (verticalLine.StartPoint.X + horizontalLine.StartPoint.X) / 2;
                    int nY = (verticalLine.StartPoint.Y + horizontalLine.StartPoint.Y) / 2;
                    TopLeftCorners.Add(new Point(nX, nY));
                    break;  // only the first corner per vertical line (performance)
                }
            }
        }
    }
    return TopLeftCorners;
}
Where Line is the following class:
public class Line
{
    public Point StartPoint { get; private set; }
    public Point EndPoint { get; private set; }

    public Line(Point startPoint, Point endPoint)
    {
        this.StartPoint = startPoint;
        this.EndPoint = endPoint;
    }
}
And _nCornerTolerance is a configurable int.
The playing area of a pool table typically has a distinctive color, like green or blue. I would try a color-based segmentation approach first. The Color Thresholder app in MATLAB gives you an easy way to try different color spaces and thresholds.
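As an illustration of what such a segmentation might look like once thresholds have been picked in the app (the file name and the hue/saturation/value ranges below assume a green cloth and are only a starting point):

% Sketch: segment a green cloth by thresholding in HSV.
img = im2double(imread('pool_table.jpg'));   % placeholder file name
hsv = rgb2hsv(img);
mask = hsv(:,:,1) > 0.20 & hsv(:,:,1) < 0.45 ...   % hue roughly green
     & hsv(:,:,2) > 0.30 ...                       % reasonably saturated
     & hsv(:,:,3) > 0.15;                          % not in deep shadow
mask = bwareafilt(imfill(mask, 'holes'), 1);       % keep only the largest blob
imshow(mask);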
I am trying to change the white point/white balance programmatically. This is what I want to accomplish:
- Choose a (random) pixel from the image
- Get color of that pixel
- Transform the image so that all pixels of that color will be transformed to white and all other colors shifted to match
I have accomplished the first two steps but the third step is not really working out.
At first I thought that, as per Apple's documentation, CIWhitePointAdjust should accomplish exactly that, but although it does change the image, it is not doing what I would like/expect it to do.
Then it seemed that CIColorMatrix should be something that would help me shift the colors, but I was (and still am) at a loss as to what to feed it with those pesky vectors.
I have tried almost everything: the same RGB values on all vectors; the corresponding value (R for R, etc.) on each vector; and variations of those such as 1 - x, 1 + x and 1 / x.
I have also come across CITemperatureAndTint which, as per Apple's documentation, should also help, but I have not yet figured out how to convert from RGB to temperature and tint. I have seen algorithms and formulas for converting from RGB to temperature, but nothing regarding tint. I will continue experimenting with this a little, though.
Any help much appreciated!
After a lot of experimenting and mathematics I finally got my app to work almost the way I want.
If anyone else finds themselves facing a similar problem, then here is what I did.
I ended up using the CITemperatureAndTint filter, supplying a color temperature in kelvins calculated from the selected pixel's RGB value and a user-supplied tint value.
To get to Kelvins I:
- firstly converted RGB to XYZ using the D65 illuminant (i.e. daylight).
- then converted from XYZ to Yxy. Both of these conversions were made using the algorithms found on EasyRGB.
- I then calculated kelvins from Yxy using McCamy's formula, which I found in a paper here (a sketch of these steps follows below).
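A minimal sketch of that RGB-to-kelvin conversion, written as MATLAB-style pseudocode rather than the Core Image code actually used in the app; the sRGB-to-XYZ matrix and McCamy's formula are standard, the rest is my own framing:

% Sketch: sRGB in [0,1] -> XYZ (D65) -> chromaticity xy -> CCT via McCamy's formula.
function cct = rgbToKelvin(rgb)
    % Linearize sRGB.
    lin = zeros(1, 3);
    for c = 1:3
        if rgb(c) <= 0.04045
            lin(c) = rgb(c) / 12.92;
        else
            lin(c) = ((rgb(c) + 0.055) / 1.055) ^ 2.4;
        end
    end
    % sRGB -> XYZ, D65 reference white.
    M = [0.4124 0.3576 0.1805;
         0.2126 0.7152 0.0722;
         0.0193 0.1192 0.9505];
    XYZ = M * lin(:);
    x = XYZ(1) / sum(XYZ);
    y = XYZ(2) / sum(XYZ);
    % McCamy's approximation of correlated color temperature.
    n = (x - 0.3320) / (0.1858 - y);
    cct = 449*n^3 + 3525*n^2 + 6823.3*n + 5520.33;
end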
These steps got the image in the ballpark but not quite there, so I added a UISlider for the user to supply the tint value ranging from -100 to 100.
By selecting a point that should be white and choosing values from the positive side of the tint scale (the images on my phone tend to be on the yellow side), an image can now be converted to (more) neutral colors. Yay!
I supplied the calculated temperature and the user-chosen tint as the inputNeutral vector values, and 6500 (D65 daylight) and 0 as the inputTargetNeutral vector values to the CITemperatureAndTint filter.
I have a bunch of uncompressed bitonal TIF document images. All of them have a watermark in the middle. When I run them through OCR, the text that overlaps with the watermark does not get recognized. I am trying to see if I can apply some type of cleanup to remove those watermarks to be able to recognize the missing text.
Again, the images are black and white, but when you look at the watermark it appears grey since it has a pattern of black and white pixels that makes the letters in the watermark less "dense" than regular text. At the same time, the watermark letters are very big, much bigger than the regular text.
An example of a somewhat similar image is this (except this one is in color, and the watermark characters in my case are a lot thicker and bigger; my watermarks are also a lot shorter: only 3 to 4 letters long).
It seems that there might be some sort of cleanup filter similar to removing large black borders from an image, except that borders are usually "denser" than a watermark, so they appear "more black".
I have 3 tools at my disposal: GIMP, ImageMagick and IrfanView. Can you recommend any specific features of any subset of these tools that might help me?
Playing with contrast etc. did not help, but I found a different way. As stated above, the regular text is a lot "denser" than the watermark text, meaning that a regular black pixel has more surrounding black pixels than a watermark black pixel. So I devised a simple window-based filtering and thresholding algorithm.
Here's how I did it in MATLAB, using a 5x5 window:
im = imread('imageWithWmark.tif');
imInv = ~im;                          % invert so that text pixels become true
nr = size(imInv,1);
nc = size(imInv,2);
d = 2;                                % half-width, for a 5x5 window
counts = zeros(nr,nc);
for rr = d+1 : nr-d-1
    for cc = d+1 : nc-d-1
        % number of black (now true) pixels in the 5x5 neighbourhood
        counts(rr,cc) = nnz(imInv(rr-d:rr+d, cc-d:cc+d));
    end
end
thresh = 10;                          % 10 out of 25 -- the larger the thresh, the thinner the resulting letters
imThresh = (counts >= thresh) & imInv;
imwrite(~imThresh, sprintf('Thresh_%d.tif',thresh), 'Compression','none', 'Resolution',300);
Of course, the size of the window, the threshold, and the other parameters depend on the parameters of the regular text on the page (letters bigger/smaller, thicker/thinner, etc.), but even this initial version worked pretty well.
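As a side note, the double loop above can be replaced by a single convolution, which should give the same counts at the interior pixels (a sketch, not verified against the original output):

% Sketch: same neighbourhood count computed with a convolution instead of loops.
counts = conv2(double(imInv), ones(5), 'same');   % black-pixel count in each 5x5 window
imThresh = (counts >= thresh) & imInv;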