I have an image with a group of cells and I need to count them. I did a similar exercise using bwlabel; however, this one is a bit more challenging because there are some little cells that I don't want to count. In addition, some cells are on top of each other. I've seen some MATLAB examples online, but they all involved functions that aren't available. Do you have any ideas how to separate the overlapping cells?
Here's the image:
To make it clearer: Please help me count the number of red blood cells (which have a circular shape) like so:
The image is in grayscale, but I think you can distinguish which ones are red blood cells. They have a distinctive biconcave shape... Nothing else matters. But to be more specific, here is an image with all the things that I want to ignore/discard/not count highlighted in red.
The main issue is the overlapping of cells.
The following is an ImageJ macro to do this (ImageJ is free software too). I would recommend you use ImageJ (or Fiji) to explore this type of stuff. Then, if you really need it, you can write an Octave program to do it.
run ("8-bit");
setAutoThreshold ("Default");
setOption ("BlackBackground", false);
run ("Convert to Mask");
run ("Fill Holes");
run ("Watershed");
run ("Analyze Particles...", "size=100-Infinity exclude clear add");
This approach gives this result:
The point-and-click equivalent is:
1. Image > Type > 8-bit
2. Image > Adjust > Threshold
3. Select "Default" and untick "Dark background" in the threshold dialogue, then click "Apply"
4. Process > Binary > Fill Holes
5. Process > Binary > Watershed
6. Analyze > Analyze Particles...
7. Set "100-Infinity" as the range of valid particle sizes in the "Analyze Particles" dialogue
In ImageJ, if you have a binary image, Watershed actually performs the distance transform and then the watershed.
Octave has all the functions above except watershed (I plan on implementing it soon).
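If you end up doing this in MATLAB instead (where watershed, imfill, and bwareaopen all ship with the Image Processing Toolbox), here is a minimal sketch of the same pipeline. The filename is hypothetical, the cells are assumed to be darker than the background, and the 100-pixel cutoff mirrors the "Analyze Particles" setting above; all of these would need tuning against your image.

% Minimal MATLAB sketch of the ImageJ pipeline above (hypothetical filename,
% cells assumed darker than the background).
img = imread('cells.png');
if size(img, 3) == 3
    img = rgb2gray(img);                 % equivalent of "8-bit"
end
bw = ~im2bw(img, graythresh(img));       % Otsu threshold; invert so cells are foreground
bw = imfill(bw, 'holes');                % Fill Holes
bw = bwareaopen(bw, 100);                % drop objects under 100 px (like "size=100-Infinity")
D  = -bwdist(~bw);                       % negative distance transform
L  = watershed(D);                       % split touching cells along ridge lines
bw(L == 0) = false;
[~, n] = bwlabel(bw);
fprintf('Counted %d cells\n', n);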
If you can't use ImageJ for your problem (why not? It can run in headless mode too), then an alternative is to get the area of each object and, if it's too high, assume it's multiple cells. It kind of depends on your question and whether you can come up with a value for the average cell size (and its error).
Another alternative is to measure the roundness of each object identified. Cells that overlap will be less round, so you can identify them that way; a rough sketch follows.
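This is only an illustration of the idea, reusing the binary mask bw from the sketch above; the 0.8 cutoff is made up and would need tuning.

% Flag objects whose roundness suggests merged cells.
% Roundness = 4*pi*Area/Perimeter^2 equals 1 for a perfect circle.
stats     = regionprops(bwlabel(bw), 'Area', 'Perimeter');
roundness = 4 * pi * [stats.Area] ./ ([stats.Perimeter] .^ 2);
nSuspect  = sum(roundness < 0.8);    % likely overlapping clusters
nRound    = sum(roundness >= 0.8);   % likely single cells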
It depends on how much error you are willing to accept in your program's output.
This is only to help with the "noise", but why not continue using bwlabel and try bwareaopen to get rid of the small objects? The cells seem pretty large, so just set a size threshold to remove the small objects: http://www.mathworks.com/matlabcentral/answers/46398-removing-objects-which-have-area-greater-and-lesser-than-some-threshold-areas-and-extracting-only-th
As for the overlapping cells, maybe set an upper bound on the size of a single cell, so that when two cells overlap, the object is classified as "greater than one cell" or something like that. That way it at least acknowledges the shape, even if it can't determine exactly how many cells are there; a rough sketch is below.
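A minimal MATLAB sketch of that combination (the filename and both thresholds, 100 px and 1500 px, are assumptions you'd have to tune):

% Size-filter the mask, then split the counts by an assumed upper bound
% on the area of a single cell.
bw = imread('cells_mask.png') > 0;          % hypothetical binary mask
bw = bwareaopen(bw, 100);                   % remove small objects ("noise")
stats = regionprops(bwlabel(bw), 'Area');
areas = [stats.Area];
singleMax = 1500;                           % assumed upper bound on the area of one cell
fprintf('%d single cells, %d "greater than one cell" objects\n', ...
        sum(areas <= singleMax), sum(areas > singleMax));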
I have a table with values in it. I want to be able to immediately spot the cells with the biggest values. Using the color scale tool works nicely as demonstrated below
However, with my real data, there are a few cells that have huge values in them; these raise the Maxpoint so high that the formatting more or less breaks for the other values, since relative to the Maxpoint they are all very small, making their colors almost indistinguishable.
I have obtained the second-highest value by doing =large(A1:C3,2) in an unrelated cell - however I seem to be unable to reference that cell in the Maxpoint setting. Is there a way?
Another idea was to manually set up a color scale that is more logarithmic in its curve, but I don't think this is a nice option. The only option left that I can think of is to dynamically color the cells via a script - is there already an easier or existing solution?
Try this as the Maxpoint value:
=LARGE($A$1:$C$3; 2)
Example Image
I want to remove the lines (shown in red) as they are out of order. The lines shown in black repeat at (approximately) the same period. The period is not known beforehand. Is there any way of deleting the non-periodic lines (shown in red) automatically?
NOTE: The image is binary (black & white); the lines are shown in red only for illustration.
Of course there is a way. There is almost always some way to do something.
Unfortunately, you have not provided any particular problem. The entire thing is too broad to be answered here.
To help you get started (I highly recommend you start with pen, paper and your brain):
Detect the lines -> google or think; there are many standard ways to detect lines in an image. If you don't have noise in your binary image, it's trivial.
Find any equidistant sets -> think
Delete the rest -> think (you know what is good, so everything else has to go away)
I assume your lines are (almost) vertical.
The following should work:
Turn the image into a column-sum histogram
Try a Fourier transform of the signal (potentially padding the image appropriately)
Pick the maximum/peak of the Fourier spectrum as your base period
If you need the lines rather than just their positions, generate a mask with lines at the appropriate intervals (as determined by the analysis above) and apply it to the image.
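A minimal MATLAB/Octave sketch of those steps (the filename is hypothetical and the lines are assumed to be roughly vertical):

% Estimate the dominant line spacing from the column sums of a binary image.
bw = imread('lines.png') > 0;               % hypothetical binary image
colSum = sum(bw, 1);                        % column-sum "histogram"
colSum = colSum - mean(colSum);             % remove the DC component
spec = abs(fft(colSum));
half = spec(2:floor(numel(colSum) / 2));    % positive frequencies, DC skipped
[~, k] = max(half);                         % k cycles across the image width
period = numel(colSum) / k;                 % dominant period in pixels
fprintf('Estimated line period: %.1f px\n', period);
% A mask of expected line positions can then be built at multiples of this
% period and applied to keep only the periodic lines.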
I'm currently developing my iOS app and want to depict a graph whose shape is a circle like a pie chart, but whose radius depends on each specific value. Sorry, I don't know the name of such a chart, but I'm sure every baseball fan, or any sports fan, has seen one. For example, if a team's batting average is the best in a league that consists of 5 teams, its radius is length 5 (or any other length proportional to the other values), and if the same team's earned run average is fourth in the league, its length is 2, etc. Those points or "tips" are then connected to each other within the chart, and finally the area of the connected figure is filled with a color.
Sorry for the awful explanation (it's quite difficult for a non-native English speaker to explain it more clearly), but my question is: is it feasible to depict such graphs in an iOS application? If it can be done in an iOS app, how, and what library should I use to plot such graphs?
I've read the Core Graphics documentation as well as the Core Plot example page, but I wasn't able to find any such charts there. I don't like the idea of using D3 embedded in a UIWebView as suggested in this post, since it's slow due to network latency. I don't know of any other libraries as flexible and versatile as the two above.
I use iOS 7.1 and Xcode 5.1.
[Update]
It's not a bubble chart. Let me explain it a little more concretely. The chart is a hexagon if every component of a record or sample is the best among the competing records or samples and the number of components being described is six. In other words, the length of a component from the origin is whatever the longest possible value is. But if one component, say stolen bases, is NOT the best among the samples - say it's the second best - then the length of that component from the origin is not the longest; it's the second longest among the samples. Once every component (6 in this case) is plotted on the graph, the plotted points are connected to each other, and the figure is finally filled with whatever color to make it the "area" of the record. This might then be repeated for other records or samples as well. But unlike a bubble chart, one graph is made from one record and six features (or columns or variables) in this case - not from all records and one feature (actually three, but only one is used to make a bubble), as in a bubble chart. Hope you get it...
[Update 2]
I finally found such a chart on the Internet! The chart is something like this:
You're describing a bubble chart. You can make one with Core Plot using a scatter plot. Implement one of the following datasource methods to provide custom plot symbols. Use your data to determine the size of each symbol. They can be different shapes and have varying fills and border line styles, too.
-(NSArray *)symbolsForScatterPlot:(CPTScatterPlot *)plot recordIndexRange:(NSRange)indexRange;
-(CPTPlotSymbol *)symbolForScatterPlot:(CPTScatterPlot *)plot recordIndex:(NSUInteger)idx;
I'm trying to add text to an image in PSP X6, and I know how to do it, but how can I make it look clearer/thinner?
I've created this using Arial (9 pixels) and the default stroke width (1), and I'm not sure what else to do to make the text look less thick...
As a long-time fellow user of Corel's PaintShop Pro (including several versions by JASC before that), I have seen the "Text" tool fluctuate in effectiveness over the years. This can be quite frustrating, as the quality (crispness in particular) of the text seems to vary from version to version - and sadly X6 (recently superseded by X7) is no exception.
In answer to your question, I would recommend one of the following solutions:
To get the best result with small text:
1) Ensure that the "Anti-alias" option is set to either "Sharp" or "Off".
2) Double-check that the text is not set to Bold (obvious, I know - but easily missed)
3) If possible, try dark greys (rather than pure black) as this can sometimes seem "softer" around the edges.
If you have already checked and/or tried the above, then another option is to use the technique I have adopted many times over the years - as follows:
1) Create your text (e.g. "Previous") but in a separate document - and purposefully set the text size MUCH larger than you need (e.g. 100 Point - the bigger, the better).
2) "Flatten" that image (ensuring you preserve transparency if you plan to overlay your text on a background other than pure white).
3) Resize the image down to the approximate width you require (e.g. 100 Pixels wide) by using the "Image Resize" option and ensuring that "Resample Using" is ticked/checked - and that "Smart Size" is the option used (the others: Bicubic, Bilinear and Weighted Average can also sometimes deliver better results - a little trial and error might be the order of the day until you get the hang of the technique).
Provided the desired end size is not too small, you should find the results MUCH better than simply typing the text in at (say) 9 point to begin with.
It's worth noting that this technique works particularly well for "mid-size" text, but you should also see an improvement for smaller sizes. So it's something of a workaround for sure, but it definitely can and does work.
I have a device that is taking TV screenshots at precise times (it doesn't take incomplete frames).
Still, this screenshot is an interlaced image made from two different original frames.
Now, the question is whether (and how) it is possible to identify which of the lines are newer/older.
I have to mention that I can take several sequential screenshots if needed.
Take two screenshots one after another, yielding a sequence of two images (1,2). Split each screenshot into two fields (odd and even) and treat each field as a separate image. If you assume that the images are interlaced consistently (pretty safe assumption, otherwise they would look horrible), then there are two possibilities: (1e, 1o, 2e, 2o) or (1o, 1e, 2o, 2e). So at the moment it's 50-50.
What you could then do is use optical flow to improve your chances. Say you go with the first option: (1e, 1o, 2e, 2o). Calculate the optical flow f1 between (1e, 2e). Then calculate the flow f2 between (1e, 1o) and f3 between (1o, 2e). If f1 is approximately the same as f2 + f3, then things are moving in the right direction and you've picked the right arrangement. Otherwise, try the other arrangement.
Optical flow is a pretty general approach and can be difficult to compute for the entire image. If you want to do things in a hurry, replace optical flow with video tracking.
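For what it's worth, here is a rough MATLAB sketch of that consistency check using the Computer Vision Toolbox's Farneback optical flow (my choice of flow method is an assumption; any dense flow will do). E1, O1 and E2 stand for the fields 1e, 1o and 2e as equal-sized grayscale images, and simply adding the component flows is the same approximation as the "f1 is approximately f2 + f3" test above.

% Rough sketch of the f1 ~ f2 + f3 consistency check (Computer Vision Toolbox).
of = opticalFlowFarneback;
estimateFlow(of, E1); f2 = estimateFlow(of, O1);    % flow 1e -> 1o
reset(of);
estimateFlow(of, O1); f3 = estimateFlow(of, E2);    % flow 1o -> 2e
reset(of);
estimateFlow(of, E1); f1 = estimateFlow(of, E2);    % flow 1e -> 2e
err = mean(abs(f1.Vx(:) - (f2.Vx(:) + f3.Vx(:)))) + ...
      mean(abs(f1.Vy(:) - (f2.Vy(:) + f3.Vy(:))));
% A small err supports the (1e, 1o, 2e, 2o) ordering; compute the same error
% for the swapped hypothesis and keep the ordering with the lower value.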
EDIT
I've been playing around with some code that can do this cheaply. I've noticed that if 3 fields are consecutive and in the correct order, the absolute error due to smooth, constant motion will be minimized. On the contrary, if they are out of order (or not consecutive), this error will be greater. So one way to do this is to take groups of 3 fields, check the error for each of the two orderings described above, and go with the ordering that yields the lower error.
I've only got a handful of interlaced videos here to test with, but it seems to work. The only downside is that it's not very effective unless there is substantial smooth motion, or when the number of frames used is low (fewer than 20-30).
Here's an interlaced frame:
Here's some sample output from my method (same frame):
The top image is the odd-numbered rows. The bottom image is the even-numbered rows. The number in the brackets is the number of times that image was picked as the most recent. The number to the right of that is the error. The odd rows are labeled as the most recent in this case because the error is lower than for the even-numbered rows. You can see that out of 100 frames, it (correctly) judged the odd-numbered rows to be the most recent 80 times.
You have several fields: F1, F2, F3, F4, etc. Weave F1-F2 for the hypothesis that F1 is an even field. Weave F2-F3 for the hypothesis that F2 is an even field. Now measure the amount of combing in each woven frame. Assuming there is motion, there will be some combing with the correct interlacing but more combing with the wrong interlacing. You will have to do this at several points in time in order to find some fields where there is motion.
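As a concrete illustration, here is a minimal MATLAB/Octave sketch of that comparison for three consecutive fields F1, F2, F3 (assumed to be equal-sized grayscale images), where combing is measured as the sum of absolute differences between adjacent rows of the woven frame:

% Weave two fields (top field on odd rows) and score the combing.
weave = @(top, bot) reshape(permute(cat(3, top, bot), [3 1 2]), ...
                            2 * size(top, 1), size(top, 2));
comb = @(frame) sum(sum(abs(diff(double(frame), 1, 1))));
c12 = comb(weave(F1, F2));   % hypothesis: F1 is an even (top) field
c23 = comb(weave(F2, F3));   % hypothesis: F2 is an even (top) field
if c12 < c23
    disp('Less combing when pairing F1 with F2: F1 is likely the even field');
else
    disp('Less combing when pairing F2 with F3: F2 is likely the even field');
end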