Counting nuclear foci in ImageJ - getting strange results

I'm following the protocol outlined here: http://microscopy.duke.edu/HOWTO/countfoci.html
And the commands I'm using are enclosed below.
My problem is that I'm working on massive images (25k x 17k pixels), and sometimes I get accurate values but sometimes I get what you see below. Going by the results table (using RawIntDen/255 to get the actual number of nuclear foci), the highlighted cluster has around 60,000 nuclei! ...which is clearly not the case, as a quick visual examination shows.
Any idea why this is or what I can do about it?
I've tried re-binarizing the image in the step immediately before measuring; that didn't work. I get this problem whether I run the commands manually or via the macro. Any other ideas?
Thanks in advance.
Link to picture: https://www.dropbox.com/s/b7p6wkijf5smlpy/photo%20aug%2009%2C%204%2054%2021%20pm.jpg?dl=0
run("8-bit");
run("Auto Threshold", "method=Triangle white setthreshold show");
run("Convert to Mask");
run("Fill Holes");
run("Dilate");
run("Dilate");
run("Analyze Particles...", "size=50-Infinity display clear summarize add in_situ");
wait(30000);
run("Revert");
run("Find Maxima...", "noise=7 output=[Single Points] exclude");
run("ROI Manager...");
roiManager("Show None");
roiManager("Show All");
run("Set Measurements...", "area mean min integrated redirect=None decimal=3");
roiManager("Measure");
wait(30000);
roiManager("Save", path + ".zip");
saveAs("Results", path + ".xls");
close();

Your pictures seem to be quite big in pixels. Your defined particle size range is "50-Infinity". I think a minimum of 50 is too small at that resolution; try something bigger. Hope that solves the problem.
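For example, a minimal sketch of the adjusted Analyze Particles call (the 500 below is only an illustrative minimum, not a measured value; calibrate it against the typical nucleus area in your images):
// assumes the same preprocessing steps as in your macro above
run("Analyze Particles...", "size=500-Infinity display clear summarize add in_situ");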
P.S.: We've tried to cope with similar problems using this algorithm as well; perhaps it will help you: http://focinator.oeck.de/.

Related

Google sheets, summing over a formula?

Apologies if this is a little unclear or if this question has been asked already. It's a little difficult to explain, but my question is essentially about shortening formulas.
I'm running a payment plan waterfall model. My formula works, but it's, er, you know...
IF($K2=Q$1,Assumptions!$E$56*($N2+$O2),IF(AND($K2<Q$1,$M2>Q$1),Assumptions!$E$56*$P2/$L2,0))
+ IF(edate($K2,1)=Q$1,Assumptions!$F$56*($N2+$O2),IF(AND(edate($K2,1)<Q$1,edate($M2,1)>Q$1),Assumptions!$F$56*$P2/$L2,0))
+ IF(edate($K2,2)=Q$1,Assumptions!$F$56*($N2+$O2),IF(AND(edate($K2,2)<Q$1,edate($M2,2)>Q$1),Assumptions!$F$56*$P2/$L2,0))
+ IF(edate($K2,3)=Q$1,Assumptions!$F$56*($N2+$O2),IF(AND(edate($K2,3)<Q$1,edate($M2,3)>Q$1),Assumptions!$F$56*$P2/$L2,0))
+ IF(edate($K2,4)=Q$1,Assumptions!$F$56*($N2+$O2),IF(AND(edate($K2,4)<Q$1,edate($M2,4)>Q$1),Assumptions!$F$56*$P2/$L2,0))
+ IF(edate($K2,5)=Q$1,Assumptions!$F$56*($N2+$O2),IF(AND(edate($K2,5)<Q$1,edate($M2,5)>Q$1),Assumptions!$F$56*$P2/$L2,0))
+ IF(edate($K2,6)=Q$1,Assumptions!$F$56*($N2+$O2),IF(AND(edate($K2,6)<Q$1,edate($M2,6)>Q$1),Assumptions!$F$56*$P2/$L2,0))
...pretty long.
Essentially what's going on is this: the assumption is that when we launch a product, we sell say 80% in the first month and 2.5% every subsequent month until we reach 100%.
I'd like the 80% and the 2.5% to be variables (listed as Assumptions!$E$56 and Assumptions!$F$56 here).
Obviously a little long. But notice that after the first IF clause, the subsequent ones are actually identical; the only difference is the number inside edate(__,2), edate(__,3), ...
So my question is: can this formula be tidied up into some sort of for loop? Python would make it pretty simple to increment i in edate(__,i) and sum over i = 1 to 6.
Sure, there is. Usually looping is emulated with SEQUENCE(N), which builds a vertical array of the numbers 1 to N, somewhat like Python's range. You can then operate on that array inside an ArrayFormula.
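As a tiny standalone illustration of the idea (a hypothetical throwaway formula, not part of your model), this counts how many of the numbers 1 through 6 exceed 3:
=ArrayFormula(SUM(IF(SEQUENCE(6)>3, 1, 0)))
It returns 3, because the IF is evaluated once for every element produced by SEQUENCE(6).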
In your case, you end up with two pieces: your initial term using $E$56, and the looped terms using $F$56. I see six looped terms, so I will use SEQUENCE(6):
=IF(
$K2=Q$1,
Assumptions!$E$56*($N2+$O2),
IF(
AND(
$K2<Q$1,
$M2>Q$1
),
Assumptions!$E$56*$P2/$L2,
0
)
) + ArrayFormula(SUM(
IF(
edate($K2,SEQUENCE(6))=Q$1,
Assumptions!$F$56*($N2+$O2),
IF(
(edate($K2,SEQUENCE(6))<Q$1)*
(edate($M2,SEQUENCE(6))>Q$1),
Assumptions!$F$56*$P2/$L2,
0
)
)
))
And if you want, you can give your Assumption values names using named ranges.
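For instance (hypothetical names: suppose Data > Named ranges maps FirstMonthPct to Assumptions!$E$56 and LaterMonthPct to Assumptions!$F$56), the formula reads a little more clearly as:
=IF($K2=Q$1, FirstMonthPct*($N2+$O2), IF(AND($K2<Q$1, $M2>Q$1), FirstMonthPct*$P2/$L2, 0))
 + ArrayFormula(SUM(IF(edate($K2,SEQUENCE(6))=Q$1, LaterMonthPct*($N2+$O2),
     IF((edate($K2,SEQUENCE(6))<Q$1)*(edate($M2,SEQUENCE(6))>Q$1), LaterMonthPct*$P2/$L2, 0))))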

"For all" using Apache Jenas rule engine

I'm currently working on some small examples about Apache Jena. What I want to show is universal quantification.
Let's say I have balls that each have a different color. These balls are stored within boxes. I now want to determine whether these boxes only contain balls of the same color or if they are mixed.
So basically something along these lines:
SAME_COLOR = ∃x∀y:{y in Box a → color of y = x}
I know that this is probably not directly possible with Jena, but it can be converted to the following:
SAME_COLOR = ∃x¬∃y:{y in Box a ∧ color of y != x}
With "not exists" Jena's "NoValue" can be used, however, this does (at least for me) not work and I don't know how to translate above logical representations in Jena. Any thoughts on this?
See the code below, which is the only way I could think of:
(?box, ex:isA, ex:Box)
(?ball, ex:isIn, ?box)
(?ball, ex:hasColor, ?color)
(?ball2, ex:isIn, ?box)
(?ball2, ex:hasColor, ?color2)
NotEqual(?color, ?color2)
->
(?box, ex:hasSomeColors, "No").
(?box, ex:isA, ex:Box)
NoValue(?box, ex:hasSomeColors)
->
(?box, ex:hasSomeColors, "Yes").
A box with mixed content now has both values "Yes" and "No".
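For reference, a minimal sketch of how rules like the two above could be loaded and applied with Jena's GenericRuleReasoner. The file names ("balls.ttl", "colors.rules") and the ex: namespace URI are assumptions for illustration only:
// Minimal sketch: load the rules above from a file and apply them to a data model.
import java.util.List;
import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.RDFNode;
import org.apache.jena.reasoner.rulesys.GenericRuleReasoner;
import org.apache.jena.reasoner.rulesys.Rule;

public class SameColorCheck {
    public static void main(String[] args) {
        Model data = ModelFactory.createDefaultModel();
        data.read("balls.ttl");                                    // boxes, balls, colors
        List<Rule> rules = Rule.rulesFromURL("file:colors.rules"); // the two rules above
        GenericRuleReasoner reasoner = new GenericRuleReasoner(rules);
        InfModel inf = ModelFactory.createInfModel(reasoner, data);
        // Print every ex:hasSomeColors statement the rules derived.
        inf.listStatements(null, inf.getProperty("http://example.org/hasSomeColors"), (RDFNode) null)
           .forEachRemaining(System.out::println);
    }
}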
I've run into the same sort of problem, though in a more simplified form.
The question is how to get a collection of objects, or count the number of objects, in the rule engine.
Given that res:subj ont:has res:obj_xxx (several objects), how do you get at these values in the rule engine?
But I just found a primitive called Remove(), which may inspire me a bit.

Detecting color in video feed

I know this seems like a question that has been asked a million times, and the existing answers definitely helped, but I haven't been able to find anything that addresses my specific problem. Overall, my project is centered on detecting an LED blinking Morse code and then translating that Morse code. So far I've thresholded an image so that only the LED shows up and everything else is black. The light from the LED is red. To start off, I want to print either a "0" or a "1" depending on whether the LED is on or off. However, I'm not sure how to detect a given color in an image. Here is the part of the code I'm currently working on:
if (frameFromCamera->InRange(new Bgr(0, 0, 200), new Bgr(0, 0, 255)) == 255) {
    tbMorse->Text = "1";
}
else {
    tbMorse->Text = "0";
}
But I am getting the following error.
BAOTFISInterface.cpp(1010): error C2664: 'Emgu::CV::Image<TColor,TDepth> ^Emgu::CV::Image<TColor,unsigned short>::InRange(Emgu::CV::Image<TColor,unsigned short> ^,Emgu::CV::Image<TColor,unsigned short> ^)' : cannot convert parameter 1 from 'Emgu::CV::Structure::Bgr *' to 'Emgu::CV::Image<TColor,TDepth> ^'
with
[
TColor=Emgu::CV::Structure::Gray,
TDepth=unsigned char
]
and
[
TColor=Emgu::CV::Structure::Gray,
TDepth=unsigned short
]
No user-defined-conversion operator available, or
Cannot convert an unmanaged type to a managed type
Does anyone know how to fix this? I'm using VS2010, so I have to use the Emgu CV wrapper to access the OpenCV library, and this is all in managed C++. I'll take any pointers or suggestions I can get.

IllegalArgumentException when using weka.clusterers.HierarchicalClusterer

I searched a lot, but I was not able to find any example code describing how to use the WEKA HierarchicalClusterer. Using the following C# code gives me an IllegalArgumentException at "agg.buildClusterer(insts);".
weka.clusterers.HierarchicalClusterer agg = new weka.clusterers.HierarchicalClusterer();
agg.setNumClusters(NumCluster);
/*
Tag[] TAGS_LINK_TYPE = agg.getLinkType().getTags();
agg.setLinkType(new SelectedTag(1, TAGS_LINK_TYPE));
*/
agg.buildClusterer(insts);
for (int i = 0; i < insts.numInstances(); i++)
{
    int clusterNumber = agg.clusterInstance(insts.instance(i));
}
The StackTrace says:
at java.util.PriorityQueue..ctor(Int32 initialCapacity, Comparator comparator)
at weka.clusterers.HierarchicalClusterer.doLinkClustering(Int32 , Vector[] , Node[] )
at weka.clusterers.HierarchicalClusterer.buildClusterer(Instances data)
but no Message or InnerException is specified.
The varaible "insts" is an Instances-object, which only holds instances with an equal amount of numerical attributes.
Is anyone able to quickly spot my error, or could you post/link some example code?
Also, is my setting of the LinkType (the commented-out code) correct?
Thanks,
Björn
The HierarchicalClusterer class has a TAGS_LINK_TYPE attribute, so something like
agg.setLinkType(new SelectedTag(1, HierarchicalClusterer.TAGS_LINK_TYPE));
will achieve what you are after for setting the linking. Now what on earth does that 1 mean? From the javadocs we see what TAGS_LINK_TYPE contains:
-L Link type (Single, Complete, Average, Mean, Centroid, Ward, Adjusted complete, Neighbor Joining)
[SINGLE|COMPLETE|AVERAGE|MEAN|CENTROID|WARD|ADJCOMLPETE|NEIGHBOR_JOINING]
In general, your code looks OK for the C# case. I see you don't set the distance metric in your example above; maybe you would want to do that? I too use Weka as best I can from C# via IKVM. I have found that the dataset size allowed for hierarchical clustering is not very large, so maybe your dataset exceeds what WEKA can handle, and you would avoid the error if you reduced its size.
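A minimal sketch of what that could look like (a hedged example: it assumes the IKVM-built weka assembly and reuses insts and NumCluster from your question; I believe Euclidean distance is the default anyway, but setting it explicitly does no harm):
// Hypothetical sketch: set clusters, linkage and an explicit distance metric before building.
weka.clusterers.HierarchicalClusterer agg = new weka.clusterers.HierarchicalClusterer();
agg.setNumClusters(NumCluster);
// the 1 is the tag ID picked from TAGS_LINK_TYPE, as in the line quoted above
agg.setLinkType(new weka.core.SelectedTag(1, weka.clusterers.HierarchicalClusterer.TAGS_LINK_TYPE));
agg.setDistanceFunction(new weka.core.EuclideanDistance());
agg.buildClusterer(insts);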

Mathematica's TextRecognize not up to par

Please take a look at the screenshot below and see if you can tell me why this won't work. The examples on the reference page for TextRecognize look pretty impressive; I don't think recognizing single letters like this should be a problem. I've tried resizing the letters as well as sharpening the image.
For convenience in case you want to try this yourself I have included the image that I use at the bottom of this post. You can also find plenty more like this by searching for "Wordfeud" in Google Image Search.
Very cool question!
TextRecognize uses heuristics to recognize whole words from the English language. This is the gotcha that makes recognizing single letters very hard.
Consider the following line of thought:
s = Import["http://i.stack.imgur.com/JHYuh.png"];
p = ImagePartition[s, 32]
Now pick letters to form the English word 'EXIT':
x = {p[[1, 13]], p[[6, 6]], p[[3, 13]], p[[1, 12]]}
Now clean up these images a bit, like so:
d = ImageAssemble[ Map[ImageTake[#, {3, 27}, {2, 20}] &, x ]];
Then this returns the string "EXIT":
TextRecognize[d]
This is an approach completely different from using TextRecognize, so I am posting it as a separate answer. It uses the same image recognition technique as in the "How do I find Waldo with Mathematica?" question.
First get the puzzle:
wordfeud = Import["http://i.stack.imgur.com/JHYuh.png"]
And then get the pieces of the puzzle:
Grid[pieces = ImagePartition[wordfeud, 32]]
Let's be interested in the letter E:
LetterE = pieces[[4, 3]]
Get the correlation image:
correlation =
ImageCorrelate[wordfeud, Binarize[LetterE],
NormalizedSquaredEuclideanDistance]
And highlight the matches:
positions = Dilation[ColorNegate[Binarize[correlation, .1]], DiskMatrix[20]];
found = ImageMultiply[wordfeud, ImageAdd[ColorConvert[positions, "GrayLevel"], .5]]
As before, this requires a bit of tuning on binarizing the correlation image, but other than
that this should help to identify bits and pieces of this puzzle.
I thought the quality of your image might be interfering. Binarizing your image did not help: recognition was zilch. I also tried a very sharp black-and-white image of a crossword puzzle solution (see below). Again, nothing was recognized, whether in regular or binarized format.
So I removed the black background, leaving only the letters and their thin black frames. Again, recognition was about 0%.
When I removed the frames from around some of the letters AND binarized the image, the only parts that were recognizable were those regions in which there was nothing but letters (see below).
Notice in the output below that ANTS, TIRES, and TEXAS are correctly identified (as well as VECTORS), but just about nothing else.
Notice also that, even though the strings were widely spaced, mma interpreted them as words rather than separate letters. Note "TEXAS" instead of "T E X A S".
TextRecognize[Binarize@img]
(* output *)
ANTS FFWWW FEEWF
E R o If IU I?
E A FI5F WWWFF 5
5552? L E F F
T s E NTT BT|
H0RWW#0WVlWF;EE F
5 W E ; OCS
FOFT W W R AL%AE
A TT I T ? _
i iE#W'NF WG%S W
A A EW F I i
SWWTW W ALTFCWD N
H A V 5 A F F
PLATT EWWLIGHT
W N E T
HE TIRES C
TEXAS VECTORS
I didn't have the patience to completely clean up the image. It would have been much faster to retype the text by hand.
Conclusion: Don't use text recognition in mma unless you have absolutely clear text against an even-colored, bright, preferably white, background.
The results also varied depending on the file format used. Avoid .pdf altogether.
Edit
acl captured and tried to recognize the last 5 lines (above Edit). His results (in a comment below): mostly gibberish.
I decided to do the same. But since Prashant warned that text size makes a difference, I zoomed in first so that the text appeared (to my eyes) to be about 20 pica. Below is the picture of the text I scanned and ran TextRecognize on.
Here's the result of an unbinarized TextRecognize (at that large size):
Gliii. Q lk-ii`t`*¥ if EY £\[CloseCurlyDoubleQuote]1\[Euro]'EE \
Di'¥C~E\"P ITF SKI' T»f}!E'!',IL:?E\[CloseCurlyDoubleQuote] I 2 VEEE5\
\[CloseCurlyQuote] LEP \"- \"VE
1. ur e=\\..r.1.»».»\\\\ rw r 1»»\\|a'*r | r .fm -»'-an \
\[OpenCurlyQuote] -.-rr -_.»~|-.'i~-.w~,.-- nv n.w~»-\
\[OpenCurlyDoubleQuote]~"
Now, here's the result for the TextRecognize of the binarized image. The original image was a .png from Jing.
I didn't have the patience to completely clean up the image. It would \
have been much faster to retype the
text by hand.
Conclusion: Don't use text recognition in mma unless you have \
absolutely clear text against an even-
colored, bright, preferrably white, background.
The results also varied depending on the file format used. Avoid .pdf \
altogether.
