Preserve User ROIs after Analyze Particles Add to Manager - imagej

I'm working primarily in the macro language.
For my macro, I ask the user to select a few elements from an image: one is an ellipse selection and the other a polygon. I have these two ROIs in the ROI Manager and I would like to keep them.
When I do further analysis on the image to isolate different tissues, I need to use Analyze Particles to add more ROIs to the ROI Manager, but adding them erases the user-selected ROIs.
Is there a way to append the ROIs to the list or do I need to reorder my macro?
run("Select None"); //Selecting nothing just in case
setTool(1); //Elliptical tool
while (selectionType() != 1) {
message= "Elliptical Selection Required\n Please select the Peduncle Attachment Zone";
waitForUser(message);
}
roiManager("Add");
// User selection of Seed Cavity
// hexagonal (polygon) selection
run("Select None"); //Selecting nothing just in case
setTool(2); //Polygon tool
while (selectionType() != 2) {
message= "Polygonal Selection Required\n Please select the Seed Cavity";
waitForUser(message);
}
roiManager("Add");
run("Select None"); //clear current selection
//threshold here
//analyze particles
run("8-bit");
setThreshold(0, 128);
setOption("BlackBackground", false);
run("Convert to Mask");
correctLUT(); //checks if inverted and corrects
run("Analyze Particles...", "size=55000-Infinity circularity=0.65-1.00 show=Masks display exclude add");
function correctLUT() {
invertedLUT = is("Inverting LUT");
if (invertedLUT == 1) {
run("Invert LUT");
run("Invert");
}
}

Related

How to check if multiple GeoJSON objects form one complete area?

I have a JavaScript array of GeoJSON objects, which are all Polygons by default. I am currently using Turf.js with intersect to check whether two features overlap, but how do I check whether two or more together form a single cohesive area?
There is no upper limit to the number of mini-areas.
If there are 3 locations, for example, locations 1 and 3 don't have to intersect each other as long as they both intersect location 2 (i.e. the idea is to be able to form one new polygon at the end).
I have tried something along the lines of the following, but it doesn't seem very efficient:
var locationsToCheck = [...]; // Populated with GeoJSON objects
var currLocation = locationsToCheck.pop();
while (locationsToCheck.length > 0) {
var currLocationTwo = locationsToCheck.pop();
var intersection = turf.intersect(currLocation, currLocationTwo);
if (intersection != null) {
currLocation = intersection;
} else {
locationsToCheck.unshift(currLocationTwo);
}
}
It ends in an infinite loop if there are areas that never intersect at all. I thought of creating a second array to store the ones that have been checked once already, but that produces the same infinite loop again. What would be the optimal way to achieve this?
Note: The reason I use unshift to insert the area again is that it might intersect with another mini-area later in the list.
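One way to make this terminate is to merge in only the polygons that intersect the accumulated area, and stop once a full pass over the leftovers makes no progress. A rough sketch of that idea, assuming Turf.js's turf.union (which merges two overlapping features) alongside the turf.intersect call used above:

function formsSingleArea(locationsToCheck) {
    var remaining = locationsToCheck.slice(); // don't mutate the input
    var merged = remaining.pop();
    var progress = true;
    // Each pass either merges at least one polygon into the growing area
    // or leaves progress false, so the loop always terminates.
    while (remaining.length > 0 && progress) {
        progress = false;
        for (var i = remaining.length - 1; i >= 0; i--) {
            if (turf.intersect(merged, remaining[i]) != null) {
                merged = turf.union(merged, remaining[i]);
                remaining.splice(i, 1);
                progress = true;
            }
        }
    }
    // Every polygon was absorbed: the inputs form one cohesive area.
    return remaining.length === 0;
}

Worst case this is O(n^2) intersection tests, but unlike the unshift approach it cannot loop forever, because a pass that merges nothing ends the loop.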

OpenLayers 3 selecting multiple features from same layer

I have a layer with overlapping features (e.g. bounding boxes). In OL2 the select control seemed to select the expected feature (e.g. the feature with the smallest surface area). In OL3 this doesn't seem to be the case. While I could get all features at a specific pixel, I would prefer the select control to return all features that intersect the click. Any way to do this?
You can set the multi option of ol.interaction.Select to true (it makes the interaction select all features at the coordinate you clicked) and add an event listener to choose which feature you want to keep among all the overlapping features:
var select = new ol.interaction.Select({
multi: true
});
var fnHandler = function (e) {
e.selected; // array of selected features
e.target; // select interaction
var feature = e.selected.filter(function (feature) {
// do some filtering to choose which feature you want;
// return true for the feature(s) to keep
})[0];
e.target.getFeatures().clear(); // unselect all features
e.target.getFeatures().push(feature); // select the feature you filtered
};
select.on('select', fnHandler);
To get all of the features in a layer that intersect with the mouse click I do something like this:
map.on("click", function(event) {
var coordinate = event.coordinate;
var features = myVectorLayer.getSource().getFeaturesAtCoordinate(coordinate);
// Do something with the features that were clicked here...
});

Strange File Save Behavior with ImageJ

I wrote an ImageJ script to color and merge a series of black-and-white images. The script saves both the unmerged colored images and the merged colored images. Everything works beautifully when I'm running in debug mode and stepping through the script. When I run it for real, however, it occasionally saves a couple of the original black-and-white images instead of the resulting colored ones. All the merged images appear to be fine.
Why would everything work fine in debug mode but fail during regular usage?
Below is my code:
// Choose the directory with the images
dir = getDirectory("Choose a Directory ");
// Get a list of everything in the directory
list = getFileList(dir);
// Determine if a composite directory exists. If not create one.
if (File.exists(dir+"/composite") == 0) {
File.makeDirectory(dir+"/composite")
}
// Determine if a colored directory exists. If not create one.
if (File.exists(dir+"/colored") == 0) {
File.makeDirectory(dir+"/colored")
}
// Close all files currently open to be safe
run("Close All");
// Setup options
setOption("display labels", true);
setBatchMode(false);
// Counter 1 keeps track of if you're on the first or second image of the tumor/vessel pair
count = 1;
// Counter 2 keeps track of the number of pairs in the folder
count2 = 1;
// Default Radio Button State
RadioButtonDefault = "Vessel";
// Set Default SatLevel for Contrast Adjustment
// The contrast adjustment does a histogram equalization. The Sat Level is a percentage of pixels that are allowed to saturate. A larger number means more pixels can saturate making the image appear brighter.
satLevelDefault = 2.0;
// For each image in the list
for (i=0; i<list.length; i++) {
// Only process files ending in .tif
if (endsWith(list[i], ".tif")) {
// Define the full path to the filename
fn = list[i];
path = dir+list[i];
// Open the file
open(path);
// Create a dialog box but don't show it yet
Dialog.create("Image Type");
Dialog.addRadioButtonGroup("Type:", newArray("Vessel", "Tumor"), 1, 2, RadioButtonDefault);
Dialog.addNumber("Image Brightness Adjustment", satLevelDefault, 2, 4, "(applied only to vessel images)");
// If it's the first image of the pair ...
if (count == 1) {
// Show the dialog box
Dialog.show();
// Get the result and put it into a new variable and change the Default Radio Button State for the next time through
if (Dialog.getRadioButton=="Vessel") {
imgType = "Vessel";
RadioButtonDefault = "Tumor";
} else {
imgType = "Tumor";
RadioButtonDefault = "Vessel";
}
// If it's the second image of the pair
} else {
// And the first image was a vessel assume the next image is a tumor
if (imgType=="Vessel") {
imgType="Tumor";
// otherwise assume the next image is a vessel
} else {
imgType="Vessel";
}
}
// Check to see the result of the dialog box input
// If vessel do this
if (imgType=="Vessel") {
// Make image Red
run("Red");
// Adjust Brightness
run("Enhance Contrast...", "saturated="+Dialog.getNumber+" normalize");
// Strip the .tif off the existing filename to use for the new filename
fnNewVessel = replace(fn,"\\.tif","");
// Save as jpg
saveAs("Jpeg", dir+"/colored/"+ fnNewVessel+"_colored");
// Get the title of the image for the merge
vesselTitle = getTitle();
// Otherwise do this ...
} else {
// Make green
run("Green");
// Strip the .tif off the existing filename to use for the new filename
fnNewTumor = replace(fn,"\\.tif","");
// Save as jpg
saveAs("Jpeg", dir+"/colored/"+ fnNewTumor+"_colored");
// Get the title of the image for the merge
tumorTitle = getTitle();
}
// If it's the second in the pair ...
if (count == 2) {
// Merge the two images
run("Merge Channels...", "c1="+vesselTitle+" c2="+tumorTitle+" create");
// Save as Jpg
saveAs("Jpeg", dir+"/composite/composite_"+count2);
// Reset the number within the pair counter
count = count-1;
// Increment the number of pairs counter
count2 = count2+1;
// Otherwise
} else {
// Increment the number within the pair counter
count += 1;
}
}
}
Not sure why I'd need to do this, but adding wait(100) immediately before saveAs() seems to do the trick.
The best practice in this scenario would be to poll IJ.macroRunning(). This method will return true if a macro is running. I would suggest using helper methods that can eventually time out, like:
/** Run with default timeout of 30 seconds */
public boolean waitForMacro() {
return waitForMacro(30000);
}
/**
* @return True if no macro was running. False if a macro runs for longer than
* the specified timeOut value.
*/
public boolean waitForMacro(final long timeOut) {
final long time = System.currentTimeMillis();
while (IJ.macroRunning()) {
// Time out after the specified number of milliseconds.
if (System.currentTimeMillis() - time > timeOut) return false;
}
return true;
}
Then call one of these helper methods whenever you use run(), open(), or newImage().
Another direction that may require more work, but provides a more robust solution, is using ImageJ2. Then you can run things with a ThreadService, which gives you back a Java Future that lets you guarantee execution completion.

2D Shape recognition algorithm - looking for guidance [closed]

I need the ability to verify that a user has drawn a shape correctly, starting with simple shapes like circle, triangle and more advanced shapes like the letter A.
I need to be able to calculate correctness in real time, for example if the user is supposed to draw a circle but is drawing a rectangle, my hope is to be able to detect that while the drawing takes place.
There are a few different approaches to shape recognition, unfortunately I don't have the experience or time to try them all and see what works.
Which approach would you recommend for this specific task?
Your help is appreciated.
We may define "recognition" as the ability to detect features/characteristics in elements and compare them with the features of known elements seen in our experience. Objects with similar features are probably similar objects. The higher the number and complexity of the features, the greater our power to discriminate between similar objects.
In the case of shapes, we can use their geometric properties, such as the number of angles, the angle values, the number of sides, the side lengths, and so forth. Therefore, in order to accomplish your task you should employ image processing algorithms to extract such features from the drawings.
Below I present a very simple approach that shows this concept in practice. We are going to recognize different shapes using the number of corners. As I said: "The higher the number and complexity of the features, the greater our power to discriminate between similar objects." Since we are using just one feature, the number of corners, we can differentiate only a few kinds of shapes; shapes with the same number of corners will not be discriminated. Therefore, in order to improve the approach you might add new features.
UPDATE:
In order to accomplish this task in real time, you might extract the features as the user draws. If the object to be drawn is a triangle and the user is drawing the fourth side of some other figure, you know that he or she is not drawing a triangle. As for the level of correctness, you might calculate the distance between the feature vector of the desired object and that of the drawn one.
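For illustration, a tiny sketch of both ideas in JavaScript (the feature objects here are hypothetical, and assume the corner count has already been extracted from the partial drawing):

// Early rejection: a partial drawing can only gain corners over time,
// so having more corners than the target already rules it out.
function stillPossible(target, drawn) {
    return drawn.corners <= target.corners;
}

// Level of correctness: distance between the feature vectors
// (here a single feature, the corner count).
function correctness(target, drawn) {
    return Math.abs(target.corners - drawn.corners); // 0 means a perfect match
}

// e.g. stillPossible({corners: 3}, {corners: 4}) -> false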
Input: (image of the sample shapes omitted)
The Algorithm
Scale down the input image, since the desired features can be detected at a lower resolution.
Segment each object to be processed independently.
For each object, extract its features, in this case, just the number of corners.
Using the features, classify the object shape.
The Software:
The software presented below was developed in Java using the Marvin Image Processing Framework. However, you might use any programming language and tools.
import static marvin.MarvinPluginCollection.floodfillSegmentation;
import static marvin.MarvinPluginCollection.moravec;
import static marvin.MarvinPluginCollection.scale;
import java.awt.Point;
import java.util.ArrayList;
import java.util.List;
import marvin.image.MarvinImage;
import marvin.image.MarvinSegment;
import marvin.io.MarvinImageIO;
import marvin.util.MarvinAttributes;
public class ShapesExample {
public ShapesExample(){
// Scale down the image since the desired features can be extracted
// in a lower resolution.
MarvinImage image = MarvinImageIO.loadImage("./res/shapes.png");
scale(image.clone(), image, 269);
// segment each object
MarvinSegment[] objs = floodfillSegmentation(image);
MarvinSegment seg;
// For each object...
// Skip position 0 which is just the background
for(int i=1; i<objs.length; i++){
seg = objs[i];
MarvinImage imgSeg = image.subimage(seg.x1-5, seg.y1-5, seg.width+10, seg.height+10);
MarvinAttributes output = new MarvinAttributes();
output = moravec(imgSeg, null, 18, 1000000);
System.out.println("figure "+(i-1)+":" + getShapeName(getNumberOfCorners(output)));
}
}
public String getShapeName(int corners){
switch(corners){
case 3: return "Triangle";
case 4: return "Rectangle";
case 5: return "Pentagon";
}
return null;
}
private static int getNumberOfCorners(MarvinAttributes attr){
int[][] cornernessMap = (int[][]) attr.get("cornernessMap");
int corners=0;
List<Point> points = new ArrayList<Point>();
for(int x=0; x<cornernessMap.length; x++){
for(int y=0; y<cornernessMap[0].length; y++){
// Is it a corner?
if(cornernessMap[x][y] > 0){
// This part of the algorithm avoids spurious corners being
// detected almost in the same position due to noise.
Point newPoint = new Point(x,y);
if(points.size() == 0){
points.add(newPoint); corners++;
}else {
boolean valid=true;
for(Point p:points){
if(newPoint.distance(p) < 10){
valid=false;
}
}
if(valid){
points.add(newPoint); corners++;
}
}
}
}
}
return corners;
}
public static void main(String[] args) {
new ShapesExample();
}
}
The software output:
figure 0:Rectangle
figure 1:Triangle
figure 2:Pentagon
Another way is to use math for this problem: for each vertex of the shape you're testing, take the distance to the nearest vertex of a known shape, and average those distances.
First you must scale the shape to match the ones in your library of shapes, and then:
function shortestDistanceSum( subject, test_subject ) {
var sum = 0;
operate( subject, function( shape ){
var smallest_distance = 9999;
operate( test_subject, function( test_shape ){
var distance = dist( shape.x, shape.y, test_shape.x, test_shape.y );
smallest_distance = Math.min( smallest_distance, distance );
});
sum += smallest_distance;
});
var average = sum/subject.length;
return average;
}
function operate( array, callback ) {
$.each(array, function(){
callback( this );
});
}
function dist( x, y, x1, y1 ) {
return Math.sqrt( Math.pow( x1 - x, 2) + Math.pow( y1 - y, 2) );
}
var square_shape = []; // populate with the vertices of a square shape
var triangle_shape = []; // populate with the vertices of a triangle
var unknown_shape = []; // populate with the vertices of the shape you're comparing
square_sum = shortestDistanceSum( square_shape, unknown_shape );
triangle_sum = shortestDistanceSum( triangle_shape, unknown_shape );
The shape with the lowest average distance is the closest match.
You have two inputs - the initial image and the user input - and you are looking for a boolean outcome.
Ideally you would convert all your input data to a comparable format. Alternatively, you could parameterize both types of input and use a supervised machine-learning algorithm (Nearest Neighbor comes to mind for closed shapes).
The trick is in finding the right parameters. If your input is a flat image file, this could be a binary conversion. If the user input is a swiping motion or pen stroke, I'm sure there are ways to capture and map it as binary, but the algorithm would probably be more robust if it used data closest to the original input.
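As a minimal, hypothetical sketch of that Nearest Neighbor idea, assuming each input has already been parameterized into a fixed-length numeric feature vector:

// Classify a drawn shape by the label of the closest known feature vector.
function nearestNeighbor(knownShapes, drawnFeatures) {
    var bestLabel = null;
    var bestDistance = Infinity;
    knownShapes.forEach(function (shape) {
        // Squared Euclidean distance between the two feature vectors.
        var sum = 0;
        for (var i = 0; i < drawnFeatures.length; i++) {
            var d = shape.features[i] - drawnFeatures[i];
            sum += d * d;
        }
        if (sum < bestDistance) {
            bestDistance = sum;
            bestLabel = shape.label;
        }
    });
    return bestLabel;
}

// e.g. nearestNeighbor([{label: "circle", features: [0, 8]},
//                       {label: "triangle", features: [3, 6]}], [3, 5.5]);
// -> "triangle"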

add mouseevents to webgl objects

I'm using xtk to visualize medical data in a WebGL canvas. Currently I'm playing around with this lesson:
lesson 10
This library is pretty good but not very well documented. I want to get rid of that GUI and add some mouse events. If I load the mesh from the GUI, how can I add a mouse event to the mesh? I actually don't know where to start; it's a little bit confusing to get started with this library.
I tried
mesh.click(function(){
alert("yes");
});
or
mesh.mousedown(function(){
alert("yes");
});
Objects rendered in WebGL are not part of the DOM, and as such don't generate events like DOM elements do. This means that for events like these you have to implement the mouse interaction code yourself.
Traditionally in WebGL/OpenGL this process is known as "picking", and there are several decent resources for it online. (For example: http://webgldemos.thoughtsincomputation.com/engine_tests/picking) The core process is something like this, though:
For each pickable object in your scene, assign it a color. Put this in a lookup table somewhere.
Re-render the entire scene to a texture, rendering each pickable object with its assigned color.
Once the scene is rendered, determine your mouse coordinates and read back the color of the texture at that X/Y.
Fetch the object associated with that color from your lookup table. This is the object your mouse cursor is pointing at!
As you can see, while not conceptually difficult, this method involves several mid-level WebGL topics, such as rendering to a texture, and as such is not usually recommended for beginners. I'm not sure if there are any features in xtk to assist with this (honestly, I had never heard of the library before your post), but I would guess this is something you'll have to implement on your own.
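For reference, the read-back part (steps 3 and 4 above) might look something like this in raw WebGL; pickingFramebuffer and colorToObject are hypothetical names for the render target and lookup table you would have built in steps 1 and 2:

function pickObjectAt(gl, pickingFramebuffer, colorToObject, mouseX, mouseY) {
    // Read the single pixel under the cursor from the picking render target.
    gl.bindFramebuffer(gl.FRAMEBUFFER, pickingFramebuffer);
    var pixel = new Uint8Array(4);
    // WebGL's origin is the bottom-left corner, so flip the mouse Y.
    gl.readPixels(mouseX, gl.drawingBufferHeight - mouseY, 1, 1,
                  gl.RGBA, gl.UNSIGNED_BYTE, pixel);
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    // Map the color back to the object that was rendered with it.
    var key = pixel[0] + ',' + pixel[1] + ',' + pixel[2];
    return colorToObject[key]; // undefined means the background was hit
}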
DOM events are not supported, but you can do it with xtk. Check out this JSFiddle:
http://jsfiddle.net/haehn/r7Ugf/
// create and initialize a 3D renderer
var r = new X.renderer3D();
r.init();
// create a cube and a sphere
cube = new X.cube();
sphere = new X.sphere();
sphere.center = [-20, 0, 0];
r.interactor.onMouseMove = function() {
// grab the current mouse position
var _pos = r.interactor.mousePosition;
// pick the current object
var _id = r.pick(_pos[0], _pos[1]);
if (_id != 0) {
// grab the object and turn it red
r.get(_id).color = [1, 0, 0];
} else {
// no object under the mouse
cube.color = [1, 1, 1];
sphere.color = [1, 1, 1];
}
r.render();
}
r.interactor.onMouseDown = function(left, middle, right) {
// only observe right mouse clicks
if (!right) return;
// grab the current mouse position
var _pos = r.interactor.mousePosition;
// pick the current object
var _id = r.pick(_pos[0], _pos[1]);
if (_id == sphere.id) {
// turn the sphere green
sphere.color = [0, 1, 0];
r.render();
}
}
r.add(cube); // add the cube to the renderer
r.add(sphere); // and the sphere as well
r.render(); // ..and render it
Easy, no?
XTK implements picking the way Toji explained (i.e. with a framebuffer where every object is rendered in a different RGBA "color"). It will work as long as you have fewer than 255^4 objects, so almost always. There are other methods, like unprojecting, but I think they would take longer.
So with X.renderer.pick and X.renderer.get you can find the object under the mouse and change its properties. For the moment, however, you can only change visualization properties (see the setGetter and setSetter in every class); you cannot move an X.object (since the X.object._transform attribute is private and there is no getter/setter for it yet).
That would be an interesting thing to work on: adding a getter/setter pair for X.object's transform would allow, for example, a user to put medical objects (modeled by a mesh or something else) in the scene and move them to measure distances or to see whether something will fit during an operation. Wouldn't that be a good idea, Haehn? And it's a minor change to the framework.
