Strange File Save Behavior with ImageJ

I wrote an ImageJ script to color and merge a series of black and white images. The script saves both the unmerged colored images and the merged colored images. Everything works beautifully when I'm running in debug mode and step through the script. When I run it for real, however, it occasionally saves a couple of the original black-and-white images instead of the resulting colored images. All the merged images appear to be fine.
Why would everything work fine in debug mode but fail during regular usage?
Below is my code:
// Choose the directory with the images
dir = getDirectory("Choose a Directory ");
// Get a list of everything in the directory
list = getFileList(dir);
// Determine if a composite directory exists. If not, create one.
if (File.exists(dir+"/composite") == 0) {
    File.makeDirectory(dir+"/composite");
}
// Determine if a colored directory exists. If not, create one.
if (File.exists(dir+"/colored") == 0) {
    File.makeDirectory(dir+"/colored");
}
// Close all open images to be safe
run("Close All");
// Set up options
setOption("display labels", true);
setBatchMode(false);
// count keeps track of whether you're on the first or second image of the tumor/vessel pair
count = 1;
// count2 keeps track of the number of pairs in the folder
count2 = 1;
// Default radio button state
RadioButtonDefault = "Vessel";
// Default sat level for the contrast adjustment.
// The contrast adjustment does a histogram equalization. The sat level is the percentage of
// pixels allowed to saturate; a larger number lets more pixels saturate, making the image brighter.
satLevelDefault = 2.0;
// For each image in the list
for (i=0; i<list.length; i++) {
    // Only process .tif files
    if (endsWith(list[i], ".tif")) {
        // Define the full path to the file
        fn = list[i];
        path = dir+list[i];
        // Open the file
        open(path);
        // Create a dialog box but don't show it yet
        Dialog.create("Image Type");
        Dialog.addRadioButtonGroup("Type:", newArray("Vessel", "Tumor"), 1, 2, RadioButtonDefault);
        Dialog.addNumber("Image Brightness Adjustment", satLevelDefault, 2, 4, "(applied only to vessel images)");
        // If it's the first image of the pair ...
        if (count == 1) {
            // Show the dialog box
            Dialog.show();
            // Store the result and flip the default radio button state for the next pass
            if (Dialog.getRadioButton() == "Vessel") {
                imgType = "Vessel";
                RadioButtonDefault = "Tumor";
            } else {
                imgType = "Tumor";
                RadioButtonDefault = "Vessel";
            }
        // If it's the second image of the pair ...
        } else {
            // ... and the first image was a vessel, assume this image is a tumor
            if (imgType == "Vessel") {
                imgType = "Tumor";
            // otherwise assume this image is a vessel
            } else {
                imgType = "Vessel";
            }
        }
        // Act on the dialog box input
        // If vessel, do this
        if (imgType == "Vessel") {
            // Make the image red
            run("Red");
            // Adjust brightness
            run("Enhance Contrast...", "saturated="+Dialog.getNumber()+" normalize");
            // Strip the .tif off the existing filename to use for the new filename
            fnNewVessel = replace(fn, "\\.tif", "");
            // Save as JPEG
            saveAs("Jpeg", dir+"/colored/"+fnNewVessel+"_colored");
            // Get the title of the image for the merge
            vesselTitle = getTitle();
        // Otherwise do this ...
        } else {
            // Make the image green
            run("Green");
            // Strip the .tif off the existing filename to use for the new filename
            fnNewTumor = replace(fn, "\\.tif", "");
            // Save as JPEG
            saveAs("Jpeg", dir+"/colored/"+fnNewTumor+"_colored");
            // Get the title of the image for the merge
            tumorTitle = getTitle();
        }
        // If it's the second image of the pair ...
        if (count == 2) {
            // Merge the two images
            run("Merge Channels...", "c1="+vesselTitle+" c2="+tumorTitle+" create");
            // Save as JPEG
            saveAs("Jpeg", dir+"/composite/composite_"+count2);
            // Reset the within-pair counter
            count = count-1;
            // Increment the pair counter
            count2 = count2+1;
        // Otherwise ...
        } else {
            // Increment the within-pair counter
            count += 1;
        }
    }
}

I'm not sure why I'd need to do this, but adding wait(100) immediately before saveAs() seems to do the trick.
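For reference, the workaround placement looks like this (same saveAs call as in the script above; 100 ms is just the value that happened to work for me):

// give ImageJ a moment to finish pending operations before writing the file
wait(100);
saveAs("Jpeg", dir+"/colored/"+fnNewVessel+"_colored");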

The best practice in this scenario would be to poll IJ.macroRunning(). This method returns true if a macro is running. I would suggest using helper methods that can eventually time out, like:
/** Run with a default timeout of 30 seconds */
public boolean waitForMacro() {
    return waitForMacro(30000);
}
/**
 * @return True if no macro was running. False if a macro runs for longer than
 *         the specified timeOut value.
 */
public boolean waitForMacro(final long timeOut) {
    final long time = System.currentTimeMillis();
    while (IJ.macroRunning()) {
        // Time out after the specified number of milliseconds.
        if (System.currentTimeMillis() - time > timeOut) return false;
    }
    return true;
}
Then call one of these helper methods whenever you use run(), open(), or newImage().
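For example, a guard around a run() call might look like this (a sketch; logging via IJ.log is just one way to surface the timeout):

// trigger a potentially asynchronous operation, then block until it completes
IJ.run("Close All");
if (!waitForMacro()) {
    IJ.log("Timed out waiting for the macro to finish.");
}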
Another direction that may require more work, but provides a more robust solution, is using ImageJ2. Then you can run things with a ThreadService, which hands you back a Java Future that can guarantee execution completion.
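A minimal sketch of that approach, assuming a SciJava Context is available (the task body is up to you):

import java.util.concurrent.Future;
import org.scijava.Context;
import org.scijava.thread.ThreadService;

public class SaveWithFuture {
    public static void main(final String... args) throws Exception {
        // create a SciJava context and grab its thread service
        final Context context = new Context(ThreadService.class);
        final ThreadService threadService = context.service(ThreadService.class);
        // submit the image work; run() returns a Future
        final Future<?> future = threadService.run(() -> {
            // ... open, color, and save the images here ...
        });
        // get() blocks until the submitted work has finished,
        // so the file is guaranteed to be written afterwards
        future.get();
        context.dispose();
    }
}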

Related

How to progressively add drawables to a canvas?

I have points generated one by one, and when a new point is generated, I want to draw a line segment connecting it with the previous point. Like this:
var x by remember { mutableStateOf(0.0f) }
var y by remember { mutableStateOf(0.5f) }
var pStart by remember { mutableStateOf(Offset(0f, 0.5f)) }
var canvasWidth = 0f // declared so the snippet compiles
var canvasHeight = 0f
Canvas(modifier = Modifier.fillMaxSize()) {
    canvasWidth = size.width
    canvasHeight = size.height
    val pEnd = Offset(x * canvasWidth, (1 - y) * canvasHeight)
    val col = if (pEnd.y < pStart.y) Color.Green else Color.Red
    drawLine(
        start = pStart,
        end = pEnd,
        strokeWidth = 4f,
        color = col
    )
    pStart = pEnd
}
But this only draws each segment for a flash, and no segments stay on the screen.
I know I can save the points to a list and redraw all the segments whenever a new point is added, but I'm hoping to economize. Is it possible?
There's no other practical way. You COULD, in fact, keep track of just two points, adding a whole new Canvas (all transparent, filling the maximum size, stacked on top of one another) for each extra point that is added. This does seem a bit impractical, but maybe try it out and do some benchmarking to see which one checks out. This is the only OTHER way I could think of where you do not have to store all the points and recompose every time a point is added, since all the other lines would technically be frozen in place.
Here's some sample code. I assume you have a stream of new points coming in, so a LiveData object is assumed to be the source, which I convert to State for my use-case.
val latestPoint by liveData.observeAsState() // LiveData -> State
var recordedPoint by remember { mutableStateOf(latestPoint) }
var triggerDraw by remember { mutableStateOf(false) }
val canvasList = remember { mutableStateListOf<@Composable () -> Unit>() } // each entry wraps a Canvas
if (triggerDraw) {
    canvasList.add {
        Canvas(modifier = Modifier.fillMaxSize()) {
            /* you have recordedPoint and latestPoint, simply draw a line here */
        }
    }
    triggerDraw = false
}
LaunchedEffect(latestPoint) {
    triggerDraw = true
}
canvasList.forEach {
    it() // invoke each stacked Canvas composable here
}

Image brightness/contrast - Xamarin iOS

Xamarin provides some sample code for doing simple adjustments to an image in iOS:
https://github.com/xamarin/recipes/blob/master/ios/media/coreimage/adjust_contrast_and_brightness_of_an_image/color_controls_pro/ImageViewController.cs
This code updates the image only when the user lets go of the slider knob - not the continuous updating we normally expect.
However, when I make the following change, I reliably get SIGSEGV faults on hardware.
//sliderC.TouchUpInside += HandleValueChanged;
//sliderS.TouchUpInside += HandleValueChanged;
//sliderB.TouchUpInside += HandleValueChanged;
sliderC.ValueChanged += HandleValueChanged;
sliderS.ValueChanged += HandleValueChanged;
sliderB.ValueChanged += HandleValueChanged;
I expect that this is "overloading" the code in some way. How would you implement image adjustments that avoid this problem? Is there a lower-level approach, or do other apps simply use a much lower-res version of the image for adjustments?
Here is a quick version I did that is 'more' realtime. (The original post included a screen recording from the simulator; on a device (a 6s) it is smooth, depending upon the initial image size.)
Create a single view controller iOS app from the template and add a UIImageView and three sliders to the storyboard (laid out as in the animated GIF from the original post).
I created a simple class to store the ColorCtrl values (brightness, contrast, saturation):
public class ColorCtrl
{
    public float s;
    public float b;
    public float c;
}
Then, in the ViewDidLoad method, do some setup:
public override void ViewDidLoad ()
{
    base.ViewDidLoad ();
    string filePath = Path.Combine (NSBundle.MainBundle.BundlePath, "hero.jpg");
    originalImage = new CIImage (new NSUrl (filePath, false));
    colorCtrls = new CIColorControls ();
    colorCtrls.Image = originalImage;
    // Create the context only once, and re-use it
    var contextOptions = new CIContextOptions ();
    contextOptions.UseSoftwareRenderer = false; // gpu vs. cpu
    // On save of the image, create a new context with high-quality attributes and re-apply the filter...
    contextOptions.HighQualityDownsample = false;
    contextOptions.PriorityRequestLow = false; // false = keep requests at high priority
    contextOptions.CIImageFormat = (int)CIFormat.ARGB8; // use 32bpp, vs. 64|128bpp
    context = CIContext.FromOptions (contextOptions);
}
Then add the three sliders' change handlers.
Within those I 'hack' a busy flag in order to skip the image transformation if the last transform is not done yet. If we are not busy, we await our async transform method.
Note: I said 'hack' and I mean it. Done properly, this should pump transform requests into a queue; the queue handler would coalesce the pending items, flush them, and do one transform.
Note: I added "async" to the generated event handlers so I can await the image transform.
Note: The three sliders' handlers are all the same except for the value they assign: colorCtrlV.b | colorCtrlV.s | colorCtrlV.c
Note: You can down-sample a large image the moment the user touches down, perform the transforms on that, and on touch-up transform the original full-size image (see the sketch after FilterImageAsync below)...
async partial void brightnessChange (UISlider sender)
{
    if (!busy) {
        busy = true;
        colorCtrlV.b = sender.Value;
        this.imageView.Image = await FilterImageAsync (colorCtrlV);
        busy = false;
    }
}
OK, now for the actual transform: a fairly simple Task.Run so we can get this work off the main thread and NOT block the UI. This way the sliders will not stutter as the user slides a finger, BUT due to the 'busy' flag/hack in the slider handlers we might skip some of those events (we should add a handler for the touch-drag-exit event so we always process the last value the user requested...).
// async Task.Run() - not best practice - just a demo
async Task<UIImage> FilterImageAsync (ColorCtrl value)
{
    if (transformImage == null)
        transformImage = new Func<UIImage> (() => {
            colorCtrls.Brightness = colorCtrlV.b;
            colorCtrls.Saturation = colorCtrlV.s;
            colorCtrls.Contrast = colorCtrlV.c;
            var output = colorCtrls.OutputImage;
            var cgImage = context.CreateCGImage (output, originalImage.Extent);
            var filteredImage = new UIImage (cgImage);
            return filteredImage;
        });
    UIImage image = await Task.Run<UIImage> (transformImage);
    return image;
}
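That down-sampling note might look something like this (a rough, untested sketch; I'm assuming the Xamarin binding of CILanczosScaleTransform exposes Image/Scale/AspectRatio properties in the same style as CIColorControls above):

// Build a quarter-resolution working copy for live slider feedback
CIImage MakePreviewImage (CIImage fullSize, float scale = 0.25f)
{
    var scaler = new CILanczosScaleTransform ();
    scaler.Image = fullSize;
    scaler.Scale = scale;      // 0.25f = quarter resolution
    scaler.AspectRatio = 1.0f; // keep the original aspect ratio
    return scaler.OutputImage;
}

On TouchDown you would point colorCtrls.Image at the preview, and on TouchUpInside swap back to originalImage and re-render once at full quality.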
Personally, for this type of realtime image transformation I prefer to do it via OpenGL ES using GPUImage, as the screen interaction at a 60 Hz refresh rate is as smooth as butter, but it is a lot more work than using CoreImage filters.

Very slow hover interactions in OpenLayers 3 with any browser except Chrome

I have two styles of interactions: one highlights the feature, the second places a tooltip with the feature name. With both commented out the map is very fast; leave either one in and the application slows in IE and Firefox (but not Chrome).
map.addInteraction(new ol.interaction.Select({
    condition: ol.events.condition.pointerMove,
    layers: [stationLayer],
    style: null // this is actually a style function, but even as null it slows
}));
$(map.getViewport()).on('mousemove', function(evt) {
    if (!dragging) {
        var pixel = map.getEventPixel(evt.originalEvent);
        var feature = null;
        // this block directly below is the offending function; comment it out and it works fine
        map.forEachFeatureAtPixel(pixel, function(f, l) {
            if (f.get("type") === "station") {
                feature = f;
            }
        });
        // commenting out just below (getting the feature but doing nothing with it) is still slow
        if (feature) {
            target.css("cursor", "pointer");
            $("#FeatureTooltip").html(feature.get("name"))
                .css({
                    top: pixel[1] - 10,
                    left: pixel[0] + 15
                }).show();
        } else {
            target.css("cursor", "");
            $("#FeatureTooltip").hide();
        }
    }
});
I mean, this seems like an issue with OpenLayers 3, but I just wanted to be sure I wasn't overlooking something else here.
Oh yeah, there are roughly 600+ points. Which is a lot, but not unreasonably so, I would think. Zooming in to limit the features in view definitely helps, so I guess this is a number-of-features issue.
This is a known bug and needs more investigation. You can track progress here: https://github.com/openlayers/ol3/issues/4232.
However, there is one thing you can do to make things faster: return a truthy value from map.forEachFeatureAtPixel to stop checking for features once one has been found:
var feature = map.forEachFeatureAtPixel(pixel, function(f) {
    if (f.get('type') == 'station') {
        return f; // returning a truthy value stops the iteration
    }
});
I had the same issue and solved it by scheduling the handler with setTimeout; details below.
1) Every one-pixel mouse move fires an event, so you build up a queue of events until you stop moving, and the whole queue runs through the callback and freezes the map.
2) If you have objects with complex styles, every element shown on the canvas takes time to hit-test against the cursor.
To resolve (see the code below):
1. Schedule the work with a timer instead of running it on every event.
2. Check the distance in pixels moved since the previous event; if it is less than N, return.
3. For layers with multiple styles, try to simplify them by splitting into several layers, and make only one layer interactive for cursor moves.
// throttle state; assumed to be initialized elsewhere, along the lines of:
// var mm = { isActive: true, working: false, lastP: [0, 0], finishTime: 0, scheduled: 0, listener: listenerKey };
const MIN_ELAPSE_MSEC = 200; // process at most one event per 200 ms

function mouseMove(evt) {
    clearTimeout(mm.scheduled);
    function squareDist(coord1, coord2) {
        var dx = coord1[0] - coord2[0];
        var dy = coord1[1] - coord2[1];
        return dx * dx + dy * dy;
    }
    if (mm.isActive === false) {
        map.unByKey(mm.listener);
        return;
    }
    // schedules FIFO; the last pixel is processed MIN_ELAPSE_MSEC after the last event
    const elapsed = (performance.now() - mm.finishTime);
    const pixel = evt.pixel;
    const distance = squareDist(mm.lastP, pixel);
    if (distance > 0) {
        mm.lastP = pixel;
        mm.finishTime = performance.now();
        mm.scheduled = setTimeout(function () {
            mouseMove(evt);
        }, MIN_ELAPSE_MSEC);
        return;
    } else if (elapsed < MIN_ELAPSE_MSEC || mm.working === true) {
        mm.scheduled = setTimeout(function () {
            mouseMove(evt);
        }, MIN_ELAPSE_MSEC);
        return;
    }
    // browsers run this single-threaded, so this flag is effectively unused
    mm.working = true;
    let t = performance.now();
    //region drag map
    const vStyle = map.getViewport().style;
    vStyle.cursor = 'default';
    if (evt.dragging) {
        vStyle.cursor = 'grabbing';
    } //endregion
    else {
        // TODO: replace the callback with cursor=wait / cursor=busy
        UtGeo.doInCallback(function () {
            checkPixel(pixel);
        });
    }
    mm.finishTime = performance.now();
    mm.working = false;
    console.log('mm finished', performance.now() - t);
}
In addition to @ahocevar's answer, a possible optimization is to utilize the select interaction's select event.
It appears that the select interaction and your mousemove listener are both checking for hits on the same layer, doing double work. The select interaction triggers select events whenever the set of selected features changes. You could listen to it, show the popup whenever a feature is selected, and hide it when none is.
This should reduce the work by half, assuming that forEachFeatureAtPixel is what's hogging the system.
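A rough sketch of that approach (assuming the same stationLayer and #FeatureTooltip markup as in the question):

var select = new ol.interaction.Select({
    condition: ol.events.condition.pointerMove,
    layers: [stationLayer]
});
map.addInteraction(select);
// fires only when the set of selected features changes,
// not on every mousemove
select.on('select', function(evt) {
    var feature = evt.selected.length ? evt.selected[0] : null;
    if (feature) {
        var pixel = evt.mapBrowserEvent.pixel;
        $(map.getViewport()).css('cursor', 'pointer');
        $('#FeatureTooltip').html(feature.get('name'))
            .css({ top: pixel[1] - 10, left: pixel[0] + 15 }).show();
    } else {
        $(map.getViewport()).css('cursor', '');
        $('#FeatureTooltip').hide();
    }
});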

How to speed up band selection tool in a Dart WebGL application

The task at hand is to add a band selection tool to a Dart WebGL application.
The tool will be used to draw a rectangle over multiple objects by dragging the mouse.
Thus multiple objects can be selected/picked in a single user action.
I'm currently using gl.readPixels() to read colors from an off-screen renderbuffer.
The problem is, when a large area is band-selected, gl.readPixels() returns millions of pixels.
Scanning that many colors wastes precious seconds just to locate a few objects.
Can anyone point out possibly faster methods for band-selecting multiple objects with Dart+WebGL?
For reference, I show below the current main portion of the band selection tool.
Uint8List _color = new Uint8List(4);

void bandSelection(int x, int y, int width, int height, PickerShader picker, RenderingContext gl, bool shift) {
  if (picker == null) {
    err("bandSelection: picker not available");
    return;
  }
  int size = 4 * width * height;
  if (size > _color.length) {
    _color = new Uint8List(size);
  }
  gl.bindFramebuffer(RenderingContext.FRAMEBUFFER, picker.framebuffer);
  gl.readPixels(x, y, width, height, RenderingContext.RGBA, RenderingContext.UNSIGNED_BYTE, _color);
  if (!shift) {
    // shift is released
    _selection.clear();
  }
  for (int i = 0; i < size; i += 4) {
    if (_selection.length >= picker.numberOfInstances) {
      // selected all available objects, no need to keep searching
      break;
    }
    PickerInstance pi = picker.findInstanceByColor(_color[i], _color[i+1], _color[i+2]);
    if (pi == null) {
      continue;
    }
    _selection.add(pi);
  }
  debug("bandSelection: $_selection");
}

// findInstanceByColor is a method of PickerShader
PickerInstance findInstanceByColor(int r, int g, int b) {
  return colorHit(_instanceList, r, g, b);
}

PickerInstance colorHit(Iterable<Instance> list, int r, int g, int b) {
  bool match(Instance i) {
    Float32List f = i.pickColor;
    return (255.0*f[0] - r.toDouble()).abs() < 1.0 &&
           (255.0*f[1] - g.toDouble()).abs() < 1.0 &&
           (255.0*f[2] - b.toDouble()).abs() < 1.0;
  }
  Instance pi;
  try {
    pi = list.firstWhere(match);
  } catch (e) {
    return null;
  }
  return pi as PickerInstance;
}
Right now I can see small changes that might speed up your algorithm by limiting, as much as possible, iteration over all of your elements.
The first thing you can do is have a default colour. When you see that colour, you know you don't need to iterate over your array of elements at all.
It will accelerate large, sparsely populated areas.
It's very easy to implement: just add an if.
For denser areas you can implement some kind of colour caching. That means you store the colours you have already encountered. When you check a pixel, you first check the cache; only on a miss do you go over the entire list of elements, and when you find the element, you add it to the cache.
It should accelerate cases with a few big elements, but will be bad if you have lots of small elements, which is very unlikely with picking...
You can speed up your cache by sorting the cached elements by last hit and/or by number of hits; you are very likely to find the same element across a continuous run of pixels.
It's more work, but stays relatively easy and short to implement.
The last optimisation would be to implement a space-partitioning scheme to filter the elements you need to check (a rough sketch appears at the end of this answer).
That would be more work, but will pay off better in the long run.
Edit:
I'm not a Dart guy, but this is roughly how a basic implementation of the first two optimisations would look:
var cache = new Map<int, PickerInstance>();
for (int i = 0; i < size; i += 4) {
  // pack r/g/b into a single int key; I guess we can just skip transparency.
  int colour = _color[i] << 24 | _color[i+1] << 16 | _color[i+2] << 8 | 0;
  if (_selection.length >= picker.numberOfInstances) {
    // selected all available objects, no need to keep searching
    break;
  }
  // black is a good default colour.
  if (colour == 0) {
    // if the pixel is black we didn't hit any element :(
    continue;
  }
  // check the cache
  if (cache[colour] != null) {
    _selection.add(cache[colour]);
    continue;
  }
  // valid colour and cache miss; we can't avoid iterating the list.
  PickerInstance pi = picker.findInstanceByColor(_color[i], _color[i+1], _color[i+2]);
  if (pi == null) {
    continue;
  }
  _selection.add(pi);
  // update cache
  cache[colour] = pi;
}
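The space-partitioning idea could look roughly like this (a sketch only: it assumes each Instance can report a screen-space bounding box through a hypothetical screenBounds getter, which the code above doesn't show):

import 'dart:math' show Rectangle;

// Only colour-match against instances whose bounds intersect the band rectangle,
// so colorHit() scans a much shorter candidate list.
Iterable<Instance> candidates(Iterable<Instance> all, int x, int y, int width, int height) {
  final band = new Rectangle<num>(x, y, width, height);
  return all.where((i) => i.screenBounds.intersects(band)); // screenBounds is hypothetical
}

The scan loop would then call colorHit(candidates(_instanceList, x, y, width, height), r, g, b) instead of colorHit(_instanceList, r, g, b).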

Received msg_Image getting distorted while displaying in OpenCV

I have published an image from one node, and I want to subscribe to that image in my second node. But after subscribing in the second node, when I try to store it in a cv::Mat, the image gets distorted.
The patchImage in the following code is distorted: there are horizontal lines, and four copies of the same image appear merged together.
An overview of my code follows.
first_node_publisher
{
    im.header.stamp = time;
    im.width = width;
    im.height = height;
    im.step = 3*width;
    im.encoding = "rgb8";
    // (pixel data is copied into im.data here)
    image_pub.publish(im);
}
second_node_imageCallBack(const sensor_msgs::ImageConstPtr& msg)
{
    cv::Mat patchImage;
    cv_bridge::CvImagePtr cv_ptr;
    try
    {
        cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::RGB8);
    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("cv_bridge exception: %s", e.what());
        return; // cv_ptr is invalid past this point
    }
    patchImage = cv_ptr->image;
    cv::imshow("Received Image", patchImage); // this patchImage is distorted
    cv::waitKey(1); // needed for the HighGUI window to actually repaint
}
I believe the problem is with your encoding setting. Are you sure the encoding is actually rgb8? That is unlikely, because OpenCV stores images by default in BGR order (e.g. CV_8UC3). It is also possible that your images are not even stored as unsigned chars, but as shorts, floats, doubles, etc.
I always include assert(image.type() == CV_8UC3) in my publishers to make sure the encoding is correct.
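For instance, a publisher along these lines lets cv_bridge derive width, height, and step instead of setting them by hand (a sketch; image_pub and the cv::Mat source are assumed from the question):

#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>

// publish a cv::Mat, letting cv_bridge fill in the Image message fields
void publishFrame(const cv::Mat& frame, const ros::Time& stamp)
{
    // guard the claim that we are publishing 8-bit, 3-channel data
    assert(frame.type() == CV_8UC3);
    cv_bridge::CvImage out;
    out.header.stamp = stamp;
    out.encoding = sensor_msgs::image_encodings::BGR8; // matches OpenCV's BGR order
    out.image = frame;
    image_pub.publish(out.toImageMsg());
}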
