How to speed up a band selection tool in a Dart WebGL application

The task at hand is to add a band selection tool to a Dart WebGL application.
The tool will be used to draw a rectangle over multiple objects by dragging the mouse, so that multiple objects can be selected/picked in a single user action.
I'm currently using gl.readPixels() to read colors from an off-screen renderbuffer.
The problem is that when a large area is band-selected, gl.readPixels() returns millions of pixels, and scanning that many colors wastes precious seconds just to locate a few objects.
Can anyone point out possibly faster methods for band-selecting multiple objects with Dart+WebGL?
For reference, I show below the current main portion of the band selection tool.
Uint8List _color = new Uint8List(4);

void bandSelection(int x, int y, int width, int height, PickerShader picker,
    RenderingContext gl, bool shift) {
  if (picker == null) {
    err("bandSelection: picker not available");
    return;
  }

  int size = 4 * width * height;
  if (size > _color.length) {
    _color = new Uint8List(size);
  }

  gl.bindFramebuffer(RenderingContext.FRAMEBUFFER, picker.framebuffer);
  gl.readPixels(x, y, width, height, RenderingContext.RGBA,
      RenderingContext.UNSIGNED_BYTE, _color);

  if (!shift) {
    // shift is released
    _selection.clear();
  }

  for (int i = 0; i < size; i += 4) {
    if (_selection.length >= picker.numberOfInstances) {
      // selected all available objects, no need to keep searching
      break;
    }
    PickerInstance pi =
        picker.findInstanceByColor(_color[i], _color[i + 1], _color[i + 2]);
    if (pi == null) {
      continue;
    }
    _selection.add(pi);
  }

  debug("bandSelection: $_selection");
}
// findInstanceByColor is a method of PickerShader
PickerInstance findInstanceByColor(int r, int g, int b) {
  return colorHit(_instanceList, r, g, b);
}

PickerInstance colorHit(Iterable<Instance> list, int r, int g, int b) {
  bool match(Instance i) {
    Float32List f = i.pickColor;
    return (255.0 * f[0] - r.toDouble()).abs() < 1.0 &&
        (255.0 * f[1] - g.toDouble()).abs() < 1.0 &&
        (255.0 * f[2] - b.toDouble()).abs() < 1.0;
  }

  Instance pi;
  try {
    pi = list.firstWhere(match);
  } catch (e) {
    return null;
  }
  return pi as PickerInstance;
}

Right now I can see a few small optimisations that might speed up your algorithm by limiting, as much as possible, how often you iterate over all of your elements.
The first thing you can do is use a default colour. When you see that colour, you know you don't need to iterate over your array of elements at all.
This accelerates large, sparsely populated areas.
It's very easy to implement: just add an if.
For denser areas you can implement some kind of colour caching. That means you store the colours you have already encountered: when you check a pixel, you first check the cache, and only on a miss do you go over the entire list of elements; if you find the element, you add it to the cache.
This should accelerate cases with a few big elements, but will be bad if you have lots of small elements, which is very unlikely with picking...
You can speed the cache up by sorting the cached entries by last hit and/or by number of hits, since you are very likely to find the same element in a contiguous run of pixels.
It's more work but stays relatively easy and short to implement.
The last optimisation would be to implement a space partitioning structure to filter the elements you need to check.
That would be more work, but will pay off better in the long run.
Edit:
I'm not a Dart guy, but here is a basic way the first two optimisations could look:
var cache = new Map<int, PickerInstance>();
for (int i = 0; i < size; i += 4) {
  if (_selection.length >= picker.numberOfInstances) {
    // selected all available objects, no need to keep searching
    break;
  }

  // Dart has no UInt32 type; a plain int holds the packed colour.
  int colour = _color[i] << 24 | _color[i + 1] << 16 | _color[i + 2] << 8 | 0; // I guess we can just skip transparency.

  // black is a good default colour.
  if (colour == 0) {
    // if the pixel is black we didn't hit any element :(
    continue;
  }

  // check the cache
  PickerInstance cached = cache[colour];
  if (cached != null) {
    _selection.add(cached);
    continue;
  }

  // valid colour and cache miss, we can't avoid iterating the list.
  PickerInstance pi =
      picker.findInstanceByColor(_color[i], _color[i + 1], _color[i + 2]);
  if (pi == null) {
    continue;
  }
  _selection.add(pi);

  // update cache
  cache[colour] = pi;
}
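A more direct variant of the cache idea, sketched here in Java with hypothetical names: if each instance's pick colour quantises to a unique byte triple (which the tolerance-based `colorHit` above suggests it should), you can build one hash map keyed on the packed RGB value up front and skip the per-pixel linear scan entirely.

```java
import java.util.HashMap;
import java.util.Map;

public class PickTable {
    // Pack an RGB triple into a single int key, mirroring the colour
    // packing used in the cache example above.
    static int packColour(int r, int g, int b) {
        return (r << 16) | (g << 8) | b;
    }

    private final Map<Integer, String> byColour = new HashMap<>();

    // Register an instance once, when its pick colour is assigned.
    // A String stands in for PickerInstance in this sketch.
    void register(int r, int g, int b, String instance) {
        byColour.put(packColour(r, g, b), instance);
    }

    // O(1) lookup per pixel instead of a linear scan of all instances.
    String findInstanceByColour(int r, int g, int b) {
        return byColour.get(packColour(r, g, b));
    }

    public static void main(String[] args) {
        PickTable table = new PickTable();
        table.register(10, 20, 30, "cube");
        table.register(40, 50, 60, "sphere");
        System.out.println(table.findInstanceByColour(40, 50, 60)); // sphere
        System.out.println(table.findInstanceByColour(1, 2, 3));    // null
    }
}
```

The map is built once when pick colours are assigned, so the per-pixel cost no longer grows with the number of instances.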

Related

Go/OpenCV: Filter Contours

I'm using this library to write an OpenCV app in Golang. I'm trying to do something very basic but can't seem to make it work. I simply want to take a set of contours, remove those contours that don't have a minimum area, then return the filtered result.
This is the current state of my code:
// given *opencv.Seq and image, draw all the contours
func opencvDrawRectangles(img *opencv.IplImage, contours *opencv.Seq) {
	for c := contours; c != nil; c = c.HNext() {
		rect := opencv.BoundingRect(unsafe.Pointer(c))
		fmt.Println("Rectangle: ", rect.X(), rect.Y())
		opencv.Rectangle(img,
			opencv.Point{rect.X(), rect.Y()},
			opencv.Point{rect.X() + rect.Width(), rect.Y() + rect.Height()},
			opencv.ScalarAll(255.0),
			1, 1, 0)
	}
}

// return contours that meet the threshold
func opencvFindContours(img *opencv.IplImage, threshold float64) *opencv.Seq {
	defaultThresh := 10.0
	if threshold == 0.0 {
		threshold = defaultThresh
	}
	contours := img.FindContours(opencv.CV_RETR_LIST, opencv.CV_CHAIN_APPROX_SIMPLE, opencv.Point{0, 0})
	if contours == nil {
		return nil
	}
	defer contours.Release()
	threshContours := opencv.CreateSeq(opencv.CV_SEQ_ELTYPE_POINT,
		int(unsafe.Sizeof(opencv.CvPoint{})))
	for ; contours != nil; contours = contours.HNext() {
		v := *contours
		if opencv.ContourArea(contours, opencv.WholeSeq(), 0) > threshold {
			threshContours.Push(unsafe.Pointer(&v))
		}
	}
	return threshContours
}
In opencvFindContours, I'm trying to add to a new variable only those contours that meet the area threshold. When I take those results and pass them into opencvDrawRectangles, contours is filled with nonsense data. If, on the other hand, I just return contours directly in opencvFindContours then pass that to opencvDrawRectangles, I get the rectangles I would expect based on the motion detected in the image.
Does anyone know how to properly filter the contours using this library? I'm clearly missing something about how these data structures work, just not sure what.
However it's best implemented, the main thing I'm trying to figure out here is simply how to take a sequence of contours and filter out the ones that fall below a certain area. All the C++ examples I've seen make this look pretty easy, but I'm finding it quite challenging using a Go wrapper of the C API.
You're taking the Sizeof of the pointer that would be returned by CreateSeq. You probably want the Sizeof of the struct opencv.CvPoint{} instead.

Strange File Save Behavior with ImageJ

I wrote an imageJ script to color and merge a series of black and white images. The script saves both the unmerged colored images and merged colored images. Everything works beautifully when I'm running in debug mode and step through the script. When I run it for real, however, it occasionally saves a couple of the original black and whites instead of the resulting colored image. All the merged images appear to be fine.
Why would everything work fine in debug mode but fail during regular usage?
Below is my code:
// Choose the directory with the images
dir = getDirectory("Choose a Directory ");
// Get a list of everything in the directory
list = getFileList(dir);
// Determine if a composite directory exists. If not create one.
if (File.exists(dir+"/composite") == 0) {
File.makeDirectory(dir+"/composite")
}
// Determine if a colored directory exists. If not create one.
if (File.exists(dir+"/colored") == 0) {
File.makeDirectory(dir+"/colored")
}
// Close all files currently open to be safe
run("Close All");
// Setup options
setOption("display labels", true);
setBatchMode(false);
// Counter 1 keeps track of if you're on the first or second image of the tumor/vessel pair
count = 1;
// Counter 2 keeps track of the number of pairs in the folder
count2 = 1;
// Default Radio Button State
RadioButtonDefault = "Vessel";
// Set Default SatLevel for Contrast Adjustment
// The contrast adjustment does a histogram equalization. The Sat Level is a percentage of pixels that are allowed to saturate. A larger number means more pixels can saturate making the image appear brighter.
satLevelDefault = 2.0;
// For each image in the list
for (i=0; i<list.length; i++) {
// Only process files whose names end with .tif
if (endsWith(list[i], ".tif")) {
// Define the full path to the filename
fn = list[i];
path = dir+list[i];
// Open the file
open(path);
// Create a dialog box but don't show it yet
Dialog.create("Image Type");
Dialog.addRadioButtonGroup("Type:", newArray("Vessel", "Tumor"), 1, 2, RadioButtonDefault)
Dialog.addNumber("Image Brightness Adjustment", satLevelDefault, 2, 4, "(applied only to vessel images)")
// If it's the first image of the pair ...
if (count == 1) {
// Show the dialog box
Dialog.show();
// Get the result and put it into a new variable and change the Default Radio Button State for the next time through
if (Dialog.getRadioButton=="Vessel") {
imgType = "Vessel";
RadioButtonDefault = "Tumor";
} else {
imgType = "Tumor";
RadioButtonDefault = "Vessel";
}
// If it's the second image of the pair
} else {
// And the first image was a vessel assume the next image is a tumor
if (imgType=="Vessel") {
imgType="Tumor";
// otherwise assume the next image is a vessel
} else {
imgType="Vessel";
}
}
// Check to see the result of the dialog box input
// If vessel do this
if (imgType=="Vessel") {
// Make image Red
run("Red");
// Adjust Brightness
run("Enhance Contrast...", "saturated="+Dialog.getNumber+" normalize");
// Strip the .tif off the existing filename to use for the new filename
fnNewVessel = replace(fn,"\\.tif","");
// Save as jpg
saveAs("Jpeg", dir+"/colored/"+ fnNewVessel+"_colored");
// Get the title of the image for the merge
vesselTitle = getTitle();
// Otherwise do this ...
} else {
// Make green
run("Green");
// Strip the .tif off the existing filename to use for the new filename
fnNewTumor = replace(fn,"\\.tif","");
// Save as jpg
saveAs("Jpeg", dir+"/colored/"+ fnNewTumor+"_colored");
// Get the title of the image for the merge
tumorTitle = getTitle();
}
// If it's the second in the pair ...
if (count == 2) {
// Merge the two images
run("Merge Channels...", "c1="+vesselTitle+" c2="+tumorTitle+" create");
// Save as Jpg
saveAs("Jpeg", dir+"/composite/composite_"+count2);
// Reset the number within the pair counter
count = count-1;
// Increment the number of pairs counter
count2 = count2+1;
// Otherwise
} else {
// Increment the number within the pair counter
count += 1;
}
}
}
Not sure why I'd need to do this, but adding wait(100) immediately before saveAs() seems to do the trick.
The best practice in this scenario would be to poll IJ.macroRunning(). This method will return true if a macro is running. I would suggest using helper methods that can eventually time out, like:
/** Run with default timeout of 30 seconds */
public boolean waitForMacro() {
    return waitForMacro(30000);
}

/**
 * @return True if no macro was running. False if a macro runs for longer than
 *         the specified timeOut value.
 */
public boolean waitForMacro(final long timeOut) {
    final long time = System.currentTimeMillis();
    while (IJ.macroRunning()) {
        // Time out after the specified number of milliseconds.
        if (System.currentTimeMillis() - time > timeOut) return false;
    }
    return true;
}
Then call one of these helper methods whenever you use run(), open(), or newImage().
Another direction that may require more work, but provides a more robust solution, is using ImageJ2. Then you can run things with a ThreadService, which gives you back a Java Future that can guarantee execution completion.
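The poll-with-timeout pattern above generalises beyond ImageJ. A minimal sketch with the `IJ.macroRunning()` call replaced by an arbitrary `BooleanSupplier` (the names here are mine, not ImageJ API):

```java
import java.util.function.BooleanSupplier;

public class MacroWait {
    // Poll `busy` until it reports false or `timeOutMs` elapses.
    // Returns true if the condition cleared in time, false on timeout.
    static boolean waitUntilIdle(BooleanSupplier busy, long timeOutMs) {
        final long start = System.currentTimeMillis();
        while (busy.getAsBoolean()) {
            if (System.currentTimeMillis() - start > timeOutMs) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Simulate a "macro" that stays busy for ~50 ms, then finishes.
        final long deadline = System.currentTimeMillis() + 50;
        boolean ok = waitUntilIdle(() -> System.currentTimeMillis() < deadline, 1000);
        System.out.println(ok); // true: the fake macro finished before the timeout
    }
}
```

Note this is a busy-wait, like the original snippet; in production code you would normally add a short `Thread.sleep()` inside the loop to avoid burning a core.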

What is the correct way to apply a filter to an image

I was wondering what the correct way would be to apply a filter to an image. The image processing textbook that I am reading only talks about the mathematical and theoretical aspects of filters, but doesn't say much about the programming part!
I came up with this pseudo code. Could someone tell me if it is correct? I applied the Sobel edge filter to an image and I am not satisfied with the output: I think it detected many unnecessary points as edges and missed several points along the edge.
int filter[][] = {{0,-1,0},{-1,8,-1},{0,-1,0}}; // I don't exactly remember the Sobel filter
int total = 0;
for (int i = 2; i < image.getWidth() - 2; i++)
    for (int j = 2; j < image.getHeight() - 2; j++)
    {
        total = 0;
        for (int k = 0; k < 3; k++)
            for (int l = 0; l < 3; l++)
            {
                total += intensity(image.getRGB(i, j)) * filter[i+k][j+l];
            }
        if (total >= threshold) {
            image.setRGB(i, j, WHITE);
        }
    }

int intensity(int color)
{
    return (((color >> 16) & 0xFF) + ((color >> 8) & 0xFF) + (color & 0xFF)) / 3;
}
Two issues:
(1) The Sobel operator has an x-direction and a y-direction kernel; they are
int filter[][] = {{1,0,-1},{2,0,-2},{1,0,-1}}; and
int filter[][] = {{1,2,1},{0,0,0},{-1,-2,-1}};
(2) The convolution part:
total += intensity(image.getRGB(i+k,j+l)) * filter[k][l];
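To make the corrected indexing concrete, here is a self-contained sketch of a single convolution step on a tiny grayscale array, using the x-direction kernel above (helper name is illustrative):

```java
public class SobelDemo {
    // Convolve one pixel (x, y) of a grayscale image with a 3x3 kernel,
    // using the corrected indexing: the image index is offset by the pixel
    // position, while the kernel is indexed 0..2.
    static int convolvePixel(int[][] img, int[][] kernel, int x, int y) {
        int total = 0;
        for (int k = -1; k <= 1; k++) {
            for (int l = -1; l <= 1; l++) {
                total += img[x + k][y + l] * kernel[k + 1][l + 1];
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int[][] sobelX = {{1, 0, -1}, {2, 0, -2}, {1, 0, -1}};
        // A hard vertical edge: dark columns on the left, a bright one on the right.
        int[][] img = {
            {0, 0, 255},
            {0, 0, 255},
            {0, 0, 255},
        };
        // The magnitude at the centre pixel is large, as expected for an
        // edge perpendicular to the kernel's direction.
        System.out.println(Math.abs(convolvePixel(img, sobelX, 1, 1))); // 1020
    }
}
```

On a flat region the same call returns 0, since the kernel's entries sum to zero.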
Your code doesn't look quite right to me. In order to apply the filter to the image you must apply the discrete convolution algorithm: http://en.wikipedia.org/wiki/Convolution.
When you do convolution you want to slide the 3x3 filter over the image, moving it one pixel at a time. At each step you multiply the value of the filter 'pixel' by the corresponding value of the image pixel which is under that particular filter 'pixel' (the 9 pixels under the filter are all affected). The values that result should be added up onto a new resulting image as you go.
Thresholding is optional...
The following is your code modified with some notes:
int filter[][] = {{0,-1,0},{-1,8,-1},{0,-1,0}};

//create a new array for the result image on the heap
int newImage[][][] = ... // width x height x 3 channels

//initialize every element in the newImage to 0
for (int i = 0; i < image.getWidth(); i++)
    for (int j = 0; j < image.getHeight(); j++)
        for (int k = 0; k < 3; k++)
        {
            newImage[i][j][k] = 0;
        }

//Convolve the filter and the image
for (int i = 1; i < image.getWidth() - 2; i++)
    for (int j = 1; j < image.getHeight() - 2; j++)
    {
        for (int k = -1; k < 2; k++)
            for (int l = -1; l < 2; l++)
            {
                newImage[i+k][j+l][0] += getRed(image.getRGB(i+k, j+l)) * filter[k+1][l+1];
                newImage[i+k][j+l][1] += getGreen(image.getRGB(i+k, j+l)) * filter[k+1][l+1];
                newImage[i+k][j+l][2] += getBlue(image.getRGB(i+k, j+l)) * filter[k+1][l+1];
            }
    }

int getRed(int color)
{
    ...
}

int getBlue(int color)
{
    ...
}

int getGreen(int color)
{
    ...
}
Please note that the code above does not handle the edges of the image exactly right. If you wanted to make it absolutely perfect, you'd start by sliding the filter mostly off-screen, so the first position would apply the lower-right corner of the filter to the 0,0 pixel of the image. Doing this is a real pain, though, so it's usually easier just to ignore the 2-pixel border around the edges.
Once you've got that working you can experiment by sliding the Sobel filter in the horizontal and then the vertical directions. You will notice that the filter acts most strongly on lines which are perpendicular to the direction of travel (to the filter). So for the best results apply the filter in the horizontal and then the vertical direction (using the same newImage). That way you will detect vertical as well as horizontal lines equally well. :)
You have some serious undefined behavior going on here. The array filter is 3x3 but the subscripts you're using i+k and j+l are up to the size of the image. It looks like you've misplaced this addition:
total += intensity(image.getRGB(i+k,j+l)) * filter[k][l];
Use GPUImage; it's quite good for this.

Drawstring refuses to stay on screen (XNA)

I've been fighting with this problem for a good 3 hours now, and I figured it was finally time to ask some professionals.
My problem is that I want to make a scrollbar. I've figured out that I want to build it with two integers, giving every item in the list an ID: all the items whose IDs lie between the two integers are shown, and two buttons will + or - the integers so you can "scroll" through the items.
So to realize this I decided to write dummy code to see how I could work it out.
And for the most part it's working. However, it now seems that my draw function is refreshing the screen constantly (even though I've put in a bool to make sure it wouldn't), and by doing that it keeps erasing the numbers I'm listing on the screen.
Heres the code:
List<int> test = new List<int>() { 1, 2, 3, 4, 5, 6, 7 };
int low = 0;
int high = 3;
int count;
bool isDone = false;

public override void Draw(GameTime gameTime)
{
    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.PointClamp, null, null, null);
    if (!isDone)
    {
        int posX = 20;
        foreach (int nr in test)
        {
            if (count == high)
            {
                isDone = true;
                break;
            }
            else
            {
                count++;
            }
            if (count >= low && count <= high)
            {
                posX += 20;
                spriteBatch.DrawString(gameFont, nr.ToString(), new Vector2(posX, 20), Color.White);
            }
        }
    }
    spriteBatch.End();
}
Hopefully some of you clever folks can eye the mistake I've been missing.
Thanks in advance
~Etarnalazure.
You never reset your counter, so the numbers are only visible in the first frame. Add
count = 0;
before the foreach loop.
Furthermore, you might want to revise your structure. If you use a List<>, then every item already has an index, and you don't have to iterate over each and every item if you use a for loop:
for(int i = low; i <= high; ++i)
{
var currentItem = list[i];
}
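The same index-window idea can be sketched as a clamped sub-list (in Java here; names are illustrative, with bounds taken as low-inclusive/high-exclusive):

```java
import java.util.Arrays;
import java.util.List;

public class ScrollWindow {
    // Return the slice of items currently visible between low (inclusive)
    // and high (exclusive), clamped to the list bounds.
    static List<Integer> visible(List<Integer> items, int low, int high) {
        int from = Math.max(0, low);
        int to = Math.min(items.size(), high);
        return items.subList(from, to);
    }

    public static void main(String[] args) {
        List<Integer> test = Arrays.asList(1, 2, 3, 4, 5, 6, 7);
        System.out.println(visible(test, 0, 3)); // [1, 2, 3]
        System.out.println(visible(test, 3, 6)); // [4, 5, 6] after "scrolling" down
    }
}
```

The "scroll" buttons then only need to shift `low` and `high`; no per-item counter is required, so there is nothing to forget to reset between frames.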

Manipulating Opacity in Blackberry 5

I am building a news ticker that needs to be implemented on Blackberry 5. When transitioning from one element to the next, I am looking at a fade-out/fade-in transition, mostly because I am having trouble finding resources on creating animations in the Blackberry 5 reference.
The basic flow I am looking at is:
public void updateUI() {
//fade out
//set values
//fade in
}
So far I have all the UI elements contained inside a HorizontalFieldManager. I have tried digging through the Field and Graphics documents, but did not find what I was looking for.
Keep in mind, supporting Blackberry 5 is the client's requirement, not mine.
You need to handle animations explicitly, using a timer for transitions.
My typical solution is something like this (inside the paint() method):
final long time = System.currentTimeMillis();
final int alpha;
if (startFadeIn != 0) {
    alpha = (int) Math.min((time - startFadeIn) / SPEED, 255);
    if (alpha < 255) {
        invalidate();
    }
} else if (startFadeOut != 0) {
    alpha = (int) Math.max(255 + (startFadeOut - time) / SPEED, 0);
    if (alpha > 0) {
        invalidate();
    }
} else {
    alpha = 255;
}
graphics.setGlobalAlpha(alpha);
It burns some CPU cycles (for a short time), but it works.
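Those alpha ramps are pure functions of the timestamps, so they can be pulled out and tested in isolation. A sketch (hypothetical names; SPEED is treated as milliseconds per alpha step, matching the divisions above):

```java
public class FadeAlpha {
    // Linear fade-in: alpha grows from 0 at startFadeIn and clamps at 255
    // after speed * 255 milliseconds.
    static int fadeInAlpha(long now, long startFadeIn, long speed) {
        return (int) Math.min((now - startFadeIn) / speed, 255);
    }

    // Linear fade-out: alpha shrinks from 255 at startFadeOut and clamps at 0.
    static int fadeOutAlpha(long now, long startFadeOut, long speed) {
        return (int) Math.max(255 + (startFadeOut - now) / speed, 0);
    }

    public static void main(String[] args) {
        long speed = 4; // ~4 ms per alpha step, so a full fade takes about a second
        System.out.println(fadeInAlpha(1000, 1000, speed));  // 0: fade just started
        System.out.println(fadeInAlpha(1512, 1000, speed));  // 128: roughly halfway
        System.out.println(fadeInAlpha(9000, 1000, speed));  // 255: clamped, fully opaque
        System.out.println(fadeOutAlpha(1512, 1000, speed)); // 127: roughly halfway down
    }
}
```

Keeping the computation separate from paint() also makes it easy to swap in a non-linear easing curve later without touching the drawing code.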
