How to get live output while processing images in ImageJ? - image-processing

Everything I am describing here takes place in the 32-bit version of ImageJ (Java 1.6.0_20). The plugin I am using to get the capture stream from the camera is Civil Capture.
I have it working so that I get image slices in a stack from a live video capture stream from a camera. Afterwards I process them; let's say I feed the values of one horizontal line of pixels of one image into an array and then plot this line with the methods from the Plot and PlotWindow classes.
So now my problem:
When I run my plugin, it first captures the picture and then processes it. A window is opened for the image stack and another for the plot.
Each following incoming image is also processed and the plot is updated. But only at the end, once as many images as I wanted have been processed, are the plot and the images actually displayed. While the processing is happening, both windows are white and don't show anything.
Is it possible to make ImageJ display the plot and the images while the processing is still running, i.e. after each image is processed rather than only at the end? And if so, how?
I have found the Dynamic Profiler, but I didn't get it to work the way I wanted it to, especially since you have to input an image and not an array.
This is my first question, so don't be too hard on me if I missed giving you some information.

You should be able to use ImagePlus#updateAndDraw() to update the image display.
For the Plot class, use Plot#show() to display it for the first time, and Plot#updateImage() to update the plot window.
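
As a rough sketch of how those calls could be wired together (class and field names here are illustrative, not from the asker's plugin; the refresh goes through PlotWindow#drawPlot as one possible route, with Plot#updateImage being another):

    import ij.ImagePlus;
    import ij.gui.Plot;
    import ij.gui.PlotWindow;

    // Sketch only: open the plot window once, then refresh both windows
    // for every newly captured slice.
    public class LiveDisplay {
        private final ImagePlus stackImage;   // the stack window filled by the capture plugin
        private PlotWindow plotWindow;        // opened on the first frame, then reused

        public LiveDisplay(ImagePlus stackImage) {
            this.stackImage = stackImage;
        }

        public void update(double[] x, double[] lineProfile) {
            Plot plot = new Plot("Line profile", "x", "intensity", x, lineProfile);
            if (plotWindow == null) {
                plotWindow = plot.show();       // first frame: open the PlotWindow
            } else {
                plotWindow.drawPlot(plot);      // later frames: redraw in the same window
            }
            stackImage.updateAndDraw();         // repaint the stack window with the new slice
        }
    }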

Solution:
So the problem was that the entire program took place in one thread. To fix it, the plotting had to be done in a separate thread.
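
For illustration, here is one common way to split the work across two threads (a sketch only; captureAndExtractLine() is a hypothetical stand-in for the plugin's capture code, and update() is the display method sketched above). The capture/processing loop runs on a worker thread, and each display update is handed back to the Swing Event Dispatch Thread:

    import javax.swing.SwingUtilities;

    // Sketch only (Java 1.6-style anonymous classes): processing runs off the
    // GUI thread so ImageJ can repaint between frames.
    public class PlotWorker {
        public void startProcessing(final int numberOfFrames, final double[] xValues) {
            Thread worker = new Thread(new Runnable() {
                public void run() {
                    for (int i = 0; i < numberOfFrames; i++) {
                        // grab a frame and read one horizontal pixel line (hypothetical helper)
                        final double[] profile = captureAndExtractLine(i);
                        // hand the display update back to the Event Dispatch Thread
                        SwingUtilities.invokeLater(new Runnable() {
                            public void run() {
                                update(xValues, profile);
                            }
                        });
                    }
                }
            });
            worker.start();
        }

        double[] captureAndExtractLine(int frameIndex) { return new double[0]; } // stand-in
        void update(double[] x, double[] profile) { }                            // stand-in
    }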

Related

Image Stream in ImageReader from Camera2 API

I want to capture images as a sequential stream in the OnImageAvailableListener of the ImageReader.
My aim is to process these images in a background thread.
In the basic Camera2 example, this OnImageAvailableListener is called only once, when I take a picture by clicking a button.
I need OnImageAvailable to be called in real time.
You need to add the ImageReader Surface to the repeating preview request's targets as well, instead of only to the still image capture request.
You likely also want to use a YUV_420_888 format instead of JPEG, depending on your use case.
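
A hedged sketch of that setup (variable names are illustrative and error handling is omitted; the ImageReader would be created with ImageFormat.YUV_420_888 as suggested above, and the capture session must have been created with both surfaces):

    import android.hardware.camera2.CameraAccessException;
    import android.hardware.camera2.CameraCaptureSession;
    import android.hardware.camera2.CameraDevice;
    import android.hardware.camera2.CaptureRequest;
    import android.media.Image;
    import android.media.ImageReader;
    import android.os.Handler;
    import android.view.Surface;

    // Sketch only: the ImageReader Surface is added to the repeating preview
    // request, so onImageAvailable fires for every preview frame instead of
    // only on a still capture.
    class PreviewFrameSetup {
        static void startPreviewWithFrames(CameraDevice camera,
                                           CameraCaptureSession session,
                                           Surface previewSurface,
                                           ImageReader reader,
                                           Handler backgroundHandler) throws CameraAccessException {
            reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
                @Override
                public void onImageAvailable(ImageReader r) {
                    Image image = r.acquireLatestImage();
                    if (image != null) {
                        // process the YUV_420_888 frame on the background thread here
                        image.close();   // always close the Image, or the reader stalls
                    }
                }
            }, backgroundHandler);

            CaptureRequest.Builder builder =
                    camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            builder.addTarget(previewSurface);        // on-screen preview
            builder.addTarget(reader.getSurface());   // per-frame callback target

            session.setRepeatingRequest(builder.build(), null, backgroundHandler);
        }
    }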

How can I get the position of an identified object in Scikit-Learn?

I can use Scikit-Learn to train a model and recognize objects, but I also need to be able to tell where in my test images the object resides. Is there some way I could get the coordinates of the part of the test image that contains the object I'm trying to recognize?
If not, please refer me to some other library that will help me achieve this task.
Thank you.
I assume that you are talking about a computer vision application. Usually, the way a box is drawn around an identified object is by using a sliding window and running your classifier on each window as it steps across the image. You keep track of which windows come back with positive results and use those windows as your bounds. You may wish to use windows of various sizes if the object scale changes from image to image; in that case, you would likely want to prefer the smaller of two overlapping windows.
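
To make the bookkeeping concrete, here is a language-agnostic sketch of that sliding-window search (written in Java for consistency with the rest of this page; the WindowClassifier hook is a hypothetical stand-in for whatever trained scikit-learn model you call on each window):

    import java.util.ArrayList;
    import java.util.List;

    // Sketch only: step a fixed-size window across the image, run the trained
    // classifier on each window, and keep the windows that come back positive
    // as candidate bounding boxes.
    class SlidingWindowSearch {
        interface WindowClassifier {
            boolean containsObject(int x, int y, int width, int height);  // hypothetical hook
        }

        static List<int[]> findObjectWindows(int imageWidth, int imageHeight,
                                             int windowSize, int step,
                                             WindowClassifier classifier) {
            List<int[]> hits = new ArrayList<int[]>();
            for (int y = 0; y + windowSize <= imageHeight; y += step) {
                for (int x = 0; x + windowSize <= imageWidth; x += step) {
                    if (classifier.containsObject(x, y, windowSize, windowSize)) {
                        hits.add(new int[] {x, y, windowSize, windowSize});  // {x, y, w, h}
                    }
                }
            }
            return hits;  // repeat with other window sizes if the object scale varies
        }
    }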

GPUImageView flashes between filtered and unfiltered view when adding a GPUImageFilter

I'm trying to display a GPUImageView with a live camera feed (using GPUImageStillCamera or GPUImageVideoCamera) and display a series of filters below it. When a filter is tapped, I want it to apply that filter to the live feed so that the GPUImageView shows a live, filtered feed of the camera input. I have all of it set up, but for some reason when I tap on pretty much any included GPUImageOutput filter (Vignette, Smooth Toon, Emboss, etc.), the video feed flashes like crazy. It seems like it's alternating between the filtered view and the unfiltered view. When I switch to a different filter, I can tell that the filter is working properly for a tiny fraction of a second before it switches to a different filter.
The grayscale and sepia filters don't flash but instead only show at half strength. I've tried setting the intensity to 1.0 (and a bunch of other values) for the sepia filter, but the grayscale one doesn't have any settings to change, and it seems like some things are gray but there's still color. I tried to take a screenshot of the grayscale view, but when I look at the screenshots, the image is either properly grayscaled or not grayscaled at all, even though that's not what I see on my actual device. My theory is that it's switching between the filtered view and the non-filtered view really fast, therefore creating the illusion of a grayscale filter at 50% strength.
I have no idea why this would be happening, because the standard GPUImage example projects work just fine, and I'm not doing much differently in my project.
If anyone could help me out or at least point me in the right direction, it would be very much appreciated. I have been trying to debug this issue for 3 days straight and I simply cannot figure it out.
EDIT: when I call capturePhotoAsImageProcessedUpToFilter on my GPUImageStillCamera, it returns nil for both the UIImage and the NSError in the completion block (even though the GPUImageStillCamera is not nil). Not sure if this is related, but I figured it was worth mentioning.
EDIT 2: I just realized it was returning a nil image because no filters were set. But if that's the case, how do you take a photo without having any filters active? And does that possibly have anything to do with my original issue? I set a grayscale filter (and I'm still seeing the half-strength version of it), and the image returned in the completion block is the actual proper grayscale image, despite the fact that the live feed looks different.
You probably have two inputs targeting your view.
I can't tell without seeing your code, but I found that drawing a graph of all my inputs and outputs really helped when debugging filters.

Compare drawing with original

For the last week I've been attempting to write some code in which I'd be able to trace an image and compare it with the original by pixel color count. So far the only two things that definitely work are the drawing and the pixel count (which obviously returns 0 since the rest isn't working).
To turn the UIImageView of the drawing into a PNG for comparison, I take a screenshot. However, I'm trying to access the saved drawing (screenshot) immediately after the save for a pixel color count and comparison. This whole process is not working whatsoever.
The pixel count code asks for a PNG at the moment, so I'm trying to work around that.
Is there maybe an easier way to do this whole process instead of first taking a screenshot, then loading it and then getting the pixel color count? I'm trying to keep the code as simple as possible by comparing only two colors.
Any help would be appreciated. I would post code, but I'd rather get a fresh take on this since I've tried many different ways already. Thanks in advance.

Recognize moving objects and differentiate them from the background?

I am working on a project where I take a video with a camera and convert the video to frames (this part of the project is done).
What I am facing now is how to detect the moving objects in these frames and differentiate them from the background, so that I can distinguish between them.
I recently read an awesome CodeProject article about this. It discusses several approaches to the problem and then walks you step by step through one of the solutions, with complete code. It's written at a very accessible level and should be enough to get you started.
One simple way to do this (if only a little noise is present; I recommend applying a smoothing kernel first, though) is to compute the absolute difference of two consecutive frames. You'll get an image of the things that have "moved". The background needs to be fairly static for this to work. If you always take the absolute difference between the current frame and the nth frame, you'll have a grayscale image containing the object that moved. The object has to be different from the background color or it will disappear...
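
As a sketch of that absolute-difference idea (assuming the frames have already been converted to grayscale; names are illustrative):

    import java.awt.image.BufferedImage;

    // Sketch only: per-pixel |current - previous| on two grayscale frames.
    // Bright pixels in the result mark where something moved; thresholding
    // the difference (e.g. > 30) gives a binary motion mask.
    class FrameDifference {
        static BufferedImage absDiff(BufferedImage previous, BufferedImage current) {
            int w = Math.min(previous.getWidth(), current.getWidth());
            int h = Math.min(previous.getHeight(), current.getHeight());
            BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    int a = previous.getRaster().getSample(x, y, 0);  // 0..255 gray value
                    int b = current.getRaster().getSample(x, y, 0);
                    out.getRaster().setSample(x, y, 0, Math.abs(a - b));
                }
            }
            return out;
        }
    }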
