What would be a lightweight and reliable method to detect a Secchi disk in an image?
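Since a Secchi disk shows up as a roughly circular, high-contrast target, one lightweight option is a Hough circle transform on a blurred grayscale frame. A minimal OpenCV/C++ sketch; the file names and all thresholds/radius bounds are placeholder assumptions you would tune:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // "secchi.jpg" is a placeholder input; load grayscale for the transform.
    cv::Mat img = cv::imread("secchi.jpg", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;

    // Median blur suppresses water-surface noise before edge detection.
    cv::Mat blurred;
    cv::medianBlur(img, blurred, 5);

    // dp, minDist, Canny threshold, accumulator threshold and the radius
    // bounds are all values to tune for your footage.
    std::vector<cv::Vec3f> circles;  // each entry: (x, y, radius)
    cv::HoughCircles(blurred, circles, cv::HOUGH_GRADIENT, 1,
                     blurred.rows / 8, 100, 30, 10, 200);

    for (const cv::Vec3f& c : circles)
        cv::circle(img, cv::Point(cvRound(c[0]), cvRound(c[1])),
                   cvRound(c[2]), cv::Scalar(255), 2);
    cv::imwrite("detected.png", img);
    return 0;
}
```

If the disk's black-and-white quadrant pattern is visible, comparing the mean intensities of the four quadrants inside each candidate circle is a cheap way to reject false positives.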
I have a very large 1.2GB medical image to which I would like to add annotations. I have a tiled TIFF, and I know which tile I would like to replace with a new, equally sized tile/image. Could I do so without reading the entire TIFF into memory and then re-writing the entire 1.2GB file to make such a small change?
In other words, can you swap out a portion of the TIFF without reading and writing the entire image each time?
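In principle, yes: libtiff can open a file in update mode and rewrite a single tile. A hedged sketch (C++ over libtiff's C API); it assumes an uncompressed tiled TIFF, where the new tile occupies exactly the same bytes as the old one. For compressed tiles I would expect libtiff to append the re-encoded tile and patch the offset table rather than rewrite in place, which still avoids copying the whole file:

```cpp
#include <tiffio.h>   // libtiff

// Replaces the tile covering pixel (x, y) with newTileData.
// replace_tile, x, y and newTileData are illustrative names.
int replace_tile(const char* path, uint32_t x, uint32_t y, void* newTileData)
{
    TIFF* tif = TIFFOpen(path, "r+");   // open existing file for update
    if (!tif) return -1;

    ttile_t tile  = TIFFComputeTile(tif, x, y, 0, 0);  // tile index at (x, y)
    tmsize_t size = TIFFTileSize(tif);                 // bytes per decoded tile

    // Re-encode and write just this tile; nothing else is read or rewritten.
    tmsize_t written = TIFFWriteEncodedTile(tif, tile, newTileData, size);

    TIFFClose(tif);
    return written < 0 ? -1 : 0;
}
```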
I trained a detector on a few images and label the live image with the trained data. But when the object in the live image is a little far away from the camera, my algorithm can't detect it. Is there any way to detect distant objects as well?
Things I tried:
(*) Trained on more images (i.e., increased the dataset)
(*) Used image filters, like the median filter and Gaussian filter
But those also failed to detect distant objects.
The detector can't detect objects that are more than about 5-6 ft from the camera.
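One standard workaround is to run the single-scale detector over an upscaled image pyramid, so that a distant (small) object grows into the size range the detector was trained on. A hedged C++/OpenCV sketch; `detect` is a placeholder for whatever detector you trained:

```cpp
#include <opencv2/opencv.hpp>
#include <functional>
#include <vector>

// Runs a single-scale detector over progressively upscaled copies of the
// frame so that small, distant objects grow into the size range the
// detector was trained on. `detect` stands in for your trained detector.
std::vector<cv::Rect> detectMultiScale(
    const cv::Mat& frame,
    const std::function<std::vector<cv::Rect>(const cv::Mat&)>& detect)
{
    std::vector<cv::Rect> all;
    for (double scale = 1.0; scale <= 4.0; scale *= 2.0)
    {
        cv::Mat scaled;
        cv::resize(frame, scaled, cv::Size(), scale, scale, cv::INTER_LINEAR);
        for (const cv::Rect& r : detect(scaled))
        {
            // Map each detection back into original-frame coordinates.
            all.emplace_back(cvRound(r.x / scale), cvRound(r.y / scale),
                             cvRound(r.width / scale),
                             cvRound(r.height / scale));
        }
    }
    // Overlapping detections from different scales would normally be
    // merged afterwards (e.g., with non-maximum suppression).
    return all;
}
```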
I have a stream of JPEG pictures (25 fps, about 1000x700) and I want to render it to the screen with as little CPU usage as possible.
So far I have found a fast way to decompress JPEG images: the GDI+ API. On my machine it takes about 50 ns per frame. I don't know how they manage to do it, but it's true; libjpeg8, for example, is much, much slower as I remember it.
I tried to use GDI+ to output a stretched picture, but it uses too much CPU for such a simple job. So I switched to DirectX 9. That works well for me, but I can't find a good way to convert a GDI+ picture into a DirectX 9 texture.
There are a lot of ways to do it, and all of them are slow and use a lot of CPU.
One of them:
get the surface from the texture
get an HDC from the surface
create a GDI+ Graphics from the HDC
draw without stretching (GdipDrawImageI of the flat API)
Another way (sketched in code below):
lock the bits of the image
lock the bits of the surface
copy the bits
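A hedged sketch of that lock-and-copy path; it assumes the texture was created with D3DUSAGE_DYNAMIC in D3DPOOL_DEFAULT and D3DFMT_A8R8G8B8 (so D3DLOCK_DISCARD is legal), matching PixelFormat32bppARGB on the GDI+ side:

```cpp
#include <windows.h>
#include <gdiplus.h>
#include <d3d9.h>
#include <cstring>

// Copies a GDI+ bitmap into a dynamic D3D9 texture, one scanline at a time.
bool CopyBitmapToTexture(Gdiplus::Bitmap& bmp, IDirect3DTexture9* tex)
{
    Gdiplus::Rect rc(0, 0, bmp.GetWidth(), bmp.GetHeight());
    Gdiplus::BitmapData src;
    if (bmp.LockBits(&rc, Gdiplus::ImageLockModeRead,
                     PixelFormat32bppARGB, &src) != Gdiplus::Ok)
        return false;

    D3DLOCKED_RECT dst;
    if (FAILED(tex->LockRect(0, &dst, nullptr, D3DLOCK_DISCARD)))
    {
        bmp.UnlockBits(&src);
        return false;
    }

    // Source and destination pitches may differ, so copy row by row.
    const UINT rowBytes = bmp.GetWidth() * 4;
    for (UINT y = 0; y < bmp.GetHeight(); ++y)
        std::memcpy(static_cast<BYTE*>(dst.pBits) + y * dst.Pitch,
                    static_cast<BYTE*>(src.Scan0) + y * src.Stride,
                    rowBytes);

    tex->UnlockRect(0);
    bmp.UnlockBits(&src);
    return true;
}
```

With a dynamic texture and D3DLOCK_DISCARD the driver can return a fresh buffer instead of stalling the pipeline, so the memcpy is typically the only CPU cost. To remove even that copy, one option may be to construct the GDI+ Bitmap directly over the locked texture memory with the Bitmap(width, height, stride, format, scan0) constructor and decode straight into it.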
By the way, D3DXCreateTextureFromFileInMemory is slow.
The question is: how can I use an image as a texture without the copy overhead? Or what is the best way to convert an image to a texture?
Emgu.CV, a .NET wrapper for OpenCV, comes with a video surveillance example. When used with a laptop's embedded camera under artificial lighting, the whole picture is "noisy", and the foreground detected by OpenCV's FGDetector is massive.
What can I do (a plain OpenCV answer will also work) to filter out this noise and feed a relatively noiseless image to a BlobTracker?
If you are using simple background subtraction, where you keep a background model and subtract it from the current input image to generate a binary image (255 = foreground, 0 = background), you can look for connected components within the binary image; if they don't occupy a certain minimum area, they are filtered out (turned from 255 to 0).
Using OpenCV, you can use findContours to find all the blobs in the image and contourArea to check whether each blob is big enough to be considered foreground.
Then you can use fillPoly to fill the big blobs with 255 (white) and the small blobs with 0 (black), as in the sketch below.
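A minimal OpenCV/C++ sketch of that filtering step (drawContours with FILLED is used here in place of fillPoly; minArea is a threshold you would tune to your scene):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Removes connected components smaller than minArea from a binary
// foreground mask and returns the cleaned mask.
cv::Mat filterSmallBlobs(const cv::Mat& fgMask, double minArea)
{
    // findContours may modify its input in older OpenCV builds, so copy.
    cv::Mat work = fgMask.clone();
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(work, contours, cv::RETR_EXTERNAL,
                     cv::CHAIN_APPROX_SIMPLE);

    cv::Mat cleaned = cv::Mat::zeros(fgMask.size(), CV_8UC1);
    for (int i = 0; i < static_cast<int>(contours.size()); ++i)
    {
        // Keep only blobs large enough to count as foreground.
        if (cv::contourArea(contours[i]) >= minArea)
            cv::drawContours(cleaned, contours, i, cv::Scalar(255),
                             cv::FILLED);
    }
    return cleaned;
}
```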
I am creating a mosaic of two images based on the region matches between them, using SIFT descriptors. The problem is that when the created mosaic gets too large, MATLAB runs out of memory.
Is there some way of stitching the images without actually loading the complete images into memory?
If not, how do other gigapixel image generation techniques, or the panorama apps, work?
Determine the size of the final mosaic prior to stitching (easy to compute with the size of your input images and the homography).
Write a blank mosaic to file (not in any specific image format, just a raw sequence of bytes laid out as it would be in memory).
I'm assuming you're inverse-mapping the pixels from the original images to the mosaic. So, when you're about to store a pixel's intensity in the mosaic, just seek to that pixel's offset in the file and write it there instead.
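A hedged C++ sketch of that write step, assuming a row-major, 1-byte-per-pixel raw file that has already been pre-allocated to the full mosaic size (MATLAB's fopen/fseek/fwrite would do the same job):

```cpp
#include <cstdint>
#include <fstream>

// Writes one 8-bit pixel into the pre-allocated raw mosaic file without
// holding the mosaic in memory. mosaicWidth is the row stride in pixels;
// the row-major, 1-byte-per-pixel layout is an assumption of this sketch.
void writePixel(std::fstream& mosaic, std::int64_t mosaicWidth,
                std::int64_t x, std::int64_t y, std::uint8_t intensity)
{
    mosaic.seekp(y * mosaicWidth + x, std::ios::beg);
    mosaic.write(reinterpret_cast<const char*>(&intensity), 1);
}

// Usage: std::fstream f("mosaic.raw",
//                       std::ios::in | std::ios::out | std::ios::binary);
```

Seeking per pixel is slow, so in practice you would buffer a whole output row (or tile) and write it in one call to keep the I/O count manageable.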
There are a few ways you can save memory:
Use integer data types, such as uint8, for your data (MATLAB defaults to double, which is 8 bytes per element).
If you're stitching, you can keep only the regions of interest in memory, such as the potential overlap regions.
If none of the others work, you can spatially downsample the images using imresize and work on the resulting smaller images (sketched below).
You can potentially use distributed arrays in the Parallel Computing Toolbox.
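A rough OpenCV/C++ illustration of the first three points (in MATLAB the equivalents would be uint8 casts, plain indexing, and imresize); the file name and the ROI are placeholder assumptions:

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    // Load as 8-bit data: 1 byte per pixel per channel, not 8 (double).
    cv::Mat img = cv::imread("input.jpg", cv::IMREAD_COLOR);  // CV_8UC3
    if (img.empty()) return 1;

    // Keep only a region of interest in memory (a hypothetical overlap
    // band along the left edge; clone() drops the rest of the frame).
    cv::Mat overlap = img(cv::Rect(0, 0, img.cols / 4, img.rows)).clone();

    // Spatially downsample to a quarter of the linear resolution.
    cv::Mat smaller;
    cv::resize(img, smaller, cv::Size(), 0.25, 0.25, cv::INTER_AREA);
    return 0;
}
```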