Is there a newer blob detection/tracking library? Is the legacy one not good? Isn't "legacy" supposed to mean old code that is no longer useful? Does anybody know?
Here is the newer blob detector:
http://opencv.itseez.com/modules/features2d/doc/common_interfaces_of_feature_detectors.html#SimpleBlobDetector
Here is the code I used to track blobs in Emgu CV 3.1:
using (CvTracks tracks = new CvTracks())
using (ImageViewer viewer = new ImageViewer())
using (Capture capture = new Capture())
using (Mat fgMask = new Mat())
{
    BackgroundSubtractorMOG2 bgModel = new BackgroundSubtractorMOG2(0, 0, true);
    capture.ImageGrabbed += delegate(object sender, EventArgs e)
    {
        Mat frame = new Mat();
        capture.Retrieve(frame);
        bgModel.Apply(frame, fgMask); // foreground mask for this frame
        using (CvBlobDetector detector = new CvBlobDetector())
        using (CvBlobs blobs = new CvBlobs())
        {
            detector.Detect(fgMask.ToImage<Gray, Byte>(), blobs);
            blobs.FilterByArea(100, int.MaxValue); // drop tiny noise blobs
            tracks.Update(blobs, 20.0, 10, 0);
            Image<Bgr, Byte> result = new Image<Bgr, byte>(frame.Size);
            using (Image<Gray, Byte> blobMask = detector.DrawBlobsMask(blobs))
            {
                frame.CopyTo(result, blobMask);
            }
            foreach (KeyValuePair<uint, CvTrack> pair in tracks)
            {
                if (pair.Value.Inactive == 0) // only draw the active tracks
                {
                    CvBlob b = blobs[pair.Value.BlobLabel];
                    Bgr color = detector.MeanColor(b, frame.ToImage<Bgr, Byte>());
                    result.Draw(pair.Key.ToString(), pair.Value.BoundingBox.Location, Emgu.CV.CvEnum.FontFace.HersheySimplex, 0.5, color);
                    result.Draw(pair.Value.BoundingBox, color, 2);
                    Point[] contour = b.GetContour();
                    result.Draw(contour, new Bgr(0, 0, 255), 1);
                }
            }
            viewer.Image = frame.ToImage<Bgr, Byte>().ConcateVertical(fgMask.ToImage<Bgr, Byte>().ConcateHorizontal(result));
        }
    };
    capture.Start();
    viewer.ShowDialog();
}
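For reference, the FilterByArea step above boils down to connected-component labeling plus an area threshold. A minimal pure-Python sketch of that idea (illustrative only; this is not the EmguCV implementation, and the function name `blobs_by_area` is my own):

```python
# Connected-component labeling with an area filter, mimicking what
# CvBlobs + FilterByArea do on a binary foreground mask.
# Pure Python on a list-of-lists "mask"; 4-connectivity.

def blobs_by_area(mask, min_area):
    """Return the blobs (sets of (row, col) pixels) with area >= min_area."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # flood fill from (r, c) to collect one blob
                stack, blob = [(r, c)], set()
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    blob.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(blob) >= min_area:
                    blobs.append(blob)
    return blobs

mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
]
# one blob of area 3 (top-left) and one of area 2 (right edge)
print([len(b) for b in blobs_by_area(mask, 3)])  # [3]
```

With min_area = 3 the two-pixel blob is discarded, which is exactly what FilterByArea(100, int.MaxValue) does to small noise blobs in the mask.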
I want to find contours after Canny edge detection, but they show up in white even though the BGR color is set to (255, 0, 0). What is wrong with this code?
private void Capture_ImageGrabbed1(object sender, EventArgs e)
{
    try
    {
        Mat m = new Mat();
        capture.Retrieve(m);
        pic1.Image = m.ToImage<Bgr, byte>().Bitmap;
        if (mode == 1)
        {
            var image = m.ToImage<Bgr, byte>();
            var grayScaleImage = image.Convert<Gray, byte>();
            var blurredImage = grayScaleImage.SmoothGaussian(5, 5, 0, 0);
            var cannyImage = new UMat();
            CvInvoke.Canny(blurredImage, cannyImage, 10, 50);
            VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
            CvInvoke.FindContours(cannyImage, contours, null, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
            CvInvoke.DrawContours(cannyImage, contours, -1, new MCvScalar(255, 0, 0), 2);
            pic2.Image = cannyImage.Bitmap;
        }
    }
    catch (Exception)
    {
        throw;
    }
}
pic2.Image is the PictureBox. The code runs, but no blue contours appear in the attached picture.
The Canny output is a single-channel (grayscale) image, so drawing on it can only produce intensities; the (255, 0, 0) scalar is reduced to its first component, 255, which renders as white. Convert the Canny result to a Bgr image first and draw the contours on that:
private void Capture_ImageGrabbed1(object sender, EventArgs e)
{
    try
    {
        Mat m = new Mat();
        capture.Retrieve(m);
        pic1.Image = m.Bitmap;
        if (mode == 1)
        {
            var image = m.ToImage<Bgr, byte>();
            var grayScaleImage = image.Convert<Gray, byte>();
            var blurredImage = grayScaleImage.SmoothGaussian(5, 5, 0, 0);
            var cannyImage = new UMat();
            CvInvoke.Canny(blurredImage, cannyImage, 10, 50);
            VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
            CvInvoke.FindContours(cannyImage, contours, null, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
            var cannyOut = cannyImage.ToImage<Bgr, byte>(); // 3 channels, so color drawing works
            CvInvoke.DrawContours(cannyOut, contours, -1, new MCvScalar(255, 0, 0), 2);
            pic2.Image = cannyOut.Bitmap;
        }
    }
    catch (Exception)
    {
        throw;
    }
}
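The single-channel behavior can be simulated without OpenCV. A pure-Python sketch (the function `draw_color` is mine, just to show what value ends up written):

```python
# Why DrawContours with (255, 0, 0) looks white on a Canny image:
# drawing on a single-channel image keeps only the first component
# of the scalar, so (255, 0, 0) becomes intensity 255 (white).

def draw_color(channels, scalar):
    """Return the pixel value actually written for a BGR scalar."""
    if channels == 1:
        return scalar[0]           # grayscale: intensity only
    return scalar[:channels]       # BGR image: full (B, G, R) tuple

print(draw_color(1, (255, 0, 0)))  # 255 -> white on a gray image
print(draw_color(3, (255, 0, 0)))  # (255, 0, 0) -> blue
```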
I'm using OpenCV 2.4.6 and JavaCV 0.6, and I'm trying to build a face recognizer. This is my code:
FaceRecognizer ef = createEigenFaceRecognizer(1, 0.00000001);
int facewidth = 92, faceheight = 112;
private boolean stopRec = false;
List<String> names = new ArrayList<String>();

public void recognize(IplImage face) {
    int predicted;
    int[] tabPredicted = new int[2];
    double[] predConfTab = new double[2];
    IplImage resizedFace = IplImage.create(new CvSize(facewidth, faceheight), IPL_DEPTH_8U, 1);
    cvResize(face, resizedFace);
    if (names.size() != 0) {
        ef.predict(resizedFace, tabPredicted, predConfTab);
        predicted = tabPredicted[0];
    } else {
        predicted = -1;
    }
    if (predicted == -1) {
        // adding a new user:
        int i = names.size();
        names.add(name);
        System.out.println("Identified new person: " + names.get(i));
        MatVector mv = new MatVector(1);
        mv.put(0, resizedFace);
        int[] u = new int[] { i };
        ef.train(mv, u);
    }
}
I have tried many configurations, and I am sure I am passing a valid grayscale face image. The problem is that after ef.predict(resizedFace, tabPredicted, predConfTab), tabPredicted[0] is always the index of the last added user and predConfTab[0] is always 0, as if every face exactly matched the last one added.
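One likely cause (an assumption from the snippet, not a confirmed diagnosis): OpenCV's Eigenfaces train() learns a fresh model from the samples it is given rather than appending to the existing one, and here it is called with only the newest face. The model then contains exactly one face, so every prediction matches it with distance 0. The fix is to keep every collected face and label and retrain on the full set whenever a person is added. A pure-Python analogue of the pitfall (TinyRecognizer is a toy nearest-neighbour stand-in, not JavaCV):

```python
# Toy analogue of the retraining pitfall: a "train" that replaces
# the model wholesale, like FaceRecognizer.train().

class TinyRecognizer:
    def __init__(self):
        self.samples = []   # (label, feature) pairs

    def train(self, pairs):
        # Replaces the model entirely, as Eigenfaces train() does.
        self.samples = list(pairs)

    def predict(self, feature):
        # nearest neighbour by absolute distance
        label, _ = min(self.samples, key=lambda s: abs(s[1] - feature))
        return label

rec = TinyRecognizer()
all_faces = [(0, 1.0), (1, 5.0)]   # everything collected so far

# Bug: training with only the newest sample -> everything matches label 1.
rec.train(all_faces[-1:])
print(rec.predict(1.2))   # 1 (the model only knows the last face)

# Fix: retrain with every collected sample whenever one is added.
rec.train(all_faces)
print(rec.predict(1.2))   # 0
```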
I have a custom camera app on Windows Phone 8. I need to add a watermark image to each frame of the camera capture and then record the result to a video. I can customise each frame from the preview using the following code:
int[] pixelData = new int[(int)(camera.PreviewResolution.Width * camera.PreviewResolution.Height)];
camera.GetPreviewBufferArgb32(pixelData);
return pixelData;
and write it back to the preview.
My problem is that while I can show the frames on screen while the user is recording, I can't find a working WP8 solution to encode the frames and audio and save them to a file. I have already tried OpenCV, libav, and others without success. If anyone can point me in the right direction it would be greatly appreciated.
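As an aside, the per-frame watermarking itself is plain alpha blending over the ARGB buffer. A pure-Python sketch of one pixel (illustrative; `blend_argb` is my own helper, not a WP8 API):

```python
# Blend one watermark pixel onto one frame pixel, both as
# (A, R, G, B) tuples like the GetPreviewBufferArgb32 layout.

def blend_argb(frame_px, mark_px):
    """Standard 'over' blend using the watermark's alpha channel."""
    a = mark_px[0] / 255.0
    return (frame_px[0],) + tuple(
        round(a * m + (1 - a) * f)
        for m, f in zip(mark_px[1:], frame_px[1:])
    )

frame = (255, 10, 20, 30)      # opaque frame pixel
mark  = (128, 255, 255, 255)   # half-transparent white watermark
print(blend_argb(frame, mark))  # (255, 133, 138, 143)
```

A fully transparent watermark pixel (alpha 0) leaves the frame pixel unchanged; applying this over the whole buffer before handing frames to an encoder is the usual approach.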
You can do it like this:
private void GetCameraPicture_Click(object sender, RoutedEventArgs e)
{
    Microsoft.Phone.Tasks.CameraCaptureTask cameraCaptureTask = new Microsoft.Phone.Tasks.CameraCaptureTask();
    cameraCaptureTask.Completed += cct_Completed;
    cameraCaptureTask.Show();
}

private void cct_Completed(object sender, Microsoft.Phone.Tasks.PhotoResult e)
{
    try
    {
        if (e.TaskResult == Microsoft.Phone.Tasks.TaskResult.OK)
        {
            var imageStream = e.ChosenPhoto;
            var name = e.OriginalFileName;
            using (MemoryStream mem = new MemoryStream())
            {
                TextBlock tb = new TextBlock() { Text = DateTime.Now.ToString("dd MMM yyyy, HH:mm"), Foreground = new SolidColorBrush(Color.FromArgb(128, 0, 0, 0)), FontSize = 40 };
                BitmapImage finalImage = new BitmapImage();
                finalImage.SetSource(imageStream);
                WriteableBitmap wbFinal = new WriteableBitmap(finalImage);
                wbFinal.Render(tb, null); // overlay the timestamp text
                wbFinal.Invalidate();
                wbFinal.SaveJpeg(mem, wbFinal.PixelWidth, wbFinal.PixelHeight, 0, 100);
                mem.Seek(0, System.IO.SeekOrigin.Begin);
                MediaLibrary lib = new MediaLibrary();
                lib.SavePictureToCameraRoll("Copy" + name, mem.ToArray());
            }
        }
    }
    catch (Exception exp) { MessageBox.Show(exp.Message); }
}
Hope it may help you.
I would like to do image stitching to produce a panoramic image covering roughly 360 degrees from a single position. For now my code looks like this. This is the code behind the capture-while-stitching button:
int n = 1;
String fileName;
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://192.168.0.4/view/snapshot.shtml?picturepath=/jpg/image.jpg");
res = request.GetResponse();
string[] CapturedImage = new string[28];
Image<Bgr, Byte>[] TotalImage = new Image<Bgr, Byte>[28];
do
{
    fileName = @"D:\Ervin Loong (116834P)\Program Testing\Images\" + "Image" + (n++) + ".jpg"; // file location & name
}
while (System.IO.File.Exists(fileName)); // skip names already taken; the first free name gets the new picture
AMCLiveFeed.SaveCurrentImage(0, fileName);
res.Close();
if (n == 29)
{
    for (int i = 0; i < 28; i++) // load all 28 saved images
    {
        CapturedImage[i] = @"D:\Ervin Loong (116834P)\Program Testing\Images\" + "Image" + (i + 1) + ".jpg";
        TotalImage[i] = new Image<Bgr, byte>(CapturedImage[i]);
    }
    try
    {
        // Although the Stitcher class is built for GPU acceleration, a false
        // flag must be passed to select CPU processing, as GPU is not implemented yet.
        using (Stitcher stitcher = new Stitcher(false))
        {
            Image<Bgr, Byte> CapturedResult = stitcher.Stitch(TotalImage);
            IMGBXDisplayStitched.Image = CapturedResult; // the image box displays the stitched result
        }
    }
    finally
    {
        foreach (Image<Bgr, Byte> DisposeImage in TotalImage)
        {
            DisposeImage.Dispose();
        }
    }
}
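The do/while at the top probes for the first Image{n}.jpg that does not already exist. The same logic in pure Python, with a set standing in for the file system so the sketch is self-contained (`next_free_name` is my own name for it):

```python
# Find the first "Image{n}.jpg" not already taken, like the
# do { ... } while (File.Exists(fileName)) loop above.

def next_free_name(existing, prefix="Image", ext=".jpg"):
    n = 1
    while prefix + str(n) + ext in existing:
        n += 1
    return prefix + str(n) + ext

existing = {"Image1.jpg", "Image2.jpg"}
print(next_free_name(existing))  # Image3.jpg
```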
I'm porting an AIR app to iOS. The app saves documents locally with File.browseForSave(), which does not seem to work on iPad. How is it possible to save files on iPad?
P.S. Tracing File.url gives "app-storage:/New%20map.comap". Maybe names containing % are not allowed on iOS?
Best wishes
You can only save to specific directories within the app's sandboxed area, e.g. the Documents directory. Something like this saves an image to the Documents directory. It uses the Adobe JPGEncoder to create a byte array that is written out, and the crop method to take a snapshot of the stage.
private function createImages():void {
    var snapShot:Bitmap = crop(0, 0, 1024, 768);
    f = File.documentsDirectory.resolvePath("test.jpg");
    var stream:FileStream = new FileStream();
    stream.open(f, FileMode.WRITE);
    var j:JPGEncoder = new JPGEncoder(80);
    var bytes:ByteArray = j.encode(snapShot.bitmapData);
    stream.writeBytes(bytes, 0, bytes.bytesAvailable);
    stream.close();
    stream.openAsync(f, FileMode.READ);
    stream.addEventListener(Event.COMPLETE, imagewritten, false, 0, true);
}

private function imagewritten(e:Event):void {
    trace("done");
}

private function crop(_x:Number, _y:Number, _width:Number, _height:Number, displayObject:DisplayObject = null):Bitmap {
    var cropArea:Rectangle = new Rectangle(0, 0, _width, _height);
    var croppedBitmap:Bitmap = new Bitmap(new BitmapData(_width, _height), PixelSnapping.ALWAYS, true);
    croppedBitmap.bitmapData.draw((displayObject != null) ? displayObject : stage, new Matrix(1, 0, 0, 1, -_x, -_y), null, null, cropArea, true);
    cropArea = null;
    return croppedBitmap;
}
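The crop itself just copies a _width x _height rectangle starting at (_x, _y); the Matrix(1, 0, 0, 1, -_x, -_y) translation shifts the source so that region lands at the origin of the new bitmap. The same idea on a plain 2D pixel grid (pure Python, illustrative only):

```python
# Copy a width x height sub-rectangle starting at (x, y),
# mirroring what the translated draw() call does to the bitmap.

def crop(pixels, x, y, width, height):
    return [row[x:x + width] for row in pixels[y:y + height]]

grid = [[10, 11, 12],
        [20, 21, 22],
        [30, 31, 32]]
print(crop(grid, 1, 1, 2, 2))  # [[21, 22], [31, 32]]
```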