How to create a 3D surface from skeletonized images? - opencv

I'm using OpenCV 3.1 (with VTK 7.1) and am attempting to create a 3D-like surface from 2D images.
The idea is simple:
read in a continuous image stream (each frame of the video stream),
skeletonize each frame,
and stack the results in 3-dimensional space. Here is the problem:
how can I convert an image to a point cloud, and how can I display it?
Here is part of the code:
for (;;)
{
    inStream >> singleFrm;
    if (singleFrm.empty()) {
        break;
    }
    else {
        imshow("Origin", singleFrm);
        singleFrm = singleFrm(roi);
        cvtColor(singleFrm, roiSkelFrm, CV_RGB2GRAY);
        const Size2d size(roiSkelFrm.cols, roiSkelFrm.rows);
        threshold(roiSkelFrm, roiSkelFrm, 80, 255, cv::THRESH_BINARY);
        thinning(roiSkelFrm, roiSkelFrm);
        stackChFrm[0] = roiSkelFrm;
        stackChFrm[1] = roiSkelFrm;
        stackChFrm[2] = roiSkelFrm;
        cv::merge(stackChFrm, 3, skelFrm);
        skelFrm.convertTo(skelFrm, CV_32FC3);
        viz::WCloud aCloudSlice(skelFrm, viz::Color::white());
        myWindow.showWidget("image", aCloudSlice);
        myWindow.spinOnce(1, true);
        if (waitKey(1) == 27)
            break;
    }
}
while (!myWindow.wasStopped())
{
    myWindow.spinOnce(1, true);
}
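A note on the point-cloud question, added here for clarity: viz::WCloud expects a Mat of 3D point coordinates (for example an Nx1 CV_32FC3 matrix, one xyz triple per element), not a 3-channel image, so merging the skeleton into three channels and calling convertTo is not enough. Below is a minimal, hedged sketch of one way to do the stacking, assuming an OpenCV 3.x build with the viz module and some thinning() helper such as the one used above (the camera index and threshold are placeholders): every skeleton pixel (x, y) of frame number z is appended to a growing cloud as the point (x, y, z).

// Hedged sketch, not the original code: accumulate the skeleton pixels of each
// frame as 3D points, using the frame index as the z coordinate, and show the
// stack as a single point cloud.
#include <opencv2/opencv.hpp>
#include <opencv2/viz.hpp>

int main()
{
    cv::VideoCapture inStream(0);                  // placeholder input
    cv::viz::Viz3d myWindow("Skeleton stack");
    std::vector<cv::Point3f> cloud;                // all skeleton points so far
    cv::Mat frame, gray;

    for (int z = 0; inStream.read(frame); ++z)
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::threshold(gray, gray, 80, 255, cv::THRESH_BINARY);
        // thinning(gray, gray);                   // same helper as in the question

        std::vector<cv::Point> pts;                // 2D skeleton pixels of this frame
        cv::findNonZero(gray, pts);
        for (const cv::Point& p : pts)
            cloud.push_back(cv::Point3f((float)p.x, (float)p.y, (float)z));

        if (!cloud.empty())
        {
            cv::Mat cloudMat(cloud, true);         // Nx1 CV_32FC3 matrix of xyz points
            cv::viz::WCloud wcloud(cloudMat, cv::viz::Color::white());
            myWindow.showWidget("cloud", wcloud);  // reusing the id replaces the widget
        }
        cv::imshow("skeleton", gray);
        myWindow.spinOnce(1, true);
        if (cv::waitKey(1) == 27)
            break;
    }
    while (!myWindow.wasStopped())
        myWindow.spinOnce(1, true);
    return 0;
}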

Related

opencv Unable to record video from camera to a file

I am trying to capture video from a webcam, but I always end up with a 441-byte file.
The console also shows this error:
OpenCVCMD[37317:1478193] GetDYLDEntryPointWithImage(/System/Library/Frameworks/AppKit.framework/Versions/Current/AppKit,_NSCreateAppKitServicesMenu) failed.
Code Snippet
void demoVideoMaker() {
    //Camera Input
    VideoCapture cap(0);
    VideoCapture *vidoFeed = &cap;
    namedWindow("VIDEO", WINDOW_AUTOSIZE);
    //Determine the size of inputFeed
    Size inpFeedSize = Size((int) cap.get(CV_CAP_PROP_FRAME_WIDTH), // Acquire input size
                            (int) cap.get(CV_CAP_PROP_FRAME_HEIGHT));
    cout<<"Input Feed Size: "<<inpFeedSize<<endl;
    VideoWriter outputVideo;
    char fName[] = "capturedVid.avi";
    outputVideo.open(fName, CV_FOURCC('P','I','M','1'), 20, inpFeedSize, true);
    if (!outputVideo.isOpened()) {
        cout<<"Failed to write Video"<<endl;
    }
    //Event Loop
    Mat frame;
    bool recordingOn = false;
    while(1){
        //Process user input if any
        char ch = char(waitKey(10));
        if (ch == 'q') {
            break;
        }
        if (ch == 'r') {
            recordingOn = !recordingOn;
        }
        //Move to next frame
        (*vidoFeed)>>frame;
        if (frame.empty()) {
            printf("\nEmpty Frame encountered");
        } else {
            imshow("VIDEO", frame);
            if(recordingOn) {
                cout<<".";
                outputVideo.write(frame);
            }
        }
    }
}
I am using OpenCV 2.4 and Xcode 8.2 on macOS Sierra 10.12.1.
I tried changing the codec and the fps, but nothing helped. I assumed this would be a straightforward task but got stuck here. Please help.
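An editorial note, hedged: a 441-byte AVI usually means only the container header was written, i.e. the writer never encoded a single frame. That typically happens when the requested FOURCC is not available in the local OpenCV/FFmpeg build, or when the frames passed to write() do not match the size given to open(). The sketch below is one way to test this with the MJPG codec, which is often available where MPEG-1 ('P','I','M','1') is not; the file name and key handling are placeholders, and the waitKey delay roughly paces the loop to the 20 fps declared to the writer.

// Hedged sketch (OpenCV 2.4 API): write webcam frames with the MJPG codec.
// If isOpened() still returns false, the codec is not supported by this
// build and another FOURCC has to be tried.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) { std::cerr << "No camera" << std::endl; return 1; }

    cv::Size frameSize((int)cap.get(CV_CAP_PROP_FRAME_WIDTH),
                       (int)cap.get(CV_CAP_PROP_FRAME_HEIGHT));

    cv::VideoWriter writer("capturedVid.avi",
                           CV_FOURCC('M','J','P','G'),  // try MJPG instead of PIM1
                           20,                          // declared fps
                           frameSize,
                           true);                       // colour frames
    if (!writer.isOpened()) { std::cerr << "Failed to open writer" << std::endl; return 1; }

    cv::Mat frame;
    for (;;)
    {
        cap >> frame;
        if (frame.empty())
            break;
        writer.write(frame);                            // frame size must equal frameSize
        cv::imshow("VIDEO", frame);
        if ((char)cv::waitKey(50) == 'q')               // ~20 fps pacing, 'q' to quit
            break;
    }
    return 0;
}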

Contour position with "findcontour" opencv on processing

I'm working on a project where I have to use a webcam, an Arduino, a Raspberry Pi and an IR proximity sensor. I managed to put everything together with some help from Google, but I still have one big problem.
I'm using the OpenCV library in Processing, and I'd like the contours grabbed from the webcam to be drawn in the center of the sketch, but I've only managed to move the video, not the contours. Here's my code.
I hope you'll be able to help me :)
All the best
Alexandre
////////////////////////////////////////////
////////////////////////////////// LIBRARIES
////////////////////////////////////////////
import processing.serial.*;
import gab.opencv.*;
import processing.video.*;

/////////////////////////////////////////////////
////////////////////////////////// INITIALIZATION
/////////////////////////////////////////////////
Movie mymovie;
Capture video;
OpenCV opencv;
Contour contour;

////////////////////////////////////////////
////////////////////////////////// VARIABLES
////////////////////////////////////////////
int lf = 10;            // Linefeed in ASCII
String myString = null;
Serial myPort;          // The serial port
int sensorValue = 0;
int x = 300;

/////////////////////////////////////////////
////////////////////////////////// VOID SETUP
/////////////////////////////////////////////
void setup() {
  size(1280, 1024);
  // List all the available serial ports
  printArray(Serial.list());
  // Open the port you are using at the rate you want:
  myPort = new Serial(this, Serial.list()[1], 9600);
  myPort.clear();
  // Throw out the first reading, in case we started reading
  // in the middle of a string from the sender.
  myString = myPort.readStringUntil(lf);
  myString = null;
  opencv = new OpenCV(this, 720, 480);
  video = new Capture(this, 720, 480);
  mymovie = new Movie(this, "visage.mov");
  opencv.startBackgroundSubtraction(5, 3, 0.5);
  mymovie.loop();
}

////////////////////////////////////////////
////////////////////////////////// VOID DRAW
////////////////////////////////////////////
void draw() {
  image(mymovie, 0, 0);
  image(video, 20, 20);
  //tint(150, 20);
  noFill();
  stroke(255, 0, 0);
  strokeWeight(1);
  // check if there is something new on the serial port
  while (myPort.available() > 0) {
    // store the data in myString
    myString = myPort.readStringUntil(lf);
    // check if we really have something
    if (myString != null) {
      myString = myString.trim(); // let's remove whitespace characters
      // if we have at least one character...
      if (myString.length() > 0) {
        println(myString); // print out the data we just received
        // if we received a number (e.g. 123) store it in sensorValue; we will use this to change the background color.
        try {
          sensorValue = Integer.parseInt(myString);
        }
        catch(Exception e) {
        }
      }
    }
  }
  if (x < sensorValue) {
    video.start();
    opencv.loadImage(video);
  }
  if (x > sensorValue) {
    image(mymovie, 0, 0);
  }
  opencv.updateBackground();
  opencv.dilate();
  opencv.erode();
  for (Contour contour : opencv.findContours()) {
    contour.draw();
  }
}

//////////////////////////////////////////////
////////////////////////////////// VOID CUSTOM
//////////////////////////////////////////////
void captureEvent(Capture video) {
  video.read(); // read the webcam image
}

void movieEvent(Movie myMovie) {
  myMovie.read();
}
One approach you could use is to call the translate() function to move the origin of the canvas before you call contour.draw(). Something like this:
translate(moveX, moveY);
for (Contour contour : opencv.findContours()) {
  contour.draw();
}
What you use for moveX and moveY depends entirely on exactly what you're trying to do. You might use whatever position you're using to draw the video (if you want the contours displayed on top of the video), or you might use width/2 and height/2 (maybe minus a bit to really center the contours).
More info can be found in the reference. Play with a bunch of different values, and post an MCVE if you get stuck. Good luck.

Improve Face Recognition

I am trying to develop a face-recognition app for Android using the JavaCV FaceRecognizer, but so far I am getting very poor results. It recognizes images of the person it was trained on, but it also "recognizes" unknown images. For known faces it returns a large distance, most of the time 70-90 and sometimes above 90, while unknown images also land in the 70-90 range.
So how can I improve the face recognition? What techniques are there? What success rate can you normally expect with this?
I have never worked with image processing, so I will appreciate any guidelines.
Here is the code:
public class PersonRecognizer {
    public final static int MAXIMG = 100;
    FaceRecognizer faceRecognizer;
    String mPath;
    int count=0;
    labels labelsFile;
    static final int WIDTH= 70;
    static final int HEIGHT= 70;
    private static final String TAG = "PersonRecognizer";
    private int mProb=999;

    PersonRecognizer(String path)
    {
        faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createLBPHFaceRecognizer(2,8,8,8,100);
        // path=Environment.getExternalStorageDirectory()+"/facerecog/faces/";
        mPath=path;
        labelsFile= new labels(mPath);
    }

    void changeRecognizer(int nRec)
    {
        switch(nRec) {
            case 0: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createLBPHFaceRecognizer(1,8,8,8,100);
                break;
            case 1: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createFisherFaceRecognizer();
                break;
            case 2: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createEigenFaceRecognizer();
                break;
        }
        train();
    }

    void add(Mat m, String description)
    {
        Bitmap bmp= Bitmap.createBitmap(m.width(), m.height(), Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(m,bmp);
        bmp= Bitmap.createScaledBitmap(bmp, WIDTH, HEIGHT, false);
        FileOutputStream f;
        try
        {
            f = new FileOutputStream(mPath+description+"-"+count+".jpg",true);
            count++;
            bmp.compress(Bitmap.CompressFormat.JPEG, 100, f);
            f.close();
        } catch (Exception e) {
            Log.e("error",e.getCause()+" "+e.getMessage());
            e.printStackTrace();
        }
    }

    public boolean train() {
        File root = new File(mPath);
        FilenameFilter pngFilter = new FilenameFilter() {
            public boolean accept(File dir, String name) {
                return name.toLowerCase().endsWith(".jpg");
            };
        };
        File[] imageFiles = root.listFiles(pngFilter);
        MatVector images = new MatVector(imageFiles.length);
        int[] labels = new int[imageFiles.length];
        int counter = 0;
        int label;
        IplImage img=null;
        IplImage grayImg;
        int i1=mPath.length();
        for (File image : imageFiles) {
            String p = image.getAbsolutePath();
            img = cvLoadImage(p);
            if (img==null)
                Log.e("Error","Error cVLoadImage");
            Log.i("image",p);
            int i2=p.lastIndexOf("-");
            int i3=p.lastIndexOf(".");
            int icount = 0;
            try
            {
                icount=Integer.parseInt(p.substring(i2+1,i3));
            }
            catch(Exception ex)
            {
                ex.printStackTrace();
            }
            if (count<icount) count++;
            String description=p.substring(i1,i2);
            if (labelsFile.get(description)<0)
                labelsFile.add(description, labelsFile.max()+1);
            label = labelsFile.get(description);
            grayImg = IplImage.create(img.width(), img.height(), IPL_DEPTH_8U, 1);
            cvCvtColor(img, grayImg, CV_BGR2GRAY);
            images.put(counter, grayImg);
            labels[counter] = label;
            counter++;
        }
        if (counter>0)
            if (labelsFile.max()>1)
                faceRecognizer.train(images, labels);
        labelsFile.Save();
        return true;
    }

    public boolean canPredict()
    {
        if (labelsFile.max()>1)
            return true;
        else
            return false;
    }

    public String predict(Mat m) {
        if (!canPredict())
            return "";
        int n[] = new int[1];
        double p[] = new double[1];
        // convert Mat to black and white
        /*Mat gray_m = new Mat();
        Imgproc.cvtColor(m, gray_m, Imgproc.COLOR_RGBA2GRAY);*/
        IplImage ipl = MatToIplImage(m, WIDTH, HEIGHT);
        faceRecognizer.predict(ipl, n, p);
        if (n[0]!=-1)
        {
            mProb=(int)p[0];
            Log.v(TAG, "Distance = "+mProb+"");
            Log.v(TAG, "N = "+n[0]);
        }
        else
        {
            mProb=-1;
            Log.v(TAG, "Distance = "+mProb);
        }
        if (n[0] != -1)
        {
            return labelsFile.get(n[0]);
        }
        else
        {
            return "Unknown";
        }
    }

    IplImage MatToIplImage(Mat m,int width,int heigth)
    {
        Bitmap bmp;
        try
        {
            bmp = Bitmap.createBitmap(m.width(), m.height(), Bitmap.Config.RGB_565);
        }
        catch(OutOfMemoryError er)
        {
            bmp = Bitmap.createBitmap(m.width()/2, m.height()/2, Bitmap.Config.RGB_565);
            er.printStackTrace();
        }
        Utils.matToBitmap(m, bmp);
        return BitmapToIplImage(bmp, width, heigth);
    }

    IplImage BitmapToIplImage(Bitmap bmp, int width, int height) {
        if ((width != -1) || (height != -1)) {
            Bitmap bmp2 = Bitmap.createScaledBitmap(bmp, width, height, false);
            bmp = bmp2;
        }
        IplImage image = IplImage.create(bmp.getWidth(), bmp.getHeight(),
                IPL_DEPTH_8U, 4);
        bmp.copyPixelsToBuffer(image.getByteBuffer());
        IplImage grayImg = IplImage.create(image.width(), image.height(),
                IPL_DEPTH_8U, 1);
        cvCvtColor(image, grayImg, opencv_imgproc.CV_BGR2GRAY);
        return grayImg;
    }

    protected void SaveBmp(Bitmap bmp,String path)
    {
        FileOutputStream file;
        try
        {
            file = new FileOutputStream(path , true);
            bmp.compress(Bitmap.CompressFormat.JPEG, 100, file);
            file.close();
        }
        catch (Exception e) {
            // TODO Auto-generated catch block
            Log.e("",e.getMessage()+e.getCause());
            e.printStackTrace();
        }
    }

    public void load() {
        train();
    }

    public int getProb() {
        // TODO Auto-generated method stub
        return mProb;
    }
}
I have faced similar challenges recently; here are the things that helped me get better results (a minimal preprocessing sketch in OpenCV C++ follows this list):
Crop the faces out of the images - this removes unnecessary pixels at inference time.
Resize the cropped face images - this affects face-landmark detection; try different scales on test sets to see what works best. It also affects inference time: the smaller the size, the faster the inference.
Improve the brightness of the face images - I found this really helpful. Detecting face landmarks in darker images did not work well, mainly because the model was pre-trained mostly on white faces; understanding the training data helps when dealing with bias.
Convert to grayscale images - I have seen this recommended in many forums, with the claim that it helps find edges efficiently and that processing is cheaper than for colour images (3 RGB channels); however, it did not help much in my case.
Capture (register) as many images as possible per person, with different angles, lighting and other variations - this really helps, since recognition compares against the encodings of the stored images.
Implement 1-1 comparison for face verification - for example, in my system I captured 10 pictures per person, and at verification time I compare against those 10 pictures instead of against the encodings of every person stored in the system. This helps avoid false positives, although the use cases for this setup are limited: I use it for face authentication, comparing the new face against the existing faces registered under the same mobile number.
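To make the first four points concrete, here is a minimal, hedged sketch of the preprocessing stage written with the plain OpenCV C++ API (the question uses JavaCV on Android, but the calls map almost one-to-one): detect the face, crop it, convert to grayscale, equalize the histogram and resize to a fixed size before handing it to the recognizer. The cascade file name is a placeholder, and the 70x70 target size simply mirrors the WIDTH/HEIGHT constants in the code above.

// Hedged illustration of the tips above (crop, grayscale, brightness
// normalisation, fixed-size resize); the cascade path is a placeholder.
#include <opencv2/opencv.hpp>

cv::Mat preprocessFace(const cv::Mat& bgrImage, cv::CascadeClassifier& faceCascade)
{
    cv::Mat gray;
    cv::cvtColor(bgrImage, gray, cv::COLOR_BGR2GRAY);   // grayscale input for the cascade

    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(80, 80));
    if (faces.empty())
        return cv::Mat();                               // caller decides what to do

    cv::Mat face = gray(faces[0]);                      // crop: drop background pixels
    cv::equalizeHist(face, face);                       // normalise brightness/contrast

    cv::Mat resized;
    cv::resize(face, resized, cv::Size(70, 70));        // same fixed size as the training images
    return resized;
}

// Usage, with a placeholder cascade file:
// cv::CascadeClassifier cascade("haarcascade_frontalface_alt.xml");
// cv::Mat sample = preprocessFace(frame, cascade);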
My understanding as of today: face recognition works well but is not 100% accurate, so we have to understand the model architecture, the training data and our own requirements, and deploy accordingly to get a better outcome. Here are some points that helped me improve the overall system:
Implement a fallback method - give the user an option when the system fails to recognize them correctly; for example, if face authentication fails for some reason, show an enter-PIN option.
In critical systems, add periodic human intervention to confirm the system's result - for example, if the system denies a user based on the FR result, have a human agent verify the failed result and let the user in.
Use multiple authentication factors - deploy the face recognition system as an addition to the existing one; for example, after the user has logged in with credentials, use face recognition to verify that it is the intended person.
Design your user interface so that, at verification time, the user knows how to behave (eyes open, mouth closed, etc.) without hurting the user experience.
Provide clear instructions to users when they deal with the system - for example, let them know that an FR system is in place and that they need to show their face in good lighting conditions, etc.

OpenCV-iOS demos run at 6-10 FPS on iPad, is this normal?

The OpenCV-iOS detection and tracking code runs at between 6 and 10 FPS on my iPad.
Is this normal?
I figured their "sample" code would run as fast as it could...
DetectTrackSample.cpp
#include <iostream>
#include "DetectTrackSample.h"
#include "ObjectTrackingClass.h"
#include "FeatureDetectionClass.h"
#include "Globals.h"

DetectTrackSample::DetectTrackSample()
    : m_fdAlgorithmName("ORB")
    , m_feAlgorithmName("FREAK")
    , m_maxCorners(200)
    , m_hessianThreshold(400)
    , m_nFeatures(500)
    , m_minMatches(4)
    , m_drawMatches(true)
    , m_drawPerspective(true)
{
    std::vector<std::string> fdAlgos, feAlgos, otAlgos;

    // feature detection options
    fdAlgos.push_back("ORB");
    fdAlgos.push_back("SURF");
    registerOption("Detector", "", &m_fdAlgorithmName, fdAlgos);

    // feature extraction options
    feAlgos.push_back("ORB");
    feAlgos.push_back("SURF");
    feAlgos.push_back("FREAK");
    registerOption("Extractor", "", &m_feAlgorithmName, feAlgos);

    // SURF feature detector options
    registerOption("hessianThreshold", "SURF", &m_hessianThreshold, 300, 500);

    // ORB feature detector options
    registerOption("nFeatures", "ORB", &m_nFeatures, 0, 1500);

    // matcher options
    registerOption("Minumum matches", "Matcher", &m_minMatches, 4, 200);

    // object tracking options
    registerOption("m_maxCorners", "Tracking", &m_maxCorners, 0, 1000);

    // Display options
    registerOption("Matches", "Draw", &m_drawMatches);
    registerOption("Perspective", "Draw", &m_drawPerspective);
}

//! Gets a sample name
std::string DetectTrackSample::getName() const
{
    return "Detection and Tracking";
}

std::string DetectTrackSample::getSampleIcon() const
{
    return "DetectTrackSampleIcon.png";
}

//! Returns a detailed sample description
std::string DetectTrackSample::getDescription() const
{
    return "Combined feature detection and object tracking sample.";
}

//! Returns true if this sample requires setting a reference image for later use
bool DetectTrackSample::isReferenceFrameRequired() const
{
    return true;
}

//! Sets the reference frame for later processing
void DetectTrackSample::setReferenceFrame(const cv::Mat& reference)
{
    getGray(reference, objectImage);
    computeObject = true;
}

// Reset object keypoints and descriptors
void DetectTrackSample::resetReferenceFrame() const
{
    detectObject = false;
    computeObject = false;
    trackObject = false;
}

//! Processes a frame and returns output image
bool DetectTrackSample::processFrame(const cv::Mat& inputFrame, cv::Mat& outputFrame)
{
    // display the frame
    inputFrame.copyTo(outputFrame);

    // convert input frame to gray scale
    getGray(inputFrame, imageNext);

    // begin tracking object
    if ( trackObject ) {
        // prepare the tracking class
        ObjectTrackingClass tracker;
        tracker.setMaxCorners(m_maxCorners);

        // track object
        tracker.track(outputFrame,
                      imagePrev,
                      imageNext,
                      pointsPrev,
                      pointsNext,
                      status,
                      err);

        // check if the next points array isn't empty
        if ( pointsNext.empty() ) {
            // if it is, go back to detect
            trackObject = false;
            detectObject = true;
        }
    }

    // try to find the object in the scene
    if (detectObject) {
        // prepare the robust matcher and set parameters
        FeatureDetectionClass rmatcher;
        rmatcher.setConfidenceLevel(0.98);
        rmatcher.setMinDistanceToEpipolar(1.0);
        rmatcher.setRatio(0.65f);

        // feature detector setup
        if (m_fdAlgorithmName == "SURF")
        {
            // prepare keypoints detector
            cv::Ptr<cv::FeatureDetector> detector = new cv::SurfFeatureDetector(m_hessianThreshold);
            rmatcher.setFeatureDetector(detector);
        }
        else if (m_fdAlgorithmName == "ORB")
        {
            // prepare feature detector and detect the object keypoints
            cv::Ptr<cv::FeatureDetector> detector = new cv::OrbFeatureDetector(m_nFeatures);
            rmatcher.setFeatureDetector(detector);
        }
        else
        {
            std::cerr << "Unsupported algorithm:" << m_fdAlgorithmName << std::endl;
            assert(false);
        }

        // feature extractor and matcher setup
        if (m_feAlgorithmName == "SURF")
        {
            // prepare feature extractor
            cv::Ptr<cv::DescriptorExtractor> extractor = new cv::SurfDescriptorExtractor;
            rmatcher.setDescriptorExtractor(extractor);

            // prepare the appropriate matcher for SURF
            cv::Ptr<cv::DescriptorMatcher> matcher = new cv::BFMatcher(cv::NORM_L2, false);
            rmatcher.setDescriptorMatcher(matcher);
        } else if (m_feAlgorithmName == "ORB")
        {
            // prepare feature extractor
            cv::Ptr<cv::DescriptorExtractor> extractor = new cv::OrbDescriptorExtractor;
            rmatcher.setDescriptorExtractor(extractor);

            // prepare the appropriate matcher for ORB
            cv::Ptr<cv::DescriptorMatcher> matcher = new cv::BFMatcher(cv::NORM_HAMMING, false);
            rmatcher.setDescriptorMatcher(matcher);
        } else if (m_feAlgorithmName == "FREAK")
        {
            // prepare feature extractor
            cv::Ptr<cv::DescriptorExtractor> extractor = new cv::FREAK;
            rmatcher.setDescriptorExtractor(extractor);

            // prepare the appropriate matcher for FREAK
            cv::Ptr<cv::DescriptorMatcher> matcher = new cv::BFMatcher(cv::NORM_HAMMING, false);
            rmatcher.setDescriptorMatcher(matcher);
        }
        else {
            std::cerr << "Unsupported algorithm:" << m_feAlgorithmName << std::endl;
            assert(false);
        }

        // call the RobustMatcher to match the object keypoints with the scene keypoints
        cv::vector<cv::Point2f> objectKeypoints2f, sceneKeypoints2f;
        std::vector<cv::DMatch> matches;
        cv::Mat fundamentalMat = rmatcher.match(imageNext,         // input scene image
                                                objectKeypoints,   // input computed object image keypoints
                                                objectDescriptors, // input computed object image descriptors
                                                matches,           // output matches
                                                objectKeypoints2f, // output object keypoints (Point2f)
                                                sceneKeypoints2f); // output scene keypoints (Point2f)

        if ( matches.size() >= m_minMatches ) { // assume something was detected
            // draw perspective lines (box object in the frame)
            if (m_drawPerspective)
                rmatcher.drawPerspective(outputFrame,
                                         objectImage,
                                         objectKeypoints2f,
                                         sceneKeypoints2f);

            // draw keypoint matches as yellow points on the output frame
            if (m_drawMatches)
                rmatcher.drawMatches(outputFrame,
                                     matches,
                                     sceneKeypoints2f);

            // init points array for tracking
            pointsNext = sceneKeypoints2f;

            // set flags
            detectObject = false;
            trackObject = true;
        }
    }

    // compute object image keypoints and descriptors
    if (computeObject) {
        // select feature detection mechanism
        if ( m_fdAlgorithmName == "SURF" )
        {
            // prepare keypoints detector
            cv::Ptr<cv::FeatureDetector> detector = new cv::SurfFeatureDetector(m_hessianThreshold);

            // Compute object keypoints
            detector->detect(objectImage,objectKeypoints);
        }
        else if ( m_fdAlgorithmName == "ORB" )
        {
            // prepare feature detector and detect the object keypoints
            cv::Ptr<cv::FeatureDetector> detector = new cv::OrbFeatureDetector(m_nFeatures);

            // Compute object keypoints
            detector->detect(objectImage,objectKeypoints);
        }
        else {
            std::cerr << "Unsupported algorithm:" << m_fdAlgorithmName << std::endl;
            assert(false);
        }

        // select feature extraction mechanism
        if ( m_feAlgorithmName == "SURF" )
        {
            cv::Ptr<cv::DescriptorExtractor> extractor = new cv::SurfDescriptorExtractor;

            // Compute object feature descriptors
            extractor->compute(objectImage,objectKeypoints,objectDescriptors);
        }
        else if ( m_feAlgorithmName == "ORB" )
        {
            cv::Ptr<cv::DescriptorExtractor> extractor = new cv::OrbDescriptorExtractor;

            // Compute object feature descriptors
            extractor->compute(objectImage,objectKeypoints,objectDescriptors);
        }
        else if ( m_feAlgorithmName == "FREAK" )
        {
            cv::Ptr<cv::DescriptorExtractor> extractor = new cv::FREAK;

            // Compute object feature descriptors
            extractor->compute(objectImage,objectKeypoints,objectDescriptors);
        }
        else {
            std::cerr << "Unsupported algorithm:" << m_feAlgorithmName << std::endl;
            assert(false);
        }

        // set flags
        computeObject = false;
        detectObject = true;
    }

    // backup previous frame
    imageNext.copyTo(imagePrev);

    // backup points array
    std::swap(pointsNext, pointsPrev);

    return true;
}
This can be normal. It depends on your detection and tracking code.
For example:
On an iPhone 4 using the CV_HAAR_FIND_BIGGEST_OBJECT option the demo app achieves up to 4 fps when a face is in the frame. This drops to around 1.5 fps when no face is present. Without the CV_HAAR_FIND_BIGGEST_OBJECT option multiple faces can be detected in a frame at around 1.8 fps. Note that the live video preview always runs at the full 30 fps irrespective of the processing frame rate, and processFrame:videoRect:videoOrientation: is called at 30 fps if you only perform minimal processing.
Source: Click
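For context on the quoted numbers: CV_HAAR_FIND_BIGGEST_OBJECT tells the cascade detector to keep only the largest candidate, which is why that mode is faster than full multi-face detection. Below is a hedged sketch of the flag with the OpenCV 2.4-era C++ API (the cascade path is a placeholder; newer OpenCV versions spell the flag cv::CASCADE_FIND_BIGGEST_OBJECT). It only illustrates the option the quote refers to, not the feature-matching sample above.

// Hedged sketch: restrict Haar detection to the single biggest face, the
// speed/accuracy trade-off the quoted figures refer to (OpenCV 2.4 API).
#include <opencv2/opencv.hpp>

std::vector<cv::Rect> detectBiggestFace(const cv::Mat& frame, cv::CascadeClassifier& cascade)
{
    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    std::vector<cv::Rect> faces;
    cascade.detectMultiScale(gray, faces,
                             1.2, 2,
                             CV_HAAR_FIND_BIGGEST_OBJECT | CV_HAAR_DO_ROUGH_SEARCH,
                             cv::Size(60, 60));         // skip tiny candidates
    return faces;                                       // at most one rectangle with this flag
}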

Image Processing with Kinect and AForge

I'm working on a project where I want to track a die with the Microsoft Kinect, using the AForge.NET library.
The project itself contains only the fundamentals, such as initializing the Kinect, obtaining a color frame and applying one color filter, but the problem already shows up there.
So here is the main part of the program:
void ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
    {
        if (colorFrame != null)
        {
            colorFrameManager.Update(colorFrame);

            BitmapSource thresholdedImage =
                diceDetector.GetThresholdedImage(colorFrameManager.Bitmap);
            if (thresholdedImage != null)
            {
                Display.Source = thresholdedImage;
            }
        }
    }
}
The 'Update'-method of the 'colorFrameManager'-object looks like this:
public void Update(ColorImageFrame colorFrame)
{
    byte[] colorData = new byte[colorFrame.PixelDataLength];
    colorFrame.CopyPixelDataTo(colorData);

    if (Bitmap == null)
    {
        Bitmap = new WriteableBitmap(colorFrame.Width, colorFrame.Height,
                                     96, 96, PixelFormats.Bgr32, null);
    }

    int stride = Bitmap.PixelWidth * Bitmap.Format.BitsPerPixel / 8;
    imageRect.X = 0;
    imageRect.Y = 0;
    imageRect.Width = colorFrame.Width;
    imageRect.Height = colorFrame.Height;
    Bitmap.WritePixels(imageRect, colorData, stride, 0);
}
And the 'GetThresholdedImage' method looks like this:
public BitmapSource GetThresholdedImage(WriteableBitmap colorImage)
{
    BitmapSource thresholdedImage = null;

    if (colorImage != null)
    {
        try
        {
            Bitmap bitmap = BitmapConverter.ToBitmap(colorImage);

            HSLFiltering filter = new HSLFiltering();
            filter.Hue = new IntRange(335, 0);
            filter.Saturation = new Range(0.6f, 1.0f);
            filter.Luminance = new Range(0.1f, 1.0f);
            filter.ApplyInPlace(bitmap);

            thresholdedImage = BitmapConverter.ToBitmapSource(bitmap);
        }
        catch (Exception ex)
        {
            System.Console.WriteLine(ex.Message);
        }
    }
    return thresholdedImage;
}
Now the program slows down a lot / stops responding when this line is executed:
filter.ApplyInPlace(bitmap);
I already read this thread (C# image processing on Kinect video using AForge) and I tried EMGU, but I couldn't get it to work because of inner exceptions, and since the thread starter hasn't been online for four months, my question asking to look at his working code wasn't answered.
So, firstly, I'm interested in what the reason for the slow execution of
filter.ApplyInPlace(bitmap);
could be. Is this image processing really so complex, or could this be a problem with my environment?
Secondly, I would like to ask whether skipping frames is a good solution, or whether it is better to use polling and open a frame only every 500 milliseconds, for instance.
Thank you very much!
The HSL filter should not slow down the computation; it is not a complex filter.
I'm using it on 320x240 images at 30 fps without problems.
The problem may be the resolution of the processed image or a frame rate that is too high.
If the resolution of the image is high, I suggest resizing it before applying any filter.
And I think a frame rate of 20 (and maybe less) is enough to track a die.
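To illustrate the resize-before-filter advice in code, here is a hedged sketch written with OpenCV in C++ rather than AForge.NET (a deliberate library swap, purely to show the idea): halving both dimensions before a pixel-wise colour filter cuts the per-frame work to roughly a quarter. The HSV bounds are placeholders that only roughly correspond to the hue and saturation range used in the question.

// Hedged sketch (OpenCV C++, not AForge.NET): downscale first, then run the
// cheap pixel-wise colour threshold on the smaller image.
#include <opencv2/opencv.hpp>

cv::Mat thresholdDie(const cv::Mat& frameBgr)
{
    cv::Mat smallImg, hsv, mask;

    // 640x480 becomes 320x240, i.e. four times fewer pixels to filter
    cv::resize(frameBgr, smallImg, cv::Size(), 0.5, 0.5, cv::INTER_AREA);

    cv::cvtColor(smallImg, hsv, cv::COLOR_BGR2HSV);
    // Placeholder bounds, to be tuned for the die colour (OpenCV hue is 0-179)
    cv::inRange(hsv, cv::Scalar(167, 153, 25), cv::Scalar(179, 255, 255), mask);
    return mask;                                        // binary mask of candidate die pixels
}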
