Gathering the confidence with a JavaCV program - opencv

I'm trying to figure out a way to get the confidence level when face recognition runs against the target image. I have searched through a few examples but haven't found anything I can see how to implement. All help appreciated, thanks.
public static void facecompare() {
    String trainingDir = "C:/TrainingDirectory"; // training directory
    IplImage testImage = cvLoadImage("C:/TargetImages/boland_straight_happy_open_4.pgm"); // the target image
    File root = new File(trainingDir);
    FilenameFilter pgmFilter = new FilenameFilter() {
        public boolean accept(File dir, String name) {
            return name.toLowerCase().endsWith(".pgm");
        }
    };
    File[] imageFiles = root.listFiles(pgmFilter);
    MatVector images = new MatVector(imageFiles.length);
    int[] labels = new int[imageFiles.length];
    int counter = 0;
    int label;
    IplImage img;
    IplImage grayImg;
    for (File image : imageFiles) {
        img = cvLoadImage(image.getAbsolutePath());
        // training filenames are expected to start with a numeric label, e.g. "1-name.pgm"
        label = Integer.parseInt(image.getName().split("\\-")[0]);
        grayImg = IplImage.create(img.width(), img.height(), IPL_DEPTH_8U, 1);
        cvCvtColor(img, grayImg, CV_BGR2GRAY);
        images.put(counter, grayImg);
        labels[counter] = label;
        counter++;
    }
    IplImage greyTestImage = IplImage.create(testImage.width(), testImage.height(), IPL_DEPTH_8U, 1);
    // FaceRecognizer faceRecognizer = createFisherFaceRecognizer();
    // FaceRecognizer faceRecognizer = createEigenFaceRecognizer();
    FaceRecognizer faceRecognizer = createLBPHFaceRecognizer();
    faceRecognizer.train(images, labels);
    cvCvtColor(testImage, greyTestImage, CV_BGR2GRAY);
    int predictedLabel = faceRecognizer.predict(greyTestImage);
    System.out.println("Predicted label: " + predictedLabel);
}
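As an aside, the training loop above assumes every filename in the training directory starts with a numeric label followed by a hyphen; the exact naming scheme isn't shown in the question, so this is a minimal sketch of that parsing with a hypothetical filename:

```java
public class LabelParse {
    public static void main(String[] args) {
        // Hypothetical training filename; the real naming scheme isn't shown above.
        String name = "1-subject_straight_happy_open.pgm";
        // Same parsing as the training loop: the numeric label before the first hyphen.
        int label = Integer.parseInt(name.split("\\-")[0]);
        System.out.println("Label: " + label);
    }
}
```

If a filename in the directory doesn't match this pattern, Integer.parseInt throws a NumberFormatException, which is worth guarding against.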

There is another predict overload that returns the confidence through output parameters:

// Pointer-like output parameters:
// only the first element of each array will be changed.
int[] plabel = new int[1];
double[] pconfidence = new double[1];
faceRecognizer.predict(greyTestImage, plabel, pconfidence);
int predictedLabel = plabel[0];
double confidence = pconfidence[0];
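Since Java has no pass-by-reference for primitives, JavaCV uses the common idiom of single-element arrays as output parameters. A minimal plain-Java sketch of that idiom, where recognize() is a hypothetical stand-in for the real predict() and not part of any JavaCV API:

```java
public class OutputParams {
    // Stand-in for a predict-style call: writes results into
    // the first element of each caller-supplied array.
    static void recognize(int[] label, double[] confidence) {
        label[0] = 7;         // predicted label
        confidence[0] = 42.5; // distance-style confidence (for LBPH, lower means a closer match)
    }

    public static void main(String[] args) {
        int[] plabel = new int[1];
        double[] pconfidence = new double[1];
        recognize(plabel, pconfidence);
        System.out.println("label=" + plabel[0] + " confidence=" + pconfidence[0]);
    }
}
```

Note that for LBPH the "confidence" is a distance, so smaller values mean a better match; comparing it against a threshold of your choosing is how you reject unknown faces.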

Related

JavaCV Display image in color from video capture

I'm having trouble displaying the image from the camera. I'm using VideoCapture; when I display the image in grayscale it works perfectly, but when I try to display the image in color I get something like this:
link
Part of my source code:
public void CaptureVideo()
{
    VideoCapture videoCapture = new VideoCapture(0);
    Mat frame = new Mat();
    while (videoCapture.isOpened() && _canWorking)
    {
        videoCapture.read(frame);
        if (!frame.empty())
        {
            Image img = MatToImage(frame);
            videoView.setImage(img);
        }
        try { Thread.sleep(33); } catch (InterruptedException e) { e.printStackTrace(); }
    }
    videoCapture.release();
}

private Image MatToImage(Mat original)
{
    BufferedImage image = null;
    int width = original.size().width(), height = original.size().height(), channels = original.channels();
    byte[] sourcePixels = MatToBytes(original, width, height, channels);
    if (original.channels() > 1)
    {
        image = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
    }
    else
    {
        image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
    }
    final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    System.arraycopy(sourcePixels, 0, targetPixels, 0, sourcePixels.length);
    return SwingFXUtils.toFXImage(image, null);
}

private byte[] MatToBytes(Mat mat, int width, int height, int channels)
{
    byte[] output = new byte[width * height * channels];
    UByteRawIndexer indexer = mat.createIndexer();
    int i = 0;
    for (int j = 0; j < mat.rows(); j++)
    {
        for (int k = 0; k < mat.cols(); k++)
        {
            output[i] = (byte) indexer.get(j, k);
            i++;
        }
    }
    return output;
}
Can anyone tell me what I'm doing wrong? I'm new to image processing and I don't get why it's not working.
OK, I resolved this myself.
Solution:
byte[] output = new byte[_frame.size().width() * _frame.size().height() * _frame.channels()];
UByteRawIndexer indexer = mat.createIndexer();
int index = 0;
for (int i = 0; i < mat.rows(); i++)
{
    for (int j = 0; j < mat.cols(); j++)
    {
        for (int k = 0; k < mat.channels(); k++)
        {
            output[index] = (byte) indexer.get(i, j, k);
            index++;
        }
    }
}
return output;
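The root cause was that the original MatToBytes copied only one byte per pixel, silently dropping two of the three color channels; the fix adds the inner channel loop so the bytes come out interleaved (B, G, R, B, G, R, ...), which is the layout TYPE_3BYTE_BGR expects. A minimal plain-Java sketch of that interleaving, with a 3-D array standing in for the UByteRawIndexer:

```java
public class Interleave {
    // pixels[row][col][channel] stands in for UByteRawIndexer.get(i, j, k).
    static byte[] toBytes(int[][][] pixels, int rows, int cols, int channels) {
        byte[] output = new byte[rows * cols * channels];
        int index = 0;
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                for (int k = 0; k < channels; k++)
                    output[index++] = (byte) pixels[i][j][k]; // channels interleaved per pixel
        return output;
    }

    public static void main(String[] args) {
        // 1 row, 2 columns, 3 channels (B, G, R per pixel)
        int[][][] px = {{{1, 2, 3}, {4, 5, 6}}};
        byte[] out = toBytes(px, 1, 2, 3);
        System.out.println(java.util.Arrays.toString(out)); // [1, 2, 3, 4, 5, 6]
    }
}
```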

How to convert Mat to IplImage in JavaCV?

Does anyone know how I can convert a Mat to an IplImage?
To achieve this I converted the Mat to a BufferedImage, but I was not able to find a conversion from BufferedImage to IplImage.
Is there any way I can convert a Mat to an IplImage?
Thanks
I believe you can convert BufferedImage to IplImage as follows.
public static IplImage toIplImage(BufferedImage src) {
    Java2DFrameConverter bimConverter = new Java2DFrameConverter();
    OpenCVFrameConverter.ToIplImage iplConverter = new OpenCVFrameConverter.ToIplImage();
    Frame frame = bimConverter.convert(src);
    IplImage img = iplConverter.convert(frame);
    IplImage result = img.clone();
    img.release();
    return result;
}
I got this from this question. Try this for now. I'll check if direct conversion is possible.
UPDATE:
Please have a look at these API docs. I haven't tested the following; I wrote it just now. Please do try it and let me know.
public static IplImage toIplImage(Mat src) {
    OpenCVFrameConverter.ToIplImage iplConverter = new OpenCVFrameConverter.ToIplImage();
    OpenCVFrameConverter.ToMat matConverter = new OpenCVFrameConverter.ToMat();
    Frame frame = matConverter.convert(src);
    IplImage img = iplConverter.convert(frame);
    IplImage result = img.clone();
    img.release();
    return result;
}

In JavaCV, cvCreateFileCapture is not able to read the video file

I am trying to import a video in JavaCV, but cvCreateFileCapture("Filename") is returning null. I have also tried OpenCVFrameGrabber("File name"), and it is not working properly either.
Here is my code :
public class SmokeDetectionInStoredVideo {
    private static final int IMG_SCALE = 2;
    private static final int width = 640;
    private static final int height = 480;

    public static void main(String args[]) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        CvMemStorage storage = CvMemStorage.create();
        IplImage img1 = null, imghsv = null, imgbin = null;
        CvSeq contour1;
        CvSeq contour2;
        double areaMax = 0.0, areaC = 0.0;
        imghsv = IplImage.create(width / IMG_SCALE, height / IMG_SCALE, 8, 3);
        imgbin = IplImage.create(width / IMG_SCALE, height / IMG_SCALE, 8, 1);
        try {
            CvCapture capture = cvCreateFileCapture("Red.mp4"); // cvCreateCameraCapture(0);
            if (capture.isNull()) {
                System.out.println("Error reading file");
            }
            IplImage grabbedImage = cvQueryFrame(capture);
            OpenCVFrameConverter.ToIplImage converter = new OpenCVFrameConverter.ToIplImage();
            CanvasFrame canvasFrame = new CanvasFrame("Actual Video");
            CanvasFrame canvasBinFrame = new CanvasFrame("Contour Video");
            canvasFrame.setCanvasSize(grabbedImage.width(), grabbedImage.height());
            canvasBinFrame.setCanvasSize(grabbedImage.width(), grabbedImage.height());
            imghsv = IplImage.create(grabbedImage.width(), grabbedImage.height(), 8, 3);
            imgbin = IplImage.create(grabbedImage.width(), grabbedImage.height(), 8, 1);
            while (canvasFrame.isVisible() && (img1 = cvQueryFrame(capture)) != null) {
                cvCvtColor(img1, imghsv, CV_RGB2HSV);
                // canvasFrame.showImage(converter.convert(imghsv));
                cvInRangeS(img1, cvScalar(220, 220, 220, 0), cvScalar(235, 235, 235, 0), imgbin);
                contour1 = new CvSeq();
                cvFindContours(imgbin, storage, contour1, Loader.sizeof(CvContour.class), CV_RETR_LIST, CV_LINK_RUNS);
                contour2 = contour1;
                while (contour1 != null && !contour1.isNull()) {
                    areaC = cvContourArea(contour1, CV_WHOLE_SEQ, 1);
                    if (areaC > areaMax) {
                        areaMax = areaC;
                    }
                    contour1 = contour1.h_next();
                }
                while (contour2 != null && !contour2.isNull()) {
                    areaC = cvContourArea(contour2, CV_WHOLE_SEQ, 1);
                    if (areaC > areaMax) {
                        cvDrawContours(imgbin, contour1, CV_RGB(0, 0, 0), CV_RGB(0, 0, 0), 0, CV_FILLED, 8);
                    }
                    contour2 = contour2.h_next();
                }
                canvasBinFrame.showImage(converter.convert(imgbin));
                canvasFrame.showImage(converter.convert(img1));
            }
            canvasFrame.dispose();
            cvReleaseMemStorage(storage);
            cvReleaseImage(img1);
            cvReleaseImage(imgbin);
            cvReleaseImage(imghsv);
        } catch (Exception e) {
            System.out.println("Error while processing video");
            // TODO: handle exception
        }
    }
}
Is there any other way of importing a video in JavaCV?
Are you sure the file is where the program expects it, i.e. the JVM's working directory? Can you try the following to check whether the file exists, and show the output, please?
File file = new File("Red.mp4");
System.out.println(file.exists());
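To expand on that check: a relative path like "Red.mp4" is resolved against the JVM's working directory, which often isn't the project root you expect. A quick sketch that prints both the working directory and the absolute path being tried:

```java
import java.io.File;

public class CheckFile {
    public static void main(String[] args) {
        // Where relative paths are resolved from:
        System.out.println("Working dir: " + System.getProperty("user.dir"));
        File file = new File("Red.mp4");
        // The absolute path the capture call would actually try to open:
        System.out.println(file.getAbsolutePath() + " exists: " + file.exists());
    }
}
```

If exists() prints false, pass the absolute path to cvCreateFileCapture instead.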

Track a single object when multiple objects are present in a static field

I'm trying to track a single moving object in a static field when multiple objects are present. With the help of a great mentor I got the following code. I'm using the OpenCV for Processing library, but when the code is compiled I get an error:
cannot convert from element type
ArrayList&lt;Contour&gt; to ArrayList&lt;ArrayList&lt;Contour&gt;&gt; at the line:
for (ArrayList&lt;ArrayList&lt;Contour&gt;&gt; blob : blobgp)
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

int x, y;
OpenCV opencv;
Capture cam;
ArrayList<Contour> contours;
PVector previousPosition;

void setup() {
    cam = new Capture(this, 640/2, 480/2);
    size(cam.width, cam.height);
    opencv = new OpenCV(this, cam.width, cam.height);
    opencv.useGray();
    opencv.startBackgroundSubtraction(5, 3, 0.1);
    cam.start();
    previousPosition = new PVector();
}

void draw() {
    track();
    stroke(255, 0, 0);
    noFill();
    strokeWeight(5);
    ellipse(x, y, 10, 10);
}

void track() {
    image(cam, 0, 0);
    opencv.loadImage(cam);
    opencv.updateBackground();
    opencv.erode();
    opencv.dilate();
    ArrayList<Contour> contours = opencv.findContours(false, true);
    ArrayList<Contour> contourblobs = new ArrayList<Contour>();
    ArrayList<ArrayList<Contour>> blobgp = new ArrayList<ArrayList<Contour>>();
    contourblobs.add(contours.get(0));
    blobgp.add(contourblobs);
    for (int i = 1; i < contours.size(); i++) {
        ArrayList<Contour> remainingcontour = new ArrayList<Contour>();
        remainingcontour.add(contours.get(i));
        PVector contourCenter = centerOfContour(remainingcontour);
        boolean matchesExistingBlob = false;
        for (ArrayList<ArrayList<Contour>> blob : blobgp) {
            PVector blobCenter = centerOfBlob(blob);
            if (PVector.dist(blobCenter, contourCenter) < threshold) {
                blob.add(contour);
                matchesExistingBlob = true;
            }
        }
        // if it didn't match an existing blob, create a new one
        if (!matchesExistingBlob) {
            ArrayList<ArrayList<Contour>> newBlob = new ArrayList<ArrayList<Contour>>();
            newBlob.add(contour);
        }
    }
    // now use the unique blobs to draw the dots:
    for (ArrayList<ArrayList<Contour>> blob : blobgp) {
        PVector c = centerOfBlob(blob);
        x = c.x;
        y = c.y;
    }
}

// helper functions
PVector centerOfContour(ArrayList<Contour> remainingcontour) {
    PVector result = new PVector();
    int numPoints = 0;
    for (PVector p : contour.getPoints()) {
        result.x += p.x;
        result.y += p.y;
        numPoints++;
    }
    result.x /= numPoints;
    result.y /= numPoints;
    return result;
}

PVector centerOfBlob(ArrayList<ArrayList<Contour>> blob) {
    PVector result = new PVector();
    for (ArrayList<Contour> contour : blob) {
        PVector contourCenter = centerOfContour(contour);
        result.x += contourCenter.x;
        result.y += contourCenter.y;
    }
    result.x /= blob.size();
    result.y /= blob.size();
    return result;
}
You should have a better understanding of the code you use.
For example, you can get rid of the syntax errors if you are careful about what arguments your functions expect and what you pass to them:
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

float x, y;
OpenCV opencv;
Capture cam;
ArrayList<Contour> contours;
PVector previousPosition;
int threshold = 20;

void setup() {
    cam = new Capture(this, 640/2, 480/2);
    size(cam.width, cam.height);
    opencv = new OpenCV(this, cam.width, cam.height);
    opencv.useGray();
    opencv.startBackgroundSubtraction(5, 3, 0.1);
    cam.start();
    previousPosition = new PVector();
}

void draw() {
    track();
    stroke(255, 0, 0);
    noFill();
    strokeWeight(5);
    ellipse(x, y, 10, 10);
}

void track() {
    image(cam, 0, 0);
    opencv.loadImage(cam);
    opencv.updateBackground();
    opencv.erode();
    opencv.dilate();
    ArrayList<Contour> contours = opencv.findContours(false, true);
    ArrayList<Contour> contourblobs = new ArrayList<Contour>();
    ArrayList<ArrayList<Contour>> blobgp = new ArrayList<ArrayList<Contour>>();
    if (contours.size() > 0) {
        contourblobs.add(contours.get(0));
        blobgp.add(contourblobs);
        for (int i = 1; i < contours.size(); i++) {
            ArrayList<Contour> remainingcontour = new ArrayList<Contour>();
            remainingcontour.add(contours.get(i));
            PVector contourCenter = centerOfContour(remainingcontour);
            boolean matchesExistingBlob = false;
            PVector blobCenter = centerOfBlob(blobgp);
            if (PVector.dist(blobCenter, contourCenter) < threshold) {
                blobgp.add(contours);
                matchesExistingBlob = true;
            }
            /*
            for (ArrayList<ArrayList<Contour>> blob : blobgp) {
                PVector blobCenter = centerOfBlob(blob);
                if (PVector.dist(blobCenter, contourCenter) < threshold) {
                    blob.add(contour);
                    matchesExistingBlob = true;
                }
            }
            */
            // if it didn't match an existing blob, create a new one
            if (!matchesExistingBlob) {
                ArrayList<ArrayList<Contour>> newBlob = new ArrayList<ArrayList<Contour>>();
                newBlob.add(contours);
            }
        }
        // now use the unique blobs to draw the dots:
        /*
        for (ArrayList<ArrayList<Contour>> blob : blobgp) {
            PVector c = centerOfBlob(blob);
            x = c.x;
            y = c.y;
        }
        */
        PVector c = centerOfBlob(blobgp);
        x = c.x;
        y = c.y;
    }
}

// helper functions
PVector centerOfContour(ArrayList<Contour> remainingcontour) {
    PVector result = new PVector();
    int numPoints = 0;
    for (Contour contour : contours) {
        for (PVector p : contour.getPolygonApproximation().getPoints()) {
            result.x += p.x;
            result.y += p.y;
            numPoints++;
        }
    }
    result.x /= numPoints;
    result.y /= numPoints;
    return result;
}

PVector centerOfBlob(ArrayList<ArrayList<Contour>> blob) {
    PVector result = new PVector();
    for (ArrayList<Contour> contour : blob) {
        PVector contourCenter = centerOfContour(contour);
        result.x += contourCenter.x;
        result.y += contourCenter.y;
    }
    result.x /= blob.size();
    result.y /= blob.size();
    return result;
}
While the above code will compile and run, I doubt it will do what you're after. This brings us to the question: what are you trying to achieve? "Trying to track a single moving object in a static field when multiple objects are present" sounds vague. How should your algorithm above work? (What is blobgp? It seems to only deal with the first contour (contours.get(0)), etc.)
Are you simply trying to display the center of the moving blob in a scene that may contain multiple blobs? If you have multiple blobs and only one is moving and you're interested in it, can you not simply subtract the background first, so the moving blob is the only detected blob?
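For what it's worth, the grouping idea the original code seems to be reaching for (attach a contour to an existing group when its center is within a threshold distance of that group, otherwise start a new group) can be sketched in plain Java. This is an assumption about the intended algorithm, with double[] pairs standing in for contour centers:

```java
import java.util.ArrayList;
import java.util.List;

public class BlobGroup {
    // Groups 2-D points: a point joins the first group whose first member
    // is within `threshold` of it; otherwise it starts a new group.
    static List<List<double[]>> group(List<double[]> centers, double threshold) {
        List<List<double[]>> blobs = new ArrayList<>();
        for (double[] c : centers) {
            boolean matched = false;
            for (List<double[]> blob : blobs) {
                double[] ref = blob.get(0);
                if (Math.hypot(c[0] - ref[0], c[1] - ref[1]) < threshold) {
                    blob.add(c);
                    matched = true;
                    break;
                }
            }
            if (!matched) {
                List<double[]> newBlob = new ArrayList<>();
                newBlob.add(c);
                blobs.add(newBlob); // the original code forgot to add newBlob to blobgp
            }
        }
        return blobs;
    }

    public static void main(String[] args) {
        List<double[]> pts = new ArrayList<>();
        pts.add(new double[]{0, 0});
        pts.add(new double[]{5, 5});     // close to (0, 0): same group
        pts.add(new double[]{100, 100}); // far away: new group
        System.out.println(group(pts, 20).size()); // 2
    }
}
```

Note one concrete bug this highlights in the question's code: the newly created blob list is never added back to blobgp, so unmatched contours are silently dropped.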

Error with OpenCV & openFrameworks while cropping an image

I have the following code:
IplImage* f(IplImage* src)
{
    // Must have dimensions of output image
    IplImage* cropped = cvCreateImage(cvSize(1280, 500), src->depth, src->nChannels);
    // Say what the source region is
    cvSetImageROI(src, cvRect(0, 0, 1280, 500));
    // Do the copy
    cvCopy(src, cropped);
    cvResetImageROI(src);
    return cropped;
}
void testApp::setup() {
    img.loadImage("test.jpg");
    finder.setup("haarcascade_frontalface_default.xml");
    finder.findHaarObjects(img);
}

//--------------------------------------------------------------
void testApp::update() {
}

//--------------------------------------------------------------
ofRectangle cur;

void testApp::draw() {
    img = f(img);
    img.draw(0, 0);
    ofNoFill();
    for (int i = 0; i < finder.blobs.size(); i++) {
        cur = finder.blobs[i].boundingRect;
        ofRect(cur.x - 20, cur.y - 20, cur.width + 50, cur.height + 50);
    }
}
It produces an error. I think it's because I don't convert the IplImage to an ofImage. Can someone please tell me how to do that?
I would imagine that the instance of img that you're passing to f() is an instance of ofxCvImage or similar. As far as I know, ofxCvImage stores its IplImage as a protected member, so you can't cast an ofxCvImage to an IplImage directly.
You might try
img = f(img.getCvImage()); // ofxCvImage::getCvImage() returns IplImage*
