I'm cropping 64x128 pixel images into 4x8 and 8x16 grids and saving the cells in a Temp folder so I can extract features from them for image classification. I do this in a loop over multiple images (crop the first image, get the 8x16 subimages, extract features for each subimage, then move on to the next image and overwrite the existing subimages). At random grid cells I get a "File not found" exception because access to that cell's file is denied. This only happens when working with a larger number of images (say 20+). How can I work around this?
My code for the cropping part:
package imageProcess;

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

import javax.imageio.ImageIO;

public class Crop_Raster {

    BufferedImage src;
    BufferedImage dst;

    public Crop_Raster(BufferedImage src) {
        super();
        this.src = src;
    }

    // 4 rows x 8 columns of 16x16 cells, written as Temp/1.jpg ... Temp/32.jpg
    public void cropImage_4x8() throws IOException {
        int filenumber = 1;
        for (int y = 0; y < 4; y++) {
            for (int x = 0; x < 8; x++) {
                File output = new File("Temp/" + filenumber + ".jpg");
                dst = src.getSubimage(16 * x, 16 * y, 16, 16);
                ImageIO.write(dst, "jpg", output);
                filenumber++;
            }
        }
    }

    // 8 rows x 16 columns of 8x8 cells, written as Temp/1.jpg ... Temp/128.jpg
    public void cropImage_8x16() throws IOException {
        int filenumber = 1;
        for (int y = 0; y < 8; y++) {
            for (int x = 0; x < 16; x++) {
                File output = new File("Temp/" + filenumber + ".jpg");
                dst = src.getSubimage(8 * x, 8 * y, 8, 8);
                ImageIO.write(dst, "jpg", output);
                filenumber++;
            }
        }
    }
}
I get the following exception; in this run it happened while handling the second subimage of my 6th training image ("Zugriff verweigert" means "access denied"):
java.io.FileNotFoundException: Temp\2.jpg (Zugriff verweigert)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(Unknown Source)
at java.io.RandomAccessFile.<init>(Unknown Source)
at javax.imageio.stream.FileImageOutputStream.<init>(Unknown Source)
at com.sun.imageio.spi.FileImageOutputStreamSpi.createOutputStreamInstance(Unknown Source)
at javax.imageio.ImageIO.createImageOutputStream(Unknown Source)
at javax.imageio.ImageIO.write(Unknown Source)
at imageProcess.Crop_Raster.cropImage_8x16(Crop_Raster.java:38)
at svm.CreateVektor.createVector_8x16(CreateVektor.java:94)
at Main_Test.main(Main_Test.java:107)
The error occurs during the cropping part; the rest of my methods work fine.
Clearing the Temp folder after each image fixed the issue.
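For reference, here is a minimal sketch of that workaround, assuming a driving loop like the one described above; the trainingImages array and the commented-out extractFeatures call are hypothetical placeholders for the rest of the pipeline.

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

public class TrainingLoop {

    // Delete every file in Temp so the next image starts with an empty folder.
    static void clearTempFolder() {
        File[] oldCells = new File("Temp").listFiles();
        if (oldCells != null) {
            for (File cell : oldCells) {
                cell.delete();
            }
        }
    }

    static void processAll(BufferedImage[] trainingImages) throws IOException {
        for (BufferedImage img : trainingImages) {
            clearTempFolder();                     // remove the cells left over from the previous image
            new Crop_Raster(img).cropImage_8x16(); // writes Temp/1.jpg ... Temp/128.jpg
            // extractFeatures("Temp");            // hypothetical feature-extraction step that
                                                   // reads the subimages back from Temp
        }
    }
}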
Related
I have "rgb8" format msg delivered via ROS topic subscription.
How may create a QImage out of it and let the qml Image display the picture on it?
Currently I'm working on the following code snippet.
QImage *VideoPlayer::Mat2QImage(cv::Mat const& src)
{
    QImage *imgPtr = new QImage((const uchar *) src.data,
                                src.cols, src.rows, src.step, QImage::Format_RGB888);
    imgPtr->bits();
    return imgPtr;
}

void VideoPlayer::imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
    static int count = 0;
    try
    {
        try {
            Mat imgMat = cv_bridge::toCvShare(msg, "rgb8")->image;
            delete imgProvider->currentShot;
            imgProvider->currentShot = Mat2QImage(imgMat);
            ...
}
(*currentShot) is fetched by the imageRequest(...) method from the QML side's 'source' property.
You should refer to QQuickImageProvider. Create a ROSImageProvider class that inherits QQuickImageProvider and implements requestImage() or requestPixmap(), then register it with the QML engine:
engine->addImageProvider(QLatin1String("ros"), new ROSImageProvider);
Then you can use the following QML syntax to get the image:
Image { source: "image://ros/some_id" }
You can refer to the Qt documentation for a full example.
I am trying to develop a face-recognition app in Android. I am using the JavaCV FaceRecognizer, but so far I am getting very poor results. It recognizes an image of a person it was trained on, but it also "recognizes" unknown images. For known faces it gives me a large distance, most of the time 70-90, sometimes 90+, while unknown images also get 70-90.
So how can I improve the face-recognition performance? What techniques are there? What success rate can you normally get with this?
I have never worked with image processing before. I would appreciate any guidelines.
Here is the code:
public class PersonRecognizer {

    public final static int MAXIMG = 100;

    FaceRecognizer faceRecognizer;
    String mPath;
    int count = 0;
    labels labelsFile;

    static final int WIDTH = 70;
    static final int HEIGHT = 70;
    private static final String TAG = "PersonRecognizer";
    private int mProb = 999;

    PersonRecognizer(String path)
    {
        faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createLBPHFaceRecognizer(2, 8, 8, 8, 100);
        // path=Environment.getExternalStorageDirectory()+"/facerecog/faces/";
        mPath = path;
        labelsFile = new labels(mPath);
    }
    void changeRecognizer(int nRec)
    {
        switch (nRec) {
            case 0: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createLBPHFaceRecognizer(1, 8, 8, 8, 100);
                break;
            case 1: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createFisherFaceRecognizer();
                break;
            case 2: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createEigenFaceRecognizer();
                break;
        }
        train();
    }
    void add(Mat m, String description)
    {
        Bitmap bmp = Bitmap.createBitmap(m.width(), m.height(), Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(m, bmp);
        bmp = Bitmap.createScaledBitmap(bmp, WIDTH, HEIGHT, false);

        FileOutputStream f;
        try
        {
            f = new FileOutputStream(mPath + description + "-" + count + ".jpg", true);
            count++;
            bmp.compress(Bitmap.CompressFormat.JPEG, 100, f);
            f.close();
        } catch (Exception e) {
            Log.e("error", e.getCause() + " " + e.getMessage());
            e.printStackTrace();
        }
    }
    public boolean train() {
        File root = new File(mPath);

        FilenameFilter pngFilter = new FilenameFilter() {
            public boolean accept(File dir, String name) {
                return name.toLowerCase().endsWith(".jpg");
            }
        };

        File[] imageFiles = root.listFiles(pngFilter);
        MatVector images = new MatVector(imageFiles.length);
        int[] labels = new int[imageFiles.length];
        int counter = 0;
        int label;

        IplImage img = null;
        IplImage grayImg;

        int i1 = mPath.length();

        for (File image : imageFiles) {
            String p = image.getAbsolutePath();
            img = cvLoadImage(p);

            if (img == null)
                Log.e("Error", "Error cVLoadImage");
            Log.i("image", p);

            int i2 = p.lastIndexOf("-");
            int i3 = p.lastIndexOf(".");
            int icount = 0;
            try
            {
                icount = Integer.parseInt(p.substring(i2 + 1, i3));
            }
            catch (Exception ex)
            {
                ex.printStackTrace();
            }
            if (count < icount) count++;

            String description = p.substring(i1, i2);

            if (labelsFile.get(description) < 0)
                labelsFile.add(description, labelsFile.max() + 1);

            label = labelsFile.get(description);

            grayImg = IplImage.create(img.width(), img.height(), IPL_DEPTH_8U, 1);
            cvCvtColor(img, grayImg, CV_BGR2GRAY);

            images.put(counter, grayImg);
            labels[counter] = label;
            counter++;
        }
        if (counter > 0)
            if (labelsFile.max() > 1)
                faceRecognizer.train(images, labels);
        labelsFile.Save();
        return true;
    }
    public boolean canPredict()
    {
        if (labelsFile.max() > 1)
            return true;
        else
            return false;
    }
    public String predict(Mat m) {
        if (!canPredict())
            return "";
        int n[] = new int[1];
        double p[] = new double[1];

        // convert Mat to black and white
        /*Mat gray_m = new Mat();
        Imgproc.cvtColor(m, gray_m, Imgproc.COLOR_RGBA2GRAY);*/

        IplImage ipl = MatToIplImage(m, WIDTH, HEIGHT);

        faceRecognizer.predict(ipl, n, p);

        if (n[0] != -1)
        {
            mProb = (int) p[0];
            Log.v(TAG, "Distance = " + mProb + "");
            Log.v(TAG, "N = " + n[0]);
        }
        else
        {
            mProb = -1;
            Log.v(TAG, "Distance = " + mProb);
        }

        if (n[0] != -1)
        {
            return labelsFile.get(n[0]);
        }
        else
        {
            return "Unknown";
        }
    }
    IplImage MatToIplImage(Mat m, int width, int height)
    {
        Bitmap bmp;
        try
        {
            bmp = Bitmap.createBitmap(m.width(), m.height(), Bitmap.Config.RGB_565);
        }
        catch (OutOfMemoryError er)
        {
            bmp = Bitmap.createBitmap(m.width() / 2, m.height() / 2, Bitmap.Config.RGB_565);
            er.printStackTrace();
        }
        Utils.matToBitmap(m, bmp);
        return BitmapToIplImage(bmp, width, height);
    }
    IplImage BitmapToIplImage(Bitmap bmp, int width, int height) {
        if ((width != -1) || (height != -1)) {
            Bitmap bmp2 = Bitmap.createScaledBitmap(bmp, width, height, false);
            bmp = bmp2;
        }
        IplImage image = IplImage.create(bmp.getWidth(), bmp.getHeight(),
                IPL_DEPTH_8U, 4);
        bmp.copyPixelsToBuffer(image.getByteBuffer());
        IplImage grayImg = IplImage.create(image.width(), image.height(),
                IPL_DEPTH_8U, 1);
        cvCvtColor(image, grayImg, opencv_imgproc.CV_BGR2GRAY);
        return grayImg;
    }
    protected void SaveBmp(Bitmap bmp, String path)
    {
        FileOutputStream file;
        try
        {
            file = new FileOutputStream(path, true);
            bmp.compress(Bitmap.CompressFormat.JPEG, 100, file);
            file.close();
        }
        catch (Exception e) {
            // TODO Auto-generated catch block
            Log.e("", e.getMessage() + e.getCause());
            e.printStackTrace();
        }
    }
    public void load() {
        train();
    }

    public int getProb() {
        // TODO Auto-generated method stub
        return mProb;
    }
}
I have faced similar challenges recently; here are the things that helped me get better results:
Crop the faces from the images - this removes unnecessary pixels at inference time.
Resize the cropped face images - this matters when detecting face landmarks; try different scales on test sets to find what works best. It also affects inference time: the smaller the image, the faster the inference.
Improve the brightness of the face images - I found this really helpful; detecting face landmarks in darker images did not work well, mainly because the model was pre-trained on mostly white faces. Understanding the training data helps when dealing with bias.
Convert to grayscale - I have seen this in many forums, where it is said to help find edges efficiently, and processing time is lower than for colour (3-channel RGB) images; however, it did not help much in my case. A preprocessing sketch combining the cropping, resizing, grayscale and brightness steps follows this list.
Try to capture (register) as many images as possible for each person, in different angles, lighting and other variations - this really helps, since recognition compares against the encodings of the stored images.
Try to implement 1-1 comparison for face verification - for example, in my system I captured 10 pictures for each person, and at verification time I compare against those 10 pictures instead of against the encodings of every person stored in the system. This can still produce false positives, but the use cases for this setup are limited: I use it for face authentication and compare the new face against the existing faces registered under the same mobile number.
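Here is the preprocessing sketch referenced above, covering the crop, resize, grayscale and brightness points. It is a minimal sketch assuming the OpenCV Java bindings, an RGBA input frame (e.g. an Android camera frame), and that faceRect was already obtained from a separate face detector; none of these names come from the original post.

import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class FacePreprocessor {

    // Crops the detected face, converts it to grayscale, resizes it to a fixed
    // training size and equalizes the histogram to reduce lighting differences.
    public static Mat preprocess(Mat rgbaFrame, Rect faceRect, int size) {
        Mat face = new Mat(rgbaFrame, faceRect);      // crop: drop pixels outside the face

        Mat gray = new Mat();
        Imgproc.cvtColor(face, gray, Imgproc.COLOR_RGBA2GRAY);

        Mat resized = new Mat();
        Imgproc.resize(gray, resized, new Size(size, size));

        Mat equalized = new Mat();
        Imgproc.equalizeHist(resized, equalized);     // crude brightness/contrast normalization
        return equalized;
    }
}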
My understanding as of today is that face recognition systems work well but are not 100% accurate; we have to understand the model architecture, the training data and our requirements, and deploy accordingly to get a better outcome. Here are some points that helped me improve the overall system:
Implement a fallback method - give the user an option when the system fails to detect them correctly; for example, if face authentication fails for some reason, show an enter-PIN option (a rough sketch of this follows the list).
In critical systems, add periodic human intervention to confirm the system's result - for example, if the system denies a user based on the FR result, have a human agent verify the failed result and allow the user.
Implement multiple factors for authentication - deploy face recognition in addition to an existing system; for example, after the user has logged in with credentials, verify that it is the intended person using face recognition.
Design your user interface so that, at verification time, the user knows how to behave (open eyes, close mouth, etc.) without hurting the user experience.
Provide clear instructions to users when they deal with the system - for example, let them know that an FR system is in place and that they need to show their face in good lighting conditions.
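Here is a rough sketch of the threshold-plus-fallback idea, reusing the PersonRecognizer class from the question. The DISTANCE_THRESHOLD value and the promptForPin() method are hypothetical placeholders, not part of the original code; pick the threshold from your own test data.

import org.opencv.core.Mat;

public class FaceAuth {

    // Hypothetical threshold; tune it on a validation set of known and unknown faces.
    static final int DISTANCE_THRESHOLD = 50;

    // Accept the face match only when the LBPH distance is low enough,
    // otherwise fall back to a PIN prompt.
    static boolean authenticate(PersonRecognizer recognizer, Mat face, String expectedName) {
        String predicted = recognizer.predict(face);
        int distance = recognizer.getProb(); // despite the name, this is the match distance

        if (expectedName.equals(predicted) && distance >= 0 && distance < DISTANCE_THRESHOLD) {
            return true;                     // confident face match
        }
        return promptForPin();               // hypothetical fallback, e.g. show a PIN entry screen
    }

    static boolean promptForPin() {
        // Placeholder: a real app would show a PIN dialog here.
        return false;
    }
}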
I'm trying to use face recognition with JavaCV, but when I have more than 5 training images I get this error:
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x6a30b400, pid=4856, tid=32
#
# JRE version: Java(TM) SE Runtime Environment (7.0_51-b13) (build 1.7.0_51-b13)
# Java VM: Java HotSpot(TM) Client VM (24.51-b03 mixed mode, sharing windows-x86 )
# Problematic frame:
# C [opencv_core246.dll+0x4b400]
#
# Failed to write core dump. Minidumps are not enabled by default on client versions of Windows
#
# An error report file with more information is saved as:
# C:\Users\reco\workspace\hellow\hs_err_pid4856.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
This is my code:
import com.googlecode.javacv.cpp.opencv_core;
import static com.googlecode.javacv.cpp.opencv_highgui.*;
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_contrib.*;

import java.io.File;
import java.io.FilenameFilter;

public class OpenCVFaceRecognizer {

    public static void main(String[] args) {
        String trainingDir = "C:/Users/reco/workspace/hellow";
        IplImage testImage = cvLoadImage("C:/Users/reco/workspace/0.png");

        File root = new File(trainingDir);
        FilenameFilter pngFilter = new FilenameFilter() {
            public boolean accept(File dir, String name) {
                return name.toLowerCase().endsWith(".png");
            }
        };

        File[] imageFiles = root.listFiles(pngFilter);
        MatVector images = new MatVector(imageFiles.length);
        int[] labels = new int[imageFiles.length];
        int counter = 0;
        int label;

        IplImage img;
        IplImage grayImg;

        for (File image : imageFiles) {
            img = cvLoadImage(image.getAbsolutePath());
            String temp = image.getName();
            label = Integer.parseInt(temp.charAt(0) + "");
            grayImg = IplImage.create(img.width(), img.height(), IPL_DEPTH_8U, 1);
            cvCvtColor(img, grayImg, CV_BGR2GRAY);
            images.put(counter, grayImg);
            labels[counter] = label;
            counter++;
        }

        IplImage greyTestImage = IplImage.create(testImage.width(), testImage.height(), IPL_DEPTH_8U, 1);

        //FaceRecognizer faceRecognizer = createFisherFaceRecognizer();
        FaceRecognizer faceRecognizer = createEigenFaceRecognizer();
        // FaceRecognizer faceRecognizer = createLBPHFaceRecognizer();

        faceRecognizer.train(images, labels);

        cvCvtColor(testImage, greyTestImage, CV_BGR2GRAY);

        int predictedLabel = faceRecognizer.predict(greyTestImage);
        System.out.println("Predicted label: " + predictedLabel);
    }
}
Edit: I just removed
grayImg = IplImage.create(img.width(), img.height(), IPL_DEPTH_8U, 1);
cvCvtColor(img, grayImg, CV_BGR2GRAY);
and it worked :)
IplImage img;
IplImage grayImg = null;

for (File image : imageFiles) {
    img = cvLoadImage(image.getAbsolutePath(), CV_BGR2GRAY);
    int yer = image.getName().indexOf(".");
    String isim = image.getName().substring(0, yer);
    label = Integer.parseInt(isim);
    images.put(counter, img);
    labels[counter] = label;
    counter++;
}
This is the final version of my code, and it works like a charm :)
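One note on that change: the second argument of cvLoadImage is a load flag (e.g. CV_LOAD_IMAGE_GRAYSCALE or CV_LOAD_IMAGE_COLOR), not a colour-conversion code like CV_BGR2GRAY. If the intent is to load each training image directly as single-channel grayscale, a sketch like the following may express that more directly; it uses the same JavaCV 0.x static imports as the code above and assumes the surrounding variables (imageFiles, images, labels, counter) are unchanged.

// Load each training image directly as single-channel grayscale.
// CV_LOAD_IMAGE_GRAYSCALE comes from com.googlecode.javacv.cpp.opencv_highgui.
for (File image : imageFiles) {
    IplImage gray = cvLoadImage(image.getAbsolutePath(), CV_LOAD_IMAGE_GRAYSCALE);
    String name = image.getName();
    labels[counter] = Integer.parseInt(name.substring(0, name.indexOf(".")));
    images.put(counter, gray);
    counter++;
}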
I am trying to save an OpenCV image to the hard drive.
Here is what I tried:
public void SaveImage(Mat mat) {
    Mat mIntermediateMat = new Mat();
    Imgproc.cvtColor(mRgba, mIntermediateMat, Imgproc.COLOR_RGBA2BGR, 3);

    File path = Environment.getExternalStoragePublicDirectory(
            Environment.DIRECTORY_PICTURES);
    String filename = "barry.png";
    File file = new File(path, filename);

    Boolean bool = null;
    filename = file.toString();
    bool = Highgui.imwrite(filename, mIntermediateMat);

    if (bool == true)
        Log.d(TAG, "SUCCESS writing image to external storage");
    else
        Log.d(TAG, "Fail writing image to external storage");
    }
}
Can anyone show how to save that image with OpenCV 2.4.3?
Your question is a bit confusing: you ask about OpenCV on the desktop, but your code is for Android, and you ask about IplImage, but your posted code uses Mat. Assuming you're on the desktop using C++, you can do something along the lines of:
cv::Mat image;
std::string image_path;
//load/generate your image and set your output file path/name
//...
//write your Mat to disk as an image
cv::imwrite(image_path, image);
...Or for a more complete example:
void SaveImage(cv::Mat mat)
{
    cv::Mat img;
    cv::cvtColor(...); //not sure where the variables in your example come from
    std::string store_path("..."); //put your output path here
    bool write_success = cv::imwrite(store_path, img);
    //do your logging...
}
The image format is chosen based on the supplied filename; e.g. if your store_path string was "output_image.png", then imwrite would save it as a PNG image. You can see the list of valid extensions in the OpenCV docs.
One caveat to be aware of when writing images to disk with OpenCV is that the scaling differs depending on the Mat type; that is, float images are expected to be in the range [0, 1], while, say, unsigned char images are in [0, 255].
For IplImages, I'd advise just switching to use Mat, as the old C-interface is deprecated. You can convert an IplImage to a Mat via cvarrToMat then use the Mat, e.g.
IplImage* oldC0 = cvCreateImage(cvSize(320,240),16,1);
Mat newC = cvarrToMat(oldC0);
//now can use cv::imwrite with newC
Alternatively, you can convert an IplImage to a Mat just with
Mat newC(oldC0); //where newC is a Mat and oldC0 is your IplImage
Also, I just noticed this tutorial at the OpenCV website, which gives you a walk-through on loading and saving images in a (desktop) environment.
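For the Android/Java side shown in the question, a minimal sketch might look like the following. It assumes the OpenCV 2.4.x Java bindings (Highgui.imwrite, Imgproc.cvtColor), an RGBA input Mat such as a camera frame, and that the app has the WRITE_EXTERNAL_STORAGE permission; the class and file names are illustrative only.

import java.io.File;

import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.Imgproc;

import android.os.Environment;
import android.util.Log;

public class ImageSaver {

    // Saves an RGBA Mat (e.g. a camera frame) into the public Pictures folder.
    public static boolean save(Mat rgba, String filename) {
        // imwrite expects BGR channel order.
        Mat bgr = new Mat();
        Imgproc.cvtColor(rgba, bgr, Imgproc.COLOR_RGBA2BGR, 3);

        File dir = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES);
        File file = new File(dir, filename);

        // The extension of 'filename' (e.g. ".png") selects the output format.
        boolean ok = Highgui.imwrite(file.toString(), bgr);
        Log.d("ImageSaver", ok ? "SUCCESS writing image to external storage"
                               : "Failed writing image to external storage");
        return ok;
    }
}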
I've been trying to capture video from a cam and write it to an AVI file. I'm using Qt 4.8.2 with MSVC 2010 (x86) on Windows 7. I have two versions of the code: one using cv::Mat and the other using IplImage*. However, only the IplImage* version works. Here's my code using cv::Mat:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main() {
    VideoCapture* capture2 = new VideoCapture( CV_CAP_DSHOW );
    Size size2 = Size(640, 480);
    int codec = CV_FOURCC('M', 'J', 'P', 'G');
    VideoWriter* writer2 = new VideoWriter("video.avi", codec, 15, size2);
    int a = 100;
    Mat frame2;
    while ( a > 0 ) {
        capture2->read(frame2);
        writer2->write(frame2);
        a--;
    }
    writer2->release();
    capture2->release();
    return 0;
}
And here's the code using IplImage*:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main() {
    CvCapture* capture = cvCaptureFromCAM( CV_CAP_DSHOW );
    CvSize size = cvSize(640, 480);
    int codec = CV_FOURCC('M', 'J', 'P', 'G');
    CvVideoWriter* writer = cvCreateVideoWriter("video.avi", codec, 15, size);
    int a = 100;
    while ( a > 0 ) {
        IplImage* frame = cvQueryFrame( capture );
        cvWriteToAVI(writer, frame);
        a--;
    }
    cvReleaseVideoWriter(&writer);
    cvReleaseCapture( &capture );
    return 0;
}
It's basically the same, or at least it looks like the same thing to me. It reads 100 frames and should write them into "video.avi". It compiles and runs without errors, but the cv::Mat version doesn't write anything, and the IplImage* version works perfectly.
Does someone have any idea on what's going on?
The syntax in the OpenCV C++ reference is a bit different; here is working code in C++.
I just added imshow and waitKey for checking; you can remove them if you want.
int main()
{
    VideoCapture* capture2 = new VideoCapture(CV_CAP_DSHOW);
    Size size2 = Size(640, 480);
    int codec = CV_FOURCC('M', 'J', 'P', 'G');

    // Unlike in C, here we use an object of the class VideoWriter//
    VideoWriter writer2("video_.avi", codec, 15.0, size2, true);
    writer2.open("video_.avi", codec, 15.0, size2, true);

    if (writer2.isOpened())
    {
        int a = 100;
        Mat frame2;
        while (a > 0)
        {
            capture2->read(frame2);
            imshow("live", frame2);
            waitKey(100);
            writer2.write(frame2);
            a--;
        }
    }
    else
    {
        cout << "ERROR while opening" << endl;
    }

    // No need to release the writer, as the destructor will be called automatically
    capture2->release();
    return 0;
}
I had the same problem over and over again, and none of the solutions I found online helped.
Strangely enough, the problem (identified purely by trial and error) was with write permissions. Everything worked after I ran sudo chmod u+rwx on the Python script.
I had the same problem, and after some time I realized that the input frames weren't the same size as the output video. Resizing the input frames may help you:
capture2->read(frame2);
cv::resize(frame2, frame2, cv::Size(640, 480));
writer2->write(frame2);