I have "rgb8" format msg delivered via ROS topic subscription.
How may create a QImage out of it and let the qml Image display the picture on it?
Currently I'm working on the following code snippet.
QImage *VideoPlayer::Mat2QImage(cv::Mat const& src)
{
    // Note: this QImage constructor wraps src.data without copying it,
    // so the cv::Mat buffer must stay valid until the QImage is detached.
    QImage *imgPtr = new QImage((const uchar *) src.data,
                                src.cols, src.rows, src.step, QImage::Format_RGB888);
    imgPtr->bits(); // detach from the read-only external buffer (forces a deep copy)
    return imgPtr;
}
void VideoPlayer::imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
    static int count = 0;
    try
    {
        try {
            Mat imgMat = cv_bridge::toCvShare(msg, "rgb8")->image;
            delete imgProvider->currentShot;
            imgProvider->currentShot = Mat2QImage(imgMat);
            ...
}
(*currentShot) is fetched by the imageRequest(...) method from the QML side's 'source' property.
You should refer to QQuickImageProvider. Create a ROSImageProvider class that inherits QQuickImageProvider, implement requestImage() or requestPixmap(), and then register it with the QQmlEngine:
engine->addImageProvider(QLatin1String("ros"), new ROSImageProvider);
Then you can use the following QML syntax to get the image:
Image { source: "image://ros/some_id" }
You can refer to the Qt documentation for a full example.
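For illustration, here is a minimal sketch of such a provider, assuming a currentShot pointer that the ROS callback keeps updated (the class layout and member names are assumptions based on the snippets above, not something fixed by Qt's API):
#include <QQuickImageProvider>

class ROSImageProvider : public QQuickImageProvider
{
public:
    ROSImageProvider() : QQuickImageProvider(QQuickImageProvider::Image) {}

    QImage *currentShot = nullptr; // replaced by the ROS subscriber callback

    QImage requestImage(const QString &id, QSize *size,
                        const QSize &requestedSize) override
    {
        Q_UNUSED(id);
        // Hand QML a copy of the latest frame (or a placeholder if none yet).
        QImage img = currentShot ? *currentShot : QImage(64, 64, QImage::Format_RGB888);
        if (size)
            *size = img.size();
        if (requestedSize.isValid() && !requestedSize.isEmpty())
            img = img.scaled(requestedSize, Qt::KeepAspectRatio);
        return img;
    }
};
One caveat: QML caches the image per URL, so to refresh the Image on every new frame you typically append a changing token to the source (for example source: "image://ros/frame" + frameCounter) or set cache: false on the Image element.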
I'm working on a project where I have to use a webcam, an Arduino, a Raspberry Pi and an IR proximity sensor. I managed to get everything working with some help from Google, but I've hit a big problem that I think is a real blocker.
I'm using the OpenCV library in Processing, and I'd like the contours detected from the webcam to be drawn in the center of the sketch. But I've only managed to move the video, not the contours. Here's my code.
I hope you'll be able to help me :)
All the best
Alexandre
////////////////////////////////////////////
////////////////////////////////// LIBRARIES
////////////////////////////////////////////
import processing.serial.*;
import gab.opencv.*;
import processing.video.*;
/////////////////////////////////////////////////
////////////////////////////////// INITIALIZATION
/////////////////////////////////////////////////
Movie mymovie;
Capture video;
OpenCV opencv;
Contour contour;
////////////////////////////////////////////
////////////////////////////////// VARIABLES
////////////////////////////////////////////
int lf = 10; // Linefeed in ASCII
String myString = null;
Serial myPort; // The serial port
int sensorValue = 0;
int x = 300;
/////////////////////////////////////////////
////////////////////////////////// VOID SETUP
/////////////////////////////////////////////
void setup() {
  size(1280, 1024);
  // List all the available serial ports
  printArray(Serial.list());
  // Open the port you are using at the rate you want:
  myPort = new Serial(this, Serial.list()[1], 9600);
  myPort.clear();
  // Throw out the first reading, in case we started reading
  // in the middle of a string from the sender.
  myString = myPort.readStringUntil(lf);
  myString = null;
  opencv = new OpenCV(this, 720, 480);
  video = new Capture(this, 720, 480);
  mymovie = new Movie(this, "visage.mov");
  opencv.startBackgroundSubtraction(5, 3, 0.5);
  mymovie.loop();
}
////////////////////////////////////////////
////////////////////////////////// VOID DRAW
////////////////////////////////////////////
void draw() {
  image(mymovie, 0, 0);
  image(video, 20, 20);
  //tint(150, 20);
  noFill();
  stroke(255, 0, 0);
  strokeWeight(1);
  // check if there is something new on the serial port
  while (myPort.available() > 0) {
    // store the data in myString
    myString = myPort.readStringUntil(lf);
    // check if we really have something
    if (myString != null) {
      myString = myString.trim(); // let's remove whitespace characters
      // if we have at least one character...
      if (myString.length() > 0) {
        println(myString); // print out the data we just received
        // if we received a number (e.g. 123), store it in sensorValue; we will use this to change the background color.
        try {
          sensorValue = Integer.parseInt(myString);
        }
        catch(Exception e) {
        }
      }
    }
  }
  if (x < sensorValue) {
    video.start();
    opencv.loadImage(video);
  }
  if (x > sensorValue) {
    image(mymovie, 0, 0);
  }
  opencv.updateBackground();
  opencv.dilate();
  opencv.erode();
  for (Contour contour : opencv.findContours()) {
    contour.draw();
  }
}
//////////////////////////////////////////////
////////////////////////////////// VOID CUSTOM
//////////////////////////////////////////////
void captureEvent(Capture video) {
  video.read(); // display the webcam image
}
void movieEvent(Movie myMovie) {
  myMovie.read();
}
One approach you could use is to call the translate() function to move the origin of the canvas before you call contour.draw(). Something like this:
translate(moveX, moveY);
for (Contour contour : opencv.findContours()) {
  contour.draw();
}
What you use for moveX and moveY depends entirely on what you're trying to do. You might use the position you're drawing the video at (if you want the contours displayed on top of the video), or you might use width/2 and height/2 (maybe minus a bit to really center the contours).
More info can be found in the Processing reference. Play with a bunch of different values, and post an MCVE if you get stuck. Good luck.
I am trying to develop a face-recognition app on Android. I am using JavaCV's FaceRecognizer. But so far I am getting very poor results: it recognizes the image of a person it was trained on, but it also "recognizes" unknown images. For known faces it gives me a large distance value, most of the time 70-90, sometimes 90+, while unknown images also get 70-90.
So how can I increase the performance of the face recognition? What techniques are there? What success rate can you normally expect with this?
I have never worked with image processing, so I will appreciate any guidelines.
Here is the code:
public class PersonRecognizer {
    public final static int MAXIMG = 100;
    FaceRecognizer faceRecognizer;
    String mPath;
    int count = 0;
    labels labelsFile;
    static final int WIDTH = 70;
    static final int HEIGHT = 70;
    private static final String TAG = "PersonRecognizer";
    private int mProb = 999;

    PersonRecognizer(String path)
    {
        faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createLBPHFaceRecognizer(2, 8, 8, 8, 100);
        // path=Environment.getExternalStorageDirectory()+"/facerecog/faces/";
        mPath = path;
        labelsFile = new labels(mPath);
    }

    void changeRecognizer(int nRec)
    {
        switch (nRec) {
            case 0: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createLBPHFaceRecognizer(1, 8, 8, 8, 100);
                break;
            case 1: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createFisherFaceRecognizer();
                break;
            case 2: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createEigenFaceRecognizer();
                break;
        }
        train();
    }

    void add(Mat m, String description)
    {
        Bitmap bmp = Bitmap.createBitmap(m.width(), m.height(), Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(m, bmp);
        bmp = Bitmap.createScaledBitmap(bmp, WIDTH, HEIGHT, false);
        FileOutputStream f;
        try
        {
            f = new FileOutputStream(mPath + description + "-" + count + ".jpg", true);
            count++;
            bmp.compress(Bitmap.CompressFormat.JPEG, 100, f);
            f.close();
        } catch (Exception e) {
            Log.e("error", e.getCause() + " " + e.getMessage());
            e.printStackTrace();
        }
    }

    public boolean train() {
        File root = new File(mPath);
        // despite the name, this filter matches the .jpg files written by add()
        FilenameFilter pngFilter = new FilenameFilter() {
            public boolean accept(File dir, String name) {
                return name.toLowerCase().endsWith(".jpg");
            }
        };
        File[] imageFiles = root.listFiles(pngFilter);
        MatVector images = new MatVector(imageFiles.length);
        int[] labels = new int[imageFiles.length];
        int counter = 0;
        int label;
        IplImage img = null;
        IplImage grayImg;
        int i1 = mPath.length();
        for (File image : imageFiles) {
            String p = image.getAbsolutePath();
            img = cvLoadImage(p);
            if (img == null)
                Log.e("Error", "Error cvLoadImage");
            Log.i("image", p);
            int i2 = p.lastIndexOf("-");
            int i3 = p.lastIndexOf(".");
            int icount = 0;
            try
            {
                icount = Integer.parseInt(p.substring(i2 + 1, i3));
            }
            catch (Exception ex)
            {
                ex.printStackTrace();
            }
            if (count < icount) count++;
            String description = p.substring(i1, i2);
            if (labelsFile.get(description) < 0)
                labelsFile.add(description, labelsFile.max() + 1);
            label = labelsFile.get(description);
            grayImg = IplImage.create(img.width(), img.height(), IPL_DEPTH_8U, 1);
            cvCvtColor(img, grayImg, CV_BGR2GRAY);
            images.put(counter, grayImg);
            labels[counter] = label;
            counter++;
        }
        if (counter > 0)
            if (labelsFile.max() > 1)
                faceRecognizer.train(images, labels);
        labelsFile.Save();
        return true;
    }

    public boolean canPredict()
    {
        if (labelsFile.max() > 1)
            return true;
        else
            return false;
    }

    public String predict(Mat m) {
        if (!canPredict())
            return "";
        int n[] = new int[1];
        double p[] = new double[1];
        // convert Mat to black and white
        /*Mat gray_m = new Mat();
        Imgproc.cvtColor(m, gray_m, Imgproc.COLOR_RGBA2GRAY);*/
        IplImage ipl = MatToIplImage(m, WIDTH, HEIGHT);
        faceRecognizer.predict(ipl, n, p);
        if (n[0] != -1)
        {
            mProb = (int) p[0];
            Log.v(TAG, "Distance = " + mProb + "");
            Log.v(TAG, "N = " + n[0]);
        }
        else
        {
            mProb = -1;
            Log.v(TAG, "Distance = " + mProb);
        }
        if (n[0] != -1)
        {
            return labelsFile.get(n[0]);
        }
        else
        {
            return "Unknown";
        }
    }

    IplImage MatToIplImage(Mat m, int width, int height)
    {
        Bitmap bmp;
        try
        {
            bmp = Bitmap.createBitmap(m.width(), m.height(), Bitmap.Config.RGB_565);
        }
        catch (OutOfMemoryError er)
        {
            bmp = Bitmap.createBitmap(m.width() / 2, m.height() / 2, Bitmap.Config.RGB_565);
            er.printStackTrace();
        }
        Utils.matToBitmap(m, bmp);
        return BitmapToIplImage(bmp, width, height);
    }

    IplImage BitmapToIplImage(Bitmap bmp, int width, int height) {
        if ((width != -1) || (height != -1)) {
            Bitmap bmp2 = Bitmap.createScaledBitmap(bmp, width, height, false);
            bmp = bmp2;
        }
        IplImage image = IplImage.create(bmp.getWidth(), bmp.getHeight(),
                IPL_DEPTH_8U, 4);
        bmp.copyPixelsToBuffer(image.getByteBuffer());
        IplImage grayImg = IplImage.create(image.width(), image.height(),
                IPL_DEPTH_8U, 1);
        cvCvtColor(image, grayImg, opencv_imgproc.CV_BGR2GRAY);
        return grayImg;
    }

    protected void SaveBmp(Bitmap bmp, String path)
    {
        FileOutputStream file;
        try
        {
            file = new FileOutputStream(path, true);
            bmp.compress(Bitmap.CompressFormat.JPEG, 100, file);
            file.close();
        }
        catch (Exception e) {
            Log.e("", e.getMessage() + e.getCause());
            e.printStackTrace();
        }
    }

    public void load() {
        train();
    }

    public int getProb() {
        return mProb;
    }
}
I have faced similar challenges recently; here are the things which helped me get better results (a preprocessing sketch follows this list):
Crop the faces from the images - this removes unnecessary pixels at inference time.
Resize the cropped face images - this matters when detecting face landmarks; try different scales on test sets to understand what works best. It also affects inference time: the smaller the image, the faster the inference.
Improve the brightness of the face images - I found this really helpful; detecting face landmarks in darker images did not work well. This is mainly due to the model, which was pre-trained mostly on white faces - understanding the training data helps when dealing with bias.
Convert to grayscale images - I have seen this suggested in many forums, the claim being that it helps find edges efficiently, and processing time is lower compared to colour (3-channel RGB) images - however, this did not help much in my case.
Capture (register) as many images per person as possible, in different angles, lighting conditions and other variations - this one really helps, since recognition compares against the encodings of the stored images.
Implement 1-1 comparison for face verification where possible - for example, in my system I captured 10 pictures per person, and at verification time I compare against those 10 pictures instead of the encodings of every person stored in the system. This can still produce false positives, but the use cases are limited in this setup; I use it for face authentication, comparing the new face against the existing faces registered under the same mobile number.
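To make the first four points concrete, here is a minimal preprocessing sketch in C++ (the question's code is JavaCV, but the OpenCV calls map across almost one-to-one; the function name and cascade are illustrative assumptions, and the 70x70 size matches the WIDTH/HEIGHT constants in the code above):
#include <opencv2/opencv.hpp>

cv::Mat preprocessFace(const cv::Mat &frame, cv::CascadeClassifier &faceCascade)
{
    // Convert to grayscale: one channel is cheaper and is what LBPH expects.
    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

    // Crop the face region so background pixels don't pollute the features.
    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(gray, faces);
    if (faces.empty())
        return cv::Mat(); // no face found; the caller should skip this frame
    cv::Mat face = gray(faces[0]);

    // Resize to a fixed training size (70x70, as in the code above).
    cv::resize(face, face, cv::Size(70, 70));

    // Improve brightness/contrast; histogram equalization is a cheap option.
    cv::equalizeHist(face, face);
    return face;
}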
My understanding as of today: face recognition works well but is not 100% accurate, so we have to understand the model architecture, the training data and our own requirements, and deploy accordingly to get a better outcome. Here are some points which helped me improve the overall system:
Implement a fallback method - give the user an option when the system fails to detect them correctly; for example, if face authentication fails for some reason, show them an enter-PIN option.
In critical systems, add periodic human intervention to confirm the system's result - for example, if the system denies a user based on the FR result, have a human agent verify the failure before letting the user through.
Use multiple authentication factors - deploy face recognition in addition to an existing system; for example, after the user has logged in with credentials, verify that it is the intended person using face recognition.
Design your user interface so that, at verification time, the user knows how to behave (eyes open, mouth closed, etc.) without hurting the user experience.
Provide clear instructions to users when they deal with the system - for example, let them know an FR system is integrated and that they need to show their face in good lighting conditions, etc.
When I try to build this code, the line struct my_error_mgr jerr; gives the error "Variable has incomplete type 'struct my_error_mgr'".
#import "Engine.h"
#include "jpeglib.h"
#implementation Engine
+(void)test{
struct jpeg_decompress_struct cinfo;
struct my_error_mgr jerr;
FILE * infile;
if ((infile = fopen(filename, "rb")) == NULL) {
fprintf(stderr, "can't open %s\n", filename);
return 0;
}
cinfo.err = jpeg_std_error(&jerr.pub);
jerr.pub.error_exit = my_error_exit;
if (setjmp(jerr.setjmp_buffer)) {
jpeg_destroy_decompress(&cinfo);
fclose(infile);
return 0;
}
jpeg_create_decompress(&cinfo);
jpeg_stdio_src(&cinfo, infile);
(void) jpeg_read_header(&cinfo, TRUE);
jvirt_barray_ptr* coeffs_array;
coeffs_array = jpeg_read_coefficients(&cinfo);
BOOL done = FALSE;
for (int ci = 0; ci < 3; ci++)
{
JBLOCKARRAY buffer_one;
JCOEFPTR blockptr_one;
jpeg_component_info* compptr_one;
compptr_one = cinfo.comp_info + ci;
for (int by = 0; by < compptr_one->height_in_blocks; by++)
{
buffer_one = (cinfo.mem->access_virt_barray)((j_common_ptr)&cinfo, coeffs_array[ci], by, (JDIMENSION)1, FALSE);
for (int bx = 0; bx < compptr_one->width_in_blocks; bx++)
{
blockptr_one = buffer_one[0][bx];
for (int bi = 0; bi < 64; bi++)
{
blockptr_one[bi]++;
}
}
}
}
write_jpeg(output, &cinfo, coeffs_array); // saving modified JPEG to the output file
jpeg_destroy_decompress(&cinfo);
fclose(infile);
}
#end
When I comment out all the lines related to this error, I then get an EXC_BAD_ACCESS runtime error on the line
(void) jpeg_read_header(&cinfo, infile);
I'm trying to modify the DCT coefficients of a saved JPEG image on iOS, with the eventual goal of creating a JPEG steganography app. The code above attempts to add one to each DCT coefficient. I'm using the libjpeg library; the code is a combination of Objective-C and C.
A quick note: the variable filename is equal to "/Users/ScottBouloutian/Library/Application Support/iPhone Simulator/7.0.3/Applications/27A0450E-4685-4C3E-AAC8-A0CC6C85359E/Crypsis.app/screen.jpg", which is the path to the JPEG image I am trying to modify.
What is wrong with the code I have? Is it something dumb like a missing include, or something wrong with my file path?
Whatever struct my_error_mgr is, the compiler can't find its declaration as a type. You need to include the header that contains that declaration (or define the struct yourself).
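For reference, my_error_mgr is not declared in jpeglib.h; it comes from libjpeg's example.c, so you have to define it yourself (or include a project header that does). A minimal sketch of the missing definition, adapted from that example:
#include <setjmp.h>
#include "jpeglib.h"

struct my_error_mgr {
    struct jpeg_error_mgr pub;   /* "public" fields used by libjpeg */
    jmp_buf setjmp_buffer;       /* for returning to the caller on error */
};

static void my_error_exit(j_common_ptr cinfo)
{
    /* cinfo->err really points to a my_error_mgr, since pub is its first field */
    struct my_error_mgr *myerr = (struct my_error_mgr *) cinfo->err;
    (*cinfo->err->output_message)(cinfo); /* display the error message */
    longjmp(myerr->setjmp_buffer, 1);     /* return control to the setjmp point */
}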
I was getting the following error in my code:
Variable has incomplete type 'RCTLayoutMetrics' (aka 'struct CG_BOXABLE')
I upgraded to Xcode Version 9.2 (9C40b) and it fixed the issue.
I am trying to save an OpenCV image to the hard drive.
Here is what I tried:
public void SaveImage(Mat mat) {
    Mat mIntermediateMat = new Mat();
    Imgproc.cvtColor(mRgba, mIntermediateMat, Imgproc.COLOR_RGBA2BGR, 3);
    File path =
        Environment.getExternalStoragePublicDirectory(
            Environment.DIRECTORY_PICTURES);
    String filename = "barry.png";
    File file = new File(path, filename);
    Boolean bool = null;
    filename = file.toString();
    bool = Highgui.imwrite(filename, mIntermediateMat);
    if (bool == true)
        Log.d(TAG, "SUCCESS writing image to external storage");
    else
        Log.d(TAG, "Fail writing image to external storage");
}
Can anyone show me how to save that image with OpenCV 2.4.3?
Your question is a bit confusing: you ask about OpenCV on the desktop, but your code is for Android, and you ask about IplImage, but your posted code uses C++ and Mat. Assuming you're on the desktop using C++, you can do something along the lines of:
cv::Mat image;
std::string image_path;
//load/generate your image and set your output file path/name
//...
//write your Mat to disk as an image
cv::imwrite(image_path, image);
...Or for a more complete example:
void SaveImage(cv::Mat mat)
{
    cv::Mat img;
    cv::cvtColor(...); //not sure where the variables in your example come from
    std::string store_path("..."); //put your output path here
    bool write_success = cv::imwrite(store_path, img);
    //do your logging...
}
The image format is chosen based on the supplied filename; e.g. if your store_path string was "output_image.png", then imwrite would save it as a PNG image. You can see the list of valid extensions in the OpenCV docs.
One caveat to be aware of when writing images to disk with OpenCV is that the scaling will differ depending on the Mat type; that is, for floats the images are expected to be within the range [0, 1], while for say, unsigned chars they'll be from [0, 256).
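As a sketch of that caveat (the file name is arbitrary, and the includes/namespace are assumed as in the snippets above): a float Mat holding values in [0, 1] should be converted to 8-bit before writing, or the saved file will not look as expected:
cv::Mat float_img = cv::Mat::ones(64, 64, CV_32FC1) * 0.5; // mid-gray in float terms
cv::Mat img_8u;
float_img.convertTo(img_8u, CV_8U, 255.0); // scale [0, 1] up to [0, 255]
cv::imwrite("gray.png", img_8u);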
For IplImage, I'd advise just switching to Mat, as the old C interface is deprecated. You can convert an IplImage to a Mat via cvarrToMat and then use the Mat, e.g.
IplImage* oldC0 = cvCreateImage(cvSize(320, 240), 16, 1);
Mat newC = cvarrToMat(oldC0);
//now you can use cv::imwrite with newC
Alternatively, you can convert an IplImage to a Mat just with
Mat newC(oldC0); //where newC is a Mat and oldC0 is your IplImage
Also, I just noticed this tutorial on the OpenCV website, which gives you a walk-through on loading and saving images in a (desktop) environment.
I'm trying to use facial recognition on Android. Everything loads OK except the haarcascade_frontalface_alt2.xml file, which I don't know how to load using JavaCV.
This is the code I have:
public static void detectFacialFeatures()
{
    // The cascade definition to be used for detection.
    // This will redirect the OpenCV errors to the Java console to give you
    // feedback about any problems that may occur.
    new JavaCvErrorCallback();

    // Load the original image.
    // We need a grayscale image in order to do the recognition, so we
    // create a new image of the same size as the original one.
    IplImage grayImage = IplImage.create(iplImage.width(), iplImage.height(), IPL_DEPTH_8U, 1);

    // We convert the original image to grayscale.
    cvCvtColor(iplImage, grayImage, CV_BGR2GRAY);
    CvMemStorage storage = CvMemStorage.create();

    // We instantiate a classifier cascade to be used for detection, using the cascade definition.
    CvHaarClassifierCascade cascade = new CvHaarClassifierCascade(cvLoad("./haarcascade_frontalface_alt2.xml"));

    // We detect the faces.
    CvSeq faces = cvHaarDetectObjects(grayImage, cascade, storage, 1.1, 1, 0);
    Log.d("CARAS", "Hay " + faces.total() + " caras ahora mismo"); // "There are N faces right now"
}
The problem is here:
CvHaarClassifierCascade(cvLoad("./haarcascade_frontalface_alt2.xml"));
I have tried putting the XML file into the /assets folder, but I have no idea how I should load it from there. It always gives me the following error:
03-26 17:31:25.385: E/cv::error()(14787): OpenCV Error: Null pointer (Invalid classifier cascade) in CvSeq* cvHaarDetectObjectsForROC(const CvArr*, CvHaarClassifierCascade*, CvMemStorage*, std::vector<int>&, std::vector<double>&, double, int, int, CvSize, CvSize, bool), file /home/saudet/projects/cppjars/OpenCV-2.4.4/modules/objdetect/src/haar.cpp, line 1514
...
Looking more closely at the error, it points to this code line:
CvSeq faces = cvHaarDetectObjects(grayImage, cascade, storage, 1.1, 1, 0);
That's why I'm pretty sure the problem comes from loading haarcascade_frontalface_alt2.xml.
Thanks for your help.
P.S.: I want to include the cascade in the APK, not on the SD card.
If your cascade is on the SD card you can use:
CascadeClassifier cascade = new CascadeClassifier(Environment.getExternalStorageDirectory().getAbsolutePath() + "/cascade.xml");
Environment.getExternalStorageDirectory().getAbsolutePath() gives you the right path to the SD card, and the rest is the path to your file on the SD card.
You can pack your file into the APK and then copy it to an external location so it is accessible by OpenCV functions:
try {
    File learnedInputFile = new File(Environment.getExternalStorageDirectory().getPath() + "/learnedData.xml");
    if (!learnedInputFile.exists()) {
        InputStream learnedDataInputStream = assetManager.open("learnedData.xml");
        FileOutputStream learnedDataOutputStream = new FileOutputStream(learnedInputFile);
        // copy the file from the asset folder to the external location, i.e. the sdcard
        byte[] buffer = new byte[300];
        int n = 0;
        while (-1 != (n = learnedDataInputStream.read(buffer))) {
            learnedDataOutputStream.write(buffer, 0, n);
        }
        learnedDataInputStream.close();
        learnedDataOutputStream.close();
    }
    classifier.load(learnedInputFile.getAbsolutePath());
} catch (IOException exception) {
    // there is no learned data; train the ML algorithm or show an error, etc.
}