Constructor OpenCV is undefined - opencv

I am trying to learn face recognition in Processing 2 using OpenCV. This code is copied right out of Jan Vantomme's book; however, I am using a newer version of the OpenCV library than the one listed in the book. The error I receive is: "The constructor(sketch_ORIGopenCV) is undefined." Does anyone know if OpenCV is not upward compatible, or what else causes this error?
import gab.opencv.*; // original code in Vantomme's book shows "import hypermedia.video.*;"
                     // I assume that's updated to gab.opencv.*, but it should still work.

OpenCV opencv;

void setup()
{
  size( 640, 480 );
  opencv = new OpenCV( this );      // <----- THIS IS THE ERROR LINE *********
  opencv.capture( width, height );
}

void draw()
{
  opencv.read();
  opencv.flip( OpenCV.flip_HORIZONTAL );
  image( opencv.image(), 0, 0 );
}
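
The constructor error happens because the two libraries do not share an API: hypermedia.video's OpenCV offered a one-argument constructor plus capture(), read(), flip() and image(), while gab.opencv (OpenCV for Processing) takes the frame size (or an image) in its constructor and leaves the camera to processing.video's Capture class. Below is a rough, hedged sketch of the same idea under gab.opencv, assuming the processing.video library is installed; the mirroring uses Processing's own scale() because I am not certain the newer library has a flip() equivalent.

import gab.opencv.*;
import processing.video.*;

Capture video;
OpenCV opencv;

void setup() {
  size( 640, 480 );
  video = new Capture( this, width, height );   // camera is handled by processing.video
  opencv = new OpenCV( this, width, height );   // gab.opencv wants dimensions (or an image), not just "this"
  video.start();
}

void draw() {
  if ( video.available() ) {
    video.read();
  }
  opencv.loadImage( video );            // feed the current frame to the library
  PImage frame = opencv.getSnapshot();  // getSnapshot() replaces the old image()
  pushMatrix();                         // mirror with Processing's own transform,
  scale( -1, 1 );                       // since flip() belonged to hypermedia.video
  image( frame, -width, 0 );
  popMatrix();
}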

Related

Web camera can't see any face in face_tracking project

I am building a face_detection project using OpenCV, Arduino and Processing, but I have run into a problem.
The code compiles, but my laptop's web camera can't see any face; it only shows a dark window.
import hypermedia.video.*;
import java.awt.Rectangle;
import processing.video.*;

OpenCV opencv;

int contrast_value = 0;
int brightness_value = 0;

void setup() {
  size( 1000, 500 );
  opencv = new OpenCV( this );
  opencv.capture( width, height );                   // open video stream
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load detection description, here -> front face detection: "haarcascade_frontalface_alt.xml"

  // print usage
  println( "Drag mouse on X-axis inside this sketch window to change contrast" );
  println( "Drag mouse on Y-axis inside this sketch window to change brightness" );
}

public void stop() {
  opencv.stop();
  super.stop();
}

void draw() {
  // grab a new frame and convert it to gray
  opencv.read();
  opencv.convert( GRAY );
  opencv.contrast( contrast_value );
  opencv.brightness( brightness_value );

  // proceed with detection
  Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );

  // display the image
  image( opencv.image(), 0, 0 );

  // draw face area(s)
  noFill();
  stroke( 255, 0, 0 );
  for ( int i = 0; i < faces.length; i++ ) {
    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height );
  }
}

void mouseDragged() {
  contrast_value   = (int) map( mouseX, 0, width, -128, 128 );
  brightness_value = (int) map( mouseY, 0, height, -128, 128 );  // map against height, not width
}
The output is just a black window. Why is this happening?
I've never used OpenCV with Processing, but I think you're missing something in the opencv.capture part, since you're not specifying anything about the source of the video stream.
A live camera test sets up the capture first, and only then does everything else.
The extract of the code that I think may solve your problem is this:
import gab.opencv.*;
import processing.video.*;

Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();
}
I think this might help!
Source: OpenCV for Processing on GitHub
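
For completeness, here is a hedged sketch of a matching draw() loop, assuming the video and opencv objects declared in the extract above. detect() returns java.awt.Rectangle objects, so that import (already present in the question's code) is needed; treat the exact layout as an illustration, not the library's documented example.

void draw() {
  if (video.available()) {
    video.read();                 // pull the newest camera frame
  }
  opencv.loadImage(video);        // hand the frame to OpenCV for Processing
  image(video, 0, 0);             // the 320x240 capture fills the top-left of the 640x480 window

  // detect() returns java.awt.Rectangle objects, like the faces[] array in the question
  Rectangle[] faces = opencv.detect();
  noFill();
  stroke(0, 255, 0);
  for (int i = 0; i < faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}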

how to apply opencv filters to ARGB images in Processing?

In part of a project, after displaying backgroundimage, I need to hide part of it by creating a mask (named mask1 in my code) and making the first 30,000 pixels of mask1 transparent, then blurring the result with the blur() method of the OpenCV library. The problem is that the OpenCV library seems to ignore the opacity (alpha channel) of the pixels in mask1. As a result, it is impossible to see backgroundimage behind the blurred image produced by the OpenCV library. Here is my code:
import gab.opencv.*;

OpenCV opencv;
int[] userMap;
PImage backgroundimage, mask1;

void setup() {
  backgroundimage = loadImage("test.jpg");
  mask1 = createImage(640, 480, ARGB);
  opencv = new OpenCV(this, 640, 480);
  size(640, 480);
}

void draw() {
  image(backgroundimage, 0, 0);

  mask1.loadPixels();
  for (int index = 0; index < 640*480; index++) {
    if (index < 30000) {
      mask1.pixels[index] = color(0, 0, 0, 0);     // fully transparent
    } else {
      mask1.pixels[index] = color(255, 255, 255);  // opaque white
    }
  }
  mask1.updatePixels();

  opencv.loadImage(mask1);
  opencv.blur(8);
  image(opencv.getSnapshot(), 0, 0);
}
Is there any other way to blur mask1 while keeping its transparency?
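
One possible workaround (an untested sketch, so treat it as an assumption rather than a confirmed fix): OpenCV for Processing drops the alpha channel when mask1 is loaded, and since the transparent pixels are color(0, 0, 0, 0) they come back as plain black in the blurred snapshot. That black-and-white result can then be reused as an alpha channel via Processing's PImage.mask(), which reads a grayscale image as transparency. The last three lines of draw() would become roughly:

opencv.loadImage(mask1);      // alpha is dropped here: the transparent pixels load as plain black
opencv.blur(8);
PImage blurred = opencv.getSnapshot();
blurred.mask(blurred.get());  // reuse the blurred gray values as a new alpha channel
                              // (black -> transparent, white -> opaque, soft edge in between)
image(blurred, 0, 0);         // backgroundimage, drawn earlier in draw(), now shows through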

Colour & Size Issue After STL image Import in Processing

I wrote a program in Processing to import an STL file and then rotate it in 3D as required.
The problem I am facing is that the imported model appears very colorful and very small; it looks quite different from the original.
Can you please help me resolve this problem?
The code and images are given below.
import toxi.geom.*;
import toxi.geom.mesh.*;
import toxi.processing.*;

TriangleMesh mesh;
ToxiclibsSupport gfx;

void setup() {
  size(displayWidth, displayHeight, P3D);
  mesh = (TriangleMesh) new STLReader().loadBinary(sketchPath("check.stl"), STLReader.TRIANGLEMESH);
  gfx = new ToxiclibsSupport(this);
}

void draw() {
  background(51);
  translate(width/2, height/2, 0);
  rotateX(mouseY*0.01);
  rotateY(mouseX*0.01);
  gfx.mesh(mesh, false);
}
I had the same confusion as you, but after scrolling through the reference pages on Processing.org
I found a couple of commands that affect the STL object in a positive way.
Some of those commands are:

directionalLight() (lights the mesh so it is shaded rather than flat-colored)
noStroke() (disables the outline drawn around each triangle)
scale() (increases or decreases the size of a shape by expanding and contracting vertices)

Your issue is directly linked to these commands, so just adjust them according to your requirements:
import toxi.geom.*;
import toxi.geom.mesh.*;
import toxi.processing.*;

TriangleMesh mesh;
ToxiclibsSupport gfx;

void setup() {
  size(displayWidth, displayHeight, P3D);
  mesh = (TriangleMesh) new STLReader().loadBinary(sketchPath("check.stl"), STLReader.TRIANGLEMESH);
  gfx = new ToxiclibsSupport(this);
}

void draw() {
  background(51);
  translate(width/2, height/2, 0);
  rotateX(mouseY*0.01);
  rotateY(mouseX*0.01);
  // two directional lights so the mesh is shaded instead of flat-colored
  directionalLight(192, 168, 128, 0, -1000, -0.5);
  directionalLight(255, 64, 0, 0.5f, -0.5f, -0.1f);
  noStroke();   // hide the triangle outlines
  scale(3);     // enlarge the mesh
  gfx.mesh(mesh, false);
}
Image of the sketch after running the program:

How to store image from cam?

I have just started with processing.js.
The goal of my program is to apply an OpenCV image filter to each video frame.
So my plan was (however, I found out it does not work this way :<):

1. Get the video stream from a Capture object, which is in the processing.video package.
2. Store the current image (I hope it can be stored as a PImage object).
3. Apply an OpenCV image filter.
4. Call the image() method with the filtered PImage object.

I found out how to get the video stream from the cam, but I do not know how to store it.
import processing.video.*;
import gab.opencv.*;

Capture cap;
OpenCV opencv;

public void setup() {
  //size(800, 600);
  size(640, 480);
  colorMode(RGB, 255, 255, 255, 100);
  cap = new Capture(this, width, height);
  opencv = new OpenCV(this, cap);
  cap.start();
  background(0);
}

public void draw() {
  if (cap.available()) {
    cap.read();   // returns void
  }
  image(cap, 0, 0);
}
This code gets the video stream and shows whatever it captures. However, I cannot store a single frame, since Capture.read() returns void.
After storing the current frame, I would like to transform the PImage with OpenCV, like this:
PImage gray, thresh, blur, adaptive;

gray = opencv.getSnapshot();

opencv.threshold(80);
thresh = opencv.getSnapshot();

opencv.loadImage(gray);
opencv.blur(12);
blur = opencv.getSnapshot();

opencv.loadImage(gray);
opencv.adaptiveThreshold(591, 1);
adaptive = opencv.getSnapshot();
Is there a decent way to store and transform the current frame? (I suspect my approach, showing the frame only after saving and transforming the current image, uses a lot of resources depending on the frame rate.)
Thanks for any answer :D
Not sure exactly what you want to do, and you have probably solved it already, but this could be useful for someone anyway...
It seems you can just use the Capture object directly wherever a PImage is expected:

cap = new Capture(this, width, height);
// code starting and reading the capture goes here

PImage snapshot = cap;   // note: this aliases the live Capture; cap.get() returns an independent copy of the frame

// then you can do whatever you wanted to do with the PImage
snapshot.save("snapshot.jpg");

// actually, this seems to work fine too
cap.save("snapshot.jpg");
Use opencv.loadImage(cap). For example:

if (cap.available() == true) {
  cap.read();
}
opencv.loadImage(cap);
opencv.blur(15);
image(opencv.getSnapshot(), 0, 0);

Hope this helps!
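
Putting the two answers together, here is a hedged, self-contained sketch of the whole capture, filter and store flow; the blur(12) setting and the "frame.jpg" filename are just placeholders for illustration.

import processing.video.*;
import gab.opencv.*;

Capture cap;
OpenCV opencv;

void setup() {
  size(640, 480);
  cap = new Capture(this, width, height);
  opencv = new OpenCV(this, width, height);
  cap.start();
}

void draw() {
  if (cap.available()) {
    cap.read();                        // grab the newest frame
  }
  opencv.loadImage(cap);               // hand the frame to OpenCV for Processing
  opencv.blur(12);                     // any of the filters from the question works here
  PImage filtered = opencv.getSnapshot();
  image(filtered, 0, 0);
}

void keyPressed() {
  // press any key to store the currently displayed, filtered frame
  opencv.getSnapshot().save("frame.jpg");   // "frame.jpg" is just a placeholder name
}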

How to stop a for loop (OpenCV)

I am using Processing (processing.org) for a project that requires face tracking. The problem is that the program is going to run out of memory because of a for loop. I want to stop the loop, or at least solve the problem of running out of memory. This is the code:
import hypermedia.video.*;
import java.awt.Rectangle;

OpenCV opencv;

// contrast/brightness values
int contrast_value = 0;
int brightness_value = 0;

void setup() {
  size( 900, 600 );
  opencv = new OpenCV( this );
  opencv.capture( width, height );                   // open video stream
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load detection description, here -> front face detection: "haarcascade_frontalface_alt.xml"

  // print usage
  println( "Drag mouse on X-axis inside this sketch window to change contrast" );
  println( "Drag mouse on Y-axis inside this sketch window to change brightness" );
}

public void stop() {
  opencv.stop();
  super.stop();
}

void draw() {
  // grab a new frame and convert it to gray
  opencv.read();
  opencv.convert( GRAY );
  opencv.contrast( contrast_value );
  opencv.brightness( brightness_value );

  // proceed with detection
  Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );

  // display the image
  image( opencv.image(), 0, 0 );

  // draw face area(s)
  noFill();
  stroke( 255, 0, 0 );
  for ( int i = 0; i < faces.length; i++ ) {
    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height );
  }
}

void mouseDragged() {
  contrast_value   = (int) map( mouseX, 0, width, -128, 128 );
  brightness_value = (int) map( mouseY, 0, height, -128, 128 );  // map against height, not width
}
Thank you!
A few points...

1. As George mentioned in the comments, you can reduce the size of the capture area, which will drastically reduce the amount of RAM your sketch needs for the face tracking (a 320x240 frame holds far fewer pixels than the 900x600 frame you capture now). Try making two global variables called CaptureWidth and CaptureHeight and setting them to 320 and 240, which is totally sufficient for this.

2. You can increase the amount of memory your sketch is allowed to use in the Java Virtual Machine. Processing defaults to 128 MB, I think, but if you go to Preferences you will see a checkbox to "Increase maximum available memory to [x]"... I usually set mine to 1500 MB, but how much you can handle depends on your machine. Don't try to make it bigger than 1800 MB unless you are on a 64-bit machine and are running Processing 2.0 in 64-bit mode.

3. To actually break the loop, use the 'break' command (http://processing.org/reference/break.html), but please understand why you want to use it first, as it simply jumps you out of the loop.

4. If you only want to draw a certain number of faces, you can test the loop index against a limit and break out once you reach it, which might help (see the short sketch after this list).

But I think the loop itself isn't the culprit here; it's more likely the memory footprint. Start with suggestions 1 & 2 and report back...
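
As a hedged illustration of points 3 and 4, the face loop in draw() could be capped like this; MAX_FACES is a made-up name used only for this example.

int MAX_FACES = 1;   // hypothetical limit on how many faces to draw

for (int i = 0; i < faces.length; i++) {
  rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  if (i == MAX_FACES - 1) {
    break;           // jump out of the loop once enough faces have been drawn
  }
}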

Resources