Web camera can't see any face in face_tracking project - opencv

I am making a face_detection project using OpenCV, Arduino and Processing, but I have run into a problem.
The code compiles without errors, but the web camera of my laptop can't see any face; I only get a dark window.
import hypermedia.video.*;
import java.awt.Rectangle;
import processing.video.*;

OpenCV opencv;

// contrast/brightness values
int contrast_value = 0;
int brightness_value = 0;

void setup() {
  size( 1000, 500 );
  opencv = new OpenCV( this );
  opencv.capture( width, height ); // open video stream
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT ); // load detection description, here-> front face detection : "haarcascade_frontalface_alt.xml"

  // print usage
  println( "Drag mouse on X-axis inside this sketch window to change contrast" );
  println( "Drag mouse on Y-axis inside this sketch window to change brightness" );
}

public void stop() {
  opencv.stop();
  super.stop();
}

void draw() {
  // grab a new frame
  // and convert to gray
  opencv.read();
  opencv.convert( GRAY );
  opencv.contrast( contrast_value );
  opencv.brightness( brightness_value );

  // proceed detection
  Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );

  // display the image
  image( opencv.image(), 0, 0 );

  // draw face area(s)
  noFill();
  stroke(255,0,0);
  for( int i=0; i<faces.length; i++ ) {
    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height );
  }
}

void mouseDragged() {
  contrast_value   = (int) map( mouseX, 0, width, -128, 128 );
  brightness_value = (int) map( mouseY, 0, height, -128, 128 ); // map against height, not width
}
The output is a black window. Why is this happening?

I've never used OpenCV with Processing, but I think you're missing something in the opencv.capture part, since I see you're not specifying anything about the source of the video stream.
The library's examples include a live camera test: it gets the capture working first, and then everything else follows from that.
The extract of the code that I think may solve your problem is this:
void setup() {
  size(640, 480);
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();
}
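For a fuller picture, here is a minimal sketch along those lines; it's my reconstruction of the library's live camera test (so the details may differ slightly from the actual example), feeding each captured frame to OpenCV and outlining the detected faces:

import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();
}

void draw() {
  scale(2); // the capture runs at half size, so scale the drawing up
  opencv.loadImage(video); // hand the current frame to OpenCV
  image(video, 0, 0);

  // outline every detected face
  noFill();
  stroke(0, 255, 0);
  Rectangle[] faces = opencv.detect();
  for (int i = 0; i < faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}

void captureEvent(Capture c) {
  c.read();
}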
I think this might help!
Source: OpenCV for Processing on GitHub

Related

How to Flip FaceOSC in Processing3.2.1

I am new to Processing and am now trying to use FaceOSC. Everything is already done, but the game I made is hard to play when everything is not a mirror view. So I want to flip the data that FaceOSC sends to Processing to create the video.
I'm not sure whether FaceOSC sends the video, because I've tried flipping it like a video and it doesn't work. I also flipped it like an image, and the canvas, but it still doesn't work. Or maybe I did it wrong. Please HELP!
//XXXXXXX// This is some of my code.
import oscP5.*;
import codeanticode.syphon.*;

OscP5 oscP5;
SyphonClient client;
PGraphics canvas;

boolean found;
PVector[] meshPoints;

void setup() {
  size(640, 480, P3D);
  frameRate(30);
  initMesh();
  oscP5 = new OscP5(this, 8338);

  // USE THESE 2 EVENTS TO DRAW THE
  // FULL FACE MESH:
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "loadMesh", "/raw");
  // plugin for mouth
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");

  // initialize the syphon client with the name of the server
  client = new SyphonClient(this, "FaceOSC");
  // prep the PGraphics object to receive the camera image
  canvas = createGraphics(640, 480, P3D);
}
void draw() {
  background(0);
  stroke(255);
  // flip like a video here, does not work
  /* pushMatrix();
  translate(canvas.width, 0);
  scale(-1, 1);
  image(canvas, -canvas.width, 0, width, height);
  popMatrix(); */
  image(canvas, 0, 0, width, height);
  if (found) {
    fill(100);
    drawFeature(faceOutline);
    drawFeature(leftEyebrow);
    drawFeature(rightEyebrow);
    drawFeature(nosePart1);
    drawFeature(nosePart2);
    drawFeature(leftEye);
    drawFeature(rightEye);
    drawFeature(mouthPart1);
    drawFeature(mouthPart2);
    drawFeature(mouthPart3);
    drawFeature(mouthPart4);
    drawFeature(mouthPart5);
  }
}
//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
void drawFeature(int[] featurePointList) {
  for (int i = 0; i < featurePointList.length; i++) {
    PVector meshVertex = meshPoints[featurePointList[i]];
    if (i > 0) {
      PVector prevMeshVertex = meshPoints[featurePointList[i-1]];
      line(meshVertex.x, meshVertex.y, prevMeshVertex.x, prevMeshVertex.y);
    }
    ellipse(meshVertex.x, meshVertex.y, 3, 3);
  }
}
//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
public void found(int i) {
  // println("found: " + i); // 1 == found, 0 == not found
  found = i == 1;
}
//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The scale() and translate() snippet you're trying to use makes sense, but it looks like you're using it in the wrong place. I'm not sure what canvas is supposed to do, but I'm guessing the face features drawn by the drawFeature() calls are what you want to mirror. If so, you should place those calls between pushMatrix() and popMatrix(), right after the scale().
I would try something like this in draw():
void draw() {
  background(0);
  stroke(255);
  // flip horizontal
  pushMatrix();
  translate(width, 0);
  scale(-1, 1);
  if (found) {
    fill(100);
    drawFeature(faceOutline);
    drawFeature(leftEyebrow);
    drawFeature(rightEyebrow);
    drawFeature(nosePart1);
    drawFeature(nosePart2);
    drawFeature(leftEye);
    drawFeature(rightEye);
    drawFeature(mouthPart1);
    drawFeature(mouthPart2);
    drawFeature(mouthPart3);
    drawFeature(mouthPart4);
    drawFeature(mouthPart5);
  }
  popMatrix();
}
The push/pop matrix calls isolate the coordinate space.
The coordinate system origin (0,0) is the top-left corner: this is why everything is translated by the width before scaling x by -1. Because the pivot is not at the centre, simply mirroring won't leave the content in the same place.
For more details, check out the Processing 2D Transformations tutorial.
Here's a basic example:
boolean mirror;

void setup(){
  size(640,480);
}

void draw(){
  if(mirror){
    pushMatrix();
    //translate, otherwise mirrored content will be off screen (pivot is at top left corner, not centre)
    translate(width,0);
    //scale x by -1 to mirror
    scale(-1,1);
    //draw mirrored content
    drawStuff();
    popMatrix();
  }else{
    drawStuff();
  }
}

//this could be the face preview
void drawStuff(){
  background(0);
  triangle(0,0,width,0,0,height);
  text("press m to toggle mirroring",450,470);
}

void keyPressed(){
  if(key == 'm') mirror = !mirror;
}
Another option is to mirror each coordinate, but in your case it would be a lot of effort when scale(-1,1) will do the trick. For reference, to mirror a coordinate, you simply subtract the current value from the largest value it can have:
void setup(){
  size(640,480);
  background(255);
}

void draw(){
  ellipse(mouseX,mouseY,30,30);
  //subtract the current value (mouseX in this case) from the largest value it can have (width in this case)
  ellipse(width-mouseX,mouseY,30,30);
}

Constructor OpenCV is undefined

I'm trying to learn face recognition in Processing 2 using OpenCV. This code is copied straight out of Jan Vantomme's book; however, I am using a newer version of OpenCV than the one listed in the book. The error received is: "The constructor OpenCV(sketch_ORIGopenCV) is undefined." Does anyone know if OpenCV is not upward compatible, or what else causes this error?
import gab.opencv.*; // original code in Vantomme's book shows "import hypermedia.video.*;"
                     // I assume that's updated to gab.opencv.* but should still work.

OpenCV opencv;

void setup()
{
  size( 640, 480 );
  opencv = new OpenCV( this ); // <----- THIS IS THE ERROR LINE
  opencv.capture( width, height );
}

void draw()
{
  opencv.read();
  opencv.flip( OpenCV.flip_HORIZONTAL );
  image( opencv.image(), 0, 0 );
}
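For reference, gab.opencv is not API-compatible with hypermedia.video: its constructor requires the dimensions (or an image) in addition to the sketch, and capturing is handled by the processing.video library rather than an opencv.capture() call. Here is a minimal sketch under those assumptions; since I'm not certain gab.opencv exposes an equivalent flip(), it mirrors the Processing canvas instead:

import gab.opencv.*;
import processing.video.*;

Capture video;
OpenCV opencv;

void setup()
{
  size( 640, 480 );
  video = new Capture( this, width, height );
  opencv = new OpenCV( this, width, height ); // gab.opencv needs the dimensions here
  video.start();
}

void draw()
{
  if ( video.available() ) video.read();
  opencv.loadImage( video ); // hand the current frame to OpenCV
  // mirror on the canvas instead of opencv.flip()
  pushMatrix();
  translate( width, 0 );
  scale( -1, 1 );
  image( video, 0, 0 );
  popMatrix();
}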

placing two images side by side, opencv 2.3, c++

I'm reading two images and want to get a third one which is just the combination of the two.
img_object and img_scene don't have the same size.
int main( int argc, char** argv )
{
    Mat combine;
    Mat img_object = imread( object_filename, CV_LOAD_IMAGE_GRAYSCALE );
    Mat img_scene  = imread( scene_filename,  CV_LOAD_IMAGE_GRAYSCALE );
    if( !img_object.data || !img_scene.data )
    { std::cout << " --(!) Error reading images " << std::endl; return -1; }

    namedWindow( "Display window object", 0 ); // Create a window for display.
    namedWindow( "Display window scene", 0 );
    namedWindow( "Display window combine", 0 );
    imshow( "Display window object", img_object );
    imshow( "Display window scene", img_scene );
    imshow( "Display window combine", combine );
    waitKey(0);
    return 0;
}
There is a very simple way of displaying two images side by side: the hconcat function provided by OpenCV.
Mat image1, image2;
hconcat(image1, image2, image1); // Syntax-> hconcat(source1, source2, destination);
This function can also be used to copy a set of columns from one image to another. Note that hconcat requires both inputs to have the same number of rows and the same type.
Mat image;
Mat columns=image.colRange(20,30);
hconcat(image,columns,image);
// --------------------------------------------------------------
// Function to draw several images into one image.
// Small images are drawn into cells of size cellSize.
// If an image is larger than the cell it will be trimmed.
// If an image is smaller than cellSize there will be a gap between cells.
// --------------------------------------------------------------
char showImages(string title, vector<Mat>& imgs, Size cellSize)
{
    char k = 0;
    namedWindow(title);
    float nImgs = imgs.size();
    int imgsInRow = (int)ceil(sqrt(nImgs));       // You can set this explicitly
    int imgsInCol = (int)ceil(nImgs/imgsInRow);   // You can set this explicitly
    int resultImgW = cellSize.width*imgsInRow;
    int resultImgH = cellSize.height*imgsInCol;
    Mat resultImg = Mat::zeros(resultImgH, resultImgW, CV_8UC3);
    size_t ind = 0;
    for (int i = 0; i < imgsInCol; i++)
    {
        for (int j = 0; j < imgsInRow; j++)
        {
            if (ind < imgs.size())
            {
                int cell_row = i*cellSize.height;
                int cell_col = j*cellSize.width;
                // trim the image to the cell size before copying
                Mat tmp = imgs[ind](Range(0, min(imgs[ind].rows, cellSize.height)),
                                    Range(0, min(imgs[ind].cols, cellSize.width)));
                tmp.copyTo(resultImg(Range(cell_row, cell_row + tmp.rows),
                                     Range(cell_col, cell_col + tmp.cols)));
            }
            ind++;
        }
    }
    imshow(title, resultImg);
    k = waitKey(10);
    return k;
}
If the images are not the same size, combine's width will be equal to the sum of the two widths, while its height must be the larger of the two heights.
Define the combination image like this:
Mat combine(max(img_object.size().height, img_scene.size().height), img_object.size().width + img_scene.size().width, CV_8UC3);
Note that we're just creating a new Mat object with height equal to the maximum height and width equal to the combined width of the pictures (if you need a small margin between the pictures, you need to account for that here).
Then, you can define regions of interest for each side inside combine (using a convenient Mat constructor), and finally copy each image to the corresponding side (here I assume the object goes on the left and the scene goes on the right):
Mat left_roi(combine, Rect(0, 0, img_object.size().width, img_object.size().height));
img_object.copyTo(left_roi);
Mat right_roi(combine, Rect(img_object.size().width, 0, img_scene.size().width, img_scene.size().height));
img_scene.copyTo(right_roi);
Edit: Fixed the typo that TimZaman pointed out.
You can do this with a loop, supposing that your images have the same size:
Mat combine = Mat::zeros(img_object.rows, img_object.cols*2, img_object.type());
for (int i = 0; i < combine.cols; i++) {
    if (i < img_object.cols) {
        img_object.col(i).copyTo(combine.col(i));
    } else {
        img_scene.col(i - img_object.cols).copyTo(combine.col(i));
    }
}
I haven't tested it, but that's the way you can do it. (Note the use of copyTo: a plain assignment between col() headers would not copy any pixels.)
I have tried to put multiple images side by side; just try this.
Mat combine = Mat::zeros(img_buff[0].rows,
                         img_buff[0].cols * (int)img_index.size(), img_buff[0].type());
int cols = img_buff[0].cols;
for (int i = 0; i < combine.cols; i++) {
    int fram_index = i / img_buff[0].cols;
    cout << fram_index << endl;
    img_buff[fram_index].col(i % cols).copyTo(combine.col(i));
}
imshow("matching plot", combine);
Please pay attention: when you copy rows or columns from one image to another, do this:
A.row(j).copyTo(A.row(i));
Don't do this:
A.row(i) = A.row(j);
The assignment only re-points a temporary row header; it does not copy any pixel data.

DirectX: How to make a 2D image constantly pan / scroll in place

I am trying to find an efficient way to pan a 2D image in place using DirectX 9.
I've attached a picture below that hopefully explains what I want to do. Basically, I want to scroll the tu and tv coordinates of all the quad's vertices across the texture to produce a "scrolling in place" effect for a 2D texture.
The first image below represents my loaded texture.
The second image is the texture with the tu,tv coordinates of the four vertices in each corner showing the standard rendered image.
The third image illustrates what I want to happen; I want to move the vertices in such a way that the box that is rendered straddles the end of the image and wraps back around in such a way that the texture will be rendered as shown with the two halves of the cloud separated.
The fourth image shows my temporary (wasteful) solution; I simply doubled the image and pan across until I reach the far right edge, at which point I reset the vertices' tu and tv so that the box being rendered is back on the far right.
Is there a legitimate way to do this without breaking everything into two separate quads?
I've added details of my set up and my render code below, if that helps clarify a path to a solution with my current design.
I have a function that sets up DirectX for 2D rendering as follows. I've added wrap properties to texture stage 0 as recommended:
VOID SetupDirectXFor2DRender()
{
    pd3dDevice->SetSamplerState( 0, D3DSAMP_MINFILTER, D3DTEXF_POINT );
    pd3dDevice->SetSamplerState( 0, D3DSAMP_MAGFILTER, D3DTEXF_POINT );
    pd3dDevice->SetSamplerState( 0, D3DSAMP_MIPFILTER, D3DTEXF_POINT );

    // Set for wrapping textures to enable panning sprite render
    pd3dDevice->SetSamplerState( 0, D3DSAMP_ADDRESSU, D3DTADDRESS_WRAP );
    pd3dDevice->SetSamplerState( 0, D3DSAMP_ADDRESSV, D3DTADDRESS_WRAP );

    pd3dDevice->SetRenderState( D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL );
    pd3dDevice->SetRenderState( D3DRS_ALPHAREF, 0 );
    pd3dDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, true );
    pd3dDevice->SetRenderState( D3DRS_ALPHATESTENABLE, false );
    pd3dDevice->SetRenderState( D3DRS_SRCBLEND, D3DBLEND_SRCALPHA );
    pd3dDevice->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA );

    pd3dDevice->SetTextureStageState( 0, D3DTSS_COLOROP, D3DTOP_MODULATE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_ALPHAOP, D3DTOP_MODULATE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_ALPHAARG2, D3DTA_DIFFUSE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_COLOROP, D3DTOP_SELECTARG1 );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE );
    return;
}
On each frame, I render things as follows:
VOID RenderAllEntities()
{
    HRESULT hResult;
    // Void pointer for DirectX buffer locking
    VOID* pVoid;

    hResult = pd3dDevice->Clear( 0,
                                 NULL,
                                 D3DCLEAR_TARGET,
                                 0x0,
                                 1.0f,
                                 0 );
    hResult = pd3dDevice->BeginScene();

    // Do rendering on the back buffer here
    hResult = pd3dDevice->SetFVF( CUSTOMFVF );
    hResult = pd3dDevice->SetStreamSource( 0, pVertexBuffer, 0, sizeof(CUSTOM_VERTEX) );

    for ( std::vector<RenderContext>::iterator renderContextIndex = queuedContexts.begin(); renderContextIndex != queuedContexts.end(); ++renderContextIndex )
    {
        // Render each sprite
        for ( UINT uiIndex = 0; uiIndex < (*renderContextIndex).uiNumSprites; ++uiIndex )
        {
            // Lock the vertex buffer into memory
            hResult = pVertexBuffer->Lock( 0, 0, &pVoid, 0 );
            // Copy our vertex buffer to memory
            ::memcpy( pVoid, &renderContextIndex->vertexLists[uiIndex], sizeof(vertexList) );
            // Unlock buffer
            hResult = pVertexBuffer->Unlock();

            hResult = pd3dDevice->SetTexture( 0, (*renderContextIndex).textures[uiIndex]->GetTexture() );
            hResult = pd3dDevice->DrawPrimitive( D3DPT_TRIANGLELIST, 0, 6 );
        }
    }

    // Complete and present the rendered scene
    hResult = pd3dDevice->EndScene();
    hResult = pd3dDevice->Present( NULL, NULL, NULL, NULL );
    return;
}
To test SetTransform, I tried adding the following (sloppy but temporary) code block inside the render code before the call to DrawPrimitive:
{
static FLOAT di = 0.0f;
static FLOAT dy = 0.0f;
di += 0.03f;
dy += 0.03f;
// Build and set translation matrix
D3DXMATRIX ret;
D3DXMatrixIdentity(&ret);
ret(3, 0) = di;
ret(3, 1) = dy;
//ret(3, 2) = dz;
hResult = pd3dDevice->SetTransform( D3DTS_TEXTURE0, &ret );
}
This does not make any of my rendered sprites pan about.
I've been working through DirectX tutorials and reading the MS documentation to catch up on things but there are definitely holes in my knowledge, so I hope I'm not doing anything too completely brain-dead.
Any help super appreciated.
Thanks!
This should be quite easy to do with one quad.
Assuming that you're using DX9 with the fixed-function pipeline, you can translate your texture with IDirect3DDevice9::SetTransform (doc), using the proper D3DTRANSFORMSTATETYPE (doc), D3DTS_TEXTURE0 for stage 0, and a 2D translation matrix. You must ensure that your sampler states D3DSAMP_ADDRESSU and D3DSAMP_ADDRESSV (doc) are set to D3DTADDRESS_WRAP (doc). This tiles the texture virtually, so that negative UV values, or values greater than 1, are mapped onto an infinite repetition of your texture.
Note that the fixed-function pipeline ignores the texture matrix until you enable it with SetTextureStageState(0, D3DTSS_TEXTURETRANSFORMFLAGS, D3DTTFF_COUNT2), and I believe that for a 2D texture transform the translation belongs in the third row of the matrix (ret(2, 0) and ret(2, 1), i.e. _31/_32) rather than the fourth; that is most likely why your SetTransform test showed no movement.
If you're using shaders or another version of DirectX, you can translate the texture coordinates yourself in the shader, or manipulate the UV values of your vertices.

How to stop a for loop (OpenCV)

I am using Processing (processing.org) for a project that requires face tracking. The problem now is that the program is going to run out of memory because of a for loop. I want to stop the loop or at least solve the problem of running out of memory. This is the code.
import hypermedia.video.*;
import java.awt.Rectangle;

OpenCV opencv;

// contrast/brightness values
int contrast_value = 0;
int brightness_value = 0;

void setup() {
  size( 900, 600 );
  opencv = new OpenCV( this );
  opencv.capture( width, height ); // open video stream
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT ); // load detection description, here-> front face detection : "haarcascade_frontalface_alt.xml"

  // print usage
  println( "Drag mouse on X-axis inside this sketch window to change contrast" );
  println( "Drag mouse on Y-axis inside this sketch window to change brightness" );
}

public void stop() {
  opencv.stop();
  super.stop();
}

void draw() {
  // grab a new frame
  // and convert to gray
  opencv.read();
  opencv.convert( GRAY );
  opencv.contrast( contrast_value );
  opencv.brightness( brightness_value );

  // proceed detection
  Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );

  // display the image
  image( opencv.image(), 0, 0 );

  // draw face area(s)
  noFill();
  stroke(255,0,0);
  for( int i=0; i<faces.length; i++ ) {
    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height );
  }
}

void mouseDragged() {
  contrast_value   = (int) map( mouseX, 0, width, -128, 128 );
  brightness_value = (int) map( mouseY, 0, height, -128, 128 ); // map against height, not width
}
Thank you!
A few points...
1. As George mentioned in the comments, you can reduce the size of the capture area, which will drastically reduce the amount of RAM your sketch uses to analyse the face tracking. Try making two global variables called captureWidth and captureHeight and setting them to 320 and 240, which is totally sufficient for this (see the sketch at the end of this answer).
2. You can increase the amount of memory that your sketch uses by default in the Java Virtual Machine. Processing defaults to 128 MB, I think, but if you go to the Preferences you will see a checkbox to "Increase maximum available memory to [x]"... I usually make mine 1500 MB, but it depends on your machine what you can handle. Don't try to make it bigger than 1800 MB unless you are on a 64-bit machine and are using Processing 2.0 in 64-bit mode.
3. To actually break the loop, use the break command: http://processing.org/reference/break.html. But please understand why you want to use it first, as it will simply jump you out of your loop (see the snippet just below this list).
4. If you only want to show a certain number of faces, you can cap the loop index instead of iterating over the whole faces array, which might help.
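For point 3, here is a minimal illustration using the face loop from the sketch above; break simply exits the loop, so this draws at most the first detected face:

for ( int i = 0; i < faces.length; i++ ) {
  rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height );
  break; // bail out after the first face
}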
But I think the loop itself isn't the culprit here; it's more likely the memory footprint. Start with suggestions 1 and 2 and report back...
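As a rough sketch of suggestion 1 (my adaptation of the question's code; captureWidth and captureHeight are the hypothetical globals mentioned above): capture at a small fixed resolution, then stretch both the frame and the detected rectangles up to the window.

import hypermedia.video.*;
import java.awt.Rectangle;

OpenCV opencv;
int captureWidth  = 320; // analysis resolution, much cheaper than 900x600
int captureHeight = 240;

void setup() {
  size( 900, 600 );
  opencv = new OpenCV( this );
  opencv.capture( captureWidth, captureHeight ); // grab small frames
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );
}

void draw() {
  opencv.read();
  opencv.convert( GRAY );
  Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );

  // stretch the small frame to fill the window
  image( opencv.image(), 0, 0, width, height );

  // scale the detected rectangles from capture coordinates to window coordinates
  float sx = (float) width / captureWidth;
  float sy = (float) height / captureHeight;
  noFill();
  stroke( 255, 0, 0 );
  for ( int i = 0; i < faces.length; i++ ) {
    rect( faces[i].x * sx, faces[i].y * sy, faces[i].width * sx, faces[i].height * sy );
  }
}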
