How to correctly grab a matrix from a queue (OpenCV)

I am trying to build a frame-grabber class for an IP camera using std::queue and OpenCV.
The class reads and stores at most 10 frames,
and another class pulls frames from the buffer.
/* Camera.cpp */
struct FRAME
{
cv::Mat Img;
uint64_t Time = 0; // zero-initialized so a default-constructed FRAME is detectable in grab()/main()
bool ReadError = false;
};
class Camera
{
std::queue<FRAME> m_buffer;
std::mutex m_buffer_mutex;
...
};
void Camera::threadFunc_get_stream_from_camera()
{
while (true)
{
FRAME frame;
... // read sequence (if cv::VideoCapture::read fails, the frame-storing step below is skipped)
m_buffer_mutex.lock();
while (m_buffer.size() >= 10)
m_buffer.pop();
m_buffer.push(frame);
m_buffer_mutex.unlock();
}
}
FRAME Camera::grab()
{
std::unique_lock ulock(m_buffer_mutex);
FRAME frame;
if (m_buffer.empty() == false)
{
frame = m_buffer.front();
m_buffer.pop();
}
return frame; // on an empty queue this returns a default FRAME (Img.data == nullptr, Time == 0)
}
/***************/
/* Testing.cpp */
int main()
{
// Camera* cam ..
...
while (true)
{
auto frame = cam->grab();
if (frame.Img.data == nullptr || frame.Time == 0)
continue;
... // frame image processing
}
}
I have kept the program running this way for about 3 days, and no particular bugs or memory leaks were found.
But the reason I'm asking this question is
whether it's right to return a non-pointer (FRAME) from the grab() function.
I'm using C++17, and I think there is no problem with the uint64_t or bool values in the structure, because they are copied in this version.
But how about cv::Mat?
I don't want to use the producer-consumer pattern, because I don't want the testing class's code to end up inside the Camera class.
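A note on that cv::Mat worry: cv::Mat's copy constructor and assignment are reference-counted shallow copies, so returning FRAME by value only copies the Mat header; the pixel data is shared, and the reference count keeps it alive. Because grab() pops the queue's copy before returning, the caller ends up holding the only reference (assuming the capture thread fills a fresh FRAME each iteration, as above). A minimal sketch of these semantics, independent of the code above:

#include <opencv2/core.hpp>
#include <cassert>

int main()
{
    cv::Mat a(4, 4, CV_8UC1, cv::Scalar(0));
    cv::Mat b = a;                      // shallow copy: b shares a's pixel buffer
    b.at<uchar>(0, 0) = 255;
    assert(a.at<uchar>(0, 0) == 255);   // the write is visible through a too

    cv::Mat c = a.clone();              // deep copy: c owns a separate buffer
    c.at<uchar>(0, 0) = 0;
    assert(a.at<uchar>(0, 0) == 255);   // a is unaffected
    return 0;
}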

Related

OpenCV using Windows Form keeps on throwing exceptions about stack overflowing

I've been writing a program and every time I close my form it throws an exception saying my stack overflowed on the line which closes the background worker...
I have faced this problem for some time now; I've tried googling the problem but I haven't come up with any desired solution yet...
My troubleshooting guesses are as follows:
Pictures in the picturebox take up resources and are not cleared when the program aborts
The background worker is still busy
Here is my code, thanks in advance!
Elements
Picturebox called videobox
Button called btnOpenCamera_Click
Button called btnCloseCamera_Click
Label called backgroundworkerStatus
Variables
Global
Mat videoFrame;
VideoCapture video(0);
Private
bool flag_cameraStatus = true;
bool flag_backgroundworkerStatus = false;
Functions
private: System::Void MainForm_FormClosing
// this event is triggered when application is closing
video.release();
/* Exception is thrown here "System.StackOverflowException" */
backgroundWorker->CancelAsync();
if (backgroundWorker->IsBusy == false) {
checkBackgroundworkerStatus();
this->Close();
}
else {
MessageBox::Show("Backgroundworker not aborted");
e->Cancel = true;
}
private: System::Void btnOpenCamera_Click
// this is a button for turning on camera
flag_cameraStatus = true;
backgroundWorker->RunWorkerAsync();
private: System::Void btnCloseCamera_Click
// this is a button for turning off camera
flag_cameraStatus = false;
videoBox->Image = nullptr;
int openCamera()
// this function runs the camera capture loop (called from the background worker)
if (!video.isOpened()) {
MessageBox::Show("Cannot Find Camera!");
return -1;
}
while (flag_cameraStatus) {
video >> videoFrame;
if (videoFrame.empty()) {
break;
}
/* These lines of code converts Opencv mat to bitmap for picturebox to display */
System::Drawing::Graphics^ graphics = videoBox->CreateGraphics();
System::IntPtr ptr(videoFrame.ptr());
System::Drawing::Bitmap^ b = gcnew System::Drawing::Bitmap(videoFrame.cols, videoFrame.rows, videoFrame.step, System::Drawing::Imaging::PixelFormat::Format24bppRgb, ptr);
System::Drawing::RectangleF rect(0, 0, videoBox->Width, videoBox->Height);
graphics->DrawImage(b, rect);
/* Was wondering if I need these lines of code below */
/* delete graphics; */
if (flag_cameraStatus == false) {
videoBox->Image = nullptr;
return 0;
}
void checkBackgroundworkerStatus()
// this function is used to check if backgroundworker is busy
// it will identify the status via the backcolor of label backgroundworkerStatus
if(backgroundWorker->IsBusy==true)
backgroundworkerStatus->BackColor = Color::Green;
else
backgroundworkerStatus->BackColor = SystemColors::Control;
private: System::Void backgroundWorker_DoWork
// this event is triggered when background worker is working
if (backgroundWorker->CancellationPending == false) {
checkBackgroundworkerStatus();
openCamera();
}
else {
this->backgroundWorker->CancelAsync();
}
private: System::Void backgroundWorker_RunWorkerCompleted
// this event is triggered when background worker is done working
checkBackgroundworkerStatus();
Some Last Notes...
I did set my
WorkerSupportsCancellation = true
WorkerReportsProgress = true
To solve this problem, add a delay to DoWork:
private: System::Void backgroundWorker1_DoWork
using namespace System::Threading;
/* your DoWork tasks */
Thread::Sleep(50);
Note
Thread::Sleep counts in milliseconds; you can change the number to suit your camera's fps.
My laptop webcam is approx. 15 fps, which is 1000/15 ≈ 66.7 ms per frame; in theory the sleep time should be >= this ms/frame, but I have no problems running it with a delay of only 50 ms.

EmguCV 3.1 Capture.QueryFrame returning error intermittently

I am using EmguCV to create a capture from a video file stored on disk. I set the capture property for frame position and then perform a QueryFrame. On certain frames from the video, when I go to process the Mat further I get the error '{"OpenCV: Unrecognized or unsupported array type"}'. This doesn't happen on all frames of the video but when I run it for the same video it happens for the same frames in the video. If I save the Mat to disk the image looks perfectly fine and saves without error. Here is the code for loading and processing the image:
Capture cap = new Capture(movieLocation);
int framePos = 0;
while (reading)
{
cap.SetCaptureProperty(CapProp.PosFrames, framePos);
using (var frame = cap.QueryFrame())
{
if (frame != null)
{
try
{
var fm = Rotate(frame); // Works fine
// Other Processing including classifier.DetectMultiScale -- Error occurs here
frameMap.Add(framePos, r);
}
catch (Exception ex)
{
var s = ""; // Done to just see the error
}
framePos = framePos + 2;
}
else
{
reading = false;
}
}
}
Line of code which throws exception in further processing
var r = _classifier.DetectMultiScale(matIn, 1.1, 2, new Size(200, 200), new Size(375, 375));
As I said, this does not fail for every frame of the video.
I'm trying to solve this because sometimes it skips 1 frame but at other times it will skip whole blocks of frames which is causing me to miss important events in the video.
After a bit more working on it, I figured out that the Mat had a ROI set on it before going to the cascade classifier. In the instances where the mat was failing the ROI was set to 0 height and 0 width. This caused the issue.
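For anyone hitting the same error: the practical fix is to validate the Mat's size before handing it to the cascade classifier. A sketch of the guard in OpenCV C++ (EmguCV wraps the same API; the helper name here is made up):

#include <opencv2/core.hpp>
#include <opencv2/objdetect.hpp>
#include <vector>

// Hypothetical guard: skip frames whose ROI has collapsed to zero size,
// which is what produced the "unsupported array type" error above.
std::vector<cv::Rect> detectSafe(cv::CascadeClassifier& classifier, const cv::Mat& matIn)
{
    std::vector<cv::Rect> hits;
    if (matIn.empty())                 // covers 0x0 ROIs and unallocated Mats
        return hits;
    classifier.detectMultiScale(matIn, hits, 1.1, 2, 0,
                                cv::Size(200, 200), cv::Size(375, 375));
    return hits;
}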

How to Flip FaceOSC in Processing 3.2.1

I am new to Processing and am now trying to use FaceOSC. Everything already works, but the game I made is hard to play when the view is not mirrored. So I want to flip the data that FaceOSC sends to Processing to create the video.
I'm not sure if FaceOSC sends the video itself; I've tried flipping it like a video, but it doesn't work. I also tried flipping it like an image, and the canvas, but it still doesn't work. Or maybe I did it wrong. Please HELP!
//XXXXXXX// This is some of my code.
import oscP5.*;
import codeanticode.syphon.*;
OscP5 oscP5;
SyphonClient client;
PGraphics canvas;
boolean found;
PVector[] meshPoints;
void setup() {
size(640, 480, P3D);
frameRate(30);
initMesh();
oscP5 = new OscP5(this, 8338);
// USE THESE 2 EVENTS TO DRAW THE
// FULL FACE MESH:
oscP5.plug(this, "found", "/found");
oscP5.plug(this, "loadMesh", "/raw");
// plugin for mouth
oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
// initialize the syphon client with the name of the server
client = new SyphonClient(this, "FaceOSC");
// prep the PGraphics object to receive the camera image
canvas = createGraphics(640, 480, P3D);
}
void draw() {
background(0);
stroke(255);
// flip like a video here, does not work
/* pushMatrix();
translate(canvas.width, 0);
scale(-1,1);
image(canvas, -canvas.width, 0, width, height);
popMatrix(); */
image(canvas, 0, 0, width, height);
if (found) {
fill(100);
drawFeature(faceOutline);
drawFeature(leftEyebrow);
drawFeature(rightEyebrow);
drawFeature(nosePart1);
drawFeature(nosePart2);
drawFeature(leftEye);
drawFeature(rightEye);
drawFeature(mouthPart1);
drawFeature(mouthPart2);
drawFeature(mouthPart3);
drawFeature(mouthPart4);
drawFeature(mouthPart5);
}
}
//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
void drawFeature(int[] featurePointList) {
for (int i = 0; i < featurePointList.length; i++) {
PVector meshVertex = meshPoints[featurePointList[i]];
if (i > 0) {
PVector prevMeshVertex = meshPoints[featurePointList[i-1]];
line(meshVertex.x, meshVertex.y, prevMeshVertex.x, prevMeshVertex.y);
}
ellipse(meshVertex.x, meshVertex.y, 3, 3);
}
}
//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
public void found(int i) {
// println("found: " + i); // 1 == found, 0 == not found
found = i == 1;
}
//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The scale() and translate() snippet you're trying to use makes sense, but it looks like you're using it in the wrong place. I'm not sure what canvas should do, but I'm guessing the face features drawn by the drawFeature() calls are what you want to mirror. If so, you should place those calls between pushMatrix() and popMatrix(), right after the scale().
I would try something like this in draw():
void draw() {
background(0);
stroke(255);
//flip horizontal
pushMatrix();
translate(width, 0);
scale(-1,1);
if (found) {
fill(100);
drawFeature(faceOutline);
drawFeature(leftEyebrow);
drawFeature(rightEyebrow);
drawFeature(nosePart1);
drawFeature(nosePart2);
drawFeature(leftEye);
drawFeature(rightEye);
drawFeature(mouthPart1);
drawFeature(mouthPart2);
drawFeature(mouthPart3);
drawFeature(mouthPart4);
drawFeature(mouthPart5);
}
popMatrix();
}
The push/pop matrix calls isolate the coordinate space.
The coordinate system origin (0,0) is the top left corner: this is why everything is translated by the width before scaling x by -1. Because the pivot is not at the centre, simply mirroring won't leave the content in the same place.
For more details check out the Processing Transform2D tutorial.
Here's a basic example:
boolean mirror;
void setup(){
size(640,480);
}
void draw(){
if(mirror){
pushMatrix();
//translate, otherwise mirrored content will be off screen (pivot is at top left corner not centre)
translate(width,0);
//scale x by -1 to mirror
scale(-1,1);
//draw mirrored content
drawStuff();
popMatrix();
}else{
drawStuff();
}
}
//this could be the face preview
void drawStuff(){
background(0);
triangle(0,0,width,0,0,height);
text("press m to toggle mirroring",450,470);
}
void keyPressed(){
if(key == 'm') mirror = !mirror;
}
Another option is to mirror each coordinate, but in your case it would be a lot of effort when scale(-1,1) will do the trick. For reference, to mirror a coordinate you simply subtract the current value from the largest value it can have:
void setup(){
size(640,480);
background(255);
}
void draw(){
ellipse(mouseX,mouseY,30,30);
//subtract current value(mouseX in this case) from the largest value it can have (width in this case)
ellipse(width-mouseX,mouseY,30,30);
}
You can run these examples right here:
var mirror;
function setup(){
createCanvas(640,225);
fill(255);
}
function draw(){
if(mirror){
push();
//translate, otherwise mirrored content will be off screen (pivot is at top left corner not centre)
translate(width,0);
//scale x by -1 to mirror
scale(-1,1);
//draw mirrored content
drawStuff();
pop();
}else{
drawStuff();
}
}
//this could be the face preview
function drawStuff(){
background(0);
triangle(0,0,width,0,0,height);
text("press m to toggle mirroring",450,470);
}
function keyPressed(){
if(key == 'M') mirror = !mirror;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.5.4/p5.min.js"></script>
function setup(){
createCanvas(640,225);
background(0);
fill(0);
stroke(255);
}
function draw(){
ellipse(mouseX,mouseY,30,30);
//subtract current value(mouseX in this case) from the largest value it can have (width in this case)
ellipse(width-mouseX,mouseY,30,30);
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.5.4/p5.min.js"></script>

how can i draw lines using mouseclick in opencv in a webcam frame?

I want to draw a line using mouse events in OpenCV in a webcam frame. I also want to erase it, just like the eraser in MS Paint. How can I do it? I don't have much of an idea about it, but I have this scrambled pseudocode in my head, which may be completely wrong, but I will write it down anyway. I would like to know how to implement it in C++.
So, I will have three or four mouse events:
event 1: mouse left-button up -- this will be used to start the drawing
event 2: mouse move -- this will be used to move the mouse to draw
event 3: mouse left-button down -- this will be used to stop the drawing
event 4: mouse double click -- this event I can use to erase the drawing
I will also have a draw function for a line, such as line(Mat image, Point(startx, starty), Point(endx, endy), Scalar(0, 0, 255), 1);
Now, I don't know how to implement this in code form. I tried a lot but I get wrong results. I have a sincere request: please suggest code in the Mat format, not the IplImage format. Thanks.
Please find working code below with inline explanatory comments, using Mat ;)
Let me know in case of any problem.
PS: In the main function, I have changed the default cam id to 1 for my setup; you should set whatever suits your PC, probably 0. Good luck.
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
class WebCamPaint
{
public:
int cam_id;
std::string win_name;
cv::VideoCapture webCam;
cv::Size frame_size;
cv::Mat cam_frame, drawing_canvas;
cv::Point current_pointer, last_pointer;
cv::Scalar erase_color, paint_color;
int pointer_size;
//! Constructor to initialize basic members to defaults
WebCamPaint()
{
cam_id = 0;
pointer_size = 5;
win_name = std::string("CamView");
current_pointer = last_pointer = cv::Point(0, 0);
erase_color = cv::Scalar(0, 0, 0);
paint_color = cv::Scalar(250, 10, 10);
}
//! init function is required to set some members in case default members needed to change.
bool init()
{
//! Opening cam with specified cam id
webCam.open(cam_id);
//! Check if problem opening video
if (!webCam.isOpened())
{
return false;
}
//! Reading single frame and extracting properties
webCam >> cam_frame;
//! Check if problem reading video
if (cam_frame.empty())
{
return false;
}
frame_size = cam_frame.size();
drawing_canvas = cv::Mat(frame_size, CV_8UC3);
//! Creating Activity / Interface window
cv::namedWindow(win_name);
cv::imshow(win_name, cam_frame);
//! Resetting drawing canvas
drawing_canvas = erase_color;
//! initialization went successful ;)
return true;
}
//! This function deals with all processing, drawing and displaying, i.e. the main UI loop
void startAcivity()
{
//! Keep doing until user presses "Esc" from Keyboard, wait for 20ms for user input
for (char user_input = cv::waitKey(20); user_input != 27; user_input = cv::waitKey(20))
{
webCam >> cam_frame; //Read a frame from webcam
cam_frame |= drawing_canvas; //Merge with the actual drawing canvas or drawing pad; try a different merge operation in case you want a different or solid effect
cv::imshow(win_name, cam_frame); //Display the image to user
//! Change size of pointer using keyboard + / -, don't they sound fun ;)
if (user_input == '+' && pointer_size < 25)
{
pointer_size++;
}
else if (user_input == '-' && pointer_size > 1)
{
pointer_size--;
}
}
}
//! Our function that should be registered in main to opencv Mouse Event Callback
static void onMouseCallback(int event, int x, int y, int flags, void* userdata)
{
/* NOTE: As it will be registered as mouse callback function, so this function will be called if anything happens with mouse
* event : mouse button event
* x, y : position of mouse-pointer relative to the window
* flags : current status of mouse button ie if left / right / middle button is down
* userdata: pointer to any data that can be supplied at the time of setting the callback,
* we are using here to tell this static function about the this / object pointer at which it should operate
*/
WebCamPaint *object = (WebCamPaint*)userdata;
object->last_pointer = object->current_pointer;
object->current_pointer = cv::Point(x, y);
//! Drawing a line on the drawing canvas while the left button is down
if (event == cv::EVENT_LBUTTONDOWN || (flags & cv::EVENT_FLAG_LBUTTON))
{
cv::line(object->drawing_canvas, object->last_pointer, object->current_pointer, object->paint_color, object->pointer_size);
}
//! Erasing (drawing in the erase color) while the right button is down
if (event == cv::EVENT_RBUTTONDOWN || (flags & cv::EVENT_FLAG_RBUTTON))
{
cv::line(object->drawing_canvas, object->last_pointer, object->current_pointer, object->erase_color, object->pointer_size);
}
}
};
int main(int argc, char *argv[])
{
WebCamPaint myCam;
myCam.cam_id = 1;
if (!myCam.init())
return -1; // bail out if the camera could not be opened or read
cv::setMouseCallback(myCam.win_name, WebCamPaint::onMouseCallback, &myCam);
myCam.startAcivity();
return 0;
}

EZAudio: How do you separate the buffer size from the FFT window size (desire higher frequency-bin resolution)?

https://github.com/syedhali/EZAudio
I've been having success using this audio library, but now I'd like to increase the resolution of the microphone data that's read in, so that the FFT resolution, or frequency-bin size, goes down to 10 Hz. To do that, I need a buffer size of 8820 instead of 512. Are the microphone buffer size and the FFT windowing size separable? I can't see a way to separate them.
How do I set up the audio stream description, so that it can calculate the FFT with a larger window?
Any help would be much appreciated.
The FFT size and the audio buffer size should be completely independent. You can just save multiple audio input buffers (perhaps in a circular FIFO or queue) without processing them until you have enough samples for your desired FFT length.
Saving audio buffers this way also allows you to FFT overlapped frames for more time resolution.
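As a sketch of that accumulation idea (C++ here since the pattern is language-neutral; the class and method names are invented, not an EZAudio API): append each incoming device buffer to a FIFO and pop a full FFT frame once enough samples are queued. Advancing by a hop smaller than the FFT length yields the overlapped frames mentioned above.

#include <deque>
#include <vector>
#include <cstddef>

// Hypothetical accumulator decoupling the device buffer size from the FFT size.
class FftAccumulator {
public:
    FftAccumulator(std::size_t fftLen, std::size_t hop) : fftLen_(fftLen), hop_(hop) {}

    // Called from the audio callback with each device-sized buffer.
    void push(const float* samples, std::size_t count) {
        fifo_.insert(fifo_.end(), samples, samples + count);
    }

    // Fills `frame` with fftLen samples when available; advances by `hop`.
    bool popFrame(std::vector<float>& frame) {
        if (fifo_.size() < fftLen_) return false;
        frame.assign(fifo_.begin(), fifo_.begin() + fftLen_);
        fifo_.erase(fifo_.begin(), fifo_.begin() + hop_); // hop < fftLen => overlapped frames
        return true;
    }

private:
    std::size_t fftLen_, hop_;
    std::deque<float> fifo_;
};

push() would be called from the microphone callback, and popFrame() immediately before each FFT.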
Having browsed the source of the linked project, it appears that the audio callback passes a buffer size that is the preferred buffer size of the microphone device. I would recommend you buffer up the desired number of samples before calling the FFT. The following code is modified from FFTViewController.m in EZAudioFFTExample:
#pragma mark - EZMicrophoneDelegate
-(void) microphone:(EZMicrophone *)microphone
hasAudioReceived:(float **)buffer
withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels {
dispatch_async(dispatch_get_main_queue(), ^{
// Update time domain plot
[self.audioPlotTime updateBuffer:buffer[0]
withBufferSize:bufferSize];
// Setup the FFT if it's not already setup
// (note: it must be sized for FFTLEN, the analysis window, not the device bufferSize)
if( !_isFFTSetup ){
[self createFFTWithBufferSize:FFTLEN withAudioData:_fftBuf];
_isFFTSetup = YES;
}
int samplesRemaining = bufferSize;
while (samplesRemaining > 0)
{
// copy only as much as still fits in the FFT buffer
int samplesToCopy = MIN(samplesRemaining, FFTLEN - _fftBufIndex);
memcpy(_fftBuf + _fftBufIndex, buffer[0] + (bufferSize - samplesRemaining), samplesToCopy * sizeof(float));
_fftBufIndex += samplesToCopy;
samplesRemaining -= samplesToCopy;
if (_fftBufIndex == FFTLEN)
{
_fftBufIndex = 0;
[self updateFFTWithBufferSize:FFTLEN withAudioData:_fftBuf];
}
}
});
}
In the modified program, FFTLEN is a value that you define, _fftBuf is an array of floats that you allocate (it must hold FFTLEN elements), and _fftBufIndex is an integer that tracks the write position into the array.
On a separate note, I would recommend you make a copy of the buffer parameter before calling the async delegate. The reason I say this is because looking at the source for EZMicrophone it looks like it's recycling the buffer so you'll have a race condition.
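A sketch of that copy-before-dispatch point (again C++ pseudocode of the pattern; dispatchAsync stands in for dispatch_async and is a made-up parameter): take the deep copy synchronously, so the async work reads a stable snapshot while the driver recycles its own buffer.

#include <vector>
#include <functional>
#include <cstddef>

// Hypothetical callback wrapper illustrating the snapshot idea.
void onAudio(const float* buffer, std::size_t n,
             const std::function<void(std::vector<float>)>& dispatchAsync)
{
    std::vector<float> snapshot(buffer, buffer + n); // copied before returning to the driver
    dispatchAsync(std::move(snapshot));              // async consumer owns its own data
}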
Thanks Jaket for the suggestion. Buffering is the way to go, and here is my working implementation of that same function, now with an adjustable FFT window:
-(void)microphone:(EZMicrophone *)microphone
hasAudioReceived:(float **)buffer
withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels {
dispatch_async(dispatch_get_main_queue(),^{
[self.audioPlot updateBuffer:buffer[0] withBufferSize:bufferSize];
// Decibel Calculation.
float one = 1.0;
float meanVal = 0.0;
float tiny = 0.1;
vDSP_vsq(buffer[0], 1, buffer[0], 1, bufferSize);
vDSP_meanv(buffer[0], 1, &meanVal, bufferSize);
vDSP_vdbcon(&meanVal, 1, &one, &meanVal, 1, 1, 0);
// Exponential moving average to dB level to only get continous sounds.
float currentdb = 1.0 - (fabs(meanVal)/100);
if (lastdbValue == INFINITY || lastdbValue == -INFINITY || isnan(lastdbValue)) {
lastdbValue = 0.0;
}
dbValue = ((1.0 - tiny)*lastdbValue) + tiny*currentdb;
lastdbValue = dbValue;
// NSLog(@"dbval: %f", dbValue);
//
// Setup the FFT if it's not already setup
int samplestoCopy = fmin(bufferSize, FFTLEN - _fftBufIndex);
for ( size_t i = 0; i < samplestoCopy; i++ ) {
_fftBuf[_fftBufIndex+i] = buffer[0][i];
}
_fftBufIndex += samplestoCopy;
_samplesRemaining -= samplestoCopy;
if (_fftBufIndex == FFTLEN) {
if( !_isFFTSetup ){
[self createFFTWithBufferSize:FFTLEN withAudioData:_fftBuf];
_isFFTSetup = YES;
}
[self updateFFTWithBufferSize:FFTLEN withAudioData:_fftBuf];
_fftBufIndex = 0;
_samplesRemaining = FFTLEN;
}
});
}
