I am developing an iOS app that uses OpenCV to detect faces in real time from the camera.
On iOS 7 devices it works without any problem. However, on iOS 8 devices it starts detecting (or trying to detect) faces, and after 4-5 seconds the frame rate drops from 30 FPS to 20 FPS (on an iPhone 6; the exact rates vary by device).
I am using:
OpenCV 2.4.9
CvVideoCamera to get every frame from the camera.
Cascade Classifier for face detection.
My ViewController.mm has the following code to set up the camera and process each image for face detection.
- (void)viewDidLoad {
    [super viewDidLoad];

    // Init camera and configuration
    self.videoCamera = [[CvVideoCamera alloc] initWithParentView:self.imageVideoView];
    self.videoCamera.delegate = self;
    self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionFront;
    self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPresetMedium;
    self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
    self.videoCamera.defaultFPS = 30;
    self.videoCamera.grayscaleMode = NO;

    // Load the cascade model for face detection
    NSString *cascadeModel = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_alt2" ofType:@"xml"];
    const char *cascadeModel_path = [cascadeModel fileSystemRepresentation];
    cascadeLoad = face_cascade.load(cascadeModel_path);

    // Start the camera
    [self.videoCamera start];

    // Init the frame count to 0 and start a timer that refreshes the FPS label every second
    totalFrames = 0;
    [NSTimer scheduledTimerWithTimeInterval:1.0 target:self selector:@selector(computeFPS) userInfo:nil repeats:YES];
}
- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
- (void)processImage:(Mat&)image {
    try {
        if (cascadeLoad && !image.empty()) {
            // OpenCV face detection on a downsampled frame
            cv::Mat frameH;
            pyrDown(image, frameH);
            face_cascade.detectMultiScale(frameH, faces, 1.2, 2, 0, cv::Size(50, 50));

            // Draw an ellipse on each detected face
            for (size_t i = 0; i < faces.size(); i++) {
                cv::Point center(faces[i].x + faces[i].width*0.5, faces[i].y + faces[i].height*0.5);
                ellipse(image, center, cv::Size(faces[i].width*0.5, faces[i].height*0.5), 0, 0, 360, Scalar(255, 0, 255), 4, 8, 0);
            }
        }
    } catch (const cv::Exception& e) {
        NSLog(@"CV EXCEPTION!");
    }

    // Count one more frame
    totalFrames++;
}
// Called every second to update the FPS label and reset the frame count
- (void)computeFPS {
    [self.lblFPS setText:[NSString stringWithFormat:@"%d fps", totalFrames]];
    totalFrames = 0;
}
That's all my code.
With the following test project, the image is not displayed in portrait (I have not figured out why yet), but the problem with the decreasing FPS still occurs.
If you want to test it, the link to my complete test project is here.
Has anyone faced this problem? Any idea what might be happening or what I am doing wrong?
Thanks a lot!
PS: I have also read that there are other face detectors for iOS, like CIDetector, but I am using OpenCV because I want to do some further processing on the faces in OpenCV.
I want to display some CMSampleBuffers with an AVSampleBufferDisplayLayer, but it freezes after showing the first sample.
I get the sample buffers from the AVCaptureVideoDataOutputSampleBufferDelegate:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CFRetain(sampleBuffer);
    [self imageToBuffer:sampleBuffer];
    CFRelease(sampleBuffer);
}
and put them into a vector:
- (void)imageToBuffer:(CMSampleBufferRef)source {
    // buffers is defined as: std::vector<CMSampleBufferRef> buffers;
    CMSampleBufferRef newRef;
    CMSampleBufferCreateCopy(kCFAllocatorDefault, source, &newRef);
    buffers.push_back(newRef);
}
Then I try to show them via an AVSampleBufferDisplayLayer (in another view controller):
AVSampleBufferDisplayLayer *displayLayer = [[AVSampleBufferDisplayLayer alloc] init];
displayLayer.bounds = self.view.bounds;
displayLayer.position = CGPointMake(CGRectGetMidX(self.displayOnMe.bounds), CGRectGetMidY(self.displayOnMe.bounds));
displayLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
displayLayer.backgroundColor = [[UIColor greenColor] CGColor];
[self.view.layer addSublayer:displayLayer];
self.view.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;

dispatch_queue_t queue = dispatch_queue_create("My queue", DISPATCH_QUEUE_SERIAL);

[displayLayer setNeedsDisplay];
[displayLayer requestMediaDataWhenReadyOnQueue:queue
                                    usingBlock:^{
    while ([displayLayer isReadyForMoreMediaData]) {
        if (samplesKey < buffers.size()) {
            CMSampleBufferRef buf = buffers[samplesKey];
            [displayLayer enqueueSampleBuffer:buf];
            samplesKey++;
        } else {
            [displayLayer stopRequestingMediaData];
            break;
        }
    }
}];
But it shows the first sample, then freezes and does nothing more.
And my video data output settings are as follows:
// Set up our output
self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
dispatch_queue_t queue = dispatch_queue_create("VideoQueue", DISPATCH_QUEUE_SERIAL);
[_videoDataOutput setSampleBufferDelegate:self queue:queue];
[_videoDataOutput setVideoSettings:[NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
    nil]];
I came across this problem in the same context, trying to take the output from AVCaptureVideoDataOutput and display it in an AVSampleBufferDisplayLayer.
If your frames come out in display order, the fix is very easy: just set the display-immediately flag on the CMSampleBufferRef.
Get the sample buffer returned by the delegate and then...
CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, YES);
CFMutableDictionaryRef dict = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);
If your frames come out in encoder order (not display order), then the timestamps on the CMSampleBuffer need to be zero-biased and restamped so that the first frame's timestamp is equal to time 0.
double pts = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer));

// ptsStart is equal to the first frame's presentationTimeStamp, so playback starts from time 0.
CMTime presentationTimeStamp = CMTimeMake((pts - ptsStart) * 1000000, 1000000);

CMSampleBufferSetOutputPresentationTimeStamp(sampleBuffer, presentationTimeStamp);
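Pieced together, a minimal sketch of how this restamping might sit in the capture delegate from the question (ptsStart and hasPTSStart are assumed ivars, not part of the original answer):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    double pts = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer));
    if (!hasPTSStart) {      // assumed BOOL ivar
        ptsStart = pts;      // assumed double ivar, PTS of the first frame
        hasPTSStart = YES;
    }
    // Zero-bias so the first frame plays at time 0.
    CMTime restamped = CMTimeMake((pts - ptsStart) * 1000000, 1000000);
    CMSampleBufferSetOutputPresentationTimeStamp(sampleBuffer, restamped);
    [self imageToBuffer:sampleBuffer]; // copy and store, as in the question
}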
Update:
I ran into a situation where some video still wasn't playing smoothly when I used the zero-bias method, so I investigated further. The correct answer seems to be to use the PTS of the first frame you intend to play.
My answer is here, but I will post it here, too.
Set rate at which AVSampleBufferDisplayLayer renders sample buffers
The timebase needs to be set to the presentation timestamp (PTS) of the first frame you intend to decode. I was indexing the PTS of the first frame to 0 by subtracting the initial PTS from all subsequent PTSs and setting the timebase to 0. For whatever reason, that didn't work with certain video.
You want something like this (called before a call to decode):
CMTimebaseRef controlTimebase;
CMTimebaseCreateWithMasterClock( CFAllocatorGetDefault(), CMClockGetHostTimeClock(), &controlTimebase );
displayLayer.controlTimebase = controlTimebase;
// Set the timebase to the initial pts here
CMTimebaseSetTime(displayLayer.controlTimebase, CMTimeMake(ptsInitial, 1));
CMTimebaseSetRate(displayLayer.controlTimebase, 1.0);
Set the PTS for the CMSampleBuffer...
CMSampleBufferSetOutputPresentationTimeStamp(sampleBuffer, presentationTimeStamp);
And maybe make sure the display-immediately flag isn't set:
CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanFalse);
This is covered very briefly in WWDC 2014 Session 513.
I have a graphics view that is drawn in the -(void)drawRect:(CGRect)rect method. It is a somewhat complex graphic, with dates on the X axis and times on the Y axis, so every redraw takes 2-3 seconds to produce the final image. Each round dot may be red, green, or yellow; its colour depends on the data read from the database.
My question is: how can I draw this view using multiple threads, or otherwise make it faster?
For example, should I draw the X axis and Y axis in one thread and draw each day's graphic in a separate thread (using 30 threads), or draw each hour across the 30 days in a separate thread (using 5 threads)?
I have tried the following approach:
for (int i = 0; i < 31; i++) {
    NSString *threadStr = [NSString stringWithFormat:@"%i", i];
    dispatch_queue_t queue1 = dispatch_queue_create([threadStr UTF8String], NULL);
    dispatch_async(queue1, ^{
        // Draw my graphics for one day
        .....
    });
}
But sometimes these threads read the database at the same time and some of them cannot get their data properly, so the final graphic is incomplete.
Could anyone give me some suggestions?
Thank you very much.
I am trying to find out why it takes 2-3 seconds when I draw on the main thread.
Is it because of reading the SQLite3 database?
The procedure for drawing this graphic is:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    // 1. Draw the X axis with date labels.
    // 2. Draw the Y axis with time labels.
    NSArray *timeArray = @[@"08:00", @"11:00", @"14:00", @"17:00", @"20:00"];
    for (NSString *timeStr in timeArray) {
        for (int i = 0; i < 31; i++) {
            NSString *dateTimeStr = [dateStr stringByAppendingString:timeStr];
            int a = [DataManager getResult:dateTimeStr];
            UIColor *roundColor;
            if (a == 1) {
                roundColor = [UIColor colorWithRed:229/255.0 green:0/255.0 blue:145/255.0 alpha:1.0];
            } else if (a == 2) {
                roundColor = [UIColor colorWithRed:0/255.0 green:164/255.0 blue:229/255.0 alpha:1.0];
            } else if (a == 3) {
                roundColor = [UIColor colorWithRed:229/255.0 green:221/255.0 blue:0/255.0 alpha:colorAlpha];
            }
            const CGFloat *components = CGColorGetComponents(roundColor.CGColor);
            CGContextSetRGBFillColor(context, components[0], components[1], components[2], components[3]);
            CGContextFillEllipseInRect(context, CGRectMake(roundXnow, roundYnow, roundR, roundR));
        }
    }
}
I suspect the delay occurs while reading the SQLite3 database, because I know there are a lot of records in it.
Drawing here is a very light operation; most likely the timing issues you're facing come from the calculations (or fetching of data) or whatever else you do before actually drawing each dot. So use Instruments, the Time Profiler in particular. It will show you which operations take longest (look at the peaks in the graph displayed while your app is running).
This will give you the clue.
Besides, as was correctly mentioned here, all UI work should be performed on the main thread.
But the pre-calculations and fetches that you probably do on your data: that's where you should look for multithreaded solutions, if that's possible at all. A sketch of this idea follows.
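For illustration only, a minimal sketch of that pattern, assuming a hypothetical batch-fetch method on DataManager and a cache property that drawRect: reads from:

// Hypothetical sketch: reloadGraphData fetches everything off the main thread,
// then triggers a redraw on the main thread from the cached results.
- (void)reloadGraphData {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSArray *results = [DataManager fetchAllResults]; // assumed batch-fetch method
        dispatch_async(dispatch_get_main_queue(), ^{
            self.cachedResults = results; // drawRect: would read this cache, not the database
            [self setNeedsDisplay];
        });
    });
}

This keeps all drawing on the main thread while moving the slow database work off it, and it also serializes database access, which avoids the concurrent-read problem described above.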
I want to make a slider that changes the saturation of the image in an image view.
I'm currently using OpenCV. I found some code on the web and tried it. It works, but in a somewhat strange way: there is a white cup in the image, but its colour runs through the whole rainbow regardless of the slider value (unless the image is made totally grayscale).
- (IBAction)stSlider:(id)sender {
    float value = stSlider.value;
    UIImage *image = [UIImage imageNamed:@"sushi.jpg"];
    cv::Mat mat = [self cvMatFromUIImage:image];

    cv::cvtColor(mat, mat, CV_RGB2HSV);
    for (int i = 0; i < mat.rows; i++) {
        for (int j = 0; j < mat.cols; j++) {
            int idx = 1; // channel 1 is saturation in HSV
            mat.at<cv::Vec3b>(i,j)[idx] = value;
        }
    }
    cv::cvtColor(mat, mat, CV_HSV2RGB);

    imageView.image = [self UIImageFromCVMat:mat];
}
This is the code I used.
Please tell me which part I have to change to make it work right.
I am having trouble with an app with an OpenGL component crashing on iPad. The app throws a memory warning and crashes, but it doesn't appear to be using that much memory. Am I missing something?
The app is based on the Vuforia augmented reality system (it borrows heavily from the ImageTargets sample). I have about 40 different models I need to include in my app, so in the interest of memory conservation I am loading the objects (and rendering textures, etc.) dynamically as I need them, copying the UIScrollView lazy-loading idea. The three 4 MB allocations are the textures I have loaded into memory, ready for when the user selects a different model to display.
Anything odd in here?
I don't know much at all about OpenGL (part of the reason why I chose the Vuforia engine). Is there anything in the screenshot below that should concern me? Note that Vuforia's ImageTargets sample app also shows Uninitialized Texture Data (about one per frame), so I don't think that is the problem.
Any help would be appreciated!!
Here is the code that generates the 3D objects (in EAGLView):
// Load the textures for use by OpenGL
- (void)loadATexture:(int)texNumber {
    if (texNumber >= 0 && texNumber < [tempTextureList count]) {
        currentlyChangingTextures = YES;
        [textureList removeAllObjects];
        [textureList addObject:[tempTextureList objectAtIndex:texNumber]];

        Texture *tex = [[Texture alloc] init];
        NSString *file = [textureList objectAtIndex:0];
        [tex loadImage:file];
        [textures replaceObjectAtIndex:texNumber withObject:tex];
        [tex release];

        // Remove all old textures except the one we're interested in and the two on either side of the picker.
        for (int i = 0; i < [textures count]; ++i) {
            if (i < targetIndex - 1 || i > targetIndex + 1) {
                [textures replaceObjectAtIndex:i withObject:@""];
            }
        }

        // Render - generate the OpenGL texture objects
        GLuint nID;
        Texture *texture = [textures objectAtIndex:texNumber];
        glGenTextures(1, &nID);
        [texture setTextureID:nID];
        glBindTexture(GL_TEXTURE_2D, nID);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, [texture width], [texture height], 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid*)[texture pngData]);

        // Set up objects using the above textures.
        Object3D *obj3D = [[Object3D alloc] init];
        obj3D.numVertices = rugNumVerts;
        obj3D.vertices = rugVerts;
        obj3D.normals = rugNormals;
        obj3D.texCoords = rugTexCoords;
        obj3D.texture = [textures objectAtIndex:texNumber];
        [objects3D replaceObjectAtIndex:texNumber withObject:obj3D];
        [obj3D release];

        // Remove all objects except the one currently visible and the ones on either side of the picker.
        for (int i = 0; i < [tempTextureList count]; ++i) {
            if (i < targetIndex - 1 || i > targetIndex + 1) {
                Object3D *obj3D = [[Object3D alloc] init];
                [objects3D replaceObjectAtIndex:i withObject:obj3D];
                [obj3D release];
            }
        }

        if (QCAR::GL_20 & qUtils.QCARFlags) {
            [self initShaders];
        }
        currentlyChangingTextures = NO;
    }
}
Here is the code in the Texture object:
- (id)init
{
    self = [super init];
    pngData = NULL;
    return self;
}

- (BOOL)loadImage:(NSString*)filename
{
    BOOL ret = NO;

    // Build the full path of the image file
    NSString *resourcePath = [[NSBundle mainBundle] resourcePath];
    NSString *fullPath = [resourcePath stringByAppendingPathComponent:filename];

    // Create a UIImage with the contents of the file
    UIImage *uiImage = [UIImage imageWithContentsOfFile:fullPath];
    if (uiImage) {
        // Get the inner CGImage from the UIImage wrapper
        CGImageRef cgImage = uiImage.CGImage;

        // Get the image size
        width = CGImageGetWidth(cgImage);
        height = CGImageGetHeight(cgImage);

        // Record the number of channels
        channels = CGImageGetBitsPerPixel(cgImage)/CGImageGetBitsPerComponent(cgImage);

        // Generate a CFData object from the CGImage object (a CFData object represents an area of memory)
        CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));

        // Copy the image data for use by OpenGL
        ret = [self copyImageDataForOpenGL:imageData];
        CFRelease(imageData);
    }

    return ret;
}

- (void)dealloc
{
    if (pngData) {
        delete[] pngData;
    }
    [super dealloc];
}

@end

@implementation Texture (TexturePrivateMethods)

- (BOOL)copyImageDataForOpenGL:(CFDataRef)imageData
{
    if (pngData) {
        delete[] pngData;
    }
    pngData = new unsigned char[width * height * channels];

    const int rowSize = width * channels;
    const unsigned char *pixels = (unsigned char*)CFDataGetBytePtr(imageData);

    // Copy the row data from bottom to top
    for (int i = 0; i < height; ++i) {
        memcpy(pngData + rowSize * i, pixels + rowSize * (height - 1 - i), width * channels);
    }

    return YES;
}
Odds are, you're not seeing the true memory usage of your application. As I explain in this answer, the Allocations instrument hides memory usage from OpenGL ES, so you can't use it to measure the size of your application. Instead, use the Memory Monitor instrument, which I'm betting will show that your application is using far more RAM than you think. This is a common problem people run into when trying to optimize OpenGL ES on iOS using Instruments.
If you're concerned about which objects or resources could be accumulating in memory, you can use the heap shots functionality of the Allocations instrument to identify specific resources that are allocated but never removed when performing repeated tasks within your application. That's how I've tracked down textures and other items that were not being properly deleted.
Seeing some code would help, but I can make some guesses:
I have about 40 different models I need to include in my app, so in the interests of memory conservation I am loading the objects (and rendering textures etc) dynamically in the app as I need them. I tried to copy the UIScrollView lazy loading idea. The three 4mb allocations are the textures I have loaded into memory ready for when the user selects a different model to display.
(...)
This kind of approach is not ideal, and it's most likely the reason for your problems if the memory is not properly deallocated. Eventually you'll run out of memory, and then your process dies if you don't take proper precautions. It's very likely that the engine used has a memory leak that is exposed by your access scheme.
Today's operating systems don't differentiate between RAM and storage. To them it's all just memory, and all address space is backed by the block storage system anyway (whether an actual storage device is attached doesn't matter).
So here's what you should do: instead of read()-ing your models into memory, you should memory-map them (mmap). This tells the OS "this part of storage should be visible in address space", and the OS kernel will do all the necessary transfers when they're due.
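A minimal sketch of that approach (plain C; the function name and path are placeholders, and the caller is responsible for munmap when the model is no longer needed):

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

// Map a model file into the address space instead of copying it into RAM.
// The kernel pages data in on demand and can evict clean pages under pressure.
static void *mapModelFile(const char *path, size_t *outLength) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return NULL;
    struct stat st;
    if (fstat(fd, &st) < 0) { close(fd); return NULL; }
    void *data = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd); // the mapping stays valid after the descriptor is closed
    if (data == MAP_FAILED) return NULL;
    *outLength = (size_t)st.st_size;
    return data; // later: munmap(data, *outLength);
}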
Note that Vuforia's ImageTargets sample app also has Uninitialized Texture Data (about one per frame), so I don't think this is the problem.
This is a strong indicator that OpenGL texture objects don't get properly deleted.
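To illustrate (a sketch only, assuming the Texture class exposes a textureID getter to match the setTextureID: seen above), the old GL texture object could be freed in loadATexture: before a new one is generated:

// Sketch: free the previous OpenGL texture object before glGenTextures
// creates a replacement; otherwise the old texture memory is never reclaimed.
GLuint oldID = [texture textureID]; // assumes a textureID getter exists
if (oldID != 0) {
    glDeleteTextures(1, &oldID);
    [texture setTextureID:0];
}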
Any help would be appreciated!!
My advice: stop programming like it's the 1970s. Today's computers and operating systems work differently. See also http://www.varnish-cache.org/trac/wiki/ArchitectNotes
I am trying to create an app like this one: http://www.youtube.com/watch?v=V9LY8JqKLqE&feature=my_liked_videos&list=LLIeJ9s3lwD-lrqYMU409iAQ
Sadly, I don't know how to mark the location where the object is found.
I was reworking this tutorial: http://aptogo.co.uk/2011/09/face-tracking/
My source code:
I set up the template image in the DemoVideoCaptureViewController.mm file:
- (void)viewDidLoad
{
    [super viewDidLoad];
    UIImage *testImage = [UIImage imageNamed:@"tt2.jpg"];
    tempMat = [testImage CVMat];

    std::vector<cv::KeyPoint> keypoints;
    cv::SurfFeatureDetector surf(250);
    surf.detect(tempMat, keypoints);

    cv::SurfDescriptorExtractor surfDesc;
    surfDesc.compute(tempMat, keypoints, description1);
}
And I try to find the object here:
- (void)processFrame:(cv::Mat &)mat videoRect:(CGRect)rect videoOrientation:(AVCaptureVideoOrientation)videOrientation
{
    cv::FlannBasedMatcher matcher;
    std::vector< std::vector<cv::DMatch> > matches;
    std::vector<cv::DMatch> good_matches;

    cv::SurfFeatureDetector surf2(250);
    std::vector<cv::KeyPoint> kp_image;
    surf2.detect(mat, kp_image);

    cv::SurfDescriptorExtractor surfDesc2;
    surfDesc2.compute(mat, kp_image, des_image);

    if ((des_image.rows > 0) && (description1.rows > 0)) {
        matcher.knnMatch(description1, des_image, matches, 2);
        for (int i = 0; i < MIN(des_image.rows - 1, (int)matches.size()); i++) {
            if ((matches[i][0].distance < 0.6 * (matches[i][1].distance)) && ((int)matches[i].size() <= 2 && (int)matches[i].size() > 0)) {
                good_matches.push_back(matches[i][0]);
            }
        }

        [CATransaction begin];
        [CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];
        // Remove the old layer
        for (CALayer *layer in self.view.layer.sublayers) {
            NSString *layerName = [layer name];
            if ([layerName isEqualToString:@"Layer"])
                [layer setHidden:YES];
        }
        [CATransaction commit];

        if (good_matches.size() >= 4) {
            NSLog(@"Finding");
        }
    }
}
But I don't know how to put a layer on the camera view.
Could someone help me?
The app in the video you posted can be created by following chapter 3 (Marker-less Augmented Reality) of the book "Mastering OpenCV with Practical Computer Vision Projects".
You still have to do some steps yourself, like calculating the homography. And you don't need CATransaction or any other iOS class: CvVideoCamera and cv::line are enough. A rough sketch follows.
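For illustration, a minimal sketch of those remaining steps. It assumes the template keypoints computed in viewDidLoad are kept around in an ivar (here called templateKeypoints) instead of being a local variable:

// Inside processFrame:, after good_matches has been filled.
if (good_matches.size() >= 4) {
    std::vector<cv::Point2f> obj, scene;
    for (size_t i = 0; i < good_matches.size(); i++) {
        obj.push_back(templateKeypoints[good_matches[i].queryIdx].pt); // template image
        scene.push_back(kp_image[good_matches[i].trainIdx].pt);        // camera frame
    }
    cv::Mat H = cv::findHomography(obj, scene, CV_RANSAC);

    // Project the template's corners into the frame and connect them with lines.
    std::vector<cv::Point2f> corners(4), projected(4);
    corners[0] = cv::Point2f(0, 0);
    corners[1] = cv::Point2f(tempMat.cols, 0);
    corners[2] = cv::Point2f(tempMat.cols, tempMat.rows);
    corners[3] = cv::Point2f(0, tempMat.rows);
    cv::perspectiveTransform(corners, projected, H);
    for (int i = 0; i < 4; i++) {
        cv::line(mat, projected[i], projected[(i + 1) % 4], cv::Scalar(0, 255, 0), 4);
    }
}

Since processFrame: receives the frame as cv::Mat &mat, drawing into mat marks the match directly on the camera image, so no extra CALayer is needed.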