I'm trying to create an app like the one in this video: http://www.youtube.com/watch?v=V9LY8JqKLqE&feature=my_liked_videos&list=LLIeJ9s3lwD-lrqYMU409iAQ
but sadly I don't know how to mark the place where the object is found.
I was working from this tutorial: http://aptogo.co.uk/2011/09/face-tracking/
My source code:
I set up the template image in the DemoVideoCaptureViewController.mm file:
- (void)viewDidLoad
{
[super viewDidLoad];
UIImage *testImage = [UIImage imageNamed:@"tt2.jpg"];
tempMat = [testImage CVMat];
std::vector<cv::KeyPoint> keypoints;
cv::SurfFeatureDetector surf (250);
surf.detect(tempMat, keypoints);
cv::SurfDescriptorExtractor surfDesc;
surfDesc.compute(tempMat, keypoints, description1);
}
and I try to find the object here:
- (void)processFrame:(cv::Mat &)mat videoRect:(CGRect)rect videoOrientation:(AVCaptureVideoOrientation)videOrientation
{
cv::FlannBasedMatcher matcher;
std::vector< std::vector<cv::DMatch> > matches;
std::vector<cv::DMatch> good_matches;
cv::SurfFeatureDetector surf2 (250);
std::vector<cv::KeyPoint> kp_image;
surf2.detect(mat, kp_image);
cv::SurfDescriptorExtractor surfDesc2;
surfDesc2.compute(mat, kp_image, des_image);
if ((des_image.rows > 0) && (description1.rows > 0)) {
matcher.knnMatch(description1, des_image, matches, 2);
for (int i = 0; i < MIN(des_image.rows-1, (int) matches.size()); i++) {
if ((matches[i][0].distance < 0.6*(matches[i][1].distance)) && ((int) matches[i].size() <= 2 && (int) matches[i].size() > 0)) {
good_matches.push_back(matches[i][0]);
}
}
[CATransaction begin];
[CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];
// remove old layer
for (CALayer *layer in self.view.layer.sublayers) {
NSString *layerName = [layer name];
if ([layerName isEqualToString:@"Layer"])
[layer setHidden:YES];
}
[CATransaction commit];
if (good_matches.size() >= 4) {
NSLog(#"Finding");
}
}
}
But I don't know how to put a layer on the camera view.
Could someone help me?
The app in the video you posted can be created by following chapter 3 (Marker-less Augmented Reality) of the book "Mastering OpenCV with Practical Computer Vision Projects".
You still have to do some steps, like calculating the homography. And you don't need CATransaction or any other iOS class: CvVideoCamera and cv::line are enough.
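A minimal sketch of that homography step, building on the question's code (it assumes the template keypoints computed in viewDidLoad are kept in an ivar, here called keypoints, alongside description1; obj_corners and scene_corners are just illustrative names):
if (good_matches.size() >= 4) {
    // Collect matched point pairs: query = template (description1), train = current frame (des_image).
    std::vector<cv::Point2f> obj_pts, scene_pts;
    for (size_t i = 0; i < good_matches.size(); i++) {
        obj_pts.push_back(keypoints[good_matches[i].queryIdx].pt);
        scene_pts.push_back(kp_image[good_matches[i].trainIdx].pt);
    }
    // Estimate the homography that maps the template into the frame.
    cv::Mat H = cv::findHomography(obj_pts, scene_pts, CV_RANSAC);
    // Project the template's corners into the frame and draw the outline with cv::line.
    std::vector<cv::Point2f> obj_corners(4), scene_corners(4);
    obj_corners[0] = cv::Point2f(0, 0);
    obj_corners[1] = cv::Point2f(tempMat.cols, 0);
    obj_corners[2] = cv::Point2f(tempMat.cols, tempMat.rows);
    obj_corners[3] = cv::Point2f(0, tempMat.rows);
    cv::perspectiveTransform(obj_corners, scene_corners, H);
    for (int i = 0; i < 4; i++) {
        cv::line(mat, scene_corners[i], scene_corners[(i + 1) % 4], cv::Scalar(0, 255, 0), 4);
    }
}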
I am developing an iOS app that uses OpenCV to detect faces in real time from the camera.
On iOS 7 devices it works without any kind of problem. However, on iOS 8 devices it starts detecting (or trying to detect) faces and after 4-5 seconds the FPS drops from 30 FPS to 20 FPS (in the case of the iPhone 6; the exact FPS varies on other devices).
I am using:
OpenCV 2.4.9
CvVideoCamera to get every frame from the camera.
Cascade Classifier for face detection.
My ViewController.mm has the following code to set up the camera and process the image for face detection.
- (void)viewDidLoad {
[super viewDidLoad];
//Init camera and configuration
self.videoCamera = [[CvVideoCamera alloc] initWithParentView:self.imageVideoView];
self.videoCamera.delegate = self;
self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionFront;
self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPresetMedium;
self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
self.videoCamera.defaultFPS = 30;
self.videoCamera.grayscaleMode = NO;
//Load Cascade Model for Face Detection
NSString* cascadeModel = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_alt2" ofType:@"xml"];
const char* cascadeModel_path = [cascadeModel fileSystemRepresentation];
if(face_cascade.load(cascadeModel_path)){
cascadeLoad = true;
}else{
cascadeLoad = false;
}
//Start camera
[self.videoCamera start];
// Init frames to 0 and start timer to refresh the frames each second
totalFrames = 0;
[NSTimer scheduledTimerWithTimeInterval:1.0 target:self selector:@selector(computeFPS) userInfo:nil repeats:YES];
}
- (void)didReceiveMemoryWarning {
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
- (void) processImage:(Mat&)image {
try{
if(cascadeLoad && !image.empty()){
// Opencv Face Detection
cv::Mat frameH;
pyrDown(image, frameH);
face_cascade.detectMultiScale(frameH, faces, 1.2, 2, 0, cv::Size(50, 50));
//Draw an ellipse on each detected Face
for( size_t i = 0; i < faces.size(); i++ )
{
cv::Point center( faces[i].x + faces[i].width*0.5, faces[i].y + faces[i].height*0.5 );
ellipse( image, center, cv::Size( faces[i].width*0.5, faces[i].height*0.5), 0, 0, 360, Scalar( 255, 0, 255 ), 4, 8, 0 );
}
}
} catch (const cv::Exception &e) {
NSLog(@"CV EXCEPTION!");
}
//Add one frame
totalFrames++;
}
//Function called every second to update the frames and update the FPS label
- (void) computeFPS {
[self.lblFPS setText:[NSString stringWithFormat:@"%d fps", totalFrames]];
totalFrames = 0;
}
That's all my code.
With the following test project the image does not display correctly in portrait (I have not figured out why yet), but the problem with the decreasing FPS still happens.
If you want to test it, the link to my complete test project is here.
Has anyone faced this problem? Any idea what might be happening or what I am doing wrong?
Thanks a lot!
PS: I have also read that there are other face detectors for iOS, like CIDetector, but I am using OpenCV as I want to do some more processing on the faces in OpenCV.
I have a graphic view which is drawn in the -(void)drawRect:(CGRect)rect method. It is a fairly complex graphic with dates on the X axis and times on the Y axis, so every time I draw it, it takes 2-3 seconds to display the final result. Each point may be red, green or yellow; its colour depends on data read from the database.
My question is: how can I draw this view using multiple threads, or otherwise draw it faster?
For example, draw the X axis and Y axis in one thread, and draw each day's graphic in a separate thread (using 30 threads), or draw each hour of the 30 days' graphics in a separate thread (using 5 threads)?
I have tried using the following function:
for(int i = 0; i < 31; i++){
NSString *threadStr = [NSString stringWithFormat:@"%i", i];
dispatch_queue_t queue1 = dispatch_queue_create([threadStr UTF8String], NULL);
dispatch_async(queue1, ^{
//Draw my graphics for one day
.....
});
}
But sometimes these threads read the database at the same time and some of them cannot get the data properly, which leaves the final graphic incomplete.
Could anyone give me some suggestion?
Thank you very much.
I am trying to find out why it takes 2-3 seconds when I draw it on the main thread.
Is it because of reading the SQLite3 database?
The procedure of drawing this graphic is :
- (void)drawRect:(CGRect)rect
{
// 1. draw the X axis with the date labels
// 2. draw the Y axis with the time labels
NSArray *timeArray = @[@"08:00", @"11:00", @"14:00", @"17:00", @"20:00"];
for(NSString *timeStr in timeArray){
for(int i = 0; i < 31; i++){
NSString *DateTimeStr = [dateStr stringByAppendingString:timeStr];
int a = [DataManager getResult: DateTimeStr];
UIColor *RoundColor;
if(a == 1){
RoundColor = [UIColor colorWithRed:229/255.0 green:0/255.0 blue:145/255.0 alpha:1.0];
}
else if(a == 2){
RoundColor = [UIColor colorWithRed:0/255.0 green:164/255.0 blue:229/255.0 alpha:1.0];
}
else if(a == 3){
RoundColor = [UIColor colorWithRed:229/255.0 green:221/255.0 blue:0/255.0 alpha:colorAlpha];
}
const CGFloat *components = CGColorGetComponents(RoundColor.CGColor);
CGContextSetRGBFillColor(context, components[0], components[1], components[2], components[3]);
CGContextFillEllipseInRect(context, CGRectMake(roundXnow,roundYnow,roundR,roundR));
}
}
}
I suspect the delay occurs while I am reading the SQLite3 database, because I know there are a lot of records in my database.
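One quick way to test that hypothesis (a rough sketch; dateStr, timeArray and [DataManager getResult:] are taken from the code above) is to time the database loop on its own, outside of drawRect::
CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
for (NSString *timeStr in timeArray) {
    for (int i = 0; i < 31; i++) {
        NSString *dateTimeStr = [dateStr stringByAppendingString:timeStr];
        (void)[DataManager getResult:dateTimeStr]; // database read only, no drawing
    }
}
NSLog(@"Database reads took %.3f s", CFAbsoluteTimeGetCurrent() - start);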
Drawing here is a very light operation; most likely the timing issues you're facing are related to the calculations (or the fetching of data) or whatever else you're doing before actually drawing the dots. So use Instruments, the Time Profiler in particular. It will show you which operations take longest (look at the spikes in the graph that is displayed while your app is running).
This will give you the clue.
Besides, as was correctly mentioned here, all UI work should be performed on the main thread.
But the pre-calculations and fetches that you probably do with your data - that's where you should look for multithreaded solutions, if it's possible at all.
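A minimal sketch of that split, assuming a hypothetical loadResultsFromDatabase helper that performs the SQLite reads and a cachedResults property that drawRect: then reads instead of querying the database directly:
// Do the slow database work off the main thread...
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSArray *results = [self loadResultsFromDatabase]; // hypothetical helper containing the SQLite queries
    dispatch_async(dispatch_get_main_queue(), ^{
        // ...then hand the data to the view and redraw on the main thread.
        self.cachedResults = results;
        [self setNeedsDisplay];
    });
});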
I'm trying to fetch an image of a PDF page and edit it. Everything works fine, but there is huge memory growth. The profiler says there is no memory leak. It also says that 90% of the memory is allocated in UIGraphicsGetCurrentContext() and UIGraphicsGetImageFromCurrentImageContext(). This code does not run in a loop, so there is no need to wrap it with @autoreleasepool.
if ((pageRotation == 0) || (pageRotation == 180) ||(pageRotation == -180)) {
UIGraphicsBeginImageContextWithOptions(cropBox.size, NO, PAGE_QUALITY);
}
else {
UIGraphicsBeginImageContextWithOptions(
CGSizeMake(cropBox.size.height, cropBox.size.width), NO, PAGE_QUALITY);
}
CGContextRef imageContext = UIGraphicsGetCurrentContext();
[PDFPageRenderer renderPage:_PDFPageRef inContext:imageContext pagePoint:CGPointMake(0, 0)];
UIImage *pageImage = UIGraphicsGetImageFromCurrentImageContext();
[[NSNotificationCenter defaultCenter] postNotificationName:@"PAGE_IMAGE_FETCHED" object:pageImage];
UIGraphicsEndImageContext();
But debugging shows that the memory growth only occurs when I start editing the fetched image. For image editing I use the Leptonica library. For example:
+(void) testAction:(UIImage *) image{
PIX * pix =[self getPixFromUIImage:image];
pixConvertTo8(pix, FALSE);
pixDestroy(&pix);
}
Before pixConvertTo8 the app takes 13 MB, afterwards 50 MB. Obviously the growth depends on the image size. The converting method:
+(PIX *) getPixFromUIImage:(UIImage *) image{
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
UInt8 const* pData = (UInt8*)CFDataGetBytePtr(data);
Pix *myPix = (Pix *) malloc(sizeof(Pix));
CGImageRef myCGImage = [image CGImage];
myPix->w = CGImageGetWidth (myCGImage)-1;
myPix->h = CGImageGetHeight (myCGImage);
myPix->d = CGImageGetBitsPerPixel([image CGImage]) ;
myPix->wpl = CGImageGetBytesPerRow (myCGImage)/4 ;
myPix->data = (l_uint32 *)pData;
myPix->colormap = NULL;
myPix->text="text";
CFRelease(data);
return myPix;
}
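For comparison, here is a minimal sketch (an assumption about where the growth may come from, not a verified fix) that lets Leptonica allocate and own the pixel buffer via pixCreate, and that keeps the new PIX returned by pixConvertTo8 so both can be released with pixDestroy; pixCopyFromUIImage: is just an illustrative name:
+(PIX *) pixCopyFromUIImage:(UIImage *) image{
    CGImageRef cgImage = image.CGImage;
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
    // Let Leptonica allocate the PIX and its data, so pixDestroy() can free everything it owns.
    PIX *pix = pixCreate((l_int32)CGImageGetWidth(cgImage),
                         (l_int32)CGImageGetHeight(cgImage),
                         (l_int32)CGImageGetBitsPerPixel(cgImage));
    // Copy row by row, because CGImage rows may be padded differently than Leptonica's rows.
    const UInt8 *src = CFDataGetBytePtr(data);
    UInt8 *dst = (UInt8 *)pixGetData(pix);
    size_t srcRowBytes = CGImageGetBytesPerRow(cgImage);
    size_t dstRowBytes = (size_t)pixGetWpl(pix) * 4;
    size_t copyBytes = MIN(srcRowBytes, dstRowBytes);
    for (size_t y = 0; y < CGImageGetHeight(cgImage); y++) {
        memcpy(dst + y * dstRowBytes, src + y * srcRowBytes, copyBytes);
    }
    CFRelease(data);
    return pix;
}
// Usage: pixConvertTo8() returns a *new* PIX, so keep a reference and destroy it as well.
PIX *pix = [self pixCopyFromUIImage:image];
PIX *pix8 = pixConvertTo8(pix, FALSE);
// ... work with pix8 here ...
pixDestroy(&pix8);
pixDestroy(&pix);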
P.S. Sorry for my terrible English.
I want to make a slider that can change the saturation of the image in an image view.
I'm currently using OpenCV. I found some code on the web and tried it. It works, but in a slightly strange way: there is a white cup in the image, but its colour runs through the whole rainbow regardless of the slider value (unless the image is completely grayscale).
- (IBAction)stSlider:(id)sender {
float value = stSlider.value;
UIImage *image = [UIImage imageNamed:@"sushi.jpg"];
cv::Mat mat = [self cvMatFromUIImage:image];
cv::cvtColor(mat, mat, CV_RGB2HSV);
for (int i=0; i<mat.rows;i++)
{ for (int j=0; j<mat.cols;j++)
{
int idx = 1;
mat.at<cv::Vec3b>(i,j)[idx] = value;
}
}
cv::cvtColor(mat, mat, CV_HSV2RGB);
imageView.image = [self UIImageFromCVMat:mat];
}
This is the code I used.
Please tell me which part I have to change to make it work right.
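One possible fix, sketched here as an assumption rather than a verified answer: scale the existing saturation channel instead of overwriting every pixel with the same value, so white areas (saturation near 0) stay white. This assumes cvMatFromUIImage: returns a 3-channel RGB matrix and that the slider's range is set to roughly 0.0-2.0:
cv::cvtColor(mat, mat, CV_RGB2HSV);
for (int i = 0; i < mat.rows; i++) {
    for (int j = 0; j < mat.cols; j++) {
        cv::Vec3b &pixel = mat.at<cv::Vec3b>(i, j);
        // Channel 1 is saturation in OpenCV's HSV layout; multiply instead of assigning a constant.
        pixel[1] = cv::saturate_cast<uchar>(pixel[1] * value);
    }
}
cv::cvtColor(mat, mat, CV_HSV2RGB);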
I would like to know if there is any way to configure MapKit maps like we can with the MapTypeStyle object in the Google Maps API.
If I refer to Apple's docs, MKMapView has a mapType property that takes an MKMapType constant, but there are no style parameters like MapOptions with MapTypeStyle and MapTypeStyler, which are very powerful for quickly customising maps.
So my question is: is there any way to achieve something similar with the MapKit framework, and if not, what is the best framework/library to do this? I'm thinking of MapBox and similar products.
There are a few options for you, my friend. You could use one of these frameworks:
http://cloudmade.com/products/iphone-sdk
https://github.com/route-me/route-me
Or you could just use MapBox. Their API looks pretty good.
Alternatively, you can supply your own map tiles and overlay them on MapKit, with something like this in an MKOverlayView:
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context {
NSURL* fileURL = [(HeatMap*)self.overlay localUrlForStyle:@"alien" withMapRect:mapRect andZoomScale:zoomScale];
NSData *imageData = [NSData dataWithContentsOfURL:fileURL ];
if (imageData != nil) {
UIImage* img = [UIImage imageNamed:@"aTileX.png"];
// Perform the image render on the current UI context
UIGraphicsPushContext(context);
[img drawInRect:[self rectForMapRect:mapRect] blendMode:kCGBlendModeNormal alpha:1.0];
UIGraphicsPopContext();
}
}
Also check this out if you want unsupported "terrain" mode
http://openradar.appspot.com/9621632
I'm actually in the middle of a program that requires overlaying tiles over a map. This example has been very helpful. You'll want to look into MKOverlay and MKOverlayView. The project that I am doing involves using gheat. I am accessing the tiles through an NSURLConnection and storing them locally. A gist of my implementation.
There is no way to customize the map styles natively with MapKit. Your only option for this is to opt for a hybrid app approach, and then customize the styles using HTML/JavaScript in the page itself.
As the drawing of the tiles takes place in a private class called MKMapTileView, you cannot simply write a category. You have to implement another class for the custom drawing. The methods of this class will be used to overload the implementation of MKMapTileView at runtime:
Header file:
@interface MyColorMap : NSObject
+ (void)overLoadMethods:(Class)destinationClass;
@end
Implementation:
#import "MyColorMap.h"
#import <objc/runtime.h>
@implementation MyColorMap
+ (void)overLoadMethods:(Class)destinationClass {
// get the original method for drawing a tile
Method originalDrawLayer = class_getInstanceMethod(destinationClass, @selector(drawLayer:inContext:));
// get the method we will replace with the original implementation of 'drawLayer:inContext:' later
Method backupDrawLayer = class_getInstanceMethod([self class], @selector(backupDrawLayer:inContext:));
// get the method we will use to draw our own colors
Method myDrawLayer = class_getInstanceMethod([self class], @selector(myDrawLayer:inContext:));
// ditto with the implementations
IMP impOld = method_getImplementation(originalDrawLayer);
IMP impNew = method_getImplementation(myDrawLayer);
// replace the original 'drawLayer:inContext:' with our own implementation
method_setImplementation(originalDrawLayer, impNew);
// set the original 'drawLayer:inContext:' implementation to our stub method, so we can call it later on
SEL selector = method_getName(backupDrawLayer);
const char *types = method_getTypeEncoding(backupDrawLayer);
class_addMethod(destinationClass, selector, impOld, types);
}
- (void)backupDrawLayer:(CALayer*)l inContext:(CGContextRef)c {
// stub method, its implementation will never be called. The only reason we implement it is so we can call the original method at runtime
}
- (void)myDrawLayer:(CALayer*)l inContext:(CGContextRef)c {
// set the background to white so we can use it for the blend mode
CGContextSetFillColorWithColor(c, [[UIColor whiteColor] CGColor]);
CGContextFillRect(c, CGContextGetClipBoundingBox(c));
// set blendmode so the map will show as grayscale
CGContextSetBlendMode(c, kCGBlendModeLuminosity);
// kCGBlendModeExclusion for inverted colors etc.
// calling the stub method, which will become the original method at runtime
[self backupDrawLayer:l inContext:c];
// if you want more advanced manipulations you can alter the context after drawing:
// int w = CGBitmapContextGetWidth(c);
// int h = CGBitmapContextGetHeight(c);
//
// unsigned char* data = CGBitmapContextGetData(c);
// if (data != NULL) {
// int maxY = h;
// for(int y = 0; y<maxY; y++) {
// for(int x = 0; x<w; x++) {
//
// int offset = 4*((w*y)+x);
// char r = data[offset];
// char g = data[offset+1];
// char b = data[offset+2];
// char a = data[offset+3];
//
// // do what ever you want with the pixels
//
// data[offset] = r;
// data[offset+1] = g;
// data[offset+2] = b;
// data[offset+3] = a;
// }
// }
// }
}
@end
Now you have to call [MyColorMap overLoadMethods:NSClassFromString(@"MKMapTileView")] at some point before using an MKMapView.
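For example (a sketch of the call site, assuming a view controller that owns the map; the mapView property name is just illustrative):
- (void)viewDidLoad {
    [super viewDidLoad];
    // Swap in the custom drawing before the first MKMapView is created.
    [MyColorMap overLoadMethods:NSClassFromString(@"MKMapTileView")];
    self.mapView = [[MKMapView alloc] initWithFrame:self.view.bounds];
    [self.view addSubview:self.mapView];
}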