I want to use AVFoundation to set up my own camera feed and process the live feed to detect smiles.
A lot of what I need has been done here: https://developer.apple.com/library/ios/samplecode/SquareCam/Introduction/Intro.html
This sample code was written a long time ago, so I had to make some modifications to use it the way I want, mainly in terms of appearance.
The changes I made are as follows:
I enabled Auto Layout and size classes because I want to support different screen sizes. I also changed the dimensions of the preview layer to fill the full screen.
The session preset is set to AVCaptureSessionPresetPhoto for both iPhone and iPad.
Finally, I set the video gravity to AVLayerVideoGravityResizeAspectFill (this seems to be the key point).
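For reference, a minimal sketch of that configuration (the session and previewLayer names here are illustrative, not SquareCam's exact code):
session.sessionPreset = AVCaptureSessionPresetPhoto;

previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill; // fill the layer, crop the overflow
previewLayer.frame = self.view.bounds;                           // full-screen preview
[self.view.layer addSublayer:previewLayer];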
Now when I run the application, faces are detected, but the rectangles are drawn at the wrong coordinates.
When I change the video gravity to AVLayerVideoGravityResizeAspect, everything works fine again.
The only problem then is that the camera preview is no longer the desired size, which is the full screen.
So now I am wondering why this happens. I noticed a function in SquareCam, videoPreviewBoxForGravity, which takes the gravity type into account and seems to make the adjustments:
- (CGRect)videoPreviewBoxForGravity:(NSString *)gravity frameSize:(CGSize)frameSize apertureSize:(CGSize)apertureSize
One thing I noticed here: the frame size stays the same regardless of the gravity type.
Finally, I read elsewhere that when the gravity is set to AspectFill, part of the feed gets cropped, which is understandable and similar to a UIImageView's scale-to-fill.
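To make the cropping concrete, here is a small sketch of the AspectFill geometry, which is essentially what videoPreviewBoxForGravity computes (assuming apertureSize is already expressed in the preview's orientation):
// Scale so the video completely fills the layer, then center it; anything
// outside frameSize is cropped by the preview layer.
CGFloat scale = MAX(frameSize.width / apertureSize.width,
                    frameSize.height / apertureSize.height);
CGSize scaledSize = CGSizeMake(apertureSize.width * scale, apertureSize.height * scale);
CGRect videoBox = CGRectMake((frameSize.width - scaledSize.width) / 2.0,
                             (frameSize.height - scaledSize.height) / 2.0,
                             scaledSize.width, scaledSize.height);
// Face rectangles from the detector must be scaled by `scale` and offset by
// videoBox.origin before being drawn over the preview.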
My question is: how can I make the right adjustments so that this works for any videoGravity type and any size of preview layer?
I have had a look at some related questions; for example, "CIDetector give wrong position on facial features" seems to have a similar issue, but it does not help.
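One approach that sidesteps the manual math entirely (a sketch, not what SquareCam itself does): detect faces with AVCaptureMetadataOutput and let the preview layer convert the coordinates via transformedMetadataObjectForMetadataObject:, which accounts for whatever videoGravity and layer size are in use. The session and previewLayer names below are assumptions:
AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
if ([session canAddOutput:metadataOutput]) {
    [session addOutput:metadataOutput];
    [metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    metadataOutput.metadataObjectTypes = @[AVMetadataObjectTypeFace]; // set after adding to the session
}

// AVCaptureMetadataOutputObjectsDelegate
- (void)captureOutput:(AVCaptureOutput *)output
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    for (AVMetadataObject *face in metadataObjects) {
        // Converts from device coordinates to the preview layer's coordinates,
        // taking the current videoGravity into account.
        AVMetadataObject *converted = [previewLayer transformedMetadataObjectForMetadataObject:face];
        CGRect faceRectOnScreen = converted.bounds; // draw the square here
    }
}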
Thanks in advance.
I'm developing an AR application which can use a few different engines. One of them is based on SceneKit (not ARKit).
I used to make the SceneView background transparent and just display an AVCaptureVideoPreviewLayer under it. But this created a problem later: it turns out that if you use a clear backgroundColor for the SceneView and then add a floor node to it with diffuse.contents = UIColor.clear (a transparent floor), shadows are not displayed on it. And the goal for now is to have shadows in this engine.
I think the best way to get shadows to work is to set the camera preview as SCNScene.background.contents. For this I tried using AVCaptureDevice.default(for: .video). This worked, but it has one issue: you can't use the video format you want, because SceneKit automatically changes the format when the device is assigned. I even asked Apple for help using one of the two technical support requests you can send them, but they replied that for now there is no public API that would allow me to use this with the format I would like. On iPhone 6s the format changes to 30 FPS, and I need it to be 60 FPS. So this option is no good.
Is there some other way to assign the camera preview to the scene's background property? From what I've read, I can also use a CALayer for this property, so I tried assigning the AVCaptureVideoPreviewLayer, but this resulted in a black color only and no video. I updated the layer's frame to the correct size, but that didn't help. Maybe I did something wrong, and there is a way to use this AVCaptureVideoPreviewLayer, or something else?
Can you suggest some possible solutions? I know I could use ARKit, and I do for another engine, but for this particular one I need to keep using SceneKit.
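One possible workaround, sketched very roughly here (in Objective-C, though the idea is the same in Swift): drive the frames yourself with an AVCaptureVideoDataOutput, keep the device locked to the 60 FPS format you want, and assign each frame to scene.background.contents. The contents property accepts images, so a per-frame CGImage works for a proof of concept, although for real performance you would probably want to hand it an MTLTexture instead. captureSession, sceneView, and ciContext are assumed to exist elsewhere:
// One-time setup: SceneKit never touches the device, so your chosen format sticks.
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setSampleBufferDelegate:self
                               queue:dispatch_queue_create("camera.frames", DISPATCH_QUEUE_SERIAL)];
if ([captureSession canAddOutput:videoOutput]) {
    [captureSession addOutput:videoOutput];
}

// AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CGImageRef cgImage = [ciContext createCGImage:frame fromRect:frame.extent]; // reuse one CIContext
    dispatch_async(dispatch_get_main_queue(), ^{
        // scene.background is an SCNMaterialProperty and accepts image contents.
        sceneView.scene.background.contents = (__bridge_transfer id)cgImage;
    });
}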
I'm working on an app that records and saves video to the user's camera roll. My setup: I'm using an AVCaptureSession with an AVCaptureVideoPreviewLayer for the preview and AVCaptureMovieFileOutput for the saved file.
Under most circumstances, what is seen in the preview layer matches up with the saved video asset, but if I turn on stabilization (either AVCaptureVideoStabilizationMode.standard or AVCaptureVideoStabilizationMode.cinematic), and set the zoom factor very high (around 130), then the preview and the output become noticeably offset from each other.
The output is consistently above and slightly to the right of what's shown in the preview. I suspect this happens at smaller zoom factors as well, but the effect is minimal enough to only be noticeable at higher zoom factors.
Part of the reason for turning stabilization on in the first place is to more easily line objects up when zoomed in, so merely limiting zoom or turning off stabilization isn't really an option.
I'm curious to know why the preview and output aren't in sync, but ultimately I'm looking for a possible solution that lets me keep 1. zoom 2. stabilization and 3. an accurate preview.
UPDATE: Dec. 12
After messing around with this some more, it seems that sometimes the issue doesn't happen using cinematic stabilization, and my new theory is that it might have to do with specific combinations of AVCaptureDeviceFormats and stabilization settings.
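For anyone who wants to test that theory, a small sketch like the one below (device is assumed to be the active capture device) lists each format together with the stabilization modes it supports, so the problematic combinations can be narrowed down:
for (AVCaptureDeviceFormat *format in device.formats) {
    BOOL supportsStandard  = [format isVideoStabilizationModeSupported:AVCaptureVideoStabilizationModeStandard];
    BOOL supportsCinematic = [format isVideoStabilizationModeSupported:AVCaptureVideoStabilizationModeCinematic];
    NSLog(@"%@ | standard: %d | cinematic: %d", format, supportsStandard, supportsCinematic);
}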
For the first time, while using a different GPUImage filter, I am seeing strange behavior: GPUImage shows a fairly big difference between the live preview and the outputted photo.
I am currently experiencing this with GPUImageSobelEdgeDetectionFilter, as follows:
On the left-hand side I have a screenshot of the device screen, and on the right, the outputted photo. The output significantly reduces the thickness and sharpness of the detected lines, producing a very different picture.
I have tried turning SmoothlyScaleOutput on and off, but as I am not currently scaling the image, this should not be affecting it.
The filter is set up like so:
filterforphoto = [[GPUImageSobelEdgeDetectionFilter alloc] init];
[(GPUImageSobelEdgeDetectionFilter *)filterforphoto setShouldSmoothlyScaleOutput:NO];
[stillCamera addTarget:filterforphoto];
[filterforphoto addTarget:primaryView];
[stillCamera startCameraCapture];
[(GPUImageSobelEdgeDetectionFilter *)filterforphoto setEdgeStrength:1.0];
And the photo is taken like so:
[stillCamera capturePhotoAsImageProcessedUpToFilter:filterforphoto withCompletionHandler:^(UIImage *processedImage, NSError *error){
    // save or display processedImage here
}];
Does anyone know why GPUImage interprets the live camera so differently from the outputted photo? Is it simply because the preview is of a much lower quality than the final image, and therefore looks different at full resolution?
Thanks,
(P.S. Please ignore the slightly different sizing of the left and right images; I didn't quite line them up as well as I could have.)
The reason is indeed because of the different resolution between the live preview and the photo.
The way that the edge detection filters (and others like them) work is that they sample the pixels immediately on either side of the pixel currently being processed. When you provide a much higher resolution input in the form of a photo, this means that the edge detection occurs over a much smaller relative area of the image. This is also why Gaussian blurs of a certain pixel radius appear much weaker when applied to still photos vs. a live preview.
To lock the edge detection at a certain relative size, you can manually set the texelWidth and texelHeight properties on the filter. These values are 1/width and 1/height of the target image, respectively. If you set those values based on the size of the live preview, you should see a consistent edge size in the final photo. Some details may be slightly different, due to the higher resolution, but it should mostly be the same.
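For example, a sketch of that adjustment, assuming the live preview renders at 640x480 (substitute whatever size your preview view actually uses):
// Lock the sampling neighborhood to the preview's resolution so the
// full-resolution photo is processed over the same relative area.
CGSize previewSize = CGSizeMake(640.0, 480.0); // assumption: size of the live preview
[(GPUImageSobelEdgeDetectionFilter *)filterforphoto setTexelWidth:1.0 / previewSize.width];
[(GPUImageSobelEdgeDetectionFilter *)filterforphoto setTexelHeight:1.0 / previewSize.height];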
Background: I am implementing a face mask to help people focus their camera and to produce a uniform result across every picture. Unfortunately, the face mask needs to adjust its size when switching between the front- and back-facing cameras to provide a good guideline for people.
Problem: I have been trying to detect this switch between cameras so I can adjust my face mask accordingly, but I have not yet found how to detect it.
Additional info: I have tried looking into the delegate and/or subclassing the picker controller, but there are no methods exposed for this detection. My last resort would be a thread that keeps checking the camera source and adjusts if needed. I welcome anything better :)
I would take a look at the UIImagePickerController documentation around the cameraDevice property.
https://developer.apple.com/library/ios/#documentation/UIKit/Reference/UIImagePickerController_Class/UIImagePickerController_Class.pdf
You can create an observer to run a selector when it changes:
http://farwestab.wordpress.com/2010/09/09/using-observers-on-ios/
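A minimal sketch of that idea using key-value observing (note that cameraDevice is not documented as KVO-compliant, so verify this actually fires on the OS versions you target; picker is assumed to be your UIImagePickerController):
[picker addObserver:self
         forKeyPath:@"cameraDevice"
            options:NSKeyValueObservingOptionNew
            context:NULL];

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context
{
    if ([keyPath isEqualToString:@"cameraDevice"]) {
        UIImagePickerControllerCameraDevice device =
            [change[NSKeyValueChangeNewKey] integerValue];
        // Resize the face mask overlay here, e.g. when device ==
        // UIImagePickerControllerCameraDeviceFront.
    }
}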
I'm developing a game in AS3 for iPhone, and I've gotten it running reasonably well (a consistent 24 fps on iPhone 3G), but I've noticed that when the "character" goes partly off the screen, the frame rate drops to 10-12 fps. Does anyone know why this is and what I can do to remedy it?
Update - I've been through the code pretty thoroughly and even made a new project just to test animations. I started an image offscreen and moved it across the screen and back off. Any time the image is offscreen, even partially, the frame rates are terrible; once the image is fully on the screen, things pick back up to a solid 24 fps. I'm using cacheAsBitmap, I've tried masking the stage, and I've tried placing the image in a MovieClip and using scrollRect. I would keep objects from going off the screen, except that the nature of the game I'm working on has objects dropping from the top down (yes, I'm using object pooling; no, I'm not scaling anything; strictly x,y translations). And yes, I realize that Obj-C is probably the best answer, but I'd really like to avoid that if I can. AS3 is so much nicer to write in.
Try taking a look at the 'blitmasking' technique: http://www.greensock.com/blitmask
From Doyle himself:
A BlitMask is basically a rectangular Sprite that acts as a high-performance mask for a DisplayObject by caching a bitmap version of it and blitting only the pixels that should be visible at any given time, although its bitmapMode can be turned off to restore interactivity in the DisplayObject whenever you want. When scrolling very large images or text blocks, BlitMask can greatly improve performance, especially on mobile devices that have weaker processors.