I'm using AVCaptureMetadataOutput to detect faces on iOS, and I'm trying to set the orientation of the video after the user rotates their device. However, it appears that I can't: every time I check isVideoOrientationSupported on the only AVCaptureConnection that my AVCaptureMetadataOutput has, it returns NO. I've tried the code below in every place imaginable, yet the result is always the same. Is there any way to set the orientation for my metadata?
AVCaptureConnection *conn = [self.metadataOutput connectionWithMediaType:AVMediaTypeMetadataObject];
NSLog(@"%@", self.metadataOutput.connections);
if (!conn) {
    NSLog(@"NULL CONNECTION OBJ");
}
if ([conn isVideoOrientationSupported]) {
    NSLog(@"Supported!");
} else {
    NSLog(@"Not supported");
}
An Apple Engineer solved this for me over on the Apple Developer Forums. Here's a link. This was their response:
If you want to translate your metadata objects' coordinate space to that of another AVCaptureOutput (such as the AVCaptureVideoDataOutput), use

- (AVMetadataObject *)transformedMetadataObjectForMetadataObject:(AVMetadataObject *)metadataObject connection:(AVCaptureConnection *)connection NS_AVAILABLE_IOS(6_0);

It's in AVCaptureOutput.h. If you want to translate the coordinates to the coordinate space of your video preview layer, use AVCaptureVideoPreviewLayer.h's

- (AVMetadataObject *)transformedMetadataObjectForMetadataObject:(AVMetadataObject *)metadataObject NS_AVAILABLE_IOS(6_0);
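As a rough usage sketch (assuming a self.previewLayer property holding the session's AVCaptureVideoPreviewLayer), converting a detected face into the preview layer's coordinate space could look like this:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection {
    for (AVMetadataObject *object in metadataObjects) {
        if ([object.type isEqualToString:AVMetadataObjectTypeFace]) {
            // Translate from the metadata output's coordinate space into the
            // preview layer's; the result accounts for the layer's orientation.
            AVMetadataFaceObject *face = (AVMetadataFaceObject *)
                [self.previewLayer transformedMetadataObjectForMetadataObject:object];
            NSLog(@"face bounds in layer coordinates: %@", NSStringFromCGRect(face.bounds));
        }
    }
}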
I have written code that captures device video input, and so far it is working fine. Here is what I have set up:
// add preview layer
_previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:_session];
_previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.videoView.layer addSublayer:_previewLayer];
// add movie output
_movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
[_session addOutput:_movieFileOutput];
AVCaptureConnection *movieFileOutputConnection = [_movieFileOutput connectionWithMediaType:AVMediaTypeVideo];
movieFileOutputConnection.videoOrientation = [self videoOrientationFromCurrentDeviceOrientation];
// start session
[_session startRunning];
where:
- (AVCaptureVideoOrientation)videoOrientationFromCurrentDeviceOrientation {
    switch ([[UIApplication sharedApplication] statusBarOrientation]) {
        case UIInterfaceOrientationPortrait:
            return AVCaptureVideoOrientationPortrait;
        case UIInterfaceOrientationLandscapeLeft:
            return AVCaptureVideoOrientationLandscapeLeft;
        case UIInterfaceOrientationLandscapeRight:
            return AVCaptureVideoOrientationLandscapeRight;
        case UIInterfaceOrientationPortraitUpsideDown:
            return AVCaptureVideoOrientationPortraitUpsideDown;
        case UIInterfaceOrientationUnknown:
        default:
            // 0 is not a valid AVCaptureVideoOrientation (values start at 1),
            // so fall back to portrait instead of returning 0.
            return AVCaptureVideoOrientationPortrait;
    }
}
Now, when the interface orientation changes, I want my output to change as well, so I have this:
- (void)updatePreviewLayer {
    _previewLayer.frame = CGRectMake(0, 0, self.videoView.frame.size.width, self.videoView.frame.size.height);
    _previewLayer.connection.videoOrientation = [self videoOrientationFromCurrentDeviceOrientation];

    [_session beginConfiguration];
    AVCaptureConnection *movieFileOutputConnection = [_movieFileOutput connectionWithMediaType:AVMediaTypeVideo];
    movieFileOutputConnection.videoOrientation = [self videoOrientationFromCurrentDeviceOrientation];
    [_session commitConfiguration];
}
But alas, it is not working. It seems that once I first set the video orientation on the movie output, it stays like that and cannot be changed later. So if I start filming in landscape mode and then change to portrait, the video will be correct for the landscape part, but the portrait part will be rotated. It is the same if I start in portrait mode: then the landscape part will be rotated.
Is there any way to do this right?
Try adding this before you start your session:
[_movieFileOutput setRecordsVideoOrientationAndMirroringChanges:YES asMetadataTrackForConnection:movieFileOutputConnection];
The header file documentation for this method makes it sound very much like what you're looking for:
Controls whether or not the movie file output will create a timed metadata track that records samples which reflect changes made to the given connection's videoOrientation and videoMirrored properties during recording.

There's more interesting information there; I'd read it all.
However, this method doesn't actually rotate your frames; it uses timed metadata to instruct players to do it at playback time, so it's possible that not all players will support this feature. If that's a deal breaker, then you can abandon AVCaptureMovieFileOutput in favour of the lower level AVCaptureVideoDataOutput + AVAssetWriter combination, where your videoOrientation changes actually rotate the frames, resulting in files that play back correctly in any player:
If an AVCaptureVideoDataOutput instance's connection's videoOrientation or videoMirrored properties are set to non-default values, the output applies the desired mirroring and orientation by physically rotating and or flipping sample buffers as they pass through it.
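As a rough sketch of that lower-level route (the queue and delegate wiring here are assumptions; the full pipeline also needs an AVAssetWriter with an AVAssetWriterInput fed from the delegate callback):

// Sketch: the connection physically rotates the buffers, so the file you
// write plays back correctly in any player.
AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoDataOutput setSampleBufferDelegate:self queue:_captureQueue]; // _captureQueue: your serial dispatch queue
[_session addOutput:videoDataOutput];

AVCaptureConnection *videoConnection = [videoDataOutput connectionWithMediaType:AVMediaTypeVideo];
videoConnection.videoOrientation = [self videoOrientationFromCurrentDeviceOrientation];

// Then, in captureOutput:didOutputSampleBuffer:fromConnection:, append the
// already rotated buffers to your AVAssetWriterInput.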
p.s. I don't think you need the beginConfiguration/commitConfiguration pair if you're only changing one property, as that's for batching multiple modifications into one atomic update.
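For example, the pair only pays off when grouping several changes into one atomic update, something like:

[_session beginConfiguration];
movieFileOutputConnection.videoOrientation = [self videoOrientationFromCurrentDeviceOrientation];
movieFileOutputConnection.videoMirrored = NO; // a second change, batched with the first
[_session commitConfiguration];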
Have you tried pausing the session before changing the configuration?
I have an application which needs to use the microphone for recording the user's voice; I'm trying to do speech-to-text.

I'm working with SpeechKit.framework, and below is the code I use:
- (void)startRecording {
    self.voiceSearch = [[SKRecognizer alloc] initWithType:SKSearchRecognizerType
                                                detection:SKShortEndOfSpeechDetection
                                                 language:[[USER_DEFAULT valueForKey:LANGUAGE_SPEECH_DIC] valueForKey:@"record"]
                                                 delegate:self];
}

- (void)recognizer:(SKRecognizer *)recognizer didFinishWithResults:(SKRecognition *)results {
    long numOfResults = [results.results count];
    if (numOfResults > 0) {
        // Update the text field with the best result from SpeechKit
        self.recordString = [results firstResult];
        [self sendChatWithMediaType:@"messageCall" MediaUrl:@"" ContactDetail:@"{}" LocationDetail:@"{}"];
        [self.voiceSearch stopRecording];
    }
    if (self.voiceSearch) {
        [self.voiceSearch cancel];
    }
    [self startRecording];
}
That keeps the SKRecognizer open all the time, which reduces the application's performance.

I want to start the SKRecognizer only when the microphone detects input audio.

Is there a method for that? A method which is called when the microphone picks up sound, or one which continuously returns the level of the detected audio?
Thank you!
You need to use the SpeechKit class to set up the audio. Look here for details:

http://www.raywenderlich.com/60870/building-ios-app-like-siri

This project shows how to detect an audio threshold:

github.com/picciano/iOS-Audio-Recoginzer
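If all you need is a trigger, one common approach (a sketch, independent of SpeechKit; the levelRecorder/levelTimer properties and the -20 dB threshold are assumptions you'd tune yourself) is to run an AVAudioRecorder with metering enabled and poll its level:

#import <AVFoundation/AVFoundation.h>

- (void)startLevelMonitoring {
    // Record to /dev/null: we only want the meter readings, not the audio.
    NSURL *url = [NSURL fileURLWithPath:@"/dev/null"];
    NSDictionary *settings = @{ AVFormatIDKey: @(kAudioFormatAppleLossless),
                                AVSampleRateKey: @44100.0,
                                AVNumberOfChannelsKey: @1 };
    self.levelRecorder = [[AVAudioRecorder alloc] initWithURL:url settings:settings error:NULL];
    self.levelRecorder.meteringEnabled = YES;
    [self.levelRecorder record];
    self.levelTimer = [NSTimer scheduledTimerWithTimeInterval:0.1
                                                       target:self
                                                     selector:@selector(checkAudioLevel)
                                                     userInfo:nil
                                                      repeats:YES];
}

- (void)checkAudioLevel {
    [self.levelRecorder updateMeters];
    if ([self.levelRecorder averagePowerForChannel:0] > -20.0f) { // sound detected
        [self.levelTimer invalidate];
        [self.levelRecorder stop];
        [self startRecording]; // now start the SKRecognizer
    }
}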
How do I switch from photo mode to video mode in PBJVision? Right now I have:
PBJVision *vision = [PBJVision sharedInstance];
vision.delegate = self;
[vision setCameraMode:PBJCameraModePhoto];
[vision setCameraOrientation:PBJCameraOrientationPortrait];
[vision setFocusMode:PBJFocusModeAutoFocus];
[vision setOutputFormat:PBJOutputFormatPreset];
[[PBJVision sharedInstance] capturePhoto];
You can change the camera mode by adding just one line. The answer already exists in your code:
[vision setCameraMode:PBJCameraModeVideo];
And use these to record video:
[[PBJVision sharedInstance] startVideoCapture];
[[PBJVision sharedInstance] endVideoCapture];
It might also help to know the following. Switching the camera mode seems to take a bit of time. When I used it like this, an error occurred (in my case, changing from video mode to photo mode):
[vision setCameraMode:PBJCameraModePhoto];
[vision capturePhoto];
The cause is that session setting for camera mode changing is not end completely yet.
- (void)capturePhoto
{
    if (![self _canSessionCaptureWithOutput:_currentOutput] || _cameraMode != PBJCameraModePhoto) {
        DLog(@"session is not setup properly for capture");
        return; // <--- this is where it returned for me
    }
    ....
}
So be careful when changing the camera mode and calling capture in immediate sequence. :)
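One way to sequence them safely (a sketch; it assumes the PBJVisionDelegate callback visionModeDidChange: in your version of the library, plus a hypothetical wantsPhotoAfterModeChange flag of your own):

- (void)switchToPhotoModeAndCapture {
    self.wantsPhotoAfterModeChange = YES; // hypothetical BOOL property
    [[PBJVision sharedInstance] setCameraMode:PBJCameraModePhoto];
}

// PBJVisionDelegate: called once the session is reconfigured for the new mode
- (void)visionModeDidChange:(PBJVision *)vision {
    if (self.wantsPhotoAfterModeChange && vision.cameraMode == PBJCameraModePhoto) {
        self.wantsPhotoAfterModeChange = NO;
        [vision capturePhoto]; // safe now; the session is set up for photo capture
    }
}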
There are two types of focus detection since the introduction of the iPhone 6:

1. Contrast detection
2. Phase detection

From the iPhone 6/6 Plus onwards, phase detection is used.
I am trying to get the current focus system:
self.format = [[AVCaptureDeviceFormat alloc] init];
[self.currentDevice setActiveFormat:self.format];
AVCaptureAutoFocusSystem currentSystem = [self.format autoFocusSystem];
if (currentSystem == AVCaptureAutoFocusSystemPhaseDetection) {
    [self.currentDevice addObserver:self forKeyPath:@"lensPosition" options:NSKeyValueObservingOptionNew context:nil];
} else if (currentSystem == AVCaptureAutoFocusSystemContrastDetection) {
    [self.currentDevice addObserver:self forKeyPath:@"adjustingFocus" options:NSKeyValueObservingOptionNew context:nil];
} else {
    NSLog(@"No observers added");
}
but now it's crashing on the following line:

AVCaptureAutoFocusSystem currentSystem = [self.format autoFocusSystem];

I am unable to find a proper description of the crash.
You're creating an "AVCaptureDeviceFormat" with plain alloc/init, but nothing is actually set up inside it. It's just unusable garbage (and that's why you are getting a crash).

Each capture device has a list of formats it can work with.
You can see them by doing something like:
for (AVCaptureDeviceFormat *format in [self.currentDevice formats]) {
    CFStringRef formatName = CMFormatDescriptionGetExtension([format formatDescription], kCMFormatDescriptionExtension_FormatName);
    NSLog(@"format name is %@", (__bridge NSString *)formatName);
}
What you should be doing is deciding which AVCaptureDevice you want to use (e.g. the front camera, the back camera, whatever), picking one of the formats from that device, and then setting it with [self.currentDevice setActiveFormat:format].
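A minimal sketch of that, picking (say) the first format whose autofocus system is phase detection and making it active (lockForConfiguration: is required before touching activeFormat):

AVCaptureDeviceFormat *chosenFormat = nil;
for (AVCaptureDeviceFormat *format in [self.currentDevice formats]) {
    if ([format autoFocusSystem] == AVCaptureAutoFocusSystemPhaseDetection) {
        chosenFormat = format;
        break;
    }
}
NSError *error = nil;
if (chosenFormat && [self.currentDevice lockForConfiguration:&error]) {
    self.currentDevice.activeFormat = chosenFormat; // a real, device-supplied format
    [self.currentDevice unlockForConfiguration];
}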
The WWDC 2013 session "What's New in Camera Capture" video has more information on how to do this.
I'm seeing strange behavior with CMMotionManager. I'm trying to calibrate the position of my device so that my app supports multiple device orientations.

When I debug my app on a real device (not in the Simulator), everything works fine. When I run the same app without debugging, the calibration does not work.
Here's my code:
static CMMotionManager *_motionManager;
static CMAttitude *_referenceAttitude;

// Returns a vector with the current orientation values.
// On the first call a reference attitude is saved so that motion detection
// works for multiple device positions.
+ (GLKVector3)getMotionVectorWithLowPass {
    // Motion
    CMAttitude *attitude = self.getMotionManager.deviceMotion.attitude;
    if (_referenceAttitude == nil) {
        // Cache the start orientation
        _referenceAttitude = [_motionManager.deviceMotion.attitude copy];
    } else {
        // Use the start orientation to calibrate
        [attitude multiplyByInverseOfAttitude:_referenceAttitude];
        NSLog(@"roll: %f", attitude.roll);
    }
    return [self lowPassWithVector:GLKVector3Make(attitude.pitch, attitude.roll, attitude.yaw)];
}

+ (CMMotionManager *)getMotionManager {
    if (_motionManager == nil) {
        _motionManager = [[CMMotionManager alloc] init];
        _motionManager.deviceMotionUpdateInterval = 0.25;
        [_motionManager startDeviceMotionUpdates];
    }
    return _motionManager;
}
I've found a solution. The issue was caused by the different timing behavior between debug and non-debug mode. CMMotionManager needs a little time to initialize before it returns correct values. The solution was to postpone the calibration by 0.25 seconds.
This code works:
+ (GLKVector3)getMotionVectorWithLowPass {
    // Motion
    CMAttitude *attitude = self.getMotionManager.deviceMotion.attitude;
    if (_referenceAttitude == nil) {
        // Cache the start orientation
        // NEW: defer the calibration until CMMotionManager delivers real values
        [self performSelector:@selector(calibrate) withObject:nil afterDelay:0.25];
    } else {
        // Use the start orientation to calibrate
        [attitude multiplyByInverseOfAttitude:_referenceAttitude];
        NSLog(@"roll: %f", attitude.roll);
    }
    return [self lowPassWithVector:GLKVector3Make(attitude.pitch, attitude.roll, attitude.yaw)];
}

// NEW:
+ (void)calibrate {
    _referenceAttitude = [self.getMotionManager.deviceMotion.attitude copy];
}