I have the zoom feature working (1x onwards) for a custom camera implemented using AVFoundation. This works fine up to the iPhone X models, but I would like to offer 0.5x zoom on the iPhone 11.
Any help would be appreciated.
I'm not sure I understood the question correctly. I don't think you can achieve a 0.5x zoom using only the wide-angle camera.
If you are referring to the 0.5x zoom as in the native iOS camera app, you achieve that by switching from the wide-angle camera to the ultra wide-angle camera.
I think you have two options:
You can switch directly to the ultra-wide camera via a button or similar control (below is a short snippet for selecting the camera and assigning it to your active device):
// Prefer the ultra wide-angle back camera if the device has one.
if let device = AVCaptureDevice.default(.builtInUltraWideCamera, for: .video, position: .back) {
    videoDevice = device
} else {
    fatalError("no ultra-wide back camera")
}
On supported devices you can use a virtual device such as .builtInDualWideCamera (the wide + ultra-wide pair, available on the iPhone 11) as your active device. In this case, the system will automatically switch between the different cameras depending on the zoom factor. On the iPhone 11 Pro and 12 Pro series you can even use .builtInTripleCamera to get the 2x telephoto as well. You can also check when the switch happens; please refer to the link below for more information.
https://developer.apple.com/documentation/avfoundation/avcapturedevice/3153003-virtualdeviceswitchovervideozoom
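Something along these lines should get you started (an untested sketch; the fallback chain and the zoom factor of 1.0 are just one possible choice):
import AVFoundation

// Prefer the triple camera, then the dual-wide camera, then the plain wide-angle camera.
// On the virtual devices a zoom factor of 1.0 corresponds to the ultra-wide field of view,
// and virtualDeviceSwitchOverVideoZoomFactors tells you where the next lens takes over.
let videoDevice = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back)
    ?? AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back)
    ?? AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)

if let device = videoDevice {
    print("Switch-over zoom factors:", device.virtualDeviceSwitchOverVideoZoomFactors)
    do {
        try device.lockForConfiguration()
        device.videoZoomFactor = 1.0   // widest available field of view
        device.unlockForConfiguration()
    } catch {
        print("Could not lock the device for configuration: \(error)")
    }
}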
When checking ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces on an iPhone X running iOS 13 it returns 1. But looking at the ARKit promo page it says:
ARKit Face Tracking tracks up to three faces at once, using the TrueDepth camera on iPhone X, iPhone XS, iPhone XS Max, iPhone XR, and iPad Pro to power front-facing camera experiences like Memoji and Snapchat.
Is there any documentation that specifies what each device supports?
First, I should say that only devices with a TrueDepth sensor support ARFaceTrackingConfiguration, so the oldest device that qualifies is the iPhone X.
Second, to track three faces at a time you need iOS 13+, as you already said.
Third, the default value is 1 tracked face, so you have to use the following instance property if you want to simultaneously track up to three faces:
var maximumNumberOfTrackedFaces: Int { get set }
For example:
guard ARFaceTrackingConfiguration.isSupported else {
    print("You can't track faces on this device.")
    return
}

let config = ARFaceTrackingConfiguration()
config.maximumNumberOfTrackedFaces = 3
sceneView.session.run(config, options: [.resetTracking, .removeExistingAnchors])
P.S. I haven't seen any documentation that specifies what each device supports except this one.
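As a defensive sketch (assuming the same sceneView as above), you could also clamp the request to whatever the current device reports instead of hard-coding 3:
import ARKit

// supportedNumberOfTrackedFaces (iOS 13+) reports what this device can actually handle.
let config = ARFaceTrackingConfiguration()
config.maximumNumberOfTrackedFaces = min(3, ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces)
sceneView.session.run(config, options: [.resetTracking, .removeExistingAnchors])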
Is it possible to capture video in landscape while the device is in portrait mode?
Something like this:
Actually, what I need is to capture in portrait but with width > height. I don't want the user to rotate the device, but I do want to capture a wider picture, as in landscape mode.
Just changing the preview layer frame to be wide (width > height) won't be enough, of course.
I tried changing the video orientation of the preview layer, but that rotates the picture, and that's not what I want.
previewLayer.connection.videoOrientation = .landscapeRight
Does that make any sense?
No, it's not possible, as you would have to physically rotate the camera sensor.
You can CROP the output video to whatever aspect ratio you desire.
This will, however, limit your vertical resolution to at most what your horizontal resolution currently is, as well as decreasing your field of view.
If you still want to crop the video to simulate this "smaller landscape mode" in real time, I suggest you use the GPUImageCropFilter from the GPUImage library (see the sketch below).
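A rough, untested sketch of that setup (previewView here is just a placeholder GPUImageView; adapt it to your own pipeline):
import AVFoundation
import GPUImage
import UIKit

// The crop region is in normalized coordinates (0...1) of the camera frame:
// keeping the full width but only the middle half of the height yields a
// wide (width > height) image without rotating the device.
let videoCamera = GPUImageVideoCamera(sessionPreset: AVCaptureSession.Preset.hd1280x720.rawValue,
                                      cameraPosition: .back)
videoCamera?.outputImageOrientation = .portrait

let cropFilter = GPUImageCropFilter(cropRegion: CGRect(x: 0.0, y: 0.25, width: 1.0, height: 0.5))
let previewView = GPUImageView(frame: UIScreen.main.bounds)

videoCamera?.addTarget(cropFilter)
cropFilter.addTarget(previewView)
videoCamera?.startCameraCapture()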
You can; you need to use AVAssetWriter and set the dimensions of the output video.
However, remember that you're going to be reducing quality. If the camera orientation is portrait, then what you're receiving is a video that is (for argument's sake) 720H x 360W.
So you want to make that landscape; if you preserve the aspect ratio, you're going to end up with a video (by cropping the input) that's 180H x 360W.
Remember, there is a difference between what the camera sees, what you send to the preview layer and what you record to a file - they can all be independent of each other (you spoke about changing the preview layer frame; remember that it has nothing to do with the video you write out). The cropping itself is sketched below.
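Something like the following, where the cropping is delegated to the writer input via an aspect-fill scaling mode (the dimensions are only illustrative):
import AVFoundation

// Aspect-fill scales the incoming (portrait) frames to cover the wider-than-tall
// target size and crops whatever doesn't fit, giving a landscape-shaped file.
let outputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 1080,
    AVVideoHeightKey: 540,
    AVVideoScalingModeKey: AVVideoScalingModeResizeAspectFill
]
let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: outputSettings)
writerInput.expectsMediaDataInRealTime = true
// Add this input to your AVAssetWriter and feed it sample buffers from an
// AVCaptureVideoDataOutput as usual.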
Have you tried setting the gravity and bounds of the preview layer?
let bounds: CGRect = self.view.layer.bounds
previewLayer?.videoGravity = .resizeAspectFill
previewLayer?.bounds = bounds
previewLayer?.position = CGPoint(x: bounds.midX, y: bounds.midY)
I would like to capture images using the secondary back camera on the iPhone 7. How do I force the back camera to use the second lens to take images? When I print out the available devices, there is only one back camera listed.
Availible devices: Optional([<AVCaptureFigVideoDevice: 0x105e12f70 [Back Camera][com.apple.avfoundation.avcapturedevice.built-in_video:0]>, <AVCaptureFigVideoDevice: 0x105d281e0 [Front Camera][com.apple.avfoundation.avcapturedevice.built-in_video:1]>, <AVCaptureFigAudioDevice: 0x174097c00 [iPhone Microphone][com.apple.avfoundation.avcapturedevice.built-in_audio:0]>])
Picture to show what I'm talking about..
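For reference, one way to enumerate the physical back cameras explicitly (a sketch; .builtInTelephotoCamera is only present on dual-camera models such as the iPhone 7 Plus):
import AVFoundation

// List the physical back cameras explicitly; on a single-camera device
// the telephoto type simply won't appear in the results.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .builtInTelephotoCamera],
    mediaType: .video,
    position: .back
)
for device in discovery.devices {
    print(device.localizedName, device.deviceType)
}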
I am developing a custom camera in which the camera is set to image capture mode. I need to adjust the zoom level of the camera preview according to the app requirements. The preview currently being displayed is otherwise fine; I just need it to zoom out further. I searched the internet but didn't find any solution. Please tell me how I can do this. I am attaching example images for better understanding: the first image is from my camera app and the second is from the Scanner Pro app, which shows a view covering more area when I point both apps at the same object from the same distance. My camera leaves no space around the paper, but the Scanner Pro camera shows space all around it, even though both cameras are at the same distance from the paper.
I don't know whether you still need this answer. Probably not, but for you and everyone else looking:
When you set the session preset, try using AVCaptureSession.Preset.photo (AVCaptureSessionPresetPhoto) on your capture session. This should resolve the weird zoom issue.
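For example, a minimal sketch (assuming you configure the session yourself):
let session = AVCaptureSession()
session.beginConfiguration()
if session.canSetSessionPreset(.photo) {
    // The photo preset matches the field of view of the still-image output,
    // which avoids the tighter crop of the video presets (the "weird zoom" above).
    session.sessionPreset = .photo
}
session.commitConfiguration()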
Your preview view is probably spilling over the edge of the screen. Make sure it is a 4:3 aspect ratio and that it doesn’t overflow your screen edges. With that you should see more of your image.
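For example, letterboxing the preview instead of filling the screen will show the whole frame (a small sketch; session and view are assumed to already exist):
import AVFoundation
import UIKit

let previewLayer = AVCaptureVideoPreviewLayer(session: session)
// .resizeAspect letterboxes the full 4:3 frame inside the layer's bounds,
// so nothing is cropped away at the screen edges.
previewLayer.videoGravity = .resizeAspect
previewLayer.frame = view.bounds
view.layer.addSublayer(previewLayer)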
I have an app that I would like to have video capture for the front-facing camera only. That's no problem. But I would like the video capture to always be in landscape, even when the phone is being held in portrait.
I have a working implementation based on the AVCamDemo code that Apple published. And borrowing from the information in this tech note, I am able to specify the orientation. There's just one problem: while the video frame is oriented correctly, the contents still appear as though shot in portrait:
I'm wondering if I'm simply running up against the physical constraints of the hardware: is the image sensor just oriented this way? The tech note referenced above says:
Important: Setting the orientation on a still image output and movie file output doesn't physically rotate the buffers. For the movie file output, it applies a track transform (matrix) to the video track so that the movie is rotated on playback, and for the still image output it inserts exif metadata that image viewers use to rotate the image properly when viewing later.
But my playback of that video suggests otherwise. Any insight or suggestions would be appreciated!
Thanks,
Aaron.
To answer your question, yes, the image sensor is just oriented that way. The video camera is an approx 1-megapixel "1080p" camera that has a fixed orientation. The 5MP (or 8MP for 4S, etc) still camera also has a fixed orientation. The lenses themselves don't rotate nor do any of the other camera bits, and hence the feed itself has a fixed orientation.
"But wait!", you say, "pictures I take with the camera app (or API) get rotated correctly. Why is that?" That's cuz iOS takes a look at the orientation of the phone when a picture is taken and stores that information with the picture (as an Exif attachment). Yet video isn't so flagged -- and each frame would have to be individually flagged, and then there's issues about what to do when the user rotates the phone during video....
So, no, you can't ask a video stream or a still image what orientation the phone was in when the video was captured. You can, however, directly ask the phone what orientation it is in now:
UIDeviceOrientation currentOrientation = [UIDevice currentDevice].orientation;
If you do that at the start of video capture (or when you grab a still image from a video feed) you can then use that information to do your own rotation of playback.
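For example, a rough sketch of baking that orientation into a file written with AVAssetWriter (in Swift; the output settings and the back-camera mapping below are assumptions worth verifying on a device):
import AVFoundation
import UIKit

// Sample the device orientation once when recording starts and translate it into
// a track transform, so players show the movie the "right way up".
// The mapping assumes the back camera, whose buffers arrive in landscape with the
// home button on the right (UIDeviceOrientation.landscapeLeft).
func transform(for orientation: UIDeviceOrientation) -> CGAffineTransform {
    switch orientation {
    case .landscapeLeft:      return .identity
    case .landscapeRight:     return CGAffineTransform(rotationAngle: .pi)
    case .portraitUpsideDown: return CGAffineTransform(rotationAngle: -CGFloat.pi / 2)
    default:                  return CGAffineTransform(rotationAngle: .pi / 2)   // portrait and unknown
    }
}

let settings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080
]
let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
writerInput.transform = transform(for: UIDevice.current.orientation)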