Video flips upside down when sharing to Facebook or Messenger - iOS

When I record videos with the front-facing "selfie" camera in the iOS app I am developing and save them to my camera roll, the orientation appears correct. It also appears correct if I share the video to the Mail app and open it on macOS. But as soon as I share this video from my app, or from the camera roll, to Facebook or Messenger, the orientation gets flipped upside down. It also gets flipped upside down if I send it to Messenger or post it on Facebook on macOS.
What is it about this video that makes it appear in the correct orientation on my device or in a web browser, but never in the Facebook app, Messenger, or on the Facebook website?
If there's any extra information I need to write to the video to make it appear correct, how is that done in Swift?
Even without knowing how to change it in Swift, it would really help to know which property of the video is responsible.
UPDATE
Upon another look at the EXIF data using exiftool, I found two vital differences.
The video taken with the front-facing camera (the one that flips when posted to Facebook or Messenger) has the following two properties:
Matrix Structure: -1 0 0 0 1 0 0 0 1
Rotation: 180
The video taken with the back-facing camera (which doesn't flip and appears correct when posted to Facebook or Messenger) has the following two properties:
Matrix Structure: 1 0 0 0 1 0 0 0 1
Rotation: 0
As of now, I don't know how I will go about changing these properties, or whether I need to change one or both. Since the video appears correct everywhere other than Facebook or Messenger, I'm guessing only one of them is responsible.
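For anyone who wants to inspect this from code: these fields come from the video track's transform matrix, which AVFoundation exposes as preferredTransform. A minimal sketch for logging it (my own addition, assuming a local file URL):

import AVFoundation

func logVideoTransform(url: URL) {
    let asset = AVAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return }
    let t = track.preferredTransform
    // The identity transform corresponds to Matrix Structure 1 0 0 0 1 0 0 0 1 / Rotation 0;
    // a negative scale component (e.g. a = -1) shows up as the mirrored matrix above.
    print("preferredTransform: a=\(t.a) b=\(t.b) c=\(t.c) d=\(t.d) tx=\(t.tx) ty=\(t.ty)")
}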
UPDATE 2
I'm fairly certain it's fixed now. I am totally mystified as to why I was transforming the AVAssetWriterInput at all; I'm already compensating for the flip after the video is captured. I commented out everything within the isFrontCamera if statement:
let assetWriterVideoInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoSettings)
if isFrontCamera {
    var transform: CGAffineTransform = .identity
    if selectedInterfaceOrientation == interfaceOrientations.portrait {
        transform = CGAffineTransform(scaleX: -1.0, y: 1.0)
    } else if selectedInterfaceOrientation == interfaceOrientations.landscapeLeft {
        transform = CGAffineTransform(scaleX: 1.0, y: -1.0)
    } else if selectedInterfaceOrientation == interfaceOrientations.landscapeRight {
        transform = CGAffineTransform(scaleX: 1.0, y: -1.0)
    } else if selectedInterfaceOrientation == interfaceOrientations.portraitUpsideDown {
        transform = CGAffineTransform(scaleX: -1.0, y: 1.0)
    }
    assetWriterVideoInput.transform = transform
}
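A possible alternative (only a sketch, and assuming videoDataOutput is the AVCaptureVideoDataOutput feeding the writer) would be to mirror the front-camera frames on the capture connection itself, so that no compensating matrix ever needs to be written into the file:

if let connection = videoDataOutput.connection(with: .video), connection.isVideoMirroringSupported {
    connection.automaticallyAdjustsVideoMirroring = false
    connection.isVideoMirrored = isFrontCamera
}

With the connection doing the mirroring, the sample buffers handed to the AVAssetWriterInput are already in the desired orientation.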

Related

Swift - How to crop a QR code properly using an ARSession and Vision library?

This is a long question so I wanted to put a TL;DR on top:
I want to track QR codes via one of two methods: image tracking (by cropping them upon detection) or placing anchors with raycasting. Both of these methods fail when the phone is in portrait mode. The camera source is an ARSession; SceneKit and RealityKit are not used, only ARKit. What should I do?
I am currently working on a Swift application in which I render some content on a server, transmit the video to an iPhone, and display it on screen using an MTKView. I only needed a custom Metal shader to apply some complex calculations to the received frames, so I did not use SceneKit or RealityKit; I only have an ARSession from ARKit and a Metal view here, and up to this point everything works fine.
I am able to do image tracking at this point. However, I want to apply this behaviour to QR codes: detect a QR code (multiple if possible) and then track it just like an image. Since I don't have the QR codes as ARReferenceImages beforehand, as in normal image tracking, I was left with two options:
Option 1: Using raycast(_:) on ARSession
This is probably the right way to do it. However, for this I need to activate both plane detection options (horizontal and vertical) on the configuration, which creates many anchors, and managing them alongside image tracking becomes harder. That is not the actual problem, though. The actual problem is that when the phone is in landscape mode, raycasting works as intended; when the phone goes into portrait mode, even if I pass the frame in the correct orientation, it misses everything and the raycast returns no results. I am not using hitTest(_:) because it is deprecated.
I want to explain the "correct orientation" thing here before going into the second option. The ARSession captures frames, and I can inspect each one through the session's didUpdate delegate method. When I read the pixel buffer out of the frame using frame.capturedImage and turn it into a CIImage, the image is always in landscape orientation (width > height), no matter whether the phone is in portrait mode or not. So whenever I pass this image along, I use oriented(.right) for portrait and oriented(.up) for landscape. I got that idea from another question about QR bounding boxes, and so far it is the best option (but not good enough). I also want to note that when I tried raycasting, I used the image size, not the screen size (screen size = my Metal view size, because it is fullscreen), since the captured image is actually larger than the screen. I can see this if I put a breakpoint and Quick Look the CIImage created from the current camera frame.
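For reference, a minimal sketch of the raycasting path with only an ARSession available (my assumptions: the Vision observation comes from the unrotated capturedImage, and raycastQuery expects a normalized image-space point with a top-left origin):

import ARKit

func raycastCenter(of normalizedBoundingBox: CGRect, frame: ARFrame, session: ARSession) -> ARRaycastResult? {
    // Vision uses a bottom-left origin, so flip y before handing the point to ARKit.
    let center = CGPoint(x: normalizedBoundingBox.midX, y: 1 - normalizedBoundingBox.midY)
    let query = frame.raycastQuery(from: center, allowing: .estimatedPlane, alignment: .any)
    return session.raycast(query).first
}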
Option 2: Cropping the QR and treating it as image tracking
This is another approach which I am currently working on. The algorithm is simple: check every frame with Vision. If any QR codes are detected, read their data first. If that data matches an existing QR code, re-crop it only if the newly detected QR is larger than the existing one; if not, do nothing. Then use the cropped QR image to track the QR code as an image. At this point we would already have the data, so no problems there.
However, I have tried many times to do the proper transformation explained in the answer here. I think I am able to transform the normalized bounding box into a real rect that correctly crops the image, yet, as with raycasting, it works properly only if the phone is in landscape orientation. In portrait it works well enough ONLY IF the phone is really close to the QR code and the code is centered on the screen.
For related code, I have this in my View controller:
private var ciContext: CIContext = CIContext.init(options: nil)
private var sequenceHandler: VNImageRequestHandler?
And then I have this code to extract QR codes from CIImage:
func extractQrCode(image: CIImage) -> [VNBarcodeObservation]? {
    self.sequenceHandler = VNImageRequestHandler(ciImage: image)
    let barcodeRequest = VNDetectBarcodesRequest()
    barcodeRequest.symbologies = [.QR]
    try? self.sequenceHandler?.perform([barcodeRequest])
    guard let results = barcodeRequest.results else {
        return nil
    }
    return results
}
And this is the delegate method that checks and operates on every frame (the code is currently for Option 2):
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let rotImg = self.renderer?.getInterfaceOrientation() == .portrait
        ? CIImage(cvPixelBuffer: frame.capturedImage).oriented(.right)
        : CIImage(cvPixelBuffer: frame.capturedImage)
    if let barcodes = self.extractQrCode(image: rotImg) {
        for barcode in barcodes {
            guard let payload = barcode.payloadStringValue else { continue }
            var rect = CGRect()
            rect = VNImageRectForNormalizedRect(barcode.boundingBox.botToTop(), Int(rotImg.extent.width), Int(rotImg.extent.height))
            let existingQR = TrackedImagesManager.imagesToTrack.filter { $0.isQR && $0.QRData == payload }.first
            if ((rect.size.width < 800 || rect.size.height < 800 || abs(rect.size.height - rect.size.width) > 32) && existingQR == nil) {
                DispatchQueue.main.async {
                    self.showToastMessage(message: "Please get closer to the QR code and try centering it on your screen.", font: UIFont.systemFont(ofSize: 18), duration: 3)
                }
                continue
            } else if (existingQR != nil) {
                if (rect.width > existingQR?.originalImage?.size.width ?? 999) {
                    let croppedImg = rotImg.cropped(to: rect)
                    let croppedCgImage = self.ciContext.createCGImage(croppedImg, from: croppedImg.extent)!
                    let trackImg = UIImage(cgImage: croppedCgImage)
                    existingQR?.originalImage = trackImg
                    existingQR?.image = ARReferenceImage(croppedCgImage, orientation: .up, physicalWidth: 0.1)
                } else {
                    continue
                }
            } else if rect.width != 0 {
                let croppedImg = rotImg.cropped(to: rect)
                let croppedCgImage = self.ciContext.createCGImage(croppedImg, from: croppedImg.extent)!
                let trackImg = UIImage(cgImage: croppedCgImage)
                TrackedImagesManager.imagesToTrack.append(TrackedImage(id: 9, type: 1, image: ARReferenceImage(croppedCgImage, orientation: .up, physicalWidth: 0.1), originalImage: trackImg, isQR: true, QRData: payload))
                print("qr norm rect: \(barcode.boundingBox) \n qr rect: \(rect) \nqr data: \(payload) \nqr hittestres: ")
            }
        }
    }
}
Finally, for the transformation, I have this extension (I tried various approaches; this is the best so far):
extension CGRect {
    func botToTop() -> CGRect {
        let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -1)
        return self.applying(transform)
    }
}
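For completeness, this is roughly how a newly cropped reference image would have to be handed back to the running session so ARKit actually tracks it; a sketch only, under the assumption that the session runs an ARWorldTrackingConfiguration:

import ARKit

func addReferenceImage(_ referenceImage: ARReferenceImage, to session: ARSession) {
    guard let configuration = session.configuration as? ARWorldTrackingConfiguration else { return }
    // Merge the new image into the detection set, allow one more tracked image,
    // and re-run the session with the updated configuration (keeping existing anchors).
    configuration.detectionImages = (configuration.detectionImages ?? []).union([referenceImage])
    configuration.maximumNumberOfTrackedImages += 1
    session.run(configuration, options: [])
}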
So for both options I need some advice to make things work correctly. The Android side of the same feature is implemented as in Option 2, but there the platform returns a nicely cropped QR code upon detection; we don't get that here. What do I do now?

How to handle video overexposure in Swift

I'm working on a camera app, and the way my app and the default iPhone camera app handle overexposure seems to be very different.
As in the image below, the default camera app adjusts the exposure when overexposure is detected. (The whole screen seems to get slightly yellow-ish to tame the overexposed bright area, so I can still see the white keyboard even when dark objects cover most of the frame.)
Here is my app: I set the exposure mode to continuous auto exposure, but it won't adjust the overexposed area.
I want to adjust the brightness, but I also don't want to display the overexposed part (I just want my app to behave the way the default camera app does).
This is the code for adjusting the focus and exposure.
func setFocus(with focusMode: AVCaptureDevice.FocusMode, with exposureMode: AVCaptureDevice.ExposureMode, at point: CGPoint, monitorSubjectAreaChange: Bool, completion: @escaping (Bool) -> Void) {
    guard let captureDevice = captureDevice else { return }
    do {
        try captureDevice.lockForConfiguration()
    } catch {
        completion(false)
        return
    }
    if captureDevice.isSmoothAutoFocusSupported, !captureDevice.isSmoothAutoFocusEnabled {
        captureDevice.isSmoothAutoFocusEnabled = true
    }
    if captureDevice.isFocusPointOfInterestSupported, captureDevice.isFocusModeSupported(focusMode) {
        captureDevice.focusPointOfInterest = point
        captureDevice.focusMode = focusMode
    }
    if captureDevice.isExposurePointOfInterestSupported, captureDevice.isExposureModeSupported(exposureMode) {
        captureDevice.exposurePointOfInterest = point
        captureDevice.exposureMode = exposureMode
    }
    captureDevice.isSubjectAreaChangeMonitoringEnabled = monitorSubjectAreaChange
    captureDevice.unlockForConfiguration()
    completion(true)
}
And this is how I call the function:
func setFocusToCenter() {
    let center: CGPoint = CGPoint(x: cameraView.bounds.width / 2, y: cameraView.bounds.height / 2)
    let pointInCamera = cameraView.layer.captureDevicePointConverted(fromLayerPoint: center)
    setFocus(with: .continuousAutoFocus, with: .continuousAutoExposure, at: pointInCamera, monitorSubjectAreaChange: false, completion: { [weak self] success in
        guard let self = self, success else { return }
        // do some animation
    })
}
Even though I set the exposure mode to .continuousAutoExposure, do I still need to handle overexposure in code?
Also, if you have experience adjusting overexposure, how did you achieve it?
Added this part later...
I took screenshots to compare my app's camera and the native iPhone camera app.
Here is my camera app with .continuousAutoExposure and the exposurePointOfInterest set to the center of the screen.
However, the native iPhone camera app won't overexpose if I shoot a dark subject from a similar distance...
I think the native iPhone app also stays in .continuousAutoExposure mode until I touch the screen and adjust the focus to a point.
I dropped the image quality in order to paste the screenshots into this post, but I don't really see the blur on the originals. I configured the fps to 30 (the native iPhone camera is also at 30).
So what could be the reason for this overexposure?
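For reference, one way to compensate in code when highlights clip (a sketch only, not necessarily what the native camera app does, and assuming captureDevice is the active AVCaptureDevice) is to bias the auto-exposure target downward; the bias works together with .continuousAutoExposure:

import AVFoundation

func applyExposureBias(_ bias: Float, to captureDevice: AVCaptureDevice) {
    do {
        try captureDevice.lockForConfiguration()
        // Clamp the requested bias (in EV) to the device's supported range, then apply it.
        let clamped = max(captureDevice.minExposureTargetBias, min(bias, captureDevice.maxExposureTargetBias))
        captureDevice.setExposureTargetBias(clamped, completionHandler: nil)
        captureDevice.unlockForConfiguration()
    } catch {
        print("Could not lock the capture device for configuration: \(error)")
    }
}

// Example: darken by roughly two thirds of a stop.
// applyExposureBias(-0.7, to: captureDevice)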

Rendering Alpha Channel Video Over Background (AVFoundation, Swift)

Recently, I have been following this tutorial, which taught me how to play a video with an alpha channel in iOS. It has been working great: I build an AVPlayer over something like a UIImageView, which makes it look like my video (with its alpha-masked background removed) is playing on top of the image.
Using this approach, I now need to find a way to do the same thing while rendering/saving the video to the user's device. This is my code to generate the alpha video that plays in the AVPlayer:
let videoSize = CGSize(width: playerItem.presentationSize.width, height: playerItem.presentationSize.height / 2.0)
let composition = AVMutableVideoComposition(asset: playerItem.asset, applyingCIFiltersWithHandler: { request in
    let sourceRect = CGRect(origin: .zero, size: videoSize)
    let alphaRect = sourceRect.offsetBy(dx: 0, dy: sourceRect.height)
    let filter = AlphaFrameFilter()
    filter.inputImage = request.sourceImage.cropped(to: alphaRect)
        .transformed(by: CGAffineTransform(translationX: 0, y: -sourceRect.height))
    filter.maskImage = request.sourceImage.cropped(to: sourceRect)
    return request.finish(with: filter.outputImage!, context: nil)
(That's been truncated a bit for brevity, but I can confirm this approach properly returns an AVVideoComposition that I can play in an AVPlayer.)
I recognize that I can use an AVVideoComposition with an AVAssetExportSession, but that only lets me render my alpha video over a black background (not over an image or video, as I need).
Is there a way to overlay the now "background-removed" video on top of another video and export the result?
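One possible direction, sketched under two assumptions that are not part of the code above: AlphaFrameFilter is the filter from the tutorial, and backgroundImage is a CIImage already sized to videoSize. The idea is to composite the masked frame over the background inside the same CIFilter handler and then export that composition with AVAssetExportSession:

let composition = AVMutableVideoComposition(asset: playerItem.asset, applyingCIFiltersWithHandler: { request in
    let sourceRect = CGRect(origin: .zero, size: videoSize)
    let alphaRect = sourceRect.offsetBy(dx: 0, dy: sourceRect.height)
    let filter = AlphaFrameFilter()
    filter.inputImage = request.sourceImage.cropped(to: alphaRect)
        .transformed(by: CGAffineTransform(translationX: 0, y: -sourceRect.height))
    filter.maskImage = request.sourceImage.cropped(to: sourceRect)
    // Instead of finishing with the keyed-out frame on its own (which exports over black),
    // composite it over the background image first.
    let foreground = filter.outputImage ?? request.sourceImage
    request.finish(with: foreground.composited(over: backgroundImage), context: nil)
})
composition.renderSize = videoSize

For a background video rather than a still image, the same idea applies, but the background frames would have to come from a second asset, which the single-asset CIFilter handler does not provide; a custom AVVideoCompositing compositor or an AVMutableComposition with both tracks would be needed.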

Mirroring (flipping) camera preview layer

So I am using AVCaptureSession to take pictures with the front camera. I am also creating a previewLayer from this session to display the current image on screen.
previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
It all works like it should.
But now I have a problem: I need to implement a button which will flip/mirror (transform) this preview layer, so users have a choice between taking a normal selfie picture or a mirrored one.
I have already tried transforming the previewLayer and it KINDA works. The problem is that if you rotate the device, the preview picture rotates the wrong way because it is transformed (in the default camera app, or any other camera app, the picture rotates with the camera). Does anyone have an idea how to achieve this?
Mirroring the preview layer (I tried transforming the layer and even the view; same result):
@IBAction func mirrorCamera(_ sender: AnyObject) {
    cameraMirrored = !cameraMirrored
    if cameraMirrored {
        // TRANSFORMING THE VIEW
        self.videoPreviewView.transform = CGAffineTransform(scaleX: -1, y: 1)
        // OR THE LAYER
        self.previewLayer.transform = CATransform3DMakeScale(-1, 1, 1)
    } else {
        self.videoPreviewView.transform = CGAffineTransform(scaleX: 1, y: 1)
        self.previewLayer.transform = CATransform3DMakeScale(1, 1, 1)
    }
}
Nowadays, if you use the mirrored property of the preview layer directly, you will get a deprecation warning. The current way to do it is through the preview layer's capture connection, with something like this:
if let connection = cameraPreviewLayer.connection, connection.isVideoMirroringSupported {
    connection.automaticallyAdjustsVideoMirroring = false
    connection.isVideoMirrored = true
}
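Because the mirroring is then applied by the capture connection rather than by a view or layer transform, the preview keeps rotating with the device as usual, which avoids the reversed-rotation problem described in the question.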
AVCaptureVideoPreviewLayer also has a mirrored property. Set it to true or false, as required (though, as noted above, this route is deprecated).

SKVideoNode only rendering in SCNScene when the node or camera moves

I am using a very simple method to set up an SKVideoNode and place it inside an SCNNode via the geometry's diffuse contents. When I do this, the only time the texture updates and shows the video properly is when the camera or the node is moving. When both are stationary, the texture never updates (as if the video isn't even playing), but the sound does play.
Obviously the video is still playing; it's just not rendering properly. I have no idea why.
func setupAndPlay() {
    // create the asset & player and grab the dimensions
    let path = NSBundle.mainBundle().pathForResource("indycar", ofType: "m4v")!
    let asset = AVAsset(URL: NSURL(fileURLWithPath: path))
    let size = asset.tracksWithMediaType(AVMediaTypeVideo)[0].naturalSize
    let player = AVPlayer(playerItem: AVPlayerItem(asset: asset))

    // setup the SKVideoNode
    let videoNode = SKVideoNode(AVPlayer: player)
    videoNode.size = size
    videoNode.position = CGPoint(x: size.width * 0.5, y: size.height * 0.5)

    // setup the SKScene that will house the video node
    let videoScene = SKScene(size: size)
    videoScene.addChild(videoNode)

    // create a wrapper -- note that the geometry doesn't matter; it happens with spheres and planes
    let videoWrapperNode = SCNNode(geometry: SCNSphere(radius: 10))
    videoWrapperNode.position = SCNVector3(x: 0, y: 0, z: 0)

    // set the material's diffuse contents to be the video scene we created above
    videoWrapperNode.geometry?.firstMaterial?.diffuse.contents = videoScene
    videoWrapperNode.geometry?.firstMaterial?.doubleSided = true

    // reorient the video properly
    videoWrapperNode.scale.y = -1
    videoWrapperNode.scale.z = -1

    // add it to our scene
    scene.rootNode.addChildNode(videoWrapperNode)

    // if I uncomment this, the video plays correctly; if I comment it out, the texture on the
    // videoWrapperNode only gets updated when I'm moving the camera around. the sound always plays properly.
    videoWrapperNode.runAction(SCNAction.repeatActionForever(SCNAction.rotateByAngle(CGFloat(M_PI * 2.0), aroundAxis: SCNVector3(x: 0, y: 1, z: 0), duration: 15.0)))

    videoNode.play()
}
Has anyone come across anything similar? Any help would be appreciated.
Sounds like you need to set .playing = true on your SCNView.
From the docs:
If the value of this property is NO (the default), SceneKit does not increment the scene time, so animations associated with the scene do not play. Change this property's value to YES to start animating the scene.
I also found that setting rendersContinuously to true on the renderer (e.g. the SCNView) will make the video play.
If you log the SCNSceneRendererDelegate's update calls, you can see when frames are drawn.
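In modern Swift property naming, the two suggestions above amount to something like this (a minimal illustration, assuming sceneView is the SCNView presenting the scene):

// Keep SceneKit's render loop running so the SpriteKit video texture is refreshed
// every frame, even when nothing in the scene moves.
sceneView.isPlaying = true
// Alternatively, force continuous redraws:
sceneView.rendersContinuously = true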
I have been unsuccessful at getting an SKVideoNode to display video in Xcode 7.0 or 7.1. If I run my code or other samples on a hardware device like an iPad or iPhone, the video displays fine, but in the simulator only the audio plays. The same code works fine in Xcode 6.4's simulator.
I have the CatNap example from Ray Wenderlich's iOS & tvOS Games by Tutorial (iOS 9), and it does NOT run in the 9.1 simulator that comes with Xcode 7.1. I believe the simulator is broken and have filed a bug with Apple, but have had no response in a month.
Does anyone have sample code for an SKVideoNode that works in the simulator in Xcode 7.1?
