Recently, I have been following this tutorial, which taught me how to play a video with an alpha channel in iOS. This has been working great for building an AVPlayer over something like a UIImageView, which lets me make it look like my video (with the alpha channel removed) is playing on top of the image.
Using this approach, I now need to find a way to do this while rendering/saving the video to the user's device. This is my code to generate the alpha video that plays in the AVPlayer:
let videoSize = CGSize(width: playerItem.presentationSize.width,
                       height: playerItem.presentationSize.height / 2.0)
let composition = AVMutableVideoComposition(asset: playerItem.asset, applyingCIFiltersWithHandler: { request in
    let sourceRect = CGRect(origin: .zero, size: videoSize)
    let alphaRect = sourceRect.offsetBy(dx: 0, dy: sourceRect.height)
    let filter = AlphaFrameFilter()
    filter.inputImage = request.sourceImage.cropped(to: alphaRect)
        .transformed(by: CGAffineTransform(translationX: 0, y: -sourceRect.height))
    filter.maskImage = request.sourceImage.cropped(to: sourceRect)
    return request.finish(with: filter.outputImage!, context: nil)
})
(That's been truncated a bit for brevity, but I can confirm this approach properly returns an AVVideoComposition that I can play in an AVPlayer.)
I recognize that I can use an AVVideoComposition with an AVAssetExportSession, but that only allows me to render my alpha video over a black background (and not an image or video, as I'd need).
Is there a way to overlay the now "background-removed" alpha video on top of another video and export the result?
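For illustration, compositing over a static image can at least be sketched inside the same handler (backgroundImage here is a hypothetical CIImage already scaled to videoSize); what I'm missing is how to do the same against a second video track during export:

// Sketch only: composite the keyed-out frame over a hypothetical background CIImage.
let composited = filter.outputImage!.composited(over: backgroundImage)
return request.finish(with: composited, context: nil)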
I've created an AR app which detects an image; upon image detection I want to play a GIF on top of it.
I followed this tutorial to detect the image: https://www.raywenderlich.com/6957-building-a-museum-app-with-arkit-2
In the VC I added an image view like this:
var imageView = GIFImageView(frame: CGRect(x: 0, y: 0, width: 600, height: 600))
Here is my code in the ARSCNViewDelegate renderer(_:didAdd:for:) method.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    DispatchQueue.main.async { self.instructionLabel.isHidden = true }
    if let imageAnchor = anchor as? ARImageAnchor {
        // handleFoundImage(imageAnchor, node)
        let size = imageAnchor.referenceImage.physicalSize
        DispatchQueue.main.async { // Without this, a "UIView setAnimationsEnabled being called from a background thread" error appears.
            self.imageView.animate(withGIFNamed: "tenor.gif") // I actually load the GIF from the Documents folder as Data.
        }
        let imgMaterial = SCNMaterial()
        imgMaterial.diffuse.contents = imageView
        let imgPlane = SCNPlane(width: size.width, height: size.height)
        imgPlane.materials = [imgMaterial]
        let imgNode = SCNNode(geometry: imgPlane)
        imgNode.eulerAngles.x = -.pi / 2
        node.addChildNode(imgNode)
        node.opacity = 1
    }
}
After playing the GIF, when I go back to my previous/next/same VC I can't tap on any UI elements (buttons etc.).
In the console I see the message below, but I haven't found a solution for it. The view animation is in the UIImage+gif Swift file.
UIView setAnimationsEnabled being called from a background thread. Performing any operation from a background thread on UIView or a subclass is not supported and may result in unexpected and insidious behavior
Just run this on a device:
https://drive.google.com/file/d/1FKHPO6SkdOEZ-w_GFnrU5CeeeMQrNT-h/view?usp=sharing
Run the project on a device and scan the dinosaur.png image (added in Xcode); you will see the GIF playing on top of it. But once you go back to the first VC, that's it: the app is frozen, you can't tap any button in the first VC, and you can't start the AR scene again.
I can't figure out why this issue happens after playing the GIF. Can you please check and let me know?
If anything else is required, please let me know. Thanks in advance.
GIFs are a bit annoying to handle in iOS, so instead of GIFs I use mp4 resources, which can be loaded into an AVPlayer directly.
In your case I see that you have the resources in the bundle, so you can convert them to mp4 and use those instead.
To play a video in SceneKit you can use this link: How do I create a looping video material in SceneKit on iOS in Swift 3?
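As a rough sketch (assuming the converted file is bundled as "tenor.mp4" and that size and node come from your existing renderer(_:didAdd:for:) above; recent iOS versions let SceneKit take an AVPlayer directly as material contents, otherwise the SKVideoNode route from the linked answer works too):

let url = Bundle.main.url(forResource: "tenor", withExtension: "mp4")!
let player = AVPlayer(url: url)
let videoPlane = SCNPlane(width: size.width, height: size.height)
videoPlane.firstMaterial?.diffuse.contents = player // AVPlayer used as the plane's texture
videoPlane.firstMaterial?.isDoubleSided = true
let videoNode = SCNNode(geometry: videoPlane)
videoNode.eulerAngles.x = -.pi / 2
node.addChildNode(videoNode)
player.play()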
I'm having an issue with displaying a bounding box around a recognized object using Core ML & Vision.
The horizontal detection seems to be working correctly; vertically, however, the box is too tall, goes over the top edge of the video, doesn't go all the way to the bottom of the video, and doesn't follow the motion of the camera correctly. Here you can see the issue: https://imgur.com/Sppww8T
This is how video data output is initialized:
let videoDataOutput = AVCaptureVideoDataOutput()
videoDataOutput.alwaysDiscardsLateVideoFrames = true
videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)]
videoDataOutput.setSampleBufferDelegate(self, queue: dataOutputQueue!)
self.videoDataOutput = videoDataOutput
session.addOutput(videoDataOutput)
let c = videoDataOutput.connection(with: .video)
c?.videoOrientation = .portrait
I've also tried other video orientations, without much success.
Performing the vision request:
let handler = VNImageRequestHandler(cvPixelBuffer: image, options: [:])
try? handler.perform(vnRequests)
And finally, once the request is processed (viewRect is set to the size of the video view, 812x375; I know the video layer itself is a bit shorter, but that's not the issue here):
let observationRect = VNImageRectForNormalizedRect(observation.boundingBox, Int(viewRect.width), Int(viewRect.height))
I've also tried doing something like (with more issues):
var observationRect = observation.boundingBox
observationRect.origin.y = 1.0 - observationRect.origin.y
observationRect = videoPreviewLayer.layerRectConverted(fromMetadataOutputRect: observationRect)
I've tried to cut out as much of what I deemed to be irrelevant code as possible.
I've actually come across a similar issue using Apple's sample code, where the bounding box wouldn't vertically go around objects as expected: https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture. Maybe that means there is some issue with the API?
I use something like this:
let width = view.bounds.width
let height = width * 16 / 9
let offsetY = (view.bounds.height - height) / 2
let scale = CGAffineTransform.identity.scaledBy(x: width, y: height)
let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -height - offsetY)
let rect = prediction.boundingBox.applying(scale).applying(transform)
This assumes portrait orientation and a 16:9 aspect ratio, and it also assumes imageCropAndScaleOption = .scaleFill on the Vision request.
Credits: The transform code was taken from this repo: https://github.com/Willjay90/AppleFaceDetection
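Slotting that into a Vision completion handler looks roughly like this (a sketch; handleDetections and boxView are illustrative names, and it assumes a VNRecognizedObjectObservation-based request):

func handleDetections(_ request: VNRequest, error: Error?) {
    guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
    DispatchQueue.main.async {
        let width = self.view.bounds.width
        let height = width * 16 / 9
        let offsetY = (self.view.bounds.height - height) / 2
        let scale = CGAffineTransform.identity.scaledBy(x: width, y: height)
        let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -height - offsetY)
        for observation in observations {
            let rect = observation.boundingBox.applying(scale).applying(transform)
            self.boxView.frame = rect // hypothetical overlay view; draw the rect however your UI expects
        }
    }
}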
I am creating an AVVideoComposition with CIFilters this way:
videoComposition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { [weak self] request in
    // Clamp to avoid blurring transparent pixels at the image edges
    let source = request.sourceImage.clampedToExtent()
    let output: CIImage
    if let filteredOutput = self?.runFilters(source, filters: filters)?.cropped(to: request.sourceImage.extent) {
        output = filteredOutput
    } else {
        output = source
    }
    // Provide the filter output to the composition
    request.finish(with: output, context: nil)
})
And then, to correctly handle rotation, I create a passthrough instruction which sets an identity transform on the passthrough layer.
let passThroughInstruction = AVMutableVideoCompositionInstruction()
passThroughInstruction.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: asset.duration)
let passThroughLayer = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
passThroughLayer.setTransform(CGAffineTransform.identity, at: CMTime.zero)
passThroughInstruction.layerInstructions = [passThroughLayer]
videoComposition.instructions = [passThroughInstruction]
The problem is it crashes with the error:
'*** -[AVCoreImageFilterCustomVideoCompositor startVideoCompositionRequest:] Expecting video composition to contain only AVCoreImageFilterVideoCompositionInstruction'
My issue is that if I do not specify this passThroughInstruction, the output is incorrect when the input asset's video track contains a preferredTransform that specifies a 90-degree rotation. How do I use a video composition with Core Image filters that correctly handles the video track's preferredTransform?
EDIT: This question looks similar to, but is different from, other questions that involve playback. In my case playback is fine; it is rendering that produces the distorted video.
OK, I found the real issue. The videoComposition returned by AVMutableVideoComposition(asset:applyingCIFiltersWithHandler:) implicitly creates a composition instruction that physically rotates the CIImages when a rotation transform is applied through the video track's preferredTransform. Whether it should be doing that is debatable, since the preferredTransform is applied at the player level. As a workaround, in addition to what is suggested in this answer, I had to adjust the width and height passed via AVVideoWidthKey and AVVideoHeightKey in the videoSettings passed to AVAssetReader/Writer.
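A minimal sketch of that width/height adjustment, assuming a single video track (the writer-setup names here are illustrative, not from my project):

let videoTrack = asset.tracks(withMediaType: .video)[0]
let t = videoTrack.preferredTransform
// A 90- or 270-degree preferredTransform means the decoded buffers arrive with
// width and height swapped relative to naturalSize, so swap the writer settings too.
let isRotated = abs(t.b) == 1 && abs(t.c) == 1
let outputSize = isRotated
    ? CGSize(width: videoTrack.naturalSize.height, height: videoTrack.naturalSize.width)
    : videoTrack.naturalSize
let videoSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: outputSize.width,
    AVVideoHeightKey: outputSize.height
]
let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)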
I am using a very simple method to set up an SKVideoNode and place it inside an SCNNode via the geometry's diffuse contents. When I do this, the only time the texture updates and shows the video properly is when the camera or the node is moving. When both are stationary, the texture never updates (as if the video isn't even playing), but the sound does play.
Obviously it's still playing the video, but not rendering properly. I have no idea why.
func setupAndPlay() {
    // Create the asset & player and grab the dimensions
    let path = NSBundle.mainBundle().pathForResource("indycar", ofType: "m4v")!
    let asset = AVAsset(URL: NSURL(fileURLWithPath: path))
    let size = asset.tracksWithMediaType(AVMediaTypeVideo)[0].naturalSize
    let player = AVPlayer(playerItem: AVPlayerItem(asset: asset))

    // Set up the video SKVideoNode
    let videoNode = SKVideoNode(AVPlayer: player)
    videoNode.size = size
    videoNode.position = CGPoint(x: size.width * 0.5, y: size.height * 0.5)

    // Set up the SKScene that will house the video node
    let videoScene = SKScene(size: size)
    videoScene.addChild(videoNode)

    // Create a wrapper; note that the geometry doesn't matter, it happens with spheres and planes
    let videoWrapperNode = SCNNode(geometry: SCNSphere(radius: 10))
    videoWrapperNode.position = SCNVector3(x: 0, y: 0, z: 0)

    // Set the material's diffuse contents to be the video scene we created above
    videoWrapperNode.geometry?.firstMaterial?.diffuse.contents = videoScene
    videoWrapperNode.geometry?.firstMaterial?.doubleSided = true

    // Reorient the video properly
    videoWrapperNode.scale.y = -1
    videoWrapperNode.scale.z = -1

    // Add it to our scene
    scene.rootNode.addChildNode(videoWrapperNode)

    // If I uncomment this, the video plays correctly; if I comment it out, the texture on the videoWrapperNode only
    // gets updated when I'm moving the camera around. The sound always plays properly.
    videoWrapperNode.runAction(SCNAction.repeatActionForever(SCNAction.rotateByAngle(CGFloat(M_PI * 2.0), aroundAxis: SCNVector3(x: 0, y: 1, z: 0), duration: 15.0)))

    videoNode.play()
}
Has anyone come across anything similar? Any help would be appreciated.
Sounds like you need to set .playing = true on your SCNView.
From the docs:
If the value of this property is NO (the default), SceneKit does not increment the scene time, so animations associated with the scene do not play. Change this property's value to YES to start animating the scene.
I also found that setting rendersContinuously to true on the renderer (e.g. scnView) will make the video play.
If you log the SCNSceneRendererDelegate's update calls, you can see when frames are drawn.
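In Swift that looks roughly like this (assuming scnView is your SCNView; either line on its own may be enough):

scnView.isPlaying = true // let SceneKit advance scene time so the video texture updates
scnView.rendersContinuously = true // or force a redraw every frame instead of only on changes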
I have been unsuccessful at getting an SKVideoNode to display video in Xcode 7.0 or 7.1. If I run my code or other samples on a hardware device like an iPad or iPhone, the video displays fine, but in the simulator only the audio plays. The same code works fine in Xcode 6.4's simulator.
I have the CatNap example from Ray Wenderlich's iOS & tvOS Games by Tutorials (iOS 9), and it does NOT run in the Simulator 9.1 that comes with Xcode 7.1. I believe the simulator is broken and have filed a bug with Apple, but have had no response in a month.
Does anyone have sample code for an SKVideoNode that works on the simulator in Xcode 7.1?
I'm working on an app that exports CALayer animations over 2-10 second videos using AVMutableVideoComposition and AVVideoCompositionCoreAnimationTool (export via AVAssetExportSession).
There can be hundreds of CAShapeLayers in each composition, and each will have one or more animations attached to it.
let animationLayer = CALayer()
animationLayer.frame = CGRectMake(0, 0, size.width, size.height)
animationLayer.geometryFlipped = true
// Add a ton of CAShapeLayers with CABasicAnimations to animationLayer
let parentLayer = CALayer()
let videoLayer = CALayer()
parentLayer.frame = CGRectMake(0, 0, size.width, size.height)
videoLayer.frame = CGRectMake(0, 0, size.width, size.height)
parentLayer.addSublayer(videoLayer)
parentLayer.addSublayer(animationLayer)
mainCompositionInst.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)
let exportSession = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
exportSession.outputURL = finalUrl
exportSession.outputFileType = AVFileTypeQuickTimeMovie
exportSession.shouldOptimizeForNetworkUse = true
exportSession.videoComposition = mainCompositionInst
exportSession.exportAsynchronouslyWithCompletionHandler(...)
Now, this totally works. However, the composition export can be very slow when the animations are numerous (15-25 secs to export). I'm interested in any ideas to speed up the export performance.
One idea I have thus far is to do multiple composition/export passes and add a "reasonable" number of animation layers each pass. But I have a feeling that would just make it slower.
Or, perhaps export lots of smaller videos that each contain a "reasonable" number of animation layers, and then compose them all together in a final export.
Any other ideas? Is the slowness just a fact of life? I'd appreciate any insight! I'm pretty novice with AVFoundation.
I went down the video composition path and didn't like the constant frame rate that AVAssetExportSession enforced, so I manually rendered my content onto AVAssetReader's output and encoded it to mov with AVAssetWriter.
If you have a lot of content, you could translate it to OpenGL/Metal and use the GPU to render it blazingly quickly directly onto your video frames via texture caches.
I bet the GPU wouldn't break a sweat, so you'd be limited by the video encoder speed. I'm not sure what that is; the iPhone 4s used to do 3x realtime, and that can only have improved.
There's a lot of fiddling required to do what I suggest. You could get started very quickly on the OpenGL path by using GPUImage, although a Metal version would be very cool.
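If it helps, the reader/writer skeleton I'm describing looks roughly like this (a sketch only; the per-frame rendering pass is stubbed out and all names are illustrative):

import AVFoundation

func exportManually(from inputURL: URL, to outputURL: URL) throws {
    let asset = AVAsset(url: inputURL)
    let videoTrack = asset.tracks(withMediaType: .video)[0]

    // Decode frames with AVAssetReader.
    let reader = try AVAssetReader(asset: asset)
    let readerOutput = AVAssetReaderTrackOutput(
        track: videoTrack,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
    reader.add(readerOutput)

    // Encode frames with AVAssetWriter, keeping the source's (variable) frame timing.
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: videoTrack.naturalSize.width,
        AVVideoHeightKey: videoTrack.naturalSize.height
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(writerInput)

    reader.startReading()
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    let queue = DispatchQueue(label: "manual.render")
    writerInput.requestMediaDataWhenReady(on: queue) {
        while writerInput.isReadyForMoreMediaData {
            guard let sample = readerOutput.copyNextSampleBuffer(),
                  let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else {
                // Source exhausted: finish the file.
                writerInput.markAsFinished()
                writer.finishWriting { }
                return
            }
            let time = CMSampleBufferGetPresentationTimeStamp(sample)
            // renderAnimations(onto: pixelBuffer, at: time) // hypothetical GPU / Core Image pass over the frame
            if !adaptor.append(pixelBuffer, withPresentationTime: time) {
                writerInput.markAsFinished() // check writer.error in real code
                return
            }
        }
    }
}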