iOS Swift - how to get aspect ratio of local and remote video?

Scenario:
I'm building a WebRTC view inside an app.
The container for the videos will always have a height of 160.
The remote video should be displayed in the center of the container with a max height of 160, its width scaled to respect the video's aspect ratio. The width also cannot be bigger than the view width; in that case the width becomes equal to the view width and the height is adapted to the aspect ratio.
The local video from the front camera should be displayed in the top-right corner with a max width of 100, its height adapted to respect the local video's aspect ratio.
My code so far:
func createPeerConnection() {
    // some other code
    self.localStream = self.factory.mediaStream(withStreamId: "stream")
    let videoSource = self.factory.videoSource()
    let devices = RTCCameraVideoCapturer.captureDevices()
    if let camera = devices.last,
        let format = RTCCameraVideoCapturer.supportedFormats(for: camera).last,
        let fps = format.videoSupportedFrameRateRanges.first?.maxFrameRate {
        let intFps = Int(fps)
        self.capturer = RTCCameraVideoCapturer(delegate: videoSource)
        self.capturer?.startCapture(with: camera, format: format, fps: intFps)
        videoSource.adaptOutputFormat(toWidth: 100, height: 160, fps: Int32(fps))
    }
    let videoTrack = self.factory.videoTrack(with: videoSource, trackId: "video")
    self.localStream.addVideoTrack(videoTrack)

    DispatchQueue.main.async {
        if self.localView == nil {
            let videoView = RTCEAGLVideoView(frame: CGRect(x: self.view.frame.size.width - 105, y: 5, width: 100, height: 160))
            videoView.backgroundColor = UIColor.red
            self.view.addSubview(videoView)
            self.localView = videoView
        }
        videoTrack.add(self.localView!)
    }
}
func peerConnection(_ peerConnection: RTCPeerConnection, didAdd stream: RTCMediaStream) {
    self.remoteStream = stream
    if let videoTrack = stream.videoTracks.first {
        DispatchQueue.main.async {
            if self.remoteView == nil {
                let videoView = RTCEAGLVideoView(frame: CGRect(x: self.view.frame.size.width - 50, y: 0, width: 100, height: 160))
                videoView.backgroundColor = UIColor.green
                if let local = self.localView {
                    self.view.insertSubview(videoView, belowSubview: local)
                } else {
                    self.view.addSubview(videoView)
                }
                self.remoteView = videoView
            }
            videoTrack.add(self.remoteView!)
        }
    }
}
I don't know how to get the aspect ratio of either of the videos, local or remote. If I had that, I could compute the appropriate width and height for each of them.
// Edit with solution:
I did not find a way to get the exact size, but I found a way to render the video at scale. All I had to do was:
let videoView = RTCEAGLVideoView(frame: CGRect(x: self.view.frame.size.width - 105, y: 5, width: 100, height: 134))
videoView.contentMode = .scaleAspectFill
Now the video scales itself based on the container size.
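If you need the actual frame size rather than scaled rendering, the WebRTC renderer can report it: RTCEAGLVideoView has a delegate (RTCVideoViewDelegate, named RTCEAGLVideoViewDelegate in older builds of the framework) that is notified whenever the video dimensions become known or change. A minimal sketch, assuming the view controller above (the class name ViewController is a stand-in) adopts the protocol and that videoView.delegate = self was set when each video view was created:

extension ViewController: RTCVideoViewDelegate {
    // Called by WebRTC once the rendered video's dimensions are known or change.
    func videoView(_ videoView: RTCVideoRenderer, didChangeVideoSize size: CGSize) {
        guard size.height > 0 else { return }
        let aspectRatio = size.width / size.height
        // Remote video: cap the height at 160 and the width at the view width.
        let width = min(160 * aspectRatio, self.view.frame.size.width)
        self.remoteView?.frame.size = CGSize(width: width, height: width / aspectRatio)
    }
}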

You can use AVURLAsset and CGSize to get the resolution of a video:
private func resolutionForLocalVideo(url: URL) -> CGSize? {
    guard let track = AVURLAsset(url: url).tracks(withMediaType: .video).first else { return nil }
    // naturalSize is pre-rotation; applying preferredTransform yields the display size.
    let size = track.naturalSize.applying(track.preferredTransform)
    return CGSize(width: abs(size.width), height: abs(size.height))
}
Now use the natural size and preferredTransform:
var mediaAspectRatio: Double! // <- the aspect ratio of the video at the given URL will be set here

func initAspectRatioOfVideo(with fileURL: URL) {
    let resolution = resolutionForLocalVideo(url: fileURL)
    guard let width = resolution?.width, let height = resolution?.height else {
        return
    }
    mediaAspectRatio = Double(height / width)
}
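A hypothetical call site (the bundled file name is only an example; this works for any local file URL):

if let url = Bundle.main.url(forResource: "sample", withExtension: "mp4") {
    initAspectRatioOfVideo(with: url)
}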
Also, you can compute the scale factor:
float xScale = destination.size.width / imageSize.width; // destination is the max image drawing area.
float yScale = destination.size.height / imageSize.height;
float scaleFactor = xScale < yScale ? xScale : yScale;
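The same in Swift (destination and imageSize are assumed names carried over from the snippet above):

let xScale = destination.size.width / imageSize.width   // destination is the max image drawing area
let yScale = destination.size.height / imageSize.height
let scaleFactor = min(xScale, yScale)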
This can also be achieved with GPUImageMovie, GPUImageCropFilter and GPUImageMovieWriter.

Related

Cropping AVCapturePhoto to overlay rectangle displayed on screen

I am trying to take a picture of a thin piece of metal, cropped to the outline displayed on the screen. I have seen almost every other post on here, but nothing has worked for me yet. This image will then be used for analysis by a library. I can get some cropping to happen, but never to the rectangle displayed on screen. I have tried rotating the image before cropping, and calculating the rect based on the rectangle on screen.
Here is my capture code. PreviewView is the container, videoLayer is for the AVCapture video.
// Photo capture delegate
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    guard let imgData = photo.fileDataRepresentation(), let uiImg = UIImage(data: imgData), let cgImg = uiImg.cgImage else {
        return
    }
    print("Original image size: ", uiImg.size, "\nCGHeight: ", cgImg.height, " width: ", cgImg.width)
    print("Orientation: ", uiImg.imageOrientation.rawValue)
    guard let img = cropImage(image: uiImg) else {
        return
    }
    showImage(image: img)
}
func cropImage(image: UIImage) -> UIImage? {
    print("Image size before crop: ", image.size)
    // Get the croppedRect from the function below
    let croppedRect = calculateRect(image: image)
    guard let imgRet = image.cgImage?.cropping(to: croppedRect) else {
        return nil
    }
    return UIImage(cgImage: imgRet)
}

func calculateRect(image: UIImage) -> CGRect {
    let originalSize: CGSize
    let visibleLayerFrame = self.rectangleView.bounds
    // Calculate the rect from the rectangleView to translate to the image
    let metaRect = self.videoLayer.metadataOutputRectConverted(fromLayerRect: visibleLayerFrame)
    print("MetaRect: ", metaRect)
    // check orientation
    if image.imageOrientation == UIImage.Orientation.left || image.imageOrientation == UIImage.Orientation.right {
        originalSize = CGSize(width: image.size.height, height: image.size.width)
    } else {
        originalSize = image.size
    }
    let cropRect = CGRect(x: metaRect.origin.x * originalSize.width,
                          y: metaRect.origin.y * originalSize.height,
                          width: metaRect.size.width * originalSize.width,
                          height: metaRect.size.height * originalSize.height).integral
    print("Calculated Rect: ", cropRect)
    return cropRect
}

func showImage(image: UIImage) {
    if takenImage != nil {
        takenImage = nil
    }
    takenImage = UIImageView(image: image)
    takenImage.frame = CGRect(x: 10, y: 50, width: 400, height: 1080)
    takenImage.contentMode = .scaleAspectFit
    print("Cropped Image Size: ", image.size)
    self.previewView.addSubview(takenImage)
}
And here is along the lines of what I keep getting.
What am I screwing up?
I managed to solve the issue for my use case.
private func cropToPreviewLayer(from originalImage: UIImage, toSizeOf rect: CGRect) -> UIImage? {
    guard let cgImage = originalImage.cgImage else { return nil }
    // This previewLayer is the AVCaptureVideoPreviewLayer, with resizeAspectFill and portrait videoOrientation set.
    let outputRect = previewLayer.metadataOutputRectConverted(fromLayerRect: rect)
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let cropRect = CGRect(x: outputRect.origin.x * width,
                          y: outputRect.origin.y * height,
                          width: outputRect.size.width * width,
                          height: outputRect.size.height * height)
    if let croppedCGImage = cgImage.cropping(to: cropRect) {
        return UIImage(cgImage: croppedCGImage, scale: 1.0, orientation: originalImage.imageOrientation)
    }
    return nil
}
Usage for my case:
let rect = CGRect(x: 25, y: 150, width: 325, height: 230)
let croppedImage = self.cropToPreviewLayer(from: image, toSizeOf: rect)
self.imageView.image = croppedImage
The world of UIKit has the TOP LEFT corner as 0,0.
The 0,0 point in the AVFoundation world is the BOTTOM LEFT corner.
So you have to translate by rotating 90 degrees.
That's why your image is bonkers.
Also remember that because of the origin translation the following rules apply:
X is actually up and down
Y is actually left and right
width and height are swapped
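Taken literally, that swap looks like this (an illustration only, not from the original answer; uiRect is a hypothetical rect in UIKit coordinates):

let uiRect = CGRect(x: 25, y: 150, width: 325, height: 230)
// Portrait UI on a landscape-native sensor: x and y trade axes,
// and width and height trade places.
let sensorRect = CGRect(x: uiRect.origin.y, y: uiRect.origin.x,
                        width: uiRect.size.height, height: uiRect.size.width)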
Also be aware that the UIImageView content mode setting WILL impact how your image scales. You might want to use .scaleAspectFill and NOT .scaleAspectFit if you really want to see how your image looks in the UIView.
I used this code snippet to see what was behind the curtain:
// figure out how to cut/crop this
let realImageRect = AVMakeRect(aspectRatio: image.size, insideRect: (self.cameraPreview?.frame)!)
NSLog("real image rectangle = \(realImageRect.debugDescription)")
The 'cameraPreview' reference above is the control you're using for your AV Capture Session.
Good luck!

How to make a UIImageView have a square size based on a given bounds/container

So I am trying to create a simple UIImageView with a square frame/size (a CGSize), based on given bounds. For example, if the bounds container is the width and height of the screen, the function should resize the UIImageView to fit as a perfect square within those bounds on the screen.
Code:
let myImageView = UIImageView()
myImageView.frame.origin.y = (self.view?.frame.height)! * 0.0
myImageView.frame.origin.x = (self.view?.frame.width)! * 0.0
myImageView.backgroundColor = UIColor.blue
self.view?.insertSubview(myImageView, at: 0)
//("self.view" is the ViewController view that is the same size as the devices screen)
MakeSquare(view: myImageView, boundsOf: self.view)
func MakeSquare(view passedview: UIImageView, boundsOf container: UIView) {
    let ratio = container.frame.size.width / container.frame.size.height
    if container.frame.width > container.frame.height {
        let newHeight = container.frame.width / ratio
        passedview.frame.size = CGSize(width: container.frame.width, height: newHeight)
    } else {
        let newWidth = container.frame.height * ratio
        passedview.frame.size = CGSize(width: newWidth, height: container.frame.height)
    }
}
The problem is it's giving me back the same bounds/size as the container, unchanged (which makes sense: newHeight = width / (width / height) is just the height again, so the math cancels out).
Note: I really have no idea how to pull this off, but wanted to see if it's possible. My function comes from a question here that takes a UIImage and resizes its parent view to make the picture square.
This should do it (and will centre the image view in the containing view):
func makeSquare(view passedView: UIImageView, boundsOf container: UIView) {
    let minSize = min(container.bounds.maxX, container.bounds.maxY)
    // Setting frame (rather than bounds) both resizes the view and
    // positions it centred in the container's coordinate space.
    passedView.frame = CGRect(x: container.bounds.midX - minSize / 2.0,
                              y: container.bounds.midY - minSize / 2.0,
                              width: minSize, height: minSize)
}
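A hypothetical call site, assuming this runs in the view controller that owns myImageView; invoking it from viewDidLayoutSubviews ensures the container's bounds are final before the square is computed:

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    makeSquare(view: myImageView, boundsOf: view)
}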

Issue getting the correct image height based on different image types

In my application I've been working on tapping an image to enlarge it and display it at full size.
Ideally this is what I want to get working:
(Didn't let me add video here so here's the link to it)
https://media.giphy.com/media/F7wCO7miVMG6k/source.mp4
However, I'm only able to display images at the correct size if their height is not that tall.
If I have a square image, or an image that is too large, the image appears as follows:
Here's the code that handles how the image is displayed:
var startingFrame: CGRect?
var blackBackgroundView: UIView?
var startingImageView: UIImageView?

func performZoomInForStartingImageView(startingImageView: UIImageView) {
    print("starting image view: ", startingImageView.bounds)
    //startingImageView.sizeToFit()
    self.startingImageView = startingImageView
    self.startingImageView?.isHidden = true
    startingFrame = startingImageView.superview?.convert(startingImageView.frame, to: nil)
    print("starting frame: ", startingFrame!)
    let zoomingImageView = UIImageView(frame: startingFrame!)
    //zoomingImageView.backgroundColor = UIColor.red
    zoomingImageView.image = startingImageView.image
    print("zoom image view: ", zoomingImageView.bounds)
    zoomingImageView.isUserInteractionEnabled = true
    //zoomingImageView.autoresizingMask = [.flexibleWidth, .flexibleHeight, .flexibleBottomMargin, .flexibleRightMargin, .flexibleLeftMargin, .flexibleTopMargin]
    //zoomingImageView.contentMode = UIViewContentMode.scaleAspectFit
    zoomingImageView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(handleZoomOut)))
    if let keyWindow = UIApplication.shared.keyWindow {
        print("key window frame: ", keyWindow.frame)
        blackBackgroundView = UIView(frame: keyWindow.frame)
        blackBackgroundView?.backgroundColor = UIColor.black
        blackBackgroundView?.alpha = 0
        keyWindow.addSubview(blackBackgroundView!)
        keyWindow.addSubview(zoomingImageView)
        UIView.animate(withDuration: 0.5, delay: 0, usingSpringWithDamping: 1, initialSpringVelocity: 1, options: .curveEaseOut, animations: {
            self.blackBackgroundView?.alpha = 1
            print("keywindow frame width: ", keyWindow.frame.width)
            print("keywindow frame height: ", keyWindow.frame.height)
            print("\nstarting frame height: ", self.startingFrame!.height)
            print("starting frame width: ", self.startingFrame!.width)
            let height = (self.startingFrame!.height / self.startingFrame!.width) * keyWindow.frame.width
            print("height in animate:", height)
            //let height = self.startingFrame!.height * 2
            zoomingImageView.frame = CGRect(x: 0, y: 0, width: keyWindow.frame.width, height: height)
            print("zooming image view frame: ", zoomingImageView.frame)
            zoomingImageView.center = keyWindow.center
        }, completion: nil)
    }
}
Note: I added a bunch of print statements to see exactly what was going on with the frames.
I also noticed that my issue happens on the following line:
let height = (self.startingFrame!.height / self.startingFrame!.width) * keyWindow.frame.width
Using keyWindow.frame.width displays images with a short height correctly, but the other photos' heights are not correct.
If I change the line to:
let height = (self.startingFrame!.height / self.startingFrame!.width) * keyWindow.frame.height
it displays the square images correctly, but not the ones with a short height.
How can I detect the correct height of each image and display it correctly based on its aspect ratio?
Edit:
Square image log
Wide image log
Edit 2: New Logs
Square image log
Wide image log
You can try adding an if condition to check whether the image is landscape or portrait and set the height accordingly:
guard let image = startingImageView.image else { return }
let w = image.size.width
let h = image.size.height
print("image width: ", w)
print("image height: ", h)

var height: CGFloat = 0
if w > h { // landscape
    height = (self.startingFrame!.height / self.startingFrame!.width) * keyWindow.frame.width
} else { // portrait
    height = (self.startingFrame!.height / self.startingFrame!.width) * keyWindow.frame.height
}
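Alternatively (not part of the original answer): AVFoundation's AVMakeRect(aspectRatio:insideRect:) fits any aspect ratio into a bounding rect without branching on orientation. A minimal sketch reusing image, zoomingImageView and keyWindow from the code above:

import AVFoundation

// Largest rect with the image's aspect ratio that fits inside the window.
let fitted = AVMakeRect(aspectRatio: image.size, insideRect: keyWindow.bounds)
zoomingImageView.frame = fitted
zoomingImageView.center = keyWindow.center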

Adding AnchorPoint to SKNode breaks SKScene positioning

I am trying to have my SKCameraNode start in the bottom left corner, and have my background anchored there as well. When I set the anchor point to CGPointZero, here is what my camera shows:
EDIT:
Interestingly, if I set my anchorPoint to CGPoint(x: 0.5, y: 0.2), I get it mostly lined up. Does it have to do with the camera scale?
EDIT 2:
If I change my scene size, I can change where the background nodes show up. Usually they appear with their anchor point placed in the center of the screen, which implies the anchorPoint of the scene is in the center of the screen.
I am new to using the SKCameraNode, so I am probably setting its constraints incorrectly.
Here are my camera constraints. I don't have my player added yet, but I want to set my world up before I add the player. Again, I am trying to have everything anchored at CGPointZero.
//Camera Settings
func setCameraConstraints() {
    guard let camera = camera else { return }
    if let player = worldLayer.childNode(withName: "playerNode") as? EntityNode {
        let zeroRange = SKRange(constantValue: 0.0)
        let playerNode = player
        let playerLocationConstraint = SKConstraint.distance(zeroRange, to: playerNode)
        let scaledSize = CGSize(width: SKMViewSize!.width * camera.xScale, height: SKMViewSize!.height * camera.yScale)
        let boardContentRect = worldFrame
        let xInset = min(scaledSize.width / 2, boardContentRect.width / 2)
        let yInset = min(scaledSize.height / 2, boardContentRect.height / 2)
        let insetContentRect = boardContentRect.insetBy(dx: xInset, dy: yInset)
        let xRange = SKRange(lowerLimit: insetContentRect.minX, upperLimit: insetContentRect.maxX)
        let yRange = SKRange(lowerLimit: insetContentRect.minY, upperLimit: insetContentRect.maxY)
        let levelEdgeConstraint = SKConstraint.positionX(xRange, y: yRange)
        levelEdgeConstraint.referenceNode = worldLayer
        camera.constraints = [playerLocationConstraint, levelEdgeConstraint]
    }
}
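Worth noting (my reading of the code above, not from the original question): the if let exits silently while no "playerNode" exists, so the constraints are never installed until the function is re-run after the player is added, e.g.:

worldLayer.addChild(playerEntityNode) // playerEntityNode: a hypothetical EntityNode named "playerNode"
setCameraConstraints()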
I have been using a Udemy course to learn the SKCameraNode, and I have been trying to modify it.
Here is where I set the SKMViewSize:
convenience init(screenSize: CGSize, canvasSize: CGSize) {
self.init()
if (screenSize.height < screenSize.width) {
SKMViewSize = screenSize
}
else {
SKMViewSize = CGSize(width: screenSize.height, height: screenSize.width)
}
SKMSceneSize = canvasSize
SKMScale = (SKMViewSize!.height / SKMSceneSize!.height)
let scale:CGFloat = min( SKMSceneSize!.width/SKMViewSize!.width, SKMSceneSize!.height/SKMViewSize!.height )
SKMUIRect = CGRect(x: ((((SKMViewSize!.width * scale) - SKMSceneSize!.width) * 0.5) * -1.0), y: ((((SKMViewSize!.height * scale) - SKMSceneSize!.height) * 0.5) * -1.0), width: SKMViewSize!.width * scale, height: SKMViewSize!.height * scale)
}
How can I get both the camera to be constrained by my world, and have everything anchored to the CGPointZero?
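One thing worth checking (an assumption on my part, not a confirmed answer): scenes created from Xcode's .sks template default to an anchorPoint of (0.5, 0.5), which would explain nodes appearing relative to the screen center as described above. A minimal sketch forcing the origin back to the bottom-left corner:

override func didMove(to view: SKView) {
    anchorPoint = .zero // scene origin at the bottom-left corner
    // With an SKCameraNode, the camera's position is the point shown at the
    // center of the view, so offset it by half the visible size to keep
    // (0, 0) in the bottom-left of the screen.
    camera?.position = CGPoint(x: size.width / 2, y: size.height / 2)
}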

Swift / code refactoring / how to add parameters to a UIView

I have the following UIView extension to add a background.
extension UIView {
    func addBackground() {
        // screen width and height:
        let width = UIScreen.main.bounds.size.width
        let height = UIScreen.main.bounds.size.height
        let imageViewBackground = UIImageView(frame: CGRect(x: 0, y: 0, width: width, height: height))
        imageViewBackground.image = UIImage(named: "index_clear")
        imageViewBackground.clipsToBounds = true
        // you can change the content mode:
        imageViewBackground.contentMode = .scaleAspectFill
        self.addSubview(imageViewBackground)
        self.sendSubviewToBack(imageViewBackground)
    }
}
I call it with
self.view.addBackground()
What's the best practice for making the extension generic? I want to change the picture like this:
self.view.addBackground("index_clear")
or
self.view.addBackground("other_background_image")
Any help is much appreciated.
If you want to avoid breaking any existing implementations within your code, you can take the default parameter approach and do something like this:
extension UIView {
    // The underscore keeps the label-free call sites (addBackground("...")) working.
    func addBackground(_ imageName: String = "index_clear") {
        // screen width and height:
        let width = UIScreen.main.bounds.size.width
        let height = UIScreen.main.bounds.size.height
        let imageViewBackground = UIImageView(frame: CGRect(x: 0, y: 0, width: width, height: height))
        imageViewBackground.image = UIImage(named: imageName)
        imageViewBackground.clipsToBounds = true
        // you can change the content mode:
        imageViewBackground.contentMode = .scaleAspectFill
        self.addSubview(imageViewBackground)
        self.sendSubviewToBack(imageViewBackground)
    }
}
// You can continue to use it like so
myView.addBackground() // uses index_clear
// or
myView.addBackground("index_not_clear") // uses index_not_clear
Try this:
extension UIView {
    func addBackground(_ imgName: String) {
        // screen width and height:
        let width = UIScreen.main.bounds.size.width
        let height = UIScreen.main.bounds.size.height
        let imageViewBackground = UIImageView(frame: CGRect(x: 0, y: 0, width: width, height: height))
        imageViewBackground.image = UIImage(named: imgName)
        imageViewBackground.clipsToBounds = true
        // you can change the content mode:
        imageViewBackground.contentMode = .scaleAspectFill
        self.addSubview(imageViewBackground)
        self.sendSubviewToBack(imageViewBackground)
    }
}
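With this version the image name is required at every call site, matching the calls the question asked for:

self.view.addBackground("other_background_image")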
