I'm trying to develop an app with a custom camera where the user can add filters or stickers (like in the TextCamera app) and share to a social feed.
But I've hit my first problem.
I show the preview to the user with AVCaptureVideoPreviewLayer, take the photo, and pass it to another view controller in a UIImageView, but the second picture is bigger than the first one.
I tried to resize the picture with this function:
func resize(image: UIImage) -> UIImage {
    let size = image.size
    let newWidth = size.width
    // Subtract the height of the black view under the button
    let newHeight = size.height - blackBottomTab.bounds.size.height
    let newSize = CGSize(width: newWidth, height: newHeight)
    let rect = CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height)
    UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
    image.draw(in: rect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage ?? image
}
In this function I subtract the height of the black view (under the button) from the image height, but the result is still different (see the photos attached).
This is my preview with a black view under the button
This is the photo taken, which is larger than the preview
I also tried using Aspect Fit on the image view in the second view controller's storyboard, but the result is the same.
Where is my error? Thanks to everyone who helps!
I think that the AVCaptureVideoPreviewLayer frame is the same as the screen's frame (UIScreen.mainScreen().bounds), and you added the "Shoot Photo" black view on top of it. Instead, you should change the frame of the AVCaptureVideoPreviewLayer.
Your case (what I think):
Assuming the green rectangle is the AVCaptureVideoPreviewLayer frame and the red one is the black view's frame, the red one covers part of the green rectangle.
Make them look like this:
Hope that helped.
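As a rough sketch of that fix (assuming an AVCaptureSession named session and the question's blackBottomTab view; both names are assumptions), the preview layer's frame could be constrained to the area above the black view, so the preview shows the same region that gets captured:

```swift
import UIKit
import AVFoundation

// Sketch: size the preview layer to stop above the black bottom view,
// instead of covering the whole screen and being overlapped by it.
// `session` and `blackBottomTab` are assumed names from the question.
let previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.videoGravity = .resizeAspectFill
previewLayer.frame = CGRect(x: 0,
                            y: 0,
                            width: view.bounds.width,
                            height: view.bounds.height - blackBottomTab.bounds.height)
view.layer.insertSublayer(previewLayer, at: 0)
```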
I had to solve a similar problem. As the question notes, there does not appear to be an easy way to detect the size of the video preview.
My solution is hinted at near the end of my answer at https://stackoverflow.com/a/57996039/10449843, which explains in detail how I take the snapshot and create the combined snapshot with the sticker.
Here is an elaboration of that hint.
While I use AVCapturePhotoCaptureDelegate to take the snapshot, I also use AVCaptureVideoDataOutputSampleBufferDelegate to sample the buffer once when the preview is first shown to detect the proportions of the snapshot.
// The preview layer is displayed with aspect fit (.resizeAspect)
let previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.videoGravity = .resizeAspect
previewLayer.frame = self.cameraView.bounds
Detect the size of the preview:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // Only need to do this once
    guard self.sampleVideoSize else {
        return
    }
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    // Sample the size of the video buffer and then store it
    DispatchQueue.main.async {
        let width = CGFloat(CVPixelBufferGetWidth(pixelBuffer))
        let height = CGFloat(CVPixelBufferGetHeight(pixelBuffer))
        // Store the video size in a local variable.
        // We don't care which side is which, we just need the
        // picture ratio to decide how to align it on different
        // screens.
        self.videoSize = CGSize(width: width, height: height)
        // Now we can set up filters and stickers etc.
        // ...
        // And we don't need to sample the size again
        self.sampleVideoSize = false
    }
    return
}
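Once videoSize is stored, it can drive the layout. As a hypothetical helper (the function name and container are my own, not from the answer above), the rect the picture actually occupies under aspect-fit can be computed like this, so stickers and filters can be aligned to the visible picture:

```swift
import Foundation

// Hypothetical helper: compute the rect the video occupies inside a
// container under aspect-fit, given the sampled buffer dimensions.
func previewRect(for videoSize: CGSize, in containerBounds: CGRect) -> CGRect {
    // Scale so the whole picture fits inside the container
    let scale = min(containerBounds.width / videoSize.width,
                    containerBounds.height / videoSize.height)
    let fitted = CGSize(width: videoSize.width * scale,
                        height: videoSize.height * scale)
    // Center the fitted picture inside the container
    return CGRect(x: (containerBounds.width - fitted.width) / 2,
                  y: (containerBounds.height - fitted.height) / 2,
                  width: fitted.width,
                  height: fitted.height)
}
```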
I added a UIImageView on top of another image view, and I want to take a screenshot of these two image views so that they look like one; any other recommendation is also appreciated.
I made a button that takes a screenshot, and I have added the x-axis and y-axis:
func takeScreenshot(_ shouldSave: Bool = true) -> UIImage {
    var screenshotImage: UIImage?
    let layer = UIApplication.shared.keyWindow!.layer
    let scale = UIScreen.main.scale
    UIGraphicsBeginImageContextWithOptions(CGSize(width: 20, height: 104), false, scale)
    guard let context = UIGraphicsGetCurrentContext() else {
        return screenshotImage ?? UIImage(imageLiteralResourceName: "loading")
    }
    layer.render(in: context)
    screenshotImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    if let image = screenshotImage, shouldSave {
        UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
    }
    return screenshotImage ?? UIImage(named: "loading")!
}
I expect the screenshot to capture the image views. A screenshot is attached.
Put those two image views inside a UIView and constrain them all properly. Then take a screenshot of that UIView. Follow this for the screenshot:
How to take screenshot of a UIView in swift?
I believe you're trying to say:
You have 2 UIImageViews with different images, and you want to take a screenshot of those 2 UIImageViews to make 1 image.
If that's the case, the easiest way to solve it is to wrap those 2 UIImageViews inside a UIView, then convert that UIView into a UIImage. That gives you a screenshot of the 2 UIImageViews in one.
In case you need code to convert UIView to UIImage:
extension UIView {
    func toImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        return renderer.image { rendererContext in
            layer.render(in: rendererContext.cgContext)
        }
    }
}
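A hypothetical usage of toImage() (view names and sizes here are placeholders): wrap both image views in one container, then render the container once:

```swift
import UIKit

// Hypothetical usage: the container holds both image views, so a
// single render pass produces one combined image.
let container = UIView(frame: CGRect(x: 0, y: 0, width: 300, height: 300))
let baseImageView = UIImageView(frame: container.bounds)
let overlayImageView = UIImageView(frame: CGRect(x: 75, y: 75, width: 150, height: 150))
container.addSubview(baseImageView)
container.addSubview(overlayImageView)

let combined = container.toImage() // one UIImage containing both views
```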
I'm not sure if this is the solution to your problem.
I am looking to crop a rectangular segment of a screenshot. I currently get back no image when I perform the screenshot and the subsequent crop. Below are an image of what I would like to crop (the rectangular grid) and my code.
The solution presented in : Cropping image with Swift and put it on center position
is not applicable, as I am trying to crop a screenshot rather than an existing image. Cropping an image as presented in the above solution does not work for a rotated UIImageView.
Code:
private func takeScreenShotCrop(cropGridRect: CGRect) -> UIImage? {
    let cropGridOrigin = CGPoint(x: cropGridRect.origin.x, y: cropGridRect.origin.y)
    let cropGridSize = CGSize(width: cropGridRect.width, height: cropGridRect.height)
    let cropZoneRect = CGRect(origin: cropGridOrigin, size: cropGridSize)
    UIGraphicsBeginImageContext(cropGridSize)
    view.drawHierarchy(in: cropZoneRect, afterScreenUpdates: true)
    guard let croppedIm: UIImage = UIGraphicsGetImageFromCurrentImageContext() else { return nil }
    UIGraphicsEndImageContext()
    return croppedIm
}
Your question is a bit too general, since you gave no details about your view hierarchy. I will assume there is a view (UIView or a subclass) with a transform that includes rotation (I'll call it imageView from now on), a view that shows the grid (I'll call it panel), and that imageView and panel have some common ancestor (they don't need to share exactly the same superview; they just need to be in the same window hierarchy).
You can convert frames from one view's coordinate space to another's. It is a bit of a pain due to the transformations, but generally you could write something like the following:
func screenshotView(_ viewToScreenshot: UIView, croppedBy cropView: UIView) -> UIImage? {
    guard let originalScreenshot: UIImage = {
        UIGraphicsBeginImageContext(viewToScreenshot.bounds.size)
        viewToScreenshot.drawHierarchy(in: viewToScreenshot.bounds, afterScreenUpdates: true)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }() else { return nil }

    let panel = UIView(frame: cropView.bounds)
    let imageView = UIImageView(frame: viewToScreenshot.bounds)
    imageView.image = originalScreenshot

    // Temporarily remove the transform so the frame conversion is valid
    let viewOriginalTransform = viewToScreenshot.transform
    viewToScreenshot.transform = .identity
    imageView.frame = viewToScreenshot.convert(viewToScreenshot.bounds, to: cropView)
    viewToScreenshot.transform = viewOriginalTransform

    panel.addSubview(imageView)
    imageView.transform = viewOriginalTransform

    UIGraphicsBeginImageContext(panel.bounds.size)
    panel.drawHierarchy(in: panel.bounds, afterScreenUpdates: true)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
And the usage in your case is then
let image = screenshotView(imageView, croppedBy: panel)
So what we do is first grab a screenshot of your image view. This does not crop or rotate it; it is just an internal snapshot of the view. This step may be skipped and the image used directly IF viewToScreenshot is in fact a UIImageView and its image is presented as-is (no effects like rounded corners).
The snapshot is then placed into a newly created panel and its frame adjusted according to the frame relation between the two views. We create a new snapshot from the newly generated panel, which is the result we are looking for.
To create "the image view" optimization you can simply add a single line:
guard let originalScreenshot: UIImage = {
    if let image = (viewToScreenshot as? UIImageView)?.image { return image }
    UIGraphicsBeginImageContext(viewToScreenshot.bounds.size)
    viewToScreenshot.drawHierarchy(in: viewToScreenshot.bounds, afterScreenUpdates: true)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}() else { return nil }
ISSUE
When using various UIView extensions to take a snapshot of a UIView, the result can appear bigger (kind of zoomed in) than it should be.
Details on the Issue
I take snapshots of 2 UIViews, that we can call viewA and viewB in what follows.
viewA:
The view is added to the view hierarchy when the snapshot is created.
viewB:
The view is fully defined but has not yet been added to the view hierarchy when the snapshot is created. The snapshot is then cropped x number of times and the result is added to some UIView for display.
I have tested 3 different snippets to obtain the result I am looking for: one produces the desired snapshot images but with low-quality rendering; the other two provide better-quality rendering and the right result for viewA, but for viewB, although the snapshot appears to have the right rect (I checked the rects), the images shown appear too big (as if they were zoomed in twice).
CODE
Extension #1: Provides the right results but with a low-quality image
extension UIView {
    func takeSnapshot() -> UIImage? {
        UIGraphicsBeginImageContext(self.frame.size)
        self.layer.render(in: UIGraphicsGetCurrentContext()!)
        let snapshot = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return snapshot
    }
}
Extension #2: Provides the right image quality but cropped images of viewB are displayed twice too big
extension UIView {
    func snapshot(of rect: CGRect? = nil, afterScreenUpdates: Bool = true) -> UIImage {
        return UIGraphicsImageRenderer(bounds: rect ?? bounds).image { _ in
            drawHierarchy(in: bounds, afterScreenUpdates: afterScreenUpdates)
        }
    }
}
Extension #3: Likewise, provides the right image quality but cropped images of viewB are displayed twice too big
extension UIView {
    func takeScreenshot() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, false, UIScreen.main.scale)
        drawHierarchy(in: self.bounds, afterScreenUpdates: true)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image ?? UIImage()
    }
}
Last but not least, the (simplified) code that extracts cropped images from the snapshot:
for i in (0...numberOfCroppedImages) {
    let rect = CGRect(x: CGFloat(i) * snapshot!.size.width / CGFloat(numberOfCroppedImages),
                      y: 0,
                      width: snapshot!.size.width / CGFloat(numberOfCroppedImages),
                      height: snapshot!.size.height)
    // Defines the container view for the fragmented snapshots
    let view = UIView()
    view.frame = rect
    // Defines the layer with the fragment of the snapshot
    let layer = CALayer()
    layer.frame.size = rect.size
    let img = snapshot?.cgImage?.cropping(to: rect)
    layer.contents = img
    view.layer.addSublayer(layer)
}
Attempted actions so far
I have double-checked all the CGRects and have not found any issue with them: the view rectangles as well as the snapshot sizes seem to be as expected. Yet I keep getting too-large rendered images with extensions #2 and #3 for viewB (viewA is properly rendered), while extension #1 gives images of the right size but with too low quality to be adequately used.
Could it be an issue with drawHierarchy(in:afterScreenUpdates:)? Or alternatively the conversion to CGImage with
let img = snapshot?.cgImage?.cropping(to: rect) ?
I'd be very thankful for any pointers!
I am posting a solution I have found that works for me:
First, a snapshot extension that renders (in my case) the expected results (both quality- and size-wise), whether or not the view has been added to the hierarchy, cropped or not:
extension UIView {
    func takeSnapshot(of rect: CGRect? = nil, afterScreenUpdates: Bool = true) -> UIImage {
        return UIGraphicsImageRenderer(bounds: rect ?? bounds).image { context in
            self.layer.render(in: context.cgContext)
        }
    }
}
Then the updated version of the code that takes a snapshot of a rect in a view:
let view = UIView()
view.frame = rect
let img = self.takeSnapshot(of: rect, afterScreenUpdates: true).cgImage
view.layer.contents = img
The difference here is taking a snapshot of a rectangle within the view instead of cropping a snapshot of the whole view.
I hope this helps.
I'm trying to make a simple app which will show in the top half of the iphone screen a raw preview of what the back camera sees, while in the bottom half the same preview but with various filters applied.
I first got the raw preview part working, not too hard thanks to several SO and blog posts. The UIImageView I'm displaying to takes up the entire screen for that part.
To get a half-screen view I just divide the image view's height by two, then set its contentMode to show everything while keeping the same aspect ratio:
imageView = UIImageView(frame: CGRectMake(0,0, self.view.frame.size.width, self.view.frame.size.height/2))
imageView.contentMode = UIViewContentMode.ScaleAspectFit
The height reduction works, but the image in the view is compressed vertically (e.g. a coin viewed straight-on looks like a horizontal oval). I don't think it's a coincidence that the appearance of the preview looks like the contentMode default ScaleToFill, but nothing I've tried changes the mode.
The complete code is below - the project has one scene with one view controller class; everything's done programmatically.
Thanks!
import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate
{
    var imageView : UIImageView!

    override func viewDidLoad()
    {
        super.viewDidLoad()

        imageView = UIImageView(frame: CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height/2))
        imageView.contentMode = UIViewContentMode.ScaleAspectFit
        view.addSubview(imageView)

        let captureSession = AVCaptureSession()
        captureSession.sessionPreset = AVCaptureSessionPresetHigh

        let backCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
        do
        {
            let input = try AVCaptureDeviceInput(device: backCamera)
            captureSession.addInput(input)
        }
        catch
        {
            print("Camera not available")
            return
        }

        // Unused but required for AVCaptureVideoDataOutputSampleBufferDelegate:captureOutput() events to be fired
        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        view.layer.addSublayer(previewLayer)

        let videoOutput = AVCaptureVideoDataOutput()
        videoOutput.setSampleBufferDelegate(self, queue: dispatch_queue_create("SampleBufferDelegate", DISPATCH_QUEUE_SERIAL))
        if captureSession.canAddOutput(videoOutput)
        {
            captureSession.addOutput(videoOutput)
        }
        videoOutput.connectionWithMediaType(AVMediaTypeVideo).videoOrientation = AVCaptureVideoOrientation.Portrait

        captureSession.startRunning()
    }

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!)
    {
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        let cameraImage = CIImage(CVPixelBuffer: pixelBuffer!)

        dispatch_async(dispatch_get_main_queue())
        {
            self.imageView.image = UIImage(CIImage: cameraImage)
        }
    }
}
I suspect this line is a problem, because it is in the viewDidLoad() method.
I'm not sure it is related to your issue, but I feel this is an important note for you.
imageView = UIImageView(frame: CGRectMake(0,0, self.view.frame.size.width, self.view.frame.size.height/2))
You cannot get the view's correct size in the viewDidLoad() method.
Furthermore, since iOS 10, I have found that controls are initially sized (0, 0, 1000, 1000) from the storyboard before they are correctly laid out by the iOS layout engine.
Also, the size you get in viewDidLoad() can be the size of the view controller in the storyboard. So if you laid out controls for a 4-inch screen, it will return the size of a 4-inch screen in viewDidLoad() even if you run the app on an iPhone 6 or a bigger screen.
Also, please set the imageView.layer.masksToBounds property to true to prevent the image from drawing outside its bounds.
One more thing: place the code that lays out your image view where it runs whenever the view's bounds change (like on screen rotation).
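A minimal sketch of that advice (the class and property names here are mine): do the frame-dependent work in viewDidLayoutSubviews(), where the final bounds are known and which reruns on rotation:

```swift
import UIKit

// Sketch: create the image view once in viewDidLoad(), but set its
// frame in viewDidLayoutSubviews(), where view.bounds is final.
class HalfScreenPreviewController: UIViewController {
    var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        imageView = UIImageView()
        imageView.contentMode = .scaleAspectFit
        imageView.layer.masksToBounds = true
        view.addSubview(imageView)
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        // Runs on every layout pass, including rotation
        imageView.frame = CGRect(x: 0, y: 0,
                                 width: view.bounds.width,
                                 height: view.bounds.height / 2)
    }
}
```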
I'm trying to recreate something like this, where I pull the user's image and then overlay a "Change" label on top, but I can't seem to figure out how.
(I also want some sort of action associated with this label (e.g., a segue to a new page).)
My issue: I cannot figure out how to overlay the text label while still keeping the image round and giving the bottom part of the image that opaque label.
Code + Details: I have a custom UIView which contains an image view. When I want to add an image I call the following code:
self.userProfilePic.addImage((userImg).roundImage(), factorin: 0.95)
Within the custom view, this is how the image is added:
func addImage(imagein: UIImage, factorin: Float)
{
    let img = imagein
    imageScalingFactor = factorin
    if imageView == nil
    {
        imageView = UIImageView(image: img)
        imageView?.contentMode = .ScaleAspectFit
        self.addSubview(imageView!)
        self.sendSubviewToBack(imageView!)
        imageView!.image = img
    }
}
This is my code for the image rounding (which I do not want to touch):
extension UIImage
{
    func roundImage() -> UIImage
    {
        let newImage = self.copy() as! UIImage
        let cornerRadius = self.size.height/2
        UIGraphicsBeginImageContextWithOptions(self.size, false, 1.0)
        let bounds = CGRect(origin: CGPointZero, size: self.size)
        UIBezierPath(roundedRect: bounds, cornerRadius: cornerRadius).addClip()
        newImage.drawInRect(bounds)
        let finalImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return finalImage
    }
}
Any help would be appreciated!
Try something like this view hierarchy
UIView
!
!--UIImageView
!--UIButton
So you take a UIView, and inside it first add the UIImageView, then on the lower portion of the image view add a UIButton. Now set clipsToBounds to true. Then set the desired corner radius on the parent view's layer as follows:
parentView.layer.cornerRadius = parentView.frame.size.width / 2
Remember you have to make the parent view square (the height and width should be the same) to get the circular masking. Adjust the button position a bit and you will definitely get the result.
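A sketch of that hierarchy in code (the 120-point size and the button styling are placeholder assumptions):

```swift
import UIKit

// Sketch: a square parent clipped to a circle, an image view filling
// it, and a "Change" button pinned over the bottom quarter.
let side: CGFloat = 120
let parentView = UIView(frame: CGRect(x: 0, y: 0, width: side, height: side))
parentView.clipsToBounds = true
parentView.layer.cornerRadius = side / 2 // half the side gives a circle

let imageView = UIImageView(frame: parentView.bounds)
imageView.contentMode = .scaleAspectFill
parentView.addSubview(imageView)

let changeButton = UIButton(type: .system)
changeButton.setTitle("Change", for: .normal)
changeButton.setTitleColor(.white, for: .normal)
changeButton.backgroundColor = UIColor.black.withAlphaComponent(0.5)
changeButton.frame = CGRect(x: 0, y: side * 0.75, width: side, height: side * 0.25)
parentView.addSubview(changeButton)
```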