Aspect fill AVCaptureVideoDataOutput when drawing in GLKView with CIContext - ios

I'm drawing a camera output from AVCaptureVideoDataOutput in a GLKView, but the camera is 4:3, which doesn't match the aspect ratio of the GLKView (which is full screen). I'm trying to get an aspect fill, but the camera output just seems to get squashed so that it doesn't extend past the edges of the view's frame. How can I get a full-screen camera view using GLKView without distorting the aspect ratio?
Initialising the view:
videoDisplayView = GLKView(frame: superview.bounds, context: EAGLContext(api: .openGLES2)!)
videoDisplayView.transform = CGAffineTransform(rotationAngle: CGFloat(M_PI_2))
videoDisplayView.frame = superview.bounds
superview.addSubview(videoDisplayView)
superview.sendSubview(toBack: videoDisplayView)
renderContext = CIContext(eaglContext: videoDisplayView.context)
sessionQueue = DispatchQueue(label: "AVSessionQueue", attributes: [])
videoDisplayView.bindDrawable()
videoDisplayViewBounds = CGRect(x: 0, y: 0, width: videoDisplayView.drawableWidth, height: videoDisplayView.drawableHeight)
Initialising the video output:
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self, queue: sessionQueue)
if captureSession.canAddOutput(videoOutput) {
    captureSession.addOutput(videoOutput)
}
Rendering the output:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    // Need to shimmy this through type-hell
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    // Force the type change - pass through opaque buffer
    let opaqueBuffer = Unmanaged<CVImageBuffer>.passUnretained(imageBuffer!).toOpaque()
    let pixelBuffer = Unmanaged<CVPixelBuffer>.fromOpaque(opaqueBuffer).takeUnretainedValue()
    let sourceImage = CIImage(cvPixelBuffer: pixelBuffer, options: nil)
    // Do some detection on the image
    let detectionResult = applyFilter?(sourceImage)
    var outputImage = sourceImage
    if detectionResult != nil {
        outputImage = detectionResult!
    }
    if videoDisplayView.context != EAGLContext.current() {
        EAGLContext.setCurrent(videoDisplayView.context)
    }
    videoDisplayView.bindDrawable()
    // clear eagl view to grey
    glClearColor(0.5, 0.5, 0.5, 1.0)
    glClear(0x00004000) // GL_COLOR_BUFFER_BIT
    // set the blend mode to "source over" so that CI will use that
    glEnable(0x0BE2) // GL_BLEND
    glBlendFunc(1, 0x0303) // GL_ONE, GL_ONE_MINUS_SRC_ALPHA
    renderContext.draw(outputImage, in: videoDisplayViewBounds, from: outputImage.extent)
    videoDisplayView.display()
}
Things I've tried:
// Results in 4:3 stream leaving a gap at the bottom
renderContext.draw(outputImage, in: outputImage.extent, from: outputImage.extent)
// Results in same 4:3 stream
let rect = CGRect(x: 0, y: 0, width: outputImage.extent.width, height: videoDisplayViewBounds.height)
renderContext.draw(outputImage, in: rect, from: outputImage.extent)

I actually ended up having to crop my output to the size of the view I was displaying the output in.
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    // Need to shimmy this through type-hell
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    // Force the type change - pass through opaque buffer
    let opaqueBuffer = Unmanaged<CVImageBuffer>.passUnretained(imageBuffer!).toOpaque()
    let pixelBuffer = Unmanaged<CVPixelBuffer>.fromOpaque(opaqueBuffer).takeUnretainedValue()
    let sourceImage = CIImage(cvPixelBuffer: pixelBuffer, options: nil)
    // Make a rect to crop to that's the size of the view we want to display the image in
    let cropRect = AVMakeRect(aspectRatio: CGSize(width: videoDisplayViewBounds.width, height: videoDisplayViewBounds.height), insideRect: sourceImage.extent)
    // Crop
    let croppedImage = sourceImage.cropping(to: cropRect)
    // Cropping changes the origin coordinates of the cropped image, so move it back to 0
    let translatedImage = croppedImage.applying(CGAffineTransform(translationX: 0, y: -croppedImage.extent.origin.y))
    // Do some detection on the image
    let detectionResult = applyFilter?(translatedImage)
    var outputImage = translatedImage
    if detectionResult != nil {
        outputImage = detectionResult!
    }
    if videoDisplayView.context != EAGLContext.current() {
        EAGLContext.setCurrent(videoDisplayView.context)
    }
    videoDisplayView.bindDrawable()
    // clear eagl view to grey
    glClearColor(0.5, 0.5, 0.5, 1.0)
    glClear(0x00004000)
    // set the blend mode to "source over" so that CI will use that
    glEnable(0x0BE2)
    glBlendFunc(1, 0x0303)
    renderContext.draw(outputImage, in: videoDisplayViewBounds, from: outputImage.extent)
    videoDisplayView.display()
}
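As an aside, the same aspect-fill effect can probably be achieved without building intermediate cropped and translated CIImages, by computing the aspect-fill source rectangle with AVMakeRect and passing it as the from: rectangle of draw(_:in:from:). This is only an untested sketch of that variant, assuming the same renderContext and videoDisplayViewBounds as above:
// Untested variant: let draw(_:in:from:) do the cropping by picking the source rect.
// AVMakeRect fits a rect with the view's aspect ratio inside the full camera extent,
// and drawing from that rect into videoDisplayViewBounds scales it to fill the drawable.
let sourceImage = CIImage(cvPixelBuffer: pixelBuffer, options: nil)
let fillRect = AVMakeRect(aspectRatio: CGSize(width: videoDisplayViewBounds.width,
                                              height: videoDisplayViewBounds.height),
                          insideRect: sourceImage.extent)
renderContext.draw(sourceImage, in: videoDisplayViewBounds, from: fillRect)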

Related

Vision framework gives incorrect coordinates for rectangles in frames from captured video output

I need to recognize rectangles in frames from a captured video. I use the following method to display a rectangle on top of an observed image.
func displayRect(for observation: VNRectangleObservation) {
    DispatchQueue.main.async { [weak self] in
        guard let size = self?.imageView.frame.size else { return }
        guard let origin = self?.imageView.frame.origin else { return }
        let transform = CGAffineTransform(scaleX: size.width, y: size.height)
        let rect = observation.boundingBox.applying(transform)
            .applying(CGAffineTransform(scaleX: 1.0, y: -1.0))
            .applying(CGAffineTransform(translationX: 0.0, y: size.height))
            .applying(CGAffineTransform(translationX: -origin.x, y: -origin.y))
        let path = UIBezierPath(rect: rect)
        let layer = CAShapeLayer()
        layer.path = path.cgPath
        layer.fillRule = kCAFillRuleEvenOdd
        layer.fillColor = UIColor.red.withAlphaComponent(0.2).cgColor
        self?.overlay.sublayers = nil
        self?.overlay.addSublayer(layer)
    }
}
This works just fine with images taken from the camera, but for frames from captured video the rectangle is off. In fact, it looks like it (and thus the entire coordinate system for the image) is off by 90 degrees.
Am I missing something about video frames that could cause the observation's boundingBox property to be in an entirely different coordinate system?
Below is my implementation of the captureOutput delegate method.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let buffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    // Also tried converting to CGImage, creating handler from that, but made no difference
    let handler = VNImageRequestHandler(cvPixelBuffer: buffer, options: [:])
    let request = VNDetectRectanglesRequest()
    request.minimumAspectRatio = VNAspectRatio(0.2)
    request.maximumAspectRatio = VNAspectRatio(1.0)
    request.minimumSize = Float(0.3)
    try? handler.perform([request])
    // Note: Only ever captures one rectangle, so calling `first` not the issue.
    guard let observations = request.results as? [VNRectangleObservation],
          let observation = observations.first else {
        return removeShapeLayer()
    }
    displayRect(for: observation, buffer: buffer)
}
The issue is that you're not passing the orientation of the buffer to the VNImageRequestHandler, so it is treating the video as landscape. Then, when it returns that rect, you place it on top of the video, which is being displayed in portrait.
You'll either need to pass the orientation to the VNImageRequestHandler, or modify (rotate) the rectangle returned to take that into account.
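For reference, VNImageRequestHandler has an initializer that takes the buffer's orientation directly. A minimal sketch (assuming a portrait device with the back camera, where the buffer is typically .right; adjust to match your connection's orientation):
let handler = VNImageRequestHandler(cvPixelBuffer: buffer,
                                    orientation: .right, // assumption: portrait device, back camera
                                    options: [:])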

CMSampleBuffer rotate from portrait to landscape in Swift 3

I'm handling ReplayKit2 in iOS. For various reasons I need to rotate the CMSampleBuffer from portrait to landscape, but the result is not correct. What am I missing?
(Screenshots omitted: the original sample buffer vs. the actual output buffer.)
In the code below, width and height are the dimensions of the sample buffer.
func rotation(sampleBuffer: CMSampleBuffer, width: Int, height: Int) -> CMSampleBuffer {
    //create pixelbuffer from the delegate method samplebuffer
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    //create CI image from the buffer
    let image = CIImage(cvImageBuffer: pixelBuffer)
    let extent = CGRect(x: 0, y: 0, width: width, height: height)
    var tx = CGAffineTransform(translationX: extent.midX, y: extent.midY)
    tx = tx.rotated(by: CGFloat(Double.pi / 2))
    tx = tx.translatedBy(x: -extent.midX, y: -extent.midY)
    let transformImage = CIFilter(
        name: "CIAffineTransform",
        withInputParameters: [
            kCIInputImageKey: image,
            kCIInputTransformKey: NSValue.init(cgAffineTransform: tx)])!.outputImage!
    //create empty pixelbuffer
    var newPixelBuffer: CVPixelBuffer? = nil
    CVPixelBufferCreate(kCFAllocatorDefault,
                        width,
                        height,
                        kCVPixelFormatType_32BGRA,
                        nil,
                        &newPixelBuffer)
    //render the context to the new pixelbuffer, context is a global
    //CIContext variable. creating a new one each frame is too CPU intensive
    self.ciContext.render(transformImage, to: newPixelBuffer!)
    //finally, write this to the pixelbufferadaptor
    CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    var videoInfo: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, newPixelBuffer!, &videoInfo)
    var sampleTimingInfo = CMSampleTimingInfo(duration: CMSampleBufferGetDuration(sampleBuffer), presentationTimeStamp: CMSampleBufferGetPresentationTimeStamp(sampleBuffer), decodeTimeStamp: CMSampleBufferGetDecodeTimeStamp(sampleBuffer))
    var newSampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, newPixelBuffer!, true, nil, nil, videoInfo!, &sampleTimingInfo, &newSampleBuffer)
    return newSampleBuffer!
}
Just found a very sweet method in iOS 11!
/* Returns a new image representing the original image transformed for the given CGImagePropertyOrientation */
@available(iOS 11.0, *)
open func oriented(_ orientation: CGImagePropertyOrientation) -> CIImage
Maybe it will be useful:
func rotate(_ sampleBuffer: CMSampleBuffer) -> CVPixelBuffer? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return nil
    }
    // Width and height are swapped because the image is rotated by 90 degrees
    var newPixelBuffer: CVPixelBuffer?
    let error = CVPixelBufferCreate(kCFAllocatorDefault,
                                    CVPixelBufferGetHeight(pixelBuffer),
                                    CVPixelBufferGetWidth(pixelBuffer),
                                    kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                    nil,
                                    &newPixelBuffer)
    guard error == kCVReturnSuccess else {
        return nil
    }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer).oriented(.right)
    let context = CIContext(options: nil)
    context.render(ciImage, to: newPixelBuffer!)
    return newPixelBuffer
}
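Wired into the capture callback, usage could look roughly like the sketch below; writeRotatedBuffer(_:at:) is a hypothetical sink (for example, an AVAssetWriterInputPixelBufferAdaptor), not part of the original answer:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let rotatedBuffer = rotate(sampleBuffer) else { return }
    // Hypothetical sink for the rotated frames (e.g. a pixel buffer adaptor)
    writeRotatedBuffer(rotatedBuffer, at: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
}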

CAMetalLayer drawable texture is weird on some devices

I am using the code below to get and append a pixel buffer from a Metal layer. On some non-specific devices the output comes out garbled (screenshot omitted) and the drawable texture's pixelFormat is .invalid.
static func make(with currentDrawable: CAMetalDrawable, usingBuffer pool: CVPixelBufferPool) -> (CVPixelBuffer?, UIImage) {
    let destinationTexture = currentDrawable.texture
    var pixelBuffer: CVPixelBuffer?
    _ = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer)
    if let pixelBuffer = pixelBuffer {
        CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        let region = MTLRegionMake2D(0, 0, Int(currentDrawable.layer.drawableSize.width), Int(currentDrawable.layer.drawableSize.height))
        let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
        let tempBuffer = CVPixelBufferGetBaseAddress(pixelBuffer)
        destinationTexture.getBytes(tempBuffer!, bytesPerRow: Int(bytesPerRow), from: region, mipmapLevel: 0)
        let image = imageFromCVPixelBuffer(buffer: pixelBuffer)
        CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        return (pixelBuffer, image)
    }
    return (nil, UIImage())
}

static func imageFromCVPixelBuffer(buffer: CVPixelBuffer) -> UIImage {
    let ciImage = CIImage(cvPixelBuffer: buffer)
    let cgImage = context.createCGImage(ciImage, from: CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(buffer), height: CVPixelBufferGetHeight(buffer)))
    let uiImage = UIImage(cgImage: cgImage!)
    return uiImage
}
Does anybody have any idea why this happens and how to prevent it?
More feedback from people experiencing this can be found here: https://github.com/svtek/SceneKitVideoRecorder/issues/3
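One thing worth checking (an assumption on my part, not something confirmed in the linked issue): reading a drawable's texture with getBytes only works if the texture is CPU-accessible, and a CAMetalLayer's drawable textures are not unless framebufferOnly is disabled on the layer:
// Assumption: the layer must allow CPU reads before texture.getBytes(...) can return valid data
metalLayer.framebufferOnly = false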

Rotate CMSampleBuffer by arbitrary angle and append to AVAssetWriterInput in swift 3

I convert the sample buffer to a CGContext. Then I apply a transformation to the context and create a CIImage from that, which in turn gets displayed in a UIImageView.
At the same time I want to append this to the AVAssetWriterInput to create a movie of these transformations.
So far the transformations I apply to the context have no effect whatsoever. When I display the so-called transformed image in the image view, it looks exactly the same.
UPDATE:
I managed to record the sample buffer to a video file (it's still stretched because of the wrong orientation though). I've used this code as a base
http://geek-is-stupid.github.io/blog/2017/04/13/how-to-record-detect-face-overlay-video-at-real-time-using-swift/
But I'm still struggling with applying the rotation to the CGContext; basically everything I do to the context is completely ignored.
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let writable = canWrite()
    if writable, sessionAtSourceTime == nil {
        print("starting session")
        sessionAtSourceTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        assetWriter!.startSession(atSourceTime: sessionAtSourceTime!)
    }
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    if writable {
        autoreleasepool {
            CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
            var renderedOutputPixelBuffer: CVPixelBuffer? = nil
            let options = [
                kCVPixelBufferCGImageCompatibilityKey as String: true,
                kCVPixelBufferCGBitmapContextCompatibilityKey as String: true] as CFDictionary
            let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                             CVPixelBufferGetWidth(pixelBuffer),
                                             CVPixelBufferGetHeight(pixelBuffer),
                                             kCVPixelFormatType_32BGRA, options,
                                             &renderedOutputPixelBuffer)
            guard status == kCVReturnSuccess else { return }
            CVPixelBufferLockBaseAddress(renderedOutputPixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
            let renderedOutputPixelBufferBaseAddress = CVPixelBufferGetBaseAddress(renderedOutputPixelBuffer!)
            memcpy(renderedOutputPixelBufferBaseAddress, CVPixelBufferGetBaseAddress(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer) * CVPixelBufferGetBytesPerRow(pixelBuffer))
            let context = CGContext(data: renderedOutputPixelBufferBaseAddress,
                                    width: CVPixelBufferGetWidth(renderedOutputPixelBuffer!),
                                    height: CVPixelBufferGetHeight(renderedOutputPixelBuffer!),
                                    bitsPerComponent: 8,
                                    bytesPerRow: CVPixelBufferGetBytesPerRow(renderedOutputPixelBuffer!),
                                    space: CGColorSpaceCreateDeviceRGB(),
                                    bitmapInfo: bitmapInfo!)
            let radians: Float = atan2f(Float(boxView!.transform.b), Float(boxView!.transform.a))
            context!.translateBy(x: self.view.frame.size.width / 2, y: self.view.frame.size.height / 2)
            context!.rotate(by: CGFloat(radians))
            let image: CGImage = context!.makeImage()!
            self.imageView!.image = UIImage(cgImage: image)
            if (bufferAdaptor?.assetWriterInput.isReadyForMoreMediaData)!, canWrite() {
                bufferAdaptor?.append(renderedOutputPixelBuffer!, withPresentationTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            }
            CVPixelBufferUnlockBaseAddress(renderedOutputPixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
            CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        }
    }
}
Found the solution. Below is the important part of the code.
//create pixelbuffer from the delegate method samplebuffer
let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
//create CI image from the buffer
let ci = CIImage.init(cvPixelBuffer: pixelBuffer, options: options)
//create filter to rotate
let filter = CIFilter.init(name: "CIAffineTransform")
//create transform, move rotation point to center
var transform = CGAffineTransform(translationX: self.view.frame.midX, y: self.view.frame.midY)
//rotate it
transform = transform.rotated(by: CGFloat(radians))
// move the transform point back to the original
transform = transform.translatedBy(x: -self.view.frame.midX, y: -self.view.frame.midY)
filter!.setValue(transform, forKey: kCIInputTransformKey)
filter!.setValue(ci, forKey: kCIInputImageKey)
//take the output from the filter
let output = filter?.outputImage
//create empty pixelbuffer
var newPixelBuffer: CVPixelBuffer? = nil
CVPixelBufferCreate(kCFAllocatorDefault,
                    Int(self.view.frame.width),
                    Int(self.view.frame.height),
                    kCVPixelFormatType_32BGRA,
                    nil,
                    &newPixelBuffer)
//render the context to the new pixelbuffer, context is a global
//CIContext variable. creating a new one each frame is too CPU intensive
context.render(output!, to: newPixelBuffer!)
//finally, write this to the pixelbufferadaptor
if (bufferAdaptor?.assetWriterInput.isReadyForMoreMediaData)!, canWrite() {
    bufferAdaptor?.append(newPixelBuffer!,
                          withPresentationTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
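For context, bufferAdaptor in the snippet above is an AVAssetWriterInputPixelBufferAdaptor. A minimal sketch of how it might be set up (the writer input settings here are assumptions, not taken from the original post):
// Assumed output settings; match these to your own recording dimensions and codec
let videoInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: [
    AVVideoCodecKey: AVVideoCodecH264,
    AVVideoWidthKey: Int(view.frame.width),
    AVVideoHeightKey: Int(view.frame.height)
])
videoInput.expectsMediaDataInRealTime = true
let bufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(
    assetWriterInput: videoInput,
    sourcePixelBufferAttributes: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
assetWriter.add(videoInput)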

AVFoundation camera image quality degraded upon processing

I made an AVFoundation camera to crop square images based on @fsaint's answer: Cropping AVCaptureVideoPreviewLayer output to a square. The sizing of the photo is great; that works perfectly. However, the image quality is noticeably degraded: the preview layer shows good resolution, while the captured image comes out degraded. It definitely has to do with what happens in processImage:, as the image resolution is fine without it, just not the right aspect ratio. The documentation on image processing is pretty bare; any insights are greatly appreciated!
Setting up camera:
func setUpCamera() {
    captureSession = AVCaptureSession()
    captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
    let backCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    if ((backCamera?.hasFlash) != nil) {
        do {
            try backCamera.lockForConfiguration()
            backCamera.flashMode = AVCaptureFlashMode.Auto
            backCamera.unlockForConfiguration()
        } catch {
            // error handling
        }
    }
    var error: NSError?
    var input: AVCaptureDeviceInput!
    do {
        input = try AVCaptureDeviceInput(device: backCamera)
    } catch let error1 as NSError {
        error = error1
        input = nil
    }
    if error == nil && captureSession!.canAddInput(input) {
        captureSession!.addInput(input)
        stillImageOutput = AVCaptureStillImageOutput()
        stillImageOutput!.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
        if captureSession!.canAddOutput(stillImageOutput) {
            captureSession!.sessionPreset = AVCaptureSessionPresetHigh
            captureSession!.addOutput(stillImageOutput)
            previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            previewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
            previewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.Portrait
            previewVideoView.layer.addSublayer(previewLayer!)
            captureSession!.startRunning()
        }
    }
}
Snapping photo:
@IBAction func onSnapPhotoButtonPressed(sender: UIButton) {
    if let videoConnection = self.stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo) {
        videoConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
        self.stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: { (sampleBuffer, error) in
            if (sampleBuffer != nil) {
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                let dataProvider = CGDataProviderCreateWithCFData(imageData)
                let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)
                let image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Right)
                self.processImage(image)
                self.clearPhotoButton.hidden = false
                self.nextButton.hidden = false
                self.view.bringSubviewToFront(self.imageView)
            }
        })
    }
}
Process image to square:
func processImage(image: UIImage) {
    let deviceScreen = previewLayer?.bounds
    let width: CGFloat = (deviceScreen?.size.width)!
    UIGraphicsBeginImageContext(CGSizeMake(width, width))
    let aspectRatio: CGFloat = image.size.height * width / image.size.width
    image.drawInRect(CGRectMake(0, -(aspectRatio - width) / 2.0, width, aspectRatio))
    let smallImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    let cropRect = CGRectMake(0, 0, width, width)
    let imageRef: CGImageRef = CGImageCreateWithImageInRect(smallImage.CGImage, cropRect)!
    imageView.image = UIImage(CGImage: imageRef)
}
There are a few things wrong with your processImage() function.
First of all, you're creating a new graphics context with UIGraphicsBeginImageContext().
According to the Apple docs on this function:
This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to NO and a scale factor of 1.0.
Because the scale factor is 1.0, it is going to look pixelated when displayed on-screen, as the screen's resolution is (most likely) higher.
You want to be using the UIGraphicsBeginImageContextWithOptions() function, and pass 0.0 for the scale factor. According to the docs on this function, for the scale argument:
If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
For example:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, width), false, 0.0)
Your output should now look nice and crisp, as it is being rendered with the same scale as the screen.
Second of all, there's a problem with the width you're passing in.
let width:CGFloat = (deviceScreen?.size.width)!
UIGraphicsBeginImageContext(CGSizeMake(width, width))
You shouldn't be passing in the width of the screen here, it should be the width of the image. For example:
let width:CGFloat = image.size.width
You will then have to change the aspectRatio variable to take the screen width, such as:
let aspectRatio:CGFloat = image.size.height * (deviceScreen?.size.width)! / image.size.width
Third of all, you can simplify your cropping function significantly.
func processImage(image: UIImage) {
    let screenWidth = UIScreen.mainScreen().bounds.size.width
    let width: CGFloat = image.size.width
    let height: CGFloat = image.size.height
    let aspectRatio = screenWidth / width
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(screenWidth, screenWidth), false, 0.0) // create context
    let ctx = UIGraphicsGetCurrentContext()
    CGContextTranslateCTM(ctx, 0, (screenWidth - (aspectRatio * height)) * 0.5) // shift context up, to create a squared 'frame' for your image to be drawn in
    image.drawInRect(CGRect(origin: CGPointZero, size: CGSize(width: screenWidth, height: height * aspectRatio))) // draw image
    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    imageView.image = img
}
There's no need to draw the image twice; you just need to translate the context up and then draw the image once.
