How do I capture QR code data in a specific area of AVCaptureVideoPreviewLayer using Swift?

I am creating an iPad app, and one of its features is scanning QR codes. I have the QR scanning part working, but the issue is that the iPad screen is very large and I will be scanning small QR codes off of a sheet of paper with many QR codes visible at once. I want to designate a smaller area of the display as the only area that can actually capture a QR code, so it is easier for the user to scan the specific QR code they want.
I have made a temporary UIView with red borders, centered on the page, as an example of where I will want the user to scan QR codes. It looks like this:
I have looked all over for an answer to how I can target a specific region of the AVCaptureVideoPreviewLayer to collect the QR code data, and what I have found are suggestions to use rectOfInterest with AVCaptureMetadataOutput. I have attempted to do that, but when I set rectOfInterest to the same coordinates and size as my UIView (which shows up correctly), I can no longer scan or recognize any QR codes. Can someone please tell me why the scannable area does not match the location of the visible UIView, and how I can get the rectOfInterest to be within the red borders I have added to the screen?
Here is the code for the scan function I am currently using:
func startScan() {
    // Get an instance of the AVCaptureDevice class to initialize a device object and provide the video
    // as the media type parameter.
    let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    // Get an instance of the AVCaptureDeviceInput class using the previous device object.
    var error: NSError?
    let input: AnyObject! = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &error)
    if (error != nil) {
        // If any error occurs, simply log its description and don't continue.
        println("\(error?.localizedDescription)")
        return
    }
    // Initialize the captureSession object.
    captureSession = AVCaptureSession()
    // Set the input device on the capture session.
    captureSession?.addInput(input as! AVCaptureInput)
    // Initialize an AVCaptureMetadataOutput object and set it as the output device for the capture session.
    let captureMetadataOutput = AVCaptureMetadataOutput()
    captureSession?.addOutput(captureMetadataOutput)
    // Calculate a centered square rectangle with a red border.
    let size = 300
    let screenWidth = self.view.frame.size.width
    let xPos = (CGFloat(screenWidth) / CGFloat(2)) - (CGFloat(size) / CGFloat(2))
    let scanRect = CGRect(x: Int(xPos), y: 150, width: size, height: size)
    // Create a UIView that will serve as a red square indicating where to place the QR code for scanning.
    scanAreaView = UIView()
    scanAreaView?.layer.borderColor = UIColor.redColor().CGColor
    scanAreaView?.layer.borderWidth = 4
    scanAreaView?.frame = scanRect
    // Set the delegate and use the default dispatch queue to execute the callback.
    captureMetadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
    captureMetadataOutput.metadataObjectTypes = [AVMetadataObjectTypeQRCode]
    captureMetadataOutput.rectOfInterest = scanRect
    // Initialize the video preview layer and add it as a sublayer to the view's layer.
    videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    videoPreviewLayer?.frame = view.layer.bounds
    view.layer.addSublayer(videoPreviewLayer)
    // Start video capture.
    captureSession?.startRunning()
    // Initialize the QR code frame to highlight the QR code.
    qrCodeFrameView = UIView()
    qrCodeFrameView?.layer.borderColor = UIColor.greenColor().CGColor
    qrCodeFrameView?.layer.borderWidth = 2
    view.addSubview(qrCodeFrameView!)
    view.bringSubviewToFront(qrCodeFrameView!)
    // Add a button that will be used to close out of the scan view.
    videoBtn.setTitle("Close", forState: .Normal)
    videoBtn.setTitleColor(UIColor.blackColor(), forState: .Normal)
    videoBtn.backgroundColor = UIColor.grayColor()
    videoBtn.layer.cornerRadius = 5.0
    videoBtn.frame = CGRectMake(10, 30, 70, 45)
    videoBtn.addTarget(self, action: "pressClose:", forControlEvents: .TouchUpInside)
    view.addSubview(videoBtn)
    view.addSubview(scanAreaView!)
}
Update
The reason I do not think this is a duplicate of the referenced post is that the other post is in Objective-C and my code is in Swift. For those of us who are new to iOS, it is not as easy to translate between the two. Also, the referenced post's answer does not show the actual code change that resolved the issue. It left a good explanation about having to use the metadataOutputRectOfInterestForRect method to convert the rectangle coordinates, but I still cannot seem to get this method to work, as it is unclear to me how it should work without an example.
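For reference, here is a hedged sketch of how metadataOutputRectOfInterestForRect is meant to be wired up (the helper name is made up, the API spelling shown is the iOS 11+ rename, and scanRect stands in for the red square's frame in the view's coordinates):
import AVFoundation
import UIKit

// A minimal sketch of the intended wiring, not the exact fix from the thread.
func applyRectOfInterest(session: AVCaptureSession,
                         metadataOutput: AVCaptureMetadataOutput,
                         previewLayer: AVCaptureVideoPreviewLayer,
                         scanRect: CGRect) {
    // The conversion only yields sensible values once the session is running
    // and the preview layer knows its capture geometry.
    session.startRunning()
    // Older name: metadataOutputRectOfInterestForRect(_:);
    // iOS 11+ / Swift 4 name: metadataOutputRectConverted(fromLayerRect:).
    metadataOutput.rectOfInterest =
        previewLayer.metadataOutputRectConverted(fromLayerRect: scanRect)
}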

After fighting with the metadataOutputRectOfInterestForRect method all morning, I got tired of it and decided to write my own conversion.
func convertRectOfInterest(rect: CGRect) -> CGRect {
    let screenRect = self.view.frame
    let screenWidth = screenRect.width
    let screenHeight = screenRect.height
    let newX = 1 / (screenWidth / rect.minX)
    let newY = 1 / (screenHeight / rect.minY)
    let newWidth = 1 / (screenWidth / rect.width)
    let newHeight = 1 / (screenHeight / rect.height)
    return CGRect(x: newX, y: newY, width: newWidth, height: newHeight)
}
Note: I have an image view with a square to show the user where to scan; be sure to use imageView.frame and not imageView.bounds in order to get the correct location on the screen.
This has been working successfully for me.

let metadataOutput = AVCaptureMetadataOutput()
metadataOutput.rectOfInterest = convertRectOfInterest(rect: scanRect)
After reviewing another source (https://www.jianshu.com/p/8bb3d8cb224e), it turns out the convertRectOfInterest function has a slight mistake: the return statement should be
return CGRect(x: newY, y: newX, width: newHeight, height: newWidth)
where x and y, and width and height, are interchanged to get it working.
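Putting the correction together, the fixed helper would look like this (a sketch with the divisions simplified; it assumes the preview layer fills self.view):
// Corrected conversion: normalize the on-screen rect to 0-1 coordinates,
// then swap the axes, since rectOfInterest is expressed relative to the
// video's natural (landscape) orientation.
func convertRectOfInterest(rect: CGRect) -> CGRect {
    let screenRect = self.view.frame
    let newX = rect.minX / screenRect.width
    let newY = rect.minY / screenRect.height
    let newWidth = rect.width / screenRect.width
    let newHeight = rect.height / screenRect.height
    // x/y and width/height interchanged, per the correction above.
    return CGRect(x: newY, y: newX, width: newHeight, height: newWidth)
}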

You need to convert the rect represented in the UIView's coordinates into the coordinate system of the AVCaptureVideoPreviewLayer:
captureMetadataOutput.rectOfInterest = videoPreviewLayer.metadataOutputRectConverted(fromLayerRect: scanRect)
For more info: https://stackoverflow.com/a/55778152/6898849

let scanView = CGRect(x: centerX, y: centerY, width: width, height: height)
metadataOutput.rectOfInterest = previewLayer.metadataOutputRectConverted(fromLayerRect: scanView)

This works for me.
extension AVCaptureVideoPreviewLayer {
    func rectOfInterestConverted(parentRect: CGRect, fromLayerRect: CGRect) -> CGRect {
        let parentWidth = parentRect.width
        let parentHeight = parentRect.height
        let newX = (parentWidth - fromLayerRect.maxX) / parentWidth
        let newY = 1 - (parentHeight - fromLayerRect.minY) / parentHeight
        let width = 1 - (fromLayerRect.minX / parentWidth + newX)
        let height = (fromLayerRect.maxY / parentHeight) - newY
        return CGRect(x: newX, y: newY, width: width, height: height)
    }
}
Usage:
if let rect = videoPreviewLayer?.rectOfInterestConverted(parentRect: self.view.frame, fromLayerRect: scanAreaView.frame) {
    captureMetadataOutput.rectOfInterest = rect
}

metadataOutput.rectOfInterest = previewLayer.metadataOutputRectConverted(fromLayerRect: yourView.frame)
Here, previewLayer is an AVCaptureVideoPreviewLayer.

Related

Disable anti-aliasing in UIKit

I've got a pixel art game app that uses UIKit for its menus and SpriteKit for the gameplay scene. The pixels are getting blurred due to anti-aliasing.
With the sprites I can turn off the anti-aliasing using...
node.texture?.filteringMode = .nearest
but in UIKit I can't find a way to turn off the anti-aliasing in the UIImageViews.
I saw this post, but there's no example and the answer wasn't accepted. I'm not sure how to turn it off using CGContextSetShouldAntialias, or where to call it.
Based on the example I found here, I tried using this subclass, but it didn't seem to make a difference; according to my breakpoints, the method is never called:
class NonAliasingView: UIImageView {
    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        // fill background with black color
        ctx.addRect(bounds)
        ctx.setFillColor(UIColor.black.cgColor)
        ctx.fillPath()
        if let img = image {
            let pixelSize = CGSize(width: img.size.width * layer.contentsScale, height: img.size.height * layer.contentsScale)
            UIGraphicsBeginImageContextWithOptions(pixelSize, true, 1)
            // Balance the Begin call even if a guard below fails.
            defer { UIGraphicsEndImageContext() }
            guard let imgCtx = UIGraphicsGetCurrentContext() else { return }
            imgCtx.setShouldAntialias(false)
            img.draw(in: CGRect(x: 0, y: 0, width: pixelSize.width, height: pixelSize.height))
            guard let cgImg = imgCtx.makeImage() else { return }
            ctx.scaleBy(x: 1, y: -1)
            ctx.translateBy(x: 0, y: -bounds.height)
            ctx.draw(cgImg, in: CGRect(x: (bounds.width - img.size.width) / 2, y: (bounds.height - img.size.height) / 2, width: img.size.width, height: img.size.height))
        }
    }
}
Here's the code from my view controller where I tried to implement the subclass (modeImage is an IBOutlet to a UIImageView):
// modeImage.image = gameMode.displayImage
let modeImg = NonAliasingView()
modeImg.image = gameMode.displayImage
modeImage = modeImg
If I try to use UIGraphicsGetCurrentContext in the view controller it is nil and never passes the guard statement.
I've confirmed view.layer.allowsEdgeAntialiasing defaults to false. I don't need anti-aliasing at all, so if there's a way to turn off anti-aliasing app wide, or in the whole view controller, I'd be happy to use it.
How do you disable anti-aliasing with a UIImageView in UIKit?
UPDATE
Added imgCtx.setShouldAntialias(false) to the method, but it's still not working.
To remove all antialiasing on your image view and just use nearest-neighbor filtering, set the magnificationFilter and minificationFilter of the image view's layer to CALayerContentsFilter.nearest, as in:
yourImageView.layer.magnificationFilter = .nearest
yourImageView.layer.minificationFilter = .nearest
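If several image views need this, one option is a small subclass so the filters are set in one place (a sketch; the class name PixelArtImageView is made up here):
import UIKit

// Hypothetical convenience subclass: every instance uses nearest-neighbor
// filtering, so pixel art stays crisp when scaled up or down.
class PixelArtImageView: UIImageView {
    override init(frame: CGRect) {
        super.init(frame: frame)
        disableSmoothing()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        disableSmoothing()
    }

    private func disableSmoothing() {
        layer.magnificationFilter = .nearest
        layer.minificationFilter = .nearest
    }
}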

How can I scan only in a specific CGRect with AVCaptureVideoDataOutput and the Vision framework (iOS, Swift)? [duplicate]

I am building a QR code scanner with Swift, and everything works in that regard. The issue is that I am trying to make only a small area of the entire visible AVCaptureVideoPreviewLayer able to scan QR codes. I have found that in order to specify what area of the screen can read/capture QR codes, I would have to use a property of AVCaptureMetadataOutput called rectOfInterest. The trouble is that when I assigned a CGRect to it, I couldn't scan anything. After doing more research online, I found suggestions to use a method called metadataOutputRectOfInterestForRect to convert a CGRect into the correct format that rectOfInterest can actually use. HOWEVER, the big issue I have run into now is that when I use metadataOutputRectOfInterestForRect I get an error that states CGAffineTransformInvert: singular matrix. Can anyone tell me why I am getting this error? I believe I am using this method properly according to the Apple developer documentation, and I believe I need to use it, according to all the information I have found online, to accomplish my goal. I will include links to the documentation I have found so far, as well as a code sample of the function I am using to scan QR codes.
CODE SAMPLE
func startScan() {
    // Get an instance of the AVCaptureDevice class to initialize a device object and provide the video
    // as the media type parameter.
    let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    // Get an instance of the AVCaptureDeviceInput class using the previous device object.
    var error: NSError?
    let input: AnyObject! = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &error)
    if (error != nil) {
        // If any error occurs, simply log its description and don't continue.
        println("\(error?.localizedDescription)")
        return
    }
    // Initialize the captureSession object.
    captureSession = AVCaptureSession()
    // Set the input device on the capture session.
    captureSession?.addInput(input as! AVCaptureInput)
    // Initialize an AVCaptureMetadataOutput object and set it as the output device for the capture session.
    let captureMetadataOutput = AVCaptureMetadataOutput()
    captureSession?.addOutput(captureMetadataOutput)
    // Calculate a centered square rectangle with a red border.
    let size = 300
    let screenWidth = self.view.frame.size.width
    let xPos = (CGFloat(screenWidth) / CGFloat(2)) - (CGFloat(size) / CGFloat(2))
    let scanRect = CGRect(x: Int(xPos), y: 150, width: size, height: size)
    // Create a UIView that will serve as a red square indicating where to place the QR code for scanning.
    scanAreaView = UIView()
    scanAreaView?.layer.borderColor = UIColor.redColor().CGColor
    scanAreaView?.layer.borderWidth = 4
    scanAreaView?.frame = scanRect
    view.addSubview(scanAreaView!)
    // Set the delegate and use the default dispatch queue to execute the callback.
    captureMetadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
    captureMetadataOutput.metadataObjectTypes = [AVMetadataObjectTypeQRCode]
    // Initialize the video preview layer and add it as a sublayer to the view's layer.
    videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    videoPreviewLayer?.frame = view.layer.bounds
    captureMetadataOutput.rectOfInterest = videoPreviewLayer!.metadataOutputRectOfInterestForRect(scanRect)
    view.layer.addSublayer(videoPreviewLayer)
    // Start video capture.
    captureSession?.startRunning()
    // Initialize the QR code frame to highlight the QR code.
    qrCodeFrameView = UIView()
    qrCodeFrameView?.layer.borderColor = UIColor.greenColor().CGColor
    qrCodeFrameView?.layer.borderWidth = 2
    view.addSubview(qrCodeFrameView!)
    view.bringSubviewToFront(qrCodeFrameView!)
    // Add a button that will be used to close out of the scan view.
    videoBtn.setTitle("Close", forState: .Normal)
    videoBtn.setTitleColor(UIColor.blackColor(), forState: .Normal)
    videoBtn.backgroundColor = UIColor.grayColor()
    videoBtn.layer.cornerRadius = 5.0
    videoBtn.frame = CGRectMake(10, 30, 70, 45)
    videoBtn.addTarget(self, action: "pressClose:", forControlEvents: .TouchUpInside)
    view.addSubview(videoBtn)
    view.bringSubviewToFront(scanAreaView!)
}
Please note that the line of interest causing the error is this:
captureMetadataOutput.rectOfInterest = videoPreviewLayer!.metadataOutputRectOfInterestForRect(scanRect)
Other things I have tried include passing a CGRect directly as a parameter, which caused the same error. I have also passed in scanAreaView!.bounds, as that is really the exact size/area I am looking for, and that also causes the exact same error. I have seen this done in others' code examples online, and they do not seem to have the errors I am having. Here are some examples:
AVCaptureSession barcode scan
Xcode AVCapturesession scan Barcode in specific frame (rectOfInterest is not working)
Apple documentation
metadataOutputRectOfInterestForRect
rectOfInterest
Image of the scanAreaView I am using as the designated area, which I am trying to make the only scannable area of the video preview layer:
I wasn't really able to clarify the issue with metadataOutputRectOfInterestForRect; however, you can also set the property directly. You need to have the width and height of your video's resolution, which you can specify in advance. I quickly used the 640×480 setting. As stated in the documentation, these values have to be
"extending from (0,0) in the top left to (1,1) in the bottom right, relative to the device’s natural orientation".
See https://developer.apple.com/documentation/avfoundation/avcaptureoutput/1616304-metadataoutputrectofinterestforr
Below is the code I tried
let x = scanRect.origin.x / 480
let y = scanRect.origin.y / 640
let width = scanRect.width / 480
let height = scanRect.height / 640
let scanRectTransformed = CGRectMake(x, y, width, height)
captureMetadataOutput.rectOfInterest = scanRectTransformed
I just tested it on an iOS device and it seems to work.
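If you would rather not hard-code 640 and 480, the same normalization can be derived from the device's active format. A sketch (my own generalization of the snippet above; it assumes a portrait UI, so the buffer's longer side maps to the screen's vertical axis):
import AVFoundation
import CoreGraphics
import CoreMedia

// Normalize scanRect against the actual capture resolution instead of
// assuming a 640x480 preset.
func normalizedRectOfInterest(for scanRect: CGRect, device: AVCaptureDevice) -> CGRect {
    let dims = CMVideoFormatDescriptionGetDimensions(device.activeFormat.formatDescription)
    let longSide = CGFloat(dims.width)    // e.g. 640
    let shortSide = CGFloat(dims.height)  // e.g. 480
    return CGRect(x: scanRect.origin.x / shortSide,
                  y: scanRect.origin.y / longSide,
                  width: scanRect.width / shortSide,
                  height: scanRect.height / longSide)
}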
Edit
At least I've solved the metadataOutputRectOfInterestForRect problem. I believe you have to do this after the camera has been properly set up and is running, as the camera's resolution is not available before then.
First, add a notification observer method within viewDidLoad()
NSNotificationCenter.defaultCenter().addObserver(self, selector: Selector("avCaptureInputPortFormatDescriptionDidChangeNotification:"), name:AVCaptureInputPortFormatDescriptionDidChangeNotification, object: nil)
Then add the following method
func avCaptureInputPortFormatDescriptionDidChangeNotification(notification: NSNotification) {
    captureMetadataOutput.rectOfInterest = videoPreviewLayer.metadataOutputRectOfInterestForRect(scanRect)
}
Here you can then reset the rectOfInterest property. Then, in your code, you can display the AVMetadataObject within the didOutputMetadataObjects function:
let rect = videoPreviewLayer.rectForMetadataOutputRectOfInterest(YourAVMetadataObject.bounds)
dispatch_async(dispatch_get_main_queue(), {
    self.qrCodeFrameView.frame = rect
})
I've tried, and the rectangle was always within the specified area.
In iOS 9.3.2 I was able to make metadataOutputRectOfInterestForRect work by calling it right after the startRunning method of AVCaptureSession:
captureSession.startRunning()
let visibleRect = previewLayer.metadataOutputRectOfInterestForRect(previewLayer.bounds)
captureMetadataOutput.rectOfInterest = visibleRect
Swift 4:
captureSession?.startRunning()
let scanRect = CGRect(x: 0, y: 0, width: 100, height: 100)
let rectOfInterest = layer.metadataOutputRectConverted(fromLayerRect: scanRect)
metaDataOutput.rectOfInterest = rectOfInterest
I managed to create the effect of having a region of interest. I tried all the proposed solutions, but the region was either CGPoint.zero or had an inappropriate size (after converting frames to 0–1 coordinates). It's actually a hack for those who can't get regionOfInterest to work, and it doesn't optimize the detection.
In:
func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection)
I have the following code:
if let visualCodeObject = videoPreviewLayer?.transformedMetadataObject(for: metadataObj),
   self.viewfinderView.frame.contains(visualCodeObject.bounds) {
    // The visual code is inside the viewfinder; you can now handle the detection.
}
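Fleshed out, the whole delegate callback might look like this (a sketch using the same names; note that transformedMetadataObject(for:) returns an optional):
// videoPreviewLayer and viewfinderView are the same properties used above.
func metadataOutput(_ output: AVCaptureMetadataOutput,
                    didOutput metadataObjects: [AVMetadataObject],
                    from connection: AVCaptureConnection) {
    for metadataObj in metadataObjects {
        // transformedMetadataObject(for:) returns nil when the object cannot
        // be mapped into the preview layer's coordinate space.
        guard let visualCodeObject = videoPreviewLayer?
            .transformedMetadataObject(for: metadataObj) else { continue }
        if viewfinderView.frame.contains(visualCodeObject.bounds) {
            // The visual code is inside the viewfinder; handle the detection.
        }
    }
}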
// After
captureSession.startRunning()
// Add this
if let videoPreviewLayer = self.videoPreviewLayer {
    self.captureMetadataOutput.rectOfInterest =
        videoPreviewLayer.metadataOutputRectOfInterest(for: self.getRectOfInterest())
}

fileprivate func getRectOfInterest() -> CGRect {
    let centerX = (self.frame.width / 2) - 100
    let centerY = (self.frame.height / 2) - 100
    let quadr: CGFloat = 200
    let myRect = CGRect(x: centerX, y: centerY, width: quadr, height: quadr)
    return myRect
}
To read a QR code/barcode from a small rect (specific region) within a full camera view, it is mandatory to set the rect of interest after calling startRunning:
[_captureSession startRunning];
[captureMetadataOutput setRectOfInterest:[_videoPreviewLayer metadataOutputRectOfInterestForRect:scanRect]];
Note:
captureMetadataOutput --> AVCaptureMetadataOutput
_videoPreviewLayer --> AVCaptureVideoPreviewLayer
scanRect --> Rect where you want the QRCode to be read.
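For readers following along in Swift, the same ordering looks roughly like this (a sketch using the iOS 11+ API name and the same assumed variables):
// Swift equivalent of the Objective-C snippet above: the assignment must
// come after startRunning() for the conversion to work.
captureSession.startRunning()
captureMetadataOutput.rectOfInterest =
    videoPreviewLayer.metadataOutputRectConverted(fromLayerRect: scanRect)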
I know there are already solutions present and it's pretty late, but I achieved mine by capturing the complete view image and then cropping it with a specific rect.
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let imageData = photo.fileDataRepresentation() {
        print(imageData)
        capturedImage = UIImage(data: imageData)
        let crop = cropToPreviewLayer(originalImage: capturedImage!)
        let sb = UIStoryboard(name: "Main", bundle: nil)
        let s = sb.instantiateViewController(withIdentifier: "KeyFobScanned") as! KeyFobScanned
        s.image = crop
        self.navigationController?.pushViewController(s, animated: true)
    }
}

private func cropToPreviewLayer(originalImage: UIImage) -> UIImage? {
    guard let cgImage = originalImage.cgImage else { return nil }
    let scanRect = CGRect(x: stackView.frame.origin.x, y: stackView.frame.origin.y, width: innerView.frame.size.width, height: innerView.frame.size.height)
    let outputRect = videoPreviewLayer.metadataOutputRectConverted(fromLayerRect: scanRect)
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let cropRect = CGRect(x: outputRect.origin.x * width, y: outputRect.origin.y * height, width: outputRect.size.width * width, height: outputRect.size.height * height)
    if let croppedCGImage = cgImage.cropping(to: cropRect) {
        return UIImage(cgImage: croppedCGImage, scale: 1.0, orientation: originalImage.imageOrientation)
    }
    return nil
}
Potentially unrelated, but the issue for me was screen orientation. In my portrait-only app, I wanted a barcode scanner that detects codes only in a horizontal band across the middle of the screen. I thought this would work:
CGRect(x: 0, y: 0.4, width: 1, height: 0.2)
Instead, I had to switch x with y and width with height:
CGRect(x: 0.4, y: 0, width: 0.2, height: 1)
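To make that swap explicit, a tiny helper (hypothetical name; it assumes the rect is already normalized to the 0–1 range):
import CoreGraphics

// Swaps axes so a rect expressed in portrait screen terms matches the
// sensor's natural landscape orientation that rectOfInterest expects.
func portraitToNaturalOrientation(_ rect: CGRect) -> CGRect {
    return CGRect(x: rect.origin.y, y: rect.origin.x,
                  width: rect.height, height: rect.width)
}
With that, portraitToNaturalOrientation(CGRect(x: 0, y: 0.4, width: 1, height: 0.2)) yields CGRect(x: 0.4, y: 0, width: 0.2, height: 1), the rect that worked above.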
I wrote the following:
videoPreviewLayer?.frame = view.layer.bounds
videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
And this worked for me, but I still don't know why.

Swift 5: Better way/approach to add image border on photo editing app?

In case the title doesn't make sense: I'm making a photo editing app where users can add a border to their photos. For now, I'm testing with a white border.
Here is a GIF sample of the app (see how slow the slider is; it's meant to be smooth like any other slider):
Gif sample
My approach was to render the white background at the image's size, and then render the image n% smaller to shrink it, hence producing the border.
But I have run into a problem: when testing on my device (iPhone 7 Plus), the slider is laggy and slow, as if the function takes a long time to compute.
Here is the code for the function. It blends the background with the foreground, the background being a plain white colour.
blendImages is a function located in my adjustmentEngine class.
func blendImages(backgroundImg: UIImage, foregroundImg: UIImage) -> Data? {
    // size variables
    let contentSizeH = foregroundImg.size.height
    let contentSizeW = foregroundImg.size.width
    // the magic: how much the image will shrink within the view
    let topImageH = foregroundImg.size.height - (foregroundImg.size.height * imgSizeMultiplier)
    let topImageW = foregroundImg.size.width - (foregroundImg.size.width * imgSizeMultiplier)
    let bottomImage = backgroundImg
    let topImage = foregroundImg
    let imgView = UIImageView(frame: CGRect(x: 0, y: 0, width: contentSizeW, height: contentSizeH))
    let imgView2 = UIImageView(frame: CGRect(x: 0, y: 0, width: topImageW, height: topImageH))
    // - Set content mode to what you desire
    imgView.contentMode = .scaleAspectFill
    imgView2.contentMode = .scaleAspectFit
    // - Set images
    imgView.image = bottomImage
    imgView2.image = topImage
    imgView2.center = imgView.center
    // - Create UIView
    let contentView = UIView(frame: CGRect(x: 0, y: 0, width: contentSizeW, height: contentSizeH))
    contentView.addSubview(imgView)
    contentView.addSubview(imgView2)
    // - Set size
    let size = CGSize(width: contentSizeW, height: contentSizeH)
    UIGraphicsBeginImageContextWithOptions(size, true, 0)
    // End the context even if the guard below fails.
    defer { UIGraphicsEndImageContext() }
    contentView.drawHierarchy(in: contentView.bounds, afterScreenUpdates: true)
    guard let i = UIGraphicsGetImageFromCurrentImageContext(),
          let data = i.jpegData(compressionQuality: 1.0)
    else { return nil }
    return data
}
Below is the code I call to render it into the UIImageView:
guard let image = image else { return }
let borderColor = UIColor.white.image()
self.adjustmentEngine.borderColor = borderColor
self.adjustmentEngine.image = image
guard let combinedImageData: Data = self.adjustmentEngine.blendImages(backgroundImg: borderColor, foregroundImg: image) else {return}
let combinedImage = UIImage(data: combinedImageData)
self.imageView.image = combinedImage
This function takes the image and blends it with a new background colour to create the border.
And finally, below is the code for the slider's didChange function:
@IBAction func sliderDidChange(_ sender: UISlider) {
    print(sender.value)
    let borderColor = adjustmentEngine.borderColor
    let image = adjustmentEngine.image
    adjustmentEngine.imgSizeMultiplier = CGFloat(sender.value)
    guard let combinedImageData: Data = self.adjustmentEngine.blendImages(backgroundImg: borderColor, foregroundImg: image) else { return }
    let combinedImage = UIImage(data: combinedImageData)
    self.imageView.image = combinedImage
}
So the question is: is there a better or more optimised way to do this, or a better approach altogether?
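One direction that might help (my own untested sketch, not from the thread): draw the border and photo directly with UIGraphicsImageRenderer instead of snapshotting a temporary view hierarchy with drawHierarchy(in:), and return a UIImage to skip the JPEG encode/decode round trip on every slider tick. blendImagesDirectly is a hypothetical replacement for blendImages:
import UIKit

// A sketch of a leaner blend: fill the border color and draw the scaled
// photo straight into a renderer, with no intermediate views.
func blendImagesDirectly(background: UIColor,
                         foreground: UIImage,
                         sizeMultiplier: CGFloat) -> UIImage {
    let canvasSize = foreground.size
    let renderer = UIGraphicsImageRenderer(size: canvasSize)
    return renderer.image { ctx in
        // Fill the whole canvas with the border color.
        background.setFill()
        ctx.fill(CGRect(origin: .zero, size: canvasSize))
        // Draw the photo scaled down and centered, leaving the border visible.
        let scaled = CGSize(width: canvasSize.width * (1 - sizeMultiplier),
                            height: canvasSize.height * (1 - sizeMultiplier))
        let origin = CGPoint(x: (canvasSize.width - scaled.width) / 2,
                             y: (canvasSize.height - scaled.height) / 2)
        foreground.draw(in: CGRect(origin: origin, size: scaled))
    }
}
Returning a UIImage directly also avoids re-encoding to Data and decoding back with UIImage(data:) on every slider change, which is likely a large part of the lag.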

UIDragInteractionDelegate: How to display transparent parts in the drag preview returned by dragInteraction(_:previewForLifting:session:)

I'm building a drag and drop interaction for an iOS app. I want to enable the user to drag and drop images containing transparent parts.
However, the default preview for the dragged contents is a rectangle with an opaque white background that covers my app's background.
When I create a custom preview by implementing the UIDragInteractionDelegate method
dragInteraction(_:previewForLifting:session:), as in Apple's code sample Adopting Drag and Drop in a Custom View, the transparency of my source image is still not taken into account, meaning my preview image is still displayed in a rectangle with an opaque white background:
func dragInteraction(_ interaction: UIDragInteraction, previewForLifting item: UIDragItem, session: UIDragSession) -> UITargetedDragPreview? {
    guard let image = item.localObject as? UIImage else { return nil }
    // Scale the preview image view frame to the image's size.
    let frame: CGRect
    if image.size.width > image.size.height {
        let multiplier = imageView.frame.width / image.size.width
        frame = CGRect(x: 0, y: 0, width: imageView.frame.width, height: image.size.height * multiplier)
    } else {
        let multiplier = imageView.frame.height / image.size.height
        frame = CGRect(x: 0, y: 0, width: image.size.width * multiplier, height: imageView.frame.height)
    }
    // Create a new view to display the image as a drag preview.
    let previewImageView = UIImageView(image: image)
    previewImageView.contentMode = .scaleAspectFit
    previewImageView.frame = frame
    /*
     Provide a custom targeted drag preview that lifts from the center
     of imageView. The center is calculated because it needs to be in
     the coordinate system of imageView. Using imageView.center returns
     a point that is in the coordinate system of imageView's superview,
     which is not what is needed here.
     */
    let center = CGPoint(x: imageView.bounds.midX, y: imageView.bounds.midY)
    let target = UIDragPreviewTarget(container: imageView, center: center)
    return UITargetedDragPreview(view: previewImageView, parameters: UIDragPreviewParameters(), target: target)
}
I tried to force the preview not to be opaque, but it did not help:
previewImageView.isOpaque = false
How can I get transparent parts in the lift preview?
Override the backgroundColor in the UIDragPreviewParameters, as it defines the background color of a drag item preview.
Set it to UIColor.clear, which is a color object whose grayscale and alpha values are both 0.0.
let previewParameters = UIDragPreviewParameters()
previewParameters.backgroundColor = UIColor.clear // transparent background
return UITargetedDragPreview(view: previewImageView,
                             parameters: previewParameters,
                             target: target)
Alternatively, you can define a UIBezierPath matching your image and assign it to previewParameters.visiblePath.
Example (Swift 4.2):
let previewParameters = UIDragPreviewParameters()
previewParameters.visiblePath = UIBezierPath(roundedRect: CGRect(x: yourX, y: yourY, width: yourWidth, height: yourHeight), cornerRadius: yourRadius)
//... Use the created previewParameters

Only detect in a section of the camera preview layer (iOS, Swift)

I am trying to get a detection zone in the live preview on my camera preview layer.
Is this possible? Say there is a live feed with face detection on; as you look around, it should only put a box around a face within a certain area, for example a rectangle in the centre of the screen. All other faces in the preview that are outside of the rectangle should not get detected.
I'm using Vision, iOS, and Swift.
I figured this out by adding a guard before adding the CALayer.
Before viewDidLoad:
#IBOutlet weak var scanAreaImage: UIImageView!
var regionOfInterest: CGRect!
In viewDidLoad:
scanAreaImage is an image view that I placed via the storyboard; its frame represents the area I only wanted detection in.
let someRect: CGRect = scanAreaImage.frame
regionOfInterest = someRect
Then, in the Vision text detection section:
func highlightLetters(box: VNRectangleObservation) {
    let xCord = box.topLeft.x * (cameraPreviewlayer?.frame.size.width)!
    let yCord = (1 - box.topLeft.y) * (cameraPreviewlayer?.frame.size.height)!
    let width = (box.topRight.x - box.bottomLeft.x) * (cameraPreviewlayer?.frame.size.width)!
    let height = (box.topLeft.y - box.bottomLeft.y) * (cameraPreviewlayer?.frame.size.height)!
    // This is the section I added for the region-of-interest detection zone.
    let wordRect = CGRect(x: xCord, y: yCord, width: width, height: height)
    // Only draw a box if the origin of the word box is within the
    // regionOfInterest (the CGRect created earlier).
    guard regionOfInterest.contains(wordRect.origin) else { return }
    let outline = CALayer()
    outline.frame = CGRect(x: xCord, y: yCord, width: width, height: height)
    outline.borderWidth = 1.0
    if textColour == 1 {
        outline.borderColor = UIColor.blue.cgColor
    } else {
        outline.borderColor = UIColor.clear.cgColor
    }
    cameraPreviewlayer?.addSublayer(outline)
}
This will only show outlines of the things inside the rectangle you created in the storyboard (mine being scanAreaImage).
I hope this helps someone.