I added a sticker to an image view, and then saved the result with the following code:
imageView.addSubview(StickerView[0])
// Create the image context to draw in
UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, false, 0)
// Get that context
let context = UIGraphicsGetCurrentContext()
// Draw the image view in the context
imageView.layer.render(in: context!)
// You may or may not need to repeat the above with the imageView's subviews
// Then you grab the "screenshot" of the context
let image = UIGraphicsGetImageFromCurrentImageContext()
// Be sure to end the context
UIGraphicsEndImageContext()
CustomPhotoAlbum.sharedInstance.save(image: image!)
It saves correctly. However, the black background on both sides of the image (the letterboxing from the aspect-fit image view) is saved as well. Can I save only the image itself, without the black background?
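One way to approach this, as a sketch: compute the rect the image actually occupies inside the aspect-fit image view with AVMakeRect, then render only that region. This assumes imageView uses .scaleAspectFit and has a non-nil image.
import AVFoundation
import UIKit

// Find the rect the image occupies inside the letterboxed image view
let imageRect = AVMakeRect(aspectRatio: imageView.image!.size,
                           insideRect: imageView.bounds)
let renderer = UIGraphicsImageRenderer(size: imageRect.size)
let cropped = renderer.image { context in
    // Shift the drawing so the black margins fall outside the context
    context.cgContext.translateBy(x: -imageRect.origin.x, y: -imageRect.origin.y)
    imageView.layer.render(in: context.cgContext)
}
CustomPhotoAlbum.sharedInstance.save(image: cropped)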
I'm trying to make a simple image eraser tool, where the user can erase and restore as drawing into an image, just like in this image:
After many attempts and testing, I have achieved sufficient "erase" functionality with the following code on the UI side:
// Drawing code - on user touch
// `currentPath` is a `UIBezierPath` property of the containing class.
guard let image = pickedImage else { return }
UIGraphicsBeginImageContextWithOptions(imageView.frame.size, false, 0)
if let context = UIGraphicsGetCurrentContext() {
    mainImageView.layer.render(in: context)
    context.addPath(currentPath.cgPath)
    context.setBlendMode(.clear)
    context.setLineWidth(translatedBrushWidth)
    context.setLineCap(.round)
    context.setLineJoin(.round)
    context.setStrokeColor(UIColor.clear.cgColor)
    context.strokePath()
    let capturedImage = UIGraphicsGetImageFromCurrentImageContext()
    imageView.image = capturedImage
}
UIGraphicsEndImageContext()
And upon user touch-up I am applying a scale transform to currentPath to render the image with the cutout part in full size to preserve UI performance.
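For example, something like this (a sketch; fullImageSize stands in for the full-size image dimensions and is not a variable from the code above):
// Hypothetical: scale the on-screen path up to full-image coordinates
let scaleFactor = fullImageSize.width / imageView.frame.size.width
currentPath.apply(CGAffineTransform(scaleX: scaleFactor, y: scaleFactor))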
What I'm trying to figure out now is how to approach the "restore" functionality. Essentially, the user should draw on the erased parts to reveal the original image.
I've tried looking at CGContextClipToMask but I'm not sure how to approach the implementation.
I've also looked at other approaches to achieving this erase/restore effect before rendering the actual images, such as masking a CAShapeLayer over the image, but restoring becomes a problem in that approach as well.
Any help will be greatly appreciated, as well as alternative approaches to erase and restore with a path on the UI-level and rendering level.
Thank you!
Yes, I would recommend adding a CALayer to your image view's layer as a mask.
You can either make the mask layer a CAShapeLayer and draw geometric shapes into it, or use a simple CALayer as a mask, where the contents property of the mask layer is a CGImage. You'd then draw opaque pixels into the mask to reveal the image contents, or transparent pixels to "erase" the corresponding image pixels.
This approach is hardware accelerated and quite fast.
Handling undo/redo of eraser functions would require you to collect changes to your mask layer as well as the previous state of the mask.
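As a minimal sketch of the CGImage-contents variant (assuming a plain UIImageView named imageView; this is not the demo project's exact code):
import UIKit

// Build a mask image: opaque black everywhere except a cleared circle.
// Opaque mask pixels show the image; transparent ones hide ("erase") it.
let renderer = UIGraphicsImageRenderer(bounds: imageView.bounds)
let maskImage = renderer.image { ctx in
    UIColor.black.setFill()
    ctx.fill(imageView.bounds)
    ctx.cgContext.setBlendMode(.clear)
    ctx.cgContext.fillEllipse(in: CGRect(x: 80, y: 80, width: 100, height: 100))
}

let maskLayer = CALayer()
maskLayer.frame = imageView.bounds
maskLayer.contents = maskImage.cgImage
imageView.layer.mask = maskLayer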
Edit:
I created a small demo app on GitHub that shows how to use a CGImage as a mask on an image view.
Here is the ReadMe file from that project:
MaskableImageView
This project demonstrates how to use a CALayer to mask a UIView.
It defines a custom subclass of UIImageView, MaskableView.
The MaskableView class has a property maskLayer that contains a CALayer.
MaskableView defines a didSet observer on its bounds property so that when the view's bounds change, it resizes the mask layer to match the size of the image view.
The MaskableView has a method installSampleMask which builds an image the same size as the image view, mostly filled with opaque black, but with a small rectangle in the center filled with black at an alpha of 0.7. The translucent center rectangle causes the image view to become partly transparent and show the view underneath.
The demo app installs a couple of subviews into the MaskableView: a sample image of Scampers, one of my dogs, and a UILabel. It also installs an image of a checkerboard under the MaskableView so that you can see the translucent parts more easily.
The MaskableView has properties circleRadius, maskDrawingAlpha, and drawingAction that it uses to let the user erase/un-erase the image by tapping on the view to update the mask.
The MaskableView attaches a UIPanGestureRecognizer and a UITapGestureRecognizer to itself, with an action of gestureRecognizerUpdate. The gestureRecognizerUpdate method takes the tap/drag location from the gesture recognizer and uses it to draw a circle onto the image mask that either decreases the image mask's alpha (to partly erase pixels) or increases it (to make those pixels more opaque).
The MaskableView's mask drawing is crude and only meant for demonstration purposes. It draws a series of discrete circles instead of rendering a path into the mask based on the user's drag gesture. A better solution would be to connect the points from the gesture recognizer and use them to render a smoothed curve into the mask.
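A simplified, hypothetical version of that circle-stamping (the names circleRadius and maskDrawingAlpha mirror the ReadMe; previousMask and the function itself are assumptions, not the project's code):
import UIKit

// Stamp one circle into the mask for each tap/drag point.
// `previousMask` accumulates earlier strokes; `erasing` chooses
// between hiding and revealing the underlying image pixels.
func stampCircle(at point: CGPoint,
                 into maskLayer: CALayer,
                 previousMask: UIImage?,
                 circleRadius: CGFloat,
                 maskDrawingAlpha: CGFloat,
                 erasing: Bool) -> UIImage {
    let bounds = maskLayer.bounds
    let renderer = UIGraphicsImageRenderer(bounds: bounds)
    let newMask = renderer.image { ctx in
        // Start from the previous mask so earlier strokes are preserved
        previousMask?.draw(in: bounds)
        let rect = CGRect(x: point.x - circleRadius,
                          y: point.y - circleRadius,
                          width: circleRadius * 2,
                          height: circleRadius * 2)
        if erasing {
            // Clearing mask pixels hides the corresponding image pixels
            ctx.cgContext.setBlendMode(.clear)
        }
        UIColor(white: 0, alpha: maskDrawingAlpha).setFill()
        ctx.cgContext.fillEllipse(in: rect)
    }
    maskLayer.contents = newMask.cgImage
    return newMask
}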
The app's screen looks like this:
Edit #2:
If you want to export the resulting image to a file that preserves the transparency, you can convert the CGImage to a UIImage (using the init(cgImage:) initializer) and then use the UIImage function
func pngData() -> Data?
to convert the image to PNG data. That function returns nil if it is unable to convert the image to PNG data.
If it succeeds, you can then save the data to a file with a .png extension.
I updated the sample project to include the ability to save the resulting image to disk.
First I added an image computed property to the MaskableView. That looks like this:
public var image: UIImage? {
    guard let renderer = renderer else { return nil }
    let result = renderer.image { context in
        layer.render(in: context.cgContext)
    }
    return result
}
Then I added a save button to the view controller that fetches the image from the MaskableView and saves it to the app's Documents directory:
@IBAction func handleSaveButton(_ sender: UIButton) {
    print("In handleSaveButton")
    if let image = maskableView.image,
       let pngData = image.pngData() {
        print(image.description)
        let imageURL = getDocumentsDirectory().appendingPathComponent("image.png", isDirectory: false)
        do {
            try pngData.write(to: imageURL)
            print("Wrote png to \(imageURL.path)")
        } catch {
            print("Error writing file to \(imageURL.path)")
        }
    }
}
You could also save the image to the user's camera roll. It's been a while since I've done that so I'd have to dig up the steps for that.
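A minimal sketch of that, assuming the photo-library permission is set up (the NSPhotoLibraryAddUsageDescription key in Info.plist):
if let image = maskableView.image {
    // Writes the image to the user's photo library (camera roll)
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}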
I have a UIImageView inside a UIScrollView that can be zoomed in and out. After the user has zoomed to the specific content they want, I want to crop that visible part of the image in the scroll view and get it as a UIImage.
For that I am using
extension UIScrollView {
    var snapshotVisibleArea: UIImage? {
        UIGraphicsBeginImageContext(bounds.size)
        UIGraphicsGetCurrentContext()?.translateBy(x: -contentOffset.x, y: -contentOffset.y)
        layer.render(in: UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
But when I implement this, the quality of the image gets extremely degraded. Even if I use a 4K image, the final product looks like it's at 360p resolution.
This logic is just basic capturing of the screen content.
I know there can be a better way but I am not able to find a solution.
Any help is highly appreciated.
You can try this:
let context: CGContext = UIGraphicsGetCurrentContext()!
context.interpolationQuality = .high
Also, I'm not sure, but the image quality could improve if you initialize the image context with UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0). Passing 0.0 as the scale uses the device's screen scale, whereas UIGraphicsBeginImageContext always uses a scale of 1.0, which is why the snapshot comes out at non-Retina resolution.
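Putting both suggestions together, the snapshot property might look like this (a sketch; snapshotVisibleAreaHiRes is a made-up name):
extension UIScrollView {
    // Same idea as snapshotVisibleArea above, but with a scale-aware
    // context so the bitmap matches the screen's pixel density.
    var snapshotVisibleAreaHiRes: UIImage? {
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0.0)
        defer { UIGraphicsEndImageContext() }
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        context.interpolationQuality = .high
        context.translateBy(x: -contentOffset.x, y: -contentOffset.y)
        layer.render(in: context)
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}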
So far in my attempts, I'm able to draw lines on a plain image: I can create my own plain context of a given CGRect size and draw lines on it. All the tutorials I've seen show how to draw on a newly created image context of some width and height. But I would like to draw lines on an already present image, like a picture of a dog, and make some drawings on it. So far I'm not getting the results I'm looking for. This is the code I tested with: without assigning the image, I'm able to draw a line, but with the image imported, I do not get the desired line on the picture.
let myImage = UIImage(named: "hqdefault.jpg")!
let myRGBA = RGBAImage(image: myImage)!
UIGraphicsBeginImageContextWithOptions(CGSize(width: myRGBA.width, height: myRGBA.height), false, 0)
let context: CGContextRef = UIGraphicsGetCurrentContext()!
CGContextMoveToPoint(context, CGFloat(100.0), CGFloat(100.0))
CGContextAddLineToPoint(context, CGFloat(150.0), CGFloat(150.0))
CGContextSetStrokeColorWithColor(context, UIColor.blackColor().CGColor)
CGContextStrokePath(context)
let image = UIGraphicsGetImageFromCurrentImageContext()
CGContextRestoreGState(context)
UIGraphicsEndImageContext()
//: Return image
let view = UIImageView.init(image: image)
view.setNeedsDisplay()
These are the lines I wrote to draw a line inside an image and return its context. I'm not able to pick up the imported picture's context or draw lines on it; it returns nil in the end. I couldn't figure out my mistake so far. Can you suggest how to draw a simple line on a picture in an image view?
Can you suggest how to draw a simple line on a picture in image view
Sure. Make an image context. Draw the image view's image into the context. Draw the line into the context. Extract the resulting image from the context and close the context. Assign the extracted image to the image view.
Example:
let im = self.iv.image! // iv is the image view
UIGraphicsBeginImageContextWithOptions(im.size, true, 0)
im.drawAtPoint(CGPointMake(0,0))
let p = UIBezierPath()
p.moveToPoint(CGPointMake(CGFloat(100.0), CGFloat(100.0)))
p.addLineToPoint(CGPointMake(CGFloat(150.0), CGFloat(150.0)))
p.stroke()
self.iv.image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Before:
After:
In the second image, notice the line running from (100,100) to (150,150).
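For reference, the same approach in current Swift syntax (a sketch, not the original answer's code):
let im = self.iv.image! // iv is the image view
UIGraphicsBeginImageContextWithOptions(im.size, true, 0)
im.draw(at: .zero)
let p = UIBezierPath()
p.move(to: CGPoint(x: 100, y: 100))
p.addLine(to: CGPoint(x: 150, y: 150))
p.stroke() // strokes in the context's current color (black by default)
self.iv.image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()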
I am building a simple motivational app, my pet project. It prints a random motivational message when a button is pressed.
I would like the user to be able to press a button, crop the motivational message itself on the screen, and save it to the camera roll.
I found a tutorial that does what I wanted, but it takes a FULL screenshot AND a PARTIAL screenshot.
I'm trying to modify the code so it takes ONLY a partial screenshot.
Here's the code:
print("SchreenShot")
// Start full screenshot
UIGraphicsBeginImageContext(view.frame.size)
view.layer.renderInContext(UIGraphicsGetCurrentContext()!)
var sourceImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
UIImageWriteToSavedPhotosAlbum(sourceImage,nil,nil,nil)
//partial Screen Shot
print("partial ss")
UIGraphicsBeginImageContext(view.frame.size)
sourceImage.drawAtPoint(CGPointMake(0, -100))
var croppedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
UIImageWriteToSavedPhotosAlbum(croppedImage,nil,nil,nil)
Also, the PARTIAL screenshot captures the "page" from 100 pixels below the top all the way down to the bottom. How can I make it capture the contents of the page from, say, 100 pixels below the top to 150 pixels above the bottom?
Many, many, many thanks!
Your sample code draws the view into a graphics context (the snapshot), crops it, and saves it. I've altered it a little, with some extra comments, since it looks like you're new to this API:
// Declare the snapshot boundaries
let top: CGFloat = 100
let bottom: CGFloat = 150
// The size of the cropped image
let size = CGSize(width: view.frame.size.width, height: view.frame.size.height - top - bottom)
// Start the context
UIGraphicsBeginImageContext(size)
// we are going to use context in a couple of places
let context = UIGraphicsGetCurrentContext()!
// Transform the context so that anything drawn into it is displaced "top" pixels up
// Something drawn at coordinate (0, 0) will now be drawn at (0, -top)
// This will result in the "top" pixels being cut off
// The bottom pixels are cut off because of the reduced height of the context
CGContextTranslateCTM(context, 0, -top)
// Draw the view into the context (this is the snapshot)
view.layer.renderInContext(context)
let snapshot = UIGraphicsGetImageFromCurrentImageContext()
// End the context (this is required to not leak resources)
UIGraphicsEndImageContext()
// Save to photos
UIImageWriteToSavedPhotosAlbum(snapshot, nil, nil, nil)
I created some code to create a UIBezierPath within a UIImage context, then take the image and create a base64 string from it. I believe I am supposed to draw the path between the beginning and end of the UIImage context. However, after many hours, it is not working. I am copying the base64 string to a website and downloading the image to see if it works. I am writing this in a playground:
import Foundation
import UIKit
UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), false, 0.0)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIColor.blackColor().setStroke()
let path = UIBezierPath()
path.lineWidth = 2
path.moveToPoint(CGPointMake(100, 0))
path.addLineToPoint(CGPointMake(200, 40))
path.addLineToPoint(CGPointMake(160, 140))
path.addLineToPoint(CGPointMake(40, 140))
path.addLineToPoint(CGPointMake(0, 40))
path.closePath()
UIGraphicsEndImageContext();
let data = UIImagePNGRepresentation(image)
let b64 = data?.base64EncodedStringWithOptions(NSDataBase64EncodingOptions(rawValue: 0))
print(b64!)
The resulting PNG is 1000x1000, I am guessing because of my Retina display. The image itself is completely transparent and nothing is visible. I am using this site to decode the base64 and save a file (I've tried two others to rule out the site): http://www.motobit.com/util/base64-decoder-encoder.asp
EDIT
I just tried the following code to see if the issue was with my image, my bezier path, or the base64 conversion. This code worked great, so I think the problem is with my bezier path.
import Foundation
import UIKit
let image = UIImage(data: NSData(contentsOfURL: NSURL(string: "http://i.imgur.com/crr4m48.jpg")!)!)!
let data = UIImagePNGRepresentation(image)
let b64 = data?.base64EncodedStringWithOptions(NSDataBase64EncodingOptions(rawValue: 0))
print(b64!)
It looks like you're doing things out of order. For example:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), false, 0.0)
let image = UIGraphicsGetImageFromCurrentImageContext()
Why are you getting the image before you draw anything into the graphics context? I think you're going to get an empty image; it's not like the image you get at the outset will change as you draw things into the context. Wait to get the image until after you finish all your drawing. Also, the docs for that function say:
You should call this function only when a bitmap-based graphics context is the current graphics context. If the current context is nil or was not created by a call to UIGraphicsBeginImageContext, this function returns nil.
So, make sure that you've satisfied that requirement and that the image you get back when you do call UIGraphicsGetImageFromCurrentImageContext() is not nil.
Next, you're creating a bezier path:
let path = UIBezierPath()
path.lineWidth = 2
path.moveToPoint(CGPointMake(100, 0))
path.addLineToPoint(CGPointMake(200, 40))
//...
but in order for that path to actually be drawn in your context, you have to do something that draws, such as calling path.fill() or path.stroke(). I don't see that anywhere, so even if you fix the first problem above, you're still going to end up with an empty image until you do some drawing.
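Putting those two fixes together, a corrected sketch of the playground code (in current Swift syntax) might look like:
import UIKit

UIGraphicsBeginImageContextWithOptions(CGSize(width: 200, height: 200), false, 0.0)
UIColor.black.setStroke()
let path = UIBezierPath()
path.lineWidth = 2
path.move(to: CGPoint(x: 100, y: 0))
path.addLine(to: CGPoint(x: 200, y: 40))
path.addLine(to: CGPoint(x: 160, y: 140))
path.addLine(to: CGPoint(x: 40, y: 140))
path.addLine(to: CGPoint(x: 0, y: 40))
path.close()
path.stroke()                                           // actually draw the path
let image = UIGraphicsGetImageFromCurrentImageContext() // only after drawing
UIGraphicsEndImageContext()
let b64 = image?.pngData()?.base64EncodedString()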
You need to work with the image context directly. You can try creating a CGContextRef from the current image context, like:
UIGraphicsBeginImageContext(CGSizeMake(200, 200))
let context: CGContextRef = UIGraphicsGetCurrentContext()!
And then directly add your path to it:
CGContextAddPath(context, path)
Or you can create your curves with context methods like CGContextAddCurveToPoint and then stroke the path with:
CGContextStrokePath(context)
And the last action to create image from it:
let image = UIGraphicsGetImageFromCurrentImageContext()
One more point, in addition to @Caleb's answer: if your rendered image.png is coming out at 1000x1000 pixels, the parameters to UIGraphicsBeginImageContextWithOptions are what determine that. The pixel size is the point size multiplied by the scale, so 500x500 points at scale 2.0 produces 1000x1000 pixels:
// Swift 3 syntax
UIGraphicsBeginImageContextWithOptions(CGSize(width: 500, height: 500), false, 2.0)