How to circle an image - iOS

Hello, I am using SDWebImage in my app. This is my code to make an image circular:
extension UIImage {
    var circle: UIImage? {
        let square = CGSize(width: min(size.width, size.height), height: min(size.width, size.height))
        let imageView = UIImageView(frame: CGRect(origin: CGPoint(x: 0, y: 0), size: square))
        imageView.contentMode = .ScaleAspectFill
        imageView.image = self
        imageView.layer.cornerRadius = square.width/2
        imageView.layer.masksToBounds = true
        UIGraphicsBeginImageContext(imageView.bounds.size)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        imageView.layer.renderInContext(context)
        let result = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return result
    }
}
I copied it from here.
I used to make images circular like this:
let profilePicture = UIImage(data: NSData(contentsOfURL: NSURL(string:"http://i.stack.imgur.com/Xs4RX.jpg")!)!)!
profilePicture.circle
But now that I am using SDWebImage, it's not working:
cell.profileImageView.sd_setImageWithURL(UIImage().absoluteURL(profileImageUrl), placeholderImage: UIImage.init(named: "default-profile-icon")?.circle!)
Please let me know how I can make this extension work with SDWebImage.

You can use SDWebImageManager to download the image (or take it from the cache) and apply the circle in the completion block, like this:
SDWebImageManager.sharedManager().downloadWithURL(NSURL(string: "img"), options: [], progress: nil) { (image: UIImage!, error: NSError!, cacheType: SDImageCacheType, finished: Bool) -> Void in
    if image != nil {
        let circleImage = image.circle
        cell.profileImageView.image = circleImage
    }
}
Or you can use the version of the sd_setImageWithURL method that takes a completion block as a parameter
let completionBlock: SDWebImageCompletionBlock! = { (image: UIImage!, error: NSError!, cacheType: SDImageCacheType!, imageURL: NSURL!) -> Void in
    if image != nil {
        let circleImage = image.circle
        cell.profileImageView.image = circleImage
    }
}
cell.profileImageView.sd_setImageWithURL(NSURL(string: profileImageUrl), placeholderImage: UIImage.init(named: "default-profile-icon")?.circle!, completed: completionBlock)

Related

Add text to images in a loop in Swift

I am adding text to an array of images using Swift. Below is my code for the loop:
var holderClass = HolderClass()
var newImages = [UIImage]()
var counter = 0
for (index, oldImage) in holderClass.oldImages.enumerated() {
    let newImage = drawTextAtLoaction(text: "testing", image: oldImage)
    newImages[index] = newImage
    counter += 1
    if counter == self.storyModule.getImageCount() {
        completionHandler(newImages)
    }
}
Here is the function that adds the text:
func drawTextAtLoaction(text: String, image: UIImage) -> UIImage {
    let textColor = UIColor.white
    let textFont = UIFont(name: "Helvetica Bold", size: 12)!
    let scale = UIScreen.main.scale
    UIGraphicsBeginImageContextWithOptions(image.size, false, scale)
    let textFontAttributes = [
        NSAttributedString.Key.font: textFont,
        NSAttributedString.Key.foregroundColor: textColor,
    ] as [NSAttributedString.Key: Any]
    image.draw(in: CGRect(origin: CGPoint.zero, size: image.size))
    let rect = CGRect(origin: CGPoint(x: 0, y: 0), size: image.size)
    text.draw(in: rect, withAttributes: textFontAttributes)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    if newImage != nil {
        return newImage!
    } else {
        return image
    }
}
Intuitively this loop should use constant memory; however, as shown by the attached image, it uses linear memory and causes memory errors with larger numbers of images.
How can I perform this operation using constant memory?
Edit: I think I may have oversimplified this a bit. The oldImages array is really an array of images inside an object, so the initialization of the old images looks like this:
class HolderClass {
    private var oldImages: [UIImage]!
    init() {
        oldImages = [UIImage(), UIImage()]
    }
}
Edit 2: This is how the image data is loaded. The following code is in viewDidLoad:
var holderClass = HolderClass()
DispatchQueue.global(qos: .background).async {
    var dataIntermediate = [StoryData]()
    let dataRequest: NSFetchRequest<StoryData> = StoryData.fetchRequest()
    do {
        dataIntermediate = try self.managedObjectContext.fetch(dataRequest)
        for storyData in dataIntermediate {
            var retrievedImageArray = [UIImage]()
            if let loadedImage = DocumentSaveManager.loadImageFromPath(imageId: Int(storyData.id)) {
                retrievedImageArray.append(loadedImage)
            }
            holderClass.oldImages = retrievedImageArray
        }
    } catch {
        print("Failed to load data \(error.localizedDescription)")
    }
}
This is the DocumentSaveManager class.
static func documentDirectoryURL() -> URL {
    let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    return documentsURL
}

static func saveInDocumentsDirectory(foldername: String, filename: String) -> URL {
    let fileURL = documentDirectoryURL().appendingPathComponent(foldername).appendingPathComponent(filename)
    return fileURL
}

static func loadImageFromPath(moduleId: Int, imageId: Int) -> UIImage? {
    let path = saveInDocumentsDirectory(foldername: String(describing: moduleId), filename: String(describing: imageId)).path
    let image = UIImage(contentsOfFile: path)
    if image == nil {
        return nil // Remember to alert user
    }
    return image
}
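One common way to keep peak memory flat in an image-drawing loop like the one above is to wrap each iteration in an autoreleasepool, so the temporary bitmap contexts and decoded image buffers can be freed as the loop runs. A minimal sketch, using the question's own names (drawTextAtLoaction, completionHandler) and simplifying the index bookkeeping:
var newImages = [UIImage]()
for oldImage in holderClass.oldImages {
    // Drain the temporary objects created while drawing before the next iteration
    autoreleasepool {
        let newImage = drawTextAtLoaction(text: "testing", image: oldImage)
        newImages.append(newImage)
    }
}
completionHandler(newImages)
The finished images still have to be kept, so memory will grow with the number of results, but the per-iteration drawing overhead no longer accumulates.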

captureStillImageAsynchronously Issue

I'm currently having an issue with AVCaptureStillImageOutput: when I try to take a picture, the image is nil. My debugging so far has found that the captureStillImageAsynchronously method isn't being called at all, and I haven't been able to test whether the sample buffer is nil. I'm using this method to feed the camera image into another method that combines the camera image and another image into a single image; the thread fails during that last method. When I try to examine the image from the capture method, it is unavailable. What do I need to do to get the camera capture working?
public func capturePhotoOutput() -> UIImage
{
    var image: UIImage = UIImage()
    if let videoConnection = stillImageOutput!.connection(withMediaType: AVMediaTypeVideo)
    {
        print("Video Connection established ---------------------")
        stillImageOutput?.captureStillImageAsynchronously(from: videoConnection, completionHandler: { (sampleBuffer, error) in
            if (sampleBuffer != nil)
            {
                print("Sample Buffer not nil ---------------------")
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                let dataProvider = CGDataProvider(data: imageData! as CFData)
                let cgImageRef = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
                let camImage = UIImage(cgImage: cgImageRef!, scale: CGFloat(1.0), orientation: UIImageOrientation.right)
                image = camImage
            }
            else
            {
                print("nil sample buffer ---------------------")
            }
        })
    }
    if (stillImageOutput?.isCapturingStillImage)!
    {
        print("image capture in progress ---------------------")
    }
    else
    {
        print("capture not in progress -------------------")
    }
    return image
}
EDIT: Added the method below, where the camera image is being used.
func takePicture() -> UIImage
{
    /*
    videoComponent!.getVideoController().capturePhotoOutput
    { (image) in
        //Your code
        guard let topImage = image else
        {
            print("No image")
            return
        }
    }
    */
    let topImage = videoComponent!.getVideoController().capturePhotoOutput() //overlay + camera
    let bottomImage = captureTextView() //text
    let size = CGSize(width: (topImage.size.width), height: (topImage.size.height) + (bottomImage.size.height))
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
    topImage.draw(in: CGRect(x: 0, y: 0, width: size.width, height: (topImage.size.height)))
    bottomImage.draw(in: CGRect(x: (size.width - bottomImage.size.width) / 2, y: (topImage.size.height), width: bottomImage.size.width, height: (bottomImage.size.height)))
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
If you use an asynchronous method, the function returns before the capture has finished, so it returns the wrong value. You can use a completion block instead, like this:
public func capturePhotoOutput(completion: @escaping (UIImage?) -> ())
{
    if let videoConnection = stillImageOutput!.connection(withMediaType: AVMediaTypeVideo)
    {
        print("Video Connection established ---------------------")
        stillImageOutput?.captureStillImageAsynchronously(from: videoConnection, completionHandler: { (sampleBuffer, error) in
            if (sampleBuffer != nil)
            {
                print("Sample Buffer not nil ---------------------")
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                let dataProvider = CGDataProvider(data: imageData! as CFData)
                let cgImageRef = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
                let camImage = UIImage(cgImage: cgImageRef!, scale: CGFloat(1.0), orientation: UIImageOrientation.right)
                completion(camImage)
            }
            else
            {
                completion(nil)
            }
        })
    }
    else
    {
        completion(nil)
    }
}
How to use it:
capturePhotoOutput
{ (image) in
    guard let topImage = image else {
        print("No image")
        return
    }
    //Your code
}
Edit:
func takePicture()
{
    videoComponent!.getVideoController().capturePhotoOutput
    { (image) in
        guard let topImage = image else
        {
            print("No image")
            return
        }
        let bottomImage = self.captureTextView() //text
        let size = CGSize(width: (topImage.size.width), height: (topImage.size.height) + (bottomImage.size.height))
        UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
        topImage.draw(in: CGRect(x: 0, y: 0, width: size.width, height: (topImage.size.height)))
        bottomImage.draw(in: CGRect(x: (size.width - bottomImage.size.width) / 2, y: (topImage.size.height), width: bottomImage.size.width, height: (bottomImage.size.height)))
        let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        self.setPicture(image: newImage)
    }
}

func setPicture(image: UIImage)
{
    //Your code after takePicture
}

Resize UIImage before uploading to Firebase Storage in Swift 3

I have set up my application so that when I press the "cambiaImmagineUtente" button a picker controller appears and I can choose an image, which I then upload to FIRStorage using the UIImagePickerControllerReferenceURL. I cannot find a way to resize the image before uploading it, both to save space and to fit it into a smaller image view.
Here is the code:
@IBAction func cambiaImmagineUtente(_ sender: UIButton) {
    imagePicker.allowsEditing = false
    imagePicker.sourceType = .photoLibrary
    present(imagePicker, animated: true, completion: nil)
}

func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    picker.dismiss(animated: true, completion: nil)
    // if it's a photo from the library, not an image from the camera
    if #available(iOS 8.0, *), let referenceUrl = info[UIImagePickerControllerReferenceURL] as? URL {
        let assets = PHAsset.fetchAssets(withALAssetURLs: [referenceUrl], options: nil)
        let asset = assets.firstObject
        asset?.requestContentEditingInput(with: nil, completionHandler: { (contentEditingInput, info) in
            let imageFile = contentEditingInput?.fullSizeImageURL
            let filePath = FIRAuth.auth()!.currentUser!.uid +
                "/\(Int(Date.timeIntervalSinceReferenceDate * 1000))/\(imageFile!.lastPathComponent)"
            // [START uploadimage]
            self.storageRef.child(filePath)
                .putFile(imageFile!, metadata: nil) { (metadata, error) in
                    if let error = error {
                        // an error occurred
                        print("Error uploading: \(error)")
                        return
                    }
                    self.uploadSuccess(metadata!, storagePath: filePath)
            }
            // [END uploadimage]
        })
    } else {
        guard let image = info[UIImagePickerControllerOriginalImage] as? UIImage else { return }
        guard let imageData = UIImageJPEGRepresentation(image, 0.8) else { return }
        let imagePath = FIRAuth.auth()!.currentUser!.uid +
            "/\(Int(Date.timeIntervalSinceReferenceDate * 1000)).jpg"
        let metadata = FIRStorageMetadata()
        metadata.contentType = "image/jpeg"
        self.storageRef.child(imagePath)
            .put(imageData, metadata: metadata) { (metadata, error) in
                if let error = error {
                    // an error occurred
                    print("Error uploading: \(error)")
                    return
                }
                self.uploadSuccess(metadata!, storagePath: imagePath)
        }
    }
}

func uploadSuccess(_ metadata: FIRStorageMetadata, storagePath: String) {
    print("Upload Succeeded!")
    //self.urlTextView.text = metadata.downloadURL()?.absoluteString
    UserDefaults.standard.set(storagePath, forKey: "storagePath")
    UserDefaults.standard.synchronize()
    //self.downloadPicButton.isEnabled = true
}

func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
    picker.dismiss(animated: true, completion: nil)
}
You can use this:
func resizeImage(image: UIImage, targetSize: CGSize) -> UIImage {
    let size = image.size
    let widthRatio = targetSize.width / image.size.width
    let heightRatio = targetSize.height / image.size.height
    var newSize: CGSize
    if widthRatio > heightRatio {
        newSize = CGSize(width: size.width * heightRatio, height: size.height * heightRatio)
    } else {
        newSize = CGSize(width: size.width * widthRatio, height: size.height * widthRatio)
    }
    let rect = CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height)
    UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
    image.draw(in: rect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage!
}
Use:
let resizedImage = resizeImage(image: selectedImage, targetSize: CGSize.init(width: 300, height: 300))
Also make sure you add a write rule in your Storage security rules that enforces a maximum file size!
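To connect this to the upload code from the question, a minimal sketch (reusing the question's own storageRef, FIRAuth, and uploadSuccess; selectedImage stands for the image picked from the library) would resize first and then upload the JPEG data instead of the original file:
let resizedImage = resizeImage(image: selectedImage, targetSize: CGSize(width: 300, height: 300))
// Compress the resized image and upload the data instead of the full-size file
guard let imageData = UIImageJPEGRepresentation(resizedImage, 0.8) else { return }
let imagePath = FIRAuth.auth()!.currentUser!.uid + "/\(Int(Date.timeIntervalSinceReferenceDate * 1000)).jpg"
let metadata = FIRStorageMetadata()
metadata.contentType = "image/jpeg"
self.storageRef.child(imagePath).put(imageData, metadata: metadata) { (metadata, error) in
    if let error = error {
        print("Error uploading: \(error)")
        return
    }
    self.uploadSuccess(metadata!, storagePath: imagePath)
}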

CIDetector: detected face image is not showing?

I am using CIDetector to detect a face in a UIImage. I am getting the face rect correctly, but when I crop the image to the detected face rect, it does not show in my image view.
I have already checked that my image is not nil.
Here is my code:
@IBAction func detectFaceOnImageView(_: UIButton) {
    let image = myImageView.getFaceImage()
    myImageView.image = image
}

extension UIView {
    func getFaceImage() -> UIImage? {
        let faceDetectorOptions: [String: AnyObject] = [CIDetectorAccuracy: CIDetectorAccuracyHigh as AnyObject]
        let faceDetector: CIDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: faceDetectorOptions)!
        let viewScreenShotImage = generateScreenShot(scaleTo: 1.0)
        if viewScreenShotImage.cgImage != nil {
            let sourceImage = CIImage(cgImage: viewScreenShotImage.cgImage!)
            let features = faceDetector.features(in: sourceImage)
            if features.count > 0 {
                var faceBounds = CGRect.zero
                var faceImage: UIImage?
                for feature in features as! [CIFaceFeature] {
                    faceBounds = feature.bounds
                    let faceCroped: CIImage = sourceImage.cropping(to: faceBounds)
                    faceImage = UIImage(ciImage: faceCroped)
                }
                return faceImage
            } else {
                return nil
            }
        } else {
            return nil
        }
    }

    func generateScreenShot(scaleTo: CGFloat = 3.0) -> UIImage {
        let rect = self.bounds
        UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
        let context = UIGraphicsGetCurrentContext()
        self.layer.render(in: context!)
        let screenShotImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        let aspectRatio = screenShotImage.size.width / screenShotImage.size.height
        let resizedScreenShotImage = screenShotImage.scaleImage(toSize: CGSize(width: self.bounds.size.height * aspectRatio * scaleTo, height: self.bounds.size.height * scaleTo))
        return resizedScreenShotImage!
    }
}
For more information, I am attaching screenshots of the values:
Screen Shot 1
Screen Shot 2
Screen Shot 3
A UIImage created directly from a CIImage often won't render in a UIImageView; render the cropped CIImage into a CGImage through a CIContext first. Try this:
let faceCroped: CIImage = sourceImage.cropping(to: faceBounds)
//faceImage = UIImage(ciImage: faceCroped)
let cgImage: CGImage = {
    let context = CIContext(options: nil)
    return context.createCGImage(faceCroped, from: faceCroped.extent)!
}()
faceImage = UIImage(cgImage: cgImage)

Resize SDWebImage image in Swift

Hello, I am displaying images in my app using SDWebImage. I have this code to resize an image:
func resizeImage(image: UIImage, newWidth: CGFloat) -> UIImage {
    let scale = newWidth / image.size.width
    let newHeight = image.size.height * scale
    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight))
    image.drawInRect(CGRectMake(0, 0, newWidth, newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
The problem is that the above function accepts a UIImage as a parameter, while SDWebImage accepts a URL. How can I call the above resize function with SDWebImage? In short, how can I resize the images that are displayed through SDWebImage here:
cell.profileImageView.sd_setImageWithURL(UIImage().absoluteURL(profileImageUrl as! String), placeholderImage: UIImage.init(named: "default-profile-icon")?.circle!, completed: completionBlock)
Do it like this:
cell.profileImageView.sd_setImageWithURL(
    NSURL(string: profileImageUrl as! String),
    placeholderImage: UIImage.init(named: "default-profile-icon"),
    options: [],
    progress: nil,
    completed: { (image: UIImage?, error: NSError?, cacheType: SDImageCacheType!, imageURL: NSURL?) in
        guard let image = image else { return }
        print("Image arrived!")
        cell.profileImageView.image = resizeImage(image, newWidth: 200)
    }
)
SDWebImage supports this directly through its SDWebImageManagerDelegate protocol. You can use the imageManager:transformDownloadedImage:withURL: method to transform the downloaded image.
You can set the image manager delegate like this:
SDWebImageManager.sharedManager().delegate = self
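As a minimal sketch (Swift 3 naming, the same delegate signature shown in the next answer; the Resizer class name and the call to a Swift 3 port of the question's resizeImage helper are just for illustration), the delegate method could look like this:
class Resizer: NSObject, SDWebImageManagerDelegate {
    // Called after download and before caching; return the transformed image
    func imageManager(_ imageManager: SDWebImageManager, transformDownloadedImage image: UIImage?, with imageURL: URL?) -> UIImage? {
        guard let image = image else { return nil }
        return resizeImage(image: image, newWidth: 200)
    }
}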
Using the shared SDWebImageManager is not a good idea, because other code may be downloading images through it at the same time.
Here is my Swift 3 example with a custom SDWebImageManager in the class and a custom SDWebImageManagerDelegate resizer.
import SDWebImage

class ImageResizer: NSObject, SDWebImageManagerDelegate {
    private func resizeImage(_ image: UIImage, newHeight: CGFloat) -> UIImage {
        let scale = newHeight / image.size.height
        let newWidth = image.size.width * scale
        UIGraphicsBeginImageContextWithOptions(CGSize(width: newWidth, height: newHeight), false, 0)
        image.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }

    public func imageManager(_ imageManager: SDWebImageManager, transformDownloadedImage image: UIImage?, with imageURL: URL?) -> UIImage? {
        guard let _image = image else {
            return nil
        }
        return resizeImage(_image, newHeight: 20)
    }
}

class BasicTrainView: XibView {
    static let imageManager: SDWebImageManager = SDWebImageManager()
    static let imageResizer = ImageResizer()

    func xxx() {
        BasicTrainView.imageManager.delegate = BasicTrainView.imageResizer
        BasicTrainView.imageManager.loadImage(with: logoURL, options: [], progress: nil) { (image, _, error, sdImageCacheType, _, url) -> Void in
            guard let _image = image else {
                self.carrierLogoImageView.image = nil
                return
            }
            self.carrierLogoImageView.image = _image
        }
    }
}
SDWebImage has this functionality built in. This is how to use it:
let imageSize = cell.fanartImageView.bounds.size * UIScreen.main.scale
let transformer = SDImageResizingTransformer(size: imageSize, scaleMode: .fill)
cell.fanartImageView.sd_setImage(with: url, placeholderImage: image,
                                 options: SDWebImageOptions(rawValue: 0),
                                 context: [.imageTransformer: transformer],
                                 progress: nil) { (image, error, cache, url) in
    if error != nil {
        // handle error
    }
}
