I have a UITextView into which I place images and type text, and once I have finished with the text view I upload its contents to Parse.
If I add only one image to the text view it lets me upload the UITextView contents to Parse without any problems, but when I add more than one image I get the error "data is larger than 10mb...".
Now I am wondering how I can reduce the size of this PFFile?
Or is there a way to reduce the size of the images before or after adding them to the text view? Possibly extract them from the text view and reduce their size before uploading to Parse?
This is my code:
Here is where the text & images from the textView are stored:
var recievedText = NSAttributedString()
And here is how I upload it to Parse:
let post = PFObject(className: "Posts")
let uuid = NSUUID().UUIDString
post["username"] = PFUser.currentUser()!.username!
post["titlePost"] = recievedTitle
let data: NSData = NSKeyedArchiver.archivedDataWithRootObject(recievedText)
post["textPost"] = PFFile(name:"text.txt", data:data)
post["uuid"] = "\(PFUser.currentUser()!.username!) \(uuid)"
if PFUser.currentUser()?.valueForKey("profilePicture") != nil {
    post["profilePicture"] = PFUser.currentUser()!.valueForKey("profilePicture") as! PFFile
}
post.saveInBackgroundWithBlock ({(success:Bool, error:NSError?) -> Void in
})
Best regards.
How I add the images
image1.image = images[1].imageWithBorder(40)
let oldWidth1 = image1.image!.size.width;
let scaleFactor1 = oldWidth1 / (blogText.frame.size.width - 10 )
image1.image = UIImage(CGImage: image1.image!.CGImage!, scale: scaleFactor1, orientation: .Up)
let attString1 = NSAttributedString(attachment: image1)
blogText.textStorage.insertAttributedString(attString1, atIndex: blogText.selectedRange.location)
You should resize the pictures to make sure they are small enough before uploading. See this answer.
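In this case the images live inside the attributed string as NSTextAttachments, so one option is to shrink each attachment before you archive the string. Here is a minimal sketch (modern Swift syntax; `shrinkAttachments` and `maxDimension` are illustrative names, not Parse or UIKit API):
import UIKit

func shrinkAttachments(in text: NSAttributedString, maxDimension: CGFloat = 800) -> NSAttributedString {
    let result = NSMutableAttributedString(attributedString: text)
    result.enumerateAttribute(.attachment, in: NSRange(location: 0, length: result.length), options: []) { value, range, _ in
        guard let attachment = value as? NSTextAttachment,
              let image = attachment.image,
              max(image.size.width, image.size.height) > maxDimension else { return }

        // Redraw the image at a smaller pixel size.
        let scale = maxDimension / max(image.size.width, image.size.height)
        let newSize = CGSize(width: image.size.width * scale, height: image.size.height * scale)
        let smaller = UIGraphicsImageRenderer(size: newSize).image { _ in
            image.draw(in: CGRect(origin: .zero, size: newSize))
        }

        // Re-encoding as JPEG keeps the archived data much smaller than the raw bitmap.
        let newAttachment = NSTextAttachment()
        if let data = smaller.jpegData(compressionQuality: 0.7) {
            newAttachment.image = UIImage(data: data)
        } else {
            newAttachment.image = smaller
        }
        result.replaceCharacters(in: range, with: NSAttributedString(attachment: newAttachment))
    }
    return result
}
You would run `recievedText` through something like this just before the `NSKeyedArchiver` / `PFFile` step, so the archived data stays well under Parse's 10 MB file limit.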
Related
A GIF image is loaded into a UIImageView (using this extension) and another UIImageView is overlaid on it. Everything works fine, but the problem is that when I try to combine the two with the code below, the result is a still image (.jpg). I want to combine them so that the result is still an animated image (.gif).
let bottomImage = gifPlayer.image
let topImage = UIImage
let size = CGSize(width: (bottomImage?.size.width)!, height: (bottomImage?.size.height)!)
UIGraphicsBeginImageContext(size)
let areaSize = CGRect(x: 0, y: 0, width: size.width, height: size.height)
bottomImage!.draw(in: areaSize)
topImage!.draw(in: areaSize, blendMode: .normal, alpha: 0.8)
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
When an animated GIF is used in a UIImageView, it becomes an array of UIImage frames.
We can set that array with (for example):
imageView.animationImages = arrayOfImages
imageView.animationDuration = 1.0
or, we can set the .image property to an animatedImage -- that's how the GIF-Swift code you are using works:
if let img = UIImage.gifImageWithName("funny") {
    bottomImageView.image = img
}
in that case, the image also contains the duration:
img.duration
So, to generate a new animated GIF with the border/overlay image, you need to get that array of images and generate each "frame" with the border added to it.
Here's a quick example...
This assumes:
you are using GIF-Swift
you have added bottomImageView and topImageView in Storyboard
you have a GIF in the bundle named "funny.gif" (edit the code if yours is different)
you have a "border.png" in assets (again, edit the code as needed)
and you have a button to connect to the #IBAction:
import UIKit
import ImageIO
import UniformTypeIdentifiers
class animImageViewController: UIViewController {

    @IBOutlet var bottomImageView: UIImageView!
    @IBOutlet var topImageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        if let img = UIImage.gifImageWithName("funny") {
            bottomImageView.image = img
        }

        if let img = UIImage(named: "border") {
            topImageView.image = img
        }
    }

    @IBAction func saveButtonTapped(_ sender: Any) {
        generateNewGif(from: bottomImageView, with: topImageView)
    }

    func generateNewGif(from animatedImageView: UIImageView, with overlayImageView: UIImageView) {

        var images: [UIImage]!
        var delayTime: Double!

        guard let overlayImage = overlayImageView.image else {
            print("Could not get top / overlay image!")
            return
        }

        if let imgs = animatedImageView.image?.images {
            // the image view is using .image = animatedImage
            // unwrap the duration
            if let dur = animatedImageView.image?.duration {
                images = imgs
                delayTime = dur / Double(images.count)
            } else {
                print("Image view is using an animatedImage, but could not get the duration!")
                return
            }
        } else if let imgs = animatedImageView.animationImages {
            // the image view is using .animationImages
            images = imgs
            delayTime = animatedImageView.animationDuration / Double(images.count)
        } else {
            print("Could not get images array!")
            return
        }

        // we now have a valid [UIImage] array, and
        //  a valid inter-frame duration, and
        //  a valid "overlay" UIImage

        // generate unique file name
        let destinationFilename = String(NSUUID().uuidString + ".gif")

        // create empty file in temp folder to hold gif
        let destinationURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent(destinationFilename)

        // metadata for gif file to describe it as an animated gif
        let fileDictionary = [kCGImagePropertyGIFDictionary : [kCGImagePropertyGIFLoopCount : 0]]

        // create the file and set the file properties
        guard let animatedGifFile = CGImageDestinationCreateWithURL(destinationURL as CFURL, UTType.gif.identifier as CFString, images.count, nil) else {
            print("error creating file")
            return
        }
        CGImageDestinationSetProperties(animatedGifFile, fileDictionary as CFDictionary)

        let frameDictionary = [kCGImagePropertyGIFDictionary : [kCGImagePropertyGIFDelayTime: delayTime]]

        // use original size of gif
        let sz: CGSize = images[0].size

        let renderer: UIGraphicsImageRenderer = UIGraphicsImageRenderer(size: sz)

        // loop through the images
        //  drawing the top/border image on top of each "frame" image with 80% alpha
        //  then writing the combined image to the gif file
        images.forEach { img in
            let combinedImage = renderer.image { ctx in
                img.draw(at: .zero)
                overlayImage.draw(in: CGRect(origin: .zero, size: sz), blendMode: .normal, alpha: 0.8)
            }
            guard let cgFrame = combinedImage.cgImage else {
                print("error creating cgImage")
                return
            }
            // add the combined image to the new animated gif
            CGImageDestinationAddImage(animatedGifFile, cgFrame, frameDictionary as CFDictionary)
        }

        // done writing
        CGImageDestinationFinalize(animatedGifFile)

        print("New GIF created at:")
        print(destinationURL)
        print()

        // do something with the newly created file...
        //  maybe move it to documents folder, or
        //  upload it somewhere, or
        //  save to photos library, etc
    }

}
Notes:
the code is based on this article: How to Make an Animated GIF Using Swift
this should be considered Example Code Only!!! -- a starting-point for you, not a "production ready" solution.
I've created several UIViews inside of a UIScrollView that resize dynamically based on values I type into Height and Width text fields. Once the UIView resizes I save the contents of the UIScrollView as PDF data.
I find that the dimensions of the UIView within the PDF (when measured in Adobe Illustrator) are always rounded to a third.
For example:
1.5 -> 1.333
1.75 -> 1.666
I check the constant values each time before the constraints are updated and they are accurate. Can anyone explain why the UIViews have incorrect dimensions once rendered as a PDF?
@IBAction func updateDimensions(_ sender: Any) {
    guard let length = NumberFormatter().number(from: lengthTextField.text ?? "") else { return }
    guard let width = NumberFormatter().number(from: widthTextField.text ?? "") else { return }
    guard let height = NumberFormatter().number(from: heightTextField.text ?? "") else { return }

    let flapHeight = CGFloat(truncating: width)/2
    let lengthFloat = CGFloat(truncating: length)
    let widthFloat = CGFloat(truncating: width)
    let heightFloat = CGFloat(truncating: height)

    UIView.animate(withDuration: 0.3) {
        self.faceAWidthConstraint.constant = lengthFloat
        self.faceAHeightConstraint.constant = heightFloat
        self.faceBWidthConstraint.constant = widthFloat
        self.faceA1HeightConstraint.constant = flapHeight
        self.view.layoutIfNeeded()
    }
}
func createPDFfrom(aView: UIView, saveToDocumentsWithFileName fileName: String) {
    let pdfData = NSMutableData()
    UIGraphicsBeginPDFContextToData(pdfData, aView.bounds, nil)
    UIGraphicsBeginPDFPage()

    guard let pdfContext = UIGraphicsGetCurrentContext() else { return }

    aView.layer.render(in: pdfContext)
    UIGraphicsEndPDFContext()

    if let documentDirectories = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first {
        let documentsFileName = documentDirectories + "/" + fileName
        debugPrint(documentsFileName)
        pdfData.write(toFile: documentsFileName, atomically: true)
    }
}
You should not be using layer.render(in:) to render your PDF. The reason it is always a third is that you must be on a 3x device (it would round to halves on a 2x device and to whole points on a 1x device), so there are 3 pixels per point. When iOS converts your constraints to pixels, the best it can do is round to the nearest third, because it has to pick an integer pixel.
A PDF can have much higher pixel density (or use vector art with effectively infinite resolution). So instead of using layer.render(in:), which dumps the rasterized pixels of the layer into your PDF, you should draw the contents into the PDF context manually (i.e. use UIBezierPath, UIImage.draw, etc.). This lets the PDF capture the full resolution of any rasterized images you have, and lets it keep any vectors as vectors rather than degrading them into rasterized pixels constrained by the screen of whatever device you happen to be on.
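For example, here is a minimal sketch of that approach using UIGraphicsPDFRenderer (the face size, page bounds, and file name are placeholders, not the poster's actual values). The rectangle is drawn as a vector path straight into the PDF context, so it keeps its exact point dimensions:
import UIKit

func createPDF(faceSize: CGSize, pageBounds: CGRect, fileName: String) -> URL? {
    let renderer = UIGraphicsPDFRenderer(bounds: pageBounds)
    let url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent(fileName)
    do {
        try renderer.writePDF(to: url) { context in
            context.beginPage()
            // Draw the face as a vector path at its exact size in points;
            // nothing here is rasterized at the screen's 2x/3x scale.
            let faceRect = CGRect(origin: CGPoint(x: 20, y: 20), size: faceSize)
            UIColor.black.setStroke()
            UIBezierPath(rect: faceRect).stroke()
        }
        return url
    } catch {
        print("PDF write failed: \(error)")
        return nil
    }
}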
I am trying to shrink the size of an image using the code below. After converting the UIImage to a compressed JPEG representation and back to a UIImage, the UIImage is still too large. How can I shrink the file size of the UIImage?
func changeFileSize() -> UIImage {
    var needToCompress: Bool = true
    var compressingValue: CGFloat = 1.0
    let bcf = ByteCountFormatter()

    while needToCompress && compressingValue > 0.0 {
        let data = image.jpegData(compressionQuality: compressingValue)!
        if data.count < 1024 * 100 {
            needToCompress = false
            image = UIImage(data: data)
            bcf.allowedUnits = [.useKB] // optional: restricts the units to KB only
            bcf.countStyle = .file
            var newImage = UIImage(data: data)
            let string = bcf.string(fromByteCount: Int64(newImage!.jpegData(compressionQuality: compressingValue)!.count))
            print("Image Pixels: \(CGSize(width: newImage!.size.width*newImage!.scale, height: newImage!.size.height*newImage!.scale))")
            print("final formatted result to be returned: \(string)")
            print("New compression value: \(compressingValue)")
            return UIImage(data: (newImage?.jpegData(compressionQuality: compressingValue))!)!
        } else {
            compressingValue -= 0.1
            bcf.allowedUnits = [.useKB] // optional: restricts the units to KB only
            bcf.countStyle = .file
            let string = bcf.string(fromByteCount: Int64(image.jpegData(compressionQuality: compressingValue)!.count))
            print("formatted result: \(string)")
            print("New compression value: \(compressingValue)")
        }
    }

    bcf.allowedUnits = [.useKB] // optional: restricts the units to KB only
    bcf.countStyle = .file
    let string = bcf.string(fromByteCount: Int64(image.jpegData(compressionQuality: 1.0)!.count))
    print("formatted result: \(string)")
    compressionLabel.text = string
    print("Image Pixels: \(CGSize(width: image.size.width*image.scale, height: image.size.height*image.scale))")
    return image
}
The JPEG (or PNG) data just tells UIImage how to re-create the original image. Your code is good at reducing the size of the JPEG data, but that has no effect on the actual image UIKit will render from said JPEG data. Compressing the JPEG data is good for saving locally or sending to an API, but it won't help you save memory during runtime (which I assume is what you want).
Instead, the process you may be looking for is downsampling, which reduces the actual pixels contained in the image rather than reducing the size of the JPEG representation as you are currently doing.
You can downsample by making a thumbnail like this:
func reduceImageSize(for image: UIImage, maxDimension: CGFloat) -> UIImage? {
    if let imageData = image.jpegData(compressionQuality: 1.0),
       let imageSource = CGImageSourceCreateWithData(imageData as CFData, nil) {
        let downsamplingOptions = [
            kCGImageSourceCreateThumbnailFromImageAlways: true,
            kCGImageSourceThumbnailMaxPixelSize: maxDimension
        ] as CFDictionary
        let downsampledImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, downsamplingOptions)!
        return UIImage(cgImage: downsampledImage)
    }
    return nil
}
Here's a quick test to show it working:
let largeImage = UIImage(named: "fifteenpointsevenmb")!
print("large image size: \(largeImage.jpegData(compressionQuality: 1.0)!)")
print("large image dimensions: \(largeImage.size.width) x \(largeImage.size.height)")
let smallImage = reduceImageSize(for: largeImage, maxDimension: 300)!
print("small image size: \(smallImage.jpegData(compressionQuality: 1.0)!)")
print("small image dimensions: \(smallImage.size.width) x \(smallImage.size.height)")
>> large image size: 6172365 bytes
>> large image dimensions: 2532.0 x 1170.0
>> small image size: 97415 bytes
>> small image dimensions: 300.0 x 139.0
There is indeed a WWDC 2018 video that discusses many of these details and shows how to solve the original problem: WWDC18 - Image and Graphics Best Practices. In fact, the code above is a modification of the code provided in that video. Cheers.
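For reference, a closely related variant (a sketch, not the poster's code) downsamples straight from a file URL, which follows the WWDC recommendation more closely because the full-size image is never decoded and there is no jpegData round trip. Here `imageURL` is assumed to point at a local image file:
import UIKit
import ImageIO

func downsampledImage(at imageURL: URL, maxDimension: CGFloat, scale: CGFloat) -> UIImage? {
    // Don't decode the full-size image just to create the source.
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(imageURL as CFURL, sourceOptions) else { return nil }

    let downsampleOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,          // decode at thumbnail size, right now
        kCGImageSourceCreateThumbnailWithTransform: true,    // respect EXIF orientation
        kCGImageSourceThumbnailMaxPixelSize: maxDimension * scale
    ] as CFDictionary

    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, downsampleOptions) else { return nil }
    return UIImage(cgImage: cgImage)
}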
There is no such thing as a compressed UIImage. That is the whole point of UIImage. The UIImage is the bitmap by which the image is actually drawn — what you probably think of as the actual pixels of the image. The JPEG data is just data, and uses compression. But to turn this into a UIImage, we must uncompress the data and derive the pixels.
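To put a rough number on that: a decoded image costs roughly width x height x 4 bytes in memory, so the 2532 x 1170 example above works out to about 2532 x 1170 x 4 ≈ 11.8 MB once drawn, regardless of how small its JPEG data is. Only reducing the pixel dimensions (downsampling, as in the answer above) changes that figure.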
In the function below (didPressTakePhoto), I am trying to take a series of pictures (10 in this case), store them in an array, and display them as an animation in the "gif" image view. Yet the program keeps crashing and I have no idea why. This all happens after one button click, hence the function name. Also, I tried moving the animation code outside the for loop, but the imageArray would then lose its value for some reason.
func didPressTakePhoto() {
    if let videoConnection = stillImageOutput?.connectionWithMediaType(AVMediaTypeVideo) {
        videoConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
        stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: {
            (sampleBuffer, error) in
            //var counter = 0
            if sampleBuffer != nil {
                for var index = 0; index < 10; ++index {
                    let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                    let dataProvider = CGDataProviderCreateWithCFData(imageData)
                    let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)

                    var imageArray: [UIImage] = []
                    let image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Right)
                    imageArray.append(image)
                    imageArray.insert(image, atIndex: index++)

                    self.tempImageView.image = image
                    self.tempImageView.hidden = false
                    //UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)

                    var gif: UIImageView!
                    gif.animationImages = imageArray
                    gif.animationRepeatCount = -1
                    gif.animationDuration = 1
                    gif.startAnimating()
                }
            }
        })
    }
}
Never try to make an array of images (i.e., a [UIImage] as you are doing). A UIImage is very big, so an array of many images is huge and you will run out of memory and crash.
Save your images to disk, maintaining only references to them (i.e. an array of their names).
Before using your images in the interface, reduce them to the actual physical size you will need for that interface. Using a full-size image for a mere screen-sized display (or smaller) is a huge waste of energy. You can use the ImageIO framework to get a "thumbnail" smaller version of the image from disk without wasting memory.
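As a rough illustration of the first two points (a sketch only; the folder, file format, and quality are illustrative, in current Swift syntax): write each captured frame to disk and keep just its URL. A screen-sized version can later be loaded with the same CGImageSource thumbnail technique shown in the downsampling answer above.
import UIKit

var frameURLs: [URL] = []

func saveFrameToDisk(_ image: UIImage) -> URL? {
    guard let data = image.jpegData(compressionQuality: 0.9) else { return nil }
    let url = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)[0]
        .appendingPathComponent(UUID().uuidString + ".jpg")
    do {
        try data.write(to: url)
        return url
    } catch {
        print("Could not write frame: \(error)")
        return nil
    }
}

// In the capture loop, instead of imageArray.append(image):
// if let url = saveFrameToDisk(image) { frameURLs.append(url) }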
You are creating a new [UIImage] in every iteration of the loop, so in the last iteration there is only one image; you should move the imageArray creation out of the loop. Having said that, you should also take into account what @matt answered.
Is there a way to check the image dimensions (i.e. height and width) before downloading (or partially downloading) the image from a URL? I have found ways to get the image size, but that doesn't help.
Basically I want to calculate the correct height of a UITableView row before the image is downloaded. Is this possible?
You can do a partial download of the image data and then extract the image size from that. You will have to get the data structure of the image format you are using and parse it to some extent. It is possible and not that hard if you are capable of lower level coding.
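For example, here is a minimal sketch of that idea (assuming the server honors HTTP Range requests; the 50 KB cutoff is arbitrary): fetch only the first chunk of the file and hand it to an incremental CGImageSource, which can usually report the pixel dimensions from the header alone.
import Foundation
import ImageIO

func fetchImageDimensions(from url: URL,
                          completion: @escaping ((width: Int, height: Int)?) -> Void) {
    var request = URLRequest(url: url)
    request.setValue("bytes=0-51199", forHTTPHeaderField: "Range") // first ~50 KB only

    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data else { return completion(nil) }

        // Feed the partial data to an incremental image source.
        let source = CGImageSourceCreateIncremental(nil)
        CGImageSourceUpdateData(source, data as CFData, false) // false = more data may follow

        guard
            let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
            let width = props[kCGImagePropertyPixelWidth] as? Int,
            let height = props[kCGImagePropertyPixelHeight] as? Int
        else { return completion(nil) }

        completion((width, height))
    }.resume()
}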
You can do it by accessing the image's header details.
In Swift 3.0, the code below will help you:
if let imageSource = CGImageSourceCreateWithURL(url! as CFURL, nil) {
    if let imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil) as Dictionary? {
        let pixelWidth = imageProperties[kCGImagePropertyPixelWidth] as! Int
        let pixelHeight = imageProperties[kCGImagePropertyPixelHeight] as! Int
        print("the image width is: \(pixelWidth)")
        print("the image height is: \(pixelHeight)")
    }
}
Create an IBOutlet for the height constraint.
For example, here the image from the server is in a 16:9 ratio.
This will automatically adjust the height for all screen sizes; the image view's contentMode is aspect fit.
override func viewDidLoad() {
    super.viewDidLoad()
    cnstHeight.constant = (self.view.frame.width/16)*9
}
Swift 4 Method:
func getImageDimensions(from url: URL) -> (width: Int, height: Int) {
    if let imageSource = CGImageSourceCreateWithURL(url as CFURL, nil) {
        if let imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil) as Dictionary? {
            let pixelWidth = imageProperties[kCGImagePropertyPixelWidth] as! Int
            let pixelHeight = imageProperties[kCGImagePropertyPixelHeight] as! Int
            return (pixelWidth, pixelHeight)
        }
    }
    return (0, 0)
}