Is there a way to check an image's dimensions (i.e. height and width) before downloading, or by only partially downloading, the image from a URL? I have found ways to get the image's file size, but that doesn't help.
Basically I want to calculate the correct height of a UITableView row before the image is downloaded. Is this possible?
You can do a partial download of the image data and then extract the image size from that. You will need the data-structure layout of the image format you are using and have to parse it to some extent. It is possible and not that hard if you are comfortable with lower-level coding.
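As a minimal sketch of that idea, assuming the server supports HTTP Range requests: fetch only the first few kilobytes and let ImageIO parse the header incrementally (the 16 KB budget is an arbitrary choice; most formats store their dimensions near the start of the file).

import Foundation
import ImageIO

// Sketch: download only the start of the file, then read the dimensions
// from the partial data with an incremental image source.
func fetchImageSize(from url: URL, completion: @escaping (CGSize?) -> Void) {
    var request = URLRequest(url: url)
    request.setValue("bytes=0-16383", forHTTPHeaderField: "Range")
    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data else { return completion(nil) }
        let source = CGImageSourceCreateIncremental(nil)
        CGImageSourceUpdateData(source, data as CFData, false)
        guard let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
              let width = props[kCGImagePropertyPixelWidth] as? Int,
              let height = props[kCGImagePropertyPixelHeight] as? Int else {
            return completion(nil)
        }
        completion(CGSize(width: width, height: height))
    }.resume()
}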
You can do it by reading the image's header details.
In Swift 3.0, the code below will help you (it requires import ImageIO):
if let imageSource = CGImageSourceCreateWithURL(url! as CFURL, nil) {
    if let imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil) as Dictionary? {
        let pixelWidth = imageProperties[kCGImagePropertyPixelWidth] as! Int
        let pixelHeight = imageProperties[kCGImagePropertyPixelHeight] as! Int
        print("the image width is: \(pixelWidth)")
        print("the image height is: \(pixelHeight)")
    }
}
Create an IBOutlet for the height constraint (cnstHeight here).
e.g. here the image from the server has a 16:9 ratio.
This will automatically adjust the height for all screen sizes. The image view's contentMode is aspectFit:
override func viewDidLoad() {
    super.viewDidLoad()
    cnstHeight.constant = (self.view.frame.width / 16) * 9
}
Swift 4 Method:
func getImageDimensions(from url: URL) -> (width: Int, height: Int) {
    if let imageSource = CGImageSourceCreateWithURL(url as CFURL, nil) {
        if let imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil) as Dictionary? {
            let pixelWidth = imageProperties[kCGImagePropertyPixelWidth] as! Int
            let pixelHeight = imageProperties[kCGImagePropertyPixelHeight] as! Int
            return (pixelWidth, pixelHeight)
        }
    }
    return (0, 0)
}
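For example, a hypothetical call site that derives a table row height from the result (imageURL and tableView are placeholder names; the (0, 0) failure case must be guarded):

let dims = getImageDimensions(from: imageURL)
if dims.width > 0 {
    // scale the row to the image's aspect ratio at the table's width
    let rowHeight = tableView.frame.width * CGFloat(dims.height) / CGFloat(dims.width)
}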
Related
I am working on a video editing app where each video gets squared in such a way that no portion of the video is cropped. For a portrait video this leaves black portions on the left and right, and for a landscape video on the top and bottom. The black portions are part of the video itself; they are not added by AVPlayerViewController.
I need to cover these black portions with some CALayers.
What will be the frame (CGRect) of the CALayer?
I am getting the video dimensions with the naturalSize property, which includes the black portions.
Is there any way to get the video dimensions without the black portions (i.e. the dimensions of the actual video content)? Or
is there any way to get the CGRect of the black areas of the video?
func initAspectRatioOfVideo(with fileURL: URL) -> Double {
    let resolution = resolutionForLocalVideo(url: fileURL)
    guard let width = resolution?.width, let height = resolution?.height else {
        return 0
    }
    return Double(height / width)
}

private func resolutionForLocalVideo(url: URL) -> CGSize? {
    guard let track = AVURLAsset(url: url).tracks(withMediaType: AVMediaType.video).first else { return nil }
    let size = track.naturalSize.applying(track.preferredTransform)
    return CGSize(width: abs(size.width), height: abs(size.height))
}
This is a more concise version of Vlad Pulichev's answer.
var aspectRatio: CGFloat! // use the function to assign your variable

func getVideoResolution(url: String) -> CGFloat? {
    guard let track = AVURLAsset(url: URL(string: url)!).tracks(withMediaType: AVMediaType.video).first else { return nil }
    let size = track.naturalSize.applying(track.preferredTransform)
    return abs(size.height) / abs(size.width)
}
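To answer the CGRect part directly, here is a minimal sketch, assuming you know the aspect ratio of the actual content before it was squared (that ratio is an input you must supply). AVMakeRect computes the rect the content occupies inside the squared frame, and the black bars are the leftover strips on either side of it.

import AVFoundation
import UIKit

// Sketch: compute the two black-bar rects inside a squared video frame,
// given the content's original aspect ratio.
func blackBarRects(contentAspect: CGSize, inside squareRect: CGRect) -> (CGRect, CGRect) {
    let contentRect = AVMakeRect(aspectRatio: contentAspect, insideRect: squareRect)
    if contentRect.height < squareRect.height {
        // landscape content: bars above and below
        let top = CGRect(x: squareRect.minX, y: squareRect.minY,
                         width: squareRect.width, height: contentRect.minY - squareRect.minY)
        let bottom = CGRect(x: squareRect.minX, y: contentRect.maxY,
                            width: squareRect.width, height: squareRect.maxY - contentRect.maxY)
        return (top, bottom)
    } else {
        // portrait content: bars left and right
        let left = CGRect(x: squareRect.minX, y: squareRect.minY,
                          width: contentRect.minX - squareRect.minX, height: squareRect.height)
        let right = CGRect(x: contentRect.maxX, y: squareRect.minY,
                           width: squareRect.maxX - contentRect.maxX, height: squareRect.height)
        return (left, right)
    }
}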
A GIF image is loaded into a UIImageView (using this extension) and another UIImageView is overlaid on it. Everything works fine, but when I try to combine the two with the code below, the result is a still image (.jpg). I want the combined result to be an animated image (.gif) as well.
let bottomImage = gifPlayer.image
let topImage = topImageView.image // the overlaid image view's image
let size = CGSize(width: (bottomImage?.size.width)!, height: (bottomImage?.size.height)!)
UIGraphicsBeginImageContext(size)
let areaSize = CGRect(x: 0, y: 0, width: size.width, height: size.height)
bottomImage!.draw(in: areaSize)
topImage!.draw(in: areaSize, blendMode: .normal, alpha: 0.8)
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
When using an animated GIF in a UIImageView, it becomes an array of UIImage.
We can set that array with (for example):
imageView.animationImages = arrayOfImages
imageView.animationDuration = 1.0
or, we can set the .image property to an animatedImage -- that's how the GIF-Swift code you are using works:
if let img = UIImage.gifImageWithName("funny") {
    bottomImageView.image = img
}
in that case, the image itself also contains the duration:
img.duration
So, to generate a new animated GIF with the border/overlay image, you need to get that array of images and generate each "frame" with the border added to it.
Here's a quick example...
This assumes:
you are using GIF-Swift
you have added bottomImageView and topImageView in Storyboard
you have a GIF in the bundle named "funny.gif" (edit the code if yours is different)
you have a "border.png" in assets (again, edit the code as needed)
and you have a button to connect to the #IBAction:
import UIKit
import ImageIO
import UniformTypeIdentifiers

class animImageViewController: UIViewController {

    @IBOutlet var bottomImageView: UIImageView!
    @IBOutlet var topImageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        if let img = UIImage.gifImageWithName("funny") {
            bottomImageView.image = img
        }
        if let img = UIImage(named: "border") {
            topImageView.image = img
        }
    }

    @IBAction func saveButtonTapped(_ sender: Any) {
        generateNewGif(from: bottomImageView, with: topImageView)
    }

    func generateNewGif(from animatedImageView: UIImageView, with overlayImageView: UIImageView) {
        var images: [UIImage]!
        var delayTime: Double!

        guard let overlayImage = overlayImageView.image else {
            print("Could not get top / overlay image!")
            return
        }

        if let imgs = animatedImageView.image?.images {
            // the image view is using .image = animatedImage
            // unwrap the duration
            if let dur = animatedImageView.image?.duration {
                images = imgs
                delayTime = dur / Double(images.count)
            } else {
                print("Image view is using an animatedImage, but could not get the duration!")
                return
            }
        } else if let imgs = animatedImageView.animationImages {
            // the image view is using .animationImages
            images = imgs
            delayTime = animatedImageView.animationDuration / Double(images.count)
        } else {
            print("Could not get images array!")
            return
        }

        // we now have a valid [UIImage] array,
        // a valid inter-frame duration, and
        // a valid "overlay" UIImage

        // generate a unique file name
        let destinationFilename = String(NSUUID().uuidString + ".gif")

        // create an empty file in the temp folder to hold the gif
        let destinationURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent(destinationFilename)

        // metadata for the gif file describing it as an animated gif
        let fileDictionary = [kCGImagePropertyGIFDictionary: [kCGImagePropertyGIFLoopCount: 0]]

        // create the file and set the file properties
        guard let animatedGifFile = CGImageDestinationCreateWithURL(destinationURL as CFURL, UTType.gif.identifier as CFString, images.count, nil) else {
            print("error creating file")
            return
        }
        CGImageDestinationSetProperties(animatedGifFile, fileDictionary as CFDictionary)

        // per-frame delay metadata
        let frameDictionary = [kCGImagePropertyGIFDictionary: [kCGImagePropertyGIFDelayTime: delayTime]]

        // use the original size of the gif
        let sz: CGSize = images[0].size
        let renderer: UIGraphicsImageRenderer = UIGraphicsImageRenderer(size: sz)

        // loop through the images,
        // drawing the top/border image on top of each "frame" image with 80% alpha,
        // then writing the combined image to the gif file
        images.forEach { img in
            let combinedImage = renderer.image { ctx in
                img.draw(at: .zero)
                overlayImage.draw(in: CGRect(origin: .zero, size: sz), blendMode: .normal, alpha: 0.8)
            }
            guard let cgFrame = combinedImage.cgImage else {
                print("error creating cgImage")
                return
            }
            // add the combined image to the new animated gif
            CGImageDestinationAddImage(animatedGifFile, cgFrame, frameDictionary as CFDictionary)
        }

        // done writing
        CGImageDestinationFinalize(animatedGifFile)

        print("New GIF created at:")
        print(destinationURL)
        print()

        // do something with the newly created file...
        // maybe move it to the documents folder, or
        // upload it somewhere, or
        // save it to the photos library, etc.
    }
}
Notes:
the code is based on this article: How to Make an Animated GIF Using Swift
this should be considered Example Code Only!!! -- a starting-point for you, not a "production ready" solution.
Sorry, I duplicated this question, How to build AVDepthData manually, because it doesn't have the answers I want and I don't have enough rep to comment there. If you don't mind, I can delete my question later and ask somebody to move future answers to that topic.
So, my goal is to create depth data and attach it to an arbitrary image. There is an article on how to do it, https://developer.apple.com/documentation/avfoundation/avdepthdata/creating_auxiliary_depth_data_manually, but I don't know how to implement any step of it. I won't post all my questions at once; I'll start with the first one.
As a first step, the depth image must be converted per-pixel from grayscale to depth or disparity values. I took this snippet from the aforementioned topic:
func buildDepth(image: UIImage) -> AVDepthData? {
    let width = Int(image.size.width)
    let height = Int(image.size.height)
    var maybeDepthMapPixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_DisparityFloat32, nil, &maybeDepthMapPixelBuffer)
    guard status == kCVReturnSuccess, let depthMapPixelBuffer = maybeDepthMapPixelBuffer else {
        return nil
    }
    CVPixelBufferLockBaseAddress(depthMapPixelBuffer, .init(rawValue: 0))
    guard let baseAddress = CVPixelBufferGetBaseAddress(depthMapPixelBuffer) else {
        return nil
    }
    let buffer = unsafeBitCast(baseAddress, to: UnsafeMutablePointer<Float32>.self)
    for i in 0..<width * height {
        buffer[i] = 0 // disparity must be calculated somehow, but set to 0 for testing purposes
    }
    CVPixelBufferUnlockBaseAddress(depthMapPixelBuffer, .init(rawValue: 0))
    let info: [AnyHashable: Any] = [kCGImagePropertyPixelFormat: kCVPixelFormatType_DisparityFloat32,
                                    kCGImagePropertyWidth: image.size.width,
                                    kCGImagePropertyHeight: image.size.height,
                                    kCGImagePropertyBytesPerRow: CVPixelBufferGetBytesPerRow(depthMapPixelBuffer)]
    let metadata = generateMetadata(image: image)
    let dic: [AnyHashable: Any] = [kCGImageAuxiliaryDataInfoDataDescription: info,
                                   // I get an error when converting baseAddress to CFData
                                   kCGImageAuxiliaryDataInfoData: baseAddress as! CFData,
                                   kCGImageAuxiliaryDataInfoMetadata: metadata]
    guard let depthData = try? AVDepthData(fromDictionaryRepresentation: dic) else {
        return nil
    }
    return depthData
}
Then the article says to load the base address of the pixel buffer (which holds the disparity map) as CFData and pass it as the kCGImageAuxiliaryDataInfoData value in a CFDictionary. But I get an error when converting baseAddress to CFData. I tried converting the pixel buffer itself too, but without luck. What do I have to pass as kCGImageAuxiliaryDataInfoData? Did I create the disparity buffer correctly in the first place?
Aside from this problem, it would be great if somebody could direct me to some sample code for the whole thing.
Your question really helped me get from CVPixelBuffer to AVDepthData, so thank you. It got me about 95% of the way there.
To fix your (and my) issue, I added the following:
let bytesPerRow = CVPixelBufferGetBytesPerRow(depthMapPixelBuffer)
let size = bytesPerRow * height

... code code code ...

CVPixelBufferLockBaseAddress(depthMapPixelBuffer!, .init(rawValue: 0))
let baseAddress = CVPixelBufferGetBaseAddressOfPlane(depthMapPixelBuffer!, 0)
let data = NSData(bytes: baseAddress, length: size)

... code code code ...

let dic: [AnyHashable: Any] = [kCGImageAuxiliaryDataInfoDataDescription: info,
                               kCGImageAuxiliaryDataInfoData: data,
                               kCGImageAuxiliaryDataInfoMetadata: metadata]
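Pieced into the question's buildDepth function, the extraction step would look roughly like this (a sketch reusing the question's variable names; the buffer must stay locked while the bytes are copied out):

// Sketch: copy the pixel buffer's bytes into NSData for the auxiliary-data dictionary.
CVPixelBufferLockBaseAddress(depthMapPixelBuffer, .init(rawValue: 0))
defer { CVPixelBufferUnlockBaseAddress(depthMapPixelBuffer, .init(rawValue: 0)) }
guard let baseAddress = CVPixelBufferGetBaseAddress(depthMapPixelBuffer) else { return nil }
let size = CVPixelBufferGetBytesPerRow(depthMapPixelBuffer) * CVPixelBufferGetHeight(depthMapPixelBuffer)
let data = NSData(bytes: baseAddress, length: size)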
I've created several UIViews inside a UIScrollView that resize dynamically based on values I type into the Height and Width text fields. Once the UIViews resize, I save the contents of the UIScrollView as PDF data.
I find that the dimensions of the UIViews within the PDF (when measured in Adobe Illustrator) are always rounded to the nearest third of a point.
For example:
1.5 -> 1.333
1.75 -> 1.666
I check the constant values each time before the constraints are updated and they are accurate. Can anyone explain why the UIViews have incorrect dimensions once rendered as a PDF?
@IBAction func updateDimensions(_ sender: Any) {
    guard let length = NumberFormatter().number(from: lengthTextField.text ?? "") else { return }
    guard let width = NumberFormatter().number(from: widthTextField.text ?? "") else { return }
    guard let height = NumberFormatter().number(from: heightTextField.text ?? "") else { return }

    let flapHeight = CGFloat(truncating: width) / 2
    let lengthFloat = CGFloat(truncating: length)
    let widthFloat = CGFloat(truncating: width)
    let heightFloat = CGFloat(truncating: height)

    UIView.animate(withDuration: 0.3) {
        self.faceAWidthConstraint.constant = lengthFloat
        self.faceAHeightConstraint.constant = heightFloat
        self.faceBWidthConstraint.constant = widthFloat
        self.faceA1HeightConstraint.constant = flapHeight
        self.view.layoutIfNeeded()
    }
}
func createPDFfrom(aView: UIView, saveToDocumentsWithFileName fileName: String) {
    let pdfData = NSMutableData()
    UIGraphicsBeginPDFContextToData(pdfData, aView.bounds, nil)
    UIGraphicsBeginPDFPage()
    guard let pdfContext = UIGraphicsGetCurrentContext() else { return }
    aView.layer.render(in: pdfContext)
    UIGraphicsEndPDFContext()

    if let documentDirectories = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first {
        let documentsFileName = documentDirectories + "/" + fileName
        debugPrint(documentsFileName)
        pdfData.write(toFile: documentsFileName, atomically: true)
    }
}
You should not be using layer.render(in:) to render your PDF. The reason it's always a third is that you must be on a 3x device (it would be halves on a 2x device and whole points on a 1x device), so there are 3 pixels per point. When iOS converts your constraints to pixels, the best it can do is round to the nearest third, because it has to pick an integer pixel.

A PDF can have much higher pixel density, or use vector art with effectively infinite resolution. So instead of layer.render(in:), which dumps the rasterized pixels of the layer into your PDF, you should draw the contents into the PDF context manually (i.e. use UIBezierPath, UIImage.draw, etc). This lets the PDF capture the full resolution of any rasterized images you have, and capture any vectors without degrading them into rasterized pixels constrained by the device screen you happen to be on.
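As a minimal sketch of that approach using UIGraphicsPDFRenderer (the file name, page size, and rectangle values here are illustrative; PDF user space is 72 points per inch):

import UIKit

// Sketch: draw vector content straight into the PDF page instead of
// rasterizing a layer, so sizes stay in exact points and are never
// snapped to the device's pixel grid.
func writeVectorPDF(named fileName: String, pageSize: CGSize) {
    let url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent(fileName)
    let renderer = UIGraphicsPDFRenderer(bounds: CGRect(origin: .zero, size: pageSize))
    try? renderer.writePDF(to: url) { context in
        context.beginPage()
        // a 1.75in x 1.5in rectangle at exact fractional point values
        let rect = CGRect(x: 36, y: 36, width: 1.75 * 72, height: 1.5 * 72)
        UIColor.black.setStroke()
        UIBezierPath(rect: rect).stroke()
    }
}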
I have a UITextView into which I place images and type text; once I have finished with the text view, I upload its contents to Parse.
If I only add one image to the text view, it uploads to Parse without any problems, but when I add more than one image I get the error "data is larger than 10mb etc...".
Now I am wondering how I can reduce the size of this PFFile?
Or is there a way to reduce the size of the images before or after adding them to the text view? Possibly extract them from the text view and reduce their size before uploading to Parse?
This is my code:
Here is where the text & images from the textView are stored:
var recievedText = NSAttributedString()
And here is how I upload it to Parse:
let post = PFObject(className: "Posts")
let uuid = NSUUID().UUIDString
post["username"] = PFUser.currentUser()!.username!
post["titlePost"] = recievedTitle
let data: NSData = NSKeyedArchiver.archivedDataWithRootObject(recievedText)
post["textPost"] = PFFile(name: "text.txt", data: data)
post["uuid"] = "\(PFUser.currentUser()!.username!) \(uuid)"
if PFUser.currentUser()?.valueForKey("profilePicture") != nil {
    post["profilePicture"] = PFUser.currentUser()!.valueForKey("profilePicture") as! PFFile
}
post.saveInBackgroundWithBlock({ (success: Bool, error: NSError?) -> Void in
})
How I add the images:

image1.image = images[1].imageWithBorder(40)
let oldWidth1 = image1.image!.size.width
let scaleFactor1 = oldWidth1 / (blogText.frame.size.width - 10)
image1.image = UIImage(CGImage: image1.image!.CGImage!, scale: scaleFactor1, orientation: .Up)
let attString1 = NSAttributedString(attachment: image1)
blogText.textStorage.insertAttributedString(attString1, atIndex: blogText.selectedRange.location)
You should resize the pictures to make sure they are small enough before uploading. See this answer.
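As a minimal sketch of that resizing step (written in modern Swift, unlike the Swift 2 code above; the maximum dimension and JPEG quality are arbitrary starting points):

import UIKit

// Sketch: scale an image down so its longest side is at most maxDimension,
// then compress it to JPEG data before uploading.
func compressedImageData(_ image: UIImage, maxDimension: CGFloat = 1024) -> Data? {
    let longestSide = max(image.size.width, image.size.height)
    var scaled = image
    if longestSide > maxDimension {
        let scale = maxDimension / longestSide
        let newSize = CGSize(width: image.size.width * scale,
                             height: image.size.height * scale)
        scaled = UIGraphicsImageRenderer(size: newSize).image { _ in
            image.draw(in: CGRect(origin: .zero, size: newSize))
        }
    }
    // 0.7 quality is a reasonable starting point; tune it against your size limit
    return scaled.jpegData(compressionQuality: 0.7)
}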