Store series of captured images into array - ios

In the function below (didPressTakePhoto), I am trying to take a series of pictures (10 in this case), store them in an array, and display them as an animation in the "gif" image view. Yet the program keeps crashing and I have no idea why. This all happens after one button click, hence the function name. Also, I tried moving the animation code outside the for loop, but the imageArray would then lose its value for some reason.
func didPressTakePhoto() {
    if let videoConnection = stillImageOutput?.connectionWithMediaType(AVMediaTypeVideo) {
        videoConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
        stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: {
            (sampleBuffer, error) in
            //var counter = 0
            if sampleBuffer != nil {
                for var index = 0; index < 10; ++index {
                    let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                    let dataProvider = CGDataProviderCreateWithCFData(imageData)
                    let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)

                    var imageArray: [UIImage] = []

                    let image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Right)
                    imageArray.append(image)
                    imageArray.insert(image, atIndex: index++)

                    self.tempImageView.image = image
                    self.tempImageView.hidden = false
                    //UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)

                    var gif: UIImageView!
                    gif.animationImages = imageArray
                    gif.animationRepeatCount = -1
                    gif.animationDuration = 1
                    gif.startAnimating()
                }
            }
        })
    }
}

Never try to make an array of images (i.e., a [UIImage] as you are doing). A UIImage is very big, so an array of many images is huge and you will run out of memory and crash.
Save your images to disk, maintaining only references to them (i.e. an array of their names).
Before using your images in the interface, reduce them to the actual physical size you will need for that interface. Using a full-size image for a mere screen-sized display (or smaller) is a huge waste of energy. You can use the ImageIO framework to get a "thumbnail" smaller version of the image from disk without wasting memory.
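For illustration, a minimal sketch of that ImageIO approach, in current Swift syntax (the function name and the 300-pixel default are placeholders, not from the original answer):
import ImageIO
import UIKit

// Decode a small thumbnail straight from the file on disk, without ever
// loading the full-resolution image into memory.
func loadThumbnail(at url: URL, maxPixelSize: CGFloat = 300) -> UIImage? {
    let options: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceCreateThumbnailWithTransform: true,  // honor EXIF orientation
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize  // longest side, in pixels
    ]
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let thumbnail = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary)
    else { return nil }
    return UIImage(cgImage: thumbnail)
}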

You are creating a new [UIImage] in every iteration of the loop, so by the last iteration it holds only one image; you should move the imageArray creation out of the loop. Having said that, you should also take into account what @matt answered.
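A minimal sketch of that fix, keeping the question's Swift 2 style (captureFrame() is a hypothetical helper standing in for the buffer-to-UIImage code above; note too that the question's gif view is never initialized, which is itself a crash waiting to happen):
var imageArray: [UIImage] = []  // declared once, before the loop
let gif = UIImageView()         // actually initialized, unlike the question's implicitly unwrapped optional
for _ in 0..<10 {
    let image = captureFrame()  // hypothetical helper: buffer -> UIImage
    imageArray.append(image)
}
// Configure the animation once, after the array is complete:
gif.animationImages = imageArray
gif.animationRepeatCount = 0    // 0 = repeat indefinitely
gif.animationDuration = 1
gif.startAnimating()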

Related

How to match RPScreenRecorder screenshot times to application logs

I'm trying to align the screenshots emitted by RPScreenRecorder's startCapture method to logs saved elsewhere in my code.
I was hoping that I could just match CMSampleBuffer's presentationTimeStamp to the timestamp reported by CMClockGetHostTimeClock(), but that doesn't seem to be true.
I've created a small sample project to demonstrate my problem (available on Github), but here's the relevant code:
To show the current time, I'm updating a label with the current value of CMClockGetTime(CMClockGetHostTimeClock()) when CADisplayLink fires:
override func viewDidLoad() {
    super.viewDidLoad()
    // ...
    displayLink = CADisplayLink(target: self, selector: #selector(displayLinkDidFire))
    displayLink?.add(to: .main, forMode: .common)
}

@objc
private func displayLinkDidFire(_ displayLink: CADisplayLink) {
    timestampLabel.text = String(format: "%.3f", CMClockGetTime(CMClockGetHostTimeClock()).seconds)
}
And here is where I'm saving RPScreenRecorder's buffers to disk.
Each filename is the buffer's presentationTimeStamp in seconds, truncated to milliseconds:
RPScreenRecorder.shared().startCapture(handler: { buffer, bufferType, error in
    switch bufferType {
    case .video:
        guard let imageBuffer = buffer.imageBuffer else {
            return
        }
        CVPixelBufferLockBaseAddress(imageBuffer, .readOnly) // Do I need this?
        autoreleasepool {
            let ciImage = CIImage(cvImageBuffer: imageBuffer)
            let uiImage = UIImage(ciImage: ciImage)
            let data = uiImage.jpegData(compressionQuality: 0.5)
            let filename = String(format: "%.3f", buffer.presentationTimeStamp.seconds)
            let url = Self.screenshotDirectoryURL.appendingPathComponent(filename)
            FileManager.default.createFile(atPath: url.path, contents: data)
        }
        CVPixelBufferUnlockBaseAddress(imageBuffer, .readOnly)
    default:
        break
    }
})
The result is a collection of timestamped screenshots.
I'd expect each screenshot's filename to match the timestamp visible in the screenshot, or at least to be off by some consistent duration. Instead, I'm seeing variable differences which seem to get worse over time. More confusingly, I also sometimes get duplicates of the same screenshot. For example, here are the times from a recent recording:
Visible in the screenshot    The screenshot's filename    Diff
360665.775                   360665.076                    0.699
360665.891                   360665.092                    0.799
360665.975                   360665.108                    0.867
360666.058                   360665.125                    0.933
360666.158                   360665.142                    1.016
360665.175                   360665.175                    0.000
360666.325                   360665.192                    1.133
360665.175                   360665.208                   -0.033
...
The results are wild enough that I think I must be doing something exceptionally stupid, but I'm not sure what it is. Any ideas/recommendations? Or, ideas for how to better accomplish my goal?

Animate icons change

I am working on my first application; it's a mathematical riddles app. The player can get a hint that reveals one of the variables - basically replacing one image with another. Sometimes I replace more than one image, so I use a loop that replaces all of them. I want the old image to fade out and be replaced with the new image, the answer. I would also like them to fade one after the other, meaning there is a small delay between one image replacement animation and the next.
func changeHintIcons() {
    var labelsArr = [[firstEquationFirstElemnt, firstEquationSecondElemnt, firstEquationThirdElemnt],
                     [secondEquationFirstElemnt, secondEquationSecondElemnt, secondEquationThirdElemnt],
                     [thirdEquationFirstElemnt, thirdEquationSecondElemnt, thirdEquationthirdElemnt],
                     [fourthEquationFirstElemnt, fourthEquationSecondElemnt, fourthEquationThirdElemnt],
                     [fifthEquationFirstElemnt, fifthEquationSecondElemnt, fifthEquationThirdElemnt]]
    let col: Int = Int(arc4random_uniform(UInt32(gameDifficulty.stages[gameLevel].umberOfVariables)))
    let row: Int = Int(arc4random_uniform(UInt32(2))) * 2
    let var_to_show = current_equations[col][row]
    let image_name = "answer.num.\(var_to_show)"
    for i in 0..<current_equations.count {
        for j in 0..<current_equations[i].count {
            if current_equations[i][j] == var_to_show {
                var image_index = j
                if j > 0 {
                    image_index = Int(j / 2) // Converting index
                }
                labelsArr[i][image_index]!.image = UIImage(named: image_name)! // Replacing the image
            }
        }
    }
}
One last thing: what if I want to use an animation instead of letting the image simply fade out? What are my options and how can I implement them?
Ok, I found the answer. Basically, Swift allows you to create an animation by displaying a set of images one after the other. Follow these steps:
1. Copy the animation images to the assets folder
2. Create an array of UIImages
3. Do the same as in the animate function below
Main code -
var animationArray = createImageArray(total: 14, imagePrefix: "hint.animation")
animationArray.append(UIImage(named: imageHintAnswer)!)
animate(imageView: labelsArr[i][image_index]!, images: animationArray)
Functions -
func createImageArray(total: Int, imagePrefix: String) -> [UIImage] {
    var imageArray: [UIImage] = []
    for imageCount in 1..<total {
        let imageName = "\(imagePrefix).\(imageCount)"
        let image = UIImage(named: imageName)!
        imageArray.append(image)
    }
    return imageArray
}

func animate(imageView: UIImageView, images: [UIImage]) {
    imageView.animationImages = images
    imageView.animationDuration = 0.7
    imageView.animationRepeatCount = 1
    imageView.startAnimating()
}

Changing size of image obtained from camera explanation

I am new to iOS and have experience with image processing in other languages, which I was hoping to translate into an app, but I am getting some unusual behavior that I don't understand. When I convert an image to a Data array and look at the number of elements in the array, the number changes with every new image. When I look at the actual data in the array, the values are 0-255, which matches what I would expect for a grayscale image, but I am confused about why the size (the number of elements) of the data array changes. I would expect it to stay constant, since I set the captureSession to 640x480. Why is this not the case? Even if it weren't a grayscale image, I would expect the size to remain the same from picture to picture.
UPDATE:
I am getting the uiimage from AV, and the code is shown below. The rest of the code, not shown, just starts the session. I basically want to turn the image into raw pixel data; I have seen a lot of different ways to do this, but this seemed like a good method.
Relevant Code:
@objc func timerHandle() {
    imageView.image = uiimages
}

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    uiimages = sampleBuffer.image(orientation: .down, scale: 1.0)!
    print(uiimages) // output1
    let data = sampleBuffer.data()
    let newData = Array(data!)
    print(data!.count) // output2
}
extension CMSampleBuffer {
    func image(orientation: UIImageOrientation = .left, scale: CGFloat = 1.0) -> UIImage? {
        if let buffer = CMSampleBufferGetImageBuffer(self) {
            let ciImage = CIImage(cvPixelBuffer: buffer).applyingFilter("CIColorControls", parameters: [kCIInputSaturationKey: 0.0])
            return UIImage(ciImage: ciImage, scale: scale, orientation: orientation)
        }
        return nil
    }

    func data(orientation: UIImageOrientation = .left, scale: CGFloat = 1.0) -> Data? {
        if let buffer = CMSampleBufferGetImageBuffer(self) {
            let size = self.image()?.size
            let scale = self.image()?.scale
            let ciImage = CIImage(cvPixelBuffer: buffer).applyingFilter("CIColorControls", parameters: [kCIInputSaturationKey: 0.0])
            UIGraphicsBeginImageContextWithOptions(size!, false, scale!)
            defer { UIGraphicsEndImageContext() }
            UIImage(ciImage: ciImage).draw(in: CGRect(origin: .zero, size: size!))
            guard let redraw = UIGraphicsGetImageFromCurrentImageContext() else { return nil }
            return UIImagePNGRepresentation(redraw)
        }
        return nil
    }
}
At output1, when I print the uiimage variable directly, I get:
<UIImage: 0x1c40b0ec0>, {640, 480}
which shows the correct dimensions.
At output2, when I print the count, I get a different value every time captureOutput is called:
225726
224474
225961
640x480 should give me 307,200, so why am I not at least getting a constant number, even if the value isn't what I expect?
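For what it's worth, UIImagePNGRepresentation returns compressed PNG data, and compressed sizes vary with image content, so a changing count is expected; 307,200 would only apply to raw single-channel pixel data. Below is a hedged sketch of reading the raw bytes straight from the sample buffer, where the length is deterministic (the function name is a placeholder, and a single-plane pixel format is assumed):
import CoreMedia
import CoreVideo
import Foundation

// Copy the raw pixel bytes out of the CVPixelBuffer. Unlike PNG data, the
// length here is fixed for a given format: bytesPerRow * height.
func rawPixelData(from sampleBuffer: CMSampleBuffer) -> Data? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return nil }
    let byteCount = CVPixelBufferGetBytesPerRow(pixelBuffer) * CVPixelBufferGetHeight(pixelBuffer)
    return Data(bytes: base, count: byteCount)
}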

Reduce Parse PFFile, images and text

I have a UITextView which I place images and type text into, and once I have finished with the text view I upload its contents to Parse.
If I only add one image to the textView, it lets me upload the UITextView contents to Parse without any problems, but when I add more than one image I get the error "data is larger than 10mb etc...".
Now I am wondering how I can reduce the size of this PFFile.
Or is there a way to reduce the size of the images before or after adding them to the textView? Possibly extract them from the textView and reduce their size before uploading to Parse?
This is my code:
Here is where the text & images from the textView are stored:
var recievedText = NSAttributedString()
And here Is how I upload it to Parse:
let post = PFObject(className: "Posts")
let uuid = NSUUID().UUIDString
post["username"] = PFUser.currentUser()!.username!
post["titlePost"] = recievedTitle
let data: NSData = NSKeyedArchiver.archivedDataWithRootObject(recievedText)
post["textPost"] = PFFile(name: "text.txt", data: data)
post["uuid"] = "\(PFUser.currentUser()!.username!) \(uuid)"
if PFUser.currentUser()?.valueForKey("profilePicture") != nil {
    post["profilePicture"] = PFUser.currentUser()!.valueForKey("profilePicture") as! PFFile
}
post.saveInBackgroundWithBlock({ (success: Bool, error: NSError?) -> Void in
})
Best regards.
How I add the images
image1.image = images[1].imageWithBorder(40)
let oldWidth1 = image1.image!.size.width;
let scaleFactor1 = oldWidth1 / (blogText.frame.size.width - 10 )
image1.image = UIImage(CGImage: image1.image!.CGImage!, scale: scaleFactor1, orientation: .Up)
let attString1 = NSAttributedString(attachment: image1)
blogText.textStorage.insertAttributedString(attString1, atIndex: blogText.selectedRange.location)
You should resize the pictures to make sure they are small enough for upload. See this answer.
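A minimal sketch of such a resize, in the question's Swift 2 syntax (the function name and the maximum dimension are placeholders):
// Scale an image down so its longest side is at most maxDimension points,
// shrinking the data before it is archived into the attributed string and uploaded.
func resizedImage(image: UIImage, maxDimension: CGFloat) -> UIImage {
    let scale = min(maxDimension / image.size.width, maxDimension / image.size.height, 1.0)
    let newSize = CGSize(width: image.size.width * scale, height: image.size.height * scale)
    UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
    defer { UIGraphicsEndImageContext() }
    image.drawInRect(CGRect(origin: CGPointZero, size: newSize))
    return UIGraphicsGetImageFromCurrentImageContext()
}
Applying this to each image before inserting it into the text view should keep the archived attributed string under the 10 MB limit for typical photos.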

Xcode image operations memory leaking

I'm currently having an issue with an iOS project I'm developing. It goes through a procedure where it has to download images from a server, round their corners and place them on a black background, and finally save them to a file. It loops through over 2,000 image URLs and has to process each of them. The problem is that there seems to be a huge memory leak somewhere, and I just can't figure out how to solve it. The memory warning is triggered four times before the app is terminated.
func getRoundedPNGDataFromData(imageData: NSData) -> NSData? {
    let Image: UIImage? = UIImage(data: imageData)!
    let ImageFrame: CGRect = CGRectMake(0, 0, Image!.size.width, Image!.size.height)
    UIGraphicsBeginImageContextWithOptions(Image!.size, false, 1.0)
    let CurrentContext: CGContextRef = UIGraphicsGetCurrentContext()
    CGContextSetFillColorWithColor(CurrentContext, UIColor.blackColor().CGColor)
    CGContextFillRect(CurrentContext, ImageFrame)
    UIBezierPath(roundedRect: ImageFrame, cornerRadius: 10.0).addClip()
    Image!.drawInRect(ImageFrame)
    let NewImage: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return UIImagePNGRepresentation(NewImage!)
}
...
var ProgressCounter: Int = 0
for OID: String in MissingThumbnails {
    let CurrentProgress: CFloat = (CFloat(ProgressCounter) / CFloat(MissingThumbnails.count))
    dispatch_sync(dispatch_get_main_queue(), {
        progressView?.progress
        progressView?.setProgress(CurrentProgress, animated: true)
    })
    let ThumbURL: NSURL = NSURL(string: "http://\(Host):\(Port)/\(self.WebAPIThumbnailPath)".stringByReplacingOccurrencesOfString("${OID}", withString: OID, options: [], range: nil).stringByReplacingOccurrencesOfString("${ThumbnailNumber}", withString: self.padID(5), options: [], range: nil))!
    var ThumbData: NSData? = NSData(contentsOfURL: ThumbURL)
    if (ThumbData != nil) {
        if (ThumbData!.length > 0) {
            var RoundedPNG: NSData? = self.getRoundedPNGDataFromData(ThumbData!)
            if ((RoundedPNG!.writeToFile(FSTools.getDocumentSub("\(self.Structs[Paths.OBJ_THUMBNAILS])/\(OID).png"), atomically: false)) == false) {
                dispatch_sync(dispatch_get_main_queue(), { () -> Void in
                    UIApplication.sharedApplication().networkActivityIndicatorVisible = false
                    delegate?.thumbnailCacheCompleted?(false)
                })
            }
            RoundedPNG = nil
            ThumbData = nil
        }
    }
    ProgressCounter++
}
You need to add an autoreleasepool inside the loop to allow the autoreleased memory to be deallocated.
for OID: String in MissingThumbnails {
    autoreleasepool {
        /* code */
    }
}
Autoreleased memory is deallocated when the current autoreleasepool scope is left, which generally happens in the run loop. But if the code is tight and the run loop does not get a chance to run, the memory will not be released in a timely manner. Adding an explicit autoreleasepool allows this temporary autoreleased memory to be reclaimed on every iteration of the loop.
