I need to record the screen to make a ten-second video when someone is broadcasting, like the record function of the "live.me" app. I can't use ReplayKit because I need to support iOS 8.
When the user taps the record button, I start taking a screenshot twenty-four times a second:
public func screenShot() -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, true, 0)
    self.drawViewHierarchyInRect(self.bounds, afterScreenUpdates: true)
    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img
}
Then I use the AVAsset writing APIs (AVAssetWriter) to combine all the pictures into a video, roughly like the sketch below.
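(A simplified sketch of that writing step only; the writeVideo name, the pixelBuffer(from:size:) helper and the fixed 24 fps are placeholders rather than my exact code.)

import AVFoundation
import UIKit

// Feed each captured UIImage into an AVAssetWriter at 24 fps.
func writeVideo(from images: [UIImage], size: CGSize, to outputURL: URL,
                completion: @escaping () -> Void) throws {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecH264,
        AVVideoWidthKey: size.width,
        AVVideoHeightKey: size.height
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: kCMTimeZero)

    var frameIndex = 0
    input.requestMediaDataWhenReady(on: DispatchQueue(label: "videoWriter")) {
        while input.isReadyForMoreMediaData && frameIndex < images.count {
            // Each screenshot becomes one frame, 1/24th of a second apart
            let time = CMTime(value: CMTimeValue(frameIndex), timescale: 24)
            if let buffer = pixelBuffer(from: images[frameIndex], size: size) {
                _ = adaptor.append(buffer, withPresentationTime: time)
            }
            frameIndex += 1
        }
        if frameIndex >= images.count {
            input.markAsFinished()
            writer.finishWriting(completionHandler: completion)
        }
    }
}

// Draws a UIImage into a CVPixelBuffer so the adaptor can append it.
func pixelBuffer(from image: UIImage, size: CGSize) -> CVPixelBuffer? {
    let attrs = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                 kCVPixelBufferCGBitmapContextCompatibilityKey as String: true] as CFDictionary
    var buffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(size.width), Int(size.height),
                                     kCVPixelFormatType_32ARGB, attrs, &buffer)
    guard status == kCVReturnSuccess, let pixelBuffer = buffer, let cgImage = image.cgImage else { return nil }
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }
    let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                            width: Int(size.width), height: Int(size.height),
                            bitsPerComponent: 8,
                            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    context?.draw(cgImage, in: CGRect(origin: .zero, size: size))
    return pixelBuffer
}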
This approach has too many disadvantages, such as high CPU usage; low-end phones can't handle it.
Is this approach wrong, or does it work and just need optimization?
I am developing an iOS video trimmer with Swift 4. I am trying to render a horizontal list of video thumbnails spread out over various durations, both from local video files and from remote URLs. When I test it in the simulator, the thumbnails are generated in less than a second, which is OK. However, when I run this code on an actual device, the thumbnail generation is really slow and sometimes crashes. I tried moving the actual image generation to a background thread and then updating the UI on the main thread when it completes, but that doesn't seem to work very well and the app crashes after rendering the screen a few times. I am not sure if that is because I am navigating away from the screen while tasks are still trying to complete. I am trying to make the app generate the thumbnails more quickly and not crash. Here is the code I am using; I would really appreciate any assistance with this issue.
func renderThumbnails(view: UIView, videoURL: URL, duration: Float64) {
    for i in 0..<self.IMAGE_COUNT {
        DispatchQueue.global(qos: .userInitiated).async {
            // Each iteration computes its own offset so the concurrent closures don't share state
            let offset = Float64(i) * (duration / Float64(self.IMAGE_COUNT))
            let thumbnail = thumbnailFromVideo(videoUrl: videoURL,
                                               time: CMTimeMake(Int64(offset), 1))
            DispatchQueue.main.async {
                self.addImageToView(image: thumbnail, view: view, index: i)
            }
        }
    }
}
static func thumbnailFromVideo(videoUrl: URL, time: CMTime) -> UIImage {
    let asset = AVAsset(url: videoUrl)
    let imgGenerator = AVAssetImageGenerator(asset: asset)
    imgGenerator.appliesPreferredTrackTransform = true
    do {
        // copyCGImage decodes the video synchronously for this single time
        let cgImage = try imgGenerator.copyCGImage(at: time, actualTime: nil)
        return UIImage(cgImage: cgImage)
    } catch {
        print("Thumbnail generation failed: \(error)")
    }
    return UIImage()
}
The first sentence of the documentation says not to do what you’re doing! And it even tells you what to do instead.
Generating a single image in isolation can require the decoding of a large number of video frames with complex interdependencies. If you require a series of images, you can achieve far greater efficiency using the asynchronous method, generateCGImagesAsynchronously(forTimes:completionHandler:), which employs decoding efficiencies similar to those used during playback.
(Italics mine.)
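For example, the question's renderThumbnails could be restructured roughly like this (a sketch only: IMAGE_COUNT and addImageToView(image:view:index:) come from the question, while the thumbnailGenerator property and the 600 timescale are assumptions of mine):

func renderThumbnails(view: UIView, videoURL: URL, duration: Float64) {
    let generator = AVAssetImageGenerator(asset: AVAsset(url: videoURL))
    generator.appliesPreferredTrackTransform = true
    // Assumed property on the same class, so the generator stays alive while it works
    self.thumbnailGenerator = generator

    // Request every time up front so the generator can decode the video in one pass
    let step = duration / Float64(self.IMAGE_COUNT)
    let times = (0..<self.IMAGE_COUNT).map {
        NSValue(time: CMTime(seconds: Float64($0) * step, preferredTimescale: 600))
    }

    generator.generateCGImagesAsynchronously(forTimes: times) { requestedTime, cgImage, _, result, _ in
        guard result == .succeeded, let cgImage = cgImage else { return }
        // Recover which slot this thumbnail belongs to from the requested time
        let index = Int((CMTimeGetSeconds(requestedTime) / step).rounded())
        DispatchQueue.main.async {
            self.addImageToView(image: UIImage(cgImage: cgImage), view: view, index: index)
        }
    }
}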
iOS 11 added a markup option after taking a screenshot. How can I programmatically present this option after programmatically taking a screenshot? Currently the screenshot gets saved straight to Photos without offering the markup/share option.
I use the code below to take and save the screenshot:
@IBAction func takeScreenshot(_ sender: Any) {
    let layer = UIApplication.shared.keyWindow!.layer
    let scale = UIScreen.main.scale
    UIGraphicsBeginImageContextWithOptions(layer.frame.size, false, scale)
    layer.render(in: UIGraphicsGetCurrentContext()!)
    let screenshot = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    UIImageWriteToSavedPhotosAlbum(screenshot!, nil, nil, nil)
}
Instant Markup is not documented anywhere in Apple's reference docs, so I think it's safe to assume it isn't exposed through their SDK.
Instead you would have to create your own markup editor.
Note: You may not change the way actual device screenshots are handled (when the user presses Home and Lock together), as per Apple's guidelines.
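If the share half is enough, one workaround (my own sketch, not something the system markup feature exposes) is to hand the captured screenshot to a UIActivityViewController instead of saving it straight to Photos:

@IBAction func takeScreenshot(_ sender: Any) {
    let layer = UIApplication.shared.keyWindow!.layer
    let scale = UIScreen.main.scale
    UIGraphicsBeginImageContextWithOptions(layer.frame.size, false, scale)
    layer.render(in: UIGraphicsGetCurrentContext()!)
    let screenshot = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    guard let image = screenshot else { return }
    // Present the standard share sheet (Save Image, Messages, Mail, ...) for the screenshot
    let activityController = UIActivityViewController(activityItems: [image], applicationActivities: nil)
    present(activityController, animated: true, completion: nil)
}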
I am using this ImagePicker to select multiple images from the library or camera. Once the user is done selecting images, I store those images in an array. I want to display the images in a frame according to the number of images selected; for example, if more than 5 images are selected, the result should look something like this with the selected images.
ImagePicker is new to me. I don't know how to achieve this. I've read many posts but am not getting a clear idea of how to implement it in my case.
I am testing it on the demo project provided:
func doneButtonDidPress(_ imagePicker: ImagePickerController, images: [UIImage]) {
    imagePicker.dismiss(animated: true, completion: nil)
    imageArray = images
    createCollage()
}

func createCollage() {
}
If I use UIImageView to display/load the images, it shows massive memory usage.
Can anyone please help me with either of these issues? Any help will be much appreciated!
Look at the docs and code of ImagePicker; they recommend using

public var imageAssets: [UIImage] {
    return AssetManager.resolveAssets(imagePicker.stack.assets)
}

Looking at the AssetManager.resolveAsset implementation, we see a configurable size:

open static func resolveAsset(_ asset: PHAsset, size: CGSize = CGSize(width: 720, height: 1280), completion: @escaping (_ image: UIImage?) -> Void) {

Set the size according to your image view's size.
UPDATE:
Or use the property:
open var preferredImageSize: CGSize?
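For example, the question's delegate callback could resolve the picked assets at a smaller size, something like this (a sketch built only from the signatures quoted above; the 360x640 target and the completion-counting logic are my assumptions):

func doneButtonDidPress(_ imagePicker: ImagePickerController, images: [UIImage]) {
    imagePicker.dismiss(animated: true, completion: nil)
    let assets = imagePicker.stack.assets
    let targetSize = CGSize(width: 360, height: 640) // roughly the size of a collage tile
    imageArray.removeAll()
    for asset in assets {
        AssetManager.resolveAsset(asset, size: targetSize) { image in
            guard let image = image else { return }
            self.imageArray.append(image)
            // Build the collage once every asset has been resolved
            if self.imageArray.count == assets.count {
                self.createCollage()
            }
        }
    }
}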
I tested the same code on a real device and it works fine. In my case, testing on the simulator was what caused the massive memory usage.
I uploaded some photos to Firebase Storage following the sample project on GitHub.
Before using Firebase Storage, I was saving my photos to another website. When I download photos from the image URLs I saved earlier on that other website, nothing is wrong and memory usage is reasonable. But when I paste the images' URL links into the corresponding children in the Firebase Database and then download from those URLs, I get a terrible memory issue. For every ~200 KB image, memory usage goes up by ~10 MB. Since I don't have this problem when downloading images from other URLs, I believe this is a Firebase-specific issue. Does anyone else encounter the same memory issue? Any suggestions/help?
NOTE: I saved the URLs of the images to the Firebase Realtime Database. I download the URL links from there and hand them to my photo collection view cells. Here is the code I wrote for my photo collection view cells:
class PhotosCollectionViewCell: UICollectionViewCell {
    @IBOutlet weak var imageView: UIImageView!
    private var downloadTask: FIRStorageDownloadTask?

    var imageURL: String! {
        didSet {
            downloadTask = FIRStorage.storage().referenceForURL(imageURL).dataWithMaxSize(1 * 1024 * 1024) { (imageData, error) in
                guard error == nil else { print(error); return }
                if let imageData = imageData {
                    self.imageView.image = UIImage(data: imageData)
                }
                // imageView.kf_showIndicatorWhenLoading = true
                // imageView.kf_setImageWithURL(NSURL(string: imageURL)!)
            }
        }
    }

    override func prepareForReuse() {
        super.prepareForReuse()
        imageView.image = nil
        // imageView.kf_cancelDownloadTask()
        downloadTask?.cancel()
    }
}
The only thing I want to solve is being able to download the images I saved to Firebase Storage from the URLs I also save in the Realtime Database. One important fact is that Kingfisher downloads images from URLs without any memory issue; the problem only occurs when those image URLs come from Firebase Storage.
NOTE: I also get the memory issue when downloading those images with the Firebase Storage download function. I know it's normal for memory usage to go up to some extent, but my images in Firebase Storage are only about 200 KB.
While you are only downloading 200 KB blobs of data, it costs much more memory than that to display them as images. While the "clarity" of the image may be compressed, it still has the same dimensions. An image usually requires 4 bytes per pixel, one for each of red, green, blue, and alpha. So a 2000x1000 pixel image requires ~8 MB of memory, which is close to what you are describing. I would first ask whether you are caching the images; if you are, cache the data instead. What will probably help you more, though, is to resize the image you are displaying, unless you need it at full size. Use something like:
extension UIImage {
    /// Returns an image that fills newSize
    func resizedImage(newSize: CGSize) -> UIImage {
        // Guard newSize is different
        guard self.size != newSize else { return self }

        UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
        self.draw(in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
        let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return newImage
    }
}
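For instance, in the question's cell the completion handler could downscale before assigning the image (a sketch using the names from the question; the half-size target is just an example):

if let imageData = imageData, let fullImage = UIImage(data: imageData) {
    // Halving each dimension cuts the decoded bitmap to roughly a quarter of the memory
    let halfSize = CGSize(width: fullImage.size.width / 2, height: fullImage.size.height / 2)
    self.imageView.image = fullImage.resizedImage(newSize: halfSize)
}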
Going to something like half of the dimensions of the original image makes it cost 4x less memory. Make sure to replace the original image with the resized one rather than keeping both around. Hope this helps.
I faced the same problem, and one thing that helped me is SDWebImage. It can load images from storage refs and direct URLs.
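A minimal sketch of the direct-URL form in the question's cell (loading straight from a FIRStorageReference needs the FirebaseUI/SDWebImage integration, which isn't shown here):

import SDWebImage

var imageURL: String! {
    didSet {
        // SDWebImage handles downloading, decoding and caching for the cell
        imageView.sd_setImage(with: URL(string: imageURL))
    }
}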
I'm developing an application which plays an HLS video and renders it in a UIView.
At a given time I want to save a picture of the currently displayed video frame. To do this I begin an image graphics context, draw the UIView hierarchy into the context, and save it to a UIImage with the UIGraphicsGetImageFromCurrentImageContext method.
This works really well on the iOS Simulator; the rendered image is perfect. But on a device the rendered image is totally white.
Does anyone know why it doesn't work on a device?
Or, is there a working way to take a screenshot of an HLS video on a device?
Thanks for any help.
I was able to find a way to save a screenshot of an HLS live stream, by adding an AVPlayerItemVideoOutput object to the AVPlayerItem.
In initialisation:
self.output = AVPlayerItemVideoOutput(pixelBufferAttributes: Dictionary<String, AnyObject>())
playerItem.addOutput(output!)
To save a screenshot:

guard let time = self.player?.currentTime() else { return }
guard let pixelBuffer = self.output?.copyPixelBufferForItemTime(time,
    itemTimeForDisplay: nil) else { return }

let ciImage = CIImage(CVPixelBuffer: pixelBuffer)
let temporaryContext = CIContext(options: nil)
let rect = CGRectMake(0, 0,
    CGFloat(CVPixelBufferGetWidth(pixelBuffer)),
    CGFloat(CVPixelBufferGetHeight(pixelBuffer)))

let videoImage = temporaryContext.createCGImage(ciImage, fromRect: rect)
let image = UIImage(CGImage: videoImage)
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
This seems not to work in the simulator, but works fine on a device. Code is in Swift 2 but should be straightforward to convert to obj-c or Swift 1.x.
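For reference, a rough Swift 4 equivalent of the screenshot step (it assumes the same self.player and self.output properties set up above):

guard let time = self.player?.currentTime(),
      let pixelBuffer = self.output?.copyPixelBuffer(forItemTime: time, itemTimeForDisplay: nil) else { return }

let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
let context = CIContext(options: nil)
let rect = CGRect(x: 0, y: 0,
                  width: CVPixelBufferGetWidth(pixelBuffer),
                  height: CVPixelBufferGetHeight(pixelBuffer))

// Render the current video frame to a CGImage and save it to the photo library
if let cgImage = context.createCGImage(ciImage, from: rect) {
    UIImageWriteToSavedPhotosAlbum(UIImage(cgImage: cgImage), nil, nil, nil)
}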
People have tried, and failed (like me), apparently because of the nature of HLS. See: http://blog.denivip.ru/index.php/2012/12/screen-capture-in-ios/?lang=en