How can I filter a video that is already saved in the photo library on iOS?
I got the URLs of the videos in the library using the AssetsLibrary framework
and made a preview for each video.
As the next step, I want to filter the video using CIFilter.
For the real-time case I already filter frames through AVCaptureVideoDataOutputSampleBufferDelegate,
but for a saved video I don't know how to set up the filtering.
Do I use AVAsset? If so, how can I filter it, and how do I save the result?
Thank you.
I hope this will help you
AVAsset *theAVAsset = [[AVURLAsset alloc] initWithURL:mNormalVideoURL options:nil];
NSError *error = nil;
float width = theAVAsset.naturalSize.width;
float height = theAVAsset.naturalSize.height;

AVAssetReader *mAssetReader = [[AVAssetReader alloc] initWithAsset:theAVAsset error:&error];
NSArray *videoTracks = [theAVAsset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack *videoTrack = [videoTracks objectAtIndex:0];
mPrefferdTransform = [videoTrack preferredTransform];
[theAVAsset release]; // release only after the asset is no longer needed

// Ask the reader to decode into 32BGRA pixel buffers.
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                    forKey:(id)kCVPixelBufferPixelFormatTypeKey];
AVAssetReaderTrackOutput *mAssetReaderOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack outputSettings:options];
[mAssetReader addOutput:mAssetReaderOutput];
[mAssetReaderOutput release];

[mAssetReader startReading];

CMSampleBufferRef buffer = NULL;
while ([mAssetReader status] == AVAssetReaderStatusReading) {
    buffer = [mAssetReaderOutput copyNextSampleBuffer]; // read the next decoded frame
    // ... process the frame here ...
    if (buffer) {
        CFRelease(buffer); // copyNextSampleBuffer returns a +1 reference
    }
}
Have a look at CVImageBufferRef pixBuf = CMSampleBufferGetImageBuffer(buffer): it gives you the frame's pixel buffer, so you can apply your filter to pixBuf. I found the performance is not good, though. If you have any new ideas, we can discuss it further.
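For the filtering step itself, here is a minimal sketch; it is not part of the original answer, and the CISepiaTone filter is only an illustrative choice:

CIContext *ciContext = [CIContext contextWithOptions:nil]; // create once, outside the read loop

CVImageBufferRef pixBuf = CMSampleBufferGetImageBuffer(buffer);
CIImage *inputImage = [CIImage imageWithCVPixelBuffer:pixBuf];
CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"]; // any CIFilter works here
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:@(0.8) forKey:kCIInputIntensityKey];
CIImage *outputImage = filter.outputImage;
// Render the filtered image back into the pixel buffer.
[ciContext render:outputImage toCVPixelBuffer:pixBuf];

From there the filtered buffers can be handed to an AVAssetWriter (via an AVAssetWriterInputPixelBufferAdaptor) to save the result as a new movie file.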
Related
I have a piece of code that reads sample buffers from a video AVAssetTrack:
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
NSDictionary *pixelBufferAttributes = @{
    (id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)
};
AVAssetReaderTrackOutput *assetReaderTrackOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack
outputSettings:pixelBufferAttributes];
NSError *assetReaderCreationError = nil;
AVAssetReader *assetReader = [[AVAssetReader alloc] initWithAsset:asset
error:&assetReaderCreationError];
[assetReader addOutput:assetReaderTrackOutput];
[assetReader startReading];
while (assetReader.status == AVAssetReaderStatusReading) {
CMSampleBufferRef sampleBuffer = [assetReaderTrackOutput copyNextSampleBuffer];
// Some OpenGL operations here
}
// and other operations here, too
The code above works normally for almost all videos, but there is one video that always crashes. Upon inspection, I found that the CMSampleBufferRef has a different size than the real asset.
When printed from the debugger, sampleBuffer has dimensions of 848 x 480, whereas the real asset has dimensions of 1154.88720703125 x 480.
I tried to search about the cause of this issue, but found none. Do any of you have any insight about this? Any comments or input is greatly appreciated.
Thank you!
The "real asset" is reporting the scaled dimensions of the video in points where the video is stretched horizontally from its pixel width of 848.
This seems like it should be easy, but the lack of documentation makes it very hard to figure out.
I have pictures and videos on my app's icloud drive and I want to create thumbnails of these assets. I am talking about assets on iCloud Drive, not the iCloud photo stream inside the camera roll. I am talking about the real iCloud Drive folder.
Creating thumbnails from videos is "easy" compared to images: you only need two weeks to figure out how it works, given the poor documentation Apple wrote. Thumbnails from images, however, seem impossible.
What I have now is an array of NSMetadataItems each one describing one item on the iCloud folder.
These are the methods I have tried so far that don't work:
METHOD 1
[fileURL startAccessingSecurityScopedResource];
NSFileCoordinator *coordinator = [[NSFileCoordinator alloc] init];
__block NSError *error;
[coordinator coordinateReadingItemAtURL:fileURL
                                options:NSFileCoordinatorReadingImmediatelyAvailableMetadataOnly
                                  error:&error
                             byAccessor:^(NSURL *newURL) {
    NSDictionary *thumb;
    BOOL success = [newURL getResourceValue:&thumb forKey:NSURLThumbnailDictionaryKey error:&error];
    UIImage *thumbnail = thumb[NSThumbnail1024x1024SizeKey];
}];
[fileURL stopAccessingSecurityScopedResource];
The results of this method are fantastic. Ready for that? Here we go: success = YES, error = nil and thumbnail = nil.
ANOTHER METHOD
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:fileURL options:nil];
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
imageGenerator.appliesPreferredTrackTransform = YES;

CMTime time = CMTimeMake(0, 60); // time at which you want the thumbnail
NSValue *timeValue = [NSValue valueWithCMTime:time];

[imageGenerator generateCGImagesAsynchronouslyForTimes:@[timeValue]
                                     completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error) {
    thumbnail = [[UIImage alloc] initWithCGImage:image];
}];
Result: error = "The requested URL was not found on this server." and thumbnail = nil.
This method appears to be just for videos; I was trying it just in case. Is there an equivalent of this method for images?
PRIMITIVE METHOD
NSData *tempData = [NSData dataWithContentsOfURL:tempURL];
NOPE - data = nil
METHOD 4
The fourth possible method would be using ALAsset but this was deprecated on iOS 9.
I think all these methods fail because they only work (bug or not) when the resource is local. Any ideas on how to download the image so I can get the thumbnail?
Any other ideas?
thanks
EDIT: After several tests, I see that Method 1 is the only one that seems to be in the right direction. It works poorly, though, sometimes grabbing the icon but failing most of the time.
Another point: whatever people suggest, it always involves downloading the whole image to get the thumbnail. I don't think that is the way to go. Just look at how video thumbnails are generated: you don't download the whole video to get its thumbnail.
So this question remains open.
The Photos framework or AssetsLibrary will not work here, as you would first have to import your iCloud Drive photos into the photo library to use any methods of those two frameworks.
What you should look at is ImageIO:
Get the content of the iCloud Drive Photo as NSData and then proceed like this:
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
NSDictionary *thumbOpts = [NSDictionary dictionaryWithObjectsAndKeys:
                           (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
                           (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageAlways,
                           [NSNumber numberWithInt:160], (id)kCGImageSourceThumbnailMaxPixelSize,
                           nil];
CGImageRef thumbImageRef = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)thumbOpts);
UIImage *thumbnail = [[UIImage alloc] initWithCGImage:thumbImageRef];
CGImageRelease(thumbImageRef); // the Create... calls return +1 references
CFRelease(source);
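How to get the NSData for a file that lives in iCloud Drive is not shown above. One possible sketch, assuming the same security-scoped fileURL as in the question, is a coordinated read (note that this reads the full file, which is what the question was hoping to avoid):

[fileURL startAccessingSecurityScopedResource];
NSFileCoordinator *coordinator = [[NSFileCoordinator alloc] init];
__block NSData *imageData = nil;
NSError *coordinationError = nil;
[coordinator coordinateReadingItemAtURL:fileURL
                                options:NSFileCoordinatorReadingWithoutChanges
                                  error:&coordinationError
                             byAccessor:^(NSURL *newURL) {
    imageData = [NSData dataWithContentsOfURL:newURL];
}];
[fileURL stopAccessingSecurityScopedResource];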
After testing several solutions, the one that seems to work better is this one:
[fileURL startAccessingSecurityScopedResource];
NSFileCoordinator *coordinator = [[NSFileCoordinator alloc] init];
__block NSError *error;
[coordinator coordinateReadingItemAtURL:fileURL
                                options:NSFileCoordinatorReadingImmediatelyAvailableMetadataOnly
                                  error:&error
                             byAccessor:^(NSURL *newURL) {
    NSDictionary *thumb;
    BOOL success = [newURL getResourceValue:&thumb forKey:NSURLThumbnailDictionaryKey error:&error];
    UIImage *thumbnail = thumb[NSThumbnail1024x1024SizeKey];
}];
[fileURL stopAccessingSecurityScopedResource];
This solution is not perfect. It sometimes fails to return the thumbnail, but I was not able to find any other solution that works 100% of the time. The others are worse.
This works for me. It is a little bit different:
func genereatePreviewForOnce(at size: CGSize, completionHandler: @escaping (UIImage?) -> Void) {
    _ = fileURL.startAccessingSecurityScopedResource()
    let fileCoordinator = NSFileCoordinator()
    fileCoordinator.coordinate(readingItemAt: fileURL, options: .immediatelyAvailableMetadataOnly, error: nil) { (url) in
        if let res = try? url.resourceValues(forKeys: [.thumbnailDictionaryKey]),
           let dict = res.thumbnailDictionary {
            let image = dict[.NSThumbnail1024x1024SizeKey]
            completionHandler(image)
        } else {
            fileURL.removeCachedResourceValue(forKey: .thumbnailDictionaryKey)
            completionHandler(nil)
        }
        fileURL.stopAccessingSecurityScopedResource()
    }
}
It looks like you are generating your thumbnail after the fact. If this is your document and you are using UIDocument, override fileAttributesToWriteToURL:forSaveOperation:error: to insert the thumbnail when the document is saved.
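A rough sketch of that override (the makeThumbnail helper and the subclass are hypothetical; only the method signature and attribute keys come from UIKit):

// In a UIDocument subclass. makeThumbnail is an assumed helper that renders a small preview image.
- (NSDictionary *)fileAttributesToWriteToURL:(NSURL *)url
                            forSaveOperation:(UIDocumentSaveOperation)saveOperation
                                       error:(NSError *__autoreleasing *)outError
{
    UIImage *thumbnail = [self makeThumbnail];
    if (!thumbnail) {
        return [super fileAttributesToWriteToURL:url forSaveOperation:saveOperation error:outError];
    }
    // Ask the system to store a 1024x1024 thumbnail alongside the document's other attributes.
    return @{ NSURLThumbnailDictionaryKey : @{ NSThumbnail1024x1024SizeKey : thumbnail } };
}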
I am very frustrated: for the last two days I have been trying to edit video frames and replace them at their original times (the frames' actual times) with the edited frames, but I have been unable to do it.
I have looked at many Stack Overflow links, but none of them fits my requirement, and the iFrame extractor does not work:
Get image from a video frame in iPhone
In the meantime, my idea was to extract all frames and save them in an array of dictionaries, each holding a frame and its time. When I pause the running video and edit the captured frame, I would loop over the stored frames, match the stored time against the captured (edited) frame's time, replace the original frame with the edited one, and then write the video out again from all the frames.
To do that I used:
Get all frames of Video IOS 6
but it crashes after extracting some images.
I have written my code like this:
-(void)editMovie:(id)sender
{
    float FPS = 1;
    NSString *videoPath = [[NSBundle mainBundle] pathForResource:@"IMG_0879" ofType:@"MOV"];
    NSURL *videoURl = [NSURL fileURLWithPath:videoPath];
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoURl options:nil];
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    generator.requestedTimeToleranceAfter = kCMTimeZero;
    generator.requestedTimeToleranceBefore = kCMTimeZero;
    for (Float64 i = 0; i < CMTimeGetSeconds(asset.duration) * FPS; i++) {
        @autoreleasepool {
            CMTime time = CMTimeMake(i, FPS);
            NSError *err;
            CMTime actualTime;
            CGImageRef image = [generator copyCGImageAtTime:time actualTime:&actualTime error:&err];
            UIImage *generatedImage = [[UIImage alloc] initWithCGImage:image];
            [self saveImage:generatedImage atTime:actualTime]; // stores the frame together with its actual time
            CGImageRelease(image);
        }
    }
}

-(void)saveImage:(UIImage *)image atTime:(CMTime)time
{
    Float64 seconds = CMTimeGetSeconds(time); // the time in seconds, not time.timescale
    NSMutableDictionary *dict = [[NSMutableDictionary alloc] init];
    [dict setObject:image forKey:@"image"];
    [dict setObject:[NSString stringWithFormat:@"%f", seconds] forKey:@"time"];
    [arrFrame addObject:dict];
    NSLog(@"ArrayImg=%@", arrFrame);
}
My requirement is:
1. I need to play a video.
2. While the video is playing, I have to pause it and get the image (frame) at the pause time.
3. I have to edit the captured image and save it, replacing the original frame.
4. When I play the video again, the edited image should appear in the video.
Please give me an example if you have one. I have downloaded many projects and examples from the links on Stack Overflow and other sites, but none of them is right or even fulfills 20% of my requirement.
Please share any example or ideas you have.
I will be obliged to you. Thanks in advance.
This question already has answers here:
iPhone Read UIimage (frames) from video with AVFoundation
(5 answers)
Closed 4 years ago.
I'm using UIImagePicker to allow the user to create a video and then trim it. I need to split that video into multiple frames and let the user choose one of them.
In order to show the frames, I likely have to convert them to UIImage. How can I do this? I have to use AVFoundation, but I couldn't find a tutorial on how to get and convert the frames.
Should I do the image capture with AVFoundation too? If so, do I have to implement trimming myself?
I think the answer in this question is what you are looking for.
iPhone Read UIimage (frames) from video with AVFoundation.
The accepted answer specifies two methods; you can use either one, according to your requirement.
Here is code to extract images from a video at a given frames-per-second (FPS) rate:
1) Import
#import <Photos/Photos.h>
2) In viewDidLoad
videoUrl = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"VfE_html5" ofType:@"mp4"]];
[self createImage:5]; // 5 is the frames per second (FPS); change it as per your requirement.
3) Functions
-(void)createImage:(int)withFPS {
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoUrl options:nil];
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    generator.requestedTimeToleranceAfter = kCMTimeZero;
    generator.requestedTimeToleranceBefore = kCMTimeZero;
    for (Float64 i = 0; i < CMTimeGetSeconds(asset.duration) * withFPS; i++) {
        @autoreleasepool {
            CMTime time = CMTimeMake(i, withFPS);
            NSError *err;
            CMTime actualTime;
            CGImageRef image = [generator copyCGImageAtTime:time actualTime:&actualTime error:&err];
            UIImage *generatedImage = [[UIImage alloc] initWithCGImage:image];
            [self savePhotoToAlbum:generatedImage]; // Saves the image to the photo library instead of keeping it in memory
            CGImageRelease(image);
        }
    }
}

-(void)savePhotoToAlbum:(UIImage *)imageToSave {
    [[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
        [PHAssetChangeRequest creationRequestForAssetFromImage:imageToSave];
    } completionHandler:^(BOOL success, NSError *error) {
        if (success) {
            NSLog(@"success.");
        } else {
            NSLog(@"fail.");
        }
    }];
}
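Note that copyCGImageAtTime: blocks for every frame. As a possible alternative (an assumption on my part, not part of the answer above, reusing the asset, generator, and withFPS from the code), you can batch all the times and let AVAssetImageGenerator deliver the images asynchronously:

NSMutableArray *times = [NSMutableArray array];
Float64 duration = CMTimeGetSeconds(asset.duration);
for (Float64 i = 0; i < duration * withFPS; i++) {
    [times addObject:[NSValue valueWithCMTime:CMTimeMake(i, withFPS)]];
}
[generator generateCGImagesAsynchronouslyForTimes:times
                                completionHandler:^(CMTime requestedTime, CGImageRef image,
                                                    CMTime actualTime, AVAssetImageGeneratorResult result,
                                                    NSError *error) {
    if (result == AVAssetImageGeneratorSucceeded && image != NULL) {
        UIImage *frame = [[UIImage alloc] initWithCGImage:image];
        // Hand the frame off (e.g. to savePhotoToAlbum:); the generator owns the CGImageRef here.
    }
}];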
You can also use the library VideoBufferReader (available on GitHub), which is based on AVFoundation.
Is it possible to render a movie to an OpenGL texture in real time using the Apple iOS frameworks?
I've seen it done in an old NeHe tutorial using glTexSubImage2D, but I'm wondering how I can access the RGB data using the Apple frameworks.
init
NSString *mPath = [[NSBundle mainBundle] pathForResource:@"movie" ofType:@"m4v"];
NSURL *url = [NSURL fileURLWithPath:mPath];
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:options];
imgGen = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
imgGen.requestedTimeToleranceBefore = kCMTimeZero;
imgGen.requestedTimeToleranceAfter = kCMTimeZero;
each frame
double time = 0.xyz * CMTimeGetSeconds(asset.duration); // 0.xyz = fraction of the duration
CMTime reqTime = CMTimeMakeWithSeconds(time, preferredTimeScale), actTime;
NSError *err = nil;
CGImageRef ref = [imgGen copyCGImageAtTime:reqTime actualTime:&actTime error:&err];
//... GL calls to make an image from the CGImageRef
This method is incredibly slow for real-time rendering, and I can only generate about 15 frames. One option may be to generate frames on the fly asynchronously, but surely this can be done in real time? The most time-consuming part is the copyCGImageAtTime call.
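For truly real-time frame access, one commonly used approach (an assumption on my part, not mentioned in the question) is AVPlayerItemVideoOutput, which hands you a CVPixelBuffer per displayed frame that can then be uploaded to a GL texture (for example through CVOpenGLESTextureCache):

// Setup: attach a video output to the player item (pixel format chosen for easy GL upload).
NSDictionary *attrs = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
AVPlayerItemVideoOutput *videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:attrs];
AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:asset];
[item addOutput:videoOutput];
AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
[player play];

// Each frame (e.g. driven by a CADisplayLink):
CMTime itemTime = [videoOutput itemTimeForHostTime:CACurrentMediaTime()];
if ([videoOutput hasNewPixelBufferForItemTime:itemTime]) {
    CVPixelBufferRef pixelBuffer = [videoOutput copyPixelBufferForItemTime:itemTime itemTimeForDisplay:NULL];
    // Upload pixelBuffer to a texture (e.g. CVOpenGLESTextureCacheCreateTextureFromImage), then:
    CVPixelBufferRelease(pixelBuffer);
}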