I'm working on an iOS project that captures video and extracts images from it (for example, 10 pictures every 500 ms). My problem is with short videos (a few seconds long): I'm extracting the same pictures 2 or 3 times. I don't have this problem with videos around 10 seconds and longer.
A friend suggested it could be related to the number of key frames in the video.
I'm using AVCaptureDeviceInput, AVCaptureSession and AVCaptureMovieFileOutput to take the video.
So, my questions are:
How can I figure out why I'm extracting the same pictures several times?
Is it a key-frame problem, and is it possible to increase the number of key frames (from the capture session? from the capture device?)?
EDIT:
Here is the code for the picture extraction:
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoUrl options:nil];
AVAssetImageGenerator *generateImg = [[AVAssetImageGenerator alloc] initWithAsset:asset];
NSMutableArray *pictList = [NSMutableArray array];
for (int i = 0; i < timeList.count; i++) {
    NSError *error = NULL;
    CMTime time = CMTimeMake([[timeList objectAtIndex:i] intValue], 1000);
    CGImageRef refImg = [generateImg copyCGImageAtTime:time actualTime:NULL error:&error];
    //NSLog(@"error==%@, Refimage==%@", error, refImg);
    [pictList addObject:[[UIImage alloc] initWithCGImage:refImg]];
    CGImageRelease(refImg); // copyCGImageAtTime returns a +1 reference, so release it
}
And here is the timeList printout:
(lldb) po timeList
<__NSArrayM 0x175afab0>(
0,
131,
262,
393,
524
)
Thanks in advance!
According to the documentation, there are two properties on AVAssetImageGenerator that define a tolerance for how far from the requested time the generated frame may be. If I set both tolerances to zero, it generates exactly the frame I request.
Here is the corrected instantiation of the AVAssetImageGenerator:
AVAssetImageGenerator *generateImg = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generateImg.requestedTimeToleranceBefore = kCMTimeZero;
generateImg.requestedTimeToleranceAfter = kCMTimeZero;
It could be slower with big videos, but in my case there are no timing constraints, so it's perfect.
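As a sanity check (a minimal sketch, assuming the same timeList of millisecond values as above), you can pass an actualTime out-parameter and log it to confirm that each request now resolves to a distinct frame:
for (NSNumber *ms in timeList) {
    NSError *error = nil;
    CMTime requested = CMTimeMake(ms.intValue, 1000);
    CMTime actual;
    CGImageRef img = [generateImg copyCGImageAtTime:requested actualTime:&actual error:&error];
    // With both tolerances at kCMTimeZero, actual should match requested exactly.
    NSLog(@"requested %.3f s -> actual %.3f s", CMTimeGetSeconds(requested), CMTimeGetSeconds(actual));
    CGImageRelease(img);
}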
I guess you're running into frame rate issues if you're requesting up to 10 frames every 500 ms. This could be due to low-frame-rate source data, or it could be due to a calculation issue causing the requested frames to be too close together.
Consider getting the nominalFrameRate of the asset's video track (via tracksWithMediaType: or tracksWithMediaCharacteristic:) and working with multiples of that frame rate, as sketched below.
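A minimal sketch of that idea (assuming an AVURLAsset named asset): read the video track's nominal frame rate and only request times that fall on whole frame boundaries, so two requests can never collapse onto the same frame.
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
float fps = videoTrack.nominalFrameRate; // e.g. around 30 for a typical iPhone recording
Float64 frameDuration = 1.0 / fps;

NSMutableArray *frameAlignedTimes = [NSMutableArray array];
Float64 step = 0.5; // desired spacing in seconds (500 ms)
for (Float64 t = 0; t < CMTimeGetSeconds(asset.duration); t += step) {
    // Snap the requested time to the nearest whole frame.
    Float64 snapped = round(t / frameDuration) * frameDuration;
    [frameAlignedTimes addObject:@(snapped)];
}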
I have a piece of code that reads sample buffers from a video AVAssetTrack:
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
NSDictionary *pixelBufferAttributes = @{
    (id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)
};
AVAssetReaderTrackOutput *assetReaderTrackOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack
                                                                                    outputSettings:pixelBufferAttributes];
NSError *assetReaderCreationError = nil;
AVAssetReader *assetReader = [[AVAssetReader alloc] initWithAsset:asset
                                                            error:&assetReaderCreationError];
[assetReader addOutput:assetReaderTrackOutput];
[assetReader startReading];

while (assetReader.status == AVAssetReaderStatusReading) {
    CMSampleBufferRef sampleBuffer = [assetReaderTrackOutput copyNextSampleBuffer];
    // Some OpenGL operations here
}
// and other operations here, too
The code above works normally for almost all videos, but there's one video that always crashes. Upon inspection, I found that the CMSampleBufferRef has a different size than the real asset.
When printed from the debugger, sampleBuffer has dimensions of 848 x 480, whereas the real asset has dimensions of 1154.88720703125 x 480.
I tried to search about the cause of this issue, but found none. Do any of you have any insight about this? Any comments or input is greatly appreciated.
Thank you!
The "real asset" is reporting the scaled dimensions of the video in points where the video is stretched horizontally from its pixel width of 848.
I am very frustrated because for the last two days I have been searching for a way to edit a video frame and replace it at the same time (the frame's actual time) with the edited frame, but I have been unable to do it.
I have looked at many Stack Overflow links, but none of them fits my requirement; the iFrame extractor is not working for me.
Get image from a video frame in iPhone
In the meantime, I thought I would extract all frames and save them in an array of dictionaries (each frame with its time). Then, when I get a frame from the running video, after editing it I would loop over the saved frames, match the actual time with the captured (edited) frame's time, replace the original frame with the edited one, and finally write the video back out from all the frames.
To do so, I used:
Get all frames of Video IOS 6
but it crashes after extracting some images.
I have written my code like this:
-(void)editMovie:(id)sender
{
    float FPS = 1;
    NSString *videoPath = [[NSBundle mainBundle] pathForResource:@"IMG_0879" ofType:@"MOV"];
    NSURL *videoURl = [NSURL fileURLWithPath:videoPath];
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoURl options:nil];
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    generator.requestedTimeToleranceAfter = kCMTimeZero;
    generator.requestedTimeToleranceBefore = kCMTimeZero;
    for (Float64 i = 0; i < CMTimeGetSeconds(asset.duration) * FPS; i++) {
        @autoreleasepool {
            CMTime time = CMTimeMake((int64_t)i, (int32_t)FPS);
            NSError *err;
            CMTime actualTime;
            CGImageRef image = [generator copyCGImageAtTime:time actualTime:&actualTime error:&err];
            UIImage *generatedImage = [[UIImage alloc] initWithCGImage:image];
            [self saveImage:generatedImage atTime:actualTime]; // Intended to save to the documents directory instead of memory (but see saveImage: below)
            CGImageRelease(image);
        }
    }
}

-(void)saveImage:(UIImage*)image atTime:(CMTime)time
{
    Float64 t = CMTimeGetSeconds(time); // seconds, not time.timescale
    NSMutableDictionary *dict = [[NSMutableDictionary alloc] init];
    [dict setObject:image forKey:@"image"];
    [dict setObject:[NSString stringWithFormat:@"%f", t] forKey:@"time"];
    [arrFrame addObject:dict]; // Note: this keeps every frame in memory, which is likely why it crashes
    NSLog(@"ArrayImg=%@", arrFrame);
}
My requirement is:
1. I need to play a video.
2. While the video is playing, I have to pause it and get the image (frame) at the pause time.
3. I have to edit the captured image and save it, or replace the original frame with it.
4. When I play the video again, the edited image should appear in the video.
Please give me an example if you have one. I have downloaded many projects and examples from links on Stack Overflow and other sites, but none of them is a perfect fit or even fulfills 20% of my requirement.
Please share any example or ideas you have.
I will be much obliged; thanks in advance.
I have two different views that are meant to play the same video. I am creating an app that will switch between the two views several times while the video is running.
I currently load the first view with the video as follows:
NSURL *url = [NSURL URLWithString:@"http://[URL TO VIDEO HERE]"];
AVURLAsset *avasset = [[AVURLAsset alloc] initWithURL:url options:nil];
AVPlayerItem *item = [[AVPlayerItem alloc] initWithAsset:avasset];
player = [[AVPlayer alloc] initWithPlayerItem:item];
playerLayer = [[AVPlayerLayer playerLayerWithPlayer:player] retain];
CGSize size = self.bounds.size;
float x = size.width/2.0 - 202.0;
float y = size.height/2.0 - 100;
//[player play];
playerLayer.frame = CGRectMake(x, y, 404, 200);
playerLayer.backgroundColor = [UIColor blackColor].CGColor;
[self.layer addSublayer:playerLayer];

NSString *tracksKey = @"tracks";
[avasset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:tracksKey] completionHandler:^{
    dispatch_async(dispatch_get_main_queue(), ^{
        NSError *error = nil;
        AVKeyValueStatus status = [avasset statusOfValueForKey:tracksKey error:&error];
        if (status == AVKeyValueStatusLoaded) {
            //videoInitialized = YES;
            [player play];
        }
        else {
            // You should deal with the error appropriately.
            NSLog(@"The asset's tracks were not loaded:\n%@", [error localizedDescription]);
        }
    });
}];
In my second view I want to load the video from the same dispatch_get_main_queue block so that the video in both views stays in sync.
I was hoping someone could help me out with loading the data of the video from the first view into the second view.
It is very simple:
Init the first player:
AVAsset *asset = [AVAsset assetWithURL:URL];
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
Create the second player in the same way, BUT use the same asset as the first one (a sketch follows below).
I have verified this; it works.
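A minimal sketch of the second player, reusing the asset created above (the variable names are taken from the snippet above):
// Reuse the same AVAsset; each player needs its own AVPlayerItem and layer.
AVPlayerItem *secondItem = [AVPlayerItem playerItemWithAsset:asset];
AVPlayer *secondPlayer = [AVPlayer playerWithPlayerItem:secondItem];
AVPlayerLayer *secondLayer = [AVPlayerLayer playerLayerWithPlayer:secondPlayer];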
All the info you need is on the Apple page:
https://developer.apple.com/library/mac/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/02_Playback.html
"This abstraction means that you can play a given asset using different players simultaneously"
This quote is from that page.
I don't think you will be able to get this approach to work. Videos are decoded in hardware, and the decoded buffer is sent to the graphics card. What you seem to want is to decode a video in one view, then capture the contents of that view and show it in a second view. That will not stay in sync, because capturing the first view's contents back into main memory takes time, and those contents would then need to be sent to the video card again. Basically, it is not going to work. You also cannot decode two h.264 video streams and expect them to stay in sync.
You could implement this with another approach entirely: decode the h.264 video to frames on disk (save each frame as a PNG), then write your own loop that decodes the Nth PNG in the series and displays the result in the two different windows (see the sketch below). That is fast enough to be an effective implementation on the newer iPhone 4 and 5 and iPad 2 and 3. If you want a more advanced implementation, take a look at my AVAnimator library for iOS; you could get this approach working in 20 minutes using existing code.
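A minimal sketch of the frame-extraction half of that idea (one possible way, assuming an AVURLAsset named asset and a fixed frame rate); the display loop that shows the Nth PNG in both views is left out:
AVAssetImageGenerator *gen = [[AVAssetImageGenerator alloc] initWithAsset:asset];
gen.requestedTimeToleranceBefore = kCMTimeZero;
gen.requestedTimeToleranceAfter = kCMTimeZero;

Float64 fps = 30.0; // assumed frame rate; read nominalFrameRate from the track in real code
Float64 duration = CMTimeGetSeconds(asset.duration);
NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];

for (NSInteger n = 0; n < (NSInteger)(duration * fps); n++) {
    @autoreleasepool {
        CMTime t = CMTimeMake(n, (int32_t)fps);
        CGImageRef cgImage = [gen copyCGImageAtTime:t actualTime:NULL error:NULL];
        if (!cgImage) continue;
        NSData *png = UIImagePNGRepresentation([UIImage imageWithCGImage:cgImage]);
        NSString *path = [docs stringByAppendingPathComponent:[NSString stringWithFormat:@"frame_%05ld.png", (long)n]];
        [png writeToFile:path atomically:YES]; // one PNG per frame on disk
        CGImageRelease(cgImage);
    }
}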
For this ten-year-old question, which has only ten-year-old answers that are out of date, here's the up-to-date answer.
var leadPlayer: AVPlayer ... the lead player you want to dupe
This does not work:
let leadPlayerItem: AVPlayerItem = leadPlayer.currentItem!
yourPlayer = AVPlayer(playerItem: leadPlayerItem)
yourPlayer.play()
Apple does not allow that (try it, see error).
This works. You must use the item:
let dupeItem: AVPlayerItem = AVPlayerItem(asset: leadPlayer.currentItem!.asset)
yourPlayer = AVPlayer(playerItem: dupeItem)
yourPlayer.play()
Fortunately it's now that easy.
I am recording a video from the iPhone camera using the AVCam code provided by Apple.
After the video is recorded, it is saved to the photos library.
A new view is then loaded; here I need an image thumbnail of the video.
I have a path to the video:
file://localhost/private/var/mobile/Applications/ED45DEFC-ABF9-4A5E-9102-21680CC1448E/tmp/output.mov
I can't seem to figure out how to get the first frame of the video to use as a thumbnail.
Any help would be very appreciated and thank you for your time.
EDIT
I ended up using the code below, but I'm not sure why it returns the image sideways.
- (UIImage*)loadImage {
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:vidURL options:nil];
    AVAssetImageGenerator *generate = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    NSError *err = NULL;
    CMTime time = CMTimeMake(1, 60);
    CGImageRef imgRef = [generate copyCGImageAtTime:time actualTime:NULL error:&err];
    NSLog(@"err==%@, imageRef==%@", err, imgRef);
    return [[UIImage alloc] initWithCGImage:imgRef];
}
To fix the thumbnail orientation, set appliesPreferredTrackTransform to YES on the AVAssetImageGenerator instance. If you add your own video composition, you'll need to include the right transform to rotate the video as desired.
generate.appliesPreferredTrackTransform = YES;
Remember to release the obtained image reference with CGImageRelease.
To request multiple thumbnails, it's better to do it asynchronously with generateCGImagesAsynchronouslyForTimes:completionHandler:, as sketched below.
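A minimal sketch of the asynchronous variant, assuming the same asset and generator setup as in the question; the completion handler is called once per requested time:
AVAssetImageGenerator *generate = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generate.appliesPreferredTrackTransform = YES;

NSArray *times = @[ [NSValue valueWithCMTime:CMTimeMake(1, 60)],
                    [NSValue valueWithCMTime:CMTimeMakeWithSeconds(1.0, 600)] ];

[generate generateCGImagesAsynchronouslyForTimes:times
                               completionHandler:^(CMTime requestedTime, CGImageRef image,
                                                   CMTime actualTime, AVAssetImageGeneratorResult result,
                                                   NSError *error) {
    if (result == AVAssetImageGeneratorSucceeded) {
        UIImage *thumb = [UIImage imageWithCGImage:image]; // UIImage retains the CGImage for later use
        // ... hand the thumbnail to the UI on the main queue
    }
}];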
How can I implement a process that filters a video saved in the photo library on iOS?
I got the URLs of videos in the library using the AssetsLibrary framework,
then made a preview for the video.
As the next step, I want to filter the video using CIFilter.
For the real-time case, I implemented video filtering using AVCaptureVideoDataOutputSampleBufferDelegate.
But for a saved video, I don't know how to implement the filtering.
Do I use AVAsset? If so, how can I filter it, and how do I save the result?
Thank you, as always.
I hope this will help you
AVAsset *theAVAsset = [[AVURLAsset alloc] initWithURL:mNormalVideoURL options:nil];
NSError *error = nil;
float width = theAVAsset.naturalSize.width;
float height = theAVAsset.naturalSize.height;
AVAssetReader *mAssetReader = [[AVAssetReader alloc] initWithAsset:theAVAsset error:&error];

NSArray *videoTracks = [theAVAsset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack *videoTrack = [videoTracks objectAtIndex:0];
mPrefferdTransform = [videoTrack preferredTransform];

NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                    forKey:(id)kCVPixelBufferPixelFormatTypeKey];
AVAssetReaderTrackOutput *mAssetReaderOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack outputSettings:options];
[mAssetReader addOutput:mAssetReaderOutput];
[mAssetReader startReading]; // without this call the status never becomes AVAssetReaderStatusReading

CMSampleBufferRef buffer = NULL;
while ([mAssetReader status] == AVAssetReaderStatusReading) {
    buffer = [mAssetReaderOutput copyNextSampleBuffer]; // read the next frame
    // ... process the buffer here ...
    if (buffer) {
        CFRelease(buffer); // copyNextSampleBuffer returns a +1 reference
    }
}

// Release only after reading is done (the asset and output are still in use above).
[mAssetReaderOutput release];
[theAVAsset release];
You should have a look at CVImageBufferRef pixBuf = CMSampleBufferGetImageBuffer(buffer); that gives you the pixel buffer for the frame, and you can apply your filter to pixBuf (see the sketch below). But I find that the performance is not good. If you have any new ideas, we can discuss it further.
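A minimal sketch of applying a CIFilter to each frame's pixel buffer inside the reading loop above (a sketch only: the filter name and rendering target are assumptions, and writing the filtered frames back out would additionally need an AVAssetWriter):
CIContext *ciContext = [CIContext contextWithOptions:nil];

CVImageBufferRef pixBuf = CMSampleBufferGetImageBuffer(buffer);
CIImage *inputImage = [CIImage imageWithCVPixelBuffer:pixBuf];

// Example filter; substitute whatever CIFilter you need.
CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:@0.8 forKey:kCIInputIntensityKey];
CIImage *outputImage = [filter valueForKey:kCIOutputImageKey];

// Render the filtered frame: a CGImage for display, or render back into a
// CVPixelBuffer appended to an AVAssetWriterInputPixelBufferAdaptor to save it.
CGImageRef filteredFrame = [ciContext createCGImage:outputImage fromRect:[outputImage extent]];
// ... use filteredFrame ...
CGImageRelease(filteredFrame);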