In my app I want to take frames from a video in order to filter them. I try to grab a frame from the video by time offset. This is my code:
- (UIImage *)getVideoFrameForTime:(NSDate *)time {
    CGImageRef thumbnailImageRef = NULL;
    NSError *igError = nil;
    NSTimeInterval timeinterval = [time timeIntervalSinceDate:self.videoFilterStart];
    CMTime atTime = CMTimeMakeWithSeconds(timeinterval, 1000);

    thumbnailImageRef = [self.assetImageGenerator copyCGImageAtTime:atTime
                                                         actualTime:NULL
                                                              error:&igError];
    if (!thumbnailImageRef) {
        NSLog(@"thumbnailImageGenerationError %@", igError);
    }

    UIImage *image = thumbnailImageRef ? [[UIImage alloc] initWithCGImage:thumbnailImageRef] : nil;
    if (thumbnailImageRef) {
        CGImageRelease(thumbnailImageRef); // copyCGImageAtTime: returns a CGImage you own
    }
    return image;
}
Unfortunately, I only see frames located at integer seconds: 1, 2, 3, and so on, even when the time interval is non-integer (1.5, etc.).
How can I get frames at non-integer intervals?
Thanks to @shallowThought I found the answer in this question: Grab frames from video using Swift.
You just need to add these two lines:
assetImgGenerate.requestedTimeToleranceAfter = kCMTimeZero;
assetImgGenerate.requestedTimeToleranceBefore = kCMTimeZero;
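For context, a minimal sketch of how the generator from the question (self.assetImageGenerator) might be set up with those tolerances; the self.videoAsset name is my assumption:
//Sketch: configuring the generator before calling getVideoFrameForTime:
//(self.videoAsset is assumed to hold the AVAsset being filtered)
self.assetImageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:self.videoAsset];
self.assetImageGenerator.appliesPreferredTrackTransform = YES;
//With the default tolerances the generator may return a nearby (often keyframe) image,
//which is why only whole-second frames come back
self.assetImageGenerator.requestedTimeToleranceBefore = kCMTimeZero;
self.assetImageGenerator.requestedTimeToleranceAfter = kCMTimeZero;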
Use this project to get more frame details.
The corresponding project on GitHub: iFrameExtractor.git
If I remember correctly, NSDate's accuracy only goes up to the second, which would explain why frames are only taken on integer seconds. You'll have to use a different type of input to get frames at non-integer seconds.
I want to display some CMSampleBuffers with an AVSampleBufferDisplayLayer, but it freezes after showing the first sample.
I get the sample buffers from the AVCaptureVideoDataOutputSampleBufferDelegate:
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CFRetain(sampleBuffer);
[self imageToBuffer:sampleBuffer];
CFRelease(sampleBuffer);
}
and put them into a vector:
- (void)imageToBuffer:(CMSampleBufferRef)source {
    //buffers is defined as: std::vector<CMSampleBufferRef> buffers;
    CMSampleBufferRef newRef;
    CMSampleBufferCreateCopy(kCFAllocatorDefault, source, &newRef);
    buffers.push_back(newRef);
}
Then I try to show them via an AVSampleBufferDisplayLayer (in another view controller):
AVSampleBufferDisplayLayer * displayLayer = [[AVSampleBufferDisplayLayer alloc] init];
displayLayer.bounds = self.view.bounds;
displayLayer.position = CGPointMake(CGRectGetMidX(self.displayOnMe.bounds), CGRectGetMidY(self.displayOnMe.bounds));
displayLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
displayLayer.backgroundColor = [[UIColor greenColor] CGColor];
[self.view.layer addSublayer:displayLayer];
self.view.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
dispatch_queue_t queue = dispatch_queue_create("My queue", DISPATCH_QUEUE_SERIAL);
[displayLayer setNeedsDisplay];
[displayLayer requestMediaDataWhenReadyOnQueue:queue usingBlock:^{
    while ([displayLayer isReadyForMoreMediaData]) {
        if (samplesKey < buffers.size()) {
            CMSampleBufferRef buf = buffers[samplesKey];
            [displayLayer enqueueSampleBuffer:buf];
            samplesKey++;
        } else {
            [displayLayer stopRequestingMediaData];
            break;
        }
    }
}];
But it shows the first sample, then freezes and does nothing.
And my video data output settings are as follows:
//set up our output
self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
dispatch_queue_t queue = dispatch_queue_create("VideoQueue", DISPATCH_QUEUE_SERIAL);
[_videoDataOutput setSampleBufferDelegate:self queue:queue];
[_videoDataOutput setVideoSettings:[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA],(id)kCVPixelBufferPixelFormatTypeKey,
nil]];
I came across this problem in the same context, trying to take the output from AVCaptureVideoDataOutput and display it in an AVSampleBufferDisplayLayer.
If your frames come out in display order, then the fix is very easy: just set the display-immediately attachment on the CMSampleBufferRef.
Get the sample buffer returned by the delegate and then...
CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, YES);
CFMutableDictionaryRef dict = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);
If your frames come out in encoder order (not display order), then the timestamps on the CMSampleBuffer need to be zero-biased and restamped so that the first frame's timestamp is equal to time 0.
double pts = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer));
// ptsStart is equal to the first frame's presentationTimeStamp so playback starts from time 0.
CMTime presentationTimeStamp = CMTimeMake((pts-ptsStart)*1000000,1000000);
CMSampleBufferSetOutputPresentationTimeStamp(sampleBuffer, presentationTimeStamp);
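A rough sketch of how that restamping could sit inside the capture callback; ptsStart and havePtsStart are ivars I'm assuming you keep yourself, they are not part of the original code:
//Remember the PTS of the very first buffer, then restamp everything relative to it
if (!havePtsStart) {
    ptsStart = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer));
    havePtsStart = YES;
}
double pts = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer));
CMTime outputPTS = CMTimeMake((int64_t)((pts - ptsStart) * 1000000), 1000000);
CMSampleBufferSetOutputPresentationTimeStamp(sampleBuffer, outputPTS);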
Update:
I ran into a situation where some video still wasn't playing smoothly when I used the zero bias method and I investigated further. The correct answer seems to be using the PTS from the first frame you intend to play.
My answer is in the question below, but I will post it here, too:
Set rate at which AVSampleBufferDisplayLayer renders sample buffers
The Timebase needs to be set to the presentation time stamp (pts) of the first frame you intend to decode. I was indexing the pts of the first frame to 0 by subtracting the initial pts from all subsequent pts and setting the Timebase to 0. For whatever reason, that didn't work with certain video.
You want something like this (called before a call to decode):
CMTimebaseRef controlTimebase;
CMTimebaseCreateWithMasterClock( CFAllocatorGetDefault(), CMClockGetHostTimeClock(), &controlTimebase );
displayLayer.controlTimebase = controlTimebase;
// Set the timebase to the initial pts here
CMTimebaseSetTime(displayLayer.controlTimebase, CMTimeMake(ptsInitial, 1));
CMTimebaseSetRate(displayLayer.controlTimebase, 1.0);
Set the PTS for the CMSampleBuffer...
CMSampleBufferSetOutputPresentationTimeStamp(sampleBuffer, presentationTimeStamp);
And maybe make sure display immediately isn't set....
CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanFalse);
This is covered very briefly in WWDC 2014 Session 513.
I am using AVCaptureSession for recording video, but I can't set a maximum video length. With UIImagePickerController there is a videoMaximumDuration property for setting the maximum video duration, but how can I set a maximum duration with AVCaptureSession? Please help me. Thanks in advance.
You can set the maximum duration using the maxRecordedDuration property of your AVCaptureMovieFileOutput.
Here's an example.
self.movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];

Float64 maximumVideoLength = 60; //Whatever value you wish to set as the maximum, in seconds
int32_t preferredTimeScale = 30; //Frames per second
CMTime maxDuration = CMTimeMakeWithSeconds(maximumVideoLength, preferredTimeScale);
self.movieFileOutput.maxRecordedDuration = maxDuration;
self.movieFileOutput.minFreeDiskSpaceLimit = 1024 * 1024;

if ([self.captureSession canAddOutput:self.movieFileOutput]) {
    [self.captureSession addOutput:self.movieFileOutput];
}
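As a side note, when the limit is hit the output stops recording on its own and your recording delegate is told why. A rough sketch of checking for that, assuming you started recording with startRecordingToOutputFileURL:recordingDelegate: and conform to AVCaptureFileOutputRecordingDelegate:
//When maxRecordedDuration is reached, recording stops and the delegate is called
//with an error whose code is AVErrorMaximumDurationReached
- (void)captureOutput:(AVCaptureFileOutput *)output
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error {
    if (error && error.code == AVErrorMaximumDurationReached) {
        NSLog(@"Recording stopped because the maximum duration was reached.");
    }
    //The recorded movie is at outputFileURL either way
}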
I hope this answers your question.
I have a movie file; what I'm trying to do is show the first frame of this movie in a UIImage on my screen.
The user selects the UIImage/button and it plays the movie. The issue I'm having is how to get this initial screenshot.
The best example I can use to demonstrate: on an iPhone/iPad, make a video, go to the library, and you can see a thumbnail showing the initial frame of the video. How can I achieve this?
Thanks.
To get a frame grab:
AVAssetImageGenerator *generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:destinationAsset];
NSError *error = nil;
//Grab a frame 3 seconds in
int frameTimeStart = 3;
int frameLocation = 1; //used as the CMTime timescale
//Snatch a frame
CGImageRef frameRef = [generator copyCGImageAtTime:CMTimeMake(frameTimeStart, frameLocation)
                                        actualTime:NULL
                                             error:&error];
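From there you would presumably wrap the CGImageRef in a UIImage for your thumbnail/button and release the copy you own; a minimal sketch:
//copyCGImageAtTime: returns a CGImage you own, so release it once it's wrapped
UIImage *thumbnail = frameRef ? [UIImage imageWithCGImage:frameRef] : nil;
if (frameRef) {
    CGImageRelease(frameRef);
}
//thumbnail can now go into your UIImageView or button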
I have been scouring the internet and have been looking high and low for any type of code to help me zoom in on barcodes using ZXing.
I started with the code from their git site here
https://github.com/zxing/zxing
Since then I have been able to increase the default resolution to 1920x1080.
self.captureSession.sessionPreset = AVCaptureSessionPreset1920x1080;
This would be fine, but the issue is that I am scanning very small barcodes, and even though 1920x1080 helps, it doesn't give me any kind of zoom to capture closer to a smaller barcode without losing focus. The resolution did help me quite a bit, but it's simply not close enough.
I'm thinking what I need to do is put the capture session in a scroll view that is 1920x1080 and then set the actual image capture to take from the bounds of my screen, so I can zoom in and out of the scroll view itself to achieve a "zoom" kind of effect.
The problem with that is I'm really not sure where to start... any ideas?
OK, since I have seen this multiple times on here and no one seems to have an answer, I thought I would share my own.
There are two properties NO ONE seems to know about. I'll cover both.
Now the first one is good for iOS 6+. Apple added a property called videoScaleAndCropFactor.
This is a CGFloat. The only downfall is that you have to set the value on your AVCaptureConnection, and in iOS 6 that connection has to belong to a still image capture; it will not work with anything else. This means you have to set it up to work asynchronously, and you have to loop your code so the decoder keeps taking and decoding pictures at that scale.
The last thing is that you have to scale your preview layer yourself.
This all sounds like a lot of work, and it really is. However, it means your original scan picture is taken at 1920x1080 (or whatever you have it set to) rather than scaling up an existing image, which stretches pixels and causes the decoder to miss the barcode.
So this will look something like this:
stillImageConnection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[stillImageConnection setVideoOrientation:AVCaptureVideoOrientationPortrait];
[stillImageConnection setVideoScaleAndCropFactor:effectiveScale];
[stillImageOutput setOutputSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCMPixelFormat_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[stillImageOutput captureStillImageAsynchronouslyFromConnection:stillImageConnection
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
{
if(error)
return;
NSString *path = [NSString stringWithFormat:@"%@%@",
                  [[NSBundle mainBundle] resourcePath],
                  @"/blank.wav"];
SystemSoundID soundID;
NSURL *filePath = [NSURL fileURLWithPath:path isDirectory:NO];
AudioServicesCreateSystemSoundID((__bridge CFURLRef)filePath, &soundID);
AudioServicesPlaySystemSound(soundID);
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
/*Lock the image buffer*/
CVPixelBufferLockBaseAddress(imageBuffer,0);
/*Get information about the image*/
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
uint8_t* baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
void* free_me = 0;
if (true) { // iOS bug?
uint8_t* tmp = baseAddress;
int bytes = bytesPerRow*height;
free_me = baseAddress = (uint8_t*)malloc(bytes);
baseAddress[0] = 0xdb;
memcpy(baseAddress,tmp,bytes);
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext =
CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst);
CGImageRef capture = CGBitmapContextCreateImage(newContext);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
free(free_me);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
Decoder* d = [[Decoder alloc] init];
[self decoding:d withimage:&capture];
}];
}
Now for the second one, which is coming in iOS 7 and changes EVERYTHING I just said: a new property called videoZoomFactor. This is a CGFloat, and it changes everything at the TOP of the stack rather than only affecting the still image capture.
In other words, you won't have to manually zoom your preview layer, you won't have to go through some still-image-capture loop, and you won't have to set it on an AVCaptureConnection. You simply set the CGFloat and it scales everything for you.
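For what it's worth, here's a rough sketch of how I'd expect that to look once you can target iOS 7; device is assumed to be the AVCaptureDevice backing your session's video input, and treat this as untested:
//Sketch for iOS 7+: zoom the whole pipeline via AVCaptureDevice's videoZoomFactor
NSError *zoomError = nil;
if ([device lockForConfiguration:&zoomError]) {
    CGFloat desiredZoom = 2.0; //example value, e.g. taken from your pinch gesture
    //Clamp to what the active format actually supports
    CGFloat maxZoom = device.activeFormat.videoMaxZoomFactor;
    device.videoZoomFactor = MAX(1.0, MIN(desiredZoom, maxZoom));
    [device unlockForConfiguration];
} else {
    NSLog(@"Could not lock device for configuration: %@", zoomError);
}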
Now I know it's going to be a while before you can publish iOS 7 applications, so I would seriously consider figuring out how to do this the hard way. Quick tips: I would use a pinch gesture to set the CGFloat for videoScaleAndCropFactor. Don't forget to set the value to 1 in your viewDidLoad so you can scale from there. At the same time, in your gesture handler you can use a CATransaction to scale the preview layer.
Here's a sample of how to do the gesture capture and preview layer scaling:
- (IBAction)handlePinchGesture:(UIPinchGestureRecognizer *)recognizer
{
effectiveScale = recognizer.scale;
if (effectiveScale < 1.0)
effectiveScale = 1.0;
if (effectiveScale > 25)
effectiveScale = 25;
stillImageConnection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[stillImageConnection setVideoScaleAndCropFactor:effectiveScale];
[CATransaction begin];
[CATransaction setAnimationDuration:0];
[prevLayer setAffineTransform:CGAffineTransformMakeScale(effectiveScale, effectiveScale)];
[CATransaction commit];
}
Hope this helps someone out! I may go ahead and just do a video tutorial on this; it depends on what kind of demand there is for it, I guess.
I'm developing an iOS app that gets UIImages at random times from an internet connection and progressively constructs a video file from them as the images come in. I got it working a little, but the fact that the images don't arrive at the same rate all the time is messing up the video.
How do I recalculate CMTime when each new UIImage arrives, so that it adjusts for the varying frame rate of the arriving UIImages, which can arrive anywhere from milliseconds to seconds apart?
Here is what I'm doing so far; some code is not shown, but here is the basic idea:
.
.
adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput: videoStream
sourcePixelBufferAttributes:attributes];
CMTime frameTime=CMTimeMake(1,10); // assumed initial frame rate
.
.
-(void)addImageToMovie:(UIImage*)img {
append_ok=FALSE;
buffer = [self pixelBufferFromCGImage:[img CGImage] andSize:img.size];
while (!append_ok) {
if (adaptor.assetWriterInput.readyForMoreMediaData){
frameTime.value +=1;
append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
[NSThread sleepForTimeInterval:0.01];
} else {
[NSThread sleepForTimeInterval:0.01];
}
}
if(buffer) {
CVBufferRelease(buffer);
}
}
It depends on the number of frames. Instead of adding 1, add 10 to frameTime.value.
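Building on that suggestion, a sketch of one way to generalize it (my own idea, not part of the answer above): derive frameTime from how much wall-clock time has actually passed since the first image arrived, so irregular arrivals land at roughly the right spot on the timeline. firstImageDate is an assumed NSDate ivar recorded when the first UIImage comes in.
//Sketch: stamp each frame by real elapsed time instead of a fixed increment
//(frameTime keeps its timescale of 10, so value counts tenths of a second)
NSTimeInterval elapsed = [[NSDate date] timeIntervalSinceDate:firstImageDate];
frameTime.value = (CMTimeValue)(elapsed * frameTime.timescale);
append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];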