I am receiving raw RGBA data from an AVCaptureVideoDataOutput and using a VTCompressionSession to compress it to a raw H.264 stream.
The problem I have is that the resulting stream plays too fast (when played in VLC), about 3x real speed.
I am using the presentation times and durations from the captured data. Using AVCaptureMovieFileOutput works correctly, but I want more control over the compression.
I have tried setting kVTCompressionPropertyKey_ExpectedFrameRate but that makes no difference.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection {
CMTime presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CMTime duration = CMSampleBufferGetDuration(sampleBuffer);
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
OSStatus encodeStatus = VTCompressionSessionEncodeFrame(compressionSession, pixelBuffer, presentationTime, duration, NULL, NULL, NULL);
if (encodeStatus != noErr) {
NSLog(@"Encode error.");
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
I'm doing it like this:
CFAbsoluteTime currentTime = CFAbsoluteTimeGetCurrent();
CMTime presentationTimeStamp = CMTimeMake(currentTime*27000000, 27000000);
VTCompressionSessionEncodeFrame(_enc_session, imageBuffer, presentationTimeStamp, kCMTimeInvalid, NULL, NULL, NULL);
Also, how do you initialize your compression session? Which 'k' properties do you set, and to what values?
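For comparison, a minimal sketch of the kind of session setup I mean; the dimensions, the output callback name, and every property value below are only illustrative placeholders:
#import <VideoToolbox/VideoToolbox.h>
// Hypothetical callback that would receive the encoded sample buffers
static void myOutputCallback(void *refCon, void *sourceFrameRefCon, OSStatus status,
                             VTEncodeInfoFlags infoFlags, CMSampleBufferRef sampleBuffer);
VTCompressionSessionRef session = NULL;
OSStatus status = VTCompressionSessionCreate(kCFAllocatorDefault,
                                             1280, 720,               // illustrative dimensions
                                             kCMVideoCodecType_H264,
                                             NULL, NULL, NULL,
                                             myOutputCallback,        // hypothetical callback
                                             NULL,
                                             &session);
if (status == noErr) {
    VTSessionSetProperty(session, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
    VTSessionSetProperty(session, kVTCompressionPropertyKey_ProfileLevel,
                         kVTProfileLevel_H264_Baseline_AutoLevel);
    // Example values only
    int fps = 30;
    CFNumberRef fpsNumber = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &fps);
    VTSessionSetProperty(session, kVTCompressionPropertyKey_ExpectedFrameRate, fpsNumber);
    CFRelease(fpsNumber);
    int bitrate = 1000000;
    CFNumberRef bitrateNumber = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &bitrate);
    VTSessionSetProperty(session, kVTCompressionPropertyKey_AverageBitRate, bitrateNumber);
    CFRelease(bitrateNumber);
    VTCompressionSessionPrepareToEncodeFrames(session);
}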
I am using AVAssetReaderTrackOutput to read video.
Setting kCVPixelBufferPixelFormatTypeKey to kCVPixelFormatType_32BGRA works!
But I need 16-bit video.
If I set the value to kCVPixelFormatType_16... it does not work:
[asset_reader_output copyNextSampleBuffer] always returns nil =(
Why is this happening?
How do I change the bit depth?
UPDATE:
[videoWriterInput requestMediaDataWhenReadyOnQueue:queueVideo usingBlock:^
{
while([videoWriterInput isReadyForMoreMediaData])
{
CMSampleBufferRef sampleBuffer=[video_asset_reader_output copyNextSampleBuffer];
if(sampleBuffer)
{
NSLog(@"write video");
[videoWriterInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
} else
{
[videoWriterInput markAsFinished];
dispatch_release(queueVideo);
videoFinished=YES;
break;
}
}
}];
Core Video doesn't support all the pixel formats. BGRA is guaranteed to work though. You have to perform your own conversion. What are you using the buffer for?
UPDATE: To access the pixels, use something like this:
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void* bufferAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
// Read / modify the pixel data with bufferAddress, height & bytesPerRow
// For BGRA format, it's 4-byte per pixel in that order
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
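For example, if the 16-bit format you are after is RGB565, a hand-rolled conversion of the BGRA pixels above could look roughly like this; dst16 and dstBytesPerRow are stand-ins for a second, 16-bit destination buffer that you have created and locked the same way:
uint8_t *src = (uint8_t *)bufferAddress;
for (size_t y = 0; y < height; y++) {
    const uint8_t *srcRow = src + y * bytesPerRow;
    uint16_t *dstRow = (uint16_t *)(dst16 + y * dstBytesPerRow);   // hypothetical destination
    for (size_t x = 0; x < width; x++) {
        uint8_t b = srcRow[x * 4 + 0];
        uint8_t g = srcRow[x * 4 + 1];
        uint8_t r = srcRow[x * 4 + 2];
        // Pack 8-bit B, G, R down to 5-6-5 RGB
        dstRow[x] = (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    }
}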
I am trying to encode a single YUV420P image gathered from a CMSampleBuffer into an AVPacket so that I can send H.264 video over the network with RTMP.
The posted code example seems to work, as avcodec_encode_video2 returns 0 (success); however, got_output is also 0 (the AVPacket is empty).
Does anyone with experience encoding video on iOS devices know what I am doing wrong?
- (void) captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection {
// sampleBuffer now contains an individual frame of raw video
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
// access the data
int width = CVPixelBufferGetWidth(pixelBuffer);
int height = CVPixelBufferGetHeight(pixelBuffer);
int bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
unsigned char *rawPixelBase = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
// Convert the raw pixel base to h.264 format
AVCodec *codec = 0;
AVCodecContext *context = 0;
AVFrame *frame = 0;
AVPacket packet;
//avcodec_init();
avcodec_register_all();
codec = avcodec_find_encoder(AV_CODEC_ID_H264);
if (codec == 0) {
NSLog(@"Codec not found!!");
return;
}
context = avcodec_alloc_context3(codec);
if (!context) {
NSLog(@"Context no bueno.");
return;
}
// Bit rate
context->bit_rate = 400000; // HARD CODE
context->bit_rate_tolerance = 10;
// Resolution
context->width = width;
context->height = height;
// Frames Per Second
context->time_base = (AVRational) {1,25};
context->gop_size = 1;
//context->max_b_frames = 1;
context->pix_fmt = PIX_FMT_YUV420P;
// Open the codec
if (avcodec_open2(context, codec, 0) < 0) {
NSLog(@"Unable to open codec");
return;
}
// Create the frame
frame = avcodec_alloc_frame();
if (!frame) {
NSLog(@"Unable to alloc frame");
return;
}
frame->format = context->pix_fmt;
frame->width = context->width;
frame->height = context->height;
avpicture_fill((AVPicture *) frame, rawPixelBase, context->pix_fmt, frame->width, frame->height);
int got_output = 0;
av_init_packet(&packet);
packet.data = NULL; // let the encoder allocate the output buffer
packet.size = 0;
avcodec_encode_video2(context, &packet, frame, &got_output);
// Unlock the pixel data
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
// Send the data over the network
[self uploadData:[NSData dataWithBytes:packet.data length:packet.size] toRTMP:self.rtmp_OutVideoStream];
}
Note: It is known that this code has memory leaks because I am not freeing the memory that is dynamically allocated.
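For reference, the cleanup that is currently missing would look roughly like this (using the same old FFmpeg API as the code above):
// Free what was allocated above once encoding is finished
av_free_packet(&packet);   // releases the encoder-allocated packet data
avcodec_close(context);    // closes the codec opened with avcodec_open2
av_free(context);          // frees the context from avcodec_alloc_context3
av_free(frame);            // frees the frame from avcodec_alloc_frame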
UPDATE
I updated my code to use @pogorskiy's method. I only try to upload the frame if got_output returns 1, and I clear the buffer once I am done encoding video frames.
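Roughly, the updated encode/upload step now looks like this (same context, frame, and packet variables as above; treat it as a sketch rather than the exact change):
int got_output = 0;
av_init_packet(&packet);
packet.data = NULL; // let the encoder allocate the output buffer
packet.size = 0;
if (avcodec_encode_video2(context, &packet, frame, &got_output) == 0 && got_output == 1) {
    // Only upload when the encoder actually produced a packet
    [self uploadData:[NSData dataWithBytes:packet.data length:packet.size]
              toRTMP:self.rtmp_OutVideoStream];
}
av_free_packet(&packet); // clear the buffer once done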
I am using AVAssetWriter to save the live feed from the camera. This works well using this code
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
if(videoWriter.status != AVAssetWriterStatusWriting){
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:lastSampleTime];
}
if(adaptor.assetWriterInput.readyForMoreMediaData) [adaptor appendPixelBuffer:imageBuffer withPresentationTime:lastSampleTime];
else NSLog(@"adaptor not ready");
}
I am usually getting close to 30 fps (however not 60 fps on iPhone 4s as noted by others) and when timing [adaptor appendPixelBuffer] it only takes a few ms.
However, I don't need the full frame, but I do need high quality (low compression, a key frame every frame), and I am going to read it back and process it several times later. I therefore would like to crop the image before writing. Fortunately I only need a strip in the middle, so I can do a simple memcpy of the buffer. To do this I am creating a CVPixelBufferRef that I am copying into and writing with the adaptor:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
if(videoWriter.status != AVAssetWriterStatusWriting){
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:lastSampleTime];
}
CVPixelBufferLockBaseAddress(imageBuffer,0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
void * buffIn = CVPixelBufferGetBaseAddress(imageBuffer);
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, nil, &pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *buffOut = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(buffOut != NULL);
//Copy the whole buffer while testing
memcpy(buffOut, buffIn, width * height * 4);
//memcpy(buffOut, buffIn+sidecrop, width * 100 * 4);
if (adaptor.assetWriterInput.readyForMoreMediaData) [adaptor appendPixelBuffer:pxbuffer withPresentationTime:lastSampleTime];
else NSLog(@"adaptor not ready");
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
This also works and the video looks OK. However, it is very slow and the frame rate becomes unacceptable. And strangely, the big slowdown isn't the copying, but that the [adaptor appendPixelBuffer] step now takes 10-100 times longer than before. So I guess that it doesn't like the pxbuffer I create, but I can't see why. I am using kCVPixelFormatType_32BGRA when setting up both the video out and the adaptor.
Can anyone suggest a better way to do the copying/cropping? Can you do that directly on the ImageBuffer?
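For reference, the strip copy I had in mind instead of the whole-buffer memcpy above would be roughly the following; it assumes pxbuffer is created with the strip's height (e.g. 200 rows) rather than the full height, and copies row by row so differing bytesPerRow values are handled:
size_t stripHeight = 200;                        // illustrative strip height
size_t startRow = (height - stripHeight) / 2;    // take the strip from the middle
size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(pxbuffer);
for (size_t row = 0; row < stripHeight; row++) {
    memcpy((uint8_t *)buffOut + row * dstBytesPerRow,
           (uint8_t *)buffIn + (startRow + row) * bytesPerRow,
           MIN(bytesPerRow, dstBytesPerRow));
}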
I found a solution. In iOS 5 (I had missed the updates) you can set AVAssetWriter to crop your video (as also noted by Steve). Set AVVideoScalingModeKey to AVVideoScalingModeResizeAspectFill:
videoWriter = [[AVAssetWriter alloc] initWithURL:filmurl
fileType:AVFileTypeQuickTimeMovie
error:&error];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:1280], AVVideoWidthKey,
[NSNumber numberWithInt:200], AVVideoHeightKey,
AVVideoScalingModeResizeAspectFill, AVVideoScalingModeKey,// This turns the
// scale into a crop
nil];
videoWriterInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings] retain];
I am recording video and audio using an AVAssetWriter, appending CMSampleBuffers from an AVCaptureVideoDataOutput and an AVCaptureAudioDataOutput respectively. What I want to do is mute the audio during the recording at the user's discretion.
I assume the best way is to somehow create an empty CMSampleBuffer, like this:
CMSampleBufferRef sb;
CMSampleBufferCreate(kCFAllocatorDefault, NULL, YES, NULL, NULL, NULL, 0, 1, &sti, 0, NULL, &sb);
[_audioInputWriter appendSampleBuffer:sb];
CFRelease(sb);
but that doesn't work, so I am assuming that I need to create a silent audio buffer. How do I do this and is there a better way?
I have done this before by calling a function that processes the data in the sample buffer and zeroes all of it. You might need to modify this if your audio format does not use an SInt16 sample size.
You can also use this same technique to process the audio in other ways.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
if(isMute){
[self muteAudioInBuffer:sampleBuffer];
}
}
- (void) muteAudioInBuffer:(CMSampleBufferRef)sampleBuffer
{
CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);
NSUInteger channelIndex = 0;
CMBlockBufferRef audioBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t audioBlockBufferOffset = (channelIndex * numSamples * sizeof(SInt16));
size_t lengthAtOffset = 0;
size_t totalLength = 0;
SInt16 *samples = NULL;
CMBlockBufferGetDataPointer(audioBlockBuffer, audioBlockBufferOffset, &lengthAtOffset, &totalLength, (char **)(&samples));
for (NSInteger i=0; i<numSamples; i++) {
samples[i] = (SInt16)0;
}
}
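As a small illustration of the two notes above (the SInt16 assumption and processing the audio in other ways), here is a sketch that is not part of the original answer: confirm the format first, and the same loop shape inside muteAudioInBuffer: can then, for example, halve the volume instead of muting:
// Confirm the sample format before touching the data as SInt16
const AudioStreamBasicDescription *asbd = CMAudioFormatDescriptionGetStreamBasicDescription(
    CMSampleBufferGetFormatDescription(sampleBuffer));
if (asbd && asbd->mBitsPerChannel == 16 && (asbd->mFormatFlags & kAudioFormatFlagIsSignedInteger)) {
    // Same pattern as the muting loop, but scaling instead of zeroing
    for (NSInteger i = 0; i < numSamples; i++) {
        samples[i] = (SInt16)(samples[i] / 2);   // roughly -6 dB instead of silence
    }
}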
I am capturing video using the AVFoundation framework, with the help of the Apple documentation: http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_MediaCapture.html%23//apple_ref/doc/uid/TP40010188-CH5-SW2
Now I did the following things:
1. Created a videoCaptureDevice
2. Created an AVCaptureDeviceInput and set the videoCaptureDevice
3. Created an AVCaptureVideoDataOutput and implemented its delegate
4. Created an AVCaptureSession, set the input to the AVCaptureDeviceInput, and set the output to the AVCaptureVideoDataOutput
5. In the AVCaptureVideoDataOutput delegate method
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
I got a CMSampleBuffer, converted it into a UIImage, and tested displaying it in a UIImageView using
[self.imageView performSelectorOnMainThread:@selector(setImage:) withObject:image waitUntilDone:YES];
Everything went well up to this point.
My problem is:
I need to send the video frames through a UDP socket. Even though I knew it was a bad idea, I tried converting the UIImage to NSData and sending it via a UDP packet, but I got a big delay in the video processing, mostly because of the UIImage-to-NSData conversion.
So please give me a solution for my problem:
1) Is there any way to convert a CMSampleBuffer or CVImageBuffer to NSData?
2) Is there something like Audio Queue Services, but a queue for video, to store the UIImages, do the UIImage-to-NSData conversion, and send the data?
If I am riding behind the wrong algorithm, please point me in the right direction.
Thanks in advance.
Here is code to get at the buffer. This code assumes a flat (non-planar) image, e.g. BGRA.
NSData* imageToBuffer( CMSampleBufferRef source) {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
CVPixelBufferLockBaseAddress(imageBuffer,0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);
NSData *data = [NSData dataWithBytes:src_buff length:bytesPerRow * height];
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
return data; // dataWithBytes:length: already returns an autoreleased object
}
A more efficient approach would be to use an NSMutableData or a buffer pool.
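For instance, the NSMutableData variant could reuse a single allocation across frames instead of creating a new NSData every time (reusableFrameData is a hypothetical ivar):
if (reusableFrameData == nil) {
    // Allocate once, sized for one frame
    reusableFrameData = [[NSMutableData alloc] initWithLength:bytesPerRow * height];
}
// Overwrite the same backing store for every new frame
memcpy([reusableFrameData mutableBytes], src_buff, bytesPerRow * height);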
Sending a 480x360 image every second will require roughly a 4.1 Mbit/s connection, assuming 3 color channels (480 × 360 × 3 bytes × 8 bits ≈ 4.1 Mbit per frame).
Use CMSampleBufferGetImageBuffer to get a CVImageBufferRef from the sample buffer, then get the bitmap data from it with CVPixelBufferGetBaseAddress. This avoids needlessly copying the image.
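A rough sketch of that zero-copy idea, keeping the pixel buffer locked while the bytes are handed to the socket (the send method is a placeholder for whatever UDP write you use):
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
const uint8_t *baseAddress = (const uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t length = CVPixelBufferGetBytesPerRow(imageBuffer) * CVPixelBufferGetHeight(imageBuffer);
[self sendBytes:baseAddress length:length];   // hypothetical UDP send
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);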