I am currently working on live filters using Metal.
After defining my CIImage, I render the image to an MTLTexture.
Below is my rendering code. context is a CIContext backed by Metal; targetTexture is an alias for the texture attached to the currentDrawable property of my MTKView instance:
context?.render(drawImage, to: targetTexture, commandBuffer: commandBuffer, bounds: targetRect, colorSpace: colorSpace)
It renders correctly, as I can see the image being displayed on the Metal view.
The problem is that after rendering the image (and displaying it), I want to extract a CVPixelBuffer and save it to disk using AVAssetWriter.
An alternative would be to have two rendering steps: one rendering to the texture and another rendering to a CVPixelBuffer. (But it isn't clear how to create such a buffer, or what impact two rendering steps would have on the frame rate.)
Any help will be appreciated, Thanks!
You can try to copy the raw data from the MTLTexture like this (note that the buffer property is only non-nil when the texture was created from an MTLBuffer; a drawable's texture is not, so in that case you have to copy the bytes out with getBytes(_:bytesPerRow:from:mipmapLevel:) instead):
var outPixelbuffer: CVPixelBuffer?
if let bytes = targetTexture.buffer?.contents() {
    // The CV pixel format must match the texture's pixel format
    // (kCVPixelFormatType_64RGBAHalf corresponds to .rgba16Float).
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                 targetTexture.width,
                                 targetTexture.height,
                                 kCVPixelFormatType_64RGBAHalf,
                                 bytes,
                                 targetTexture.bufferBytesPerRow,
                                 nil, nil, nil,
                                 &outPixelbuffer)
}
This Objective-C helper takes the getBytes route for a BGRA texture:
+ (void)getPixelBufferFromBGRAMTLTexture:(id<MTLTexture>)texture result:(void (^)(CVPixelBufferRef pixelBuffer))block {
    CVPixelBufferRef pxbuffer = NULL;
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];

    // Copy the raw BGRA bytes out of the texture.
    size_t imageByteCount = texture.width * texture.height * 4;
    void *imageBytes = malloc(imageByteCount);
    NSUInteger bytesPerRow = texture.width * 4;
    MTLRegion region = MTLRegionMake2D(0, 0, texture.width, texture.height);
    [texture getBytes:imageBytes bytesPerRow:bytesPerRow fromRegion:region mipmapLevel:0];

    // Wrap the copied bytes in a pixel buffer (no further copy is made).
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, texture.width, texture.height,
                                 kCVPixelFormatType_32BGRA, imageBytes, bytesPerRow,
                                 NULL, NULL, (__bridge CFDictionaryRef)options, &pxbuffer);
    if (block) {
        block(pxbuffer);
    }

    // The backing bytes are freed as soon as this method returns, so the pixel
    // buffer must be fully consumed inside the block rather than retained.
    CVPixelBufferRelease(pxbuffer);
    free(imageBytes);
}
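For the original use case (feeding an AVAssetWriter), another option is the "two renders" idea from the question: render the CIImage a second time straight into a CVPixelBuffer vended by the writer input's pixel buffer pool, and append that. A rough sketch, assuming adaptor (an AVAssetWriterInputPixelBufferAdaptor), ciContext, drawImage, targetRect, colorSpace and presentationTime already exist; these names are placeholders, and the extra render pass costs GPU time, so it is worth profiling:
CVPixelBufferRef pixelBuffer = NULL;
// The adaptor's pool creates buffers that match the writer input's settings.
// Note: pixelBufferPool is only non-NULL once the writer has started writing.
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                     adaptor.pixelBufferPool,
                                                     &pixelBuffer);
if (status == kCVReturnSuccess && adaptor.assetWriterInput.readyForMoreMediaData) {
    // Second render pass: same CIImage, but into the pixel buffer instead of the drawable.
    [ciContext render:drawImage
      toCVPixelBuffer:pixelBuffer
               bounds:targetRect
           colorSpace:colorSpace];
    [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
}
if (pixelBuffer) {
    CVPixelBufferRelease(pixelBuffer);
}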
I want to use a video as a texture on an object in OpenGL ES 2.0 on iOS.
I create an AVPlayer with an AVPlayerItemVideoOutput, setting:
NSDictionary *videoOutputOptions = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
[NSDictionary dictionary], kCVPixelBufferIOSurfacePropertiesKey,
nil];
self.videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:videoOutputOptions];
Then I get a CVPixelBufferRef for each moment in time:
CMTime currentTime = [self.videoOutput itemTimeForHostTime:CACurrentMediaTime()];
CVPixelBufferRef buffer = [self.videoOutput copyPixelBufferForItemTime:currentTime itemTimeForDisplay:NULL];
Then I convert it to a UIImage with this method:
+ (UIImage *)imageWithCVPixelBufferUsingUIGraphicsContext:(CVPixelBufferRef)pixelBuffer
{
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    int w = CVPixelBufferGetWidth(pixelBuffer);
    int h = CVPixelBufferGetHeight(pixelBuffer);
    int r = CVPixelBufferGetBytesPerRow(pixelBuffer);
    int bytesPerPixel = r / w;
    unsigned char *bufferU = CVPixelBufferGetBaseAddress(pixelBuffer);

    UIGraphicsBeginImageContext(CGSizeMake(w, h));
    CGContextRef c = UIGraphicsGetCurrentContext();
    unsigned char *data = CGBitmapContextGetData(c);
    if (data) {
        int maxY = h;
        for (int y = 0; y < maxY; y++) {
            for (int x = 0; x < w; x++) {
                int offset = bytesPerPixel * ((w * y) + x);
                data[offset] = bufferU[offset];         // R
                data[offset + 1] = bufferU[offset + 1]; // G
                data[offset + 2] = bufferU[offset + 2]; // B
                data[offset + 3] = bufferU[offset + 3]; // A
            }
        }
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CFRelease(pixelBuffer);

    return image;
}
As a result I get the required frame from the video.
Finally, I try to update the texture with:
- (void)setupTextureWithImage:(UIImage *)image
{
    if (_texture.name) {
        GLuint textureName = _texture.name;
        glDeleteTextures(1, &textureName);
    }
    NSError *error;
    _texture = [GLKTextureLoader textureWithCGImage:image.CGImage options:nil error:&error];
    if (error) {
        NSLog(@"Error during loading texture: %@", error);
    }
}
I call this method in the GLKView's update method, but the result is a black screen; only the audio is available.
Can anyone explain what I'm doing wrong? It looks like I'm doing something wrong with the textures...
The issue is most likely somewhere other than in the code you posted. To check the texture itself, create a snapshot (a feature in Xcode) and see if you can see the correct texture there. Maybe your coordinates are incorrect or some parameters are missing when displaying the textured object; it could be that you forgot to enable some attributes or that the shaders are not present...
Since you got this far, I suggest you first try to draw a colored square, then try to apply a texture (not from the video) to it until you get the correct result. Then implement the texture from the video.
And just a suggestion: since you are getting raw pixel data from the video, you should consider creating only one texture and then using the texture sub-image function to update it directly with the data, instead of doing these iterations and transformations on the image. glTexSubImage2D will take your buffer pointer directly and do the update, as in the sketch below.
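A minimal sketch of that suggestion, assuming an ES2 context, a 32BGRA pixel buffer with no row padding, and a texture that has already been allocated once with glTexImage2D (the method name here is a placeholder):
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>
#import <CoreVideo/CoreVideo.h>

- (void)updateTexture:(GLuint)textureName fromPixelBuffer:(CVPixelBufferRef)pixelBuffer
{
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    GLsizei width  = (GLsizei)CVPixelBufferGetWidth(pixelBuffer);
    GLsizei height = (GLsizei)CVPixelBufferGetHeight(pixelBuffer);

    glBindTexture(GL_TEXTURE_2D, textureName);
    // GL_BGRA_EXT comes from the BGRA8888 texture-format extension available on iOS;
    // this also assumes bytesPerRow == width * 4 (no row padding).
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_BGRA_EXT, GL_UNSIGNED_BYTE,
                    CVPixelBufferGetBaseAddress(pixelBuffer));

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}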
I tried launching it on a device and it works fine.
It looks like the problem is that the simulator does not support some operations.
I am grabbing CIImages from CVPixelBufferRefs and then rendering those CIImages back to CVPixelBufferRefs. The result is a black movie. I have tried several variations of creating the new CVPixelBufferRef, but the result is always the same.
CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

CVPixelBufferRef pbuff = NULL;
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                         nil];
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                      640,
                                      480,
                                      kCVPixelFormatType_32BGRA,
                                      (__bridge CFDictionaryRef)options,
                                      &pbuff);
if (status == kCVReturnSuccess) {
    [temporaryContext render:ciImage
             toCVPixelBuffer:pbuff
                      bounds:ciImage.extent
                  colorSpace:CGColorSpaceCreateDeviceRGB()];
} else {
    NSLog(@"Failed create pbuff");
}
What am I doing wrong?
It turns out that the CIImage becomes nil right after creating it in the simulator. I did find that if I run the same code on a device, then it works.
You need to use glReadPixels to manually read the pixels into the buffer. You can find more about this here.
A link to an implementation is here; a rough sketch of the idea follows.
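A minimal sketch of that suggestion, assuming the frame was drawn with OpenGL ES into the currently bound framebuffer and that the pixel buffer's rows are not padded (these assumptions and the helper name are not from the original answer):
#import <OpenGLES/ES2/gl.h>
#import <CoreVideo/CoreVideo.h>

// Reads the currently bound GL framebuffer into the pixel buffer's backing memory.
static void ReadFramebufferIntoPixelBuffer(CVPixelBufferRef pixelBuffer)
{
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    GLsizei width  = (GLsizei)CVPixelBufferGetWidth(pixelBuffer);
    GLsizei height = (GLsizei)CVPixelBufferGetHeight(pixelBuffer);

    // GL_RGBA / GL_UNSIGNED_BYTE is always a valid read format in ES2;
    // if the buffer is 32BGRA the red and blue channels will need swapping.
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE,
                 CVPixelBufferGetBaseAddress(pixelBuffer));

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}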
Here is how I implement the AVCaptureVideoDataOutputSampleBufferDelegate:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);
CGRect videoRect = CGRectMake(0.0f, 0.0f, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer));
AVCaptureVideoOrientation videoOrientation = [[[_captureOutput connections] objectAtIndex:0] videoOrientation];
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *baseaddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
cv::Mat my_mat = cv::Mat(videoRect.size.height, videoRect.size.width, NULL, baseaddress, 0); //<<<<----HERE
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
Here is how I set the capture format:
OSType format = kCVPixelFormatType_32BGRA;
// Check YUV format is available before selecting it (iPhone 3 does not support it)
if ([_captureOutput.availableVideoCVPixelFormatTypes containsObject:
[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]]) {
format = kCVPixelFormatType_420YpCbCr8BiPlanarFullRange;
}
_captureOutput.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithUnsignedInt:format]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
The problem happens because NULL is passed as the 3rd parameter. It should be CV_8UC4 for a 4-channel image:
cv::Mat my_mat = cv::Mat(videoRect.size.height, videoRect.size.width, CV_8UC4, baseaddress);
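As a follow-up (not part of the original answer): the rows of a CVPixelBuffer can be padded, so it is safer to pass the buffer's bytes-per-row as the cv::Mat step parameter, something like:
// Assuming the 32BGRA capture format; for the bi-planar YUV format, plane 0
// is a single-channel (CV_8UC1) Y plane with its own bytes-per-row.
cv::Mat my_mat((int)CVPixelBufferGetHeight(pixelBuffer),
               (int)CVPixelBufferGetWidth(pixelBuffer),
               CV_8UC4,
               CVPixelBufferGetBaseAddress(pixelBuffer),
               CVPixelBufferGetBytesPerRow(pixelBuffer));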
I want to convert a YUV 420SP image (captured directly from the camera, in YCbCr format) to JPEG on iOS. What I have found is the CGImageCreate() function https://developer.apple.com/library/mac/documentation/graphicsimaging/reference/CGImage/Reference/reference.html#//apple_ref/doc/uid/TP30000956-CH1g-F17167 , which takes a few parameters, including the byte array containing the pixels, and should return a CGImage; a UIImage made from that, when passed to UIImageJPEGRepresentation(), should return JPEG data, but that is not really happening.
The output image data is far from what is required. At least the output is not nil.
As input to CGImageCreate(), I am setting bits per component to 4, bits per pixel to 12, and some default values.
Can it really convert a YUV YCbCr image and not only RGB? If yes, then I think I am doing something wrong with the input values to the CGImageCreate function.
From what I can see here, the CGColorSpaceRef colorspace parameter can refer to RGB, CMYK, or grayscale only.
So I think you first need to convert your YCbCr420 image to RGB, for example using the IPP function YCbCr420toRGB (doc). Alternatively, you can write your own conversion routine; it's not that hard, and a sketch is given below.
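A minimal sketch of such a routine for the bi-planar 420 full-range format (kCVPixelFormatType_420YpCbCr8BiPlanarFullRange), using full-range BT.601 coefficients; video-range data needs an extra offset and scale, and the RGBA output here would then go into CGImageCreate with 8 bits per component and 32 bits per pixel:
#import <CoreVideo/CoreVideo.h>

static inline uint8_t Clamp255(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

// Converts an NV12-style pixel buffer (plane 0: Y, plane 1: interleaved CbCr)
// into a caller-allocated RGBA buffer of size width * height * 4.
static void ConvertBiPlanar420ToRGBA(CVPixelBufferRef pixelBuffer, uint8_t *rgba)
{
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    uint8_t *yPlane = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    uint8_t *cbcr   = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    size_t yStride    = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    size_t cbcrStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);

    for (size_t row = 0; row < height; row++) {
        for (size_t col = 0; col < width; col++) {
            int y  = yPlane[row * yStride + col];
            int cb = cbcr[(row / 2) * cbcrStride + (col & ~1)] - 128;
            int cr = cbcr[(row / 2) * cbcrStride + (col & ~1) + 1] - 128;

            uint8_t *out = rgba + (row * width + col) * 4;
            out[0] = Clamp255(y + (int)(1.402 * cr));                     // R
            out[1] = Clamp255(y - (int)(0.344 * cb) - (int)(0.714 * cr)); // G
            out[2] = Clamp255(y + (int)(1.772 * cb));                     // B
            out[3] = 255;                                                 // A
        }
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}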
Here's code for converting a sample buffer returned by the captureOutput:didOutputSampleBuffer:fromConnection: method of AVCaptureVideoDataOutput:
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    GLubyte *rawImageBytes = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer); // 2560 == (640 * 4)
    size_t bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
    size_t bufferHeight = CVPixelBufferGetHeight(pixelBuffer); // 480
    size_t dataSize = CVPixelBufferGetDataSize(pixelBuffer); // 1_228_808 = (2560 * 480) + 8

    CGColorSpaceRef defaultRGBColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rawImageBytes, bufferWidth, bufferHeight, 8, bytesPerRow,
                                                 defaultRGBColorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef image = CGBitmapContextCreateImage(context);

    CFMutableDataRef imageData = CFDataCreateMutable(NULL, 0);
    CGImageDestinationRef destination = CGImageDestinationCreateWithData(imageData, kUTTypeJPEG, 1, NULL);
    NSDictionary *properties = @{(__bridge id)kCGImageDestinationLossyCompressionQuality : @(0.25),
                                 (__bridge id)kCGImageDestinationBackgroundColor : (__bridge id)CLEAR_COLOR,
                                 (__bridge id)kCGImageDestinationOptimizeColorForSharing : @(TRUE)
                                 };
    CGImageDestinationAddImage(destination, image, (__bridge CFDictionaryRef)properties);
    if (!CGImageDestinationFinalize(destination))
    {
        CFRelease(imageData);
        imageData = NULL;
    }
    CFRelease(destination);

    UIImage *frame = [[UIImage alloc] initWithCGImage:image];
    CGContextRelease(context);
    CGImageRelease(image);

    renderFrame([self.childViewControllers.lastObject.view viewWithTag:1].layer, frame);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
Here are your three options for pixel format types:
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
kCVPixelFormatType_32BGRA
If _captureOutput is the reference to my instance of AVCaptureVideoDataOutput, this is how you set the pixel format type:
[_captureOutput setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)}];
Strange problem. I take frames from a video file (.mov) and write them with AVAssetWriter to another file without any explicit processing. Actually, I just copy each frame from one memory buffer to another and then flush them through a pixel buffer adaptor. Then I take the resulting file, delete the original file, put the resulting file in place of the original, and do the same operation again. The interesting thing is that the size of the file constantly grows! Can somebody explain why?
if (adaptor.assetWriterInput.readyForMoreMediaData == YES) {
    CVImageBufferRef cvimgRef = nil;
    CMTime lastTime = CMTimeMake(fcounter++, 30);
    CMTime presentTime = CMTimeAdd(lastTime, frameTime);
    CMSampleBufferRef framebuffer = nil;
    CGImageRef frameImg = nil;
    if ([asr status] == AVAssetReaderStatusReading) {
        framebuffer = [asset_reader_output copyNextSampleBuffer];
        frameImg = [self imageFromSampleBuffer:framebuffer withColorSpace:rgbColorSpace];
    }
    if (frameImg && screenshot) {
        //CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(framebuffer);
        CVReturn stat = CVPixelBufferLockBaseAddress(screenshot, 0);
        pxdata = CVPixelBufferGetBaseAddress(screenshot);
        bufferSize = CVPixelBufferGetDataSize(screenshot);
        // Get the number of bytes per row for the pixel buffer.
        bytesPerRow = CVPixelBufferGetBytesPerRow(screenshot);
        // Get the pixel buffer width and height.
        width = CVPixelBufferGetWidth(screenshot);
        height = CVPixelBufferGetHeight(screenshot);
        // Create a Quartz direct-access data provider that uses data we supply.
        CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, pxdata, bufferSize, NULL);
        CGImageAlphaInfo ai = CGImageGetAlphaInfo(frameImg);
        size_t bpx = CGImageGetBitsPerPixel(frameImg);
        CGColorSpaceRef fclr = CGImageGetColorSpace(frameImg);
        // Create a bitmap image from data supplied by the data provider.
        CGImageRef cgImage = CGImageCreate(width, height, 8, 32, bytesPerRow, rgbColorSpace,
                                           kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big,
                                           dataProvider, NULL, true, kCGRenderingIntentDefault);
        CGDataProviderRelease(dataProvider);
        stat = CVPixelBufferLockBaseAddress(finalPixelBuffer, 0);
        pxdata = CVPixelBufferGetBaseAddress(finalPixelBuffer);
        bytesPerRow = CVPixelBufferGetBytesPerRow(finalPixelBuffer);
        CGContextRef context = CGBitmapContextCreate(pxdata, imgsize.width, imgsize.height, 8,
                                                     bytesPerRow, rgbColorSpace, kCGImageAlphaNoneSkipLast);
        CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(frameImg), CGImageGetHeight(frameImg)), frameImg);
        //CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        //CGImageRef myMaskedImage;
        const CGFloat myMaskingColors[6] = {0, 0, 0, 1, 0, 0};
        CGImageRef myColorMaskedImage = CGImageCreateWithMaskingColors(cgImage, myMaskingColors);
        //CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(myColorMaskedImage), CGImageGetHeight(myColorMaskedImage)), myColorMaskedImage);
        [adaptor appendPixelBuffer:finalPixelBuffer withPresentationTime:presentTime];
    }
}
Well, the mystery seems to be solved. The problem was an inappropriate codec configuration.
This is the set of configuration options I use now, and it seems to do the job:
NSDictionary *codecSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:1100000], AVVideoAverageBitRateKey,
[NSNumber numberWithInt:5],AVVideoMaxKeyFrameIntervalKey,
nil];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:[SharedApplicationData sharedData].overlayView.frame.size.width], AVVideoWidthKey,
[NSNumber numberWithInt:[SharedApplicationData sharedData].overlayView.frame.size.height], AVVideoHeightKey,
codecSettings,AVVideoCompressionPropertiesKey,
nil];
AVAssetWriterInput* writerInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
Now the file size still grows, but at a much slower pace. There is a tradeoff between file size and video quality: reducing the size affects the quality.
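A side note that goes beyond the original answer: if the frames really are copied unmodified, re-encoding can be avoided entirely by using passthrough, i.e. creating the writer input with nil outputSettings and appending the reader's sample buffers directly, which keeps the file from being re-compressed on every pass. A rough sketch, with writer and asset_reader_output as placeholders:
// Passthrough: nil outputSettings means samples are written as-is, without re-encoding.
AVAssetWriterInput *passthroughInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:nil];
[writer addInput:passthroughInput];

CMSampleBufferRef sample = NULL;
while ((sample = [asset_reader_output copyNextSampleBuffer])) {
    while (!passthroughInput.readyForMoreMediaData) {
        [NSThread sleepForTimeInterval:0.01]; // crude back-pressure for the sketch
    }
    [passthroughInput appendSampleBuffer:sample];
    CFRelease(sample);
}
[passthroughInput markAsFinished];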