I can't find any documentation from Apple to explain why this piece of code runs at different speeds depending on how many times it's been run.
- (void)speedTest2:(CIImage *)source {
    NSTimeInterval start = CFAbsoluteTimeGetCurrent();
    CIFilter *filter = [CIFilter filterWithName:@"CIColorInvert"];
    [filter setValue:source forKey:kCIInputImageKey];
    CGImageRef cgImage = [_context createCGImage:filter.outputImage fromRect:source.extent];
    UIImage *output = [UIImage imageWithCGImage:cgImage];
    if (cgImage)
        CFRelease(cgImage);
    _source.image = output;
    NSLog(@"time: %0.3fms", 1000.0f * (CFAbsoluteTimeGetCurrent() - start));
}
Run times:
Fresh app install, first call to the method = 206ms
App restarted, first call to the method = 61ms
Second and subsequent calls to the method (3rd, 4th, ...) = 14ms
The same source image is being used for every run.
I know Core Image concatenates the filter chain. Is this somehow being cached? Can I pre-cache this operation so users don't get hit with performance problems on their first app launch?
This one is making me crazy :(
A portion of the overhead may be the image library itself loading. If the effects are implemented as pixel shaders, there may well be a compilation step going on behind the scenes.
This hidden cost is unavoidable, but you can choose to pay it at a more convenient time, for example while the application is loading.
I would suggest loading a small image (1x1 px) and applying some effects to it during load to see if it helps.
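To make that concrete, a minimal warm-up could look like the sketch below, assuming the same _context and "CIColorInvert" filter from the question; the method name and the 1x1 solid-color input are placeholders:
// Hypothetical warm-up, e.g. called from application:didFinishLaunchingWithOptions:.
// Rendering a tiny image through the same filter forces any shader compilation to
// happen before the user triggers the real work.
- (void)warmUpCoreImage {
    CIImage *tiny = [[CIImage imageWithColor:[CIColor colorWithRed:0 green:0 blue:0]]
                     imageByCroppingToRect:CGRectMake(0, 0, 1, 1)];
    CIFilter *filter = [CIFilter filterWithName:@"CIColorInvert"];
    [filter setValue:tiny forKey:kCIInputImageKey];
    CGImageRef cgImage = [_context createCGImage:filter.outputImage fromRect:CGRectMake(0, 0, 1, 1)];
    if (cgImage) CGImageRelease(cgImage);
}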
You may also want to try the official Apple forums for a response.
There are three ways to create a context to draw the outputImage: contextWithOptions:, which creates a context on the GPU or CPU depending on your device; contextWithEAGLContext:; and contextWithEAGLContext:options:, both of which create a context on the GPU. See the Core Image Programming Guide.
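For reference, the three options might look roughly like this (the eaglContext variable and the working-color-space option are illustrative assumptions, not part of the answer above):
// 1. CPU or GPU, chosen by Core Image based on the device and options:
CIContext *defaultContext = [CIContext contextWithOptions:nil];

// 2. GPU-backed, tied to an OpenGL ES context:
EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *gpuContext = [CIContext contextWithEAGLContext:eaglContext];

// 3. GPU-backed with explicit options, e.g. disabling color management:
CIContext *gpuContextNoColorManagement =
    [CIContext contextWithEAGLContext:eaglContext
                              options:@{ kCIContextWorkingColorSpace : [NSNull null] }];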
Related
I started working on my first non-demo React Native app. I hope it will become an iOS/Android app, but for now I'm focused on iOS only.
I have one problem at the moment: how can I get data (base64, an array of pixels, ...) in real time from the camera without saving to the camera roll?
There is this module: https://github.com/lwansbrough/react-native-camera but base64 is deprecated and useless to me, because I want to render a processed image to the user (e.g. with changed colors), not the raw picture from the camera as the react-native-camera module provides.
(I know how to communicate with Swift code, but I don't know what the options are in native code; I come from a web development background.)
Thanks a lot.
This may not be optimal but is what I have been using. If anyone can give a better solution, I would appreciate your help, too!
My basic idea is to loop (not a simple for-loop, see below), taking still pictures in YUV/RGB format at maximum resolution, which is reasonably fast (~x0ms with a normal exposure duration), and process them. Basically you set up an AVCaptureStillImageOutput linked to your camera (following the tutorials that are everywhere), then set the format to kCVPixelFormatType_420YpCbCr8BiPlanarFullRange (if you want YUV) or kCVPixelFormatType_32BGRA (if you prefer RGBA), like
bool usingYUVFormat = true;
NSDictionary *outputFormat = [NSDictionary dictionaryWithObject:
                              [NSNumber numberWithInt:usingYUVFormat ? kCVPixelFormatType_420YpCbCr8BiPlanarFullRange : kCVPixelFormatType_32BGRA]
                                                         forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[yourAVCaptureStillImageOutput setOutputSettings:outputFormat];
When you are ready, you can start calling
AVCaptureConnection *captureConnection=[yourAVCaptureStillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[yourAVCaptureStillImageOutput captureStillImageAsynchronouslyFromConnection:captureConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    if (imageDataSampleBuffer) {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        // do your magic with the data buffer imageBuffer
        // use CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0/1/2); to get each plane
        // use CVPixelBufferGetWidth/CVPixelBufferGetHeight to get dimensions
        // if you want more, please google
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0); // unlock when you're done with the buffer
    }
}];
Additionally, use NSNotificationCenter to register your photo-taking action and post a notification after you have processed each frame (perhaps with some delay, to cap your throughput and reduce power consumption) so the loop keeps going.
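A rough sketch of that notification-driven loop, where kFrameProcessedNotification and captureNextFrame are placeholder names of my own:
static NSString * const kFrameProcessedNotification = @"FrameProcessed"; // placeholder

// Somewhere in setup:
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(captureNextFrame)
                                             name:kFrameProcessedNotification
                                           object:nil];
[self captureNextFrame]; // kick off the loop

// At the end of the completion handler, once the buffer has been processed:
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.05 * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    [[NSNotificationCenter defaultCenter] postNotificationName:kFrameProcessedNotification
                                                        object:nil];
});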
A quick caveat: the Android counterpart is a much worse headache. Few hardware manufacturers implement an API for max-resolution uncompressed photos; most only offer 1080p for preview/video, as I raised in my own question. I am still looking for solutions but have mostly given up hope. JPEG images are just too slow.
In my iOS app I am using the +imageNamed: method to load an image (many times and in many different places in the code).
In one case the user might update (download) a new image.
When I try to load the new image, the old one is shown, due to caching.
From the "Is there a way to clear the cache used by UIImage class?" question, I saw that I have to use the -initWithContentsOfFile: method.
But this will not take advantage of the caching speedup that the +imageNamed: enjoys. All I want is to "tell" the cache that the file has changed, so it needs to "re-cache" it. And then keep using the +imageNamed: method with the new cached image.
In other words, I use the +imageNamed: method (say) 10 times, I change the image, I "tell" the cache, then I continue use the +imageNamed: method another (say) 10 times. If I change all the +imageNamed: to -initWithContentsOfFile: then I lose the caching advantage.
Is there a way/trick to do that?
There is no API for clearing the cache. If your app is not destined for the App Store, you could call the private method:
[UIImage _flushSharedImageCache];
However I wouldn't want this anywhere near production code.
Instead I would create a category on UIImage and add a method for returning the desired image from a filename. This name would be stored and then updated when your new image is downloaded. You will get the benefit of caching, without any hacky workarounds.
Depending on the complexity of your project, a simple find and replace shouldn't take too long.
Although I'm now questioning how your app currently works: imageNamed only looks for files in your app's bundle, so it won't work for images downloaded by the user.
You'll probably just have to figure out your own way of caching your images.
I'd suggest using a UIImage category with a static NSMutableDictionary that can hold your cached images. Then just use your custom caching method when initialising your UIImage.
For example:
@interface UIImage (UIImageCache)
+ (UIImage *)cachedImageFile:(NSString *)imageFile;
+ (void)resetCacheForImageFile:(NSString *)imageFile;
@end

@implementation UIImage (UIImageCache)
static NSMutableDictionary *cachedImages;

+ (UIImage *)cachedImageFile:(NSString *)imageFile {
    // Optional error checking
    NSAssert1([[NSFileManager defaultManager] fileExistsAtPath:imageFile], @"Warning! The image file %@ doesn't exist.", imageFile);
    if (!cachedImages) cachedImages = [NSMutableDictionary dictionary];
    UIImage *cachedImg = [cachedImages objectForKey:imageFile];
    if (cachedImg) return cachedImg; // Image is cached, return it
    else { // No cached image, create one
        UIImage *img = [UIImage imageWithContentsOfFile:imageFile]; // iOS won't auto-cache the image.
        [cachedImages setObject:img forKey:imageFile];
        return img;
    }
}

+ (void)resetCacheForImageFile:(NSString *)imageFile {
    [cachedImages removeObjectForKey:imageFile];
}
@end
Maybe I'm late to the party... but using
+ (UIImage *)imageWithContentsOfFile:(NSString *)path
I got rid of the cache issue.
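For example, something along these lines, where the Documents path and file name are placeholders for wherever your downloaded image lives:
// Loads the image directly from disk each time, bypassing UIImage's internal cache.
NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject];
NSString *imagePath = [docsDir stringByAppendingPathComponent:@"avatar.png"]; // placeholder file name
UIImage *freshImage = [UIImage imageWithContentsOfFile:imagePath];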
Hope it helps!!
I am trying to use Path's FastImageCache library to handle photos in my app. The sample they provide simply reads the images from disk. Does anyone know how I might modify it to read from a URL? In the section about providing source images to the cache they have:
- (void)imageCache:(FICImageCache *)imageCache wantsSourceImageForEntity:(id<FICEntity>)entity withFormatName:(NSString *)formatName completionBlock:(FICImageRequestCompletionBlock)completionBlock {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Fetch the desired source image by making a network request
        NSURL *requestURL = [entity sourceImageURLWithFormatName:formatName];
        UIImage *sourceImage = [self _sourceImageForURL:requestURL];
        dispatch_async(dispatch_get_main_queue(), ^{
            completionBlock(sourceImage);
        });
    });
}
Has anyone used this API before and knows how to get the source image from the server to pass to the cache? Another example, which still reads from disk, is:
- (void)imageCache:(FICImageCache *)imageCache wantsSourceImageForEntity:(id<FICEntity>)entity withFormatName:(NSString *)formatName completionBlock:(FICImageRequestCompletionBlock)completionBlock {
    // Images typically come from the Internet rather than from the app bundle directly, so this would be the place to fire off a network request to download the image.
    // For the purposes of this demo app, we'll just access images stored locally on disk.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIImage *sourceImage = [(FICDPhoto *)entity sourceImage];
        dispatch_async(dispatch_get_main_queue(), ^{
            completionBlock(sourceImage);
        });
    });
}
I worked on Fast Image Cache while I was at Path. The critical portion of Fast Image Cache is that it is the absolute fastest way to go from image data on disk to being rendered by Core Animation. No decoding happens, none of the image data is kept in memory by your app, and no image copies occur.
That said, the responsibility is yours to figure out how to download the images. There's nothing inherently special about downloading images. You can use NSURLConnection or one of many popular networking libraries (like AFNetworking) to actually download the image data from your server. Once you have that image data, you can call the relevant completion block for Fast Image Cache to have it optimize it for future rendering.
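As a sketch of that flow, the delegate method from the question could fire off an NSURLSession data task and hand the result back to Fast Image Cache (error handling is elided, and the URL comes from the entity exactly as in the question):
- (void)imageCache:(FICImageCache *)imageCache wantsSourceImageForEntity:(id<FICEntity>)entity withFormatName:(NSString *)formatName completionBlock:(FICImageRequestCompletionBlock)completionBlock {
    NSURL *requestURL = [entity sourceImageURLWithFormatName:formatName];
    NSURLSessionDataTask *task = [[NSURLSession sharedSession] dataTaskWithURL:requestURL
                                                             completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        // Build the image from the downloaded data (nil if the download failed).
        UIImage *sourceImage = data ? [UIImage imageWithData:data] : nil;
        dispatch_async(dispatch_get_main_queue(), ^{
            completionBlock(sourceImage); // Fast Image Cache takes over from here
        });
    }];
    [task resume];
}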
If you're looking for a simple way to download an image and display it when it's finished, then use something like SDWebImage. It's great for simple cases like that. If you are running into performance bottlenecks—especially with scrolling—as a result of your app needing to display tons of images quickly, then Fast Image Cache is perfect for you.
Your approach sounds a lot like lazy-loading images from a URL. I had to do this once, and I used the following library. It doesn't store the images on disk but uses cached images:
https://github.com/nicklockwood/AsyncImageView
I added the networking logic to our fork > https://github.com/DZNS/FastImageCache#dezine-zync-additions-to-the-class
It utilizes NSURLSessionDownloadTasks and has a couple of optional configuration settings. All you need to do is create a new instance of DZFICNetworkController and set it as the delegate of FICImageCache's sharedCache instance. It will take care of downloading images based on the sourceImageURLWithFormatName: method of your objects conforming to <FICEntity>.
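Based on that description, wiring it up might look roughly like this (the exact initializer and shared-instance accessor may differ; check the fork's README):
// Make the fork's network controller responsible for fetching source images.
DZFICNetworkController *networkController = [[DZFICNetworkController alloc] init];
[FICImageCache sharedImageCache].delegate = networkController;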
As I assume you'd use this in a UITableView or UICollectionView, calling cancelImageRetrievalForEntity:withFormatName: on the imageCache will cancel the download operation (if it's still in-flight or hasn't started).
I have one (or possibly two) CVPixelBufferRef objects I am processing on the CPU, and then placing the results onto a final CVPixelBufferRef. I would like to do this processing on the GPU using GLSL instead, because the CPU can barely keep up (these are frames of live video). I know this is possible "directly" (i.e. writing my own OpenGL code), but from the (absolutely impenetrable) sample code I've looked at, it's an insane amount of work.
Two options seem to be:
1) GPUImage: This is an awesome library, but I'm a little unclear if I can do what I want easily. First thing I tried was requesting OpenGLES compatible pixel buffers using this code:
@{ (NSString *)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA],
   (NSString *)kCVPixelBufferOpenGLESCompatibilityKey : [NSNumber numberWithBool:YES] };
Then transferring data from the CVPixelBufferRef to GPUImageRawDataInput as follows:
// setup:
_foreground = [[GPUImageRawDataInput alloc] initWithBytes:nil size:CGSizeMake(0, 0) pixelFormat:GPUPixelFormatBGRA type:GPUPixelTypeUByte];
// call for each frame:
[_foreground updateDataFromBytes:CVPixelBufferGetBaseAddress(foregroundPixelBuffer)
                            size:CGSizeMake(CVPixelBufferGetWidth(foregroundPixelBuffer), CVPixelBufferGetHeight(foregroundPixelBuffer))];
However, my CPU usage goes from 7% to 27% on an iPhone 5S just with that line (no processing or anything). This suggests there's some copying going on on the CPU, or something else is wrong. Am I missing something?
2) OpenFrameworks: OF is commonly used for this type of thing, and OF projects can easily be set up to use GLSL. However, two questions remain about this solution: 1. Can I use OpenFrameworks as a library, or do I have to rejigger my whole app just to use its OpenGL features? I don't see any tutorials or docs that show how I might do this without starting from scratch and creating an OF app. 2. Is it possible to use a CVPixelBufferRef as a texture?
I am targeting iOS 7+.
I was able to get this to work using the GPUImageMovie class. If you look inside this class, you'll see that there's a private method called:
- (void)processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime
This method takes a CVPixelBufferRef as input.
To access this method, declare a class extension that exposes it inside your class
@interface GPUImageMovie ()
- (void)processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime;
@end
Then initialize the class, set up the filter, and pass it your video frame:
GPUImageMovie *gpuMovie = [[GPUImageMovie alloc] initWithAsset:nil]; // <- call initWithAsset even though there's no asset
// to initialize internal data structures
// connect filters...
// Call the method we exposed
[gpuMovie processMovieFrame:myCVPixelBufferRef withSampleTime:kCMTimeZero];
One thing: you need to request your pixel buffers with kCVPixelFormatType_420YpCbCr8BiPlanarFullRange in order to match what the library expects.
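If your CVPixelBufferRefs come from an AVCaptureVideoDataOutput, requesting that format might look like this (the videoOutput variable is a placeholder for your own capture pipeline):
// Ask the capture output for bi-planar full-range YUV buffers.
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                                   @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };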
I'm writing an iOS app that applies filters to existing video files and outputs the results to new ones. Initially, I tried using Brad Larson's nice framework, GPUImage. Although I was able to output filtered video files without much effort, the output wasn't perfect: the videos were the proper length, but some frames were missing, and others were duplicated (see Issue 1501 for more info). I plan to learn more about OpenGL ES so that I can better investigate the dropped/skipped frames issue. However, in the meantime, I'm exploring other options for rendering my video files.
I'm already familiar with Core Image, so I decided to leverage it in an alternative video-filtering solution. Within the block passed to AVAssetWriterInput's requestMediaDataWhenReadyOnQueue:usingBlock:, I filter and output each frame of the input video file like so:
CMSampleBufferRef sampleBuffer = [self.assetReaderVideoOutput copyNextSampleBuffer];
if (sampleBuffer != NULL)
{
    CMTime presentationTimeStamp = CMSampleBufferGetOutputPresentationTimeStamp(sampleBuffer);
    CVPixelBufferRef inputPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:inputPixelBuffer];
    // a CIFilter created outside the "isReadyForMoreMediaData" loop
    [screenBlend setValue:frame forKey:kCIInputImageKey];
    CVPixelBufferRef outputPixelBuffer;
    CVReturn result = CVPixelBufferPoolCreatePixelBuffer(NULL, assetWriterInputPixelBufferAdaptor.pixelBufferPool, &outputPixelBuffer);
    // verify that everything's gonna be ok
    NSAssert(result == kCVReturnSuccess, @"CVPixelBufferPoolCreatePixelBuffer failed with error code");
    NSAssert(CVPixelBufferGetPixelFormatType(outputPixelBuffer) == kCVPixelFormatType_32BGRA, @"Wrong pixel format");
    [self.coreImageContext render:screenBlend.outputImage toCVPixelBuffer:outputPixelBuffer];
    BOOL success = [assetWriterInputPixelBufferAdaptor appendPixelBuffer:outputPixelBuffer withPresentationTime:presentationTimeStamp];
    CVPixelBufferRelease(outputPixelBuffer);
    CFRelease(sampleBuffer);
    sampleBuffer = NULL;
    completedOrFailed = !success;
}
This works well: the rendering seems reasonably fast, and the resulting video file doesn't have any missing or duplicated frames. However, I'm not confident that my code is as efficient as it could be. Specifically, my questions are:
Does this approach allow the device to keep all frame data on the GPU, or are there any methods (e.g. imageWithCVPixelBuffer: or render:toCVPixelBuffer:) that prematurely copy pixels to the CPU?
Would it be more efficient to use CIContext's drawImage:inRect:fromRect: to draw to an OpenGLES context?
If the answer to #2 is yes, what's the proper way to pipe the results of drawImage:inRect:fromRect: into a CVPixelBufferRef so that it can be appended to the output video file?
I've searched for an example of how to use CIContext drawImage:inRect:fromRect: to render filtered video frames, but haven't found any. Notably, the source for GPUImageMovieWriter does something similar, but since a) I don't really understand it yet, and b) it's not working quite right for this use case, I'm wary of copying its solution.