Cannot create PDF document with 400+ pages on iOS

I am using the following pseudocode to generate a PDF document:
CGContextRef context = CGPDFContextCreateWithURL(url, &rect, NULL);
for (int i = 1; i <= N; i++)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    CGContextBeginPage(context, &mediaBox);
    // drawing code
    CGContextEndPage(context);
    [pool release];
}
CGContextRelease(context);
It works well with small documents (N < 100 pages), but it uses too much memory and crashes once the document has more than about 400 pages (it received two memory warnings before crashing). I have made sure there are no leaks using Instruments. What is your advice for creating large PDF documents on iOS? Thanks a lot.
Edit: the PDF creation is done on a background thread.

Since you're creating a single document via CGPDFContextCreateWithURL, the entire thing has to be held in memory and appended to, which commonly requires keeping a full before-and-after copy of the document (though I can't say for certain that iOS and CGPDFContextCreateWithURL work this way). No leak is needed to create a problem, even without the before-and-after issue.
If you aren't trying to capture a bunch of existing UIKit-drawn content -- and in your sample it seems that you're not -- use UIKit's built-in PDF rendering functions instead. UIGraphicsBeginPDFContextToFile writes the pages out to disk as they're added, so the whole document doesn't have to be held in memory at once. You should be able to generate a huge PDF that way.
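As a rough sketch of that approach (the page size, page count N, and per-page drawing here are placeholders, not part of the original answer):
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"large.pdf"];
CGRect mediaBox = CGRectMake(0, 0, 612, 792); // assumed US Letter page size
UIGraphicsBeginPDFContextToFile(path, mediaBox, nil);
for (int i = 1; i <= N; i++)
{
    // Drain autoreleased objects every page (or use NSAutoreleasePool under MRC, as in the question)
    @autoreleasepool
    {
        UIGraphicsBeginPDFPageWithInfo(mediaBox, nil);
        CGContextRef pageContext = UIGraphicsGetCurrentContext();
        // drawing code for page i, using pageContext
    }
}
UIGraphicsEndPDFContext();
Each UIGraphicsBeginPDFPageWithInfo call starts a new page in the file on disk, so memory use should stay roughly flat regardless of the page count.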

Probably not the answer you want to hear, but looking at it from another perspective:
Could you treat it as a limitation of the device? First check the number of pages in the PDF and, if it is too large, warn the user, so the case is handled gracefully.
You could then expand on this:
Construct small PDFs on the device, and if the PDF is too large, construct it server-side the next time the device has a network connection.

If you allocate too much memory, your app will crash. Why is generating an unusually large PDF a goal? What are you actually trying to accomplish?

What about using a memory-mapped file to back your CGDataConsumer? Then the document doesn't necessarily have to fit in RAM all at once.
I created an example here: https://gist.github.com/3748250
Use it like this:
NSURL *url = [NSURL fileURLWithPath:@"pdf.pdf"];
MemoryMappedDataConsumer *consumer = [[MemoryMappedDataConsumer alloc] initWithURL:url];
CGDataConsumerRef cgDataConsumer = [consumer CGDataConsumer];
CGContextRef c = CGPDFContextCreate(cgDataConsumer, NULL, NULL);
CGDataConsumerRelease(cgDataConsumer);
// write your PDF to context `c`
CGPDFContextClose(c);
CGContextRelease(c);

Related

MTLBuffer with MTLStorageModePrivate mode

I'm relatively new to Metal, and I have a pretty straightforward question. I simply cannot create an MTLBuffer with the MTLStorageModePrivate option:
id<MTLBuffer> privateBuff = [device newBufferWithLength:dataLength options:MTLStorageModePrivate];
The Metal validation layer fails with this assertion:
-[MTLDebugDevice validateResourceOptions:isTexture:isIOSurface:]:437: failed assertion `options 0x2 conveys invalid cpuCacheMode of 0x2'
And it doesn't make much sense. I'm creating a buffer that can be accessed only from the GPU, so I need no CPU cache mode whatsoever for this particular resource. I suppose I need to turn that CPU cache mode off, but how?
I looked through MTLCPUCacheMode, but it has nothing about turning the CPU cache mode off completely.
Interesting note: I absolutely can create MTLHeap with MTLStorageModePrivate, but not MTLBuffer.
Any help would be appreciated. Thanks in advance!
UPDATE: I can create MTLBuffer with MTLStorageModePrivate by using MTLHeap. It looks something like this:
// Describe a heap that lives entirely in GPU (private) memory
MTLHeapDescriptor *heapDescriptor = [MTLHeapDescriptor new];
heapDescriptor.storageMode = MTLStorageModePrivate;
heapDescriptor.size = 0;
// Ask the device how much heap space the buffer needs, then round up to the alignment
MTLSizeAndAlign sizeAndAlign = [device heapBufferSizeAndAlignWithLength:lutSharedBuffer.length options:MTLResourceStorageModePrivate];
sizeAndAlign.size += (sizeAndAlign.size & (sizeAndAlign.align - 1)) + sizeAndAlign.align;
heapDescriptor.size += sizeAndAlign.size;
// Allocate the heap, then sub-allocate the private buffer from it
privateHeap = [device newHeapWithDescriptor:heapDescriptor];
privateBuff = [privateHeap newBufferWithLength:lutSharedBuffer.length options:MTLResourceStorageModePrivate]; // now it works!
But it's still not possible to do without the heap.
The issue here seems to be that you're using the incorrect enum to specify your resource options. In your first snippet, you use MTLStorageModePrivate, but you should be using MTLResourceStorageModePrivate, which contains a bit shift to place the storage mode in the correct bits.
MTLResourceStorageModePrivate = MTLStorageModePrivate << MTLResourceStorageModeShift
In Swift, this would have caused a compile-time error.
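With the resource-options enum, the original one-line creation should work without a heap (a minimal sketch, reusing device and dataLength from the question):
// Note: MTLResourceStorageModePrivate (a MTLResourceOptions value), not MTLStorageModePrivate
id<MTLBuffer> privateBuff = [device newBufferWithLength:dataLength options:MTLResourceStorageModePrivate];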

CVPixelBufferRef as a GPU Texture

I have one (or possibly two) CVPixelBufferRef objects I am processing on the CPU, and then placing the results onto a final CVPixelBufferRef. I would like to do this processing on the GPU using GLSL instead, because the CPU can barely keep up (these are frames of live video). I know this is possible "directly" (i.e. writing my own OpenGL code), but from the (absolutely impenetrable) sample code I've looked at, it's an insane amount of work.
Two options seem to be:
1) GPUImage: This is an awesome library, but I'm a little unclear whether I can do what I want easily. The first thing I tried was requesting OpenGL ES compatible pixel buffers using this code:
@{ (NSString *)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA],
   (NSString *)kCVPixelBufferOpenGLESCompatibilityKey : [NSNumber numberWithBool:YES] };
Then transferring data from the CVPixelBufferRef to GPUImageRawDataInput as follows:
// setup:
_foreground = [[GPUImageRawDataInput alloc] initWithBytes:nil size:CGSizeMake(0,0) pixelFormat:GPUPixelFormatBGRA type:GPUPixelTypeUByte];
// call for each frame:
[_foreground updateDataFromBytes:CVPixelBufferGetBaseAddress(foregroundPixelBuffer)
size:CGSizeMake(CVPixelBufferGetWidth(foregroundPixelBuffer), CVPixelBufferGetHeight(foregroundPixelBuffer))];
However, my CPU usage goes from 7% to 27% on an iPhone 5S just with that line (no processing or anything). This suggests there's some copying going on on the CPU, or something else is wrong. Am I missing something?
2) openFrameworks: OF is commonly used for this type of thing, and OF projects can easily be set up to use GLSL. However, two questions remain about this solution: 1. Can I use openFrameworks as a library, or do I have to rejigger my whole app just to use its OpenGL features? I don't see any tutorials or docs that show how I might do this without starting from scratch and creating an OF app. 2. Is it possible to use a CVPixelBufferRef as a texture?
I am targeting iOS 7+.
I was able to get this to work using the GPUImageMovie class. If you look inside this class, you'll see that there's a private method called:
- (void)processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime
This method takes a CVPixelBufferRef as input.
To access this method, declare a class extension that exposes it inside your class:
@interface GPUImageMovie ()
- (void)processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime;
@end
Then initialize the class, set up the filter, and pass it your video frame:
GPUImageMovie *gpuMovie = [[GPUImageMovie alloc] initWithAsset:nil]; // <- call initWithAsset even though there's no asset
// to initialize internal data structures
// connect filters...
// Call the method we exposed
[gpuMovie processMovieFrame:myCVPixelBufferRef withSampleTime:kCMTimeZero];
One thing: you need to request your pixel buffers with kCVPixelFormatType_420YpCbCr8BiPlanarFullRange in order to match what the library expects.
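As an illustration, requesting that format when creating or capturing the pixel buffers might look like this (a sketch; where the attributes are applied depends on your pipeline):
NSDictionary *pixelBufferAttributes = @{
    (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
};
// e.g. pass pixelBufferAttributes to CVPixelBufferCreate, or use an equivalent
// dictionary as the videoSettings of an AVCaptureVideoDataOutput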

When is the data copied to GPU memory?

I have some well known steps in my program:
CreateBuffer
Create..View
CSSet..Views
Dispatch
At which step is the data copied to the GPU?
The reason they down-voted it is that it seems as if you didn't put any effort into a little Google search.
Answer: DirectX usually transfers data from system memory into video memory when the creation methods are called. An example of a creation method is ID3D11Device::CreateBuffer. This method takes a pointer to the memory location where the data is, so it can be copied from system RAM to video RAM. However, if the pointer you pass in is null, the space is simply set aside so you can copy the data into it later.
Example:
If you create a dynamic vertex buffer and you don't pass the data in at first, then you will have to use Map/Unmap to copy the data into video memory.
// Fill in a buffer description.
D3D11_BUFFER_DESC bufferDesc;
bufferDesc.Usage = D3D11_USAGE_DYNAMIC;
bufferDesc.ByteWidth = sizeof(Vertex_dynamic) * m_iHowManyVertices;
bufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
bufferDesc.MiscFlags = 0;
bufferDesc.StructureByteStride = 0;
// Fill in the subresource data.
D3D11_SUBRESOURCE_DATA InitData;
InitData.pSysMem = &_vData[0];
InitData.SysMemPitch = 0;
InitData.SysMemSlicePitch = 0;
// Create the vertex buffer.
/* The data is being copied right now. */
m_pDxDevice->CreateBuffer(&bufferDesc, &InitData, &m_pDxVertexBuffer_PiecePos);
DirectX manages the memory for you and the data is copied to the GPU when it needs to be.

What is the correct way to clear sensitive data from memory in iOS?

I want to clear sensitive data from memory in my iOS app.
In Windows I used to use SecureZeroMemory. Now, on iOS, I use plain old memset, but I'm a little worried the compiler might optimize it away:
https://buildsecurityin.us-cert.gov/bsi/articles/knowledge/coding/771-BSI.html
code snippet:
NSData *someSensitiveData;
memset((void *)someSensitiveData.bytes, 0, someSensitiveData.length);
Paraphrasing 771-BSI (link see OP):
A way to avoid having the memset call optimized out by the compiler is to access the buffer again after the memset call in a way that would force the compiler not to optimize the location. This can be achieved by
*(volatile char*)buffer = *(volatile char*)buffer;
after the memset() call.
In fact, you could write a secure_memset() function
void *secure_memset(void *v, int c, size_t n) {
    volatile char *p = v;
    while (n--) *p++ = c;
    return v;
}
(Code taken from 771-BSI. Thanks to Daniel Trebbien for pointing out a possible defect in the previous code proposal.)
Why does volatile prevent optimization? See https://stackoverflow.com/a/3604588/220060
UPDATE: Please also read Sensitive Data In Memory, because if you have an adversary on your iOS system, you are already more or less screwed even before he tries to read that memory. In summary, SecureZeroMemory() or secure_memset() do not really help.
The problem is that NSData is immutable, so you do not have control over what happens to its buffer. If the buffer is controlled by you, you can use dataWithBytesNoCopy:length: so that NSData acts only as a wrapper; when finished, you can memset your own buffer.
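A minimal sketch of that wrapper idea, combined with the secure_memset above (the buffer size and contents here are purely illustrative):
size_t length = 256;               // hypothetical size of the sensitive payload
void *buffer = malloc(length);
// ... fill buffer with the sensitive data ...

// Wrap the buffer without copying; freeWhenDone:NO keeps ownership (and cleanup) with us
NSData *sensitiveData = [NSData dataWithBytesNoCopy:buffer length:length freeWhenDone:NO];
// ... use sensitiveData ...

secure_memset(buffer, 0, length);  // wipe the bytes ourselves when finished
free(buffer);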

Compressing BitmapData

The situation is this:
I've written a simple MovieClip replacement that converts an existing imported MovieClip to a sequence of BitmapData. This removes the requirement for Flash to render vector data in the MovieClip on each frame.
But BitmapData has a huge memory footprint. I've tried converting the BitmapData to a ByteArray and using the compress() method. This results in a significantly smaller memory footprint, but it has proven impractical: for each redraw, I uncompress() the ByteArray, use setPixels() to blit the data to the screen, and then compress() the frame again. This works, but is terribly slow.
So I was wondering if anybody else has an approach I could try. In Flash, is it possible to compress bitmap data in memory and quickly blit it to the screen?
I wonder how native animated GIFs work in Flash. Does it uncompress them to BitmapData behind the scenes, or is frame decompression done on the fly?
Perhaps there is an Alchemy project that attempts to blit compressed images?
Thanks for any advice you can offer :)
@thienhaflash's response is good but has aged a year, and since then Flash Player and the AIR runtime have expanded their capabilities. Today I stumbled on this little tidbit from Adobe's AS3 guide: as of Player 11.3 there are native image compression techniques available. Here's a snippet:
// Compress a BitmapData object as a JPEG file.
var bitmapData:BitmapData = new BitmapData(640,480,false,0x00FF00);
var byteArray:ByteArray = new ByteArray();
bitmapData.encode(new Rectangle(0,0,640,480), new flash.display.JPEGEncoderOptions(), byteArray);
Not sure about the practicality for blitting but it's nice that it can be done natively.
Regarding memory, you need to think twice before converting a MovieClip to a bitmap sequence. Is it really needed? Can you break things down? Several elements (like the background) are static or just move around, so why not cache a bitmap for each element instead of one big bitmap sequence?
I usually use an AnimatedBitmap (my name for the bitmap-sequence alternative to a MovieClip) only for small animated icons and other computation-heavy effects (like fire / smoke ...). Just break things down as much as you can!
As far as I know, there is no way to compress the memory used by a BitmapData held in memory, and nothing related to Alchemy could help improve memory use in this case.
Animated GIFs won't work natively in Flash; you will need a library for that. Search for the AnimatedGIF AS3 library from bytearray.com; that library just reads the GIF file as a raw ByteArray and converts it to an animated bitmap, much like what you've already done.
This is an old question, but there is recent info on this: Jackson Dunstan has experimented with BitmapData, and it turns out that BitmapData obtained from compressed sources will "deflate" after it goes unused for some time.
Here are the articles: http://jacksondunstan.com/articles/2112, and the two referred to at the beginning of it.
So you could absolutely do something like :
var byteArray:ByteArray = new ByteArray();
myBitmapData.encode(new Rectangle(0,0,640,480), new flash.display.JPEGEncoderOptions(), byteArray);
var loader:Loader = new Loader();
// Loader dispatches Event.COMPLETE through its contentLoaderInfo
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, function(_e:Event):void {
    if (loader.content is Bitmap) {
        myBitmapData.dispose();
        myBitmapData = Bitmap(loader.content).bitmapData;
    }
});
loader.loadBytes(byteArray);
I'm not sure it would work exactly as is, and you'd definitely want to manage your memory more carefully, but the idea is that myBitmapData will be uncompressed when you try to read from it, and then re-compressed when you don't use it for about ten seconds.
