Loading animations with RGBA8888 and RGBA4444 shows no difference in memory usage (cocos2d & iOS)

Platform -> cocos2d, iOS
Step 1: Loading animations from FileName.pvr.ccz (TexturePacker) with ImageFormat="RGBA8888" shows memory usage of 10.0 MB in Xcode Instruments.
Step 2: Loading animations from FileName.pvr.ccz (TexturePacker) with ImageFormat="RGBA4444" also shows memory usage of 10.0 MB in Xcode Instruments.
Question -> Why is there no difference in memory usage when using the lower ImageFormat="RGBA4444" instead of the higher ImageFormat="RGBA8888"?
TexturePacker texture size = 2047 x 1348

The default texture format is RGBA8888, so if you have an RGBA4444 texture you need to change the default format before loading the texture (and perhaps change it back afterwards).
The method that changes the texture format for newly created textures is a class method of CCTexture2D:
+ (void) setDefaultAlphaPixelFormat:(CCTexture2DPixelFormat)format;
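For example, a minimal sketch (assuming a cocos2d 2.x project and a TexturePacker sheet named FileName.plist/FileName.pvr.ccz; adjust the names to your setup) of switching the default format around the load:
// Tell cocos2d to keep newly created textures in RGBA4444 instead of expanding them to RGBA8888.
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA4444];

// Load the TexturePacker atlas; its texture is created with the format set above.
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"FileName.plist"];

// Restore the default so textures loaded later are unaffected.
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA8888];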

I found the warning that explains why the memory size is the same in both formats: http://www.cocos2d-iphone.org/forum/topic/31092.
In CCTexturePVR.m ->
// Not word aligned ?
if( mod != 0 ) {
    NSUInteger neededBytes = (4 - mod ) / (bpp/8);
    printf("\n");
    NSLog(@"cocos2d: WARNING. Current texture size=(%tu,%tu). Convert it to size=(%tu,%tu) in order to save memory", _width, _height, _width + neededBytes, _height );
    NSLog(@"cocos2d: WARNING: File: %@", [path lastPathComponent] );
    NSLog(@"cocos2d: WARNING: For further info visit: http://www.cocos2d-iphone.org/forum/topic/31092");
    printf("\n");
}
This is a cocos2d/iOS quirk that can be handled by adjusting the pvr.ccz size.
The texture dimensions should be divisible by 4 (they do not need to be a power of two). That resolves the issue and gives the expected memory difference between the two formats.
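As a rough illustration (this helper is mine, not from the original answer), padding a dimension up to the next multiple of 4:
// Hypothetical helper: round a texture dimension up to the next multiple of 4
// so the PVR data stays word aligned (2047 -> 2048, 1348 stays 1348).
static NSUInteger PaddedTextureDimension(NSUInteger dim)
{
    NSUInteger mod = dim % 4;
    return (mod == 0) ? dim : dim + (4 - mod);
}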

Related

How do I reduce memory required to view .usdz objects in AR?

I'm playing with ARKit in a Messages extension and I'm able to load and show the sample tv and wheelbarrow files, but I'm getting memory warnings for the tv, and it's really not that big.
Are there any techniques I can use to reduce the memory requirements for using this object file? This is from a subclass of SCNNode.
func loadModel() {
    let bundle = Bundle(for: VirtualObject.self)
    guard let fileURL = bundle.url(forResource: "retrotv", withExtension: "usdz"),
          let modelNode = SCNReferenceNode(url: fileURL)
    else { return }
    modelNode.load()
    modelNode.scale = SCNVector3(0.005, 0.005, 0.005)
    self.addChildNode(modelNode)
    modelLoaded = true
}
The rest of the code comes from Apple's UIKit example.
Sorry for the late answer, but the crucial factor is the texture resolution (not the file size). If you rename the .usdz file to .zip you can actually unzip it and see what is inside: textures plus a .usdc file. The textures here are either RGB (3 B/px) or greyscale (2 B/px), and all of them are 2K (2048 px x 2048 px).
So, for example, on an iPhone X, whose Retina display scales each side of the image 3 times, the device needs (2048 * 3) * (2048 * 3) * 3 = 113 MB of RAM to display a single 2K RGB texture.
This retro TV has four 2K RGB textures and four 2K greyscale textures, so the textures alone come to approximately (4 * 113 MB) + (4 * 75 MB) = 752 MB of RAM.
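A quick sketch of that estimate (the 3x display scale and bytes-per-pixel figures are the answer's assumptions, and MB here means 10^6 bytes):
// Rough per-texture RAM estimate: (side * displayScale)^2 * bytesPerPixel.
static double EstimatedTextureMB(double side, double displayScale, double bytesPerPixel)
{
    double scaled = side * displayScale;
    return (scaled * scaled * bytesPerPixel) / 1000000.0;
}
// EstimatedTextureMB(2048, 3, 3) ~= 113 MB for a 2K RGB texture,
// EstimatedTextureMB(2048, 3, 2) ~= 75 MB for a 2K greyscale texture.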
For more info I highly recommend this WWDC video.

Fastest way on iOS 7+ to get CVPixelBufferRef from BGRA bytes

What is the fastest way on iOS 7+ to convert raw bytes of BGRA / UIImage data to a CVPixelBufferRef? The bytes are 4 bytes per pixel in BGRA order.
Is there any chance of a direct cast here vs. copying data into a secondary storage?
I've considered CVPixelBufferCreateWithBytes but I have a hunch it is copying memory...
You have to use CVPixelBufferCreate because CVPixelBufferCreateWithBytes will not allow fast conversion to an OpenGL texture using the Core Video texture cache. I'm not sure why this is the case, but that's the way things are at least as of iOS 8. I tested this with the profiler, and CVPixelBufferCreateWithBytes causes a texSubImage2D call to be made every time a Core Video texture is accessed from the cache.
CVPixelBufferCreate will do funny things if the width is not a multiple of 16. So if you plan on doing CPU operations on the memory returned by CVPixelBufferGetBaseAddress, and you want it laid out like a CGImage or CGBitmapContext, you will need to pad your width up to a multiple of 16, or make sure you use CVPixelBufferGetRowBytes and pass that to any CGBitmapContext you create.
I tested all combinations of dimensions of width and height from 16 to 2048, and as long as they were padded to the next highest multiple of 16, the memory was laid out properly.
+ (NSInteger) alignmentForPixelBufferDimension:(NSInteger)dim
{
    static const NSInteger modValue = 16;
    NSInteger mod = dim % modValue;
    return (mod == 0 ? dim : (dim + (modValue - mod)));
}

+ (NSDictionary*) pixelBufferSurfaceAttributesOfSize:(CGSize)size
{
    return @{ (NSString*)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
              (NSString*)kCVPixelBufferWidthKey: @(size.width),
              (NSString*)kCVPixelBufferHeightKey: @(size.height),
              (NSString*)kCVPixelBufferBytesPerRowAlignmentKey: @(size.width * 4),
              (NSString*)kCVPixelBufferExtendedPixelsLeftKey: @(0),
              (NSString*)kCVPixelBufferExtendedPixelsRightKey: @(0),
              (NSString*)kCVPixelBufferExtendedPixelsTopKey: @(0),
              (NSString*)kCVPixelBufferExtendedPixelsBottomKey: @(0),
              (NSString*)kCVPixelBufferPlaneAlignmentKey: @(0),
              (NSString*)kCVPixelBufferCGImageCompatibilityKey: @(YES),
              (NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey: @(YES),
              (NSString*)kCVPixelBufferOpenGLESCompatibilityKey: @(YES),
              (NSString*)kCVPixelBufferIOSurfacePropertiesKey: @{ @"IOSurfaceCGBitmapContextCompatibility": @(YES),
                                                                  @"IOSurfaceOpenGLESFBOCompatibility": @(YES),
                                                                  @"IOSurfaceOpenGLESTextureCompatibility": @(YES) } };
}
Interestingly enough, if you ask for a texture from the Core Video cache with dimensions smaller than the padded dimensions, it will return a texture immediately. Somehow underneath it is able to reference the original texture, but with a smaller width and height.
To sum up, you cannot wrap existing memory with a CVPixelBufferRef using CVPixelBufferCreateWithBytes and still use the Core Video texture cache efficiently. You must use CVPixelBufferCreate and copy your bytes into the memory returned by CVPixelBufferGetBaseAddress.
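For illustration, a minimal sketch (the helper name and row-by-row copy are mine, not from the answer) of creating such a buffer with the attributes above and copying raw BGRA bytes into it:
#import <Foundation/Foundation.h>
#import <CoreVideo/CoreVideo.h>

// Hypothetical helper: create a 32BGRA pixel buffer and copy source bytes in,
// honoring the buffer's own bytes-per-row (which may be padded).
static CVPixelBufferRef CreateBGRAPixelBuffer(const uint8_t *srcBytes, size_t width, size_t height,
                                              size_t srcBytesPerRow, NSDictionary *attributes)
{
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                          kCVPixelFormatType_32BGRA,
                                          (__bridge CFDictionaryRef)attributes,
                                          &pixelBuffer);
    if (status != kCVReturnSuccess) {
        return NULL;
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    uint8_t *dst = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t dstBytesPerRow = CVPixelBufferGetRowBytes(pixelBuffer);
    for (size_t row = 0; row < height; row++) {
        // Copy row by row because the destination rows may be wider than the source rows.
        memcpy(dst + row * dstBytesPerRow, srcBytes + row * srcBytesPerRow,
               MIN(srcBytesPerRow, dstBytesPerRow));
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return pixelBuffer; // caller releases with CVPixelBufferRelease
}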

Generating a 54 megapixel image on iPhone 4/4S and iPad 2

I'm currently working on a project that must generate a collage with a resolution of 9000x6000 pixels from 15 photos. The problem I'm facing is that when I finish drawing I get an empty image (those 15 images are not drawn into the context).
This problem is only present on devices with 512MB of RAM like iPhone 4/4S or iPad 2 and I think that this is a problem caused by the system because it cannot allocate enough memory for this app. When I run this line: UIGraphicsBeginImageContextWithOptions(outputSize, opaque, 1.0f); the app's memory usage raises by 216MB and the total memory usage gets to ~240MB RAM.
The thing I cannot understand is why on Earth the images I'm trying to draw within the for loop are not always rendered into the currentContext. I emphasize the word always, because only once in 30 tests were the images rendered (without changing any line of code).
Question nr. 2: If this is a problem caused by the system because it cannot allocate enough memory, is there any other way to generate this image, like a CGContextRef backed by a file output stream, so that it doesn't keep the whole image in memory?
This is the code:
CGSize outputSize = CGSizeMake(9000, 6000);
BOOL opaque = YES;
UIGraphicsBeginImageContextWithOptions(outputSize, opaque, 1.0f);
CGContextRef currentContext = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(currentContext, [UIColor blackColor].CGColor);
CGContextFillRect(currentContext, CGRectMake(0, 0, outputSize.width, outputSize.height));
for (NSUInteger i = 0; i < strongSelf.images.count; i++)
{
    @autoreleasepool
    {
        AGAutoCollageImageData *imageData = (AGAutoCollageImageData *)strongSelf.layout.images[i];
        CGRect destinationRect = CGRectMake(floorf(imageData.destinationRectangle.origin.x * scaleXRatio),
                                            floorf(imageData.destinationRectangle.origin.y * scaleYRatio),
                                            floorf(imageData.destinationRectangle.size.width * scaleXRatio),
                                            floorf(imageData.destinationRectangle.size.height * scaleYRatio));
        CGRect sourceRect = imageData.sourceRectangle;
        // Draw clipped image
        CGImageRef clippedImageRef = CGImageCreateWithImageInRect(((ALAsset *)strongSelf.images[i]).defaultRepresentation.fullScreenImage, sourceRect);
        CGContextDrawImage(currentContext, destinationRect, clippedImageRef);
        CGImageRelease(clippedImageRef);
    }
}
// Pull the image from our context
strongSelf.result = UIGraphicsGetImageFromCurrentImageContext();
// Pop the context
UIGraphicsEndImageContext();
P.S.: The console doesn't show anything but memory warnings, which are expected.
Sounds like a cool project.
Tactic: try also releasing imageData at the end of every loop iteration (explicitly, after releasing the clippedImageRef).
Strategic:
If you do need to support such "low" RAM devices with such "high" input, maybe you should consider two alternative options:
Compress (obviously): even minimal JPEG compression, invisible to the naked eye, can go a long way (see the sketch after this list).
Split: never "really" merge the image. Keep an array-backed data structure that represents the big image, and write utilities for the presentation logic.
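As a rough sketch of the compression option (the output path and 0.8 quality are placeholders, not from the answer), the composed image can be written straight to disk as a JPEG instead of being held as a huge uncompressed UIImage:
// Pull the composed image, compress it, and drop the uncompressed bitmap as soon as possible.
UIImage *collage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

NSData *jpegData = UIImageJPEGRepresentation(collage, 0.8); // mild, visually near-lossless compression
NSString *outputPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"collage.jpg"];
[jpegData writeToFile:outputPath atomically:YES];
collage = nil; // under ARC, this lets the huge uncompressed bitmap be released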

Tesseract has crashed

I am using tesseract on my iOS device and it was working properly until recently it started to crash on me. I have been testing with the same image over and over again and prior to now I had it working about 75 times consecutively. The only thing I can think of is that I deleted the app from my iOS device and then ran it again through Xcode.
I am far from an expert on tesseract and I could really use some advice on what to do next, it would truly be a disappointment for all the hours I put in to go to waste because I cannot read the image anymore. Thank you
This is the crash; it appears to happen in the Tesseract call in this method:
- (BOOL)recognize
{
    int returnCode = _tesseract->Recognize(NULL); // here is where the arrow points on the crash
    return (returnCode == 0) ? YES : NO;
}
This is an old question from Alex G and I don't see any answer.
Did anyone find the root cause and a solution? Please advise. Many thanks.
I assume you are using AVCaptureSession to take photos continuously and passing them to Tesseract after some image processing.
Before passing a UIImage to Tesseract for recognition, you should check it like this:
CGSize size = [image size]; // your image
int width = size.width;
int height = size.height;
if (width < 100 || height < 50) { // the UIImage must have some minimum size
    // Consider it an invalid image
    return;
}
// This condition is not mandatory.
uint32_t *_pixels = (uint32_t *)malloc(width * height * sizeof(uint32_t));
if (!_pixels) {
    // Consider it an invalid image (allocation failed)
    return;
}
// ...use _pixels, and remember to free(_pixels) when done.

iOS GL ES 2 app crashes on device but not on simulator

I have an app with a number of render targets/frame buffers and inside one call to glDrawElements it crashes on device ( iPad iOS 5.0) but not in simulator. This is a very shader intensive app with a dozen different shaders and thousands of vertex buffers.
Further debugging the matter turned me to believe that the crash occurs because of a particular shader, but the shader is valid and so is the frame buffer object that is being written to.
OK, so after tons of time spent debugging, I found out that my depth-of-field shader was causing the crash, particularly this function:
float GetNearFalloff( float Depth, float MinDist, float MaxDist)
{
    float Range = MaxDist - MinDist;
    if (Depth < MinDist)
        return 1.0;
    /*else*/ if (Depth > MaxDist)
        return 0.0;
    float Blur = 1.0 - ( (Depth - MinDist) / Range );
    return Blur;
}
Basically, the commented-out else there was causing my crash; removing it made everything work. I actually put it back (I was thinking it might be something else), only to see the same crash reappear after a couple of shader recompilations, with the same fix: deleting the else.
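When a crash only reproduces on the device and seems tied to one shader, it can help to dump the device compiler's info log, since the simulator uses a different GLSL compiler. A generic sketch (not from the original answer) using standard GL ES 2 calls:
#import <Foundation/Foundation.h>
#import <OpenGLES/ES2/gl.h>

// Compile a shader and print the driver's info log; device-specific compiler
// quirks sometimes show up here even when the shader is nominally valid.
static GLuint CompileShaderWithLog(GLenum type, const GLchar *source)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);

    GLint logLength = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLength);
    if (logLength > 1) {
        GLchar *log = (GLchar *)malloc((size_t)logLength);
        glGetShaderInfoLog(shader, logLength, NULL, log);
        NSLog(@"Shader compile log:\n%s", log);
        free(log);
    }

    GLint status = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status == GL_FALSE) {
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}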
