The situation is this:
I've written a simple MovieClip replacement that converts an existing imported MovieClip to a sequence of BitmapData. This removes the requirement for Flash to render vector data in the MovieClip on each frame.
But BitmapData has a huge memory footprint. I've tried converting the BitmapData to a ByteArray and using the compress() method, which results in a significantly smaller memory footprint, but it has proven impractical: for each redraw, I tried uncompress()'ing the ByteArray, using setPixels() to blit the data to the screen, and then re-compress()'ing the frame. This works, but it is terribly slow.
So I was wondering if anybody else has an approach I could try. In Flash, is it possible to compress bitmap data in memory and quickly blit it to the screen?
I wonder how native animated GIFs work in Flash. Does it uncompress them to BitmapData behind the scenes, or is frame decompression done on the fly?
Perhaps there is an Alchemy project that attempts to blit compressed images?
Thanks for any advice you can offer :)
@thienhaflash's response is good, but it has aged a year, and since then Flash Player and the AIR runtime have expanded their capabilities. Today I stumbled on this little tidbit from Adobe's AS3 guide: as of Player 11.3 there are native image compression techniques available. Here's a snippet:
// Compress a BitmapData object as a JPEG file.
var bitmapData:BitmapData = new BitmapData(640,480,false,0x00FF00);
var byteArray:ByteArray = new ByteArray();
bitmapData.encode(new Rectangle(0,0,640,480), new flash.display.JPEGEncoderOptions(), byteArray);
Not sure about the practicality for blitting but it's nice that it can be done natively.
To conserve memory, you need to think twice before converting a MovieClip to a bitmap sequence. Is it really necessary? Can you break things down? If several elements (like the background) are static or just moving around, why not cache a bitmap for each element instead of one big bitmap sequence?
I usually use an AnimatedBitmap (my name for a bitmap-sequence alternative to a MovieClip) only for small animated icons and other computation-heavy effects (like fire or smoke). Just break things down as much as you can!
As far as I know, there is no way to compress the memory used by a BitmapData once it is in memory, and nothing related to Alchemy could help reduce memory usage in this case.
Animated GIFs don't work natively in Flash; you will need a library for that. Search for the AnimatedGIF AS3 library from bytearray.com; the library just reads the GIF file as a raw ByteArray and converts it to an animated bitmap, much like what you've done.
This is an old question, but there is recent info on it: Jackson Dunstan has experimented with BitmapData, and it turns out that BitmapData obtained from compressed sources will "deflate" after going unused for some time.
Here are the articles: http://jacksondunstan.com/articles/2112, plus the two referenced at the beginning of it.
So you could absolutely do something like:
// Re-encode the BitmapData as JPEG bytes, then load those bytes back in;
// the resulting BitmapData is backed by a compressed source.
var byteArray:ByteArray = new ByteArray();
myBitmapData.encode(new Rectangle(0, 0, 640, 480), new flash.display.JPEGEncoderOptions(), byteArray);

var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, function(_e:Event):void {
    if (loader.content is Bitmap) {
        myBitmapData.dispose();
        myBitmapData = Bitmap(loader.content).bitmapData;
    }
});
loader.loadBytes(byteArray);
I'm not sure it would work as-is, and you definitely want to handle your memory better, but now myBitmapData will be decompressed when you try to read from it and re-compressed once it has gone unused for about ten seconds.
Related
iOS version: 13.1
iPhone: X
I'm currently using DBAttachmentPickerController to choose from a variety of images. The problem comes when I take a picture directly from the camera and try to upload it to our server: the SDImageWebPCoder.shared.encodedData call takes about 30 seconds, more or less. The same image in the Android app takes about 2-3 seconds.
Here is the code I use:
let attachmentPickerController = DBAttachmentPickerController(finishPicking: { attachmentArray in
    self.images = attachmentArray
    var currrentImage = UIImage()
    self.images[0].loadOriginalImage(completion: { image in
        self.userImage.image = image
        currrentImage = image!
    })
    // We transform it to webP
    let webpData = SDImageWebPCoder.shared.encodedData(with: currrentImage, format: .webP, options: nil)
    self.api.editImageUser(data: webpData!)
}, cancel: nil)

attachmentPickerController.mediaType = DBAttachmentMediaType.image
attachmentPickerController.allowsSelectionFromOtherApps = true
attachmentPickerController.present(on: self)
Should I change the Pod I'm using? Should I just compress it? Or am I doing something wrong?
WebP encoding is relatively slow: it uses software encoding and the VP8 compression algorithm (which is complicated), compared to the hardware-accelerated JPEG/PNG encoding on Apple's SoCs.
picture directly from the camera
The original image taken with the iPhone camera may be really large, like 4K resolution. If you don't pre-scale it and try to encode it as-is, it may take much more time.
The suggestions are as follows:
Try options such as compressionQuality; a higher value costs more time but compresses more. By default it's 1.0, which is the highest and the most time-consuming.
Try to pre-scale the original image. For images from the Photos Library you can always use the API to control the size, or you can use SDWebImage's transform method, like - [UIImage sd_resizedImage:].
Do all the encoding on a background thread and never block the main thread (a rough sketch combining these ideas follows after this list).
If none of these is suitable, the better solution is to use the JPEG or PNG format instead of WebP, and then transcode the JPEG/PNG to WebP in your image server's code. Server-side processing is always the best idea for this kind of thing.
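Here is a minimal, untested sketch combining the pre-scaling, quality, and background-thread suggestions, assuming SDWebImage and SDImageWebPCoder are in the project; the 1280-point target size, the 0.8 quality, and the upload callback are placeholders for your own values and API call:

import UIKit
import SDWebImage
import SDWebImageWebPCoder

func encodeAndUpload(_ original: UIImage, upload: @escaping (Data) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        // Pre-scale so the encoder works on far fewer pixels (assumed 1280-point max edge).
        let target = CGSize(width: 1280, height: 1280)
        let scaled = original.sd_resizedImage(with: target, scaleMode: .aspectFit) ?? original

        // A lower compressionQuality encodes faster (see the suggestions above).
        let options: [SDImageCoderOption: Any] = [.encodeCompressionQuality: 0.8]
        guard let webpData = SDImageWebPCoder.shared.encodedData(with: scaled, format: .webP, options: options) else { return }

        DispatchQueue.main.async {
            upload(webpData) // e.g. self.api.editImageUser(data: webpData)
        }
    }
}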
If you're interested in a real benchmark comparing JPEG/PNG (hardware) and WebP (software), you can try my benchmark demo code here to help you make your decision:
https://github.com/dreampiggy/ModernImageFormatBenchmark
Using this class I am trying to load a GIF URL into a UIImageView.
The thing is, for some URLs it takes 10 seconds to load, for others 2 seconds.
I have tried almost everything, but the process is still too slow. One second would be good, but I have never succeeded in getting there.
I have also tried UIWebView, which had its own issues.
Here is the code:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
    let fileUrl = NSURL(string: "http://45.media.tumblr.com/6785bae27b8f888fe825f0ade95796a3/tumblr_noenkbeTSw1qjmwryo1_500.gif")
    let gif = UIImage.animatedImageWithAnimatedGIFURL(fileUrl!)
    dispatch_async(dispatch_get_main_queue()) {
        self.player.image = gif
    }
}
The problem with most of the GIF reading tools I have looked at is that they read all the data in at load time, allocate memory for all of the decoded frames, and hold all that uncompressed data in memory at the same time. This leads to runtime performance problems, and it will crash your app (and possibly your device) on large or long GIFs.
On the issue of loading time, there is not much you can do, since the data does need to be downloaded and read. You are also just assuming that the network cache is going to handle hitting the same GIF over and over without going to the network again, which may or may not work well for you. For a solution that addresses these issues, see this SO question, or take a look at the Flipboard solution here.
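For reference, here is a small sketch (my own, not taken from either linked answer) of what the Flipboard approach looks like with FLAnimatedImage, which keeps the compressed GIF data around and decodes frames on demand instead of expanding every frame up front; the URL is the one from the question, and error handling is omitted:

import UIKit
import FLAnimatedImage

let gifURL = URL(string: "http://45.media.tumblr.com/6785bae27b8f888fe825f0ade95796a3/tumblr_noenkbeTSw1qjmwryo1_500.gif")!
let imageView = FLAnimatedImageView()

// Download off the main thread, then hand the raw GIF data to FLAnimatedImage;
// frames are decoded lazily as the view displays them.
URLSession.shared.dataTask(with: gifURL) { data, _, _ in
    guard let data = data else { return }
    let animated = FLAnimatedImage(animatedGIFData: data)
    DispatchQueue.main.async {
        imageView.animatedImage = animated
    }
}.resume()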
I have one (or possibly two) CVPixelBufferRef objects I am processing on the CPU and then placing the results onto a final CVPixelBufferRef. I would like to do this processing on the GPU using GLSL instead, because the CPU can barely keep up (these are frames of live video). I know this is possible "directly" (i.e., writing my own OpenGL code), but from the (absolutely impenetrable) sample code I've looked at, it's an insane amount of work.
Two options seem to be:
1) GPUImage: This is an awesome library, but I'm a little unclear if I can do what I want easily. First thing I tried was requesting OpenGLES compatible pixel buffers using this code:
@{ (NSString *)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA],
   (NSString *)kCVPixelBufferOpenGLESCompatibilityKey : [NSNumber numberWithBool:YES] };
Then transferring data from the CVPixelBufferRef to GPUImageRawDataInput as follows:
// setup:
_foreground = [[GPUImageRawDataInput alloc] initWithBytes:nil size:CGSizeMake(0, 0) pixelFormat:GPUPixelFormatBGRA type:GPUPixelTypeUByte];

// call for each frame:
[_foreground updateDataFromBytes:CVPixelBufferGetBaseAddress(foregroundPixelBuffer)
                            size:CGSizeMake(CVPixelBufferGetWidth(foregroundPixelBuffer), CVPixelBufferGetHeight(foregroundPixelBuffer))];
However, my CPU usage goes from 7% to 27% on an iPhone 5S just with that line (no processing or anything). This suggests there's some copying going on on the CPU, or something else is wrong. Am I missing something?
2) OpenFrameworks: OF is commonly used for this type of thing, and OF projects can easily be set up to use GLSL. However, two questions remain about this solution: 1. Can I use openFrameworks as a library, or do I have to rejigger my whole app just to use its OpenGL features? I don't see any tutorials or docs that show how I might do this without starting from scratch and creating an OF app. 2. Is it possible to use a CVPixelBufferRef as a texture?
I am targeting iOS 7+.
I was able to get this to work using the GPUImageMovie class. If you look inside this class, you'll see that there's a private method called:
- (void)processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime
This method takes a CVPixelBufferRef as input.
To access this method, declare a class extension that exposes it inside your class:
@interface GPUImageMovie ()
- (void)processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime;
@end
Then initialize the class, set up the filter, and pass it your video frame:
GPUImageMovie *gpuMovie = [[GPUImageMovie alloc] initWithAsset:nil]; // <- call initWithAsset even though there's no asset
// to initialize internal data structures
// connect filters...
// Call the method we exposed
[gpuMovie processMovieFrame:myCVPixelBufferRef withSampleTime:kCMTimeZero];
One thing: you need to request your pixel buffers with kCVPixelFormatType_420YpCbCr8BiPlanarFullRange in order to match what the library expects.
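For example, if your pixel buffers come from an AVCaptureVideoDataOutput, the request could look something like this (my own sketch, shown in Swift for brevity; if you create the buffers yourself, pass the same constant in the pixel buffer attributes instead):

import AVFoundation

// Ask the capture output for 420f (bi-planar full range) buffers so they match
// the YUV format that GPUImageMovie's processMovieFrame: path expects.
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.videoSettings = [
    kCVPixelBufferPixelFormatTypeKey as String:
        kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
]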
Unfortunately, there appears to be no built-in method on iOS to extract 32-bit RGBA data from a PNG file without losing the alpha channel reference. Therefore, some people have been using libpng to extract their OpenGL textures. However, all the examples require the PNG file to be loaded from a file. Assuming these textures are imported over a network connection, they would have to be saved to files from NSData and then read back. What is the best way to extract raw PNG data into raw OpenGL RGBA texture data?
Ended up writing a category which solves this problem using the customization capabilities of libpng. Posted a gist here: https://gist.github.com/joshcodes/5681512
Hopefully this helps someone else who needs to know how this is done. The essential part is creating a method
void user_read_data(png_structp png_ptr, png_bytep data, png_size_t length)
{
    void *nsDataPtr = png_get_io_ptr(png_ptr);
    ReadStream *readStream = (ReadStream *)nsDataPtr;
    memcpy(data, readStream->source + readStream->index, length);
    readStream->index += length;
}
and using
// init png reading
png_set_read_fn(png_ptr, &readStream, user_read_data);
as a custom read method.
While reading the specification at Khronos, I found:
bufferData(ulong target, Object data, ulong usage)
'usage' parameter can be: STREAM_DRAW, STATIC_DRAW or DYNAMIC_DRAW
My question is, which one should I use?
What are the advantages, what are the differences?
Why would I choose to use something other than STATIC_DRAW?
Thanks.
For 'desktop' OpenGL, there is a good explanation here:
http://www.opengl.org/wiki/Buffer_Object
Basically, the usage parameter is a hint to OpenGL/WebGL about how you intend to use the buffer. OpenGL/WebGL can then optimize the buffer depending on your hint.
The OpenGL ES docs give the following descriptions, which are not exactly the same as for desktop OpenGL (remember that WebGL is derived from OpenGL ES):
STREAM
The data store contents will be modified once and used at most a few times.
STATIC
The data store contents will be modified once and used many times.
DYNAMIC
The data store contents will be modified repeatedly and used many times.
The nature of access must be:
DRAW
The data store contents are modified by the application, and used as the source for GL drawing and image specification commands.
The most common usage is STATIC_DRAW (for static geometry), but I recently created a small particle system where DYNAMIC_DRAW makes more sense (the particles are stored in a single buffer, and parts of the buffer are updated when particles are emitted).
http://jsfiddle.net/mortennobel/YHMQZ/
Code snippet:
function createVertexBufferObject() {
    particleBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, particleBuffer);
    var vertices = new Float32Array(vertexBufferSize * particleSize);
    gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.DYNAMIC_DRAW);
    bindAttributes();
}

function emitParticle(x, y, velocityX, velocityY) {
    gl.bindBuffer(gl.ARRAY_BUFFER, particleBuffer);
    // ...
    gl.bufferSubData(gl.ARRAY_BUFFER, particleId * particleSize * sizeOfFloat, data);
    particleId = (particleId + 1) % vertexBufferSize;
}