Reducing Flash file size in KB? - ios

I have a Flash file that I need to reduce the size of.
The reason that I need to reduce its size is that I will need to convert this into an iPhone app.
Currently it only has 2 buttons and 2 TLF text fields on scene one, layer one, and the file size is 355 KB.
I have also placed the code on layer 2.
Is there any way to reduce its size so I won't have problems when publishing and submitting it to the App Store?
Thanks

The biggest portion of that file size will be related to TLF. TLF (Text Layout Framework) is huge and is generally not recommended on mobile (it has fairly high CPU usage).
If you're not using any TLF-specific features, it would be wise to change your text fields to use classic text instead (DF3).
Beyond TLF, make sure you're using vector objects instead of bitmaps wherever you can, as that will drastically reduce file size. If you are using bitmaps, you can play around with the compression settings to optimize file size further. You can do this globally in the Publish Settings (JPEG Quality) or individually in each graphic's Properties panel.
One note on vector graphics and mobile: simple vectors will run OK, but complex vectors will run terribly. Make sure to set cacheAsBitmap = true; on any complex (or even all) vectors to improve performance. Or, in Flash Pro, click on a movie clip, go to the "Display" twirl-down in the Properties panel, and set Cache as Bitmap in the Render setting.
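As a rough guide to the memory side of that cacheAsBitmap trade-off (the cached surface is stored as an uncompressed 32-bit bitmap, roughly 4 bytes per pixel), here is a back-of-the-envelope sketch in Python; the sizes are hypothetical:

```python
def cached_bitmap_bytes(width_px, height_px, bytes_per_pixel=4):
    """Approximate memory used when a vector clip is cached as a bitmap
    (32-bit RGBA surface, i.e. 4 bytes per pixel)."""
    return width_px * height_px * bytes_per_pixel

# A hypothetical full-screen clip on a 640x960 retina display:
print(cached_bitmap_bytes(640, 960))                    # -> 2457600 bytes
print(cached_bitmap_bytes(640, 960) / 1024 / 1024)      # -> 2.34375 (MB)
```

So caching trades CPU time for a couple of megabytes per full-screen clip, which is why it pays off for complex vectors but is wasteful on trivial ones.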

Related

For what design reason is TCanvas.StretchDraw not working as expected for TIcon?

Recently I've discovered that TCanvas.StretchDraw does not work as expected for an object which is a TIcon instance (a quick look at the TIcon.Draw and DrawIconEx methods tells why). The Delphi help acknowledges that fact. I know the workaround, but I do not know the reason behind such a design decision in the VCL. Does anyone know why they decided to leave TIcon untouched in this matter?
Icons are not regular bitmaps. This is mostly due to historical design and technical reasons.
It did make sense back when icons were small (32x32 pixels, 16 colors; good old days!) that icons were never to be stretched on screen.
But there is also a "common sense" technical reason. Such small bitmaps are usually very hard to resize well by an algorithm (the default GDI stretch algorithm is very fast but also produces very poor results compared to other interpolation modes, e.g. those available with GDI+), so it was decided to embed a set of icons within the executable, as resources: one icon per size. The stretch process benefits from being done at design time, at the pixel level, by an icon designer. And, back in those days, it was also much less resource-consuming to use dedicated icons with a reduced color palette.
Since you are supposed to have a set of icons with a pre-defined size for each, you do not need StretchDraw; just select the right icon to display.
So if you want to display an icon at a given size, ensure you pick the right size, or get the biggest icon and upsize it using a temporary bitmap, or DrawIconEx(). Or, even better, do not use icons at all, but a bitmap, or a vector drawing if you expect huge picture sizes.
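The "select the right icon" step boils down to a simple size-matching rule. This is an illustrative Python sketch, not VCL code, and the size list is hypothetical:

```python
def pick_icon_size(available, wanted):
    """Pick the best icon from a set of pre-rendered sizes: the smallest
    available size >= wanted (downscaling small amounts looks acceptable),
    else fall back to the biggest one and upsize it as a last resort."""
    larger = [s for s in sorted(available) if s >= wanted]
    return larger[0] if larger else max(available)

# Hypothetical icon resource sizes embedded in an executable:
sizes = [16, 32, 48, 256]
print(pick_icon_size(sizes, 24))    # -> 32
print(pick_icon_size(sizes, 48))    # -> 48
print(pick_icon_size(sizes, 512))   # -> 256 (no bigger icon available)
```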

OpenGL-ES iOS DrawText on screen drops FPS dramatically

Our team has developed an OpenGL application which draws different polygons on the screen. Additionally, we want to create about 1000 different strings to print on the screen. If we do this with the Texture2D class, the FPS drops below 3.
I've already tested bitmap fonts, which didn't improve the performance.
What is the best way in OpenGL on iOS to draw a lot of text without losing performance and without losing quality (the text should be scalable)?
Allocating 1000 textures takes up a huge amount of memory and will slow down your app, especially if they are at a high enough resolution for readable text. You should generate these textures as they are needed and free them once they are no longer being displayed. Make sure that you aren't generating and freeing textures each frame, but only as needed.
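The create-on-first-use, free-when-no-longer-displayed pattern described above can be sketched language-agnostically. This Python sketch is illustrative only; the create/destroy callbacks stand in for the real GL texture allocation and deletion calls:

```python
class TextureCache:
    """Demand-loaded, reference-counted cache: a texture is created on
    first acquire and freed once nothing displays it anymore (so textures
    are never created or freed per frame, only per use)."""

    def __init__(self, create, destroy):
        self._create, self._destroy = create, destroy
        self._entries = {}  # key -> [texture, refcount]

    def acquire(self, key):
        entry = self._entries.get(key)
        if entry is None:
            entry = self._entries[key] = [self._create(key), 0]
        entry[1] += 1
        return entry[0]

    def release(self, key):
        entry = self._entries[key]
        entry[1] -= 1
        if entry[1] == 0:            # last user gone: free the GPU memory
            self._destroy(entry[0])
            del self._entries[key]

# Usage with stand-in callbacks:
freed = []
cache = TextureCache(create=lambda key: "tex:" + key, destroy=freed.append)
a1 = cache.acquire("label_1")
a2 = cache.acquire("label_1")   # same texture, no second allocation
cache.release("label_1")
cache.release("label_1")        # now actually freed
print(freed)                    # -> ['tex:label_1']
```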
If you are drawing all 1000 strings in the same scene, you should combine as many as you can into shared textures. This will let you leverage Cocos2D's TrueType rendering system to keep the text high-quality. If, on the other hand, that is not an option and all 1000 strings need to be distinct from each other, consider building a font rendering system that renders each character as a glyph image. This reduces the number of textures from 1000 to about 100 for all standard English characters and punctuation. I had to do something similar for a video game with lots of dynamic text in an OpenGL environment, and got good performance out of it. However, I don't recommend it unless it's absolutely necessary, since it limits your text to the glyphs you define and you have to program the formatting yourself.
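To see why per-glyph textures pay off, here is a small Python sketch of the counting argument; the string set is made up for illustration:

```python
def glyphs_needed(strings):
    """Distinct glyphs required to render every string; each glyph gets
    one small texture instead of one big texture per whole string."""
    return set("".join(strings))

# 1000 distinct score labels, but only a handful of distinct characters:
strings = ["SCORE %d" % i for i in range(1000)]
print(len(strings), "string textures vs",
      len(glyphs_needed(strings)), "glyph textures")
# -> 1000 string textures vs 16 glyph textures
```

With dynamic text the glyph count stays bounded (roughly the alphabet plus digits and punctuation) no matter how many strings you compose from it.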

Large images with Direct2D

Currently I am developing an application for the Windows Store which does real-time image processing using Direct2D. It must support various sizes of images. The first problem I have faced is how to handle situations where the image is larger than the maximum supported texture size. After some research and documentation reading I found VirtualSurfaceImageSource as a solution. The idea was to load the image as an IWICBitmap, then to create a render target with CreateWICBitmapRenderTarget (which as far as I know is not hardware accelerated). After some drawing operations I wanted to display the result on screen by invalidating the corresponding region in the VirtualSurfaceImageSource, or when the NeedUpdate callback fires. I supposed it would be possible to do this by creating an ID2D1Bitmap (hardware accelerated) and calling CopyFromRenderTarget with the render target created by CreateWICBitmapRenderTarget and the invalidated region as bounds, but the method returns D2DERR_WRONG_RESOURCE_DOMAIN as a result. Another reason for using IWICBitmap is that one of the algorithms involved in the application must have access to update the pixels of the image.
The question is: why doesn't this logic work? Is this the right way to achieve my goal with Direct2D? Also, since the render target created with CreateWICBitmapRenderTarget is not hardware accelerated, if I want to do my image processing on the GPU with images larger than the maximum allowed texture size, what is the best solution?
Thank you in advance.
You are correct that images larger than the texture limit must be handled in software.
However, the question to ask is whether or not you need that entire image every time you render.
You can use hardware acceleration to render a portion of the large image that is loaded in a software target.
For example,
Use ID2D1RenderTarget::CreateSharedBitmap to make a bitmap that can be used by different resources.
Then create a ID2D1BitmapRenderTarget and render the large bitmap into that. (making sure to do BeginDraw, Clear, DrawBitmap, EndDraw). Both the bitmap and the render target can be cached for use by successive calls.
Then copy from that render target into a regular ID2D1Bitmap with the portion that will fit into the texture memory using the ID2D1Bitmap::CopyFromRenderTarget method.
Finally, draw that to the real render target with pRT->DrawBitmap().
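The "portion that will fit into texture memory" step amounts to tiling the large image. Here is a Python sketch of the tile-rectangle arithmetic only (the 4096 limit is an example; the actual limit varies per GPU and is queried from the device):

```python
def tile_rects(img_w, img_h, max_tex=4096):
    """Split an image larger than the GPU texture limit into (left, top,
    right, bottom) tiles that each fit into texture memory; each tile can
    then be copied into a hardware bitmap and drawn on its own."""
    rects = []
    for y in range(0, img_h, max_tex):
        for x in range(0, img_w, max_tex):
            rects.append((x, y, min(x + max_tex, img_w), min(y + max_tex, img_h)))
    return rects

# A hypothetical 10000x6000 image with a 4096 limit needs a 3x2 grid:
print(len(tile_rects(10000, 6000)))   # -> 6
print(tile_rects(10000, 6000)[0])     # -> (0, 0, 4096, 4096)
```

Only the tiles intersecting the invalidated region need to be copied and drawn on a given update, which is exactly what VirtualSurfaceImageSource's region callbacks are designed around.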

What kind of images should go into a spritesheet for a cocos2d ios app?

I apologize upfront for the long question, with multiple subquestions, but the question is really as stated in the title. All that follows is a detailed breakup of different aspects of the question.
In my universal ios game built using cocos2d, I have four categories of images - I want to determine which of them should go into spritesheets and which are better off loaded as individual images. My current guess is that only character animations that run throughout the game provide value being loaded in memory as spritesheets:
1. Character animations that run throughout the game play (except when the user is in menus): for these, I assume that having the images in a spritesheet will reduce runtime memory usage (due to the padding of individual files to power-of-two byte boundaries), hence they are candidates for a spritesheet. Is that correct?
2. Small images (about 200 of them) of which 0 to 4 are displayed at any time, picked at random, during game play. For these, I am not sure if it is worth having all 200 images loaded in memory when at most 4 are used at a time. So it may be better to directly access them as images. Is that correct?
3. A few (about 20) small menu elements like buttons that are used only in static menus: since the menu items are used only during menu display, I assume they are not of much value in improving memory access via spritesheets. Is that correct?
4. A few large images that are used as backgrounds for the menus, the game play scene, etc. Most of these images are as large as the screen resolution. Since the screen resolution is roughly equal to the maximum size of a single image (for example, for iPad retina, a 4096 x 4096 maximum image size versus a screen size of 2048 x 1536), there isn't much gain in using spritesheets, as at most 1 or 2 large images would fit in one spritesheet. Also, since each of these large files is used only in one scene, loading all these large images as spritesheets in memory upfront seems like an unnecessary overhead. Hence, directly access them as images. Is that correct?
A couple of common related questions:
a) Packing images used across different scenes into the same spritesheet makes us load them into memory even when only a subset of the images is used in that scene. I assume that is a bad idea. Correct?
b) There is a stutter in the game play only on older devices (iPad 1 and iPhone 3gs). Will spritesheets help reduce such stutter?
c) I assume that spritesheets are only going to benefit the runtime memory footprint, and not so much the on-disk size of the app archive. I noticed, for instance, that a set of files which are 11.8 MB in size come to 11 MB when put in a spritesheet, not much of a compression benefit. Is that a valid assumption?
Please let me know your thoughts on my rationale above.
Thanks
Anand
Rule of thumb: everything goes in a spritesheet (texture atlas), unless there's a good reason not to.
Definitely texture atlas.
Cocos2d's caching mechanism will cause all individual images to be cached and remain in memory, so eventually they'll use more memory than if they were in a texture atlas. I would only use single images if they're rarely needed (i.e. menu icons), and I would also remove them from CCTextureCache just after they've been added to the scene, to make sure they are removed from memory when leaving that menu screen.
Your assumption may be wrong. A 320x480 image uses 512x512 as a texture in memory. If you have multiple of them, you may be better off having them all in a single texture atlas, unless you have enabled NPOT support in cocos2d. Again, don't forget that CCTextureCache caches the textures.
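The power-of-two padding arithmetic behind that 320x480 to 512x512 claim can be sketched as follows (Python for illustration; 4 bytes per pixel assumes an RGBA8888 texture format):

```python
def next_pow2(n):
    """Smallest power of two >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def texture_bytes(w, h, bpp=4, npot=False):
    """Memory a texture occupies; without NPOT support, each dimension
    is padded up to the next power of two."""
    if not npot:
        w, h = next_pow2(w), next_pow2(h)
    return w * h * bpp

print(texture_bytes(320, 480))             # -> 1048576 (512*512*4)
print(texture_bytes(320, 480, npot=True))  # -> 614400  (320*480*4)
```

So a single 320x480 image wastes about 40% of its texture memory to padding, which a tightly packed atlas avoids.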
Keep in mind that large textures benefit a lot from spritebatching. So even if there may only be 2-3 images in the texture atlas, it can make a difference in performance.
a) It really depends. If your app is running low on memory, you might want to have separate texture atlases. Otherwise it would be wasteful not to pack the images into a single texture atlas (if possible).
b) It depends on what's causing the stutter. Most common issues: loading resources (i.e. images) during gameplay, or creating and removing many nodes every frame instead of re-using existing ones.
c) Probably. It may depend on the texture atlas tool (I recommend TexturePacker). It definitely depends on the file format. If you can pack multiple PNG into a single .pvr.ccz texture atlas, you'll save a lot of memory (and gain loading speed).
For more info refer to my cocos2d memory optimization blog post.

Multilingual Unicode rendering in opengl

I have to extend an OpenGL-Rendering system to support international characters (especially Hebrew, Arabic and cyrillic).
The development platform is Windows (XP|Vista|7), alas using Embarcadero Delphi 2010.
I currently use wglOutLineFont(...) to build my font's display lists and glCallLists(length(m_Text), GL_UNSIGNED_SHORT, PWchar(m_Text)) to render my strings.
While this is feasible for Latin-1 characters, building the full Unicode character set in advance is pretty time-consuming (about 8.5 minutes on my machine), so I am looking for a more efficient solution. I thought about limiting the range to u+0020 - u+077f (Latin, Greek, Cyrillic, Arabic and Hebrew) to include just the glyphs I need, but that would just be a solution for my current needs, and would become insufficient once other encodings are needed.
On the upside, I do not have to worry about left-to-right or right-to-left direction, as our application can handle this already.
I would expect this to be a well-known problem, so I would like to ask if there is any reference material on this on the web, or if you could share some insight on this?
Edit
A clarification: I use a polygonal font representation. Each font is constructed at unit size (1.0) in advance and scaled appropriately using glScalef(...) before rendering. I decided against pre-rasterizing since users might zoom in quite closely (the application is used for CAD), so rasterization artifacts would become visible.
Additionally, since a scene seldom exceeds a few hundred characters (mainly labels and measurements), the speed gain from pre-rasterization is negligible.
Don't pre-build the display lists: create an intermediate sprite that builds the lists on demand and caches them. Trying to pre-compute lists, or pre-generate rasterized textures at every font size and face and for all characters, is impractical, especially when you scale to Far Eastern character sets.
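The build-on-demand cache suggested above is plain memoization. A Python sketch for illustration, with `ord` standing in for the real per-glyph display list compilation:

```python
class GlyphListCache:
    """Build display lists on demand and cache them, instead of
    pre-building the whole Unicode range up front."""

    def __init__(self, build_list):
        self._build = build_list   # e.g. wraps the per-glyph outline build
        self._lists = {}
        self.builds = 0            # how many lists were actually compiled

    def get(self, ch):
        if ch not in self._lists:
            self._lists[ch] = self._build(ch)
            self.builds += 1
        return self._lists[ch]

cache = GlyphListCache(build_list=ord)  # stand-in for real list compilation
ids = [cache.get(c) for c in "ABBA"]
print(ids, cache.builds)  # -> [65, 66, 66, 65] 2
```

Only the glyphs a scene actually uses ever get built, so the 8.5-minute up-front cost disappears; the first appearance of a new glyph pays a one-time build cost.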
You need to replace the wglOutLineFont.
To do that, generate/render the required glyphs to a texture using wglOutLineFont, and then save the texture into a raster image file. When the application loads, it needs to load the texture image and the glyph texture coordinates (4 coordinates for each glyph), and generate the display lists (one list for each glyph; each display list shall draw a single glyph as a textured quad).
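Those 4 texture coordinates per glyph can be computed from the glyph's position in the atlas. A Python sketch assuming a uniform 16x16 grid layout, which is an illustrative choice (real atlases often pack variable-width glyphs):

```python
def glyph_texcoords(index, cols=16, rows=16):
    """UV corners of glyph number `index` in a cols x rows grid atlas,
    in the order bottom-left, bottom-right, top-right, top-left of the
    cell; one textured quad per glyph uses exactly these 4 coords."""
    col, row = index % cols, index // cols
    u0, v0 = col / cols, row / rows
    u1, v1 = (col + 1) / cols, (row + 1) / rows
    return [(u0, v0), (u1, v0), (u1, v1), (u0, v1)]

# Glyph at index 65 ('A') in a 16x16 ASCII-ordered atlas:
print(glyph_texcoords(65))
# -> [(0.0625, 0.25), (0.125, 0.25), (0.125, 0.3125), (0.0625, 0.3125)]
```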
Each short value representing a glyph shall have a corresponding display list (their values must match, and glListBase can aid in this).
I suppose loading a texture is faster than generating font display lists at runtime; practically, you move the glyph raster computation offline. But the display list generation can still be heavy (many glyphs). Indeed, you can run the display list generation in a separate thread, or generate only the display lists required by your needs.
I've had good luck transliterating this tutorial into C++, though I'm not sure how well it will transfer to Delphi.