I am constantly updating some of the scene's textures with new images.
The problem is that the upload is synchronous and texImage2D takes ~100 ms. It takes that long even if the texture is not used while rendering the next frame, or even if rendering is switched off.
I am wondering, is there any way to upload texture data asynchronously?
Additional conditions:
I should mention that there is an old texture which can stay active until the upload of the new one to the GPU has finished.
A solution is to use texSubImage2D and upload the image to the GPU in small portions. Once the upload has finished, activate the new texture and delete the old one.
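A minimal sketch of that idea, assuming the source is an HTMLImageElement that gets sliced into row bands via a 2D canvas; the function name, rowsPerChunk, and the RGBA/UNSIGNED_BYTE format are illustrative:

// Allocate the full-size texture once, then stream a band of rows into it per frame.
function createChunkedUploader(gl, image, rowsPerChunk) {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);  // needed for NPOT sizes in WebGL 1
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  // Reserve storage without uploading any pixel data yet.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, image.width, image.height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);

  // Draw the source image into a canvas so rows can be read back as ImageData.
  const canvas = document.createElement('canvas');
  canvas.width = image.width;
  canvas.height = image.height;
  const ctx = canvas.getContext('2d');
  ctx.drawImage(image, 0, 0);

  let y = 0;
  return {
    texture,
    // Call once per frame (e.g. from requestAnimationFrame); returns true when the upload is complete.
    uploadNextChunk() {
      if (y >= image.height) return true;
      const rows = Math.min(rowsPerChunk, image.height - y);
      const band = ctx.getImageData(0, y, image.width, rows);
      gl.bindTexture(gl.TEXTURE_2D, texture);
      gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, y, gl.RGBA, gl.UNSIGNED_BYTE, band);
      y += rows;
      return y >= image.height;
    }
  };
}

Keep rendering with the old texture until uploadNextChunk() reports completion, then swap the textures and delete the old one.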
is there any way to upload texture data asynchronously?
No, not in WebGL 1.0. There might be in WebGL 2.0, but that's not out yet.
Some things you might try:
make it smaller
What are you uploading? Video? Can you make it smaller?
Have you tried different formats?
WebGL converts from whatever format the image is stored in to the format you request. For example, if you load a .JPG the browser might keep it as an RGB image. If you then upload it with gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, the browser has to convert the image to RGBA before uploading (more time).
Do you have UNPACK_FLIP_Y set to true?
If so WebGL has to flip your image before uploading it.
Do you have UNPACK_COLORSPACE_CONVERSION_WEBGL set to BROWSER_DEFAULT_WEBGL?
If not, WebGL may have to re-decompress your image.
Do you have UNPACK_PREMULTIPLY_ALPHA_WEBGL set to false or true?
Depending on how the browser normally stores images, it might have to convert the image to the format you're requesting (see the sketch after this list).
Images have to be decompressed
Are you sure your time is spent in "uploading" vs. "decompressing"? If you switch to uploading a TypedArray of the same dimensions, does it speed up?
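A rough way to check the points above, assuming image is the HTMLImageElement you would otherwise upload; the settings and timing calls are only a sketch:

// Pixel-store settings that avoid extra per-upload conversion work.
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, false);
gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, false);
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.BROWSER_DEFAULT_WEBGL);

const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);

// If the TypedArray path is much faster, the cost is in decoding/converting
// the image rather than in the upload itself.
const raw = new Uint8Array(image.width * image.height * 4);  // dummy RGBA data

console.time('texImage2D from image');
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
console.timeEnd('texImage2D from image');

console.time('texImage2D from TypedArray');
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, image.width, image.height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, raw);
console.timeEnd('texImage2D from TypedArray');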
Related
I'm using Unity3d (4.3.1) and NGUI to create a 2D iOS (iPad) app. I also need to use a lot of full-screen images (about 100 images at 2048x1536), for a gallery, for example.
Right now I'm importing them with the GUI texture type, overridden for iPhone with a max size of 2048 and compression quality: normal, and I'm using a UITexture with the Unlit/Transparent shader to show them.
However, after about 40 images in the project, Xcode terminates the app with a "terminated due to memory error". So the question is: what type of images do I need, and with which settings, to make them work?
I'm using an iPad 3 as the test device with Xcode 5.1.1. I'll be thankful for any help!
I also need to use a lot of full-screen images (about 100 images at 2048x1536), for a gallery, for example.
I think your 2048x2048 images use a huge amount of memory. A 2048x2048 true-color texture uses 16 MB of memory (2048 x 2048 x 4 bytes), so this case would need about 1600 MB! A normal application shouldn't go much over about 200 MB.
So I think you need to reduce the memory you use:
Remember that this texture is going to be expanded to 2048x2048 by Unity ( http://www.opengl.org/wiki/NPOT_Texture ), so if you reduce the image to 1500x1000, your application will still use a 2048x2048 texture. But if you can reduce it to 1024x1024, do it: a 1024x1024 image uses just 4 MB of memory.
If you can use texture compression, use it. PVRTC 4-bit compression ( https://docs.unity3d.com/Documentation/Manual/ReducingFilesize.html ) makes the file size 1/8 of true color, and memory use will also go down (maybe to half).
If your application doesn't display all images at once, load images dynamically and use thumbnails (see the sketch after this list).
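A rough sketch of the dynamic-loading idea, assuming NGUI's UITexture, full-size images placed under a Resources folder, and illustrative names throughout:

// Hypothetical gallery page: keep only the currently visible full-size
// texture in memory and release the previous one when paging.
using UnityEngine;

public class GalleryPage : MonoBehaviour
{
    public UITexture display;     // NGUI widget that shows the photo
    private Texture2D current;

    public void ShowPhoto(string resourcePath)
    {
        // Unload the previous full-size texture before loading the next one.
        if (current != null)
        {
            display.mainTexture = null;
            Resources.UnloadAsset(current);
            current = null;
        }

        // Load the full-size image only when it is actually needed.
        current = Resources.Load<Texture2D>(resourcePath);
        display.mainTexture = current;
    }
}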
Good luck:D
If you want to make a gallery-like app to render photos, maybe you can try a different approach:
create two large editable textures and fill their texels with image data (they must be editable, otherwise you will have no way to write image data directly into them).
if you still have memory issues, or if you want to use less memory, you can use several smaller textures as tiles and render parts of the image into each of them. Remember to configure the texture borders correctly, or do not use the border texels, to avoid wrapping problems (see the sketch after this list).
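A minimal sketch of filling one editable tile from a larger source image; it assumes both textures are uncompressed with read/write enabled, and all names are illustrative:

// Copy a tile-sized block of pixels from the source image into an editable tile texture.
using UnityEngine;

public static class TileFiller
{
    public static void FillTile(Texture2D tile, Texture2D source, int srcX, int srcY)
    {
        Color32[] pixels = source.GetPixels32();   // whole source image; simple but memory-hungry
        Color32[] block = new Color32[tile.width * tile.height];
        for (int y = 0; y < tile.height; y++)
            for (int x = 0; x < tile.width; x++)
                block[y * tile.width + x] = pixels[(srcY + y) * source.width + (srcX + x)];

        tile.SetPixels32(block);
        tile.Apply(false);   // push the texels to the GPU, no mipmaps
    }
}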
The best way is to use a smaller texture. On an iPad you would need a magnifying glass to really appreciate the difference between 1024x1024 and larger textures. Remember that an iPad screen is smaller (7"~10") than a computer's, and with filtering enabled it is really hard to tell the difference.
If you still need to manage such a large texture for some other reason (zooming or similar), I recommend one of the following approaches:
split the texture into layers with alpha channel (transparency): usually backgrounds can be rendered with lower resolutions.
also split the texture into blocks: most textures have repeating patterns.
use compression.
Always avoid using such large textures if possible.
I need to reduce the size of a UIImage captured from the Camera/Gallery down to a maximum of 200 KB.
The UIImage would then be saved to a SQLite database in the app.
I tried using UIImageJPEGRepresentation(image, compression); in a while loop, but couldn't crack it.
Thanks for the help..!!!
You need to scale the image down first; straight out of the camera it will likely be too big. Something like 1000 pixels on the longest side might be enough, but you can play with it and see what works for you. Then apply JPEG compression to the scaled image.
Also, because of the way JPEG works, it's pointless to run the algorithm over and over again. Just pick a good compression rate and run it only once.
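A minimal sketch of that approach; the 1000 px longest side and the quality value are illustrative starting points, not fixed values:

#import <UIKit/UIKit.h>

// Scale the image down first, then compress once with a fixed JPEG quality.
NSData *ScaledJPEGData(UIImage *image, CGFloat maxSide, CGFloat quality)
{
    CGFloat longest = MAX(image.size.width, image.size.height);
    CGFloat scale = (longest > maxSide) ? maxSide / longest : 1.0f;
    CGSize newSize = CGSizeMake(image.size.width * scale, image.size.height * scale);

    UIGraphicsBeginImageContextWithOptions(newSize, YES, 1.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return UIImageJPEGRepresentation(scaled, quality);   // single compression pass
}

If the result is still over 200 KB, lower the quality or the maximum side and try again, rather than looping the compression at the same settings.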
On a tangent: in many cases where you need to save a large data blob in a database, you may find it more memory-efficient to save the data to a separate file in the file system and store only the path in the database.
I'm receiving a series of JPEGs over the network from a camera (MJPEG). I display the images in a UIView as I receive them. What I'm seeing is that my app spends 50% of the CPU (tested on device and simulator) in what appears to be the UIView update.
Is there a less CPU-intensive way to do this screen update? Should I process the JPEG in some way before handing it over to the UIView?
Receive method:
UIImage *image = [UIImage imageWithData:data];
dispatch_async(dispatch_get_main_queue(), ^{
    [cameraView updateVideoImage:image];
});
Update method:
- (void)updateVideoImage:(UIImage *)image {
    myUIView.image = image;
    ...
update: added better screen capture
update2: Would OpenGL provide a quicker surface to render JPEGs to? It's not clear to me from Instruments whether the time is being spent in rendering or decoding. I'm going to put together a test case as suggested and work from there.
iOS is optimized for PNG images. While JPEG greatly reduces the size of images for transmission, it is a much more complex format, so it does not surprise me that this rendering is taking a lot of time. People have said there is JPEG hardware assist on the device, but I do not know for sure, and even if it's there it may be tuned for certain image types.
So, some suggestions. Devise a test where you take one JPEG you have now, render it into a context, and baseline this time. Then take the same image, open it in Preview, save it with a slightly different quality value to another file, and try that (Preview will strip out unnecessary "junk" from the image, or you can even convert it to a PNG first and then back to a JPEG). The idea here is to use an image output from Preview, which is going to be as clean an image as you are going to get. Is the time any better?
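A small sketch of such a baseline test, assuming jpegData is one of the frames you already receive; forcing the draw into a bitmap context makes the deferred decode happen inside the timed block:

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

static CFTimeInterval TimeJPEGDecodeAndRender(NSData *jpegData)
{
    CFTimeInterval start = CACurrentMediaTime();

    UIImage *image = [UIImage imageWithData:jpegData];   // decoding is usually deferred...

    UIGraphicsBeginImageContextWithOptions(image.size, YES, 1.0);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];  // ...so force it here
    UIGraphicsEndImageContext();

    return CACurrentMediaTime() - start;
}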
You can also try libjpeg-turbo and see if it can render your images faster. You can see that library in action in a GitHub project, PhotoScrollerNetwork. You may find that project of use, as it decodes the JPEGs (using that library) in real time as they are received, and then supports zoomable viewing using CATiledLayers.
I generally use Fireworks PNGs (with different layers, some hidden, etc.) in my iOS projects (loaded into NIBs in UIImageView instances). Often, I take the PNG and resave it as PNG-32 to make the file smaller, but I'm questioning this now (because then I have to store the Fireworks PNG separately)....
In my cursory tests, a smaller file size does NOT affect the resultant memory use. Is there a relationship, or is it the final rendered bitmap that matters?
Note: I'm not asking about using an image that is too big in pixels. A valid comparison would be a high-quality JPEG that weighs 1 MB vs. a low-quality JPEG of the same content that weighs 100 KB. Is the memory use the same?
UIImageView does not do any processing, so if you set a large image, the whole thing is loaded into memory when the imageView needs it, regardless of the size of the imageView. So yes, it does make a difference: you should store the smallest images that work within the imageView.
While your current example uses NIBs, if you were creating an app that displays large images acquired from other sources (e.g. the device camera or an external service), then you would scale those to a display size before using them in a UIImageView.
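As an illustrative check (not part of the answer above): the decoded footprint of an image is roughly bytes-per-row times height of its bitmap, independent of how small the PNG/JPEG file was on disk:

#import <UIKit/UIKit.h>

static size_t ApproximateDecodedBytes(UIImage *image)
{
    CGImageRef cg = image.CGImage;
    return CGImageGetBytesPerRow(cg) * CGImageGetHeight(cg);
}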
Edit: OK, sorry, I had a simple programming error; is there a way to delete this question?
I have some compressed textures that are PVR files, but I can't seem to draw them in my iPad application using OpenGL ES.
I can draw PNG files just fine, and I know the PVR files are being loaded correctly.
Are there some special OpenGL draw functions that I need to be calling to draw the PVR files?
Edit: All I get is a white image.
Any info is appreciated.
Drawing PVRTC textures should be exactly the same as any other texture format - it looks more likely that your loading code is the problem. Are any GL errors being reported during loading?
The major difference from loading uncompressed textures is in the line:
glCompressedTexImage2D(GL_TEXTURE_2D, level, GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG, width, height, 0, size, data);
or
glCompressedTexImage2D(GL_TEXTURE_2D, level, GL_COMPRESSED_RGB_PVRTC_4BPPV1_IMG, width, height, 0, size, data);
Make sure that you're not setting a GL filter mode that uses mipmaps if they're not in the texture as well.
Searching for PVRTC in Apple's docs brings up a decent summary of how to use these textures.
After upload, PVR textures are no different from other formats. Did you forget to skip the header during the data upload, or use the wrong parameters for glCompressedTexImage2D? It is even possible that the compression tool was unable to convert the images because of a wrong size or color format.
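A minimal sketch of the upload step, assuming the caller has already parsed the PVR container so that payload points past the file header and width/height come from it, the target texture is already bound, and OpenGL ES 2.0 headers are used:

#import <Foundation/Foundation.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

static BOOL UploadPVRTC4Level(const GLvoid *payload, GLsizei width, GLsizei height)
{
    // 4 bpp PVRTC payload size (valid for textures of at least 8x8 texels).
    GLsizei dataSize = width * height * 4 / 8;

    glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                           width, height, 0, dataSize, payload);

    // With no mipmap levels in the file, the min filter must not use mipmaps,
    // otherwise the texture is incomplete and will not sample correctly.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    GLenum err = glGetError();
    if (err != GL_NO_ERROR) {
        NSLog(@"glCompressedTexImage2D failed: 0x%x", err);
        return NO;
    }
    return YES;
}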