I call the following line in my rendering loop, which seems to be the right way to handle drawing a constantly changing array:
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(myArray), gl.DYNAMIC_DRAW);
This works fine under Firefox: my array is updated properly and there is no memory leak. Under Chrome, however, it crashes the application within a few seconds; each call to bufferData increases memory usage and nothing is freed.
Am I doing something wrong? Is there a way to fix it?
I had the exact same problem.
Worked around by doing it without a typed array:
gl.bufferData(gl.ARRAY_BUFFER, myArray, gl.DYNAMIC_DRAW);
This "solved" my problem.
We have a native C++ add-on running in an Electron renderer process providing Uint8Array bitmap data to JavaScript, where it is painted into a canvas via texImage2D in a WebGL context or putImageData in a 2D context.
The Uint8Array is allocated in the native add-on and passed to JS via a callback. It is not deallocated immediately after the callback returns; instead it is kept in a memory pool that holds the last 10 frames sent, so they remain available for asynchronous painting.
If the array is passed as-is to putImageData or texImage2D, the renderer completely freezes. If it is copied into a new TypedArray beforehand, there is no problem, but we would like to avoid the extra copy operation, hence the memory pool.
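For reference, the copy that avoids the freeze is just a plain clone into a fresh JS-owned buffer before handing it to the canvas. A sketch of the 2D path, with the ctx, frame, width, and height names assumed:

// frame is the pooled Uint8Array coming from the native add-on.
function paint(ctx, frame, width, height) {
    // Copying into a fresh, JS-owned buffer avoids the freeze,
    // at the cost of one extra copy per frame.
    var copy = new Uint8ClampedArray(frame);          // full copy of the pixel data
    var image = new ImageData(copy, width, height);   // expects RGBA, width*height*4 bytes
    ctx.putImageData(image, 0, 0);
}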
I have a feeling the freezing is related to the way Chromium handles GL commands via its command buffer.
I've tried the following Chromium command-line args in an attempt to isolate the issue, with no luck (the renderer process still freezes when the array is not copied beforehand):
--use-passthrough-cmd-decoder results in longer render times for 2d, and a null context for webgl
--disable-gpu-sandbox does nothing
--in-process-gpu does nothing
Any idea what is happening?
My app from time to time initializes a bunch of DirectX stuff and loads scenes, sometimes containing large textures (up to 200–300 MB per texture). At first everything works fine, but after a while FromMemory() just stops working, though only for big textures:
SlimDX.Direct3D11.Direct3D11Exception: E_FAIL: An undetermined error occurred (-2147467259)
at SlimDX.Result.Throw[T](Object dataKey, Object dataValue)
at SlimDX.Result.Record[T](Int32 hr, Boolean failed, Object dataKey, Object dataValue)
at SlimDX.Direct3D11.ShaderResourceView.ConstructFromMemory(Device device, Byte[] memory, D3DX11_IMAGE_LOAD_INFO* loadInformation)
at SlimDX.Direct3D11.ShaderResourceView.FromMemory(Device device, Byte[] memory)
Of course, I dispose of all previously loaded ShaderResourceViews before loading a new scene. But FromMemory() only starts working again after the app is restarted. Could you please tell me what else could be wrong?
UPD:
With Texture2D.FromMemory(), I get this:
System.Runtime.InteropServices.SEHException (0x80004005): External component has thrown an exception.
at D3DX11CreateTextureFromMemory(ID3D11Device* , Void* , UInt32 , D3DX11_IMAGE_LOAD_INFO* , ID3DX11ThreadPump* , ID3D11Resource** , Int32* )
at SlimDX.Direct3D11.Resource.ConstructFromMemory(Device device, Byte[] memory, D3DX11_IMAGE_LOAD_INFO* info)
at SlimDX.Direct3D11.Texture2D.FromMemory(Device device, Byte[] memory)
And with native code debugging enabled:
Exception thrown at 0x748AA882 in app.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x00AFC7C8.
Exception thrown: 'System.Runtime.InteropServices.SEHException' in SlimDX.dll
Sadly, I have no idea how D3DX11CreateTextureFromMemory() actually works or why it tries to re-allocate memory. Maybe it's time to move to x64…
Found the problem. It turns out all I had to do was add the LARGEADDRESSAWARE flag to the executable. Without it, 1 GB was the limit, which is quite easily reached with 300 MB per texture.
Also, since most of that data ended up on the Large Object Heap, setting GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce helped as well.
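For reference, a minimal sketch of the scene-unload path with LOH compaction (requires .NET 4.5.1 or later; the loadedViews list is illustrative, standing in for whatever holds the scene's ShaderResourceViews):

using System;
using System.Collections.Generic;
using System.Runtime;

static class SceneUnloader
{
    // Hypothetical container for the previous scene's disposable resources.
    static readonly List<IDisposable> loadedViews = new List<IDisposable>();

    public static void UnloadScene()
    {
        foreach (var view in loadedViews)
            view.Dispose();
        loadedViews.Clear();

        // Compact the Large Object Heap on the next full collection so the
        // freed texture buffers don't leave the 32-bit address space fragmented.
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();
    }
}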
Sorry for wasting your time.
I am trying to create a Gtk widget that you can pass an OpenCV image to, which will then display it. I have created a class inherited from Gtk.Image that is used to show the image. You pass the OpenCV image to this class using the show_frame method, which updates the Gtk.Image so it shows that image.
I have tested this and it works fine, i.e. the image is correctly shown and updated when the show_frame method is called. However, every time the image is updated, the memory used increases until there is not enough memory and the program crashes.
I believe this is because the image memory is not being freed correctly. I cannot, however, work out how to fix this. I have tried unreferencing the gbytes once a new frame is received, but this does not help. The memory only builds up when the set_from_pixbuf function is called; if this is commented out, memory usage stays at a constant level.
import cv2
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, GLib, GdkPixbuf

class OpenCVImageViewer(Gtk.Image):
    def __init__(self):
        Gtk.Image.__init__(self)

    def show_frame(self, frame):
        # Convert OpenCV's BGR channel order to the RGB order GdkPixbuf expects
        rgb_image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        # Get details about the frame in order to set up the pixbuf
        height = rgb_image.shape[0]
        width = rgb_image.shape[1]
        nChannels = rgb_image.shape[2]
        gbytes = GLib.Bytes.new(rgb_image.tobytes())
        pixbuf = GdkPixbuf.Pixbuf.new_from_bytes(gbytes, GdkPixbuf.Colorspace.RGB, False,
                                                 8, width, height, width * nChannels)
        # Defer the Gtk calls to the main loop for thread safety
        GLib.idle_add(self.set_from_pixbuf, pixbuf)
        GLib.idle_add(self.queue_draw)
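A usage sketch, feeding frames from a worker thread (the camera index and names are illustrative; a running Gtk main loop is assumed):

import threading

def capture_loop(viewer):
    cap = cv2.VideoCapture(0)        # default camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        viewer.show_frame(frame)     # safe: the Gtk work is deferred via GLib.idle_add

viewer = OpenCVImageViewer()
threading.Thread(target=capture_loop, args=(viewer,), daemon=True).start()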
Well, I found a solution, but I do not understand why it works: set the image with a copy of the pixbuf.
imageWidget.set_from_pixbuf(pixbuffer.copy())
I came to this solution after observing that the memory leak disappeared for scaled pixbufs (i.e. the result of pixbuffer.scale_simple).
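Applied to the question's class, it is a one-line change at the end of show_frame (a sketch; everything else stays as above):

        # Hand the widget a deep copy; the original pixbuf, and the GBytes
        # backing it, can then be collected once this method returns.
        GLib.idle_add(self.set_from_pixbuf, pixbuf.copy())
        GLib.idle_add(self.queue_draw)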
Excerpt from the PyGTK FAQ, section 5.17:
There is a reference cycle between the python wrapper and its underlying C object; this means that the object will not be automatically deallocated when there are no more user references, and you will need the garbage collector to kick in (which may take a few cycles). This occasionally causes the odd problem, such as with pixbufs described in FAQ 8.4
And from section 8.4:
The answer is "Interesting GC behaviour" in Python. Apparently finalizers are not necessarily called as soon as an object goes out of scope. My guess is that the python memory manager doesn't directly know about the storage allocated for the image buffer (since it's allocated by the gdk) and that it therefore doesn't know how fast memory is being consumed.
The solution is to call gc.collect() at some appropriate place.
For example, I had some code that looked like this:
for image_path in images:
    pb = gtk.gdk.pixbuf_new_from_file(image_path)
    pb = pb.scale_simple(thumb_width, thumb_height, gtk.gdk.INTERP_BILINEAR)
    thumb_list_model.set_value(thumb_list_model.append(None), 0, pb)
This chewed up an unacceptably large amount of memory for any reasonable image set. Changing the code to look like this fixed the problem:
import gc
for image_path in images:
    pb = gtk.gdk.pixbuf_new_from_file(image_path)
    pb = pb.scale_simple(thumb_width, thumb_height, gtk.gdk.INTERP_BILINEAR)
    thumb_list_model.set_value(thumb_list_model.append(None), 0, pb)
    del pb
    gc.collect()
I am not exactly sure where you should call the garbage collector in your code (since I don't really know that much Python), but I believe this is the way to solve it.
NB: The entire code base for this project is so large that posting any meaningful amount would render this question too localised. I have tried to distil the code down to the bare essentials. I'm not expecting anyone to solve my problems directly, but I will upvote answers I find helpful or intriguing.
This project uses a modified version of AudioStreamer to play back audio files that are saved locally on the device (iPhone).
The stream is set up and scheduled on the current loop using this code (unaltered from the standard AudioStreamer project as far as I know):
CFStreamClientContext context = {0, self, NULL, NULL, NULL};
CFReadStreamSetClient(
    stream,
    kCFStreamEventHasBytesAvailable | kCFStreamEventErrorOccurred | kCFStreamEventEndEncountered,
    ASReadStreamCallBack,
    &context);
CFReadStreamScheduleWithRunLoop(stream, CFRunLoopGetCurrent(), kCFRunLoopCommonModes);
The ASReadStreamCallBack calls:
- (void)handleReadFromStream:(CFReadStreamRef)aStream
eventType:(CFStreamEventType)eventType
On the AudioStreamer object, this all works fine until the stream is read using this code:
BOOL hasBytes = NO; //Added for debugging
hasBytes = CFReadStreamHasBytesAvailable(stream);
length = CFReadStreamRead(stream, bytes, kAQDefaultBufSize);
hasBytes is YES, but when CFReadStreamRead is called, execution stops. The app does not crash; it just stops executing. Any breakpoints below the CFReadStreamRead call are not hit, and ASReadStreamCallBack is not called again.
I am at a loss as to what might cause this; my best guess is that the thread is being terminated, but the hows and whys are why I'm asking SO.
Has anyone seen this behaviour before? How can I track it down? Any ideas on how I might solve it will be very much welcome!
Additional Info Requested via Comments
This is 100% repeatable
CFReadStreamHasBytesAvailable was added by me for debugging but removing it has no effect
First, I assume that CFReadStreamScheduleWithRunLoop() is running on the same thread as CFReadStreamRead()?
Is this thread processing its runloop? Failure to do this is my main suspicion. Do you have a call like CFRunLoopRun() or equivalent on this thread?
Typically there is no reason to spawn a separate thread for reading streams asynchronously, so I'm a little confused about your threading design. Is there really a background thread involved here? Also, typically CFReadStreamRead() would be called in your client callback, i.e. when you receive the kCFStreamEventHasBytesAvailable event (which it appears to be in the linked code), but you're suggesting ASReadStreamCallBack is never called. How have you modified AudioStreamer?
It is possible that the stream pointer is just corrupt in some way. CFReadStreamRead should certainly not block if bytes are available (it certainly would never block for more than a few milliseconds for local files). Can you provide the code you use to create the stream?
Alternatively, CFReadStreams send messages asynchronously but it is possible (but not likely) that it's blocking because the runloop isn't being processed.
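For completeness, a minimal sketch of what a dedicated streaming thread has to do for the callbacks above to fire at all (the thread entry point is illustrative; the scheduling mirrors the question's code):

#include <CoreFoundation/CoreFoundation.h>

// Entry point of a hypothetical background streaming thread.
void StreamThreadMain(CFReadStreamRef stream)
{
    // Schedule the stream on THIS thread's runloop...
    CFReadStreamScheduleWithRunLoop(stream, CFRunLoopGetCurrent(), kCFRunLoopCommonModes);
    CFReadStreamOpen(stream);
    // ...then actually drive the runloop; without this call, no
    // kCFStreamEvent* callbacks are ever delivered on this thread.
    CFRunLoopRun();
}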
If you prefer, I've uploaded my AudioPlayer inspired by Matt's AudioStreamer hosted at https://code.google.com/p/audjustable/. It supports local files (as well as HTTP). I think it does what you wanted (stream files from more than just HTTP).
I'm developing an application with Adobe AIR 3 for iOS and getting low-memory errors frequently.
After the iOS 5 update, the OS started to kill my app after a few low-memory warnings.
But the thing is, the profiler says the app uses only 4 to 9 MB of memory.
There are a lot of bitmap copy operations around, and sometimes it instantiates new bitmaps from embedded bitmaps.
I highly optimized everything and looked for leaks, etc.
I watch the profiler for memory status, and it seems like the GC clears everything. Everything looks perfect, but the app continues to get low-memory errors and gets killed by the OS.
Is there anything wrong with the code below? My assumption is that this ClassReference never gets released from memory, even though the profiler says memory is cleared.
I used the clone method to pass by value instead of by reference, so I guess the GC can collect the local variable. I tried with and without clone; nothing changes.
If the code below runs 10-15 times with different tile IDs the app crashes, but with the same IDs it keeps working.
Is there anyone who is familiar with this kind of thing?
tmp is a BitmapData:
if (isMoving)
{
    tmp = getProxyImage(x, y); // low resolution tile image
}
else
{
    strTmp = "main_TILE" + getTileID(x, y);
    var ClassReference:Class = getDefinitionByName(strTmp) as Class; // full resolution tile image // something wrong here
    tmp = new ClassReference().bitmapData.clone(); // something wrong here
    ClassReference = null;
}
return tmp.clone();
Thanks for reading. I hope someone has a solution for this.
You are creating three copies of your BitmapData with this. They will likely get garbage collected eventually, but you probably run out of memory before that happens.
(Here I assume you have embedded your bitmapdata using the [Embed] tag)
// allocates no new memory; the class reference already exists
var ClassReference:Class = getDefinitionByName(strTmp) as Class;

// creates a new BitmapAsset from the class reference, including its BitmapData;
// then you clone that BitmapData, giving you two copies
tmp = new ClassReference().bitmapData.clone();

// not really necessary since ClassReference goes out of scope anyway, but no harm done
ClassReference = null;

// makes a third copy of your second copy and returns it
return tmp.clone();
I would recommend this (assuming you need unique bitmapDatas for each tile):
var ClassReference:Class = getDefinitionByName(strTmp) as Class;
return new ClassReference().bitmapData.clone();
If you don't need unique bitmapDatas, keep static properties with the bitmapDatas on some class and use the same ones all over. That will minimize memory usage.
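A sketch of that static-cache idea (the class and method names here are illustrative):

package
{
    import flash.display.BitmapData;
    import flash.utils.getDefinitionByName;

    public class TileCache
    {
        // One shared BitmapData per embedded tile class name, created on first use.
        private static var cache:Object = {};

        public static function getTile(className:String):BitmapData
        {
            if (cache[className] == null)
            {
                var ClassReference:Class = getDefinitionByName(className) as Class;
                cache[className] = new ClassReference().bitmapData;
            }
            return cache[className]; // same instance every call, no clones
        }
    }
}

Every tile with the same ID then shares one BitmapData instead of receiving a fresh clone per call.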