Some time ago I posted a question about a WriteableBitmap memory leak, and though I received wonderful tips related to the problem, I still think there is a serious bug here (or a mistake on my part, or some confusion, or something else entirely).
So, here is my problem again:
Suppose we have a WPF application with an Image and a button. The Image's source is a really big bitmap (3600 * 4800 px); when it is shown at runtime, the application consumes ~90 MB.
Now suppose I wish to instantiate a WriteableBitmap from the source of that Image (the really big one). When this happens, the application consumes ~220 MB.
Now comes the tricky part: when the modifications to the image (through the WriteableBitmap) end, and all the references to the WriteableBitmap (at least those I am aware of) are destroyed (at the end of a method or by setting them to null), the memory used by the WriteableBitmap should be freed and the application's consumption should return to ~90 MB. The problem is that sometimes it returns and sometimes it does not.
Here is a sample code:
// The Image's source was set prior to this event
private void buttonTest_Click(object sender, RoutedEventArgs e)
{
if (image.Source != null)
{
WriteableBitmap bitmap = new WriteableBitmap((BitmapSource)image.Source);
bitmap.Lock();
bitmap.Unlock();
//image.Source = null;
bitmap = null;
}
}
As you can see, the reference is local and the memory should be released at the end of the method (or whenever the garbage collector decides to do so). However, the app can keep consuming ~224 MB until the end of the universe.
Any help would be great.
Is it necessary to render the bitmap at the same resolution and pixel count? You could create the WriteableBitmap with far fewer pixels and call the Render method. Since the WriteableBitmap keeps a reference to the original UIElement when Render is called, in this case you end up with three chunks of memory: 1) the original UIElement, 2) the pixels in the WriteableBitmap, and 3) the reference to the copied original.
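The Render method mentioned here exists on the Silverlight WriteableBitmap; in WPF (the asker's setup) a comparable downscale-before-copy can be sketched with TransformedBitmap instead. This is only an illustration, reusing the question's image control and an arbitrary 4x downscale:
// Illustrative sketch (WPF API): build the WriteableBitmap from a scaled-down copy of
// the source so the editable pixel buffer is a fraction of the full-resolution one.
var src = (BitmapSource)image.Source;
var scaled = new TransformedBitmap(src, new ScaleTransform(0.25, 0.25));
var small = new WriteableBitmap(scaled); // roughly 1/16th of the full-size pixel memory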
I had a similar memory-leak issue with WriteableBitmap and fixed it after checking out this link:
http://www.wintellect.com/CS/blogs/jprosise/archive/2009/12/17/silverlight-s-big-image-problem-and-what-you-can-do-about-it.aspx
If you create another WriteableBitmap, copy the pixels over, and then drop the first WriteableBitmap, you should see some memory released; at least I did in my scenario.
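In WPF terms (the question's setup), that pixel copy might look like the sketch below. Note that WriteableBitmap has no Dispose method, so getting rid of the first one simply means clearing every reference to it and letting the garbage collector reclaim it:
// Illustrative sketch: copy the pixels of the big source into a fresh WriteableBitmap,
// then drop all references to the original so its buffer can be reclaimed.
var src = (BitmapSource)image.Source;
int stride = src.PixelWidth * ((src.Format.BitsPerPixel + 7) / 8);
var pixels = new byte[stride * src.PixelHeight];
src.CopyPixels(pixels, stride, 0);
var copy = new WriteableBitmap(src.PixelWidth, src.PixelHeight,
                               src.DpiX, src.DpiY, src.Format, src.Palette);
copy.WritePixels(new Int32Rect(0, 0, src.PixelWidth, src.PixelHeight), pixels, stride, 0);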
What exactly should I expect to happen when using DiscardResource?
What's the difference between discarding and destroying/deleting a resource?
When is a good time/use-case to discard a resource?
Unfortunately Microsoft doesn't seem to say much about it other than it "discards a resource".
TL;DR: It is a rarely used function that provides a hint to the driver related to its internal clear and compression structures. You are unlikely to use it except in response to specific performance advice.
DiscardResource is the DirectX 12 version of the Direct3D 11.1 DiscardView / DiscardResource methods. See Microsoft Docs.
The primary use of these methods is to optimize the performance of tile-based deferred rasterizer graphics parts by discarding the render target after Present. This is a hint to the driver that the contents of the render target are no longer relevant to the operation of the program, so it can avoid some internal clearing operations on the next use.
For DirectX 11, the DirectX 11 App template uses DiscardView because it makes use of DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL:
void DX::DeviceResources::Present()
{
// The first argument instructs DXGI to block until VSync, putting the application
// to sleep until the next VSync. This ensures we don't waste any cycles rendering
// frames that will never be displayed to the screen.
DXGI_PRESENT_PARAMETERS parameters = { 0 };
HRESULT hr = m_swapChain->Present1(1, 0, &parameters);
// Discard the contents of the render target.
// This is a valid operation only when the existing contents will be entirely
// overwritten. If dirty or scroll rects are used, this call should be removed.
m_d3dContext->DiscardView1(m_d3dRenderTargetView.Get(), nullptr, 0);
// Discard the contents of the depth stencil.
m_d3dContext->DiscardView1(m_d3dDepthStencilView.Get(), nullptr, 0);
// If the device was removed either by a disconnection or a driver upgrade, we
// must recreate all device resources.
if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
{
HandleDeviceLost();
}
else
{
DX::ThrowIfFailed(hr);
}
}
The DirectX 12 App template doesn't need those explicit calls because it uses DXGI_SWAP_EFFECT_FLIP_DISCARD.
If you are wondering why the DirectX 11 app doesn't just use DXGI_SWAP_EFFECT_FLIP_DISCARD, it probably should. The DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL swap effect was the only one supported by Windows 8.x for Windows Store apps, which is when DiscardView was introduced. For Windows 10 / DirectX 12 / UWP, it's probably better to always use DXGI_SWAP_EFFECT_FLIP_DISCARD unless you specifically don't want the backbuffer discarded.
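For illustration, opting into the discard behavior is just a matter of the swap effect requested when the swap chain is created. A minimal sketch of the relevant DXGI_SWAP_CHAIN_DESC1 fields (the size and format values here are placeholders):
// Sketch (requires the Windows 10 SDK): request DXGI_SWAP_EFFECT_FLIP_DISCARD so the
// back-buffer contents are discarded after Present and no explicit DiscardView calls
// are needed.
DXGI_SWAP_CHAIN_DESC1 swapChainDesc = {};
swapChainDesc.Width = backBufferWidth;                    // placeholder
swapChainDesc.Height = backBufferHeight;                  // placeholder
swapChainDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
swapChainDesc.SampleDesc.Count = 1;
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.BufferCount = 2;
swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD; // instead of FLIP_SEQUENTIAL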
It is also useful for multi-GPU SLI / CrossFire configurations, since the clearing operation can require synchronization between the GPUs. See this GDC 2015 talk.
There are also other scenario-specific usages. For example, if doing deferred rendering for the G-buffer where you know every single pixel will be overwritten, you can use DiscardResource instead of doing ClearRenderTargetView / ClearDepthStencilView.
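A minimal sketch of that G-buffer case in DirectX 12 follows; commandList (ID3D12GraphicsCommandList*) and gBufferTarget (ID3D12Resource*) are assumed to exist elsewhere, and discarding is only valid because every pixel is rewritten before it is read:
// Hint the driver that the current G-buffer contents are irrelevant instead of clearing.
D3D12_DISCARD_REGION region = {};
region.NumRects = 0;          // no sub-rectangles: discard the whole subresource
region.pRects = nullptr;
region.FirstSubresource = 0;
region.NumSubresources = 1;
commandList->DiscardResource(gBufferTarget, &region);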
From time to time my app initializes a bunch of DirectX stuff and loads scenes, sometimes containing large textures (up to 200–300 MB per texture). At first everything works fine, but after a while FromMemory() simply stops working, though only for big textures:
SlimDX.Direct3D11.Direct3D11Exception: E_FAIL: An undetermined error occurred (-2147467259)
at SlimDX.Result.Throw[T](Object dataKey, Object dataValue)
at SlimDX.Result.Record[T](Int32 hr, Boolean failed, Object dataKey, Object dataValue)
at SlimDX.Direct3D11.ShaderResourceView.ConstructFromMemory(Device device, Byte[] memory, D3DX11_IMAGE_LOAD_INFO* loadInformation)
at SlimDX.Direct3D11.ShaderResourceView.FromMemory(Device device, Byte[] memory)
Of course, I dispose of all previously loaded ShaderResourceViews before loading a new scene. But FromMemory() only starts working again after the app is restarted. Could you please tell me what else could be wrong?
UPD:
With Texture2D.FromMemory(), I get this:
System.Runtime.InteropServices.SEHException (0x80004005): External component has thrown an exception.
at D3DX11CreateTextureFromMemory(ID3D11Device* , Void* , UInt32 , D3DX11_IMAGE_LOAD_INFO* , ID3DX11ThreadPump* , ID3D11Resource** , Int32* )
at SlimDX.Direct3D11.Resource.ConstructFromMemory(Device device, Byte[] memory, D3DX11_IMAGE_LOAD_INFO* info)
at SlimDX.Direct3D11.Texture2D.FromMemory(Device device, Byte[] memory)
And with native code debugging enabled:
Exception thrown at 0x748AA882 in app.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x00AFC7C8.
Exception thrown: 'System.Runtime.InteropServices.SEHException' in SlimDX.dll
Sadly, I have no idea how D3DX11CreateTextureFromMemory() actually works or why it tries to re-allocate memory. Maybe it's time to move to x64…
Found the problem. It turns out all I had to do was add the LARGEADDRESSAWARE flag to the executable. Without it, roughly 1 GB was the practical limit, which is quite easy to reach at 300 MB per texture.
Also, since most of that data ended up in the Large Object Heap, setting GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce helped as well.
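For reference, the flag can be applied to the built executable with editbin /LARGEADDRESSAWARE app.exe (for example as a post-build step), and the LOH setting is a one-liner. A small sketch of how it might be called around a scene unload:
// GCSettings lives in the System.Runtime namespace.
// Request that the next full collection also compacts the Large Object Heap,
// where the big byte[] buffers passed to FromMemory() end up.
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();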
Sorry for wasting your time.
I am trying to create a Gtk Widget that you can pass an OpenCV image to that will then show it. I have created a class that is inherited from Gtk.Image that is used to show the image. You pass the OpenCV image to this class using the show_frame method, which then updates the Gtk.Image so it shows that image.
I have tested this and it works fine, i.e. the image is correctly shown and updated when the show_frame method is called. However, every time the image is updated the memory usage increases, until there is not enough memory and the program crashes.
I believe this is because the memory for the image is not being freed correctly. I cannot, however, work out how to fix this. I have tried unreferencing the gbytes once a new frame is received, but this does not help. The memory only builds up when the set_from_pixbuf function is called; if this call is commented out, the memory usage stays at a constant level.
class OpenCVImageViewer(Gtk.Image):
    def __init__(self):
        Gtk.Image.__init__(self)

    def show_frame(self, frame):
        # Convert OpenCV BGR to Gtk RGB
        rgb_image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        # Get details about the frame in order to set up the pixbuf
        height = rgb_image.shape[0]
        width = rgb_image.shape[1]
        nChannels = rgb_image.shape[2]
        gbytes = GLib.Bytes.new(rgb_image.tostring())
        pixbuf = GdkPixbuf.Pixbuf.new_from_bytes(gbytes, GdkPixbuf.Colorspace.RGB, False,
                                                 8, width, height, width*nChannels)
        # Hand off to the Gtk main loop for thread safety
        GLib.idle_add(self.set_from_pixbuf, pixbuf)
        GLib.idle_add(self.queue_draw)
Well, I found a solution, but I do not understand why it works: set the image from a copy of the pixbuf.
imageWidget.set_from_pixbuf(pixbuffer.copy())
I came to this solution after observing that the memory leak disappeared for scaled pixbufs (i.e. the result of pixbuf.scale_simple).
Excerpt from the PyGTK FAQ, section 5.17:
There is a reference cycle between the python wrapper and its underlying C object; this means that the object will not be automatically deallocated when there are no more user references, and you will need the garbage collector to kick in (which may take a few cycles). This occasionally causes the odd problem, such as with pixbufs described in FAQ 8.4
And from section 8.4:
The answer is "Interesting GC behaviour" in Python. Apparently finalizers are not necessarily called as soon as an object goes out of scope. My guess is that the python memory manager doesn't directly know about the storage allocated for the image buffer (since it's allocated by the gdk) and that it therefore doesn't know how fast memory is being consumed.
The solution is to call gc.collect() at some appropriate place.
For example, I had some code that looked like this:
for image_path in images:
pb = gtk.gdk.pixbuf_new_from_file(image_path)
pb = pb.scale_simple(thumb_width, thumb_height, gtk.gdk.INTERP_BILINEAR)
thumb_list_model.set_value(thumb_list_model.append(None), 0, pb)
This chewed up an unacceptably large amount of memory for any reasonable image set. Changing the code to look like this fixed the problem:
import gc
for image_path in images:
pb = gtk.gdk.pixbuf_new_from_file(image_path)
pb = pb.scale_simple(thumb_width, thumb_height, gtk.gdk.INTERP_BILINEAR)
thumb_list_model.set_value(thumb_list_model.append(None), 0, pb)
del pb
gc.collect()
I am not exactly sure where you should call the garbage collector in your code (since I don't really know that much Python), but I believe this is the way to solve it.
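Applied to the show_frame method from the question, that advice might look like the sketch below; the _update_image helper is made up here and replaces the two idle_add calls:
import gc

def _update_image(image_widget, pixbuf):
    # Runs on the Gtk main loop: swap in the new pixbuf, then collect so the
    # previous pixbuf wrapper (and its pixel bytes) is freed promptly.
    image_widget.set_from_pixbuf(pixbuf)
    image_widget.queue_draw()
    gc.collect()
    return False  # one-shot idle callback

# inside OpenCVImageViewer.show_frame:
# GLib.idle_add(_update_image, self, pixbuf)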
I'm developing an application with Adobe AIR 3 for iOS and getting low-memory errors frequently.
After the iOS 5 update, the OS started to kill my app after a few low-memory warnings.
But the thing is, the profiler says the app uses 4 to 9 MB of memory.
There are a lot of bitmap copy operations around, and the app sometimes instantiates new bitmaps from embedded bitmaps.
I highly optimized everything and looked for leaks, etc.
I watch the profiler for memory status and it seems like the GC clears everything. Everything looks perfect, but the app continues to get low-memory errors and gets killed by the OS.
Is there anything wrong with the code below? My assumption is that this ClassReference never gets released from memory, even though the profiler says memory is cleared.
I used the clone method to pass by value instead of by reference, so I guess the GC can collect that local variable. I tried with and without clone; nothing changes.
If the code below runs 10-15 times with different tile IDs the app crashes, but with the same IDs it keeps working.
Is there anyone who is familiar with this kind of thing?
tmp is a BitmapData
if (isMoving)
{
tmp=getProxyImage(x,y); //low resolution tile image
}
else
{
strTmp="main_TILE"+getTileID(x,y);
var ClassReference:Class = getDefinitionByName(strTmp) as Class; //full resolution tile image //something wrong here
tmp=new ClassReference().bitmapData.clone(); //something wrong here
ClassReference=null;
}
return tmp.clone();
Thanks for reading. I hope someone has a solution for this.
You are creating three copies of your bitmapdata with this. They will likely get garbage collected eventually, but you probably run out of memory before that happens.
(Here I assume you have embedded your bitmapdata using the [Embed] tag)
// allocates no new memory, the class reference already exists
var ClassReference:Class = getDefinitionByName(strTmp) as Class;
// creates a new BitmapAsset from the class reference, including its BitmapData;
// you then clone this BitmapData, giving you two copies
tmp = new ClassReference().bitmapData.clone();
// not really necessary since ClassReference goes out of scope anyway, but no harm done
ClassReference=null;
// Makes a third copy of your second copy and returns it.
return tmp.clone();
I would recommend this (assuming you need a unique BitmapData for each tile):
var ClassReference:Class = getDefinitionByName(strTmp) as Class;
return new ClassReference().bitmapData.clone();
If you don't need unique BitmapData objects, keep static properties holding the BitmapData on some class and use the same instances everywhere. That will minimize memory usage.
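A minimal sketch of such a static cache (the class and property names here are made up):
package
{
    import flash.display.BitmapData;
    import flash.utils.getDefinitionByName;

    public class TileCache
    {
        // One shared BitmapData per embedded tile class; nothing is cloned.
        private static var cache:Object = {};

        public static function getTile(className:String):BitmapData
        {
            if (cache[className] == null)
            {
                var ClassReference:Class = getDefinitionByName(className) as Class;
                cache[className] = new ClassReference().bitmapData;
            }
            return cache[className] as BitmapData;
        }
    }
}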
I'm not new to the concepts present in J2ME, but I'm sort of lazy in ways I shouldn't be:
Lately my app has been loading images into memory as if they were candy...
Sprite example = new Sprite(Image.createImage("/images/example.png"), w, h);
and I'm not really sure it's the best way, but it worked fine on my Motorola Z6 until last night, when I tested the app on an old Samsung phone: the images won't even load, and it takes several attempts at starting the thread for them to show up. The screen was left white, so I realized it has to be something about image loading that I'm not doing quite right... Can anyone tell me how to properly write a loading routine for my app?
I'm not sure exactly what you are looking for, but the behavior you describe very much sounds like you are experiencing an OutOfMemory exception. Try reducing the dimensions of your images (heap usage is based on dimension) and see if the behavior ceases. This will let you know if it is truly an OutOfMemory issue or something else.
Other tips:
Load images largest to smallest. This helps with heap fragmentation and allows the largest heap space for the largest images.
Unload (set to null) in reverse order of how you loaded and garbage collect after doing so. Make sure to Thread.yield() after you call the GC.
Make sure you only load the images that you need. Unload images from a state that the application is no longer in.
Since you are creating sprites, you may have multiple sprites for one image. Consider creating an image pool to make sure you only load each image once, then just point each Sprite object to the image within the pool that it requires. The example in your question looks like it would more than likely load the same image into memory more than once. That's wasteful and could be part of the OutOfMemory issue.
Use a film image (a set of images laid out at a fixed frame size inside one image) and use logic to pull them out one at a time.
Because they are grouped into one image, you save the per-image header overhead and thus reduce the memory used.
This technique was first used on memory-constrained MIDP 1.0 devices.
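A minimal sketch with MIDP 2.0's Sprite class (the sheet path and frame size are placeholders that must match the sheet's grid):
// Uses javax.microedition.lcdui.Image and javax.microedition.lcdui.game.Sprite.
Image sheet = Image.createImage("/images/example_sheet.png"); // one image, many frames
Sprite example = new Sprite(sheet, frameWidth, frameHeight);  // splits the sheet into equal frames
example.setFrame(2); // select the third frame without loading another Image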
Using Fostah's approach of not loading images over and over, I made the following class:
public class ImageLoader {
private static Hashtable pool = new Hashtable();
public static Image getSprite(String source){
if(pool.get(source) != null) return (Image) pool.get(source);
try {
Image temp = Image.createImage(source);
pool.put(source, temp);
return temp;
} catch (IOException e){
System.err.println("Error al cargar la imagen en "+source+": "+e.getMessage());
}
return null;
}
}
So, whenever I need an image I first ask the pool for it, or just load it into the pool.
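Usage then looks just like the original example, only going through the pool:
Sprite example = new Sprite(ImageLoader.getSprite("/images/example.png"), w, h);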