Eclipse-RCP disabling creation of navigation history - editor

I have the following problem: I am creating an editor whose EditorInput contains a pretty big object. After creating a couple of such editors I get an OutOfMemoryError. A heap memory analyzer showed that there are 3 objects of type EditorHistoryItem which take up around 80.8% of the heap space. (I think I had even closed the previous editors, but they are still in memory.)
I think those EditorHistoryItems are related to the navigation history that Eclipse builds. So, can I disable the navigation history? Or what would be another correct way to dispose of a large EditorInput or EditorPart without closing the editor?
Any advice would be much appreciated.

Well, make your object smaller. I'm pretty sure it's not necessary to load several megabytes of objects up front. Why not use a small object as your IEditorInput implementation when opening an editor, and load/unload additional data lazily based on user interaction (activating the editor part, switching tabs, pressing a button, ...)?
You have pretty good control over being notified when the user activates an editor, changes the page (when using a MultiPageEditor), or triggers other events where you can load or unload objects to minimize your heap footprint. There is absolutely no need to have a big IEditorInput object.
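For illustration, a minimal sketch of such a lightweight input might look like the following (modelId is a hypothetical identifier; the editor part would resolve it to the heavy object on demand, e.g. in createPartControl() or on activation, and release it again in dispose()):

    import org.eclipse.jface.resource.ImageDescriptor;
    import org.eclipse.ui.IEditorInput;
    import org.eclipse.ui.IPersistableElement;

    // Sketch: the input carries only an identifier, never the large model itself,
    // so closed editors and the navigation history cannot pin the big data in memory.
    public class LightweightEditorInput implements IEditorInput {

        private final String modelId; // hypothetical id of the real data

        public LightweightEditorInput(String modelId) {
            this.modelId = modelId;
        }

        public String getModelId() {
            return modelId;
        }

        public boolean exists() {
            return true;
        }

        public ImageDescriptor getImageDescriptor() {
            return ImageDescriptor.getMissingImageDescriptor();
        }

        public String getName() {
            return modelId;
        }

        public IPersistableElement getPersistable() {
            return null; // nothing to persist
        }

        public String getToolTipText() {
            return modelId;
        }

        public Object getAdapter(Class adapter) {
            return null;
        }

        // equals/hashCode matter: the workbench uses them to decide whether an
        // editor for this input is already open.
        public boolean equals(Object obj) {
            return obj instanceof LightweightEditorInput
                    && modelId.equals(((LightweightEditorInput) obj).modelId);
        }

        public int hashCode() {
            return modelId.hashCode();
        }
    }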

Depending on what editor functions you're using, Eclipse can have up to 5 copies of your IEditorInput in memory.
While this works great for < 1,000 line Java classes, you run out of memory when editing larger files.
Take a look at the source code for FileEditorInput, and see if you can write your own version implementing the IEditorInput interface that keeps most of your file on disk and just reads parts of the file.
Worst case, you'll have to write your own Eclipse editor.

Related

Delphi: in CreateForm, how to tell which component creation is slow

In my program, the creation of the main form is slow: I have identified that it hangs for around two seconds just before the form's OnCreate event is called. So I suspect this is happening while the components are created.
Since this form has several frames, I wonder if there is a way to "profile" component creation in order to see where I can improve. I suspect the lag comes from the opening of a database table that should not be open at that time (rather later, after some filtering is in place).
If there is a way to get an event triggered before/after each component creation, I could do the profiling myself (with CodeSite for example).
Or maybe it is possible to do the component creation manually?
Here is a quick and dirty way to work out where the delay is:
Take a copy of the Classes unit source code and place it in your project's source folder. This will ensure that this unit is compiled into your program rather than the one supplied with Delphi.
Modify the code in the constructor of TComponent. All streamed components pass through here during creation. Add code to log the class name, e.g. using CodeSite.
Run your program, and then inspect the resulting log to identify the delay.
If you have many components then just knowing the class might not narrow it down. You might inject logging code into TComponent.SetName instead, which will let you log the component's name. However, the basic idea is simple enough, and you should be able to apply it to your setting in order to find out the information you need.

Resampling large bitmaps for lists in Android using MVVM Cross

I have a long list of cells which each contain an image.
The images are large on disk, as they are used for other things in the app like wallpapers etc.
I am familiar with the normal Android process for resampling large bitmaps and disposing of them when no longer needed.
However, I feel like trying to resample the images on the fly in a list adapter would be inefficient without caching them once decoded; otherwise a fling would spawn many threads and I would have to manage cancelling unneeded images, etc.
The app is built making extensive use of the fantastic MVVMCross framework. I was thinking about using the MvxImageViews as these can load images from disk and cache them easily. The thing is, I need to resample them before they are cached.
My question is, does anybody know of an established pattern to do this in MVVMCross, or have any suggestions as to how I might go about achieving it? Do I need to customise the Download Cache plugin? Any suggestions would be great :)
OK, I think I have found my answer. I had been accidentally looking at the old MVVMCross 3.1 version of the DownloadCache Plugin / MvxLocalFileImageLoader.
After cloning the up-to-date (v3.5) repo I found that this functionality had been added. Local files are now cached and can be resampled on first load :)
The MvxImageView has a Max Height / Width setter method that propagates out to its MvxImageHelper, which in turn sends it to the MvxLocalFileImageLoader.
One thing to note is that the resampling only happens if you are loading from a file, not if you are using a resource id.
Source is here: https://github.com/MvvmCross/MvvmCross/blob/3.5/Plugins/Cirrious/DownloadCache/Cirrious.MvvmCross.Plugins.DownloadCache.Droid/MvxAndroidLocalFileImageLoader.cs
Once again MVVMCross saves my day ^_^
UPDATE:
Now I actually have it all working, here are some pointers:
As I noted in the comments, the local image caching is only currently available on the 3.5.2 alpha MVVMCross. This was incompatible with my project, so using 3.5.1 I created my own copies of the 3.5.2a MvxImageView, MvxImageHelper and MvxAndroidLocalFileImageLoader, along with their interfaces, and registered them in the Setup class.
I modified the MvxAndroidLocalFileImageLoader to also resample resources, not just files.
You have to bind to the MvxImageView's ImageUrl property using the "res:" prefix as documented here (Struggling to bind local images in an MvxImageView with MvvmCross); if you bind to 'DrawableId' this assigns the image directly to the underlying ImageView and no caching / resampling happens.
I needed to be able to set the customised MvxImageview's Max Height / Width for resampling after the layout was inflated/bound, but before the images were retrieved (I wanted to set them during 'OnMeasure', but the images had already been loaded by then). There is probably a better way but I hacked in a bool flag 'SizeSet'. The image url is temporarily stored if this is false (i.e. during the initial binding). Once this is set to true (after OnMeasure), the stored url is passed to the underlying ImageHelper to be loaded.
One section of the app uses full screen images as the background of fragments in a pager adapter. The bitmaps are not getting garbage collected quickly enough, leading to eventual OOMs when trying to load the next large image. Manually calling GC.Collect() when the fragments are destroyed frees up the memory, but it causes a UI stutter and also wipes the cache as it uses weak refs.
I was getting frequent SIGSEGV crashes on Lollipop when moving between fragments in the pager adapter (they never happened on KitKat). I managed to work around the issue by adding a SetImageBitmap(null) to the ImageView's Dispose method. I then call Dispose() on the ImageView in its containing fragment's OnDestroyView().
Hope this helps someone, as it took me a while!
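The resampling being discussed here is, at the Android level, the standard two-pass BitmapFactory technique: decode only the bounds first, compute an inSampleSize, then decode the downsampled bitmap. MvvmCross itself is C#/Xamarin, so the plain-Android Java sketch below is only an illustration of that underlying idea (BitmapSampler and its method are made-up names), not the plugin's actual code:

    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;

    // Illustration only: the classic Android two-pass decode used to avoid loading
    // a full-size bitmap when a much smaller one is enough for a list cell.
    public final class BitmapSampler {

        // Decode a file, downsampled so the result still covers reqWidth x reqHeight.
        public static Bitmap decodeSampledFile(String path, int reqWidth, int reqHeight) {
            // Pass 1: read only the image bounds.
            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inJustDecodeBounds = true;
            BitmapFactory.decodeFile(path, options);

            // Pick the largest power-of-two sample size that keeps both dimensions
            // at or above the requested size.
            int inSampleSize = 1;
            while (options.outWidth / (inSampleSize * 2) >= reqWidth
                    && options.outHeight / (inSampleSize * 2) >= reqHeight) {
                inSampleSize *= 2;
            }

            // Pass 2: decode the downsampled bitmap.
            options.inJustDecodeBounds = false;
            options.inSampleSize = inSampleSize;
            return BitmapFactory.decodeFile(path, options);
        }
    }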

Understanding file mapping

I'm trying to understand mmap and was given the following link to read:
http://duartes.org/gustavo/blog/post/page-cache-the-affair-between-memory-and-files
I understand the text in general and it makes sense to me. But at the end there is a paragraph which I don't really understand, or which doesn't fit my understanding.
The read-only page table entries shown above do not mean the mapping is read only, they’re merely a kernel trick to share physical memory until the last possible moment. You can see how ‘private’ is a bit of a misnomer until you remember it only applies to updates. A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from. Once copy-on-write is done, changes by others are no longer seen. This behavior is not guaranteed by the kernel, but it’s what you get in x86 and makes sense from an API perspective. By contrast, a shared mapping is simply mapped onto the page cache and that’s it. Updates are visible to other processes and end up in the disk. Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
The following two lines don't make sense to me.
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
It is private. So it can't see changes by others!
Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
I don't know what the author means by this. Is there a flag like "MAP_READ_ONLY"? Until a write occurs, every pointer from the program's virtual pages to the page-table entries in the page cache is read-only.
Can you help me understand these two lines?
Thanks
Update
It seems I got it, with some help.
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
Although a mapping is private, the virtual page really can see changes made by others, until the process itself modifies a page. From that point on, the modified page is private and is only visible to the program that wrote it.
Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
I'm told that pages themselves can also have permissions (read/write/execute).
Tell me if I'm wrong.
This fragment:
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
is telling you that the kernel cheats a little bit in the name of optimization. Even though you've asked for a private mapping, the kernel will actually give you a shared one at first. Then, if you write the page, it becomes private.
Observe that this "cheating" doesn't matter (doesn't make any difference) if all processes which are accessing the file are doing it with MAP_PRIVATE, because no actual changes to the file will ever occur in that case. Different processes' mappings will simply be upgraded from "fake cheating MAP_PRIVATE" to true "MAP_PRIVATE" at different times according to whenever each process first writes to the file. This is probably a common scenario. It's only if the file is being concurrently updated by other means (MAP_SHARED with PROT_WRITE or else regular, non-mmap I/O operations) that it makes a difference.
I'm told that pages themselves can also have permissions (read/write/execute).
Sure, they can. You have to ask for the permissions you want when you initially map the file, in fact: the third argument to mmap, which will be a combination of PROT_READ, PROT_WRITE, PROT_EXEC, and PROT_NONE.
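The article is about the C mmap API, but purely as a hedged illustration, the same distinction between a private copy-on-write mapping and a read-only mapping can also be observed from Java NIO, whose MapMode.PRIVATE and MapMode.READ_ONLY roughly correspond to MAP_PRIVATE and to mapping without write permission. The sketch below assumes a small, non-empty file named data.bin:

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class MappingModes {
        public static void main(String[] args) throws IOException {
            // PRIVATE mode requires the channel to be open for reading and writing.
            try (FileChannel ch = FileChannel.open(Path.of("data.bin"),
                    StandardOpenOption.READ, StandardOpenOption.WRITE)) {

                long size = ch.size();

                // MapMode.PRIVATE ~ MAP_PRIVATE: copy-on-write. Reads initially hit the
                // shared page cache; the first write copies the affected page.
                MappedByteBuffer priv = ch.map(FileChannel.MapMode.PRIVATE, 0, size);

                // MapMode.READ_ONLY ~ a mapping without write permission. Writing through
                // this buffer fails (the JVM throws ReadOnlyBufferException up front,
                // rather than letting the write reach memory and fault as in C).
                MappedByteBuffer ro = ch.map(FileChannel.MapMode.READ_ONLY, 0, size);

                byte original = ro.get(0);
                priv.put(0, (byte) (original + 1)); // copy-on-write happens here

                // The private change is not visible through the read-only mapping,
                // and it never reaches the file on disk.
                System.out.println("file byte:    " + ro.get(0));
                System.out.println("private byte: " + priv.get(0));
            }
        }
    }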

How can I keep a large amount of OutputDebugString() calls from degrading my application in the Delphi 6 IDE?

This has happened to me on more than one occasion and has led to many lost hours chasing a ghost. As is typical, when I am debugging some really difficult timing-related code I start adding tons of OutputDebugString() calls so I can get a good picture of the sequence of related operations. The problem is, the Delphi 6 IDE seems to be able to handle that situation for only so long. I'll use a concrete example I just went through to avoid generalities (as much as possible).
I spent several days debugging my inter-thread semaphore locking code along with my DirectShow timestamp calculation code that was causing some deeply frustrating problems. After having eliminated every bug I could think of, I still was having a problem with Skype, which my application sends audio to.
After about 10 seconds, the delay between my talking and hearing my voice come out of Skype on the second PC I was using for testing (the far end of the call) started to grow. At around 20-30 seconds the delay started to grow exponentially, and at that point it triggered code I have that checks whether a critical section is being held too long.
Fortunately it wasn't too late at night, and having been through this before, I decided to stop relentlessly tracing and turned off the majority of the OutputDebugString() calls. Thankfully I had most of them wrapped in a conditional compiler define, so it was easy to do. The instant I did this the problems went away, and it turned out my code was working fine.
So it looks like the Delphi 6 IDE starts to really bog down when the amount of OutputDebugString() traffic is above some threshold. Perhaps it's just the task of adding strings to the Event Log debugger pane, which holds all the OutputDebugString() reports. I don't know, but I have seen similar problems in my applications when a TMemo or similar control starts to contain too many strings.
What have those of you out there done to prevent this? Is there a way of clearing the Event Log via some method call or at least a way of limiting its size? Also, what techniques do you use via conditional defines, IDE plug-ins, or whatever, to cope with this situation?
A similar problem happened to me before with Delphi 2007. Disable event viewing in the IDE and instead use DebugView from Sysinternals.
I hardly ever use OutputDebugString. I find it hard to analyze the output in the IDE and it takes extra effort to keep several sets of multiple runs.
I really prefer a good logging component suite (CodeSite, SmartInspect) and usually log to various files. Standard files for example are "General", "Debug" (standard debug info that I want to collect from a client installation as well), "Configuration", "Services", "Clients". These are all set up to "overflow" to a set of numbered files, which allows you to keep the logs of several runs by simply allowing more numbered files. Comparing log info from different runs becomes a whole lot easier that way.
In the situation you describe I would add debug statements that log to a separate logfile. For example "Trace". The code to make "Trace" available is between conditional defines. That makes turning it on pretty simple.
To avoid leaving in these extra debug statements, I tend to make the changes to turn on the "Trace" log without checking it out from source control. That way, the compiler of the build server will throw out "identifier not defined" errors on any statements unintentionally left in. If I want to keep these extra statements I either change them to go to the "Debug" log, or put them between conditional defines.
The first thing I would do is make certain that the problem is what you think it is. It has been a long time since I've used Delphi, so I'm not sure about the IDE limitations, but I'm a bit skeptical that the event log will start bogging down exponentially over time with the same number of debug strings being written in a period of 20-30 seconds. It seems more likely that the number of debug strings being written is increasing over time for some reason, which could indicate a bug in your application control flow that is just not as obvious with the logging disabled.
To be sure I would try writing a simple application that just runs in a loop writing out debug strings in chunks of 100 or so, and start recording the time it takes for each chunk, and see if the time starts to increase as significantly over a 20-30 second timespan.
If you do verify that this is the problem - or even if it's not - then I would recommend using some type of logging library instead. OutputDebugString really loses its effectiveness when you use it for massive log dumps like that. Even if you do find a way to reset or limit the output window, you'd be losing all of that logging data.
IDE Fix Pack has an optimisation to improve performance of OutputDebugString
The IDE's Debug Log View also got an optimization. The debugger now updates the Log View only when the IDE is idle. This allows the IDE to stay responsive when hundreds of OutputDebugString messages or other debug messages are written to the Debug Log View.
Note that this only runs on Delphi 2007 and above.

How might I find out the source of long delays on resizing the main form?

I have a D2006 app that contains a page control and various grids, etc on the tabs. When I resize the main form (which ripples through and resizes just about everything on the form that is aligned to something), I experience long delays, like several seconds. The app freezes, the idle handler is not called and running threads appear to suspend also.
I have tried pausing execution in the IDE while this is happening, in an attempt to break execution while it is in the troublesome code, but the IDE is not processing messages.
Obviously I'm not expecting anyone to point me at some errant piece of code, but I'm after debugging approaches that might help me. I have extensive execution timing code throughout the app, and the long delays don't show up in any of the data. For example, the execution time of the main form OnResize handler is minimal.
If you want to find out what's actually taking up your time, try a profiler. Sampling Profiler could answer your question pretty easily, especially if you're able to find the beginning and the end of the section of code that's causing trouble and insert OutputDebugString statements around it to narrow down the profiling.
OK. Problem solved. I noticed that the problem only occurred when I had command-line switches enabled to log some debug info. The debug info included some HTTP responses that were written to a debug log (a TMemo) on one of the tabs. When the HTTP response included a large block with no CR/LFs the TMemo wrapped it. Whenever I resized the main form, the TMemo resized and the control had to render the text again with the new word wrapping.
To demonstrate:
start a new Delphi project
drop a TMemo onto the form
align it to Client
compile and run
paste a large amount of text into the TMemo
resize the main form
I won't award myself the answer, as I hadn't really provided enough info for anybody else to solve it.
BTW @Mason - would SamplingProfiler have picked this one up, given that the execution is inside the VCL and not in my code?
A brute-force approach that may give results.... Put a debug message to OutputDebugString() from every re-size event, sending the name of the control as the string to be displayed. This may show you which ones are being called "a lot".
You may have a situation where controls are bumping each other, setting off cascading re-size events. Like 3 siblings in the back seat of a compact car, once they start jostling for position, it can take a while for them to "settle down".
Don't make me turn this car around....
The debug log (viewable in the IDE, or with an external ODS viewer), may show you which ones are causing the most trouble, if they appear multiple times for one "user-initiated re-size event".
Run your application in AQTime's performance profiler (included with XE, but you can get a time-limited version from their website).
Do some fanatic resizing for a while, and then stop the application.
After that, you'll see exactly which function was called many times, and where most time was spent.
