A sample project can be found here:
https://github.com/PVoLan/TestActivityDispose
We have two activities. The first has a button leading to the second activity; the second activity has 30 TextViews (simulating a complex UI) and a back button.
Switching forward and back between the activities causes the GREF count to grow quickly. It takes about 60 forward-and-back clicks to overflow the 2k limit and crash the application.
The Android log can be found in the repository. As you can see from the log, the GREF overflow is caused mostly by TextViews (1543 GREFs). The other GREFs are:
Button (55 GREFs) - backButton, obviously
OnClickListenerImplementor (55 GREFs) - backButton.Click listeners
Activity2 (54 GREFs)
Intent (54 GREFs) - activity starters
So, as we can see, activity resources are not freed when the activity finishes (although OnDestroy is called).
How can I free all these GREFs properly?
The problem is that there are two GCs in the process (Dalvik & Mono), and neither knows how much memory the other is using. For example, all Mono sees for TextView instances is a really small object (largely an IntPtr and other supporting fields from Java.Lang.Object):
namespace Java.Lang {
    public class Object {
        IntPtr handle;
        // ...
    }
}

namespace Android.Widget {
    public class TextView : Java.Lang.Object {
        // ...
    }
}
That is, for most of the bound types, there are no data members of consequence, and the C# wrappers are quite tiny. Mono doesn't know -- and can't know -- that there's a Java object associated with Object.handle, and (more importantly) how much memory that object is referencing.
Consequently, you occasionally need to help it:
// https://github.com/PVoLan/TestActivityDispose/blob/master/Test/Activity2.cs
public class Activity2 {
    // ...

    protected override void OnDestroy ()
    {
        Android.Util.Log.Info("----------", "Destroy");
        base.OnDestroy ();
        GC.Collect ();
    }
}
The added GC.Collect() call will give Mono's GC a chance to execute and collect the garbage objects. After adding that line, repeatedly tapping "Hello World, Click Me!" and "Back" levels out at 93-126 grefs (depending on which activity you're on).
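If you want to release individual peers eagerly rather than waiting for a collection, another option is to dispose the wrappers explicitly: disposing a Java.Lang.Object subclass releases its GREF immediately. A rough sketch (assuming the activity keeps its TextViews in a field called textViews, which the sample project may not do) could look like this:

protected override void OnDestroy ()
{
    base.OnDestroy ();

    // Hypothetical field holding the 30 TextViews; adjust to however the
    // activity actually stores them.
    foreach (var tv in textViews) {
        // Disposing a Java.Lang.Object wrapper frees its global reference
        // (GREF) right away instead of waiting for the next GC.
        tv.Dispose ();
    }
    textViews.Clear ();

    GC.Collect ();
}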
In summary:
The cascading nature of the Cold Stream's transition from Inactive to Active is in itself an "alien" execution (alien to the reactive design) that MUST BE EXECUTED WITHIN THE SYNCHRONIZED REGION. This is unavoidable, and it goes against Item 79 of Effective Java.
Effective Java: Item 79:
"..., to avoid deadlock and data corruption, never call an alien
method from within a synchronized region. More generally, keep the
amount of work that you do from within synchronized to a minimum."
never call an alien method from within a synchronized region
An add(Consumer<T> observer) AND a remove(Consumer<T> observer) WILL BE concurrent (because of switchMaps that react to asynchronous changes in values/states), BUT according to Item 79, a subscribe(Publisher p) method should not even be able to exist.
Since a subscribe(publisher) MUST WORK as a callback function that reacts to additions and removals of observers...
private final Object lock = new Object();
private volatile BooleanConsumer suscriptor;

public void subscribe(Publisher p) {
    synchronized (lock) {
        suscriptor = isActive -> {
            if (isActive) p.add(this);
            else p.remove(this);
        };
    }
}

public void add(Consumer<T> observer) {
    synchronized (lock) {
        observers.add(observer);
        if (observers.size() > 0) suscriptor.accept(true);
    }
}
I would argue that using a volatile mediator is better than holding on to the Publisher directly, but in practice holding on to the publisher makes no difference at all, because by altering its state (when adding ourselves to the publisher) we are still triggering the functions within it (other possible subscriptions to publishers)! There really is no difference.
Doing it via indirection is the proper answer, and doing so is the main idea behind the separation of concerns principle!
Instead, what Item 79 asks is that each time an observer is added, we manually synchronize FROM THE OUTSIDE/ALIEN SIDE and deliberately check whether a subscription must be performed:
synchronized (alienLock) {
    observable.add(observer);
    if (observable.getObserverSize() > 0) {
        publisher.add(observable);
    }
}
and each time an observer is removed:
synchronized (alienLock) {
    observable.remove(observer);
    if (observable.getObserverSize() == 0) {
        publisher.remove(observable);
    }
}
Imagine those lines repeated EACH and EVERY TIME a node forks or joins onto a new one (in the reactive graph); it would be an insane amount of boilerplate, defeating the entire purpose.
Reading the item carefully, you can see that the rule is there to prevent "something" done wrong by the user from hanging the thread and blocking access.
This answer is partly me trying to justify why this is possible, but also why it is a non-issue in this case.
Binary Events.
In this case the event involves only 2 states: isActive == true || false.
This means that if consumption gets "hung" on true, the only other option that may be waiting is false. But even worse...
IF one of the two becomes deadlocked, it means the entire system is deadlocked anyway... in reality the issue is outside the design, not inside it.
What I mean is that, out of the 2 possible options (true or false), the time it takes for either of them to execute is meaningless, since the ONLY OTHER OPTION IS STILL REQUIRED TO WAIT regardless.
Enclosed functionality of the lock.
Even if subscribe(Publisher p) methods can be concatenated, the only thing the user has access to IS NOT THE lock per se, but the method.
So even if we are executing "alien methods" as functions inside our synchronized bodies, THOSE FUNCTIONS ARE STILL OURS, and we know what is inside them, how they work, and exactly what they will perform.
In this case the only uncertainty in the system is not what the functions will do, but HOW MANY CONCATENATIONS THE SYSTEM HAS.
What's wrong with my code:
Finally, the only thing (that I see) wrong is that observers and subscriptions MUST DEFINITELY WORK WITH SEPARATE LOCKS, because observers MUST NOT, under any circumstance, allow themselves to get locked while a subscription domino effect is taking place.
I believe that's all...
One of my old questions had to do with viewing PDF files in MonoTouch (I managed to accomplish this): Port of the iOS pdf viewer for xamarin
My issue is the following: if I close and open a PDF view (a view with a CATiledLayer) really fast and often, my app crashes with:
Got a SIGSEGV while executing native code. This usually indicates
a fatal error in the mono runtime or one of the native libraries
used by your application.
After researching around the internet for a few days, I found a post saying something along the lines of: the image backing store is being cleaned up and this is causing the error.
Edit:
OK, I have come to the conclusion that my app is cleaning up memory and my pointers are turning into nulls. I called GC.Collect() a couple of times and this seems to be the root of the problem.
I have removed all my calls to GC.Collect(), am currently running a stress test, and will update as I identify the issue.
After running some more tests, this is what I found out:
The error seems to originate from the TiledLayerDelegate : CALayerDelegate class.
The app only crashes if the Dispose method of CALayerDelegate is called; overriding the method with an empty body seems to prevent the app from crashing.
Running the app no longer seems to cause any issue whatsoever. It is apparent that something is going really wrong in the Dispose method of the CALayerDelegate.
Last finding: running the app like a monkey tends to heat up the phone a good bit. I assume this is due to the intensive rendering of PDF pages (they are huge sheets, about 4,000 x 3,000 px).
protected override void Dispose (bool disposing)
{
    try {
        view = null;
        GC.Collect (2);
        //base.Dispose (disposing);
    } catch (Exception e) {
        //System.Console.Write(e);
    }
}
Now, more than anything, I am just wondering if the phone heating up is, as I assume, nothing more than the CPU rendering the sheets and is normal. Does anyone have any ideas as to how best to deal with the Dispose override?
Last edit: for anyone wanting to prevent crashes, this is what my last version of the layer view class looks like.
public class TiledPdfView : UIView {

    CATiledLayer tiledLayer;

    public TiledPdfView (CGRect frame, float scale)
        : base (frame)
    {
        tiledLayer = Layer as CATiledLayer;
        tiledLayer.LevelsOfDetail = 4;
        tiledLayer.LevelsOfDetailBias = 4;
        tiledLayer.TileSize = new CGSize (1024, 1024);
        // here we still need to implement the delegate
        tiledLayer.Delegate = new TiledLayerDelegate (this);
        Scale = scale;
    }

    public CGPDFPage Page { get; set; }

    public float Scale { get; set; }

    public override void Draw (CGRect rect)
    {
        // empty (on purpose, so the delegate will draw)
    }

    [Export ("layerClass")]
    public static Class LayerClass ()
    {
        // instruct that we want a CATiledLayer (not the default CALayer) for the Layer property
        return new Class (typeof (CATiledLayer));
    }

    protected override void Dispose (bool disposing)
    {
        Cleanup ();
        base.Dispose (disposing);
    }

    private void Cleanup ()
    {
        InvokeOnMainThread (() => {
            tiledLayer.Delegate = null;
            this.RemoveFromSuperview ();
            this.tiledLayer.RemoveFromSuperLayer ();
        });
    }
}
Apple's sample code around that is not really great. Looking at the source of your tiled view I do not see a place where you set the layer delegate to nil. Under the hood, CATiledLayer creates a queue to call the tiled rendering in the background. This can lead to races and one way to work around this is explicitly nilling the delegate. Experiments showed that this can sometimes block, so expect some performance degradation. Yes, this is a bug and you should file a feedback - I did so years ago.
I'm working on a commercial PDF SDK (and we have a pretty popular Xamarin wrapper) and we moved away from CATiledLayer years ago. It's a relatively simple solution, but the nature of PDF is that to render a part, one has to traverse the whole render tree - it's not always easy to figure out what is on screen and what is not. Apple's renderer does an ok-ish job at that and performance is okay, but you'll get better performance if you render into one image and then move that around/re-render as the user scrolls. (Of course, this is trickier and harder to get right with memory, especially on retina screens.)
If you don't have the time to move away from CATiledLayer, some people go with the nuclear option and also manually remove the layer from the view. See e.g. this question for more details.
Universal App with MVVMLight.
So I started wondering why all the SDK examples were done from code behind rather than using a solid Wrapper class.
So I wanted to write a reusable wrapper class. No luck. Even tried adding that wrapper to a ViewModel, still no luck.
This works fine from MainView.xaml.cs:
IBandInfo[] pairedBands = BandClientManager.Instance.GetBandsAsync().Result;
if (pairedBands.Length > 0)
{
    using (IBandClient bandClient = await BandClientManager.Instance.ConnectAsync(pairedBands[0]))
    {
    }
}
The moment I move this to any kind of OOP or view model, ConnectAsync never returns or throws an exception. I have tried this 20 different ways; is the SDK broken? What is happening? No message, no throw, it just never returns.
If I throw it in code-behind, voilà, it works just fine and returns the client in half a second.
I have spent 5-6 hours on this so far. I wanted to create a solid wrapper class for the SDK so I could make easy calls from the model and do things like StartListener(MicrosoftBandSensor sensorToActivate).
Any suggestions?
-- For Phil's comment
I was trying to create backing variables for both the client and the band info, which would be held in a class that the VM uses. I made my class IDisposable so I could dispose of both when I was done with my wrapper. I may be using this wrong, to be honest.
MicrosoftBand.MicrosoftBandClient = BandClientManager.Instance.ConnectAsync(pairedBands[0]).Result;
is what I wanted to call, making it a sync call, since I wanted to make the calls for the band info and client in the constructor, then hold both until the class was destroyed and just reuse the variables when needed.
My VM has:
public BandInformation MicrosoftBand
{
    get { return _microsoftBand; }
    set { Set(() => MicrosoftBand, ref _microsoftBand, value); }
}
If they didn't pass the band client in the constructor, I would use:
private async Task InitBand(IBandInfo bandInfo)
{
    if (bandInfo == null)
    {
        var allBands = await BandClientManager.Instance.GetBandsAsync();
        if (allBands.Length > 0)
        {
            bandInfo = allBands[0];
        }
    }

    var bandClient = await BandClientManager.Instance.ConnectAsync(bandInfo);

    MicrosoftBandInfo = bandInfo;
    MicrosoftBandClient = bandClient;

    if (MicrosoftBandClient == null)
    {
        AddErrorMessage("This sample app requires a Microsoft Band paired to your device. Also make sure that you have the latest firmware installed on your Band, as provided by the latest Microsoft Health app.");
    }
}
This seems to work fine for the band info; I get back a solid, seemingly working object. For the client I get "thread exited" and nothing else.
Note: I had it in a throwaway try/catch version at one point and nothing threw an exception either.
I assume you can do this like you would any other IDisposable where you handle the disposing yourself.
I can re-instantiate the BandClient each time; I just figured I needed to detach the events at some point, meaning I had to keep hold of the band client. I could keep it until done and add and remove events as I needed each time.
It's likely your blocking call to .Result within your VM constructor is what was causing the hang. IBandClientManager.ConnectAsync() may implicitly display UI (a Windows Runtime dialog asking the user to confirm that she wants to use that specific Bluetooth device). If you've blocked the UI thread when it attempts to display UI, you've now gotten yourself into a deadlock.
Calling Task.Result is almost never a good idea, much less doing so within a constructor where you have little idea on which thread the constructor is executing. If you're working with an async API (such as the Band SDK) then your best bet is to keep that interaction async as well. Instead, defer calling ConnectAsync() until you actually need to, and do so from an async method in your VM. (Deferring the connection is a good idea anyway because you want to minimize the time connected to the Band to preserve battery life.) Then call Dispose() as early as possible to close the Bluetooth connection.
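For illustration, a minimal sketch of that deferred approach in a view model (only BandClientManager.Instance, GetBandsAsync() and ConnectAsync() come from the Band SDK; the ConnectToBandAsync/DisconnectFromBand names, the _bandClient field and AddErrorMessage are illustrative, not part of the SDK):

private IBandClient _bandClient;

public async Task ConnectToBandAsync()
{
    var pairedBands = await BandClientManager.Instance.GetBandsAsync();
    if (pairedBands.Length == 0)
    {
        AddErrorMessage("This app requires a Microsoft Band paired to your device.");
        return;
    }

    // Awaiting keeps the UI thread free, so the SDK can show its Bluetooth
    // consent dialog if it needs to.
    _bandClient = await BandClientManager.Instance.ConnectAsync(pairedBands[0]);
}

public void DisconnectFromBand()
{
    // Dispose as early as possible to close the Bluetooth connection.
    if (_bandClient != null)
    {
        _bandClient.Dispose();
        _bandClient = null;
    }
}

Call ConnectToBandAsync() from an async command or event handler right before you actually need the Band, rather than from the VM constructor.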
So I went and looked at a bunch of examples. Finally I landed on the GravityHeroUAP demo on the MSDN site. https://msdn.microsoft.com/en-us/magazine/mt573717.aspx?f=255&MSPPError=-2147217396
I looked at his code and the source: https://github.com/kevinash/GravityHeroUWP
He was essentially doing what I wanted to do.
However, I noticed something bizarre. In his view model, everything was static!
public static IBandInfo SelectedBand
{
    get { return BandModel._selectedBand; }
    set { BandModel._selectedBand = value; }
}

private static IBandClient _bandClient;
public static IBandClient BandClient
{
    get { return _bandClient; }
    set
    {
        _bandClient = value;
    }
}
I ended up copying this pattern (though I had to throw away my favorite MVVM lib in the process; I am sure I can get it back).
My common pattern in my VMs:
public string ExceptionOnStart {
    get { return _exceptionOnStart; }
    set { Set(() => ExceptionOnStart, ref _exceptionOnStart, value); }
}
It seems to be working now!
That, and I was getting data way too fast for the
await Windows.Storage.FileIO.AppendLinesAsync(dataFile, new List<string> { toWrite });
Thank you for the help Phil, it got me looking in the right direction!
Thank you very, very much. Spent WAY too long on this. Mark
I'm fairly new to writing BlackBerry applications, so maybe this is a stupid thing I'm overlooking. I have to use JDE 5 (client requirement) to support the older BlackBerry Curve 8520 phones.
What I am experiencing is that as soon as I place a DateField on my interface, the application slows down considerably, causing the UI to stutter. Even a simple layout that only has a single DateField and a button has the same effect. Then, as soon as I move on to the next layout, everything is fine again.
One of the layouts is created as follows (please comment if this is the incorrect way of doing it):
public void displaySomeLayout() {
    final ButtonField okButton = new ButtonField("OK");
    final DateField dobField = new DateField("Birthday", System.currentTimeMillis(), DateField.DATE);

    /* some other non-ui code */

    UiApplication.getUiApplication().invokeLater(new Runnable() {
        public void run() {
            applicationFieldManager.addAll(new Field[] {
                dobField,
                okButton
            });
        }
    });
}
The application then just slows down a lot. Sometimes, after a minute or so, it starts responding normally again; sometimes not.
The displaySomeLayout() method is called from the constructor of the class extending Screen, and applicationFieldManager is a private VerticalFieldManager which is instantiated during class construction.
I'm not sure the problem is in the code that you've shown us. I think it's somewhere else.
However, here are a couple recommendations to improve the code you've shown:
Threading
First of all, the code you show is essentially being run in the Screen subclass constructor. There is almost no difference between this code:
public MyScreen() {
    Field f = new ButtonField("Hello", ButtonField.CONSUME_CLICK);
    add(f);
}
and this:
public MyScreen() {
    addField();
}

private void addField() {
    Field f = new ButtonField("Hello", ButtonField.CONSUME_CLICK);
    add(f);
}
So, because your code is being run in the screen class's constructor, it should already be running on the UI thread. Therefore, there's no reason to use UiApplication.getUiApplication().invokeLater() here. Instead, just use this:
public void displaySomeLayout() {
    final ButtonField okButton = new ButtonField("OK");
    final DateField dobField = new DateField("Birthday", System.currentTimeMillis(), DateField.DATE);

    /* some other non-ui code */

    applicationFieldManager.add(dobField);
    applicationFieldManager.add(okButton);
}
Sometimes, you do need to use invokeLater() to run UI code, even when you're already on the UI thread. For example, if your code is inside the Manager#sublayout() method, which runs on the UI thread, adding new fields directly will trigger sublayout() to be called recursively, until you get a stack overflow. Using invokeLater() can help there, by deferring the running of a block of code until sublayout() has completed. But, from the constructor of your screen class, you don't need to do that.
ObjectChoiceField
I'm also worried about the ObjectChoiceField you said you were using with 250 choices. You might try testing this field with only 10 or 20 choices, and see if that makes a difference.
But, even if the 250 choice ObjectChoiceField isn't the cause of your performance problems, I would still suggest a different UI.
On BlackBerry Java, you can use the AutoCompleteField. This field can be given all the country choices that you are now using. The user starts typing the first couple letters of a country, and quickly, the list narrows to just those which match. I personally think this is a better way to get through a very large list of choices.
My use case is that whenever a user types something into an EditText, the input data is used to perform operations in the background. These operations might take long enough to cause an ANR. @TextChange together with @Background works fine if the operation finishes quickly enough. But if the operation takes long enough that the user inputs more data, I get threading issues, as there will be multiple background tasks that will all try to update the same UI component.
I think I can achieve the wanted behaviour with the AsyncTask API, but I wanted to look for AndroidAnnotations-based solutions as well, as the library simplifies the code a lot. Great lib, by the way.
Below are some code snippets that'll hopefully illustrate my point. Thanks for at least reading, comments/answers appreciated :)
@TextChange
void onUserInput(...) {
    // This will start a new thread on each text change event,
    // thus leading to a situation where the thread finishing
    // last will update the UI.
    // The operation time is not fixed, so the last event is
    // not necessarily the last thread that finishes.
    doOperation();
}

@Background
void doOperation() {
    // Sleep to simulate a long-running operation
    Thread.sleep(6000);
    updateUi();
}

@UiThread
void updateUi() {
    // Update text field etc. content based on the operations
}
UPDATE: This is not possible at the moment, see DayS' answer below.
It has been possible since AA 3.0. Read this thread.
@Override
protected void onStop() {
    super.onStop();
    boolean mayInterruptIfRunning = true;
    BackgroundExecutor.cancelAll("longtask", mayInterruptIfRunning);
}

@Background(id = "longtask")
public void doSomethingLong() {
    // ...
}
There already was this kind of request on Android Annotations but it was closed because no solution was proposed. But if you have any idea about it, go ahead and re-open this issue ;)