Memory pressure loading a bunch of PNG images in MonoTouch - iOS

I have a MonoTouch application that loads a frame every 1/12th of a second. I'm using UIKit, not OpenGL. I have a UIImage in the view, and I'm loading the images with a background task.
Everything is OK, but after a minute (more or less) the application stops, and the trace gives me a 'low memory pressure' warning.
It works fine in the emulator, with no problems. I'm looking at the profiler and it seems the memory is disposed, but when I try it on the iPad... :(
I free the memory using image.Dispose(). I have 2 images in memory, showing one of them and then releasing the older one. This behaviour is OK, because I have the same logic on Windows Phone and it works fine.
I've tried not using the background task and loading the image directly from the main thread. That gives me more time! If I use the background task, the application runs for 30 seconds and then exits. If I do NOT use the background task, it lasts 1 minute.
I don't know what to do! Can anyone help me, please?
Thanks!!!
Texture2D.FromStream is only a wrapper for UIImage.FromFile.
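For context, such a wrapper might look roughly like this (a hypothetical sketch, not the poster's actual code; the Texture property and the unused graphicsDevice parameter are assumptions based on how the class is used below):

using System;
using MonoTouch.UIKit;

// Hypothetical sketch of a Texture2D that just wraps a UIImage loaded from disk.
public class Texture2D : IDisposable
{
    public UIImage Texture { get; private set; }

    public int Width  { get { return (int)Texture.Size.Width; } }
    public int Height { get { return (int)Texture.Size.Height; } }

    public static Texture2D FromStream(object graphicsDevice, string fileName)
    {
        // UIImage.FromFile maps to imageWithContentsOfFile:, so iOS does not cache it.
        return new Texture2D { Texture = UIImage.FromFile(fileName) };
    }

    public void Dispose()
    {
        if (Texture != null)
        {
            Texture.Dispose();
            Texture = null;
        }
    }
}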
This is the background worker:
void LoadTextureInBackground()
{
    if (currentMovie == null) return;

    DateTime timeOnstart = DateTime.Now;
    // We keep 2 textures in memory so we never load over the texture object currently in use.
    string fileName = currentMovie.GetFileName();
    if (lastFileName == fileName) return;

    textureLoaded[currentTexture] = Texture2D.FromStream(Device.GraphicsDevice, fileName);
    texture = textureLoaded[currentTexture];
    currentTexture = 1 - currentTexture;
    lastFileName = fileName;
    GC.Collect();
    System.Threading.Thread.Sleep(50);
}
This is the draw method:
/// <summary>
/// This is called when the game should draw itself.
/// </summary>
/// <param name="totalTime">Provides a snapshot of the total timing values.</param>
/// <param name="elapsedTime">Provides the time elapsed since the last draw.</param>
public void Draw(TimeSpan totalTime, TimeSpan elapsedTime)
{
    if (currentMovie == null) return;
    // We keep 2 textures in memory so we never load over the texture object currently in use.
    if (texture == null) return;

    int newWidth = (int)(texture.Width * RenderSize);
    int newHeight = (int)(texture.Height * RenderSize);
    Texture2D drawTexture = texture;

    // When loading with Texture2D.FromStream, the texture does NOT have premultiplied alpha.
    //Device.SpriteBatch.Draw(drawTexture, destination, Color.White);
    if (CultTravel.AppDelegate.ImagePng.Image != drawTexture.Texture)
    {
        AppDelegate.ImagePng.Image = drawTexture.Texture;
        AppDelegate.ImagePng.Frame = new System.Drawing.RectangleF(0, 480 - newHeight, ImagePng.Image.CGImage.Width, newHeight);

        // If the texture that is loaded and ready to show differs from the one being displayed, release the old one.
        if (lastTextureDraw != null && lastTextureDraw != texture)
        {
            lastTextureDraw.Dispose();
        }
        lastTextureDraw = texture;
    }
}
Note:
I've just solved the issue, BUT I had to disable the background worker, move the loading code into the Draw method, and call GC.Collect on the main thread:
if (lastTextureDraw != null && lastTextureDraw != texture)
{
    lastTextureDraw.Dispose();
    // I MUST ADD THIS GC.Collect AND DISABLE THE BACKGROUND TASK TO MAKE IT WORK!
    GC.Collect();
}

So, something goes wrong when working with threads...
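If you want to keep the background loader, one thing that may be worth trying (a sketch only, assuming the loading code lives in an NSObject subclass such as the view controller, so InvokeOnMainThread is available) is to marshal the disposal of the old image and the GC.Collect back onto the main thread, so UIKit never has an image released out from under it on a worker thread:

// Sketch: dispose the previous texture on the UI thread instead of the worker thread.
InvokeOnMainThread(() =>
{
    if (lastTextureDraw != null && lastTextureDraw != texture)
        lastTextureDraw.Dispose();   // release the old UIImage on the UI thread
    lastTextureDraw = texture;
    GC.Collect();                    // reclaim the managed wrappers promptly
});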

It works fine in the emulator, with no problems.
This is not an emulator but a simulator. As such it does not try to mimic a specific device, including its memory restrictions. In other words, the iOS simulator has access to much more memory than your iPad (1st gen 256MB, 2nd gen 512MB, 3rd gen 1GB).
I'm looking at the profiler
Which one? MonoDevelop's HeapShot or Apple's Instruments?
I free the memory using image.Dispose().
That's a good idea - but it can only work on the managed side. E.g. some APIs will cache images, and that memory won't show up in HeapShot since it's not managed memory (it's natively allocated memory). You'll have a better chance of tracking this using Apple's Instruments.
It could be something else too... which means, as @Jonathan.Peppers asked, that seeing how you're loading the images could help us see what's wrong.
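One concrete example of such a native cache: in MonoTouch, UIImage.FromBundle maps to imageNamed: (cached by iOS), while UIImage.FromFile maps to imageWithContentsOfFile: (not cached). A minimal sketch of swapping frames without feeding that cache (the helper and the path are illustrative, not the poster's code):

static void ShowFrame(UIImageView imageView, string path)
{
    UIImage previous = imageView.Image;

    // Not cached natively, so disposing it later really frees the bitmap.
    imageView.Image = UIImage.FromFile(path);

    if (previous != null)
        previous.Dispose();
}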

Related

BackgroundFetch in Codename One

I'm developing a Codename One app for iOS and I'm trying to use the BackgroundFetch interface.
I copied the sample code as it is written in Javadoc (https://www.codenameone.com/javadoc/com/codename1/background/BackgroundFetch.html) and I added the ios.background_modes=fetch build hint.
Launching the app on the simulator, the background operation is correctly executed.
Launching it on a real device (iPhone 7s, iOS 12.1.4), the behaviour is unpredictable. Despite setPreferredBackgroundFetchInterval(10), I noticed that almost every time I launch the app the background operation is not executed. Rarely, the background operation is executed, but the app must stay in the background for some minutes before resuming it, instead of the 10 seconds set through the setPreferredBackgroundFetchInterval(10) method.
The Display.isBackgroundFetchSupported() method returns true.
I don't understand how to make it reliable and predictable.
EDIT
I modified the sample code, only in the performBackgroundFetch() implementation (the Display.setPreferredBackgroundFetchInterval(10) is not changed). I just put some text in the label:
@Override
public void performBackgroundFetch(long deadline, Callback<Boolean> onComplete) {
    supported.setText("deadline: " + deadline + "; timeMillis: " + System.currentTimeMillis());
    onComplete.onSucess(Boolean.TRUE);
}
I observed two different behaviours on the simulator and on a real device.
In the simulator, the method is executed exactly 10 seconds after entering the paused state. On a real device, the method isn't executed 10 seconds after entering the paused state: in some cases it's executed after 20 minutes (in other cases it's not executed at all).
However, in both cases, I could calculate the difference between the deadline and the time when the method was executed: it's always 25 minutes.
As an example, you can see the following screenshot of the app (running on iPhone):
Deadline = 1560246881647
Timestamp = 1560245381647
Deadline - Timestamp = 1500000 ms = 1500 s = 25 minutes.
As I understand it, on iOS there is a limit of 30 seconds to perform a background fetch, otherwise the OS will kill the app. Moreover, Display.setPreferredBackgroundFetchInterval() is used to set the preferred time interval between background fetches, but it's not guaranteed, as iOS keeps control over when and whether background fetches are executed.
What is the right way to use background fetch?
Here is the complete code:
public class MyApplication implements BackgroundFetch{
private Form current;
private Resources theme;
List<Map> records;
Label supported;
// Container to hold the list of records.
Container recordsContainer;
public void init(Object context) {
theme = UIManager.initFirstTheme("/theme");
// Enable Toolbar on all Forms by default
Toolbar.setGlobalToolbar(true);
// Pro only feature, uncomment if you have a pro subscription
// Log.bindCrashProtection(true);
}
public void start() {
if(current != null){
// Make sure we update the records as we are coming in from the
// background.
updateRecords();
current.show();
return;
}
Display d = Display.getInstance();
// This call is necessary to initialize background fetch
d.setPreferredBackgroundFetchInterval(10);
Form hi = new Form("Background Fetch Demo");
hi.setLayout(new BoxLayout(BoxLayout.Y_AXIS));
supported = new Label();
if (d.isBackgroundFetchSupported()){
supported.setText("Background Fetch IS Supported");
} else {
supported.setText("Background Fetch is NOT Supported");
}
hi.addComponent(new Label("Records:"));
recordsContainer = new Container(new BoxLayout(BoxLayout.Y_AXIS));
//recordsContainer.setScrollableY(true);
hi.addComponent(recordsContainer);
hi.addComponent(supported);
updateRecords();
hi.show();
}
/**
* Update the UI with the records that are currently loaded.
*/
private void updateRecords() {
recordsContainer.removeAll();
if (records != null) {
for (Map m : records) {
recordsContainer.addComponent(new SpanLabel((String)m.get("title")));
}
} else {
recordsContainer.addComponent(new SpanLabel("Put the app in the background, wait 10 seconds, then open it again. The app should background fetch some data from the Slashdot RSS feed and show it here."));
}
if (Display.getInstance().getCurrent() != null) {
Display.getInstance().getCurrent().revalidate();
}
}
public void stop() {
current = Display.getInstance().getCurrent();
if(current instanceof Dialog) {
((Dialog)current).dispose();
current = Display.getInstance().getCurrent();
}
}
public void destroy() {
}
/**
* This method will be called in the background by the platform. It will
* load the RSS feed. Note: This only runs when the app is in the background.
* @param deadline
* @param onComplete
*/
@Override
public void performBackgroundFetch(long deadline, Callback<Boolean> onComplete) {
supported.setText("deadline: " + deadline + "; timeMillis: " + System.currentTimeMillis());
onComplete.onSucess(Boolean.TRUE);
}
}
The setPreferredBackgroundFetchInterval javadoc states:
Sets the preferred time interval between background fetches. This is only a preferred interval and is not guaranteed. Some platforms, like iOS, maintain sovereign control over when and if background fetches will be allowed. This number is used only as a guideline.

Preventing the camera from rotating in an iPad app using MvvmCross PictureChooser

I'm using Xamarin with MvvmCross to create an iPad application. In this application I use the PictureChooser plugin to take a picture with the camera. This all happens in the way that can be seen in the related YouTube video.
The code to accomplish this is fairly simple and can be found below. However, when testing this on the actual device, the camera view might be rotated.
private readonly IMvxPictureChooserTask _pictureChooserTask;

public CameraViewModel(IMvxPictureChooserTask pictureChooserTask)
{
    _pictureChooserTask = pictureChooserTask;
}

private IMvxPictureChooserTask PictureChooserTask { get { return _pictureChooserTask; } }

private void TakePicture()
{
    PictureChooserTask.TakePicture(400, 95,
        async (stream) =>
        {
            using (var memoryStream = new MemoryStream())
            {
                stream.CopyTo(memoryStream);
                var imageBytes = memoryStream.ToArray();
                if (imageBytes == null)
                    return;
                filePath = ProcessImage(imageBytes, FileName);
            }
        },
        () =>
        {
            /* no action - we don't do cancellation */
        }
    );
}
This leads to unwanted behavior. The camera should remain steady and be prevented from rotating within the app. I have tried a few things, like preventing the app from rotating in the overridden bool ShouldAutorotate method while in camera mode (see the sketch below), but unfortunately without any results.
Is there any setting that I forgot to set on the PictureChooser, or is the override method the item where I should perform some magic?
Thanks in advance.
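For reference, the ShouldAutorotate approach mentioned above would look roughly like this in a MonoTouch/Xamarin.iOS view controller (a sketch; the CameraView class name is illustrative, and whether these overrides reach the modal camera controller presented by the plugin is exactly what is in doubt):

public class CameraView : MvxViewController
{
    // Refuse autorotation while this controller is on screen.
    public override bool ShouldAutorotate()
    {
        return false;
    }

    // Report only one allowed orientation.
    public override UIInterfaceOrientationMask GetSupportedInterfaceOrientations()
    {
        return UIInterfaceOrientationMask.Portrait;
    }
}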
The answer to this question was raised in the comments by user3455363, many thanks for that! Eventually it turned out to be a bug in iOS 8. The iOS 8.1 upgrade fixed this issue in my app!

zxing Windows Phone 8 memory issue

I am developing a barcode scanner.
My application scans a barcode, then navigates to a second page (with the barcode text on it), and there's a button to scan a new barcode when it's clicked. My problem is that the viewfinder brush fills the RAM of the Windows Phone every time I scan a barcode. If I scan 100 barcodes, my application crashes because of an out-of-memory error. What can I do to free the memory?
The zxing demo has the same issue: when I open the scanner, use the back key, and open the scanner again several times, the RAM usage keeps getting higher and higher.
My code for reading the RAM usage:
string storage1 = (DeviceStatus.ApplicationCurrentMemoryUsage / 1000000).ToString() + "MB";
lblram.Text = storage1;
I tried everything:
1. NavigationService.RemoveBackStack
2. GC.Collect (garbage collector)
3. Disposing the camera
4. Using NavigationService.GoBack()
Nothing helps.
I found out that the problem is the viewfinder brush, but I have to initialize it every time, so that's a problem.
Code:
protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e)
{
    string bla1 = (DeviceStatus.ApplicationCurrentMemoryUsage / 1000000).ToString() + "MB";
    lblram.Text = bla1; //("Ram: " + Math.Round(((bla1 / 1024)/ 1024), 2));

    _matches = new ObservableCollection<string>();
    matchesList.ItemsSource = _matches;

    // Check to see if the camera is available on the phone.
    if (PhotoCamera.IsCameraTypeSupported(CameraType.Primary) == true)
    {
        _photoCamera = new PhotoCamera(CameraType.Primary);

        // Event is fired when the PhotoCamera object has been initialized.
        _photoCamera.Initialized += PhotoCameraOnInitialized;
        _photoCamera.AutoFocusCompleted += PhotoCameraOnAutoFocusCompleted;

        NavigationService.RemoveBackEntry();
        viewfinderBrush.SetSource(_photoCamera);
    }
    else
    {
        // The camera is not supported on the phone.
        MessageBox.Show("A Camera is not available on this phone.");
    }
}
What can I do to solve this problem?
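One thing the code above never does is tear the camera down when the page is left. A common pattern on Windows Phone 8 (a sketch, assuming the same _photoCamera field and handlers as above) is to unhook the events and dispose the PhotoCamera in OnNavigatedFrom, so the native resources behind the viewfinder brush are released before the next scan:

protected override void OnNavigatedFrom(System.Windows.Navigation.NavigationEventArgs e)
{
    if (_photoCamera != null)
    {
        // Unhook the handlers so the page and camera can be collected.
        _photoCamera.Initialized -= PhotoCameraOnInitialized;
        _photoCamera.AutoFocusCompleted -= PhotoCameraOnAutoFocusCompleted;

        // Release the native camera resources backing the viewfinder brush.
        _photoCamera.Dispose();
        _photoCamera = null;
    }

    base.OnNavigatedFrom(e);
}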

Is it possible to take photos programmatically without bringing the default camera application into the foreground in BlackBerry?

I am developing an application which makes use of the camera, and my app's requirement is:
I have to take photos from my "already running" background application, but I have to keep the camera application in the background. That means I want to capture a picture without disturbing the current foreground applications, and in addition without making any camera shutter sound.
Is this possible if we call the camera app through Invoke.invokeApplication(Invoke.APP_TYPE_CAMERA, new CameraArguments())?
Thanks to all of you, especially to @donturner. Sorry to come back to this post very late.
After some attempts I have found that we can't capture the snapshot without bringing the camera screen into the foreground.
1) If we put the camera screen into the background, OR
2) change the visibility of the VideoControl object to false...
_videoControl.setVisible(false);
then no bytes (a null value) will be received by...
byte[] imageBytes = _videoControl.getSnapshot(encoding);
So what's the use of this function while we are using the VideoControl class to capture a snapshot or video?
_videoControl.setVisible(true);
I have tried a different trick:
I start a thread just before bringing the camera screen into the foreground to turn off the backlight of the device.
Then, soon after that, I bring the camera screen into the foreground and capture the snapshot.
I have tested the above trick on BB Flip (4.6) and Storm (5.0) devices, and it takes snapshots successfully even when the device backlight is OFF.
But now I am stuck on another problem, and that is the camera SHUTTER sound. I have tried a lot but couldn't manage to mute it.
As @Michael Donohue suggested, I have tried:
private void muteThread() {
    Thread t = new Thread(new Runnable() {
        public void run() {
            try {
                int i = 0;
                while (i < 50) {
                    Audio.setVolume(0);
                    try {
                        Thread.sleep(50);
                    } catch (InterruptedException e) {
                    }
                    i++;
                }
                System.out.println("\n\n >>>>> End of Mute Thread. <<<< \n\n");
            } catch (Exception e) {
            }
        }
    });
    t.start();
}
But it is not working... If this can't be done, then how do those applications provide this facility, and what kind of functionality have they used?
http://appworld.blackberry.com/webstore/content/97648/?lang=en
http://appworld.blackberry.com/webstore/content/79083/?lang=en
You cannot take a photo programmatically without displaying the camera preview feed to the user, so at the very minimum you need to bring the camera preview surface into the UI foreground.

PrintCanvas3D won't work

I'm having some trouble trying to print graphics from Java3D: some computers (with Intel-based graphics cards) crash completely when printing. I get this exception:
javax.media.j3d.IllegalRenderingStateException: GL_VERSION
at javax.media.j3d.NativePipeline.createNewContext(Native Method)
at javax.media.j3d.NativePipeline.createNewContext(NativePipeline.java:2736)
at javax.media.j3d.Canvas3D.createNewContext(Canvas3D.java:4895)
at javax.media.j3d.Canvas3D.createNewContext(Canvas3D.java:2421)
at javax.media.j3d.Renderer.doWork(Renderer.java:895)
at javax.media.j3d.J3dThread.run(J3dThread.java:256)
DefaultRenderingErrorListener.errorOccurred:
CONTEXT_CREATION_ERROR: Renderer: Error creating Canvas3D graphics context
graphicsDevice = Win32GraphicsDevice[screen=0]
canvas = visualization.show3D.show.print.OffScreenCanvas3D[canvas0,0,0,3000x2167,invalid]
Java 3D ERROR : OpenGL 1.2 or better is required (GL_VERSION=1.1)
Java Result: 1
I know it says I have to upgrade to OpenGL 1.2, but after checking, I already have 1.5 installed (the error message is not accurate):
String glVersion = (String)getCanvas3D().queryProperties().get("native.version");
I tried to catch IllegalRenderingStateException but it doesn't work; the JVM just crashes in any case.
Does anyone know how to get a printing function to work on Intel-based graphics cards?
I found out the cause of my problem.
Some computers don't have the off-screen rendering support needed by PrintCanvas3D.java.
So I used Robot to create a screen capture:
public BufferedImage canvasCapture(Dimension size, Point locationOnScreen) {
    Rectangle bounds = new Rectangle(locationOnScreen.x, locationOnScreen.y, size.width, size.height);
    try {
        Robot robot = new Robot(this.getGraphicsConfiguration().getDevice());
        return robot.createScreenCapture(bounds);
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
}
The last tricky part was detecting when to switch from the proper printing method to the screen-capture method (since catching the raised exception doesn't work). After some searching I found out that queryProperties() could give me this information.
Here is the code in my Frame3D to choose the proper method:
Boolean OffScreenRenderingSupport = (Boolean) getCanvas3D().queryProperties().get("textureLodOffsetAvailable");
if (OffScreenRenderingSupport) {
    bImage = getOffScreenCanvas3D().doRender(dim.width, dim.height);
} else {
    bImage = getOffScreenCanvas3D().canvasCapture(getCanvas3D().getSize(), getCanvas3D().getLocationOnScreen());
}
If anyone can find a better way to handle this, please let me know ;)
