WebGL loads very slowly on iPhone, and UGUI items are squeezed into the center - iOS

I built a WebGL player and host it on an IIS server. Android loads the WebGL build with no problem, but
if I use an iPhone to load it, it takes a very long time or gets stuck at about 90%.
If I do manage to load the WebGL build on an iPhone, all the UI items are squeezed into the center.
Does Unity WebGL not support iPhone? Am I doing something wrong or missing something?

If I understand correctly, WebGL is very slow on iOS or sometimes crashes, right?
Well, the bad news is that Unity says that mobile browsers aren't supported at all.
See this answer
They recommend using the WASM export instead of asm.js, and personally I think you should keep the app as small as possible. Reduce the amount of data to a minimum. Also switch off as many built-in packages as you can using the Package Manager.
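If it helps, one concrete place to trim packages is the Packages/manifest.json file in the project; editing it has the same effect as removing packages in the Package Manager window. The sketch below is only illustrative: the exact package names and version numbers depend on your Unity version, so treat them as placeholders rather than a recommended set.

{
  "dependencies": {
    "com.unity.ugui": "1.0.0",
    "com.unity.modules.ui": "1.0.0",
    "com.unity.modules.imageconversion": "1.0.0"
  }
}

Everything the project does not actually use (analytics, ads, XR plugins, and so on) can be dropped from this list, which tends to shrink the WebGL build.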

Related

Enforce use of independent flip mode with DXGI FLIP SwapChain

I currently face a problem with DXGI Swapchains (DirectX 11). My C++ application shows (live) video and my goal is to minimize latency. I have no user input to process.
In order to decrease latency I switched to a DXGI_SWAP_EFFECT_FLIP_DISCARD swapchain (I used BitBlt before - see "For best performance, use DXGI flip model" for further details). I use the following flags:
// Swapchain init: flip-model swap effect plus waitable-object and tearing flags
sc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;
sc.Flags = DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT | DXGI_SWAP_CHAIN_FLAG_ALLOW_TEARING;
// Present: sync interval 0 (no vsync wait), tearing allowed
m_pDXGISwapChain->Present(0, DXGI_PRESENT_ALLOW_TEARING);
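Since the goal is low latency, it may be worth adding that the DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT flag is normally paired with a wait on the swapchain's latency handle before each frame. A minimal sketch, assuming the swapchain can be queried for IDXGISwapChain2 (variable names here are illustrative, not from the original code):

// Requires <dxgi1_3.h> and <wrl/client.h>.
// One-time setup after creating the swap chain with the waitable-object flag:
Microsoft::WRL::ComPtr<IDXGISwapChain2> swapChain2;
m_pDXGISwapChain->QueryInterface(IID_PPV_ARGS(&swapChain2));
swapChain2->SetMaximumFrameLatency(1);                              // queue at most one frame
HANDLE frameLatencyWaitableObject = swapChain2->GetFrameLatencyWaitableObject();

// Per frame, before rendering:
WaitForSingleObjectEx(frameLatencyWaitableObject, 1000, TRUE);      // block until the swap chain is ready
// ... render ...
m_pDXGISwapChain->Present(0, DXGI_PRESENT_ALLOW_TEARING);

This does not by itself force the "Hardware: Independent Flip" path, but it keeps the present queue short, which is the usual reason for requesting the waitable object in the first place.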
On one computer the swapchain (windowed) goes into the "Hardware: Independent Flip" mode and I have perfectly low latency, as long as I have no other windows in front. On all other computers I tried, I am stuck in the "Composed: Flip" mode with higher latency (~ 25-30 ms more). The software (binary) is exactly the same. I am checking this with PresentMon.
What I find interesting is that on the computer where the independent flip mode works, it is also active without the ALLOW_TEARING flags, even though from what I understood they should be required for it. By the way, I also see tearing in this case, but that is a separate problem.
I already tried comparing Windows 10 versions, graphics drivers, and driver settings. The GPU is a Quadro RTX 4000 on all systems. I couldn't spot any difference between the systems.
I would really appreciate any hints on additional preconditions for the independent flip mode I might have missed in the docs. Thanks for your help!
Update1: I updated the "working" system from Nvidia driver 511.09 to 473.47 (latest stable). After that I got the same behavior as on the other systems (no independent flip). After going back to 511.09 it worked again. So the driver seems to have an influence. The other systems also had 511.09 for my original tests, though.
Update2: After working through all the DirectX debug output, it still does not work as desired. I only manage to get into independent flip mode in real full-screen mode, or in windowed mode when the window has no decorations and actually covers the whole screen. Unfortunately, when using the Graphics Tools for VS I never enter independent flip mode and cannot do further analysis there. But it is interesting that when running under the Graphics Tools debugger, PresentMon shows Composed Flip, while the Graphics Analyzer from the Graphics Tools shows only DISCARD as the SwapEffect for the swapchain. I would have expected FLIP_DISCARD, as I explicitly used DXGI_SWAP_EFFECT_FLIP_DISCARD.
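For reference, the windowed configuration that Update2 describes as working (no decorations, covering the whole screen) can be set up with a plain Win32 borderless window. This is a generic sketch, not code from the application in question; the window-class name is made up, and error handling is omitted:

#include <windows.h>

// Borderless window sized to the primary monitor, so DWM can hand the swap chain
// buffers directly to scanout (one of the usual preconditions for independent flip).
HWND CreateBorderlessFullscreenWindow()
{
    WNDCLASSW wc = {};
    wc.lpfnWndProc   = DefWindowProcW;
    wc.hInstance     = GetModuleHandleW(nullptr);
    wc.lpszClassName = L"BorderlessVideoWindow";   // illustrative name
    RegisterClassW(&wc);

    return CreateWindowExW(
        0, wc.lpszClassName, L"Video", WS_POPUP | WS_VISIBLE,
        0, 0, GetSystemMetrics(SM_CXSCREEN), GetSystemMetrics(SM_CYSCREEN),
        nullptr, nullptr, wc.hInstance, nullptr);
}

The swapchain buffers generally also have to match the display resolution, and nothing may overlap the window, which matches the observation above that independent flip is lost as soon as another window is in front.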

Memory limit issue with screen casting using the broadcast extension and WebRTC protocol on iOS

This is my first question posted on stackoverflow.
I'm trying to make a screen-cast app using a BroadcastExtension and the WebRTC protocol. But the broadcast extension's memory limit (50 MB) is so tight that if the application tries to send the original video (886 x 1918 at 30 fps) without any processing, it dies immediately after receiving a memory usage warning. After lowering the resolution and frame rate of the video, there is no problem. Investigating the application with the profiler does not show any memory leaks. I guess it is because of the frames allocated during the encoding process inside the WebRTC framework.
So my question is: is it possible to send the original video using WebRTC without any other processing, such as downscaling or lowering the frame rate?
It is possible.
I forgot to mention it in the question, but the library I used is Google WebRTC. I made two mistakes: one was building the modified framework in debug mode, and the other was using a software encoder (the default is VP8). Because of this, it seems that processing of the video frames was delayed and they accumulated in memory. The DefaultEncoderFactory basically provides an encoder that runs in software (at least on iOS; Android seems to pick hardware-based encoders/decoders automatically). Fortunately, the iOS version of the Google WebRTC framework supports the H264 hardware encoder (EncoderFactoryH264). In other cases you have to implement it yourself.
However, when transmitting with H264, there is the problem that some platforms cannot play the stream, for example Android. The Google WebRTC group seems to be aware of this problem, but at least it seems to me that it has not been properly resolved. Additional work is needed to solve this.

UIImagePickerController with source type Camera and allowsEditing set to YES causes a “Terminated due to memory pressure” in iOS 7? [duplicate]

I am working on an iOS app in Xcode. Earlier I got it to start and run, up to a limited level of functionality. Then there were compilation failures claiming that untouched boilerplate generated code had syntax errors. Copying the source code into a new project produces a different problem.
Right now I can compile and start running, but it reports, before the launch image even shows up, that the application was closed due to memory pressure. The total size of the visual assets is around 272 MB, which could be optimized somewhat without hurting graphical richness, and is so far the only part of the program expected to be large. (The assets may or may not be kept in memory; for instance, every current loading image is populated and my code never accesses any loading image programmatically.) And it crashes before the loading image has itself loaded.
How can I address this memory issue? I may be able to slim down the way images are handled, but I suspect there is another root cause. Or is this simply excessive memory consumption?
Thanks,
Review the Performance Tuning section of Apple's iOS Programming documentation, and use Apple's Instruments application to determine how, when, and how much memory your app is using.
One approach you should consider is to remove the graphics resources from your application and add them back one by one, once you feel they meet the requirements and limitations of iOS.
Now, this part of my answer is opinion: it sounds like your app is at high risk of being rejected from the App Store, if that is your intended destination for it.

OpenGL ES apps appear to run MUCH FASTER when profiling in Instruments

I'm scared to ask this question because it doesn't include specifics and doesn't have any code samples, but that's because I've encountered it on three entirely different apps that I've worked on in the past few weeks, and I'm thinking specific code might just cloud the issue.
Scoured the web and found no reference to the phenomenon I'm encountering, so I'm just going to throw this out there and hope someone else has seen the same thing:
The 'problem' is that all the iOS OpenGL apps I've built, to a man, run MUCH FASTER when I'm profiling them in Instruments than when they're running standalone. As in, a frame rate roughly twice as fast (jumping from, e.g., 30 fps to 60 fps). This is measured both with a code-timing loop and by watching the apps run. Instruments appears to be doing something magical.
This is on a device, not the iOS simulator.
If I profile my OpenGL apps and run them on a device (specifically, an iPad 3 running iOS 5.1) via Instruments, the frame rate is just flat-out much, much faster than running standalone. There appears to be no frame skipping or shenanigans like that. It simply does the same computation at around twice the speed.
Although I'm not including any code samples, just assume I'm doing the normal stuff. OpenGL ES 2.0, with VBOs and VAOs. Multithreading some computationally intensive code areas with dispatch queues/blocks. Nothing exotic or crazy.
I'd just like to know if anyone has experienced anything vaguely similar. If not, I'll just head back to my burrow and continue stabbing myself in the leg with a fork.
It could be that when you profile, a release build is used (by default), whereas a debug build is used when you just hit Run.

Super slow Image processing on Android tablet

I am trying to implement the SLIC superpixel algorithm on an Android tablet (SLIC).
I ported the C++ code to work in the Android environment using the STL library and so on. What the application does is take an image from the camera and send the data to native code for processing.
I got the app running, but the problem is that it takes 20-30 seconds to process a single frame (640 x 400), while on my notebook the Visual Studio build finishes almost instantly!
I checked for memory leaks and there aren't any... Is there anything that might make the computation so much more expensive than the VS2010 build on my notebook?
I know this question might be very open and not really specific, but I'm really in the dark too. I hope you guys can help.
Thanks
PS. I checked the running time of each step; it seems that the execution time of every line of code just went up. I don't see any specific function that takes much longer than usual.
PPS. Do you think any of the following may be causing the slowdown?
Memory size: investigated; during the native call the GC does not show much pause time.
STL library: not investigated yet. Is it possible that functions like vector, max, and min in the STL cause a significant slowdown?
The Android environment itself?
The lower hardware specification of the Android tablet (Acer Iconia Tab: 1 GHz Nvidia Tegra 250 dual-core processor with 1 GB of RAM)?
Would it be better to run it in Java?
PPPS. If you have time, please check out the code.
I've taken a look at your code and can make the following recommendations:
First of all, you need to add the line APP_ABI := armeabi-v7a to your Application.mk file. Otherwise your code is compiled for the old armv5te architecture, where you have no FPU at all (all floating-point arithmetic is emulated), fewer registers available, and so on. A sample file is sketched below.
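For reference, a minimal Application.mk along those lines might look like the following. The APP_STL value is an assumption (pick whichever STL the project actually uses), and APP_OPTIM := release is an extra suggestion of mine, not part of the original answer:

# Application.mk
APP_ABI   := armeabi-v7a     # target armv7-a so the hardware FPU (VFP/NEON) is used
APP_STL   := gnustl_static   # assumed; match the STL your project already links
APP_OPTIM := release         # make sure the native code is compiled with optimizations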
Your SLIC implementation makes intensive use of double floating-point values for its computations. You should replace them with float wherever possible, because ARM still lacks proper hardware support for the double type.
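To illustrate the second point, the change is essentially to keep the working type in the inner loops as float. This is a generic sketch of a SLIC-style distance computation, not the actual ported code:

// Single-precision distance between a pixel and a cluster center in labxy space.
// Using float instead of double keeps the math on the fast VFP/NEON path on armv7.
static inline float LabXyDistance(float l1, float a1, float b1, float x1, float y1,
                                  float l2, float a2, float b2, float x2, float y2,
                                  float invSpatialWeight)
{
    float dc = (l1 - l2) * (l1 - l2) + (a1 - a2) * (a1 - a2) + (b1 - b2) * (b1 - b2);
    float ds = (x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2);
    return dc + ds * invSpatialWeight;   // all intermediates stay in float
}

Literals should also be written as float (0.5f instead of 0.5), otherwise the compiler silently promotes the whole expression back to double.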
