iOS App - Device begins to heat up after a certain interval of time

We have used WebRTC & ARKit to implement video communication in our iOS app. Using RTCVideoCapturer, we send customized frames to WebRTC. Before sending each frame via WebRTC, we first capture a screenshot of the view, then get a CGImage from it and generate a pixel buffer.
This works perfectly, but after a certain interval of time the device starts to heat up. The increased CPU usage causes the heating problem.
What modifications can be made to reduce CPU usage?
(Screenshot: CPU Utilization & Thermal State Changes)
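For context, here is a minimal Swift sketch of the pipeline described above, assuming the GoogleWebRTC ObjC framework's RTCVideoCapturer, RTCCVPixelBuffer, and RTCVideoFrame types; the class and method names are illustrative, not the app's actual code. The per-frame view rendering in step 2 is typically where the CPU time goes, so reducing how often it runs (or feeding ARKit's ARFrame.capturedImage pixel buffer directly instead of rendering the view) is the first thing to profile.

import UIKit
import WebRTC  // assumption: the GoogleWebRTC ObjC framework is the binding in use

// Sketch only: render a UIView into a CVPixelBuffer and hand it to WebRTC as an
// RTCVideoFrame. The per-frame view rendering (step 2) is usually where the CPU
// time goes, so throttling how often this runs is the first lever to pull.
final class ViewFrameCapturer: RTCVideoCapturer {

    func captureFrame(from view: UIView) {
        let size = view.bounds.size

        // 1. Create a BGRA pixel buffer matching the view size (ignores screen scale for brevity).
        var pixelBuffer: CVPixelBuffer?
        let attrs = [
            kCVPixelBufferCGImageCompatibilityKey as String: true,
            kCVPixelBufferCGBitmapContextCompatibilityKey as String: true
        ] as CFDictionary
        guard CVPixelBufferCreate(kCFAllocatorDefault,
                                  Int(size.width), Int(size.height),
                                  kCVPixelFormatType_32BGRA,
                                  attrs,
                                  &pixelBuffer) == kCVReturnSuccess,
              let buffer = pixelBuffer else { return }

        // 2. Draw the view's layer into the buffer. This is the expensive part.
        CVPixelBufferLockBaseAddress(buffer, [])
        if let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                   width: Int(size.width),
                                   height: Int(size.height),
                                   bitsPerComponent: 8,
                                   bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                   space: CGColorSpaceCreateDeviceRGB(),
                                   bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                       | CGBitmapInfo.byteOrder32Little.rawValue) {
            // Flip so the layer renders right-side up in the bitmap.
            context.translateBy(x: 0, y: size.height)
            context.scaleBy(x: 1, y: -1)
            view.layer.render(in: context)
        }
        CVPixelBufferUnlockBaseAddress(buffer, [])

        // 3. Wrap the buffer and deliver it to the capturer's delegate (e.g. an RTCVideoSource).
        let rtcBuffer = RTCCVPixelBuffer(pixelBuffer: buffer)
        let timeNs = Int64(CACurrentMediaTime() * Double(NSEC_PER_SEC))
        let frame = RTCVideoFrame(buffer: rtcBuffer, rotation: ._0, timeStampNs: timeNs)
        delegate?.capturer(self, didCapture: frame)
    }
}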

Related

Prevent decreasing video quality in Twilio

Twilio video quality (resolution, height/width of the video) decreases while connected and never improves.
Test conditions
Devices: a PC and a mobile app
The mobile device sends video and the PC receives it
Connected for several minutes without entering idle mode
The mobile device is sometimes moved
Network condition: stable, in an office
Result
The video quality becomes worse: the height and width of the video decrease.
You can see that the frame width and frame height have become smaller while the frame rate stays the same.
In our conditions the network is stable, so there is no reason for the quality to decrease.
If it does get worse, I would like it to improve again once conditions become better.
Also, if conditions become worse, it would be better to decrease the frame rate rather than the resolution of the video.
How can I configure it that way?
Current implementation with Flutter:
final connectOptions = ConnectOptions(
  token,
  roomName: _channelName,
  preferredAudioCodecs: [OpusCodec()],
  audioTracks: [LocalAudioTrack(true)],
  dataTracks: [LocalDataTrack()],
  videoTracks: [LocalVideoTrack(true, _cameraCapturer)],
  enableNetworkQuality: true,
  networkQualityConfiguration: NetworkQualityConfiguration(
    remote: NetworkQualityVerbosity.NETWORK_QUALITY_VERBOSITY_MINIMAL,
  ),
  enableDominantSpeaker: true,
);
_room = await TwilioProgrammableVideo.connect(connectOptions);
Twilio developer evangelist here.
Also, if conditions become worse, it would be better to decrease the frame rate rather than the resolution of the video.
This is not necessarily true. From the Twilio docs on developing high quality video applications:
Frame-rate and resolution are the two main capture constraints that affect video fidelity. When the video source is a camera showing people or moving objects, typically the perceptual quality is better at higher frame-rate. However, for screen-sharing, the resolution is typically more relevant.
To my knowledge, if a network connection is able to support higher-resolution video, then WebRTC and Twilio Video will send the resolution that can be supported. However, the network connection is not the only variable when it comes to resolution: CPU usage is also a factor, and it matters particularly on mobile devices. Continuing from the docs:
You should try to set resolution and frame-rate to the minimum value required by your use-case. Over-dimensioning resolution and frame-rate will have a negative impact on the CPU and network consumption and may increase latency. In addition, remember that the resolution and frame-rate you specify as capture constraints are just hints for the client video engine. The actual resolution and frame-rate may decrease if CPU overuse is detected or if the network capacity is not enough for the required traffic.
I see you have included the network quality API in your code. I'd recommend taking readings from that, as well as investigating CPU profiling in your application to see if you can get the resolution stable on your device.
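Not Twilio-specific, but on the iOS side one cheap reading to pair with the network-quality events is the system thermal state. Here is a hypothetical Swift helper (the class name and logging are illustrative) that uses ProcessInfo's thermal-state notification; a sustained serious/critical state while the resolution drops points at CPU pressure rather than the network.

import Foundation

// Illustrative helper: log thermal-state changes so they can be correlated with
// the Twilio network-quality readings and any observed resolution drops.
final class ThermalStateLogger {
    private var observer: NSObjectProtocol?

    func start() {
        observer = NotificationCenter.default.addObserver(
            forName: ProcessInfo.thermalStateDidChangeNotification,
            object: nil,
            queue: .main
        ) { _ in
            // rawValue: 0 = nominal, 1 = fair, 2 = serious, 3 = critical
            print("Thermal state changed: \(ProcessInfo.processInfo.thermalState.rawValue)")
        }
    }

    func stop() {
        if let observer = observer {
            NotificationCenter.default.removeObserver(observer)
        }
    }
}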

Why does a Print Screen differ from what is actually displayed on the monitor?

I'm working on an application that screen-captures a monitor in real time, encodes it, sends it over Ethernet, decodes it, then displays that monitor in an application.
So I put the decoder application on the same monitor that is being captured. I then open a timer application and put it next to the decoder application. I can then start the timer and see the latency between the main instance of the timer and the timer shown within the decoder application.
What's weird is that if I take a picture of the monitor with a camera, I get one latency measurement (almost always ~100 ms), but if I take a Print Screen of the monitor, the latency between the two is much lower (~30-60 ms).
Why is that? How does Print Screen work? Why would it result in 40+ ms difference? Which latency measurement should I trust?
Print Screen saves the screenshot to your clipboard, which lives in RAM (the fastest storage in your computer), whereas what you are doing probably writes the screenshot data to your HDD/SSD and then reads it back before sending it over the network, which takes much longer.

Reducing battery usage in a SpriteKit game

I've finished developing a card game called Up and Down the River in SpriteKit. It is a fairly simple card game with a few animations such as the action of dealing and playing a card.
According to the debugger tools, it generally shows Very High energy utilization and averages near 170 wakes per second (shown below).
What is typical for a SpriteKit game? Should a simple card game be using this much energy? If not, what should I be looking for in order to reduce the energy usage?
Note: This is being run on macOS; however, the game is cross-platform (iOS and macOS), and I get similar results running on an iOS device.
When SpriteKit is running, it is constantly updating the screen (usually at 60 frames per second).
If you do not need this high speed, you can reduce it to 30, 20, or fewer frames per second by setting preferredFramesPerSecond on the SKView; see https://developer.apple.com/reference/spritekit/skview
If your game is completely static while waiting for user input, you can even set isPaused on the SKView to stop updates completely while you are waiting, as in the sketch below.
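As a rough Swift sketch of those two settings (assuming skView is the SKView presenting the game; the function names are illustrative):

import SpriteKit

// Lower the render rate for a game that does not need 60 fps.
func configureForLowEnergy(_ skView: SKView) {
    skView.preferredFramesPerSecond = 30   // default is 60; 20-30 is plenty for a card game
}

// Stop the render/update loop entirely while waiting for the player.
func waitForPlayerInput(_ skView: SKView) {
    skView.isPaused = true
}

// Resume normal updates once input arrives.
func resumeAfterInput(_ skView: SKView) {
    skView.isPaused = false
}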

How fast can I send a UIImage between an iPhone and Apple Watch, with watchOS 2?

I'm building an Apple watchOS 2 app which is continuously animated with generated images.
Because these can't be bundled with the app, they're generated in the InterfaceController and then set to display on the watch like so:
self.imageGroup?.setBackgroundImage(self.image)
Until this point, I've been generating these at a rate of 1 image per second, which feels fairly safe but obviously gives a very low frame rate of 1 fps. Now I'm wondering how much this could be improved.
I measured the speed at which the UIImages themselves are generated, which is a fairly low 0.017 seconds. The size of these images is fairly consistent, too, at about 10,000 bytes. If there were no further delay, that would give me a much more acceptable performance of about 58 fps.
My question is: is there a typical speed at which Bluetooth communicates with my phone, which I could compare to that image size to determine a realistic frame rate?
Or, since I presume that calling setBackgroundImage doesn't block the main thread, is there a way I can find out how long it actually takes for the image to be set?
Apple doesn't document this speed because so much of it depends on connection strength. And since a user doesn't need to have the watch and phone right next to each other, the further away they are (or the more objects there are between the phone and watch), the slower the transfer will be.
Your images are 10 KB and you want to send 58 images per second, so 580 KB or 0.58 MB per second. That amount of data doesn't sound unrealistic (though it will be a battery drain). However, each network call between the two devices has some overhead. Do these images need to be sent in real time? If not, you would likely get better performance if you could delay for 1-2 seconds initially and then batch a group of 58 images together, which you would then animate on the watch, as sketched below. You would only have one network call every second, which is far more manageable for the devices than 58 calls per second.
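A hypothetical Swift sketch of that batching idea on the watch side, assuming imageGroup is the WKInterfaceGroup from the question and generateImage stands in for the existing per-frame generator (both names and the frame count are illustrative):

import WatchKit
import UIKit

// Build one second's worth of frames, combine them into a single animated
// UIImage, set it once, and let the watch animate it locally instead of
// receiving and setting many separate images.
func playBatchedSecond(on imageGroup: WKInterfaceGroup,
                       frameCount: Int = 30,
                       generateImage: (Int) -> UIImage) {
    let frames = (0..<frameCount).map { generateImage($0) }

    if let animated = UIImage.animatedImage(with: frames, duration: 1.0) {
        imageGroup.setBackgroundImage(animated)
        imageGroup.startAnimatingWithImages(in: NSRange(location: 0, length: frames.count),
                                            duration: 1.0,
                                            repeatCount: 1)
    }
}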

Still pin image capture response time

I have a problem with the response time when capturing an image from the still pin.
I have a system running a video stream at 320x480 from the capture pin, and I push a button to snap a 720p image from the still pin at the same time. But the response time is too long, around 6 seconds, which is not desirable.
I have tried other video capture software that supports snapping a picture while streaming video, and the response time is similar.
I am wondering whether this is a hardware problem or a software problem, and how still pin capture actually works. Does the still image come from interpolation, or does the hardware change resolution?
For example, the camera starts with one resolution set, keeps sensing, and pushes the data to the buffer over USB. Is it possible for it to immediately change to another resolution set and snap an image? Is this why the system is taking the picture slowly?
Or, is there a way to keep the video streaming at a high frame rate and snap a high-resolution image immediately, with no interpolation?
I am working on a project that also snaps an image from the video stream. The technology I use is DirectShow, and the response time is not as long as yours. In my experience, the response time has nothing to do with the streaming frame rate.
Usually a camera has its own default resolution. It is impossible for it to immediately change to another resolution set and snap an image, so that is not the reason.
Could you show me some of your code, and your camera's model?
