iOS: force thermal throttling

In our app we have a CPU-intensive algorithm whose quality has been difficult to assess from our production logs. Recently we added thermal state monitoring (nominal, serious, and critical) and have been able to correlate some performance issues with users' devices reaching the serious and critical thermal states.
While there are various ways to insert sleeps and waits to add artificial CPU load, it's not obvious how to fit this into our algorithm, and the scope of that investigation seems too large.
Is there a way to take a physical iOS device, connect it to Xcode/the debugger, trigger a throttling event, and actually get the device's CPU to respond and throttle our app's performance? That would give us a much more exact local reproduction.
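For context, the monitoring mentioned above is typically done with ProcessInfo's thermal state API. This is only a minimal sketch of the observation side (not a way to force a throttling event), and the log strings are placeholders:

```swift
import Foundation

// Minimal sketch of thermal-state monitoring via ProcessInfo (iOS 11+).
// States are .nominal, .fair, .serious and .critical.
func logThermalState() {
    switch ProcessInfo.processInfo.thermalState {
    case .nominal:  print("thermal: nominal")
    case .fair:     print("thermal: fair")
    case .serious:  print("thermal: serious - consider reducing work")
    case .critical: print("thermal: critical - throttle aggressively")
    @unknown default: print("thermal: unknown")
    }
}

// Observe changes so logs can be correlated with algorithm performance.
let observerToken = NotificationCenter.default.addObserver(
    forName: ProcessInfo.thermalStateDidChangeNotification,
    object: nil,
    queue: .main
) { _ in
    logThermalState()
}
// Keep `observerToken` alive for as long as you want to receive updates.
```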

Related

AWS CloudWatch CPUUtilization vs NETSNMP discrepancies

For a few weeks now, we've noticed that our ScienceLogic monitoring platform, which uses SNMP, is unable to detect CPU spikes that CloudWatch alarms are picking up. Both platforms are configured to poll every 5 minutes, but SL1 is not seeing any CPU spikes above ~20%. Our CloudWatch CPU alarm is set to fire at 90% and has gone off twice in the past 12 hours for this EC2 instance. I'm struggling to understand why.
I know that CloudWatch pulls the CPUUtilization metric directly from the hypervisor, but I can't imagine it would differ so much from the CPU percentage captured by SNMP. Anyone have any thoughts? I wonder if I'm just seeing a scaling issue in SNMP?
SL1: (screenshot)
CloudWatch: (screenshot)
I tried contacting ScienceLogic, and they asked me for the "formula" that AWS uses to capture this metric, which I'm not really sure I even understand, lol.
Monitors running inside versus outside a compute unit (here, a virtual machine) can observe different results, which I think is fairly normal.
The SNMP agent runs inside the VM, so its code execution is heavily affected by high-CPU events (its threads get blocked). You may have seen something similar on your own computer: when one application consumes all the CPU, other applications become slow and unresponsive.
CloudWatch's sensors, on the other hand, sit outside the VM and are almost never affected by events inside it.

Network usage in Windows 10 never going past 1%?

So I started making test streams on my YouTube channel using Streamlabs OBS.
I turned on performance mode and looked at the stream, but it was running at about 2 fps.
I looked at Task Manager and my network usage was NEVER going past 1%, and Streamlabs' network usage rarely went past 0.1 Mbps.
This happens with other things too, and I don't like it since it makes my internet so slow. Internet (if you're wondering): Verizon Fios, 5 GHz connection.
The percentage simply shows the relation between the current network usage and the link speed of your network adapter. For example, if your link speed is 1 Gbps and you're transferring data at 10 Mbps, the network usage will be 1%.
When transferring data over the Internet, the percentage is generally not very useful, because the maximum speed is defined by your ISP and, in your case, is likely to be a lot lower than your adapter speed. Furthermore, your Internet speed can also be degraded by a poor Wi-Fi signal, by other users in the same network, etc.
What you should actually be looking at is the real speed (in bits per second) at which you are sending data (see the "Send" field in the Performance tab of Task Manager) and compare that to your Internet speed (which you can determine with a speed test).
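To make the arithmetic concrete, here is the same calculation as a tiny sketch (in Swift; the 1 Gbps and 10 Mbps figures come from the example above, and the variable names are just for illustration):

```swift
// Task Manager's percentage is simply throughput relative to the adapter's link speed.
let linkSpeedMbps = 1_000.0   // 1 Gbps adapter link speed (from the example above)
let sendRateMbps  = 10.0      // current transfer rate

let usagePercent = sendRateMbps / linkSpeedMbps * 100
print("Network usage: \(usagePercent)%")   // prints 1.0%, even if 10 Mbps saturates the ISP uplink
```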

BlackBerry - GPS vs Geolocation - how to select the best?

Does the BlackBerry API provide any methods to determine which of GPS or Geolocation is better in the current situation (according to signal level, network bandwidth, and other environment properties)?
There are many, many different algorithms you could use to determine which location mode is optimal.
A well-tuned algorithm would have to account for things like:
how fast does your user need a location fix?
how accurate does the fix need to be? is it just being used to find nearby movie theatres, or is the fix used for navigation (which needs to be really accurate)?
which mobile carrier are you on? GPS results may be independent of the carrier, but other geolocation technologies will depend on the carrier and its infrastructure (assuming you're using the cellular network, and not Wi-Fi)
is there any reason to need to limit network transmissions (e.g. for a metered data plan, where you are frequently updating the location)?
how important is battery usage?
which BlackBerry OS versions are you targeting?
I'm sure I'm missing some other factors, but hopefully you can see that it's not a simple problem that can be solved without knowing something about your app and network deployment.
Also, this kind of algorithm for BlackBerry (Java) apps has traditionally taken a lot of work to optimize. As such, many developers (or clients) would consider this a closely-guarded business secret. So, it might be hard to find someone to publish their algorithm (but it doesn't hurt to ask, right?).
That said, you might at least take a look at the BlackBerry Simple Location API, which is an open source implementation of a basic algorithm that selects between GPS and Geolocation modes (if you allow it to use both). From the Javadocs (for the Optimal mode):
Operates in both Geolocation and GPS mode based on availability. First fix is computed in Geolocation mode, subsequent fixes in Standalone mode. However, if Standalone mode fails, falls back to Geolocation mode temporarily until a retry is attempted in Standalone mode after a certain waiting period has passed. See setRetryFactor(int).
For single fix calls to getLocation(int), Geolocation mode is used first, with a fallback to Standalone mode if Geolocation mode fails.
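To make that policy concrete, here is a rough sketch of the described fallback logic. It is written in Swift purely for illustration (the Simple Location API itself is Java), the type and property names below are invented rather than any BlackBerry API, and the retry is counted in attempts instead of the waiting period the Javadoc mentions:

```swift
// Illustrative sketch of the "Optimal" fallback policy quoted above.
// Names (LocationMode, OptimalModeSelector, retryFactor) are made up for clarity.
enum LocationMode { case geolocation, standalone }

final class OptimalModeSelector {
    private var fixCount = 0
    private var standaloneFailures = 0
    let retryFactor = 3   // how many fixes to wait before retrying Standalone

    /// Pick the mode for the next fix, given whether the last Standalone attempt failed.
    func nextMode(lastStandaloneFailed: Bool) -> LocationMode {
        defer { fixCount += 1 }

        // First fix is always computed in Geolocation mode.
        if fixCount == 0 { return .geolocation }

        if lastStandaloneFailed {
            standaloneFailures += 1
            // Fall back to Geolocation temporarily, retrying Standalone periodically.
            if standaloneFailures % retryFactor != 0 { return .geolocation }
        }
        return .standalone
    }
}
```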
I see you're in Belarus, but I don't know where your clients or users are. If they're in the US, you may also need to consider something like Nav Builder Inside for geolocation if your app will support the Verizon network.
Anyway, I know this probably isn't the answer you wanted, but maybe it's a start?

PubNub long polling vs sockets - mobile battery life

I recently began using PubNub in my iOS app, and am very happy with it. However, I have been looking at the other options available, such as Pusher and Realtime.co, which use Websockets. PubNub, on the other hand, uses long polling. I have done my own little speed comparisons and for my purposes, I find that they are all fast enough.
PubNub offers some nice features like message history and a list of everyone in the channel, so barring everything else I am leaning toward them. My question is: should I be concerned about battery life under heavy usage with a long-polling solution like PubNub? Would a Websockets solution be significantly more power efficient?
PubNub on Mobile with Battery Saving
As a preface regarding battery performance and efficiency: PubNub is a service optimized for mobile devices on the go, compared with alternative or self-hosted websocket solutions. PubNub offers a catch-up feature on mobile phones that will automatically redeliver messages that were missed, which matters especially for devices that are moving between cell towers and switching between 3G/4G and Wi-Fi. Websockets tend not to be recommended for mobile because of reliability issues in common scenarios, which is why PubNub selects the best transport for your device automatically, so you don't have to decide what makes the most sense for a phone in transit.
Battery Savings Pattern with PubNub
PubNub uses an uncommonly long keep-alive connection, set to one hour, and a ping is sent every 300 seconds (300,000 ms). This is long enough to provide a good balance between mobile performance and battery saving.
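For intuition about what that pattern looks like under the hood, here is a generic long-poll loop with a long request timeout, sketched with URLSession. This is not PubNub's SDK; the subscribe URL is a placeholder, and only the 300-second figure comes from the description above:

```swift
import Foundation

// Generic long-polling loop: hold one HTTP request open until the server
// answers (a new message or a timeout), then reconnect immediately.
// This is NOT PubNub's SDK, just the pattern it is built on.
func longPoll(url: URL, session: URLSession = .shared) {
    var request = URLRequest(url: url)
    request.timeoutInterval = 300   // server pings/responds within ~300 s

    session.dataTask(with: request) { data, _, error in
        if let data = data {
            print("received \(data.count) bytes")   // handle the pushed message here
        }
        // Re-subscribe right away so the channel stays connected.
        if error == nil { longPoll(url: url, session: session) }
    }.resume()
}

// Placeholder endpoint, for illustration only.
longPoll(url: URL(string: "https://example.com/subscribe/my-channel")!)
RunLoop.main.run()   // keep the demo process alive while polling
```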
Battery Saving Tips on Mobile
Keep messages as small as possible.
Send fewer messages, less frequently.
Connect to only one channel rather than two or more.
Automatic Transport Detection
PubNub will automatically select the best transport for you when needed, especially on mobile devices. An interesting talk about websockets was given at KRTConf in Portland, Oregon in October 2012, which I recommend: https://speakerdeck.com/3rdeden/realtimeconf-dot-oct-dot-2012
Let me know if this was helpful.
I don't think this is correct. See http://eon.businesswire.com/news/eon/20120927005429/en/Kaazing/HTML/HTML5
I am the one who actually did the testing for Kaazing comparing WebSocket and regular HTTP-based message transfers. I saw a drastic decrease in battery consumption with WebSocket. Kaazing has additional technology above and beyond WebSocket to reduce battery consumption, but even if you don't use Kaazing, you will still see some battery-consumption efficiencies with WebSocket. I tried this out myself by running actual tests, even for basic WS versus HTTP without any special battery-optimization algorithms.

Real-time audio conversation on iOS

I am designing an iOS app for a customer who wants to allow real-time conversations between users (a sort of TeamSpeak) with minimal lag, 50 ms at most. The lag must be low because the audio can also be live music played on instruments, so all the users need to stay in sync. I need a server that will request audio from every client and send it to the others (so they all hear the same sound at the same time).
HTTP is easy to manage/implement and easy to scale, but it performs poorly here because an average HTTP request takes > 50 ms (on mid-level hardware), so I was thinking of TCP or UDP connections kept open between the clients and the server.
But I have some questions:
If I develop the server in Python (using Twisted, for example), how is its performance?
I can't develop the server in C++ because it is hard to develop and to make scalable.
Has anyone used Node.js (which is easy to scale) to manage TCP/UDP connections?
If I use HTTP, will it be fast enough with keep-alive? The time required to perform an HTTP request is usually > 50 ms (because opening and closing the connection is expensive), and I want the whole procedure to take less than that.
The server will be running on a Linux machine.
And finally: what type of compression can you suggest? I thought Ogg Vorbis would be nice, but if there's anything better (that can be used on iOS), I am open to changes.
Thank you,
Umar.
First off, you are not going to get sub-50 ms latency. Others have tried this. See, for example, http://ejamming.com/, a service that attempts to do what you are doing, but has a musically noticeable delay over the line and is therefore, in the ears of many, completely unusable. They use special routing techniques to get the latency as low as possible, and last I heard their service doesn't work with some router configurations.
Secondly, which language you use on the server probably doesn't make much difference, as the delay from client to server will be worse than any delay caused by your service. But if I understand your service correctly, you are going to need a lot of servers (or server threads) just relaying audio data between clients or doing some sort of minimal mixing. This is a small amount of work per connection, but a lot of connections, so you need something that can handle that. I would lean towards something like Java, Scala, or maybe Go. I could be wrong, but I don't think this is a good use case for Node, which, as I understand it, does not do multithreading well at this time. Also, don't poo-poo C++; scalable services have been built in C++. You could also build the relay part of the service in C++ and the rest in whatever you like.
Third, when choosing a compression format, you'll have to choose one that can survive packet loss if you plan to use UDP, and I think UDP is the only way to go for this. I don't think Vorbis is up to the task, but I could be wrong. Off the top of my head, I'm not sure what works on the iPhone and is UDP-friendly, but I'm sure there are plenty of options. Speex is one example and is open source. I'm not sure whether its latency and quality meet your needs.
Finally, to be blunt, I think there are some other things you should research a bit more. For example, DNS is usually cached locally and not looked up on every HTTP call (though it may depend on the system/library; at least most systems cache DNS locally). Also, there is no such protocol as TCP/UDP. There is TCP/IP (sometimes just called TCP) and UDP/IP (sometimes just called UDP). You seem to refer to the two as if they were one. The difference is very important for what you are doing. For example, HTTP runs on top of TCP, not UDP, and UDP is considered "unreliable" but has less overhead, so it's good for streaming.
Edit: speex
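To illustrate the UDP suggestion on the client side, here is a minimal sketch using Apple's Network framework to push opaque audio frames over UDP. The host, port, and dummy frame are placeholders, and encoding (Speex, etc.) is out of scope:

```swift
import Network

// Minimal UDP sender with Apple's Network framework (iOS 12+).
// "audio.example.com" and port 4010 are placeholders for a real relay server.
let connection = NWConnection(host: "audio.example.com", port: 4010, using: .udp)
connection.stateUpdateHandler = { state in
    print("UDP connection state: \(state)")
}
connection.start(queue: .global(qos: .userInteractive))

func send(audioFrame: Data) {
    // UDP is fire-and-forget: no retransmission and no head-of-line blocking,
    // which is why it suits low-latency audio better than TCP.
    connection.send(content: audioFrame, completion: .contentProcessed { error in
        if let error = error { print("send failed: \(error)") }
    })
}

send(audioFrame: Data(repeating: 0, count: 160))   // one dummy 20 ms frame
```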
As far as the server is concerned, the request itself is not the bottleneck. I guess you have sufficient time to set up the connection, since that happens only at the beginning of the session; therefore the protocol is not of much relevance.
But consider that HTTP is a stateless protocol and not suitable for audio streaming. There are a couple of real-time streaming protocols you can choose from. All of them work over TCP or UDP (i.e., over raw sockets), and there are plenty of implementations.
In your case, the latency bottleneck is not the server but the network itself. The connection between an iOS device and a wireless access point (AP) eats up about 40 ms if the AP is not misconfigured and the connection is good. (Try pinging your iPhone.) In total, you'd have a minimum of 80 ms for the path iOS -> AP -> server -> AP -> iOS, and it is difficult to keep that latency stable. (Typical AirPlay latency on my local network is about 300 ms.)
I think live music over iOS devices is not practicable today. Try Skype between two iOS devices and see how close you can get to 50 ms. I'd bet no one can do significantly better as far as latency is concerned.
Update: New research result!
I have to revise my claims regarding the latency of the iDevice's Wi-Fi connection. Apparently, when you first ping your device, latency will be bad. But if I ping again no later than 200 ms after that, I see an average latency of 2-3 ms between the AP and the iDevice.
My interpretation is that if there is no communication between the AP and the iDevice for more than 200 ms, the iDevice's network adapter goes into a less responsive sleep mode, probably to save battery power.
So it seems, live music is within reach again... :-)
Update 2
The ping interval required to keep latency low apparently differs from device to device. The reported 200 ms is for a 3rd-generation iPad; for my iPhone 4 it's more like 50 ms.
While streaming audio you probably don't need to bother with this, as data is exchanged more frequently anyway. In my own context, I have sparse communication between an iDevice and a server, but low latency is crucial, so a keep-alive is the way to go.
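A minimal sketch of such a keep-alive, reusing the same kind of UDP connection as in the earlier snippet (host/port are placeholders again); the 50 ms interval is the iPhone 4 figure reported above and would need tuning per device:

```swift
import Foundation
import Network

// Placeholder UDP connection, as in the earlier sketch.
let connection = NWConnection(host: "audio.example.com", port: 4010, using: .udp)
connection.start(queue: .global())

// Periodically send a tiny datagram so the Wi-Fi radio never drops into
// its power-save mode between sparse application messages.
let keepAliveInterval: TimeInterval = 0.05   // ~50 ms, as observed for the iPhone 4

let keepAliveTimer = Timer.scheduledTimer(withTimeInterval: keepAliveInterval,
                                          repeats: true) { _ in
    connection.send(content: Data([0x00]),                    // 1-byte "ping"
                    completion: .contentProcessed { _ in })
}

RunLoop.main.run()   // keep the timer firing in a command-line/demo context
```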
Best, Peter
