iOS real-time audio analysis battery life improvement - ios

I'm doing real-time audio analysis with the Accelerate framework, and my application consumes too much battery. How can I improve battery life without losing data in my analysis?

Related

The sound quality of slow playback using AVPlayer is not good enough even when using AVAudioTimePitchAlgorithmSpectral

In iOS, playback rate can be changed by setting AVPlayer.rate.
When AVPlayer.rate is set to 0.5, the playback becomes slow.
By default, the sound quality of playback at a 0.5 rate is terrible.
To increase the quality, you need to set AVPlayerItem.audioTimePitchAlgorithm.
According to the API documentation, setting AVPlayerItem.audioTimePitchAlgorithm to AVAudioTimePitchAlgorithmSpectral gives the highest quality.
The Swift code is:
playerItem.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithm.spectral // AVAudioTimePitchAlgorithmSpectral
AVAudioTimePitchAlgorithmSpectral improves the quality over the default.
But the sound quality with AVAudioTimePitchAlgorithmSpectral is still not good enough.
The sound still echoes and it is tiring to listen to.
In Apple's Podcasts app, when I set the playback speed to 1/2, the playback becomes slow and the sound quality is very high, with no echo at all.
I want my app to provide the same quality as Apple's Podcasts app.
Are there iOS APIs that give much higher sound quality than AVAudioTimePitchAlgorithmSpectral?
If not, why doesn't Apple provide them, even though they use this in their own Podcasts app?
Or should I use a third-party library?
Are there good libraries that are free or low cost and that many people use to change playback speed?
I've been searching and trying to learn AudioKit and Audio Units, and even considering purchasing a third-party time-stretch audio processing library, to fix the quality issue of slow playback for the last 3 weeks.
Now I have finally found a super easy solution.
AVPlayer can slow down audio with very good quality by setting AVPlayerItem.audioTimePitchAlgorithm
to AVAudioTimePitchAlgorithm.timeDomain instead of AVAudioTimePitchAlgorithm.spectral.
The documentation says:
timeDomain is a modest-quality pitch algorithm that is less computationally intensive. Suitable for voice.
This means spectral is suitable for music, while timeDomain is suitable for voice.
That's why the voice files my app uses were echoed.
And that's why the slowed-down audio quality in Apple's Podcasts app is so high.
It must also use this time-domain algorithm.
And that's why AudioKit, which seems to be developed for music use, plays voice audio with bad quality.
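For reference, a minimal sketch of that setting (the file path and variable names are placeholders, not from the original answer):

import AVFoundation

// Placeholder URL; point this at whatever voice recording the app actually plays.
let audioURL = URL(fileURLWithPath: "/path/to/voice-recording.m4a")

let playerItem = AVPlayerItem(url: audioURL)
playerItem.audioTimePitchAlgorithm = .timeDomain  // instead of .spectral

let player = AVPlayer(playerItem: playerItem)
player.rate = 0.5  // setting a non-zero rate starts playback, here at half speed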
I've encountered the same issues with increasing/decreasing speed while maintaining some level of quality. I couldn't get it to work well using Apple's APIs.
In the end I found that it's worth taking a look at this excellent 3rd party framework:
https://github.com/AudioKit/AudioKit
which allows you to do that and much more, in a straightforward manner.
Hope this helps

How long should audio samples be for music/speech discrimination?

I am working on a convolutional neural net which takes an audio spectrogram as input to discriminate between music and speech, using the GTZAN dataset.
If the individual samples are shorter, this gives more samples overall. But if the samples are too short, they may lack important features.
How much data is needed for recognizing if a piece of audio is music or speech?
How long should the audio samples be ideally?
The ideal audio length depends on a number of factors.
The basic idea is to use just enough samples.
Since audio changes constantly, it is preferable to work on shorter segments. However, a very small frame would capture few or no features.
On the other hand, a very long sample would capture too many features, leading to complexity.
So, in most use cases the audio length is around 25 seconds, but that is not a written rule and you may adjust it accordingly. Just make sure the frame size is not very small or very large.
Update for the dataset:
Check this link for a dataset of 30 s clips.
How much data is needed for recognizing if a piece of audio is music or speech?
If someone knew the answer to this question exactly then the problem would be solved already :)
But seriously, it depends on what your downstream application will be. Imagine trying to discriminate between speech with background music vs a cappella singing (hard) or classifying orchestral music vs audiobooks (easy).
How long should the audio samples be ideally?
Like everything in machine learning, it depends on the application. For you, I would say test with at least 10, 20, and 30 secs, or something like that. You are correct in that the spectral values can change rather drastically depending on the length!
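If it helps, one way to run that comparison is to slice the same recordings into clips of each candidate length and evaluate separately. The Swift sketch below is only illustrative; the helper name, sample rate, and durations are assumptions, not taken from the answer above:

import Foundation

// Split a mono sample buffer into fixed-length clips so that different
// clip durations (e.g. 10 s, 20 s, 30 s) can be compared experimentally.
func clips(from samples: [Float], sampleRate: Double, clipSeconds: Double) -> [[Float]] {
    let clipLength = Int(sampleRate * clipSeconds)
    guard clipLength > 0, samples.count >= clipLength else { return [] }
    return stride(from: 0, through: samples.count - clipLength, by: clipLength).map {
        Array(samples[$0 ..< $0 + clipLength])
    }
}

// Example: 3 minutes of silence at 22.05 kHz (GTZAN-style rate), three candidate durations.
let dummy = [Float](repeating: 0, count: 22_050 * 180)
for seconds in [10.0, 20.0, 30.0] {
    print("\(seconds)s clips:", clips(from: dummy, sampleRate: 22_050, clipSeconds: seconds).count)
}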

System performance and battery use when using the accelerometer and gyroscope

I’m working on a project that incorporates the use of the accelerometer and gyroscope.
In the specific app I'm working on, I turn the accelerometer and gyroscope on (e.g. .startGyroUpdatesToQueue) when required and off (e.g. .stopGyroUpdates()) when not needed, as Apple's documentation recommends.
However, I've noticed that there can be a slight delay when turning the accelerometer and gyroscope back on, which the user notices every now and then. So the preference is to keep the accelerometer and gyroscope always on so the user gets an uninterrupted experience.
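For context, the on/off pattern described above looks roughly like this in Swift (a sketch only; the 50 Hz update interval and the main queue are assumptions):

import CoreMotion

let motionManager = CMMotionManager()

func startMotionUpdatesIfNeeded() {
    guard motionManager.isGyroAvailable, !motionManager.isGyroActive else { return }
    motionManager.gyroUpdateInterval = 1.0 / 50.0  // only as fast as the feature needs
    motionManager.startGyroUpdates(to: .main) { data, _ in
        guard let data = data else { return }
        // consume data.rotationRate here
        _ = data.rotationRate
    }
}

func stopMotionUpdatesWhenIdle() {
    if motionManager.isGyroActive {
        motionManager.stopGyroUpdates()
    }
}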
Questions:
1 - How efficient are the accelerometer and gyroscope with respect to system performance and battery use when they are enabled in an app?
2 - Is there evidence/data on the system performance and battery use when the accelerometer and gyroscope are on?
3 - Is there a way to pause the accelerometer and gyroscope instead of completely turning them off?
Answering number 3 first, on modern iPhones (5S and later) the accelerometer is never really turned off and resides in a special motion coprocessor. On these devices, the energy cost for creating the data is constant, but getting the data is expensive. It requires a timer to routinely wake up the main processor, read the data, wake up your application and execute an event on one of your threads. The closest thing to what you're asking would be for a way to have the timer turned on but not have it feed into your app. There does not appear to be a way to do this and the energy savings would probably not be that great if there were.
With that in mind, 1 is going to be fairly subjective. The processor and your app are both going to spend more time running, but if you were already doing work on the CPU will it add that much? Similarly, if users only spend 5% of their time in a screen where you don't need the accelerometer versus 50% of their time, the overall energy impact of having it constantly on will be a lot less. That really brings us to the heart of the question, number 2.
If you want to see what energy costs are associated with constantly polling the accelerometer versus only turning it on when needed, you should profile your app. When debugging your app, you can view CPU, energy, and other impacts of your app directly in Xcode using the Debug Navigator (⌘6). This is explained in Apple's Energy Efficiency Guide for iOS Apps: Measure Energy Impact with Xcode. You can also get a more detailed analysis with Instruments. Apple provides full details in their Energy Efficiency Guide for iOS Apps: Measure Energy Impact with Instruments.
Using the above tools, you should be able to get a feel for how much more energy it will take to keep your accelerometer always on, and be able to make a reasoned decision about what to do.

User activity (static/running/walking/driving) based on CoreMotion data only

How can we detect whether the user is driving/walking/running/static with CoreMotion data?
We can get the user's activity on the iPhone 5s using CMMotionActivityManager. But how can we get it on older devices?
With the help of CLLocationManager I can get the device speed and decide the user's state based on that, but this drains the device's battery.
Is there any possibility of detecting the device state based on Core Motion only?
Some applications, like the Placeme app, do this: they detect user activity based on CoreMotion data.
It is a nice machine learning task. You need to
collect lots of data and annotate it (label each sample as driving/walking/running/static),
design a feature vector (see the sketch after this list),
then train an appropriate classifier.
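As a rough illustration of the feature-vector step only (the two window statistics below are an assumption for the sketch, not a recommendation from this answer):

import Foundation
import CoreMotion

// Toy feature vector computed over one window of accelerometer samples.
struct MotionFeatures {
    let meanMagnitude: Double
    let magnitudeVariance: Double
}

func features(for window: [CMAcceleration]) -> MotionFeatures? {
    guard !window.isEmpty else { return nil }
    let magnitudes = window.map { sqrt($0.x * $0.x + $0.y * $0.y + $0.z * $0.z) }
    let mean = magnitudes.reduce(0, +) / Double(magnitudes.count)
    let variance = magnitudes.reduce(0) { $0 + ($1 - mean) * ($1 - mean) } / Double(magnitudes.count)
    return MotionFeatures(meanMagnitude: mean, magnitudeVariance: variance)
}

// Each labelled window (static/walking/running/driving) becomes one feature vector
// that is fed to whatever classifier you train offline.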
The details really wouldn't fit here; I suggest googling "accelerometer activity recognition". In particular, among the first hits I find
Human Activity Recognition from Accelerometer Data Using a Wearable Device
Activity Recognition from Accelerometer Data
quite readable, relevant and useful.
The bad news is that it is more work to implement it than you probably think. Much more work. :(
In any case, I hope this answer helps a little bit.

What units of power/energy consumption do the Energy Levels in the Energy Instrument have for an iOS app?

I'm measuring the energy consumption of my iOS application with the Energy instrument. I want to know the unit (e.g., Joules) of the energy levels given by the Energy Instrument for an iOS app. Is there any relationship between the common energy unit (Joules) and those energy levels? Thanks in advance for your response!
Energy Diagnostics reports power consumption as a unitless number between 0 and 20; we call them "electricities" at my office. Powergremlin gives you some insight into the actual numbers that make up said "electricity" units.
It also depends on which standard the platform's calculations use, such as the British/American/SI standards. In general, in the European measurement system the energy unit is expressed in GWh/MWh/kWh, and in the British system it is BTU or ktoe. There are also online platforms to convert these units into each other.
