I am currently working with the MPMoviePlayerController and am analysing metrics for video playback. Specifically, analysing adaptive bitrates.
As part of testing I load a particular rendition of the video at a fixed bitrate (995 kbps); however, when reading the observedBitrate property of my MPMovieAccessLogEvent, the value is far higher, around 15 Mbps.
Is there any known reason why the bitrate being returned is so much higher than that of the playback? I have double-checked all the values and the playback itself, and it is definitely the observedBitrate that is inflated.
According to the documentation, this value is:
The empirical throughput across all media downloaded for the movie
player, in bits per second.
Update
I posted this question on the developer forums and have received an answer, which is still just conjecture, but I thought it might aid the question anyway and maybe provoke a better answer.
https://devforums.apple.com/thread/216659?tstart=0
It would be worth checking your HLS video with mediastreamvalidator
which will download and measure your segment bit rates.
There is a simple answer to this: the indicatedBitrate of an MPMovieAccessLogEvent (or AVPlayerItemAccessLogEvent for AVPlayer) is the bitrate advertised by the current playlist, i.e. the average bitrate required to play the stream.
However, the observedBitrate is NOT averaged - it is the instantaneous bitrate (or download speed) which the player achieved while downloading a particular chunk of video.
Example: you are playing a playlist with a 1000 Kb/s stream, in chunks of 10 seconds each, so each chunk holds about 10,000 Kb of data. Over WiFi the device can download at, say, 10 Mb/s (10,000 Kb/s), so each chunk takes about 1 second to fetch, and while it is being fetched the player is pulling data at roughly 10,000 Kb/s.
I'd expect the player to return (approximately) these values:
indicatedBitrate: 1000 Kb/s
observedBitrate: 10,000 Kb/s
I'd been mystified by these large values myself, but I think this explains it.
This is just for illustration - the exact values are not very meaningful, since we don't really know how long it takes to download a chunk, or indeed how big each chunk is. All the observedBitrate really tells you is how well the player is keeping up with the bitrate needed to play the stream. If the observedBitrate is 10x the indicatedBitrate, the player is only using 10% of the available time to download each chunk. This ratio can be used as a quality-of-service indicator.
For example, if the observedBitrate is less than the indicatedBitrate then it is very likely that the player will stall due to buffering, but as long as it is greater, then all is well and the stream is likely to play smoothly.
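To see the two values (and the headroom ratio) for a live stream, a minimal Swift sketch using the AVPlayer flavour of the API might look like this; `item` is assumed to be an AVPlayerItem that is already playing an HLS stream:

    import AVFoundation

    // Sketch: log indicated vs. observed bitrate whenever the player item appends
    // a new access log entry. Keep the returned token to stop observing later.
    func observeAccessLog(for item: AVPlayerItem) -> NSObjectProtocol {
        return NotificationCenter.default.addObserver(
            forName: .AVPlayerItemNewAccessLogEntry,
            object: item,
            queue: .main
        ) { _ in
            guard let event = item.accessLog()?.events.last,
                  event.indicatedBitrate > 0 else { return }

            let indicated = event.indicatedBitrate   // bitrate advertised by the current variant
            let observed  = event.observedBitrate    // measured segment download throughput

            // Ratio > 1 means segments download faster than real time;
            // < 1 suggests the player is likely to stall.
            print("indicated: \(indicated) bps, observed: \(observed) bps, headroom: \(observed / indicated)x")
        }
    }

Logging that ratio over time gives a rough quality-of-service trace along the lines described above.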
Related
Is there any good solution of iOS AVPlayer to let user choose specified HLS video resolution / bandwidth?
So the question separates into two parts:
1. Get the resolution/bandwidth list from the m3u8.
2. Specify the stream resolution and bandwidth.
For 1, a workaround is to read the indicatedBitrate of AVPlayerItemAccessLogEvent (see: Get bandwidth of stream from m3u stream). The other possible solution is to download and parse the m3u8 yourself, outside the AVPlayer interface.
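As an illustration of the second option, a naive Swift sketch that pulls the BANDWIDTH attribute out of each #EXT-X-STREAM-INF line of an already-downloaded master playlist (quoted attributes such as CODECS, which can themselves contain commas, are not handled):

    import Foundation

    // Naive parse of a master playlist: collect the BANDWIDTH value of every
    // variant stream. Illustration only, not a full attribute-list parser.
    func variantBandwidths(fromMasterPlaylist text: String) -> [Int] {
        return text
            .components(separatedBy: .newlines)
            .filter { $0.hasPrefix("#EXT-X-STREAM-INF:") }
            .compactMap { line -> Int? in
                // e.g. #EXT-X-STREAM-INF:BANDWIDTH=995000,RESOLUTION=640x360
                let attributes = line.dropFirst("#EXT-X-STREAM-INF:".count)
                return attributes
                    .split(separator: ",")
                    .first(where: { $0.hasPrefix("BANDWIDTH=") })
                    .flatMap { Int($0.dropFirst("BANDWIDTH=".count)) }
            }
    }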
For 2, a workaround to change the default adaptive behaviour of AVPlayer is to use preferredPeakBitRate or preferredMaximumResolution. But video quality might still drop if the network gets slower. (Change HLS bandwidth manually?)
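A minimal sketch of that workaround (the URL and the 2 Mb/s cap are placeholders):

    import AVFoundation

    // Cap the variants AVPlayer may choose; it can still switch to a lower
    // variant if the network degrades, but it will not exceed this bitrate.
    let url = URL(string: "https://example.com/stream/master.m3u8")!
    let item = AVPlayerItem(url: url)
    item.preferredPeakBitRate = 2_000_000          // bits per second
    // item.preferredMaximumResolution = CGSize(width: 1280, height: 720)  // iOS 11+

    let player = AVPlayer(playerItem: item)
    player.play()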
Thank you.
I have a single-channel wave coming in at an 8000 Hz sampling rate.
I need to analyze frequencies that are between 5 Hz and 300 Hz in real-time, with emphasis on signals from 10 to 60 Hz.
My initial thought is to feed the 8000 Hz samples into a buffer, collecting about 32000 samples, and then run a Fourier transform with a 32000-sample window on it.
The reasoning here is that lower-frequency signals need a larger window size (right?)
However, if I'm trying to display this signal in real time, it seems like the AnalyserNode might not be a good choice here. I know the WebAudio API would let me get at the raw data, but ideally the AnalyserNode would run a new FFT over the previous 32000 samples even when only a small number of new samples has arrived. At this point, it seems like the FFT data only updates once every four seconds.
Do I have to create a special "running bin" so that the display updates more frequently than once every 4 seconds? Or, what's the smallest window size I can use to still get reasonable values in this range? Is 32000 a large enough window size?
I am using the WebAudio API AnalyserNode in JavaScript, but if I have to work with the raw data, I'm also willing to switch to another JavaScript library.
Using an AnalyserNode, you can call getFloatFrequencyData as often as you like. This will return the FFT of the last fftSize samples; successive frames are smoothed together according to smoothingTimeConstant. For full details, see the AnalyserNode Interface section of the spec.
Also, the WebAudio spec allows you to construct an AudioContext with a user-selectable sample rate. You could set your sample rate to 8000 Hz. Then your FFTs can have finer resolution with less complexity.
However, I don't think any browser has implemented this capability yet.
An alternative would be to get a supported audio card that allows a sample rate of 8000 Hz and set up your system to use it as the default audio device; then the audio context will have a sample rate of 8000 Hz.
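Putting the two suggestions together, a rough sketch in TypeScript/JavaScript might look like this; the 8000 Hz sampleRate option is the spec feature mentioned above, so a browser may ignore or reject it, and the input is assumed to come from something like a MediaStreamAudioSourceNode:

    const ctx = new AudioContext({ sampleRate: 8000 });   // may not be honoured everywhere

    const analyser = ctx.createAnalyser();
    // 32768 is the largest fftSize the spec allows. At 8000 Hz that is ~4.1 s of
    // audio and a bin width of ~0.24 Hz, fine enough for the 5-60 Hz range.
    analyser.fftSize = 32768;
    analyser.smoothingTimeConstant = 0;                   // disable inter-frame smoothing

    // source.connect(analyser);   // e.g. a MediaStreamAudioSourceNode for the input signal

    const bins = new Float32Array(analyser.frequencyBinCount);

    function draw(): void {
      // Each call returns the FFT of the most recent fftSize samples, so the
      // display can refresh far more often than once per 4-second window.
      analyser.getFloatFrequencyData(bins);
      // ...render `bins` (dB values) for the 5-300 Hz range...
      requestAnimationFrame(draw);
    }
    requestAnimationFrame(draw);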
I've got a dedicated thread that captures audio from ALSA through snd_pcm_readi(). Periodically I get a short read, meaning snd_pcm_readi() returns a positive integer lower than my buffer size, and there's an audible 'pop' in my audio stream. Setting the thread priority to real-time gives a tangible benefit (far fewer short reads), but it doesn't solve the problem.
Now the question: before going down the bumpy road of a real-time patched Linux kernel, is there something else I can do to squeeze out some more performance? Is calling snd_pcm_readi() in a dedicated thread the best way to pull audio out of ALSA?
For playback, the buffer size determines the latency.
For capture, it does not; only the period size determines how long you must wait until recorded samples are reported to be available.
So to prevent overruns, make the buffer as large as possible (e.g., by calling snd_pcm_hw_params_set_buffer_size_max() after setting the other parameters).
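As an illustration, a capture hw_params setup following that advice might look like the sketch below; it uses the get_buffer_size_max()/set_buffer_size_near() pair (one route to the same end as snd_pcm_hw_params_set_buffer_size_max()), and the rate, format, channels and period size are placeholders with error checking omitted:

    /* Sketch: small period (controls how often snd_pcm_readi() wakes up), buffer
     * as large as the hardware allows (controls how much slack there is before an
     * overrun). Real code should check every return value. */
    #include <alsa/asoundlib.h>

    int configure_capture(snd_pcm_t *pcm)
    {
        snd_pcm_hw_params_t *hw;
        unsigned int rate = 48000;
        snd_pcm_uframes_t period = 256;        /* frames per wake-up */
        snd_pcm_uframes_t buffer;
        int dir = 0;

        snd_pcm_hw_params_alloca(&hw);
        snd_pcm_hw_params_any(pcm, hw);

        snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
        snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
        snd_pcm_hw_params_set_channels(pcm, hw, 2);
        snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, &dir);
        snd_pcm_hw_params_set_period_size_near(pcm, hw, &period, &dir);

        /* Ask for the largest buffer the driver supports. */
        snd_pcm_hw_params_get_buffer_size_max(hw, &buffer);
        snd_pcm_hw_params_set_buffer_size_near(pcm, hw, &buffer);

        return snd_pcm_hw_params(pcm, hw);
    }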
So, I have set up a multichannel mixer and a Remote I/O unit to mix/play several buffers of PCM data that I read from audio files.
For short sound effects in my game, I load the whole file into a memory buffer using ExtAudioFileRead().
For my background music, let's say I have a 3-minute compressed audio file. Assuming it's encoded as MP3 @ 128 kbps (44,100 Hz stereo), that gives around 1 MB per minute, or 3 MB total. Uncompressed in memory, it's roughly ten times that. I could use the exact same method as for the small files; I believe ExtAudioFileRead() takes care of the decoding, using the (single) hardware decoder when available, but I'd rather not read the whole buffer at once and instead 'stream' it at regular intervals from disk.
The first thing that comes to mind is going one level below to the (non-"extended") Audio File Services API and using AudioFileReadPackets(), like so:
1. Prepare two buffers A and B, each big enough to hold (say) 5 seconds of audio. During playback, start reading from one buffer and switch to the other one when reaching the end (i.e., they make up the two halves of a ring buffer).
2. Read first 5 seconds of audio from file into buffer A.
3. Read next 5 seconds of audio from file into buffer B.
4. Begin playback (from buffer A).
5. Once the play head enters buffer B, load next 5 seconds of audio into buffer A.
6. Once the play head enters buffer A again, load next 5 seconds of audio into buffer B.
7. Go to #5 (see the sketch after this list).
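A rough Swift sketch of that scheme, just to make the buffer swapping explicit; DoubleBufferedReader and readNextChunk(into:) are hypothetical names, the helper standing in for the actual AudioFileReadPackets()/ExtAudioFileRead() call, and threading/synchronisation is glossed over:

    import Foundation

    // Two halves of a ring buffer; whichever half the play head just left gets
    // refilled with the next chunk decoded from disk.
    final class DoubleBufferedReader {
        let chunkFrames: Int                  // e.g. 5 seconds * 44_100 frames
        private(set) var halves: [[Float]]    // buffer A (index 0) and buffer B (index 1)
        private var currentHalf = 0           // half the play head is currently in

        init(chunkFrames: Int) {
            self.chunkFrames = chunkFrames
            halves = [[Float](repeating: 0, count: chunkFrames),
                      [Float](repeating: 0, count: chunkFrames)]
            readNextChunk(into: &halves[0])   // step 2: first 5 seconds into A
            readNextChunk(into: &halves[1])   // step 3: next 5 seconds into B
        }

        // Steps 5-7: call this whenever the play head crosses from one half into
        // the other. In real code the file read must happen off the render thread.
        func playHeadCrossedBoundary() {
            let finished = currentHalf
            currentHalf = 1 - currentHalf
            readNextChunk(into: &halves[finished])
        }

        // Hypothetical: decode the next `chunkFrames` frames from the audio file,
        // e.g. via AudioFileReadPackets() plus a converter, or ExtAudioFileRead().
        private func readNextChunk(into buffer: inout [Float]) {
            // ...
        }
    }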
Is this the right approach, or is there a better way?
I'd suggest using the high-level AVAudioPlayer class to do simple background playback of an audio file. See:
https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAudioPlayerClassReference/Chapters/Reference.html#//apple_ref/doc/uid/TP40008067
If you require finer-grained control and lower latency, check out Apple's AUAudioFilePlayer. See AudioUnitProperties.h for a discussion. This is an Audio Unit that abstracts the complexities of streaming an audio file from disk. That said, it's still pretty complicated to set up and use, so definitely try AVAudioPlayer first.
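For the simple background-music case, usage can be as small as this sketch (the file name is a placeholder; AVAudioPlayer handles decoding and reading from disk itself):

    import AVFoundation

    var backgroundMusic: AVAudioPlayer?    // keep a strong reference, or playback stops

    func startBackgroundMusic() {
        guard let url = Bundle.main.url(forResource: "background", withExtension: "mp3") else { return }
        do {
            let player = try AVAudioPlayer(contentsOf: url)
            player.numberOfLoops = -1      // loop indefinitely
            player.prepareToPlay()
            player.play()
            backgroundMusic = player
        } catch {
            print("Could not start background music: \(error)")
        }
    }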
I'm just researching at the moment the possibility of writing an app to record an hour's worth of video/audio for a specific use case.
As the video will be an hour long I would want to encode on-the-fly and not after the recording has finished to keep disk usage to a minimum.
Do the video capture APIs write a large uncompressed file to disk that has to be encoded afterwards, or can they encode on the fly, resulting in an optimised file written to disk?
It's important that the video is recorded at a lower resolution than the iPhone's advertised 720/1080p, as I need to keep file sizes down due to the length of the video (which will need to be uploaded).
Any information you have would be appreciated or even just a pointer in the right direction.
No, they do not record uncompressed to disk (unless that is what you want). You can record straight to a MOV/MP4 and have the video encoded as H.264 on the fly. Additionally, you can control the average bit rate of the encoding, and you can specify the capture size and output encoding size, along with scaling options if needed. For demo code, check out AVCamDemo in the WWDC 2010 sample code; it may now also be available in the documentation.
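A rough sketch of that path is below; the preset, output location and the commented bit-rate figure are placeholders, and permission checks, canAdd checks and error handling are omitted:

    import AVFoundation

    // The session writes an H.264/AAC QuickTime movie straight to disk while
    // recording, so nothing uncompressed is ever stored.
    func makeRecordingSession(delegate: AVCaptureFileOutputRecordingDelegate) throws -> AVCaptureSession {
        let session = AVCaptureSession()
        session.sessionPreset = .vga640x480      // keep resolution (and file size) down

        guard let camera = AVCaptureDevice.default(for: .video),
              let mic = AVCaptureDevice.default(for: .audio) else {
            throw NSError(domain: "Capture", code: -1, userInfo: nil)
        }
        session.addInput(try AVCaptureDeviceInput(device: camera))
        session.addInput(try AVCaptureDeviceInput(device: mic))

        let movieOutput = AVCaptureMovieFileOutput()
        session.addOutput(movieOutput)
        session.startRunning()

        let url = FileManager.default.temporaryDirectory.appendingPathComponent("recording.mov")
        movieOutput.startRecording(to: url, recordingDelegate: delegate)

        // For explicit control of the average video bit rate, record through an
        // AVAssetWriter instead, with output settings along the lines of:
        //   [AVVideoCodecKey: AVVideoCodecType.h264,
        //    AVVideoCompressionPropertiesKey: [AVVideoAverageBitRateKey: 1_000_000]]
        return session
    }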