I'm developing an iOS app in Swift, and my problem is adjusting the playback speed during audio streaming.
I am using AVPlayer for this project:
streamingPlayer = AVPlayer(playerItem: audioItem)
To adjust playback speed:
streamingPlayer.rate = slider.value
I tested it against a test server, and it works fine for ordinary links that point directly to audio files: playback speed adjustment works without any problem.
The problem occurs when the links involve more security. I can access the audio with a request through .ashx. The speed can be made slower (AVPlayer rate < 1), but the problem appears when it has to be made faster: the normal rate is 1, and the rate cannot be raised above 1.
I tried checking the property below, but it didn't help:
audioItem.canPlayFastForward
Another weird thing is that when testers tried it on a high-speed internet connection, it worked fine. I tested it on speeds between 15-30 Mbps and the problem still persists, so I am assuming it is a problem with the connection to the server. I don't want to force users onto high-speed internet to use the app. Can someone please help me optimize or solve this problem?
I have looked through the AVPlayer documentation and tried almost every function provided.
Here is a sample link. I would appreciate it if someone could help me out!
I realize that this is coming in 9 months later, but I had a similar problem to yours and figured it out, so I thought I'd post for anybody looking at this in the future. You have to set: AVAudioPlayerName.enableRate = true (where AVAudioPlayerName is your AVAudioPlayer instance).
You can increase or decrease the rate with this single line of code:
streamingPlayer.playImmediately(atRate: 1.25) // change the rate
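Putting those two together for the original question's setup, here is a minimal sketch (the stream URL is a placeholder, the slider handler is hypothetical, and playImmediately(atRate:) requires iOS 10+):

import UIKit
import AVFoundation

// Placeholder URL standing in for the question's secured .ashx stream.
let audioItem = AVPlayerItem(url: URL(string: "https://example.com/stream.ashx")!)
let streamingPlayer = AVPlayer(playerItem: audioItem)

// Hypothetical slider handler: playImmediately(atRate:) starts playback at
// the requested rate right away, rather than waiting for buffering the way
// a plain `rate` assignment can on a streamed item.
func speedSliderChanged(_ slider: UISlider) {
    guard let item = streamingPlayer.currentItem else { return }
    // Rates above 1.0 only take effect while the item can fast-forward,
    // which for a stream depends on how much data is buffered.
    if slider.value <= 1.0 || item.canPlayFastForward {
        streamingPlayer.playImmediately(atRate: slider.value)
    }
}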
We are facing data stalling on 360 video in the YouTube application, also observed with other content, for example videoID 'HemwKBjQ0Uc' (【VR】Elemental Demo - 60fps 4k 8k Stereo 360 with Ambisonic audio). In the problematic case, a buffer range is removed from the RangeList using the next range (DeleteAndRemoveRange(&next_range_itr)), and the problem is observed within 30-60 seconds for the content mentioned above. We are using Cobalt version 13.11, and from our analysis the MergeWithAdjacentRangeIfNecessary() API appears to be the problematic one. For our internal validation, we increased the non-video budget and the 1080p resolution budget up to 50 MB; with that change, data stalling was not observed on 360 video and the content played continuously. For your information, we have also checked the latest Cobalt application and observed the same behaviour.
Please advise us on how to conclude this issue.
Data stalling: video frames do not reach ffmpeg_video_decoder even after kNeedBuffer, while audio data keeps arriving continuously as usual. Can you please try to play the above-mentioned video? We have also confirmed that the latest Cobalt 19+ has the same issue.
Thanks in advance
Have you tried increasing the video buffer budget set by variables like COBALT_MEDIA_BUFFER_VIDEO_BUDGET_4K and COBALT_MEDIA_BUFFER_MAX_CAPACITY_4K?
Yes, xiaoming. We have already tried this and it works fine, but we need to know the reason appendBuffer stops happening after this call (DeleteAndRemoveRange(&next_range_itr)) occurs.
My iOS app uses AVPlayer to decode H.264 videos with AAC audio tracks out of local device storage. Content with bit rate spikes causes the audio to drop shortly (less than a second) after the spike is played, yet video playback continues normally. Playing the same videos through Safari works fine, and this behavior is repeatable on several models of iPhone, ranging from the 6s through the 8 Plus.
I've been looking for any messages generated, delegate calls carrying error information, or interesting KVO notifications, but there's been no helpful information so far. What might I do to get more detailed information that can point me in the right direction?
It turned out that the AVPlayer was configured to load its data through custom loading methods, and the implementation of those methods failed to follow the pattern of satisfying each request completely. (Apple's docs are vague about this.) The video portion of the AVPlayer asked for more data repeatedly, so eventually all of its data got pulled. The audio portion, however, patiently waited for the data to come in: no error state was reported, and not all of the data had been provided, so the presumption was that it was still pending.
So, in short, it sounds like the video handling code has provisions to treat missing data as a stall of some form and plow onward, whereas audio doesn't have that feature. Not a bad design: if audio cuts out it's very noticeable, and audio is also by far the smaller stream, so it's much less likely to stall.
Despite spending quite a few days on the problem before posting, the lack of any useful signals made it hard to chase down. I eventually reasoned that if there was no error in producing output from the stream, the problem had to be in the delivery of the stream, and it revealed itself once I started tweaking the data loading code.
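The answer doesn't name the API, but the description matches a custom AVAssetResourceLoaderDelegate. Purely as an illustration, here is a sketch of the "satisfy each request completely" pattern it describes, assuming the whole media file is already held in memory (the class name and content type are hypothetical):

import AVFoundation

// Illustrative delegate that answers every loading request in full before
// calling finishLoading(), so neither track is left waiting for data.
final class FullySatisfyingLoader: NSObject, AVAssetResourceLoaderDelegate {
    private let data: Data

    init(data: Data) {
        self.data = data
    }

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        if let infoRequest = loadingRequest.contentInformationRequest {
            infoRequest.contentType = "public.mpeg-4" // adjust to your container
            infoRequest.contentLength = Int64(data.count)
            infoRequest.isByteRangeAccessSupported = true
        }
        if let dataRequest = loadingRequest.dataRequest {
            let start = Int(dataRequest.requestedOffset)
            let end = min(start + dataRequest.requestedLength, data.count)
            // Respond with the entire requested range; answering with only a
            // prefix and then finishing is the pattern that starved the audio
            // track in the scenario described above.
            dataRequest.respond(with: data.subdata(in: start..<end))
        }
        loadingRequest.finishLoading()
        return true
    }
}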
I'm building an app that records audio and streams it to another user; it's basically a VoIP call. The problem I'm running into is that the audio I'm streaming to the peer is delayed by about 0.5 seconds. This is quite noticeable, and a little annoying when you both try to talk at the same time.
I'm wondering if this is common with AVFoundation's AVAudioEngine, or if it's possibly something to do with the way I set it up.
I can include source code if this is NOT a known problem with AVAudioEngine; otherwise, can you please suggest the best route to record audio with the least delay?
I would also prefer something fairly high-level and compatible with Swift 3/3.1. However, if there is no solution that meets these needs, then please recommend the tool you think fits best.
Thank you!
Ensure that you call AVAudioEngine.inputNode's installTap(onBus:bufferSize:format:block:) method with the minimum supported bufferSize of 100 ms, i.e. (sampleRate * 0.1) samples.
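As a sketch, assuming a plain AVAudioEngine setup (the tap block is where you would hand buffers to your VoIP layer):

import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

// ~100 ms worth of samples, the minimum bufferSize installTap supports.
let bufferSize = AVAudioFrameCount(format.sampleRate * 0.1)

input.installTap(onBus: 0, bufferSize: bufferSize, format: format) { buffer, time in
    // Encode and send `buffer` to the peer here.
}

engine.prepare()
do {
    try engine.start()
} catch {
    print("Failed to start engine: \(error)")
}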
I'm designing a simple proof of concept for a multitrack recorder.
The obvious starting point is to play from file A.caf to headphones while simultaneously recording microphone input into file B.caf.
This question -- Record and play audio Simultaneously -- points out that there are three levels at which I can work:
AVFoundation API (AVAudioPlayer + AVAudioRecorder)
Audio Queue API
Audio Unit API (RemoteIO)
What is the best level to work at? Obviously the generic answer is to work at the highest level that gets the job done, which would be AVFoundation.
But I'm taking this job over from someone who gave up due to latency issues (he was getting a 0.3 s delay between the files), so maybe I need to work at a lower level to avoid those issues?
Furthermore, what source code is available to springboard from? I have been looking at the SpeakHere sample (http://developer.apple.com/library/ios/#samplecode/SpeakHere/Introduction/Intro.html). If I can't find something simpler, I will use this.
But can anyone suggest something simpler/else? I would rather not work with C++ code if I can avoid it.
Is anyone aware of some public code that uses AVFoundation to do this?
EDIT: AVFoundation example here: http://www.iphoneam.com/blog/index.php?title=using-the-iphone-to-record-audio-a-guide&more=1&c=1&tb=1&pb=1
EDIT(2): Much nicer looking one here: http://www.switchonthecode.com/tutorials/create-a-basic-iphone-audio-player-with-av-foundation-framework
EDIT(3): How do I record audio on iPhone with AVAudioRecorder?
To avoid latency issues, you will have to work at a lower level than AVFoundation alright. Check out this sample code from Apple: aurioTouch. It uses Remote I/O.
As suggested by Viraj, here is the answer.
Yes, you can achieve very good results using AVFoundation. Firstly, you need to pay attention to the fact that, for both the player and the recorder, activating them is a two-step process.
First you prime it.
Then you play it.
So, prime everything. Then play everything.
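In code, the two-step pattern looks roughly like this (a sketch: the function name, file URLs, and recorder settings are placeholders, not from the original answer):

import AVFoundation

func startPrimedPlaybackAndRecording(playbackURL: URL, recordingURL: URL) throws {
    let player = try AVAudioPlayer(contentsOf: playbackURL)
    let recorder = try AVAudioRecorder(url: recordingURL, settings: [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 44_100.0,
        AVNumberOfChannelsKey: 1
    ])

    // Step 1: prime both, which allocates buffers and readies the hardware.
    guard player.prepareToPlay(), recorder.prepareToRecord() else { return }

    // Step 2: start both back to back, keeping the gap between the two
    // start calls as small as possible.
    _ = player.play()
    _ = recorder.record()
}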
This will get your latency down to about 70ms. I tested by recording a metronome tick, then playing it back through the speakers while holding the iPhone up to the speakers and simultaneously recording.
The second recording had a clear echo, which I found to be ~70ms. I could have analysed the signal in Audacity to get an exact offset.
So in order to line everything up, I just call performSelector:withObject:afterDelay: with a delay of 70.0/1000.0 seconds.
There may be hidden snags; for example, the delay may differ from device to device, and it may even differ depending on device activity. It is even possible the thread could get interrupted/rescheduled between starting the player and starting the recorder.
But it works, and is a lot tidier than messing around with audio queues / units.
I had this problem and I solved it in my project simply by changing the PreferredHardwareIOBufferDuration parameter of the AudioSession. I think I have just 6 ms of latency now, which is good enough for my app.
Check this answer that has a good explanation.
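For reference, here is a sketch using the modern AVAudioSession API (setPreferredIOBufferDuration is the current spelling of the old PreferredHardwareIOBufferDuration property; the 5 ms request is illustrative, and the hardware rounds it to a supported value):

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    // Ask for a ~5 ms IO buffer; at 44.1 kHz that is roughly 256 frames.
    try session.setPreferredIOBufferDuration(0.005)
    try session.setActive(true)
    // The session reports what the hardware actually granted.
    print("Actual IO buffer duration: \(session.ioBufferDuration)")
} catch {
    print("Audio session configuration failed: \(error)")
}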
I am a newbie to multimedia work. I want to capture audio as samples and transfer them to another iOS device over the network. How should I start? I have just gone through the Apple multimedia guide and the SpeakHere example; it is full of C++ code, and it writes to a file and then starts services, but I need a buffer. Please help me start my work in the correct way.
Thanks in advance
I just spent a bunch of time working on real-time audio. You can use AudioQueue, but it has latency issues of around 100-200 ms.
If you want to do something like the T-Pain app, you have to use one of:
RemoteIO API
Audio Unit API
They are equally difficult to implement, so I would just pick the Remote I/O path.
Source can be found here:
http://atastypixel.com/blog/using-remoteio-audio-unit/
I have upvoted the answer above, but I wanted to add a piece of information that took me a while to figure out. When using AudioQueue for recording, the intuitive notion is that the callback fires at regular intervals corresponding to whatever the number of samples represents. That notion is incorrect: AudioQueue seems to gather samples for a long period of time and then deliver them in very fast iterations of the callback.
In my case, I was using 20 ms buffers and receiving 320 samples per callback (i.e. a 16 kHz sample rate). When printing out the timestamps of the calls, I noticed a pattern: one call every 2 ms, then after a while one gap of ~180 ms. Since I was doing VoIP, this presented as an increasing delay on the receiving end. Switching to Remote I/O seems to have solved the issue.
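If you want to check whether you are hitting the same bursty delivery pattern, here is a small framework-agnostic sketch for logging inter-callback intervals (call it from whatever capture callback you use; the function name is hypothetical):

import Foundation

var lastCallbackTime: CFAbsoluteTime?

// Logs how long it has been since the previous capture callback, making the
// "many ~2 ms calls, then one ~180 ms gap" pattern easy to spot in the output.
func logCallbackInterval(sampleCount: Int) {
    let now = CFAbsoluteTimeGetCurrent()
    if let last = lastCallbackTime {
        let deltaMs = (now - last) * 1000
        print("\(sampleCount) samples, " + String(format: "%.1f", deltaMs) + " ms since last callback")
    }
    lastCallbackTime = now
}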