I have been trying to live-stream audio (AAC-LC) from iOS for 3 months without much success...
I tried Audio Queues, which work, but there is a strange delay (~4 s) and I don't know why (is it because it's a high-level API?).
I tried Audio Units, using code modified from this source; it sometimes works in the simulator but never on the phone.
I am really lost. Can anyone help me?
EDIT
I have to build a live streaming application (iPhone -> Wowza Server via RTSP). The video part works well, with little delay (~1 s). Now I'm trying to add audio alongside the video, but I'm stuck with the SDK.
tl;dr: I need to capture microphone input and then send AAC frames over the network without introducing a huge delay.
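For reference, here is a minimal sketch of the capture-and-encode half of that pipeline, assuming AVAudioEngine plus AVAudioConverter; the `send` closure is a placeholder for whatever RTSP/network layer you use, not part of any SDK mentioned here:

```swift
import AVFoundation

// Sketch only: tap the microphone with AVAudioEngine and encode each PCM
// buffer to AAC-LC with AVAudioConverter. Requires microphone permission.
final class AACCapture {
    private let engine = AVAudioEngine()
    private var converter: AVAudioConverter?

    func start(send: @escaping (Data) -> Void) throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .measurement)
        try session.setPreferredIOBufferDuration(0.01) // small buffers keep latency low
        try session.setActive(true)

        let input = engine.inputNode
        let inFormat = input.outputFormat(forBus: 0)

        // AAC-LC output description; sample rate and channel count copied from the input.
        var desc = AudioStreamBasicDescription(
            mSampleRate: inFormat.sampleRate, mFormatID: kAudioFormatMPEG4AAC,
            mFormatFlags: 0, mBytesPerPacket: 0, mFramesPerPacket: 1024,
            mBytesPerFrame: 0, mChannelsPerFrame: inFormat.channelCount,
            mBitsPerChannel: 0, mReserved: 0)
        guard let outFormat = AVAudioFormat(streamDescription: &desc),
              let converter = AVAudioConverter(from: inFormat, to: outFormat) else { return }
        self.converter = converter

        input.installTap(onBus: 0, bufferSize: 1024, format: inFormat) { pcm, _ in
            let packet = AVAudioCompressedBuffer(format: outFormat, packetCapacity: 8,
                                                 maximumPacketSize: converter.maximumOutputPacketSize)
            var fed = false
            var convError: NSError?
            // Feed the tapped buffer exactly once per conversion pass.
            _ = converter.convert(to: packet, error: &convError) { _, status in
                if fed { status.pointee = .noDataNow; return nil }
                fed = true
                status.pointee = .haveData
                return pcm
            }
            if packet.byteLength > 0 {
                send(Data(bytes: packet.data, count: Int(packet.byteLength)))
            }
        }
        try engine.start()
    }
}
```

For actual streaming you would still need to packetize these frames for your transport (e.g. ADTS headers or RTP packing), which is where the Wowza/RTSP specifics come in.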
This app, which I just completed, broadcasts audio between any two iOS devices on the same network:
https://drive.google.com/open?id=1tKgVl0X92SYvgpvbljRzilXNQ6iBcjqM
Compile it with the latest beta release of Xcode 9, and run it on two iOS 11 (beta) devices.
The app is simple; you launch it, and then start talking. Everything is automatic, from network connectivity to audio streaming.
Events generated by the app are displayed in an on-screen event log:
Even though the code is simple and concise, the event log is provided to make the app's architecture quicker and easier to understand.
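The linked project is the authoritative source; purely as a speculative sketch (the app may be built differently), the "automatic" connectivity part can be done with MultipeerConnectivity, where every device both advertises and browses, so two launches find each other with no user interaction:

```swift
import MultipeerConnectivity
import UIKit

// Hypothetical discovery layer: advertise and browse simultaneously,
// accept every invitation, and invite every peer found.
final class AutoLink: NSObject, MCNearbyServiceAdvertiserDelegate, MCNearbyServiceBrowserDelegate {
    private let peer = MCPeerID(displayName: UIDevice.current.name)
    private lazy var session = MCSession(peer: peer)
    private lazy var advertiser = MCNearbyServiceAdvertiser(peer: peer, discoveryInfo: nil,
                                                            serviceType: "audio-cast")
    private lazy var browser = MCNearbyServiceBrowser(peer: peer, serviceType: "audio-cast")

    func start() {
        advertiser.delegate = self
        browser.delegate = self
        advertiser.startAdvertisingPeer()
        browser.startBrowsingForPeers()
    }

    func advertiser(_ advertiser: MCNearbyServiceAdvertiser, didReceiveInvitationFromPeer peerID: MCPeerID,
                    withContext context: Data?, invitationHandler: @escaping (Bool, MCSession?) -> Void) {
        invitationHandler(true, session) // auto-accept
    }

    func browser(_ browser: MCNearbyServiceBrowser, foundPeer peerID: MCPeerID,
                 withDiscoveryInfo info: [String: String]?) {
        browser.invitePeer(peerID, to: session, withContext: nil, timeout: 10) // auto-invite
    }

    func browser(_ browser: MCNearbyServiceBrowser, lostPeer peerID: MCPeerID) {}
}
```

Once the MCSession connects, captured audio buffers can be pushed to the other device with `session.send(_:toPeers:with:)`.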
Related
I'm currently running an Expo project with react-native-webview 9.4.0 installed, and I'm trying to use the WebView to play audio from a local .html file with a custom JS script using the Web Audio API. The audio works fine on the iOS simulator and on an iPhone when it's not in Silent Mode, but once I turn Silent Mode on it's impossible to get any audio to play.
There seem to have been a couple of attempts to fix this issue over the past couple of months (https://github.com/react-native-community/react-native-webview/pull/1218 and https://github.com/react-native-community/react-native-webview/issues/1140), but even with the most recent version of react-native-webview I still can't get Web Audio to play with the iPhone muted. I've tried various workarounds from Stack Overflow without any success either. Does anyone else have this issue, or have a workaround to suggest?
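For context, the native behavior those PRs revolve around is the app-wide AVAudioSession: Web Audio inside the WebView follows it, and the ambient categories are silenced by the hardware mute switch. A sketch of the usual native-side workaround, forcing the session into the `.playback` category (whether and how react-native-webview exposes this depends on your version, which is exactly what those issues discuss):

```swift
import AVFoundation

// Assumption-laden sketch: run once at app startup (e.g. from the AppDelegate)
// so audio keeps playing with the mute switch on. The .playback category
// ignores the switch; the default ambient categories do not.
func ignoreSilentSwitch() {
    do {
        try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [])
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("AVAudioSession setup failed: \(error)")
    }
}
```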
I am looking for a way to play an audio file and have it act as input for the iOS simulator's microphone. I am creating a bunch of UI tests for an iOS app that uses dictation (speech-to-text), and I haven't found a way to do it using applications like Soundflower, etc., because I need this to run on CI, and bypassing Apple's Gatekeeper security is not possible since I can't boot the machine into recovery mode.
I want to launch the app in a UI test and send some audio to the "device microphone" as a way of triggering a wake word. Mocking the code here is not an option, or rather not what we want; otherwise we wouldn't be testing anything.
I was wondering if I can use AudioKit to do this.
I don't think AudioKit will help you with this. I would have thought Soundflower could do it, but you could also try Rogue Amoeba's Loopback if you haven't already.
The iOS simulator does inherit audio from the host Mac, so you could also try the old-school solution of physically running a cable from the Mac's audio output to its microphone input.
I'm writing a mobile application with Adobe AIR. The application uses the AIR Microphone API to record sound to a file and replay it later.
The problem manifests only on mobile devices, not in the simulator, and specifically only on iOS devices; Android devices seem to work OK.
Sometimes the recorded sound is missing samples. I know this because I use iFunBox to copy the recorded file to another application that replays it. The dropped samples manifest during playback as very fast audio, because only part of the samples were actually recorded.
Sometimes the playback is too slow, which manifests as very slow audio. I know this because when the recording is fine, the other application plays the sound correctly, and because a file I recorded in the simulator (which runs on my MacBook) plays slowly only on the mobile device.
How can I make sure the sound is recorded correctly even when the application is a bit busy?
I built the application as an ad-hoc package and installed it on an iPad using TestFlight, and now everything seems to work just fine.
I guess that during debugging Adobe AIR did not manage to fill the sound buffer fast enough, which caused the distortion.
For a streaming radio station, I have an AAC+ audio stream, inside an FLV container, delivered via HTTP. An example URL is http://3023.live.streamtheworld.com/ALTROCK_S01A_AAC. I wrote a simple AIR app (using the latest AIR and Flex SDKs) to play this stream; it works fine on PC and Android, but plays nothing when deployed to the iOS simulator or a device (i.e., the bytes are loaded but there is no sound).
This is similar to Can FLV AAC stream be played in Android, but for iOS.
I wanted to use AIR in this scenario since I need to listen for the cue points in the FLV, and this is easy to do if you're playing Flash in a web browser, so AIR seemed like the natural choice. I have also looked at http://code.google.com/p/haxecast/ and https://code.google.com/p/project-thunder-snow/, but they all seem to use the same basic idea (parse the FLV using NetStream in "data generation mode" and feed the AAC+ data to a Video object), and so they all hit the same wall on iOS.
I also came across this post, which seems possibly related although it's not quite the same situation (e.g., it's not FLV).
Is AIR on iOS supposed to support this scenario, namely streaming AAC+/FLV audio via HTTP?
EDIT: This post also appears to hit the same obstacle - so a lot of people are asking about this situation. Anyone from Adobe have any insight?
After much further research I've concluded that AIR on iOS just doesn't support this, and you have to build a native app (or at least use a framework other than AIR) instead.
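For anyone taking the native route, a minimal AVFoundation sketch follows. It assumes the station also exposes a stream AVPlayer can demux (raw AAC/MP3 over HTTP, or HLS); AVPlayer cannot read FLV, so the FLV URL above won't work as-is, and timed metadata stands in for the FLV cue points only where the stream actually carries it:

```swift
import AVFoundation

// Sketch: play an HTTP audio stream and observe its timed metadata.
final class StationPlayer: NSObject, AVPlayerItemMetadataOutputPushDelegate {
    private var player: AVPlayer?

    func play(url: URL) {
        let item = AVPlayerItem(url: url)
        let output = AVPlayerItemMetadataOutput(identifiers: nil) // nil = all identifiers
        output.setDelegate(self, queue: .main)
        item.add(output)
        player = AVPlayer(playerItem: item)
        player?.play()
    }

    // Called as metadata (the closest analogue to FLV cue points) arrives.
    func metadataOutput(_ output: AVPlayerItemMetadataOutput,
                        didOutputTimedMetadataGroups groups: [AVTimedMetadataGroup],
                        from track: AVPlayerItemTrack?) {
        for group in groups {
            for item in group.items {
                print("cue:", item.identifier?.rawValue ?? "?", item.stringValue ?? "(non-string value)")
            }
        }
    }
}
```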
Is there a way to programmatically capture what's being played on an iOS device's audio output? I want to capture an audio snippet and do some data processing on it.
There is an entire framework devoted to this sort of thing that Apple built in for you, called AVFoundation.
It's extensive and sprawling, so be prepared to do some work, but you'll be able to sample and manipulate the bitstreams directly for all types of audio.
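To make the legitimate case concrete, here is a minimal sketch of tapping audio that your own app plays through AVAudioEngine, so you can process the very samples that reach the output; the peak computation is just a stand-in for your data processing:

```swift
import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

func playAndTap(fileURL: URL) throws {
    let file = try AVAudioFile(forReading: fileURL)
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)

    // Every buffer that reaches the mixer is handed to this closure.
    let mixer = engine.mainMixerNode
    mixer.installTap(onBus: 0, bufferSize: 4096, format: mixer.outputFormat(forBus: 0)) { buffer, _ in
        guard let samples = buffer.floatChannelData?[0] else { return }
        var peak: Float = 0
        for i in 0..<Int(buffer.frameLength) { peak = max(peak, abs(samples[i])) }
        print("peak level:", peak) // placeholder analysis
    }

    try engine.start()
    player.scheduleFile(file, at: nil)
    player.play()
}
```

This only sees audio routed through your own engine; output from other apps never reaches it, as explained below.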
If you are trying to have your app run in the background and strip audio from another app (like iTunes) or from the phone itself, put on your black hat, hack into the private APIs of iOS, and release your stuff in the jailbroken app store, because that sort of thing is explicitly designed not to be possible for legal and legitimate apps.