We have a Bluetooth device that streams artificial audio data to an iOS app.
I say artificial because this 'sound' is not recorded, but synthesized by way of transfer functions applied to another signal. The generated audio data has a frequency range of 30 to 80 Hz.
The data is sampled at 500 Hz and arrives as Int32 values in the range 0 to 4095 (12-bit).
Question: Using the Core Audio framework, what steps should I take to play back this data through the iOS device's speakers as it streams in (i.e., real-time playback)?
Yes, Core Audio (Audio Units or the Audio Queue API) is appropriate for near-real-time streaming playback with very short buffers. You will likely need to upsample your data to something closer to 44.1 or 48 kHz, the typical hardware audio output rates on iOS devices.
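As a starting point, here is a minimal sketch assuming iOS 13+ and AVAudioSourceNode (AVFoundation's wrapper around a Core Audio render callback). The sampleQueue array and the zero-order-hold upsampling are illustrative simplifications, not a definitive implementation; a production app would feed the callback from a lock-free ring buffer written by the Bluetooth delegate.

import AVFoundation

// Sketch: play a stream of 12-bit samples (0...4095, 500 Hz source rate)
// through the speaker with short buffers. Not realtime-safe as written:
// the Array queue is a stand-in for a lock-free ring buffer.
final class StreamPlayer {
    private let engine = AVAudioEngine()
    private var sourceNode: AVAudioSourceNode!
    private var lastSample: Float = 0      // value held between source samples
    private var phase = 0.0                // fractional position in source samples
    private let sourceRate = 500.0         // incoming sample rate
    var sampleQueue: [Int32] = []          // fill this from the Bluetooth callback

    init() {
        let outputRate = 48_000.0
        let format = AVAudioFormat(standardFormatWithSampleRate: outputRate, channels: 1)!
        sourceNode = AVAudioSourceNode(format: format) { [weak self] (_, _, frameCount, bufferList) -> OSStatus in
            guard let self = self else { return noErr }
            let buffers = UnsafeMutableAudioBufferListPointer(bufferList)
            let out = buffers[0].mData!.assumingMemoryBound(to: Float.self)
            let step = self.sourceRate / outputRate   // ~1 source sample per 96 output frames
            for frame in 0..<Int(frameCount) {
                self.phase += step
                if self.phase >= 1.0, !self.sampleQueue.isEmpty {
                    self.phase -= 1.0
                    // Center the 12-bit value around zero and scale to -1...1.
                    let raw = Float(self.sampleQueue.removeFirst())
                    self.lastSample = (raw - 2048) / 2048
                }
                out[frame] = self.lastSample  // zero-order hold (crude upsampling)
            }
            return noErr
        }
        engine.attach(sourceNode)
        engine.connect(sourceNode, to: engine.mainMixerNode, format: format)
        try? engine.start()   // configure an AVAudioSession for .playback first in a real app
    }
}

In practice you would replace the zero-order hold with interpolation plus a low-pass filter to suppress the imaging artifacts it introduces, but the structure (a render callback pulling from a buffer your Bluetooth code fills) stays the same.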
I am trying to record the incoming/outgoing audio/video streams in my iOS app, which uses the GoogleWebRTC pod. So far I haven't found any API that allows recording audio/video streams. I tried to record the local video feed from the camera using AVCaptureSession, but doing so freezes the outgoing video stream, probably because multiple AVCaptureSession instances cannot use the same input device.
Is there any other way to record local and/or remote streams from WebRTC on iOS?
ReplayKit is one option, but it has limitations.
The alternative is to use the actual library rather than the CocoaPods build, and extract the audio/video from each incoming stream (a sketch of the video side follows below).
Once you have the actual video/audio currently being displayed/played, you can combine and mux them as you see fit.
It is very complex, though.
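For the video side, one approach that may work even with the pod's public API (assuming a build recent enough that RTCVideoFrame exposes a buffer property) is to register your own RTCVideoRenderer on a track. This is only a sketch; turning each RTCVideoFrame into a file (e.g. with AVAssetWriter) is left out because the buffer formats vary.

import WebRTC

// Sketch: tap the decoded frames of a (local or remote) RTCVideoTrack by
// adding a custom renderer alongside the on-screen one.
final class RecordingRenderer: NSObject, RTCVideoRenderer {
    func setSize(_ size: CGSize) {
        // Called when the incoming resolution changes.
    }

    func renderFrame(_ frame: RTCVideoFrame?) {
        guard let frame = frame else { return }
        // Each decoded frame lands here; hand it to an AVAssetWriter.
        if let buffer = frame.buffer as? RTCCVPixelBuffer {
            let pixelBuffer = buffer.pixelBuffer
            _ = pixelBuffer  // e.g. append via AVAssetWriterInputPixelBufferAdaptor
        }
    }
}

// Usage: videoTrack.add(recorder) receives frames without touching the
// capture session that feeds the outgoing stream.

Audio is harder: the pod does not expose a comparable audio sink, which is why building the library yourself (where you can attach one) comes up.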
I have video and audio streaming URLs. I want to check the user's bandwidth: if it is slow, the app should play the audio; if it is fast enough, it should play the video. How can I check whether the bandwidth is slow or fast in Swift?
m3u8 Structure
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=814508,CODECS="avc1.66.51,mp4a.40.34",RESOLUTION=720x576
chunklist_w247403833.m3u8
If your video and audio files are of the same content, then you should be able to configure an m3u8 file for your needs. I've not done this in practice, but you should be able to create a master playlist like the example below; Apple's media streamer should then automatically detect the bandwidth and play the appropriate file, and as the bandwidth changes it should automatically switch files to provide the best experience (you'll need to tweak the file below, but this should give you a head start):
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=216000,RESOLUTION=400x300
amazingVideo.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=100000,RESOLUTION=400x300
justAudio.m3u8
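On the player side you don't have to measure bandwidth yourself; pointing AVPlayer at the master playlist is enough. A minimal sketch (the URL is a placeholder):

import AVFoundation

// AVPlayer measures throughput and switches between the variants listed
// in the master playlist automatically.
let url = URL(string: "https://example.com/master.m3u8")!
let player = AVPlayer(url: url)
player.play()

// Optional: cap the bitrate AVPlayer is allowed to pick.
player.currentItem?.preferredPeakBitRate = 814_508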
If I record audio against a noisy background and then compare the recorded audio file with a locally saved audio file, how can I check whether both are the same song, on iPhone?
How do apps like Shazam and SoundHound generate a fingerprint of recorded audio on the iPhone?
Can anyone share knowledge of algorithms for generating a fingerprint from an audio file in iOS?
Thanks
The fast (but possibly not very helpful) answer is: they probably do a Fourier transform of the audio and compare the results. If you have never heard of Fourier before, you may want to find an API for it; the transform itself is not a programming problem, it's a math/signals problem.
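On iOS the usual API for this is Accelerate's vDSP. Here is a sketch of the first step most fingerprinting schemes share (Shazam's published approach then finds peaks in spectra like this and hashes pairs of peaks); the input length must be a power of two:

import Accelerate
import Foundation

// Squared-magnitude spectrum of a mono Float buffer via vDSP's FFT.
// samples.count must be a power of two.
func magnitudeSpectrum(_ samples: [Float]) -> [Float] {
    let n = samples.count
    let log2n = vDSP_Length(log2(Float(n)))
    guard let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { return [] }
    defer { vDSP_destroy_fftsetup(setup) }

    var real = samples                                  // real part = the signal
    var imag = [Float](repeating: 0, count: n)          // imaginary part = zeros
    var mags = [Float](repeating: 0, count: n / 2)
    real.withUnsafeMutableBufferPointer { re in
        imag.withUnsafeMutableBufferPointer { im in
            var split = DSPSplitComplex(realp: re.baseAddress!, imagp: im.baseAddress!)
            vDSP_fft_zip(setup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
            vDSP_zvmags(&split, 1, &mags, 1, vDSP_Length(n / 2))  // |X[k]|^2
        }
    }
    return mags
}

Comparing two songs is then a matter of comparing sequences of such spectra (or hashes derived from their peaks), which is robust to background noise in a way that comparing raw waveforms is not.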
We're trying to do low-latency video on iOS while complying with Apple's HLS guidelines for video over cellular. From a technical perspective we can set our EXT-X-TARGETDURATION to N seconds, where N is less than 10 (think 2 or 3 seconds; see the example playlist below).
Practically, is this allowed? Does anyone have experience with an app getting approved while using HLS segments of 5 seconds or fewer? I have heard anecdotal evidence that this is not allowed.
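For concreteness, this is the shape of the media playlist we mean (segment names are placeholders):

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:2.0,
segment0.ts
#EXTINF:2.0,
segment1.ts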
Regarding Apple's rules, nothing I've seen has a restriction on segment minimums. However, there is a technical note that states:
"We strongly recommend a 10 second target duration."
The question is whether that recommendation is effectively an unstated rule.
Source: http://developer.apple.com/library/ios/#technotes/tn2224/_index.html
One further note: Apple provides a "Media Stream Validator Tool" which does not report errors on streams with target durations of 2 or 3 seconds. That further suggests such a stream is "valid".
Thank you for any experience/thoughts. Also, if you know of any approved apps that you believe use HLS segments of 5 seconds or fewer, that would be helpful, since I can check whether they do indeed use shorter durations and use that as a comp for Apple's approval.
I want to produce ultrasonic sound from an iPhone device.
How can I do that?
The iPhone 4S, at least, is rated for 20 Hz to 20,000 Hz, which means it is not specified to generate ultrasonic (above 20 kHz) frequencies.
I doubt that any mass-market iOS device will produce ultrasonic frequencies.
http://www.apple.com/iphone/iphone-4/specs.html
I know of no cellphones with ultrasonic transducers as standard. The small speaker on most phones may well have some response at ultrasonic frequencies, but the D/A converters and audio amplifiers will not.
You need special hardware.
Create the sound on your computer (http://www.blackcatsystems.com/software/audiotoolbox.html, maybe?), store it as a WAV (or any other compatible format), and play it back using the audio playback capabilities of iOS.
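A minimal playback sketch, assuming a pre-generated file bundled with the app ("tone" is a placeholder resource name; note that a 44.1 kHz WAV can only represent tones up to about 22 kHz, and the speaker/amplifier limits above apply regardless):

import AVFoundation

// Sketch: play back a pre-generated tone file from the app bundle.
// "tone" is a placeholder resource name.
func playTone() throws -> AVAudioPlayer {
    let url = Bundle.main.url(forResource: "tone", withExtension: "wav")!
    let player = try AVAudioPlayer(contentsOf: url)
    player.play()
    return player  // keep a strong reference, or playback stops immediately
}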
EDIT
OK, so the phone itself can't do it.
Build some custom hardware that can emit sound waves at frequencies above 20,000 Hz, and then develop an application to utilise that hardware.