I am trying to record the incoming and outgoing audio/video streams in my iOS app, which uses the GoogleWebRTC pod. So far I haven't found any API that allows recording audio/video streams. I tried to record the local video feed from the camera using AVCaptureSession, but doing so freezes the outgoing video stream, probably because multiple AVCaptureSession instances cannot use the same input device.
Is there any other way to record local and/or remote streams from WebRTC on iOS?
ReplayKit is one option, but it has limitations.
The alternative is to use the actual WebRTC library rather than the CocoaPods build, and extract the audio/video from each incoming stream.
Once you have the actual video/audio currently being displayed/played, you can combine and mux them as you see fit.
It is very complex, though.
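For the video side, a rough sketch of the frame-extraction idea: attach a custom RTCVideoRenderer to the remote (or local) track and hand each frame's pixel buffer to an AVAssetWriter. Exact type and method names vary between WebRTC builds, and the writer setup is omitted here, so treat this as an outline rather than a drop-in solution:

```swift
import WebRTC
import AVFoundation

// Sketch only: a custom RTCVideoRenderer that pulls the CVPixelBuffer out of each
// frame so it can be appended to an AVAssetWriterInputPixelBufferAdaptor elsewhere.
// Names and availability depend on the WebRTC build you link against.
final class RecordingRenderer: NSObject, RTCVideoRenderer {
    // Called with each frame's pixel buffer and its timestamp in nanoseconds.
    var onPixelBuffer: ((CVPixelBuffer, Int64) -> Void)?

    func setSize(_ size: CGSize) {
        // Configure the writer's video dimensions here if needed.
    }

    func renderFrame(_ frame: RTCVideoFrame?) {
        guard let frame = frame,
              let rtcBuffer = frame.buffer as? RTCCVPixelBuffer else { return }
        onPixelBuffer?(rtcBuffer.pixelBuffer, frame.timeStampNs)
    }
}

// Attach it to the track alongside your normal RTCEAGLVideoView / RTCMTLVideoView:
// remoteVideoTrack.add(recordingRenderer)
```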
Is it possible to feed a live Twilio phone call with an audio stream from a URL?
This audio stream should then be played back to the caller. For clarification: I do not want the stream of the call itself (using TwiML's <Stream>), nor do I want to play back a fixed audio file (using TwiML's <Play>); I would rather like to play from a stream to the caller.
We have a Bluetooth device that streams artificial audio data to an iOS app.
I say artificial because this 'sound' is not recorded, but synthesized by way of transfer functions applied to another signal. The generated audio data has a frequency range of 30 - 80 Hz.
The data is sampled at 500 Hz, as Int32 values from 0 -> 4096 (12 bit).
Question: using the Core Audio framework, what steps should I take to play this data back through the iOS device's speakers as it streams in (i.e. real-time playback)?
Yes, Core Audio (Audio Units, Audio Queue API) would be appropriate for near-real-time streaming playback (very short buffers). You will likely need to upsample your data to something more like 44.1 or 48 kHz, which are the typical iOS device hardware audio output rates.
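As a minimal sketch of that approach (using AVAudioEngine, which sits on top of the same Core Audio machinery, rather than a raw Audio Unit): upsample each incoming packet from 500 Hz to 48 kHz with simple linear interpolation, normalise the 0-4096 range to -1...1, and schedule the resulting buffer for playback. Buffer sizing, session interruptions and error handling are left out.

```swift
import AVFoundation

// Minimal sketch: upsample 500 Hz Int32 samples (0...4096) to 48 kHz floats
// and play them with AVAudioEngine.
final class SignalPlayer {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let inputRate: Double = 500
    private let outputRate: Double = 48_000
    private let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 1)!

    init() {
        try? AVAudioSession.sharedInstance().setCategory(.playback)
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: format)
        try? engine.start()
        player.play()
    }

    // Call this for every packet received from the Bluetooth device.
    func enqueue(samples: [Int32]) {
        guard samples.count > 1 else { return }
        let ratio = outputRate / inputRate
        let outCount = Int(Double(samples.count) * ratio)
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                            frameCapacity: AVAudioFrameCount(outCount)) else { return }
        buffer.frameLength = AVAudioFrameCount(outCount)
        let out = buffer.floatChannelData![0]
        for i in 0..<outCount {
            // Linear interpolation between neighbouring input samples.
            let pos = Double(i) / ratio
            let j = min(Int(pos), samples.count - 2)
            let frac = Float(pos - Double(j))
            let a = (Float(samples[j]) / 2048.0) - 1.0      // map 0...4096 to roughly -1...1
            let b = (Float(samples[j + 1]) / 2048.0) - 1.0
            out[i] = a + (b - a) * frac
        }
        player.scheduleBuffer(buffer, completionHandler: nil)
    }
}
```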
I want to be able to stream live video from an iOS device to a server. I tried to use an AVCaptureOutput that captures each frame as a CMSampleBuffer and appends it using an AVAssetWriter, but I don't know when or how to take the input from the file and send it to the server. How should it be formatted? How do I know when to send it?
Although I am not sharing any code, I will share the approach I used in one of my apps.
First way (the easy one): there are plenty of low-cost third-party libraries available for this.
Second way (the hard one): create small chunks of video, for example 2 seconds or less, keep them in a queue, and upload them to the server (don't use AFNetworking or plain HTTP requests, as that will slow the process down; use something socket-based such as a Node.js server). Keep a text file or database entry that tracks each chunk file and its sequence. Once the first chunk is uploaded you can use FFmpeg to build a video from it, appending each new chunk to the main video file as it arrives. If you then play that video on the device, you don't have to do anything more; it will automatically pick up the new part once the file changes on the server.
Thank you. I hope this helps.
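A rough sketch of the chunking step under those assumptions (2-second chunks, video only, placeholder file names, upload and error handling omitted):

```swift
import AVFoundation

// Sketch: write incoming sample buffers with AVAssetWriter and start a new
// 2-second file each time the current one fills up. Each finished chunk is
// handed to a callback, where it can be queued for upload.
final class ChunkedRecorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private var writer: AVAssetWriter?
    private var input: AVAssetWriterInput?
    private var chunkStart: CMTime = .invalid
    private var chunkIndex = 0
    private let chunkDuration = CMTime(seconds: 2, preferredTimescale: 600)

    var onChunkFinished: ((URL) -> Void)?   // e.g. hand the file to an uploader

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        if writer == nil || CMTimeSubtract(pts, chunkStart) >= chunkDuration {
            finishCurrentChunk()
            startNewChunk(at: pts)
        }
        if input?.isReadyForMoreMediaData == true {
            input?.append(sampleBuffer)
        }
    }

    private func startNewChunk(at time: CMTime) {
        chunkIndex += 1
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent("chunk\(chunkIndex).mp4")
        try? FileManager.default.removeItem(at: url)
        guard let writer = try? AVAssetWriter(outputURL: url, fileType: .mp4) else { return }
        let settings: [String: Any] = [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: 1280,
            AVVideoHeightKey: 720
        ]
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
        input.expectsMediaDataInRealTime = true
        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: time)
        self.writer = writer
        self.input = input
        chunkStart = time
    }

    private func finishCurrentChunk() {
        guard let writer = writer else { return }
        input?.markAsFinished()
        let url = writer.outputURL
        writer.finishWriting { [weak self] in
            self?.onChunkFinished?(url)     // upload the finished chunk here
        }
        self.writer = nil
        self.input = nil
    }
}
```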
I am planning to develop a voice-recording app for iOS. The app should provide record, stop, play, pause, rewind, and fast-forward functions in the user interface. While recording, the user should also have a playback option. Is it possible to develop such an app?
It might be possible; I have never tried it myself. But you could record multiple files, keep track of how many files have been recorded, and at the end merge them into one.
The algorithm I can think of is:
Start recording
Set FileRecordedNumber = 1
On pause, save the temporary file as FileRecord1.mp3 or similar
When you start recording again, increment FileRecordedNumber
On the next pause, save the temporary file as FileRecord2.mp3
On play or stop, loop over FileRecordedNumber and merge the sound files
Play the sound
You can start a new recording file whenever you continue recording after playback
Sorry I could not provide the code, but I found a couple of articles that might help you with merging sound files.
Combine two audio files into one in objective c
How to concatenate 2 or 3 audio files in iOS?
http://developer.telerik.com/featured/merging-audio-video-native-ios/
Good luck with your project.
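For the merge step itself, a minimal sketch using AVMutableComposition and AVAssetExportSession might look like this (the file URLs and the .m4a output format are placeholders; error handling is kept to a minimum):

```swift
import AVFoundation

// Concatenate the recorded part files (FileRecord1, FileRecord2, ...) into one
// audio track and export the result as a single file.
func mergeRecordings(urls: [URL], to outputURL: URL, completion: @escaping (Bool) -> Void) {
    let composition = AVMutableComposition()
    guard let track = composition.addMutableTrack(withMediaType: .audio,
                                                  preferredTrackID: kCMPersistentTrackID_Invalid) else {
        completion(false); return
    }

    var cursor = CMTime.zero
    for url in urls {
        let asset = AVURLAsset(url: url)
        guard let sourceTrack = asset.tracks(withMediaType: .audio).first else { continue }
        let range = CMTimeRange(start: .zero, duration: asset.duration)
        try? track.insertTimeRange(range, of: sourceTrack, at: cursor)
        cursor = CMTimeAdd(cursor, asset.duration)
    }

    guard let export = AVAssetExportSession(asset: composition,
                                            presetName: AVAssetExportPresetAppleM4A) else {
        completion(false); return
    }
    export.outputURL = outputURL
    export.outputFileType = .m4a
    export.exportAsynchronously {
        completion(export.status == .completed)
    }
}
```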
If I record audio in a noisy background and then compare the recorded audio file with a locally saved audio file, can I check whether both are the same song? This is on iPhone.
How do apps like Shazam and SoundHound generate a fingerprint of recorded audio on the iPhone?
Can anyone share knowledge of an algorithm to generate a fingerprint from an audio file in iOS?
Thanks
The fast (but possibly not very helpful) answer is: They probably do a Fourier Transform of the audio and compare. If you have never heard of Fourier before, you may want to find an API for it - the transform itself is not a programming problem, it's a math/signals problem.
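Purely as an illustration of that idea (not how Shazam actually works, and not performance-oriented; a real implementation would use Accelerate's FFT and hash spectrogram peaks over time), here is a naive magnitude spectrum plus a simple similarity score:

```swift
import Foundation

// Naive DFT magnitude spectrum; fine for illustration, far too slow for real use.
func magnitudeSpectrum(_ samples: [Float]) -> [Float] {
    let n = samples.count
    return (0..<n / 2).map { k in
        var re: Float = 0, im: Float = 0
        for (i, x) in samples.enumerated() {
            let phase = -2 * Float.pi * Float(k) * Float(i) / Float(n)
            re += x * cos(phase)
            im += x * sin(phase)
        }
        return sqrt(re * re + im * im)
    }
}

// Cosine similarity between two spectra: values near 1.0 mean a similar shape.
func similarity(_ a: [Float], _ b: [Float]) -> Float {
    let n = min(a.count, b.count)
    var dot: Float = 0, magA: Float = 0, magB: Float = 0
    for i in 0..<n {
        dot += a[i] * b[i]
        magA += a[i] * a[i]
        magB += b[i] * b[i]
    }
    return dot / (sqrt(magA) * sqrt(magB) + .ulpOfOne)
}
```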