I am trying to write an app in Apple Swift that monitors audio from the microphone and displays the volume level on a VU meter style graph. I know how to do it using AVAudioRecorder, but I don't want to record and save the audio, I just want to monitor and observe the volume levels, as I will be monitoring the audio for multiple hours and saving this to the phone would take up tons of space.
Can anybody lead me in the right direction as to how I can do this?
Thanks!
I do not have any code to show, as I am just looking for the right direction to go, not help debugging.
You can use AVCaptureSession:
add an input device (the microphone) using AVCaptureDeviceInput;
add an AVCaptureAudioDataOutput output, setting its sample buffer delegate.
Once you start the session, the delegate will receive audio samples that you can process however you wish.
Don't forget to ask permission before using the AVCaptureDevice!
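For concreteness, here's a minimal Swift sketch of that setup. The names (MicLevelMonitor, the queue label) are mine, it assumes microphone permission has already been granted, and it reads AVCaptureAudioChannel's averagePowerLevel for a VU-style value; you could also compute your own RMS from the sample buffers instead.

```swift
import AVFoundation

// Minimal sketch: monitor microphone levels without writing anything to disk.
// "MicLevelMonitor" and the queue label are illustrative names, not from any Apple API.
final class MicLevelMonitor: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "mic-level-monitor")

    func start() throws {
        // The app must already have microphone permission
        // (AVCaptureDevice.requestAccess(for: .audio)) and a usage string in Info.plist.
        guard let mic = AVCaptureDevice.default(for: .audio) else { return }
        let input = try AVCaptureDeviceInput(device: mic)
        if session.canAddInput(input) { session.addInput(input) }

        let output = AVCaptureAudioDataOutput()
        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(output) { session.addOutput(output) }

        session.startRunning()
    }

    func stop() { session.stopRunning() }

    // Called for every captured audio buffer; nothing is ever saved to a file.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // averagePowerLevel is reported in dB (0 = full scale) per channel.
        for channel in connection.audioChannels {
            let level = channel.averagePowerLevel
            DispatchQueue.main.async {
                // Feed `level` into the VU meter view here.
                print("level: \(level) dB")
            }
        }
    }
}
```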
I want to detect the main direction of the sound recorded from the iPhone. For example, I want to detect whether the sound comes from the "front" or the "rear" camera.
https://developer.apple.com/documentation/avfoundation/avaudiosessiondatasourcedescription
That page describes how to set the data source, but not how to detect it in real time.
UPDATE:
Example use:
I start recording with the front and back cameras at the same time. I want to detect whether the audio comes from the front or the rear so I can switch cameras automatically.
Is there any way?
Thanks!
You can iterate over AVAudioSession.availableInputs (and each port's dataSources) to check the available sources and obtain the one you want, then set it with AVAudioSession.setPreferredInput(_:) and the port's setPreferredDataSource(_:). If you don't need to set the input but just check it, use AVAudioSession.currentRoute.inputs.
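A rough Swift sketch of both halves (checking the current route, then preferring the front-facing built-in mic). Error handling is omitted, and the .front/.back orientation checks assume a device with multiple built-in microphones:

```swift
import AVFoundation

// Minimal sketch: inspect the active input and prefer the front-facing built-in mic.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, mode: .default, options: [])
try session.setActive(true)

// What is currently feeding the session?
for port in session.currentRoute.inputs {
    print("Active input: \(port.portName)")
    if let source = port.selectedDataSource {
        // Built-in mic data sources report orientations such as .front, .back or .bottom.
        print("Data source: \(source.dataSourceName), orientation: \(source.orientation?.rawValue ?? "unknown")")
    }
}

// Prefer a particular built-in data source, e.g. the front-facing mic.
if let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }),
   let front = builtInMic.dataSources?.first(where: { $0.orientation == .front }) {
    try builtInMic.setPreferredDataSource(front)
    try session.setPreferredInput(builtInMic)
}
```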
There is a well-known talking cat app for iOS devices, in which you speak and it repeats what you said. Analyzing this app, you'll see that it stops talking when you stop talking; that is, it stops capturing audio when it no longer receives a voice.
I was analyzing the methods of the AVAudioRecorder class and did not find any method with which to detect when the user stops talking or the recorder stops receiving external audio.
How can I detect when the audio recorder stops receiving audio?
Process the audio stream as it is coming through. You can look at the frequency and volume of the stream. From there you can determine if the user has stopped talking.
I suggest frequency and volume because the recorder still picks up background audio. If the volume drops dramatically, the sounds the recorder is picking up must be farther from the device than before. Frequency can also help with:
A.) Filtering the background audio out of the audio that you replay with a pitch change or any other effects, etc.
B.) I don't know the frequency limits of the average human voice, but this covers the case where the user has stopped talking but has moved the device in such a way that the recorder still picks up loud shuffling from fingers moving near the mic.
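For the volume half of that suggestion, here's a rough sketch using an AVAudioEngine tap. SilenceDetector is just an illustrative name, and the RMS threshold (0.01) and the one-second timeout are placeholder values you would tune empirically:

```swift
import AVFoundation

// Rough sketch: treat sustained low RMS as "the user stopped talking".
final class SilenceDetector {
    private let engine = AVAudioEngine()
    private var lastLoudFrame = Date()

    var onSilence: (() -> Void)?

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        input.installTap(onBus: 0, bufferSize: 1024, format: format) { [weak self] buffer, _ in
            guard let self = self, let samples = buffer.floatChannelData?[0] else { return }
            let count = Int(buffer.frameLength)

            // RMS of the buffer as a crude loudness measure.
            var sum: Float = 0
            for i in 0..<count { sum += samples[i] * samples[i] }
            let rms = sqrt(sum / Float(max(count, 1)))

            if rms > 0.01 {                       // placeholder "someone is talking" threshold
                self.lastLoudFrame = Date()
            } else if Date().timeIntervalSince(self.lastLoudFrame) > 1.0 {
                DispatchQueue.main.async { self.onSilence?() }   // e.g. stop the cat's recording
            }
        }
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }
}
```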
I need some guidance as I may have to shelve development until a later time.
I want to play a sound once the lights are switched off and the room goes dark, then stop the sound once the light is switched back on. I've discovered that Apple doesn't currently provide a way to access the ambient light sensor (not in any way that will get App Store approval).
The alternative I've been working on is to try to detect sound levels (using AVAudioPlayer/AVAudioRecorder and the example code from http://mobileorchard.com/tutorial-detecting-when-a-user-blows-into-the-mic/). That is, when I detect that the voices of the people in the room have dropped to a specific level (i.e., silence, while trying to compensate for background noise), I play my sounds.
However, if the people in the room start talking again and I detect the voices, I need to stop playing the sounds.
Q: Is this self-defeating, i.e., will the sound generated by the iPhone essentially be picked up by the iPhone's microphone and be indistinguishable from any voices in the room? Methinks yes, and unless there's an alternative approach to this, I'm at an impasse until a light sensor API is opened up by Apple.
I don't think the noise made by the iPhone speaker will be picked up by the mic. The phone cancels sounds generated by its own speaker. I read this once, and if I find the source I'll post it. Empirically, though, you can tell this is the case when you use speakerphone: if the mic picked up sound from a speaker that's an inch away from it, the feedback would be terrible.
Having said that, the only sure way to see if it will work for your situation is to try it out.
I agree with woz: the phone should cancel the sound it's emitting. About the ambient light sensor, the only alternative I see is using the camera, but it would be very energy inefficient, and would require the app to be launched.
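If you would rather opt into that cancellation explicitly than rely on it happening implicitly, iOS 13+ lets you enable Apple's voice-processing I/O on AVAudioEngine, which is meant to remove the device's own playback from the mic signal. This is an extra option beyond what the answers above describe, so treat the following as a sketch under that assumption:

```swift
import AVFoundation

// Sketch (iOS 13+): explicitly enable voice processing so the device's own
// playback is cancelled out of the microphone signal before your code sees it.
let engine = AVAudioEngine()
do {
    // Must be enabled before the engine starts.
    try engine.inputNode.setVoiceProcessingEnabled(true)

    let format = engine.inputNode.outputFormat(forBus: 0)
    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        // `buffer` should now contain the room audio with the phone's output removed,
        // so your level detection isn't fooled by the sounds you are playing.
    }
    try engine.start()
} catch {
    print("Voice processing unavailable: \(error)")
}
```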
I want to create an application for measuring the sound input on an iPad.
Users will have to scream into the iPad mic and a gauge will display the scream level. So I will have to get the mic volume and display the value on the gauge.
You can use the AVAudioRecorder for that. Have a look at this tutorial: Tutorial: Detecting When A User Blows Into The Mic
In Apple's documentation, focus on:
Using Audio Level Metering
meteringEnabled property
updateMeters
peakPowerForChannel:
averagePowerForChannel:
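Put together, a rough sketch looks something like this. ScreamMeter, the timer interval, and the temporary file name are illustrative choices of mine, and the metering values come back in dB relative to full scale (roughly -160 to 0), so you'll need to map them onto your gauge's range:

```swift
import AVFoundation

// Minimal sketch of AVAudioRecorder level metering for a "scream gauge".
final class ScreamMeter {
    private var recorder: AVAudioRecorder?
    private var meterTimer: Timer?

    func start() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .default, options: [])
        try session.setActive(true)

        // The recorder needs a URL even if you never use the file; a temp file keeps it out of the way.
        let url = FileManager.default.temporaryDirectory.appendingPathComponent("metering.caf")
        let settings: [String: Any] = [
            AVFormatIDKey: Int(kAudioFormatAppleIMA4),
            AVSampleRateKey: 44_100.0,
            AVNumberOfChannelsKey: 1
        ]

        let recorder = try AVAudioRecorder(url: url, settings: settings)
        recorder.isMeteringEnabled = true          // "meteringEnabled" from the docs above
        recorder.record()
        self.recorder = recorder

        meterTimer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
            guard let recorder = self?.recorder else { return }
            recorder.updateMeters()
            let average = recorder.averagePower(forChannel: 0)   // dB, 0 = full scale
            let peak = recorder.peakPower(forChannel: 0)
            // Map `average`/`peak` to the gauge here.
            print("avg: \(average) dB, peak: \(peak) dB")
        }
    }

    func stop() {
        meterTimer?.invalidate()
        recorder?.stop()
        recorder = nil
    }
}
```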
Although I've searched SO and read the documentation multiple times on AVCaptureConnection, AVCaptureSession, AVCaptureVideoPreviewLayer, AVCaptureDevice, and AVCaptureInput/Output … I'm still confused about all this AV stuff. When it comes to this, it's one big pile of abstract words to me that don't make much sense. I'm asking someone here to shed some light on the subject for me.
So, can anyone explain coherently in plain english the logic of proper setup and use of the media devices? What is AVCaptureVideoPreviewLayer? What is AVCaptureConnection? Input/Output?
I want to catch the basic idea the people who made this stuff had while making it.
Thanks
I wish I had more time to write a more thorough reply. Here are some simplified basics:
In order to work with audio and video coming from the hardware, destined for the screen or for files, you need to set up an AVCaptureSession, which helps coordinate the sources and the destinations using AVCaptureConnections. You use the session instance to start and stop the process, along with setting some output properties like bitrate and quality. You use the AVCaptureConnection instance(s) to control the connection between an AVCaptureInputPort and an AVCaptureOutput (or AVCaptureVideoPreviewLayer), such as monitoring audio input levels or setting the orientation of the video.
AVCaptureInputPorts are the individual streams of data coming from an AVCaptureDevice - which is where your video or audio originates, such as the camera or the microphone. You will normally look through all available devices and choose the ones that have the properties you are looking for, such as whether they capture audio, or whether they are the front-facing camera.
AVCaptureOutput is where the AV is sent - it might be a file or a routine that allows you to process the data in real-time, etc.
AVCaptureVideoPreviewLayer is a Core Animation layer optimized for very fast rendering of the output of the selected video input device (front or back camera). You typically use it to show your user what input you are working with - sort of like a camera viewfinder.
If you are going to use this stuff, then you must read Apple's AV Foundation Programming Guide
The above-mentioned guide includes an overview diagram of a capture session, plus a more detailed view of the same pipeline, which show how these pieces fit together.
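To make the moving parts concrete, here's a minimal, illustrative Swift sketch that wires a camera input into a video data output and a preview layer. CaptureSetupExample is a made-up name, and permission checks and error presentation are omitted:

```swift
import AVFoundation
import UIKit

// Session coordinates everything; inputs wrap devices; outputs receive data;
// the preview layer shows the user what the selected camera sees.
final class CaptureSetupExample: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "capture-example")

    func configure(previewIn view: UIView) throws {
        session.beginConfiguration()

        // Input: a camera (an AVCaptureDevice wrapped in an AVCaptureDeviceInput).
        if let camera = AVCaptureDevice.default(for: .video) {
            let cameraInput = try AVCaptureDeviceInput(device: camera)
            if session.canAddInput(cameraInput) { session.addInput(cameraInput) }
        }

        // Output: frames delivered to a delegate for real-time processing.
        let videoOutput = AVCaptureVideoDataOutput()
        videoOutput.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(videoOutput) { session.addOutput(videoOutput) }

        session.commitConfiguration()

        // Preview layer: the "viewfinder" drawing the selected camera's feed.
        let preview = AVCaptureVideoPreviewLayer(session: session)
        preview.frame = view.bounds
        preview.videoGravity = .resizeAspectFill
        view.layer.addSublayer(preview)

        // AVCaptureConnections are created for you when inputs/outputs are added;
        // you can inspect or adjust them, e.g. videoOutput.connection(with: .video).
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Each captured frame arrives here as a CMSampleBuffer.
    }
}
```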