I'm working with an audio app and I'm using the amazing AudioKit.
Now, I want to color my app sounds with the eq-curve of an old speaker cabinet. I have this reference audio file which contains white noise that was played back through this speaker cabinet and then recorded with a mic.
I've manually analyzed the file in a music DAW and created a parametric EQ with 60 AKEqualizerFilter nodes, but there must be a better way, both time-wise and performance-wise. I've also tried AKConvolution by sending it a very short clip of the filtered white noise, but the most audible result is the very short slapback it produces.
Any tips would be great! Thanks
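For reference, this is the kind of wiring I mean with AKConvolution (AudioKit 4.x; exact signatures may differ between versions, and `appOutput`, "noise_clip.wav" and the partition length are placeholders for my actual node, impulse file and settings):

```swift
import Foundation
import AudioKit

// Rough outline of the convolution attempt (AudioKit 4.x, placeholder names).
// "noise_clip.wav" is the short clip of filtered white noise mentioned above;
// `appOutput` stands for whatever AKNode produces the app's sound.
let impulseURL = Bundle.main.url(forResource: "noise_clip", withExtension: "wav")!

let convolution = AKConvolution(appOutput,
                                impulseResponseFileURL: impulseURL,
                                partitionLength: 8192)

AudioKit.output = convolution
try AudioKit.start()
convolution.start()
```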
I am developing an app for iOS devices that is supposed to display waveforms of music files like SoundCloud does. The problem is that I have only achieved waveform generation for a fully downloaded file; how do I generate a waveform for streaming audio during its playback? If anyone is aware of how SoundCloud presents its waveforms, please reply.
If we are talking about SoundCloud specifically, my guess is that for displaying an audio waveform they are working with pre-computed metadata for each track. Why might that be right? Because the waveform is drawn for each track even before playing it, without waiting for it to stream. Applying the same approach might be a suitable solution for your issue.
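A rough sketch of building that metadata yourself: decode the file once, reduce it to a fixed number of peak values, and store those alongside the track so the waveform can be drawn before playback. The function name and bucket count below are just placeholders.

```swift
import AVFoundation

// A minimal sketch: reduce a decoded audio file to N peak values that can be
// stored as "waveform metadata" and drawn before playback even starts.
func waveformPeaks(for url: URL, bucketCount: Int = 200) throws -> [Float] {
    let file = try AVAudioFile(forReading: url)
    let frameCount = AVAudioFrameCount(file.length)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: frameCount) else { return [] }
    try file.read(into: buffer)

    // processingFormat is non-interleaved Float32, so channel 0 is plain Float samples.
    guard let samples = buffer.floatChannelData?[0] else { return [] }
    let total = Int(buffer.frameLength)
    let bucketSize = max(1, total / bucketCount)

    // One peak (maximum absolute sample) per bucket.
    return stride(from: 0, to: total, by: bucketSize).map { start in
        let end = min(start + bucketSize, total)
        var peak: Float = 0
        for i in start..<end { peak = max(peak, abs(samples[i])) }
        return peak
    }
}
```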
However, I suggest checking out this library; it might contain what you are looking for (drawing the waveform while streaming the audio file).
Checking this Q&A might also be helpful for your case.
Hope this helped.
I'm looking for a way to create an audio bars visualizer similar to this in iOS.
Every white bar will move up and down depending on the audio wave. I'm really lost because I don't have much experience dealing with audio in Objective-C.
EDIT: What I'm seeking is what Overcast's app does in its visualizer (the group of vertical orange bars on the lower part of the podcast's image).
Can anyone help?
Thanks
EDIT: Thanks to Tomer's answer I finally made it. First I did this tutorial in order to make it all clear. Then I created my own VisualizerView for my project; you can find it in this gist. Maybe it's not perfect, but it does what I needed it to do.
Generally, you have a few options if you want to get an idea of what something sounds like in iOS:
Use the simple AVAudioPlayer audio player, and then use the [audioPlayer averagePowerForChannel:] method to get the average audio level for the current moment (see the sketch after this list). Check out this tutorial.
Use the Audio Queue API, which lets you send whatever audio you want to the speaker: you would read audio from your source and fill the buffers with it every time (if you're reading from a file, use AVAssetReader). This way you always know exactly what waveform you're playing, so you can, for example, calculate its average power or process it in other ways like FFT. Then you'd update the bars accordingly.
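For the first option, here's a minimal metering sketch; the file URL, the timer that calls `currentLevel()`, and the dB-to-height mapping are all assumptions you'd adapt to your UI:

```swift
import AVFoundation

// A minimal metering sketch: drive bar heights from AVAudioPlayer's level meters.
final class LevelMeter {
    private var player: AVAudioPlayer?

    func start(url: URL) throws {
        let player = try AVAudioPlayer(contentsOf: url)
        player.isMeteringEnabled = true   // metering must be enabled before use
        player.play()
        self.player = player
    }

    // Call this from a Timer or CADisplayLink and feed the result to your bars.
    func currentLevel() -> Float {
        guard let player = player else { return 0 }
        player.updateMeters()
        // averagePower(forChannel:) returns decibels, roughly -160...0.
        let db = player.averagePower(forChannel: 0)
        // Map the last 60 dB onto 0...1 for a bar height (arbitrary choice).
        return max(0, min(1, (db + 60) / 60))
    }
}
```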
EDIT: The standard way of doing such a thing is to use the Fast Fourier Transform (FFT) - it extracts frequency information from a sound. Here's a good example of using it on iOS (Apple's guide here). But, of course, to use it you have to know exactly what waveform you're playing every time, so you'd probably want to use a lower-level API such as Audio Queue.
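And a rough sketch of pulling a magnitude spectrum out of a buffer with Accelerate's FFT, assuming you already have a power-of-two chunk of mono Float samples from one of the approaches above:

```swift
import Foundation
import Accelerate

// A minimal sketch: squared-magnitude spectrum of a mono buffer whose length
// is a power of two (e.g. 1024 samples pulled from an Audio Queue callback).
func magnitudeSpectrum(of samples: [Float]) -> [Float] {
    let n = samples.count
    let log2n = vDSP_Length(log2(Float(n)))
    guard let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { return [] }
    defer { vDSP_destroy_fftsetup(setup) }

    var real = [Float](repeating: 0, count: n / 2)
    var imag = [Float](repeating: 0, count: n / 2)
    var magnitudes = [Float](repeating: 0, count: n / 2)

    real.withUnsafeMutableBufferPointer { realPtr in
        imag.withUnsafeMutableBufferPointer { imagPtr in
            var split = DSPSplitComplex(realp: realPtr.baseAddress!,
                                        imagp: imagPtr.baseAddress!)
            // Pack the real samples into the split-complex layout the FFT expects.
            samples.withUnsafeBufferPointer { samplesPtr in
                samplesPtr.baseAddress!.withMemoryRebound(to: DSPComplex.self,
                                                          capacity: n / 2) {
                    vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))
                }
            }
            // In-place forward FFT of the real signal.
            vDSP_fft_zrip(setup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
            // Squared magnitude per frequency bin; bin i covers i * sampleRate / n Hz.
            vDSP_zvmagsq(&split, 1, &magnitudes, 1, vDSP_Length(n / 2))
        }
    }
    return magnitudes
}
```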
I am recording videos and playing them back using AVFoundation. Everything is perfect except the hissing that runs through the whole video. You can hear this hissing in every video captured from any iPad; even videos captured with Apple's built-in camera app have it.
To hear it clearly, record a video in a place as quiet as possible without speaking. It can be detected very easily through headphones with the volume at maximum.
After researching, I found out that this hissing is produced by the device's preamplifier and cannot be avoided while recording.
The only possible solution is to remove it during post-processing of the audio. Noise like this can be reduced by filtering (for example, a low-pass filter to tame the high-frequency hiss) and noise gates. Applications and software like Adobe Audition can perform this operation; this video shows how it is achieved using Adobe Audition.
I have searched the Apple docs and found nothing which can achieve this directly. So I want to know if there exists any library, API or open-source project which can perform this operation. If not, how can I start going in the right direction? It does look like a complex task.
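EDIT: One direction I'm considering (not sure it's the right one) is playing the recording through AVAudioEngine with an AVAudioUnitEQ band set up as a low-pass filter. A rough sketch of what I mean, with the 8 kHz cutoff chosen arbitrarily:

```swift
import AVFoundation

// Rough sketch: play a recorded file through an AVAudioUnitEQ low-pass band
// to attenuate high-frequency hiss. Cutoff and file handling are placeholders.
func playFiltered(fileURL: URL) throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let eq = AVAudioUnitEQ(numberOfBands: 1)

    let band = eq.bands[0]
    band.filterType = .lowPass
    band.frequency = 8000     // attenuate content above ~8 kHz (tune by ear)
    band.bypass = false

    engine.attach(player)
    engine.attach(eq)

    let file = try AVAudioFile(forReading: fileURL)
    engine.connect(player, to: eq, format: file.processingFormat)
    engine.connect(eq, to: engine.mainMixerNode, format: file.processingFormat)

    try engine.start()
    player.scheduleFile(file, at: nil)
    player.play()
    return engine   // keep a strong reference to the engine while playing
}
```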
I'm using AVMutableComposition and AVAssetExportSession to composite several discrete audio clips/files together into a single file, similarly to this post, but there will be no "video" track. I'd like to give the track some visual appeal using a still image, so that when the user plays the clip they don't just see a generic QuickTime icon; ideally I'd replace the image with branding or something relevant to the audio content. How would I go about doing this, and is there a way to do it without dramatically increasing file size (i.e. some way to have a really slow frame rate, or just something so it's not generating 30 fps for non-moving art)? Appreciate any help on this.
AVAssetWriter will allow you to create video from a still image. This question provides a great example of how to do so.
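A condensed sketch of that idea (not taken from the linked question; the H.264 settings, helper name and the two-frame trick are my assumptions): render the still image into one pixel buffer and append it at only two presentation times, so the encoder never has to produce 30 fps of identical frames.

```swift
import Foundation
import AVFoundation
import CoreGraphics
import CoreVideo

// Sketch: write a still CGImage as a video track spanning `duration` seconds
// using only two frames, keeping the file size tiny.
func writeStillImageVideo(image: CGImage, to outputURL: URL, duration: Double) throws {
    let width = image.width, height = image.height

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    // Render the CGImage into a single pixel buffer.
    var pixelBuffer: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32ARGB, nil, &pixelBuffer)
    guard let buffer = pixelBuffer else { throw NSError(domain: "StillVideo", code: -1) }
    CVPixelBufferLockBaseAddress(buffer, [])
    let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                            width: width, height: height,
                            bitsPerComponent: 8,
                            bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    context?.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
    CVPixelBufferUnlockBaseAddress(buffer, [])

    // Two appends: one at time zero and one near the end of the track.
    while !input.isReadyForMoreMediaData { usleep(10_000) }
    adaptor.append(buffer, withPresentationTime: .zero)
    while !input.isReadyForMoreMediaData { usleep(10_000) }
    adaptor.append(buffer, withPresentationTime: CMTime(seconds: duration,
                                                        preferredTimescale: 600))

    input.markAsFinished()
    writer.finishWriting { /* combine with your audio composition afterwards */ }
}
```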
In my BlackBerry application I need to play/record sound and simultaneously show a sound graphic analyser (as shown in the attached image) within the application. I have searched the forums but have found nothing significant.
I want to show graphics like these when playing or recording music, depending on the pitch of the sound. Is this possible?
As far as I know there is no API in the RIM SDK to obtain such data while playing a media file. But you can analyze the sound file contents yourself, draw the diagram, and implement a "cursor" (the vertical green line in your image) based on the time elapsed since the sound started playing.