AudioKit AKPitchShifter & AKTimePitch Pitch Correction - iOS

I'm trying to get an autotune-like sound from AKPitchShifter, but the most I get is a chipmunk-type sound. I've played with different combinations of AKTimePitch.pitch and AKPitchShifter.shift, both individually and together, but everything comes out squeaky and too robotic.
I'm new to this library. Is there anything I can add, such as other AudioKit classes, to get the sound closer to autotune?
do {
    let file = try AKAudioFile(readFileName: "someones-voice.wav")
    let player = try AKAudioPlayer(file: file)
    player.looping = true

    let timePitch = AKTimePitch(player)
    timePitch.pitch = 0.5
    AKManager.output = timePitch

    let pitchShifter = AKPitchShifter(player)
    pitchShifter.shift = 1.5
    AKManager.output = pitchShifter

    try AKManager.start()
    player.play()
} catch {
    print(error.localizedDescription)
}

Resolved on GitHub through a pull request addressing a few errors: https://github.com/lsamaria/AutoTuneSampler/pull/3
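For reference, here is a minimal sketch of routing the player through both effects in series instead of assigning AKManager.output twice (only the last assignment takes effect, so the code above never actually hears AKTimePitch). The parameter values are only illustrative assumptions: AKTimePitch.pitch is specified in cents and AKPitchShifter.shift in semitones, so tiny values like 0.5 and 1.5 barely change the sound.
do {
    let file = try AKAudioFile(readFileName: "someones-voice.wav")
    let player = try AKAudioPlayer(file: file)
    player.looping = true

    // Chain player -> timePitch -> pitchShifter, then set a single output.
    let timePitch = AKTimePitch(player)
    timePitch.pitch = -200                 // cents (illustrative value)

    let pitchShifter = AKPitchShifter(timePitch)
    pitchShifter.shift = 3                 // semitones (illustrative value)

    AKManager.output = pitchShifter
    try AKManager.start()
    player.play()
} catch {
    print(error.localizedDescription)
}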

Related

AudioKit AKFFTTap generates an array of 0s in offline rendering mode

I'm trying to use AudioKit and its AKFFTTap to get the FFT data of an audio file.
I manage to get it in real-time processing, but as soon as I do it in offline rendering mode the generated data are all 0.
So I was wondering: is it possible to get the FFT data in offline rendering mode?
Here is the code I use:
class OfflineProcessingClass {

    var tracker: AKFrequencyTracker!
    var fftTap: AKFFTTap!
    // ....

    private func process(audioFile: AKAudioFile) throws {
        // Make connections
        let player = try AKAudioPlayer(file: audioFile)
        tracker = AKFrequencyTracker(player)
        fftTap = AKFFTTap(tracker)
        AudioKit.output = tracker

        // Set up offline rendering mode
        let timeIntervalInSeconds: TimeInterval = 0.1
        let sampleInterval = Int(floor(timeIntervalInSeconds * audioFile.sampleRate))
        try AudioKit.engine.enableManualRenderingMode(
            .offline,
            format: audioFile.fileFormat,
            maximumFrameCount: AVAudioFrameCount(sampleInterval)
        )

        // Set up buffer
        let buffer = AVAudioPCMBuffer(
            pcmFormat: AudioKit.engine.manualRenderingFormat,
            frameCapacity: AudioKit.engine.manualRenderingMaximumFrameCount
        )!

        // Start processing
        try AudioKit.start()
        player.start()

        // Read file offline
        while AudioKit.engine.manualRenderingSampleTime < audioFile.length {
            let frameCount = audioFile.length - AudioKit.engine.manualRenderingSampleTime
            let framesToRender = min(AVAudioFrameCount(frameCount), buffer.frameCapacity)
            try! AudioKit.engine.renderOffline(framesToRender, to: buffer)

            // Tracker data is good
            print("\(tracker.amplitude) dB - \(tracker!.frequency) Hz")

            // Array of 0s
            print(fftTap.fftData) /////////////// <====== Error is here
        }

        // End processing
        player.stop()
        AudioKit.engine.stop()
    }
}
Do you see something wrong in this code?
This is because handleTapBlock in BaseTap does a dispatch async onto the main queue. Since your while loop occupies the main queue, the BaseTap never gets the opportunity to receive any callbacks. You'll need to relinquish the main queue for that to work.
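One possible shape of this, reusing the names from the code above, is a minimal sketch that moves the render loop onto a background queue so the tap's main-queue callbacks can actually run:
DispatchQueue.global(qos: .userInitiated).async {
    // Offline render loop, off the main thread
    while AudioKit.engine.manualRenderingSampleTime < audioFile.length {
        let frameCount = audioFile.length - AudioKit.engine.manualRenderingSampleTime
        let framesToRender = min(AVAudioFrameCount(frameCount), buffer.frameCapacity)
        try? AudioKit.engine.renderOffline(framesToRender, to: buffer)
    }
    // Back on the main queue, the tap has had a chance to deliver data
    DispatchQueue.main.async {
        print(self.fftTap.fftData)
        player.stop()
        AudioKit.engine.stop()
    }
}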

WebRTC(iOS): local video is not getting stream on remote side

I am trying to make an app with audio and video calls using WebRTC.
Remote video and audio are working properly in my app, but my local stream is not appearing on the remote client's side.
Here is what I have written to add a video track:
let videoSource = self.rtcPeerFactory.videoSource()
let videoCapturer = RTCCameraVideoCapturer(delegate: videoSource)

guard let frontCamera = (RTCCameraVideoCapturer.captureDevices().first { $0.position == .front }),
    // choose highest resolution
    let format = (RTCCameraVideoCapturer.supportedFormats(for: frontCamera).sorted { (f1, f2) -> Bool in
        let width1 = CMVideoFormatDescriptionGetDimensions(f1.formatDescription).width
        let width2 = CMVideoFormatDescriptionGetDimensions(f2.formatDescription).width
        return width1 < width2
    }).last,
    // choose highest fps
    let fps = (format.videoSupportedFrameRateRanges.sorted { return $0.maxFrameRate < $1.maxFrameRate }.last) else {
        print(.error, "Error in createLocalVideoTrack")
        return nil
}

videoCapturer.startCapture(with: frontCamera,
                           format: format,
                           fps: Int(fps.maxFrameRate))

self.callManagerDelegate?.didAddLocalVideoTrack(videoTrack: videoCapturer)

let videoTrack = self.rtcPeerFactory.videoTrack(with: videoSource, trackId: K.CONSTANT.VIDEO_TRACK_ID)
And this is to add an audio track:
let constraints: RTCMediaConstraints = RTCMediaConstraints.init(mandatoryConstraints: [:], optionalConstraints: nil)
let audioSource: RTCAudioSource = self.rtcPeerFactory.audioSource(with: constraints)
let audioTrack: RTCAudioTrack = self.rtcPeerFactory.audioTrack(with: audioSource, trackId: K.CONSTANT.AUDIO_TRACK_ID)
My full WebRTC log is attached here.
Some logs I am getting (I think something is wrong here):
(thread.cc:303): Waiting for the thread to join, but blocking calls have been disallowed
(basic_port_allocator.cc:1035): Port[31aba00:0:1:0:relay:Net[ipsec4:2405:204:8888:x:x:x:x:x/64:VPN/Unknown:id=2]]: Port encountered error while gathering candidates.
...
(basic_port_allocator.cc:1017): Port[38d7400:audio:1:0:local:Net[en0:192.168.1.x/24:Wifi:id=1]]: Port completed gathering candidates.
(basic_port_allocator.cc:1035): Port[3902c00:video:1:0:relay:Net[ipsec5:2405:204:8888:x:x:x:x:x/64:VPN/Unknown:id=3]]: Port encountered error while gathering candidates.
Finally, I found the solution:
it was due to the TCP protocol in the TURN server.
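For illustration only, a minimal sketch of pointing the peer connection at a UDP TURN transport instead of TCP; the TURN URL and credentials below are placeholders, not taken from the original question:
// Hypothetical TURN server; the point is the ?transport=udp suffix
// instead of ?transport=tcp when TCP relaying is the problem.
let turnServer = RTCIceServer(urlStrings: ["turn:turn.example.com:3478?transport=udp"],
                              username: "user",
                              credential: "pass")
let config = RTCConfiguration()
config.iceServers = [turnServer]
let peerConnection = rtcPeerFactory.peerConnection(with: config,
                                                   constraints: RTCMediaConstraints(mandatoryConstraints: nil,
                                                                                    optionalConstraints: nil),
                                                   delegate: self)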

How to change CreationDate Resource Value

I'm writing a recording app that enables the user to trim parts of previous recordings and concatenate them into one new recording.
My problem is: let's say I recorded an hour-long track and I want to trim the first 2 minutes of it. When I export those 2 minutes, the creation date of the new track will be "now", but I need it to match the date those 2 minutes actually took place.
So basically I'm trying to modify the track's URL resource values, but I want to change only the creation date.
Is there a way to do this? Or is there a way to add a new resource value key, or a way to attach the needed date to the URL?
func trimStatringPoint(_ from: Date, startOffSet: TimeInterval, duration: TimeInterval, fileName: String, file: URL, completion: fileExportaionBlock?) {
    if let asset = AVURLAsset(url: file) as AVAsset? {
        var trimmedFileUrl = documentsDirectory().appendingPathComponent(fileName)

        let exporter = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A)
        exporter?.outputFileType = AVFileTypeAppleM4A
        exporter?.outputURL = trimmedFileUrl

        let start = CMTimeMake(Int64(startOffSet), 1)
        let end = CMTimeMake(Int64(startOffSet + duration), 1)
        exporter?.timeRange = CMTimeRangeFromTimeToTime(start, end)

        exporter?.exportAsynchronously {
            if exporter?.status != AVAssetExportSessionStatus.completed {
                print("Error while exporting \(exporter?.error?.localizedDescription ?? "unknown")")
                completion?(nil)
                return
            }
        }

        //------------------------------------------------------
        // this code needs to be replaced
        do {
            var resourceValus = URLResourceValues()
            resourceValus.creationDate = from
            try trimmedFileUrl.setResourceValues(resourceValus)
        } catch {
            deleteFile(atPath: trimmedFileUrl)
            print("Error while setting date - \(error.localizedDescription)")
            completion?(nil)
            return
        }
        //------------------------------------------------------

        completion?(trimmedFileUrl)
    }
}
Have you tried modifying the metadata of the exported recording?
https://developer.apple.com/documentation/avfoundation/avmetadatacommonkeycreationdate
AVMutableMetadataItem *metaItem = [AVMutableMetadataItem metadataItem];
metaItem.key = AVMetadataCommonKeyCreationDate;
metaItem.keySpace = AVMetadataKeySpaceCommon;
metaItem.value = [NSDate date];

NSArray *metadata = @[metaItem];

AVAssetExportSession *exportSession = [AVAssetExportSession exportSessionWithAsset:composition presetName:AVAssetExportPresetMediumQuality];
exportSession.metadata = metadata;
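A rough Swift equivalent adapted to the exporter in the question, using the `from` parameter as the date; the constant names assume the same older string-based AVFoundation constants the question's code already uses:
// Set the metadata before calling exportAsynchronously
let metaItem = AVMutableMetadataItem()
metaItem.keySpace = AVMetadataKeySpaceCommon
metaItem.key = AVMetadataCommonKeyCreationDate as NSString
metaItem.value = from as NSDate
exporter?.metadata = [metaItem]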

Changing advice language doesn't work

I'm new to the Skobbler SDK and learning with the Swift demo plus the well-documented tutorial (http://developer.skobbler.com/getting-started/ios#sec01).
However, I still can't configure the advice language settings using their instructions.
Here is my code:
let settings = SKAdvisorSettings()
settings.advisorVoice = "fr"
settings.language = SKAdvisorLanguage.FR
settings.advisorType = SKAdvisorType.AudioFiles
settings.resourcesPath = NSBundle.mainBundle().resourcePath! + "/SKMaps.bundle/AdvisorConfigs/Languages"
The event handler is defined by:
func routingService(routingService: SKRoutingService!, didChangeCurrentAdvice currentAdvice: SKRouteAdvice!, isLastAdvice: Bool) {
    NSLog("New advice " + currentAdvice.adviceInstruction)
}
So I get "in 90 meters turn right", for instance, still in English.
By the way, no audio files are played either.
Could you please give me a hand? :) Thank you in advance.
There is a bug in the code that is supposed to play the audio advice (in AudioService.m): the name of the .mp3 file was not built correctly.
I've fixed this by making the following change:
func playAudioFile(audioFileName: String) {
    var soundFilePath: String = audioFilesFolderPath + "/" + audioFileName
    soundFilePath = soundFilePath + ".mp3"

    if !NSFileManager.defaultManager().fileExistsAtPath(soundFilePath) {
        return
    } else {
        audioPlayer = try? AVAudioPlayer(contentsOfURL: NSURL(fileURLWithPath: soundFilePath), fileTypeHint: nil)
        audioPlayer.delegate = self
        audioPlayer.play()
    }
}
This affected only the swift demo and will be fixed in the next update
OK, I found my mistake by replacing:
settings.advisorType = SKAdvisorType.AudioFiles
with
settings.advisorType = SKAdvisorType.TextToSpeech
However, I still don't know how to use prerecorded files, even with the "Using prerecorded files" section of the tutorial ...
Did you set your settings as the advisorConfigurationSettings of your SKRoutingService?
[SKRoutingService sharedInstance].advisorConfigurationSettings = advisorSettings;
You will also have to set the path for the audio files like this:
NSBundle* advisorResourcesBundle = [NSBundle bundleWithPath:[[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"SKAdvisorResources.bundle"]];
NSString* soundFilesFolder = [advisorResourcesBundle pathForResource:@"Languages" ofType:@""];
NSString* audioFilesFolderPath = [NSString stringWithFormat:@"%@/%@/sound_files", soundFilesFolder, userLanguageCode];
[AudioService sharedInstance].audioFilesFolderPath = audioFilesFolderPath;
userLanguageCode would be "fr" in your case
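A rough Swift 2-style translation of that Objective-C wiring, matching the question's code; the AudioService class comes from the demo project, and the exact Swift signatures are assumptions:
SKRoutingService.sharedInstance().advisorConfigurationSettings = settings

// Point the audio service at the prerecorded sound files for the chosen language ("fr" here)
let resourcePath = NSBundle.mainBundle().resourcePath! + "/SKAdvisorResources.bundle"
let advisorResourcesBundle = NSBundle(path: resourcePath)
if let soundFilesFolder = advisorResourcesBundle?.pathForResource("Languages", ofType: "") {
    AudioService.sharedInstance().audioFilesFolderPath = soundFilesFolder + "/fr/sound_files"
}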

How to get the power of audio buffer using The Amazing Audio Engine

I am fairly new to iOS development, and I am a complete newbie with audio.
I am trying to get the loudness or power of the audio that is being played using TAAE (The Amazing Audio Engine). I am not sure if what I am doing makes any sense.
Here is my code:
static var gameStatus: GameStatus = .Starting

private init() {
    audioController = AEAudioController(audioDescription: AEAudioController.nonInterleavedFloatStereoAudioDescription())
    initializeAudioTrack()
}

func initializeAudioTrack() {
    let file = NSBundle.mainBundle().URLForResource("01 Foreign Formula", withExtension: "mp3")
    let channel: AnyObject! = AEAudioFilePlayer.audioFilePlayerWithURL(file, audioController: audioController, error: nil)

    let receiver = AEBlockAudioReceiver { (source, time, frames, audioBufferList) -> Void in
        let leftSample = UnsafeMutablePointer<Float>(audioBufferList[0].mBuffers.mData)
        let rightSample = UnsafeMutablePointer<Float>(audioBufferList[1].mBuffers.mData)

        var accumulator = Float(0.0)
        for i in 0..<Int(frames) {   // was 0...frames, which reads one frame past the end
            accumulator += leftSample[i] * leftSample[i]
        }
        var power = accumulator / Float(frames)
        println(power)
    }

    println(audioController?.masterOutputVolume)
    audioController?.addChannels([channel])
    audioController?.addOutputReceiver(receiver)
    audioController?.useMeasurementMode = true
    audioController?.preferredBufferDuration = 0.005
    audioController?.start(nil)
}
I looked everywhere trying to understand how to get this done, but it is hard for me to know what I should be looking for.
Basically, all I need is to find the power of the audio (intensity, bass, etc.) to determine and manipulate certain things in the game I am building.
I would really appreciate any kind of explanation or help.
Feel free to write code in Objective-C or another language.
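As a rough sketch of the measurement itself, not tied to any particular TAAE API and assuming the non-interleaved float stereo buffers configured above: the mean of the squared samples is the signal power for a block, and converting it to decibels gives a more familiar loudness scale. The helper name and parameters below are just illustrative.
import Foundation

// Mean-square power of one block of stereo float samples, in decibels.
// leftSample/rightSample and frames would come from the receiver block above.
func blockPowerInDecibels(leftSample: UnsafeMutablePointer<Float>,
                          rightSample: UnsafeMutablePointer<Float>,
                          frames: Int) -> Float {
    var accumulator: Float = 0
    for i in 0..<frames {
        accumulator += leftSample[i] * leftSample[i] + rightSample[i] * rightSample[i]
    }
    let meanSquare = accumulator / Float(2 * frames)
    return 10 * log10f(meanSquare + 1e-12)   // epsilon avoids log10(0) on silence
}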
