I am trying to make an app with audio/video calling using WebRTC.
Remote video and audio work properly in my app, but my local stream is not appearing on the remote client's side.
Here is what I have written to add a video track:
let videoSource = self.rtcPeerFactory.videoSource()
let videoCapturer = RTCCameraVideoCapturer(delegate: videoSource)

guard let frontCamera = (RTCCameraVideoCapturer.captureDevices().first { $0.position == .front }),
    // choose highest res
    let format = (RTCCameraVideoCapturer.supportedFormats(for: frontCamera).sorted { (f1, f2) -> Bool in
        let width1 = CMVideoFormatDescriptionGetDimensions(f1.formatDescription).width
        let width2 = CMVideoFormatDescriptionGetDimensions(f2.formatDescription).width
        return width1 < width2
    }).last,
    // choose highest fps
    let fps = (format.videoSupportedFrameRateRanges.sorted { $0.maxFrameRate < $1.maxFrameRate }.last) else {
        print(.error, "Error in createLocalVideoTrack")
        return nil
}

videoCapturer.startCapture(with: frontCamera,
                           format: format,
                           fps: Int(fps.maxFrameRate))

self.callManagerDelegate?.didAddLocalVideoTrack(videoTrack: videoCapturer)

let videoTrack = self.rtcPeerFactory.videoTrack(with: videoSource, trackId: K.CONSTANT.VIDEO_TRACK_ID)
And this is how I add an audio track:
let constraints: RTCMediaConstraints = RTCMediaConstraints.init(mandatoryConstraints: [:], optionalConstraints: nil)
let audioSource: RTCAudioSource = self.rtcPeerFactory.audioSource(with: constraints)
let audioTrack: RTCAudioTrack = self.rtcPeerFactory.audioTrack(with: audioSource, trackId: K.CONSTANT.AUDIO_TRACK_ID)
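For completeness: the snippets above create the local tracks, but they only reach the other side once they are attached to the peer connection, which isn't shown in the question. A rough sketch of what that typically looks like with the Unified Plan API (the peerConnection variable and the stream id are assumed names, not taken from the question):

// Assumed names: `peerConnection` is the RTCPeerConnection created from rtcPeerFactory.
let streamId = "local_stream"
peerConnection.add(audioTrack, streamIds: [streamId])
peerConnection.add(videoTrack, streamIds: [streamId])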
My full WebRTC log is attached here.
Some of the log lines I am getting (I think something is wrong here):
(thread.cc:303): Waiting for the thread to join, but blocking calls have been disallowed
(basic_port_allocator.cc:1035): Port[31aba00:0:1:0:relay:Net[ipsec4:2405:204:8888:x:x:x:x:x/64:VPN/Unknown:id=2]]: Port encountered error while gathering candidates.
...
(basic_port_allocator.cc:1017): Port[38d7400:audio:1:0:local:Net[en0:192.168.1.x/24:Wifi:id=1]]: Port completed gathering candidates.
(basic_port_allocator.cc:1035): Port[3902c00:video:1:0:relay:Net[ipsec5:2405:204:8888:x:x:x:x:x/64:VPN/Unknown:id=3]]: Port encountered error while gathering candidates.
Finally, I found the solution:
it was due to the TCP protocol configured on the TURN server.
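The exact TURN-side change isn't shown above, but for reference this is roughly how the TURN server and its transport are specified on the client; the URL, credentials, and delegate below are placeholders, not values from the question:

let config = RTCConfiguration()
config.iceServers = [
    RTCIceServer(urlStrings: ["turn:turn.example.com:3478?transport=udp"],  // placeholder server/transport
                 username: "user",                                          // placeholder credentials
                 credential: "pass")
]
let constraints = RTCMediaConstraints(mandatoryConstraints: nil, optionalConstraints: nil)
let peerConnection = self.rtcPeerFactory.peerConnection(with: config, constraints: constraints, delegate: self)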
I'm trying to get an autotune-like sound from AKPitchShifter, but the most I get is a chipmunk-type sound. I've played with different combinations of AKTimePitch.pitch and AKPitchShifter.shift, both individually and together, but everything comes out squeaky and too robotic.
I'm new to this library. Is there anything I can add, such as other AudioKit classes, to get the sound closer to autotune?
do {
    let file = try AKAudioFile(readFileName: "someones-voice.wav")
    let player = try AKAudioPlayer(file: file)
    player.looping = true

    let timePitch = AKTimePitch(player)
    timePitch.pitch = 0.5
    AKManager.output = timePitch

    let pitchShifter = AKPitchShifter(player)
    pitchShifter.shift = 1.5
    AKManager.output = pitchShifter

    try AKManager.start()
    player.play()
} catch {
    print(error.localizedDescription)
}
Resolved on GitHub through a pull request addressing a few errors: https://github.com/lsamaria/AutoTuneSampler/pull/3
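The pull request itself isn't reproduced here, but one thing worth noting in the snippet above is that AKManager.output is assigned twice, so only the pitch shifter ends up in the output chain and the time-pitch node is never heard. A minimal sketch of chaining the nodes instead, assuming the same AudioKit classes as above:

do {
    let file = try AKAudioFile(readFileName: "someones-voice.wav")
    let player = try AKAudioPlayer(file: file)
    player.looping = true

    // Chain player -> time pitch -> pitch shifter, and send only the
    // end of the chain to the output.
    let timePitch = AKTimePitch(player)
    timePitch.pitch = 0.5

    let pitchShifter = AKPitchShifter(timePitch)
    pitchShifter.shift = 1.5

    AKManager.output = pitchShifter
    try AKManager.start()
    player.play()
} catch {
    print(error.localizedDescription)
}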
I'm trying to use AudioKit and its AKFFTTap to get the FFT data of an audio file.
I manage to get it in real-time processing, but as soon as I do it in offline rendering mode the generated data are all 0.
So I was wondering whether it is possible to get the FFT data in offline rendering mode at all.
Here is the code I use:
class OfflineProcessingClass {

    var tracker: AKFrequencyTracker!
    var fftTap: AKFFTTap!
    // ....

    private func process(audioFile: AKAudioFile) throws {
        // Make connections
        let player = try AKAudioPlayer(file: audioFile)
        tracker = AKFrequencyTracker(player)
        fftTap = AKFFTTap(tracker)
        AudioKit.output = tracker

        // Set up offline rendering mode
        let timeIntervalInSeconds: TimeInterval = 0.1
        let sampleInterval = Int(floor(timeIntervalInSeconds * audioFile.sampleRate))
        try AudioKit.engine.enableManualRenderingMode(
            .offline,
            format: audioFile.fileFormat,
            maximumFrameCount: AVAudioFrameCount(sampleInterval)
        )

        // Set up the render buffer
        let buffer = AVAudioPCMBuffer(
            pcmFormat: AudioKit.engine.manualRenderingFormat,
            frameCapacity: AudioKit.engine.manualRenderingMaximumFrameCount
        )!

        // Start processing
        try AudioKit.start()
        player.start()

        // Read the file offline
        while AudioKit.engine.manualRenderingSampleTime < audioFile.length {
            let frameCount = audioFile.length - AudioKit.engine.manualRenderingSampleTime
            let framesToRender = min(AVAudioFrameCount(frameCount), buffer.frameCapacity)
            try! AudioKit.engine.renderOffline(framesToRender, to: buffer)

            // Tracker values look good
            print("\(tracker.amplitude) dB - \(tracker.frequency) Hz")

            // Array of 0
            print(fftTap.fftData) /////////////// <====== Error is here
        }

        // End processing
        player.stop()
        AudioKit.engine.stop()
    }
}
Do you see something wrong in this code?
This is because handleTapBlock in BaseTap does a dispatch async onto the main queue. Since you're occupying the main queue in your while loop, BaseTap never gets the opportunity to receive any callbacks. You'll need to relinquish the main queue for that to work.
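A rough sketch of that idea, reusing the names from the question (it would replace the while loop at the end of process(audioFile:); error handling is elided): render on a background queue so the tap's main-queue callbacks can actually be delivered, then read the FFT data back on the main queue.

DispatchQueue.global(qos: .userInitiated).async {
    while AudioKit.engine.manualRenderingSampleTime < audioFile.length {
        let remaining = audioFile.length - AudioKit.engine.manualRenderingSampleTime
        let framesToRender = min(AVAudioFrameCount(remaining), buffer.frameCapacity)
        _ = try? AudioKit.engine.renderOffline(framesToRender, to: buffer)
    }
    DispatchQueue.main.async {
        // By now the tap has had a chance to run its main-queue callbacks.
        print(self.fftTap.fftData)
        player.stop()
        AudioKit.engine.stop()
    }
}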
The following builds, runs, and prints the non-error console message at the end when passed two valid MIDIEndpointRefs. But MIDI events are not passed through from source to destination as expected. Is something missing?
func createThru2(source: MIDIEndpointRef?, dest: MIDIEndpointRef?) {
    var connectionRef = MIDIThruConnectionRef()
    var params = MIDIThruConnectionParams()
    MIDIThruConnectionParamsInitialize(&params)

    if let s = source {
        let thruEnd = MIDIThruConnectionEndpoint(endpointRef: s, uniqueID: MIDIUniqueID(1))
        params.sources.0 = thruEnd
        params.numSources = 1
        print("thru source is \(s)")
    }

    if let d = dest {
        let thruEnd = MIDIThruConnectionEndpoint(endpointRef: d, uniqueID: MIDIUniqueID(2))
        params.destinations.0 = thruEnd
        params.numDestinations = 1
        print("thru dest is \(d)")
    }

    var localParams = params
    let nsdata = withUnsafePointer(to: &params) { p in
        NSData(bytes: p, length: MIDIThruConnectionParamsSize(&localParams))
    }

    let status = MIDIThruConnectionCreate(nil, nsdata, &connectionRef)
    if status == noErr {
        print("created thru")
    } else {
        print("error creating thru \(status)")
    }
}
Your code works fine in Swift 5 on macOS 10.13.6. A thru connection is established and MIDI events are passed from source to destination. So the problem does not seem to lie in the function you posted, but rather in the endpoints you provided or in your use of Swift 4.2.
I used the following code to call your function:
let source: MIDIEndpointRef = MIDIGetSource(5)
let dest: MIDIEndpointRef = MIDIGetDestination(9)
createThru2(source: source, dest: dest)
5 is a MIDI keyboard and 9 is a MIDI port on my audio interface.
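If it helps to rule out the endpoints, here is a quick, purely illustrative way to check that the indices actually resolve to the devices you expect before calling createThru2 (the helper name is mine, not CoreMIDI's):

import CoreMIDI

// Illustrative helper: print the display name of an endpoint, or the
// error status if the endpoint reference is not valid.
func endpointName(_ endpoint: MIDIEndpointRef) -> String {
    var name: Unmanaged<CFString>?
    let status = MIDIObjectGetStringProperty(endpoint, kMIDIPropertyDisplayName, &name)
    guard status == noErr, let cfName = name?.takeRetainedValue() else {
        return "(no name, status \(status))"
    }
    return cfName as String
}

print("source:", endpointName(MIDIGetSource(5)))
print("dest:  ", endpointName(MIDIGetDestination(9)))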
Hmmm. I just tested this same code with Swift 5 on macOS 12.4, and it doesn't seem to work for me. MIDIThruConnectionCreate returns noErr, but no MIDI packets seem to flow.
I'm using the MIDIKeys virtual source and the MIDI Monitor virtual destination.
I'll try it with some hardware and see what happens.
I'm trying to use AVAudioConverter to convert compressed audio data (in this case MP3, 96 kbps, CBR, stereo) into PCM data (the standard output format, obtained by calling audioEngine.outputNode.outputFormatForBus(0)). I use an AudioFileStream to get the packets. I have previously played these packets with an AudioQueue, so I know they are valid.
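The converter and format setup aren't shown in the question. For context, a minimal sketch of how they could be created (in current Swift syntax, unlike the older syntax in the snippets below; the MP3 stream-description values are illustrative and would normally come from the AudioFileStream property listener, kAudioFileStreamProperty_DataFormat):

import AVFoundation

// Illustrative MP3 input format; real values come from the file stream.
var mp3Description = AudioStreamBasicDescription(
    mSampleRate: 44_100,
    mFormatID: kAudioFormatMPEGLayer3,
    mFormatFlags: 0,
    mBytesPerPacket: 0,          // variable for MP3
    mFramesPerPacket: 1152,      // MPEG Layer 3 frame size
    mBytesPerFrame: 0,
    mChannelsPerFrame: 2,
    mBitsPerChannel: 0,
    mReserved: 0
)

let inputFormat = AVAudioFormat(streamDescription: &mp3Description)!
let outputFormat = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2)!

// The converter on which the conversion below is then called.
let audioConverter = AVAudioConverter(from: inputFormat, to: outputFormat)!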
This is where I call convertToBuffer:
let outputBuffer = AVAudioPCMBuffer(PCMFormat: outputFormat, frameCapacity: outputBufferFrameLength)
outputBuffer.frameLength = outputBufferFrameLength

var error: NSError?
audioConverter.convertToBuffer(outputBuffer, error: &error, withInputFromBlock: { (packetCount: AVAudioPacketCount, inputStatus: UnsafeMutablePointer<AVAudioConverterInputStatus>) -> AVAudioBuffer? in
    return self.getAudioConverterInput(packetCount, inputStatus: inputStatus)
})

if let error = error {
    print("conv error:", error)
} else {
    // TODO: Do stuff with buffer
}
And this is the function that handles the input.
func getAudioConverterInput(packetCount: AVAudioPacketCount, inputStatus: UnsafeMutablePointer<AVAudioConverterInputStatus>) -> AVAudioBuffer? {
    if let currentAudioFormat = currentAudioFormat, firstPackets = packets.first {
        inputStatus.memory = .HaveData
        let buffer = AVAudioCompressedBuffer(format: currentAudioFormat, packetCapacity: packetCount, maximumPacketSize: Int(currentMaximumPacketSize))
        var currentPackets: Packets = firstPackets
        var currentStartOffset: Int64 = 0

        for i in 0..<packetCount {
            let currentDescription = currentPackets.packetDescriptions[currentPacket]

            // Copy the packet's bytes into the compressed buffer and describe them.
            memcpy(buffer.data.advancedBy(Int(currentStartOffset)), currentPackets.data.advancedBy(Int(currentDescription.mStartOffset)), Int(currentDescription.mDataByteSize))
            buffer.packetDescriptions[Int(i)] = AudioStreamPacketDescription(mStartOffset: currentStartOffset, mVariableFramesInPacket: currentDescription.mVariableFramesInPacket, mDataByteSize: currentDescription.mDataByteSize)

            currentStartOffset += Int64(currentDescription.mDataByteSize)
            currentPacket += 1

            // Move on to the next group of packets once this one is exhausted.
            if currentPackets.numberOfPackets == UInt32(currentPacket) {
                currentPacket = 0
                packets.removeFirst()
                if let firstPackets = packets.first {
                    currentPackets = firstPackets
                } else {
                    buffer.packetCount = i + 1
                    return buffer
                }
            }
        }

        buffer.packetCount = packetCount
        return buffer
    }

    inputStatus.memory = .NoDataNow
    return nil
}
Every time I call convertToBuffer(...), this input function is called once, with packetCount set to 1. Then the error "conv error: Error Domain=NSOSStatusErrorDomain Code=-50 "(null)"" is printed.
What am I doing wrong here?
I am fairly new to iOS development, and I am a complete newbie with audio.
I am trying to get the loudness or power of the audio being played using TAAE (The Amazing Audio Engine). I am not sure whether what I am doing makes any sense.
Here is my code:
static var gameStatus: GameStatus = .Starting

private init() {
    audioController = AEAudioController(audioDescription: AEAudioController.nonInterleavedFloatStereoAudioDescription())
    initializeAudioTrack()
}

func initializeAudioTrack() {
    let file = NSBundle.mainBundle().URLForResource("01 Foreign Formula", withExtension: "mp3")
    let channel: AnyObject! = AEAudioFilePlayer.audioFilePlayerWithURL(file, audioController: audioController, error: nil)

    let receiver = AEBlockAudioReceiver { (source, time, frames, audioBufferList) -> Void in
        let leftSample = UnsafeMutablePointer<Float>(audioBufferList[0].mBuffers.mData)
        let rightSample = UnsafeMutablePointer<Float>(audioBufferList[1].mBuffers.mData)

        // Mean square of the left channel for this buffer
        var accumulator = Float(0.0)
        for i in 0..<frames {
            accumulator += leftSample[Int(i)] * leftSample[Int(i)]
        }
        let power = accumulator / Float(frames)
        println(power)
    }

    println(audioController?.masterOutputVolume)
    audioController?.addChannels([channel])
    audioController?.addOutputReceiver(receiver)
    audioController?.useMeasurementMode = true
    audioController?.preferredBufferDuration = 0.005
    audioController?.start(nil)
}
I looked everywhere trying to understand how to get this done, but it is hard for me to know what I should be looking for.
Basically, all I need is to find the power of the audio (intensity, bass, etc.) so I can drive certain things in the game I am building.
I would really appreciate any kind of explanation or help.
Feel free to write code in Objective-C or another language.
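For what it's worth, the accumulator in the snippet above already computes the mean square of the buffer; a common next step is to take the square root (RMS) and convert that to decibels. A small self-contained sketch of that math in plain Swift, independent of TAAE (the function name and the full-scale assumption of 1.0 are mine):

import Foundation

// Compute an RMS level and its value in decibels (dBFS) for one buffer
// of Float samples. Purely illustrative; 1.0 is taken as full scale.
func levelInDecibels(samples: [Float]) -> Float {
    guard !samples.isEmpty else { return -.infinity }
    let meanSquare = samples.reduce(0) { $0 + $1 * $1 } / Float(samples.count)
    let rms = sqrt(meanSquare)
    return 20 * log10(max(rms, 1e-9))   // clamp to avoid log10(0)
}

// Example: a constant 0.1-amplitude signal comes out around -20 dBFS.
print(levelInDecibels(samples: [Float](repeating: 0.1, count: 512)))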