Connecting Varispeed with RemoteIO in iOS

Since AUGraph is deprecated, I am working directly with Audio Units to play audio and change its playback speed.
So far I have successfully played audio arriving over UDP through Audio Units, with the chain connected like this:
converterUnit -> varispeedUnit -> outConverterUnit -> RemoteIO (output)
Our playback format is 16-bit integer PCM, but the varispeed unit requires floating-point samples, so we use converter units around it.
Here is my code:
var ioFormat = CAStreamBasicDescription(
sampleRate: 48000.0,
numChannels: 1,
pcmf: .int16,
isInterleaved: false
)
var varispeedFormat = CAStreamBasicDescription(
sampleRate: 16000,
numChannels: 1,
pcmf: .float32,
isInterleaved: false
)
init(_ client: UDPClient, _ tcpClient: TCPClient, _ opusHelper: OpusHelper, _ tvTemp: UILabel) {
super.init()
let success = initCircularBuffer(&circularBuffer, 4096)
if success {
print("Circular buffer init was successful")
} else {
print("Circular buffer init not successful")
}
self.opusHelper = opusHelper
self.tvTemp = tvTemp
monotonicTimer = MonotonicTimer()
self.udpClient = client
self.tcpClient = tcpClient
//Creating Description for REMOTE IO
var outputDesc = AudioComponentDescription(
componentType: OSType(kAudioUnitType_Output),
componentSubType: OSType(kAudioUnitSubType_VoiceProcessingIO),
componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
componentFlags: 0,
componentFlagsMask: 0
)
let inputComponent = AudioComponentFindNext(nil, &outputDesc)
check(error: AudioComponentInstanceNew(inputComponent!, &outputUnit), description: "Output unit instance new failed")
//Creating Description for converterUnit
var firstConverterDesc = AudioComponentDescription(
componentType: OSType(kAudioUnitType_FormatConverter),
componentSubType: OSType(kAudioUnitSubType_AUConverter),
componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
componentFlags: 0,
componentFlagsMask: 0
)
let firstConverterComponent = AudioComponentFindNext(nil, &firstConverterDesc)
check(error: AudioComponentInstanceNew(firstConverterComponent!, &firstConverterUnit), description: "First converter unit instance new failed")
//Creating Description for Varispeed Unit
var variSpeedConverterDesc = AudioComponentDescription(
componentType: OSType(kAudioUnitType_FormatConverter),
componentSubType: OSType(kAudioUnitSubType_Varispeed),
componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
componentFlags: 0,
componentFlagsMask: 0
)
let variSpeedConverterComponent = AudioComponentFindNext(nil, &variSpeedConverterDesc)
check(error: AudioComponentInstanceNew(variSpeedConverterComponent!, &varispeedUnit), description: "Varispeed unit instance new failed")
//Creating Description for outConverter
var secondConverterDesc = AudioComponentDescription(
componentType: OSType(kAudioUnitType_FormatConverter),
componentSubType: OSType(kAudioUnitSubType_AUConverter),
componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
componentFlags: 0,
componentFlagsMask: 0
)
let secondConverterComponent = AudioComponentFindNext(nil, &secondConverterDesc)
check(error: AudioComponentInstanceNew(secondConverterComponent!, &secondConverterUnit), description: "Second converter unit instance new failed")
//Setting the first converter's output format (Float32, for the varispeed unit)
check(error: AudioUnitSetProperty(
firstConverterUnit!,
AudioUnitPropertyID(kAudioUnitProperty_StreamFormat),
AudioUnitScope(kAudioUnitScope_Output),
0,
&varispeedFormat,
MemoryLayoutStride.SizeOf32(varispeedFormat)
),
description: "Failed to set output format of the first converter unit"
)
//Putting converted bytes in varispeed unit
check(error: AudioUnitSetProperty(
varispeedUnit!,
AudioUnitPropertyID(kAudioUnitProperty_StreamFormat),
AudioUnitScope(kAudioUnitScope_Input),
0,
&varispeedFormat,
MemoryLayoutStride.SizeOf32(varispeedFormat)
),
description: "Failed to set input format of the varispeed unit"
)
//Getting converted bytes from varispeed unit
check(error: AudioUnitSetProperty(
varispeedUnit!,
AudioUnitPropertyID(kAudioUnitProperty_StreamFormat),
AudioUnitScope(kAudioUnitScope_Output),
0,
&varispeedFormat,
MemoryLayoutStride.SizeOf32(varispeedFormat)
),
description: "Failed to set output format of the varispeed unit"
)
//Putting converted bytes in outConverterUnit
check(error: AudioUnitSetProperty(
secondConverterUnit!,
AudioUnitPropertyID(kAudioUnitProperty_StreamFormat),
AudioUnitScope(kAudioUnitScope_Input),
0,
&varispeedFormat,
MemoryLayoutStride.SizeOf32(varispeedFormat)
),
description: "Failed to set input format of the second converter unit"
)
//Getting converted bytes from outConverterUnit in int16(PCM)
check(error: AudioUnitSetProperty(
secondConverterUnit!,
AudioUnitPropertyID(kAudioUnitProperty_StreamFormat),
AudioUnitScope(kAudioUnitScope_Output),
0,
&ioFormat,
MemoryLayoutStride.SizeOf32(ioFormat)
),
description: "Failed to set output format of the second converter unit"
)
//Connecting firstConverter to varispeed
var tempConnection = AudioUnitConnection(
sourceAudioUnit: firstConverterUnit!,
sourceOutputNumber: 0,
destInputNumber: 0
)
check(error: AudioUnitSetProperty(
varispeedUnit!,
AudioUnitPropertyID(kAudioUnitProperty_MakeConnection),
AudioUnitScope(kAudioUnitScope_Input),
0,
&tempConnection,
MemoryLayoutStride.SizeOf32(AudioUnitConnection())
),
description: "Failed to connect first converter unit to varispeed unit"
)
//Connecting varispeedUnit to outConverter
var temp1Connection = AudioUnitConnection(
sourceAudioUnit: varispeedUnit!,
sourceOutputNumber: 0,
destInputNumber: 0
)
check(error: AudioUnitSetProperty(
secondConverterUnit!,
AudioUnitPropertyID(kAudioUnitProperty_MakeConnection),
AudioUnitScope(kAudioUnitScope_Input),
0,
&temp1Connection,
MemoryLayoutStride.SizeOf32(AudioUnitConnection())
),
description: "Failed to connect varispeed unit to second converter unit"
)
//Connecting outConverter to outputUnit
var secondToOutputConnection = AudioUnitConnection(
sourceAudioUnit: secondConverterUnit!,
sourceOutputNumber: 0,
destInputNumber: 0
)
check(error: AudioUnitSetProperty(
outputUnit!,
AudioUnitPropertyID(kAudioUnitProperty_MakeConnection),
AudioUnitScope(kAudioUnitScope_Input),
0,
&secondToOutputConnection,
MemoryLayoutStride.SizeOf32(AudioUnitConnection())
),
description: "Failed to connect second converter to output Unit"
)
check(error: AudioUnitInitialize(outputUnit!), description: "Failed to init output unit")
check(error: AudioUnitInitialize(firstConverterUnit!), description: "Failed to init first converter unit")
check(error: AudioUnitInitialize(varispeedUnit!), description: "Failed to init varispeed unit")
check(error: AudioUnitInitialize(secondConverterUnit!), description: "Failed to init second converter unit")
var playbackCallback = AURenderCallbackStruct(
inputProc: AudioController_PlaybackCallback,
inputProcRefCon: UnsafeMutableRawPointer(Unmanaged.passUnretained(self).toOpaque())
)
check(error: AudioUnitSetProperty(
outputUnit!,
AudioUnitPropertyID(kAudioUnitProperty_SetRenderCallback),
AudioUnitScope(kAudioUnitScope_Input),
kOutputBus,
&playbackCallback,
MemoryLayout<AURenderCallbackStruct>.size.ui
),
description: "Failed to set playback render callback"
)
}
The playback-speed parameter is set inside the render callback:
func performPlayback(
_ ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
inTimeStamp: UnsafePointer<AudioTimeStamp>,
inBufNumber: UInt32,
inNumberFrames: UInt32,
ioData: UnsafeMutablePointer<AudioBufferList>
) -> OSStatus {
let buffer = ioData[0].mBuffers
let bytesToCopy = ioData[0].mBuffers.mDataByteSize
var bufferTail: UnsafeMutableRawPointer?
// print("BYTES TO COPY: \(bytesToCopy)")
self.availableBytes = 0
bufferTail = TPCircularBufferTail(&self.circularBuffer, &self.availableBytes)
bytesToWrite = min(bytesToCopy, self.availableBytes)
check(error: AudioUnitSetParameter(
varispeedUnit!,
AudioUnitParameterID(kVarispeedParam_PlaybackRate),
AudioUnitScope(kAudioUnitScope_Global),
0,
AudioUnitParameterValue(1.9999000082426955),
0
),
description: "Failed to set parameter rate for varispeed unit"
)
print("BYTES TO WRITE: \(bytesToWrite)")
if bytesToWrite >= 3840 {
memcpy(buffer.mData, bufferTail, Int(bytesToWrite))
TPCircularBufferConsume(&self.circularBuffer, bytesToWrite)
} else {
let silence = [Int16](repeating: 0, count: Int(bytesToCopy))
memcpy(buffer.mData, silence, Int(bytesToCopy))
}
return noErr
}
The problem is that I hear no difference in the sound whether the varispeed unit is in the chain or not. Can anyone point out the problem in my code?
https://stackoverflow.com/a/60552075/7798783
https://stackoverflow.com/a/34750857/7798783
https://stackoverflow.com/a/34671010/7798783
I have studied these answers and tried to apply them to our situation, with no result.
I suspect the scopes and elements might be the problem.
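For comparison, here is a minimal Swift sketch (assuming the same `varispeedUnit` variable and a custom `check` helper like the one above) of applying a Varispeed rate change once, outside the render callback. `kVarispeedParam_PlaybackRate` is a Global-scope parameter, so element 0 and `kAudioUnitScope_Global` are used:

```swift
import AudioToolbox

// Sketch: set the Varispeed playback rate once (e.g. after AudioUnitInitialize
// or when a UI slider changes), rather than on every render cycle.
// 2.0 means double-speed playback; 1.0 is normal speed.
func setPlaybackRate(_ rate: Float, on varispeedUnit: AudioUnit) {
    let status = AudioUnitSetParameter(
        varispeedUnit,
        AudioUnitParameterID(kVarispeedParam_PlaybackRate),
        AudioUnitScope(kAudioUnitScope_Global),
        0,                              // element 0
        AudioUnitParameterValue(rate),
        0                               // inBufferOffsetInFrames
    )
    if status != noErr {
        print("Failed to set Varispeed rate: \(status)")
    }
}
```

This is only a sketch of the parameter call itself, not a fix for the chain above.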

Related

(Discord bot related question) I made a mute command, but when I run it I get "Cannot read property 'split' of undefined"

I'm using Autocode for this question. Every time I rerun the command I get an error that says "Cannot read property 'split' of undefined". Here is the code:
const event = context.params.event
const { guild_id, token, channel_id, member, author} = event;
const error = require('../../../helpers/error.js')
try {
if (!event?.mentions?.length) throw new Error(`You need to mention a user!`);
let emojis = await lib.utils.kv['#0.1.16'].get({key: `emojis`, defaultValue: {moderator: null, user: null, reason: null, channel: null, error: null, timeout: null, clock: null}});
let base = await lib.airtable.query['#1.0.0'].select({table: `Config`, where: [{'Guild__is': guild_id}]});
if (!base?.rows?.length) base = await lib.airtable.query['#1.0.0'].insert({table: `Config`, fieldsets: [{'Guild': guild_id}]});
let cNum = (await lib.airtable.query['#1.0.0'].max({table: `Cases`, where: [{'Guild__is': guild_id}], field: `Case`})).max.max + 1
let isAdmin = false, targetAdmin = false, reason = event.content.split(' ').slice(3).join(' ') || `No reason provided`
let guildInfo = await lib.discord.guilds['#0.2.4'].retrieve({guild_id});
let target = await lib.discord.guilds['#0.2.4'].members.retrieve({user_id: event.mentions[0].id, guild_id});
if (guildInfo.owner_id == author.id) isAdmin = true
if (guildInfo.owner_id == target.user.id) targetAdmin = true
if (member.roles.includes(base.rows[0].fields.Moderator)) isAdmin = true
if (target.roles.includes(base.rows[0].fields.Moderator)) targetAdmin = true
if (guildInfo.roles.filter(role => (role.permissions & (1 << 3)) === 1 << 3 && member.roles.includes(role.id))?.length) isAdmin = true
if (guildInfo.roles.filter(role => (role.permissions & (1 << 3)) === 1 << 3 && target.roles.includes(role.id))?.length) targetAdmin = true
if (!isAdmin)
return lib.discord.channels['#release'].messages.create({channel_id, content: `<:error:${emojis.error}> You don't have permission to use \`sm.mute\``});
if (targetAdmin)
return lib.discord.channels['#release'].messages.create({channel_id, content: `<:error:${emojis.error}> You can't mute a member who is mod/admin`});
let match = event.context.split(' ')[2].match(/(\d+)(s|m|h|d)/)
if (!match)
throw new Error(`Please provide a duration in days(\`d\`), hours(\`h\`), minutes(\`m\`) or seconds(\`s\`).`)
let secs = new Date().getSeconds()
let time = parseInt(match[1]), unit = match[2]
if (unit === 's') time *= 1
else if (unit === 'm') time *= 60
else if (unit === 'h') time *= 60 * 60
else if (unit === 'd') time *= 24 * 60 * 60
let until = `<t:${Math.floor(new Date().setSeconds(secs + time) / 1000)}>`
await lib.discord.guilds['#release'].members.timeout.update({user_id: target.user.id, guild_id, communication_disabled_until_seconds: time, reason});
await lib.discord.channels['#release'].messages.create({channel_id, content: `<:succes:${emojis.success}>**${target.user.username}** was muted for ${match[1]} ${match[2].replace('s', 'seconds').replace('d', 'days').replace('h', 'hours').replace('m', 'minutes')} | ${reason}`});
if (base.rows[0].fields.Logging) {
await lib.discord.channels['#release'].messages.create({
channel_id: base.rows[0].fields.Logging,
content: ``,
embeds: [{
type: 'rich',
color: 0xE72020,
description: `
<:user:${emojis.user}> | <#${target.user.id}> | ${target.user.username}#${target.user.discriminator}
<:moderator:${emojis.moderator}> | <#${author.id}> | ${author.username}#${author.discriminator}
<:reason:${emojis.reason}> | ${reason}
<:timeout:${emojis.timeout} | ${until.replace('>', ':R>')} | ${time}s`,
footer: {text: `Case Number: ${cNum}`},
author: {name: `${cNum} | Mute`, icon_url: target.user.avatar_url || 'https://cdn.discordapp.com/attachments/911294620439293993/965586325208195142/avatar.png'},
timestamp: new Date().toISOString(),
}],
});
}
await lib.airtable.query['#1.0.0'].insert({table: `Cases`,
fieldsets: [{
User: target.user.id,
Reason: reason,
Type: `Mute`,
Moderator: author.id,
Timestamp: `<t:${Math.floor(new Date().getTime() / 1000)}>`,
Case: cNum,
Guild: guild_id,
}],
});
} catch (e) {
await error.prefix(channel_id, e.message)
}
Help me fix this, please. Can someone explain why I get "Cannot read property 'split' of undefined" and how to fix it?
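For what it's worth, the duration parsing above can be exercised in isolation. The sketch below (the function name `parseDuration` is mine) applies the same `split(' ')[2]` and regex logic to a message string. Note that the command code calls this on `event.context`, while every other access in the file reads `event.content`; that property name is worth double-checking.

```javascript
// Standalone sketch of the mute command's duration parsing.
// Returns the duration in seconds, or null if no valid token is present.
function parseDuration(content) {
  const token = content.split(' ')[2];            // e.g. "10m" in "sm.mute @user 10m spam"
  const match = token ? token.match(/(\d+)(s|m|h|d)/) : null;
  if (!match) return null;
  const mult = { s: 1, m: 60, h: 3600, d: 86400 }; // seconds per unit
  return parseInt(match[1], 10) * mult[match[2]];
}
```

Calling it on a string with no third token returns `null` instead of throwing, which is the failure mode the command currently hits when the property it reads is `undefined`.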

Swift 2 sqlite - library routine called out of sequence

I am using the code below to insert records into SQLite. The SQLite DB is already created and the sqlite file exists in the right place.
However, when I run my test it fails with this error:
"sqlite3_errmsg(carParkDB) UnsafePointer 0x000000010f44a863 "library routine called out of sequence"
Any help on this is appreciated; I have already spent some time on it. By the way, I am new to Swift :(
The error appears at the first bind statement:
sqlite3_bind_text(insertStmt, 1, carParkData.dataSource.rawValue, -1, SQLITE_TRANSIENT)
Below is my DAOImpl
import Foundation
class CarParkDaoImpl : CarParkDao{
var carParkDB : COpaquePointer = nil
var insertStmt: COpaquePointer = nil
let SQLITE_TRANSIENT = unsafeBitCast(-1, sqlite3_destructor_type.self)
func initializeDB(){
let carParkDataSQ = Constants.Paths.path + "/CarParkData.sqlite"
print("Sqlite file: \(carParkDataSQ)")
if(sqlite3_open(carParkDataSQ, &carParkDB) == SQLITE_OK){
let ret:Int32 = sqlite3_exec(carParkDB, Constants.CAR_PARK_SQL.createSql, nil, nil, nil);
if ( ret != SQLITE_OK){
print("Failed to create table: \(Constants.CAR_PARK_SQL.createSql)")
print("Error: \(sqlite3_errmsg(carParkDB))")
closeDB()
}
}else{
print("Failed to open: \(carParkDataSQ)")
print("Error: \(sqlite3_errmsg(carParkDB))")
closeDB()
}
}
func closeDB(){
sqlite3_close(carParkDB)
}
init(){
initializeDB()
}
/**
Responsible to insert the car park data
- Parameter carParkData: CarParkData.
- returns Bool
*/
func insert(carParkData: CarParkData) -> Bool{
prepareInsStatments()
var ret: Bool = false
sqlite3_bind_text(insertStmt, 1, carParkData.dataSource.rawValue, -1, SQLITE_TRANSIENT)
sqlite3_bind_text(insertStmt, 2, carParkData.address, -1, SQLITE_TRANSIENT)
sqlite3_bind_double(insertStmt, 3, carParkData.latitude)
sqlite3_bind_double(insertStmt, 4, carParkData.longitude)
sqlite3_bind_int(insertStmt, 5, Int32(carParkData.ltaCarParkID))
sqlite3_bind_text(insertStmt, 6, carParkData.ltaArea, -1, SQLITE_TRANSIENT)
sqlite3_bind_int(insertStmt, 7, Int32(carParkData.ltaLots))
sqlite3_bind_double(insertStmt, 8, carParkData.ltaPrice)
sqlite3_bind_text(insertStmt, 9, carParkData.hdbShortTermParking, -1, SQLITE_TRANSIENT)
sqlite3_bind_text(insertStmt, 10, carParkData.hdbCarParkType, -1, SQLITE_TRANSIENT)
sqlite3_bind_text(insertStmt, 11, carParkData.hdbFreeParking, -1, SQLITE_TRANSIENT)
sqlite3_bind_text(insertStmt, 12, carParkData.hdbNightParking, -1, SQLITE_TRANSIENT)
sqlite3_bind_int(insertStmt, 13, Int32(carParkData.hdbId))
sqlite3_bind_text(insertStmt, 14, carParkData.hdbAdHocParking, -1, SQLITE_TRANSIENT)
sqlite3_bind_text(insertStmt, 15, carParkData.hdbCarParkNo, -1, SQLITE_TRANSIENT)
let rc:Int32 = sqlite3_bind_text(insertStmt, 16, carParkData.hdbTypeOfParking, -1, SQLITE_TRANSIENT)
if (rc != SQLITE_OK) {
print(stderr, "failed to prepare statement: %s\n",
sqlite3_errmsg(carParkDB));
sqlite3_close(carParkDB);
return ret;
}
if(sqlite3_step(insertStmt) == SQLITE_DONE){
ret = true;
}
sqlite3_reset(insertStmt)
sqlite3_clear_bindings(insertStmt)
return ret
}
/**
Responsible to finalize all the prepared statements
*/
func finalize(){
sqlite3_finalize(insertStmt)
sqlite3_finalize(updateStmt)
sqlite3_finalize(deleteStmt)
sqlite3_finalize(selectAllStmt)
sqlite3_finalize(selectByIdStmt)
sqlite3_finalize(selectByDataSourceStmt)
}
func prepareInsStatments(){
let stmt = Constants.CAR_PARK_SQL.insertSql.cStringUsingEncoding(NSUTF8StringEncoding)
let ret:Int32 = sqlite3_prepare_v2(carParkDB, stmt!, -1, &insertStmt, nil)
if (ret != SQLITE_OK) {
print(stderr, "failed to prepare statement: %s\n",
sqlite3_errmsg(carParkDB));
sqlite3_close(carParkDB);
}
}
}
Here is my insert SQL:
static let insertSql:String =
"INSERT INTO CAR_PARK_DATA ( DATASOURCE, ADDRESS, LATITUDE, LONGITUDE "
+ " , LTA_CAR_PARK_ID, LTA_AREA, LTA_LOTS, LTA_PRICE, HDB_SHORT_TERM_PAKING "
+ " , HDB_CAR_PARK_TYPE, HDB_FREE_PARKING, HDB_NIGHT_PARKING, HDB_ID "
+ " , HDB_ADHOC_PARKING, HDB_CAR_PARK_NO, HDB_TYPE_PARK_SYSTEM ) "
+ " VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?) "
Here is my test method
func testInsert(){
carParkData = CarParkData(dataType: DataSourceType.LTA.rawValue, address: "This is Test address", latitude: 100.0, longitude: 200.0, carParkID : 10, ltaCarParkID : 20, ltaArea: "LTA Area", ltaLots: 20, hdbShortTermParking: "Test HDB Short Praking", hdbCarParkType: "HDB Car Park Type", hdbFreeParking: "HDB Free Parking", hdbNightParking: "HDB Night Parking", hdbId : 30, hdbAdHocParking: "HDB Ad Hoc Parking", hdbCarParkNo: "HDB Car Park No", hdbTypeOfParking: "HDB Parking type", ltaPrice: 10.96778, favourite: true)
let val: Bool = carParkDao.insert(carParkData)
XCTAssertTrue(val)
}
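For context on the call ordering SQLite expects, here is a hedged Swift 2-era sketch (reusing the question's `carParkDB` handle and `SQLITE_TRANSIENT` constant, with a hypothetical `insertSql` string) of an insert that checks every stage before moving to the next. "library routine called out of sequence" is the classic symptom of calling bind/step on a statement whose prepare failed, or on a database handle that was already closed:

```swift
// Sketch: guard each stage, so a failed prepare is never followed by bind/step.
func guardedInsert(insertSql: String) -> Bool {
    var stmt: COpaquePointer = nil
    // 1. Prepare; if this fails, stop here -- stmt is not valid for bind/step.
    guard sqlite3_prepare_v2(carParkDB, insertSql, -1, &stmt, nil) == SQLITE_OK else {
        print("prepare failed: \(String.fromCString(sqlite3_errmsg(carParkDB)))")
        return false
    }
    // 2. Bind, checking each return code rather than only the last one.
    guard sqlite3_bind_text(stmt, 1, "LTA", -1, SQLITE_TRANSIENT) == SQLITE_OK else {
        sqlite3_finalize(stmt)
        return false
    }
    // 3. Step, then finalize exactly once.
    let done = sqlite3_step(stmt) == SQLITE_DONE
    sqlite3_finalize(stmt)
    return done
}
```

This is only an illustration of the prepare -> bind -> step -> finalize lifecycle, not a drop-in replacement for the DAO above.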

NSUserDefaults only loads correctly half of the time

I am currently creating an app with Swift 2 that stores user-entered data. Right now I am just using NSUserDefaults, even though I will be using an external DB in the future.
With my current implementation, I have three dictionaries stored in NSUserDefaults: "allNouns", "myNouns", and "nounTimes". When I load my app, the data only loads every other time. My code to get the data and the log printouts are below.
func saveObject(object: AnyObject, objectKey: String) {
let objectData = NSKeyedArchiver.archivedDataWithRootObject(object)
NSUserDefaults.standardUserDefaults().setObject(objectData, forKey: objectKey)
}
func loadObject(objectKey: String) -> AnyObject? {
var object : AnyObject? = nil
if( NSUserDefaults.standardUserDefaults().objectForKey(objectKey) != nil ) {
let objectData = NSUserDefaults.standardUserDefaults().objectForKey(objectKey) as? NSData
if let objectData = objectData {
object = NSKeyedUnarchiver.unarchiveObjectWithData(objectData)!
}
}
return object
}
override func viewDidLoad() {
super.viewDidLoad()
if( NSUserDefaults.standardUserDefaults().objectForKey("allNouns") != nil ) {
allNouns = loadObject("allNouns") as! [Int : Noun]
allNounIdList = Array(allNouns.keys)
}
if( NSUserDefaults.standardUserDefaults().objectForKey("myNouns") != nil ) {
myNouns = loadObject("myNouns") as! [Int : [NSDate]]
myNounIdList = Array(myNouns.keys)
}
}
Here are all the places I save data:
func stashNoun(nounId: Int) {
let myNounTimes = myNouns[nounId]
if( myNounTimes == nil || myNounTimes!.isEmpty ) {
myNouns[nounId] = [ NSDate() ]
myNounIdList.append(nounId)
}
else {
myNouns[nounId]!.append(NSDate())
}
saveObject(myNouns, objectKey: "my")
}
#IBAction func addButtonClicked(sender: AnyObject) {
let newNoun = Noun(name: nameTextField, type: typeTextField, year: yearTextField)
allNouns[newNoun.id] = newNoun
allNounIdList.append(newNoun.id)
saveObject(allNouns, objectKey: "allNouns")
}
Here is a log of when I have data:
["AppleKeyboards": (
"en_US#hw=US;sw=QWERTY",
"emoji#sw=Emoji",
"en_US#hw=US;sw=QWERTY"
), "allNouns": <62706c69 73743030 d4010203 04050665 66582476 65727369 6f6e5824 6f626a65 63747359 24617263 68697665 72542474 6f701200 0186a0af 10130708 1718191a 1b292a31 3f404e4f 50515260 6155246e 756c6cd3 090a0b0c 1116574e 532e6b65 79735a4e 532e6f62 6a656374 73562463 6c617373 a40d0e0f 10800280 03800480 05a41213 14158006 8009800b 80108012 13688188 08c2d54a e71391c9 681dc633 746913ce dfdb57f5 78a12113 cddfb115 bd62466c d71c1d0b 1e1f2021 22222425 26222256 72656769 6f6e5563 6f6c6f72 52696454 79656172 54747970 65546e61 6d658007 80078008 13688188 08c2d54a e7100380 07800751 63d22b2c 2d2e5a24 636c6173 736e616d 65582463 6c617373 65735e43 6f726b53 74617368 2e57696e 65a22f30 5e436f72 6b537461 73682e57 696e6558 4e534f62 6a656374 d732330b 34353637 3838243b 3c383856 72656769 6f6e5563 6f6c6f72 52696454 79656172 54747970 65546e61 6d65800a 800a8008 1391c968 1dc63374 69100280 0a800a51 62d74142 0b434445 46474824 4a4b4c4d 56726567 696f6e55 636f6c6f 72526964 54796561 72547479 7065546e 616d6580 0e800f80 0813cedf db57f578 a1211107 c5800d80 0c5f101a 696c6927 73207375 70657220 64656c69 63696f75 73207769 6e655c70 696e6f74 20677269 67696f5a 41757374 72616c69 616e5577 68697465 d753540b 55565758 5959245c 5d595956 72656769 6f6e5563 6f6c6f72 52696454 79656172 54747970 65546e61 6d658011 80118008 13cddfb1 15bd6246 6c100180 11801151 61d22b2c 62635c4e 53446963 74696f6e 617279a2 64305c4e 53446963 74696f6e 6172795f 100f4e53 4b657965 64417263 68697665 72d16768 54726f6f 74800100 08001100 1a002300 2d003200 37004d00 53005a00 62006d00 74007900 7b007d00 7f008100 86008800 8a008c00 8e009000 9900a200 ab00b400 c300ca00 d000d300 d800dd00 e200e400 e600e800 f100f300 f500f700 f900fe01 09011201 21012401 33013c01 4b015201 58015b01 60016501 6a016c01 6e017001 79017b01 7d017f01 81019001 97019d01 a001a501 aa01af01 b101b301 b501be01 c101c301 c501e201 ef01fa02 00020f02 16021c02 1f022402 29022e02 30023202 34023d02 3f024102 43024502 4a025702 5a026702 79027c02 81000000 00000002 01000000 00000000 69000000 00000000 00000000 
00000002 83>, "AppleKeyboardsExpanded": 1, "AddingEmojiKeybordHandled": 1, "AppleLanguages": (
"en-US"
), "ApplePasscodeKeyboards": (
"en_US"
), "nounTimes": <62706c69 73743030 d4010203 0405063f 40582476 65727369 6f6e5824 6f626a65 63747359 24617263 68697665 72542474 6f701200 0186a0af 100f0708 15161718 1d21272a 2d313438 3b55246e 756c6cd3 090a0b0c 1014574e 532e6b65 79735a4e 532e6f62 6a656374 73562463 6c617373 a30d0e0f 80028003 8004a311 12138005 800a800c 800e1368 818808c2 d54ae713 cddfb115 bd62466c 13cedfdb 57f578a1 21d20a0b 191ca21a 1b800680 088009d2 1e0b1f20 574e532e 74696d65 2341bbaa 1eb23666 778007d2 22232425 5a24636c 6173736e 616d6558 24636c61 73736573 564e5344 617465a2 2426584e 534f626a 656374d2 1e0b2820 2341bbab 0aa3ea33 6a8007d2 22232b2c 574e5341 72726179 a22b26d2 0a0b2e1c a12f800b 8009d21e 0b322023 41bbaa1c 3b5fcfc4 8007d20a 0b351ca1 36800d80 09d21e0b 39202341 bbab0a88 d0ea0780 07d22223 3c3d5c4e 53446963 74696f6e 617279a2 3e265c4e 53446963 74696f6e 6172795f 100f4e53 4b657965 64417263 68697665 72d14142 54726f6f 74800100 08001100 1a002300 2d003200 37004900 4f005600 5e006900 70007400 76007800 7a007e00 80008200 84008600 8f009800 a100a600 a900ab00 ad00af00 b400bc00 c500c700 cc00d700 e000e700 ea00f300 f8010101 03010801 10011301 18011a01 1c011e01 23012c01 2e013301 35013701 39013e01 47014901 4e015b01 5e016b01 7d018001 85000000 00000002 01000000 00000000 43000000 00000000 00000000 00000001 87>, "AppleLocale": en_US, "NSInterfaceStyle": macintosh, "MSVLoggingMasterSwitchEnabledKey": 0, "NSLanguages": (
"en-US",
en
), "AppleITunesStoreItemKinds": (
audiobook,
"tv-episode",
booklet,
software,
"software-update",
"itunes-u",
ringtone,
"tv-season",
movie,
mix,
newsstand,
song,
wemix,
tone,
artist,
"podcast-episode",
podcast,
document,
eBook,
album,
"music-video"
), "AppleLanguagesDidMigrate": 9.0, "myNouns": <62706c69 73743030 d4010203 04050653 54582476 65727369 6f6e5824 6f626a65 63747359 24617263 68697665 72542474 6f701200 0186a0af 10100708 15161718 26272e3c 3d4b4c4d 4e4f5524 6e756c6c d3090a0b 0c101457 4e532e6b 6579735a 4e532e6f 626a6563 74735624 636c6173 73a30d0e 0f800280 038004a3 11121380 05800880 0a800f13 68818808 c2d54ae7 13cddfb1 15bd6246 6c13cedf db57f578 a121d719 1a0b1b1c 1d1e1f1f 2122231f 1f567265 67696f6e 55636f6c 6f725269 64547965 61725474 79706554 6e616d65 80068006 80071368 818808c2 d54ae710 03800680 065163d2 28292a2b 5a24636c 6173736e 616d6558 24636c61 73736573 5e436f72 6b537461 73682e57 696e65a2 2c2d5e43 6f726b53 74617368 2e57696e 65584e53 4f626a65 6374d72f 300b3132 33343535 21383935 35567265 67696f6e 55636f6c 6f725269 64547965 61725474 79706554 6e616d65 80098009 800713cd dfb115bd 62466c10 01800980 095161d7 3e3f0b40 41424344 45214748 494a5672 6567696f 6e55636f 6c6f7252 69645479 65617254 74797065 546e616d 65800d80 0e800713 cedfdb57 f578a121 1107c580 0c800b5f 101a696c 69277320 73757065 72206465 6c696369 6f757320 77696e65 5c70696e 6f742067 72696769 6f5a4175 73747261 6c69616e 55776869 7465d228 2950515c 4e534469 6374696f 6e617279 a2522d5c 4e534469 6374696f 6e617279 5f100f4e 534b6579 65644172 63686976 6572d155 5654726f 6f748001 00080011 001a0023 002d0032 0037004a 00500057 005f006a 00710075 00770079 007b007f 00810083 00850087 00900099 00a200b1 00b800be 00c100c6 00cb00d0 00d200d4 00d600df 00e100e3 00e500e7 00ec00f7 0100010f 01120121 012a0139 01400146 0149014e 01530158 015a015c 015e0167 0169016b 016d016f 017e0185 018b018e 01930198 019d019f 01a101a3 01ac01af 01b101b3 01d001dd 01e801ee 01f30200 02030210 02220225 022a0000 00000000 02010000 00000000 00570000 00000000 00000000 00000000 022c>]
Here is when it does not have my data saved:
["AppleLocale": en_US, "NSInterfaceStyle": macintosh, "MSVLoggingMasterSwitchEnabledKey": 0, "NSLanguages": (
"en-US",
en
), "AppleKeyboards": (
"en_US#hw=US;sw=QWERTY",
"emoji#sw=Emoji",
"en_US#hw=US;sw=QWERTY"
), "AppleKeyboardsExpanded": 1, "AppleITunesStoreItemKinds": (
audiobook,
"tv-episode",
booklet,
software,
"software-update",
"itunes-u",
ringtone,
"tv-season",
movie,
mix,
newsstand,
song,
wemix,
tone,
artist,
"podcast-episode",
podcast,
document,
eBook,
album,
"music-video"
), "AddingEmojiKeybordHandled": 1, "AppleLanguagesDidMigrate": 9.0, "AppleLanguages": (
"en-US"
), "ApplePasscodeKeyboards": (
"en_US"
)]
When you're done writing data to NSUserDefaults, call synchronize on the instance to be sure your data is persisted. Otherwise it may not be persisted in a timely fashion (the app may be terminated before this call is made automatically).
Check the documentation.
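Applied to the saveObject helper above, that would look like this (a sketch in the question's Swift 2 style):

```swift
func saveObject(object: AnyObject, objectKey: String) {
    let objectData = NSKeyedArchiver.archivedDataWithRootObject(object)
    let defaults = NSUserDefaults.standardUserDefaults()
    defaults.setObject(objectData, forKey: objectKey)
    // Force an immediate write to disk instead of waiting for the
    // periodic automatic save.
    defaults.synchronize()
}
```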

How do I use an effect audio unit?

I'm trying to play 8000 Hz samples and pass them through an effect.
I aim to boost the audio volume (the sample code doesn't do that yet).
I figured I needed an effect audio unit chained to the RemoteIO unit.
I also read that the effect unit is very strict about the format it can handle, mainly requiring 44.1 kHz, 32-bit floating-point samples.
So I added a converter unit.
Also, since I figure (not sure though) that iOS can't play 32-bit samples (thanks @hotpaw2!), I added another conversion back to 16 bits.
The problem is, I always get error -10868 while initializing the audio graph.
I get it without the last conversion unit as well.
If I connect the converter unit directly to the output (no effect unit), everything works fine (the 8 kHz samples play just fine).
What's going on?
/* Must use play & record category, for reasons beyond the scope of this question */
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionDuckOthers | AVAudioSessionCategoryOptionDefaultToSpeaker | AVAudioSessionCategoryOptionAllowBluetooth error:nil];
NSError* err = nil;
if (![[AVAudioSession sharedInstance] setPreferredSampleRate:44100 error:&err]){
NSLog(@"%@",err);
}
AudioUnit effect,convert,output,oconvert;
AUNode neffect,nconvert,noutput,noconvert;
AUGraph graph;
AudioComponentDescription deffect,dconvert,doutput;
AudioStreamBasicDescription in_format,out_format,effect_format;
// Formats
memset(&in_format,0,sizeof(in_format));
memset(&out_format, 0, sizeof(out_format));
memset(&effect_format, 0, sizeof(effect_format));
in_format.mSampleRate = 8000;
in_format.mChannelsPerFrame = 1;
in_format.mFormatID = kAudioFormatLinearPCM;
in_format.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
in_format.mBitsPerChannel = 16;
in_format.mFramesPerPacket = 1;
in_format.mBytesPerFrame = in_format.mChannelsPerFrame * (in_format.mBitsPerChannel / 8);
in_format.mBytesPerPacket = in_format.mBytesPerFrame * in_format.mFramesPerPacket;
out_format.mSampleRate = 44100;
out_format.mChannelsPerFrame = 1;
out_format.mFormatID = kAudioFormatLinearPCM;
out_format.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
out_format.mBitsPerChannel = 16;
out_format.mFramesPerPacket = 1;
out_format.mBytesPerFrame = out_format.mChannelsPerFrame * (out_format.mBitsPerChannel / 8);
out_format.mBytesPerPacket = out_format.mBytesPerFrame * out_format.mFramesPerPacket;
effect_format.mSampleRate = 44100;
effect_format.mChannelsPerFrame = 1;
effect_format.mFormatID = kAudioFormatLinearPCM;
effect_format.mFormatFlags = kAudioFormatFlagsNativeFloatPacked;
effect_format.mBitsPerChannel = 32;
effect_format.mFramesPerPacket = 1;
effect_format.mBytesPerFrame = effect_format.mChannelsPerFrame * (effect_format.mBitsPerChannel / 8);
effect_format.mBytesPerPacket = effect_format.mBytesPerFrame * effect_format.mFramesPerPacket;
// Descriptions
memset(&doutput, 0, sizeof(doutput));
memset(&deffect, 0, sizeof(deffect));
memset(&dconvert, 0, sizeof(dconvert));
doutput.componentType = kAudioUnitType_Output;
doutput.componentSubType = kAudioUnitSubType_RemoteIO;
doutput.componentManufacturer = deffect.componentManufacturer = dconvert.componentManufacturer = kAudioUnitManufacturer_Apple;
dconvert.componentType = kAudioUnitType_FormatConverter;
dconvert.componentSubType = kAudioUnitSubType_AUConverter;
deffect.componentType = kAudioUnitType_Effect;
deffect.componentSubType = kAudioUnitSubType_DynamicsProcessor;
// Create graph
SdCheck(NewAUGraph(&graph));
// Create nodes;
SdCheck(AUGraphAddNode(graph, &deffect, &neffect));
SdCheck(AUGraphAddNode(graph, &doutput, &noutput));
SdCheck(AUGraphAddNode(graph, &dconvert, &nconvert));
SdCheck(AUGraphAddNode(graph, &dconvert, &noconvert));
// Open graph
SdCheck(AUGraphOpen(graph));
// Get units
SdCheck(AUGraphNodeInfo(graph, neffect,NULL, &effect));
SdCheck(AUGraphNodeInfo(graph, noutput,NULL, &output));
SdCheck(AUGraphNodeInfo(graph, nconvert,NULL, &convert));
SdCheck(AUGraphNodeInfo(graph, noconvert,NULL, &oconvert));
// Set formats
SdCheck(AudioUnitSetProperty (output,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&out_format,
sizeof(out_format)));
SdCheck(AudioUnitSetProperty (convert,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&effect_format,
sizeof(effect_format)));
SdCheck(AudioUnitSetProperty (convert,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0,
&in_format,
sizeof(in_format)));
SdCheck(AudioUnitSetProperty (effect,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&effect_format,
sizeof(effect_format)));
SdCheck(AudioUnitSetProperty (effect,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0,
&effect_format,
sizeof(effect_format)));
SdCheck(AudioUnitSetProperty (oconvert,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&out_format,
sizeof(out_format)));
SdCheck(AudioUnitSetProperty (oconvert,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0,
&effect_format,
sizeof(effect_format)));
// Connect nodes
SdCheck(AUGraphConnectNodeInput(graph, nconvert, 0, neffect, 0));
SdCheck(AUGraphConnectNodeInput(graph, neffect, 0, noconvert, 0));
SdCheck(AUGraphConnectNodeInput(graph, noconvert, 0, noutput, 0));
// Set render callback
AURenderCallbackStruct input;
memset(&input, 0, sizeof(input));
input.inputProc = SdInputProc;
input.inputProcRefCon = (__bridge void*)self;
SdCheck(AUGraphSetNodeInputCallback(graph, nconvert, 0, &input));
// Initialize graph
/*** The following fails with error -10868 (unsupported format) ***/
SdCheck(AUGraphInitialize(graph));
Most effect Audio Units on iOS require that a 32-bit floating-point format be used on their input and output connections. Your example code attempts to configure an effect unit with 16-bit integer I/O, which won't work — that is why AUGraphInitialize fails with -10868 (kAudioUnitErr_FormatNotSupported).
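As a sketch of what "32-bit float I/O" means concretely, here is how the fields of such a format fit together. The struct below mirrors CoreAudio's AudioStreamBasicDescription field-for-field so the example compiles on its own; in real code you would include <AudioToolbox/AudioToolbox.h> and use the framework's type and its flag constants instead of the local ones defined here:

```c
#include <stdint.h>

/* Minimal stand-in for CoreAudio's AudioStreamBasicDescription,
   declared locally so this sketch is self-contained. */
typedef struct {
    double   mSampleRate;
    uint32_t mFormatID;
    uint32_t mFormatFlags;
    uint32_t mBytesPerPacket;
    uint32_t mFramesPerPacket;
    uint32_t mBytesPerFrame;
    uint32_t mChannelsPerFrame;
    uint32_t mBitsPerChannel;
} StreamFormat;

/* Flag values as defined in CoreAudioTypes.h */
enum {
    kFlagIsFloat  = 1u << 0,   /* kAudioFormatFlagIsFloat  */
    kFlagIsPacked = 1u << 3,   /* kAudioFormatFlagIsPacked */
};

/* Build a mono, packed, 32-bit float linear PCM format --
   the kind of layout effect units such as AULowPass expect. */
static StreamFormat MakeFloatFormat(double sampleRate) {
    StreamFormat f = {0};
    f.mSampleRate       = sampleRate;
    f.mFormatID         = 0x6C70636D;  /* 'lpcm', kAudioFormatLinearPCM */
    f.mFormatFlags      = kFlagIsFloat | kFlagIsPacked;
    f.mChannelsPerFrame = 1;
    f.mBitsPerChannel   = 32;
    f.mFramesPerPacket  = 1;
    /* Derived sizes: for packed linear PCM these must be consistent,
       or the unit rejects the format. */
    f.mBytesPerFrame    = (f.mBitsPerChannel / 8) * f.mChannelsPerFrame;
    f.mBytesPerPacket   = f.mBytesPerFrame * f.mFramesPerPacket;
    return f;
}
```

Setting a format like this on the effect's input and output scopes (and letting the converter units translate to and from your 16-bit stream) is what makes the graph initialize.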

Adding an AULowPass filter in between 2 AudioUnits

I have modified the code provided by Tim Bolstad http://timbolstad.com/2010/03/16/core-audio-getting-started-pt2/ (may God bless him), and added a small slider to be able to change the output tone frequency from 40 Hz to 200000 Hz. I now want to be able to use an LPF on the tone generated.
First of all, does anyone have a detailed guide which explains how to do this? I've tried simply adding a node in between, but it doesn't work. Apparently I need to convert the 16-bit integer samples to the 8.24 fixed-point format before giving audio sample inputs to the filter, and then convert back to 16-bit integers afterwards. Is this the problem, or have I connected the node wrongly?
Where am I supposed to set the filter's cutoff frequency and other parameters?
Can anyone explain what AudioUnitGetProperty does? Apple's documentation on these topics is EXTREMELY fragmented and utterly worthless :(
-(void) initializeAUGraph
{
OSStatus result = noErr;
result = NewAUGraph(&mGraph);
AUNode outputNode;
AUNode mixerNode;
AUNode effectsNode;
AudioComponentDescription effects_desc;
effects_desc.componentType = kAudioUnitType_Effect;
effects_desc.componentSubType = kAudioUnitSubType_LowPassFilter;
effects_desc.componentFlags = 0;
effects_desc.componentFlagsMask = 0;
effects_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponentDescription mixer_desc;
mixer_desc.componentType = kAudioUnitType_Mixer;
mixer_desc.componentSubType = kAudioUnitSubType_MultiChannelMixer;
mixer_desc.componentFlags = 0;
mixer_desc.componentFlagsMask = 0;
mixer_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponentDescription output_desc;
output_desc.componentType = kAudioUnitType_Output;
output_desc.componentSubType = kAudioUnitSubType_RemoteIO;
output_desc.componentFlags = 0;
output_desc.componentFlagsMask = 0;
output_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
result = AUGraphAddNode(mGraph, &output_desc, &outputNode);
result = AUGraphAddNode(mGraph, &mixer_desc, &mixerNode);
result = AUGraphAddNode(mGraph, &effects_desc, &effectsNode);
result = AUGraphConnectNodeInput(mGraph, mixerNode, 0, effectsNode, 0);
result = AUGraphConnectNodeInput(mGraph, effectsNode, 0, outputNode, 0);
result = AUGraphOpen(mGraph);
// Get the mixer and effect units from their nodes
result = AUGraphNodeInfo(mGraph, mixerNode, NULL, &mMixer);
result = AUGraphNodeInfo(mGraph, effectsNode, NULL, &mEffects);
UInt32 numbuses = 1;
UInt32 size = sizeof(numbuses);
result = AudioUnitSetProperty(mMixer, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0, &numbuses, size);
//=====
CAStreamBasicDescription desc;
// Loop through and setup a callback for each source you want to send to the mixer.
// Right now we are only doing a single bus so we could do without the loop.
for (int i = 0; i < numbuses; ++i)
{
// Setup render callback struct
// This struct describes the function that will be called
// to provide a buffer of audio samples for the mixer unit.
AURenderCallbackStruct renderCallbackStruct;
renderCallbackStruct.inputProc = &renderInput;
renderCallbackStruct.inputProcRefCon = self;
// Set a callback for the specified node's specified input
result = AUGraphSetNodeInputCallback(mGraph, mixerNode, i, &renderCallbackStruct);
// Get a CAStreamBasicDescription from the mixer bus.
size = sizeof(desc);
result = AudioUnitGetProperty( mMixer,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
i,
&desc,
&size);
// Initializes the structure to 0 to ensure there are no spurious values.
memset (&desc, 0, sizeof (desc));
// Make modifications to the CAStreamBasicDescription
// We're going to use 16 bit Signed Ints because they're easier to deal with
// The Mixer unit will accept either 16 bit signed integers or
// 32 bit 8.24 fixed point integers.
desc.mSampleRate = kGraphSampleRate; // set sample rate
desc.mFormatID = kAudioFormatLinearPCM;
desc.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
desc.mBitsPerChannel = sizeof(AudioSampleType) * 8; // AudioSampleType == 16 bit signed ints
desc.mChannelsPerFrame = 1;
desc.mFramesPerPacket = 1;
desc.mBytesPerFrame = ( desc.mBitsPerChannel / 8 ) * desc.mChannelsPerFrame;
desc.mBytesPerPacket = desc.mBytesPerFrame * desc.mFramesPerPacket;
printf("Mixer file format: "); desc.Print();
// Apply the modified CAStreamBasicDescription to the mixer input bus
result = AudioUnitSetProperty( mMixer,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
i,
&desc,
sizeof(desc));
}
// Apply the CAStreamBasicDescription to the mixer output bus
result = AudioUnitSetProperty( mMixer,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0,
&desc,
sizeof(desc));
//************************************************************
//*** Setup the audio output stream ***
//************************************************************
// Get a CAStreamBasicDescription from the output Audio Unit
result = AudioUnitGetProperty( mMixer,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0,
&desc,
&size);
// Initializes the structure to 0 to ensure there are no spurious values.
memset (&desc, 0, sizeof (desc));
// Make modifications to the CAStreamBasicDescription
// AUCanonical on the iPhone is the 8.24 integer format that is native to the iPhone.
// The Mixer unit does the format shifting for you.
desc.SetAUCanonical(1, true);
desc.mSampleRate = kGraphSampleRate;
// Apply the modified CAStreamBasicDescription to the output Audio Unit
result = AudioUnitSetProperty( mMixer,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0,
&desc,
sizeof(desc));
// Once everything is set up call initialize to validate connections
result = AUGraphInitialize(mGraph);
}
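A note on the "8.24" format the comments above keep mentioning: it is a fixed-point layout (8 integer bits, 24 fractional bits), not floating point, and converting between it and 16-bit samples is just a shift. The sketch below is plain C with no CoreAudio dependency, assuming the canonical mapping where 1.0 is represented as 1 << 24 (so a 16-bit sample, with 15 fractional bits, shifts by 9):

```c
#include <stdint.h>

/* Convert a 16-bit signed sample to 8.24 fixed point.
   In 8.24, full scale 1.0 is 1 << 24; an int16 has 15 fractional
   bits, so amplitude is preserved by shifting left 24 - 15 = 9 bits.
   Multiplication is used instead of << to avoid shifting a
   negative value, which is undefined behavior in C. */
static int32_t int16_to_8_24(int16_t s) {
    return (int32_t)s * 512;   /* 512 == 1 << 9 */
}

/* Convert back, truncating the extra fractional precision. */
static int16_t fixed8_24_to_int16(int32_t s) {
    return (int16_t)(s / 512);
}
```

In practice you rarely write this by hand — an AUConverter node (or the mixer itself, as the comments note) does the format shifting for you — but it shows what the bits mean.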
Can anyone explain what AudioUnitGetProperty does?
Well, it gets the value of a property from an Audio Unit. A "property" is typically something you deal with as a programmer (e.g. audio stream format, connection state), whereas a "parameter" is usually something you expose to the user (e.g. low pass cutoff frequency, mixer volume). Notice that there are AudioUnitGetParameter and AudioUnitSetParameter functions to complement the AudioUnitGetProperty and AudioUnitSetProperty functions.
You're basically expected to "just know" what an Audio Unit's properties / parameters are and what values they're expecting. The best source of documentation on this are two headers in AudioUnit.framework. Namely, AudioUnitProperties.h and AudioUnitParameters.h. The next best source is Xcode's autocomplete. For example, the AULowPass' parameters are kLowPassParam_CutoffFrequency and kLowPassParam_Resonance, so you can just type kLowPassParam and Xcode will show you what's available. The other AUs typically follow this scheme.
...but it doesn't work, apparently
I'm going to need more information. Do you mean you just can't hear the difference? The AULowPass starts with a very high cutoff frequency, so unless you set it to something lower you probably won't hear any difference at all.
Try setting the cutoff frequency to something quite low, for example 500 Hz. You do that like this:
AudioUnitSetParameter(mEffects,
kLowPassParam_CutoffFrequency,
kAudioUnitScope_Global,
0,
500,
0);

Resources