I am attempting to use an AVAudioEnvironmentNode to produce 3D spatialized sound for a game I'm working on. The AVAudioEnvironmentNode documentation states, "It is important to note that only inputs with a mono channel connection format to the environment node are spatialized. If the input is stereo, the audio is passed through without being spatialized. Currently inputs with connection formats of more than 2 channels are not supported." I have indeed found this to be the case. When I load audio buffers with two channels into an AVAudioPlayerNode and connect the node to an AVAudioEnvironmentNode, the output sound is not spatialized. My question is: how can I send mono data to the AVAudioEnvironmentNode?
I've tried creating a mono .wav file using Audacity as well as loading an AVAudioPCMBuffer with sine-wave data programmatically. Either way, when I create a single-channel audio buffer and attempt to load it into an AVAudioPlayerNode, my program crashes with the following error:
2016-02-17 06:36:07.695 Test Scene[1577:751557] 06:36:07.694 ERROR: [0x1a1691000] AVAudioPlayerNode.mm:423: ScheduleBuffer: required condition is false: _outputFormat.channelCount == buffer.format.channelCount

2016-02-17 06:36:07.698 Test Scene[1577:751557] *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: _outputFormat.channelCount == buffer.format.channelCount'
Checking the AVAudioPlayerNode output bus does indeed reveal that it expects 2 channels. It's unclear to me how this can be changed, or even if it should be.
I stipulate that I have very little experience working with AVFoundation or audio data in general. Any help you can provide would be greatly appreciated.
I hope you have solved this already, but for anyone else:
When connecting your AVAudioPlayerNode to an AVAudioMixerNode or any other node that it can connect to, you need to specify the number of channels there:
audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: AVAudioFormat(standardFormatWithSampleRate: 96000, channels: 1))
You can check whether your file's sample rate is 96000 in Audacity, or with 'Get Info' in Finder.
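For the original spatialization question, here is a minimal, untested sketch of the same idea applied to an AVAudioEnvironmentNode. The 44100 Hz sample rate and the monoFileURL path are placeholders; use the actual format and location of your single-channel file:

import AVFoundation

let engine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()
let environmentNode = AVAudioEnvironmentNode()

engine.attach(playerNode)
engine.attach(environmentNode)

// A mono connection format is what lets the environment node spatialize this input.
let monoFormat = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 1)
engine.connect(playerNode, to: environmentNode, format: monoFormat)

// The environment node itself can feed the main mixer in stereo.
let stereoFormat = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 2)
engine.connect(environmentNode, to: engine.mainMixerNode, format: stereoFormat)

// Position the source relative to the listener, then schedule a mono file.
playerNode.position = AVAudio3DPoint(x: 2, y: 0, z: -5)
let monoFileURL = URL(fileURLWithPath: "path/to/mono.wav")   // placeholder path

do {
    let file = try AVAudioFile(forReading: monoFileURL)
    try engine.start()
    playerNode.scheduleFile(file, at: nil, completionHandler: nil)
    playerNode.play()
} catch {
    print("Audio setup failed: \(error)")
}

The key point, matching the answer above, is the channels: 1 in the player-to-environment connection format.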
I am developing a MIDI player by referring to the following web page:
http://twocentstudios.com/2017/02/20/bouncing-midi-to-audio-on-ios/
I don't do any recording; I just want to play the SMF file.
However, when I call setPreload(true), it fails with "ASSERTION FAILED: Preroll mode set during render" and my app hangs.
I searched for "Preroll mode set during render" but couldn't find any useful information.
Can someone please help?
EDIT:
Hi, @dspr.
The percussion sounds even if I don't call AudioUnitSetProperty(kAUMIDISynthProperty_EnablePreload: 1).
I think this is because the bank for percussion is automatically assigned to channel 10.
However, in this state the piano, guitar, and other instruments do not sound.
AVAudioUnitMIDIInstrument needs kAUMIDISynthProperty_EnablePreload to work out which tone is assigned to which track in the SMF file, right?
Which method does AVAudioUnitMIDIInstrument use to preload SMF files?
(1) AudioUnitSetProperty(kAUMIDISynthProperty_EnablePreload: 1) on the AVAudioUnitMIDISynth
(2) << How to preload? >>
(3) AudioUnitSetProperty(kAUMIDISynthProperty_EnablePreload: 0) on the AVAudioUnitMIDISynth
(4) Start the AVAudioSequencer
MIDI Player uses the kAUMIDISynthProperty_EnablePreload property of the MIDISynth for that purpose. See Apple's comment about it below. Note the "It should only be used prior to MIDI playback, and must be set back to 0 before attempting to start playback" sentence at the end:
/*!
    @constant   kAUMIDISynthProperty_EnablePreload
    @discussion     Scope:      Global
                    Value Type: UInt32
                    Access:     Write

        Setting this property to 1 puts the MIDISynth in a mode where it will attempt to load
        instruments from the bank or file when it receives a program change message. This
        is used internally by the MusicSequence. It should only be used prior to MIDI playback,
        and must be set back to 0 before attempting to start playback.
*/
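For reference, here is a rough, untested Swift sketch of that sequence. midiSynth is assumed to be your AVAudioUnitMIDIInstrument-based synth already attached to the engine, sequencer an AVAudioSequencer loaded with the SMF file, and patches the program numbers the file uses; all three names are placeholders from your own setup:

import AVFoundation
import AudioToolbox

func preload(patches: [UInt8], into midiSynth: AVAudioUnitMIDIInstrument) {
    var enable: UInt32 = 1
    // (1) Put the synth into preload mode.
    AudioUnitSetProperty(midiSynth.audioUnit,
                         AudioUnitPropertyID(kAUMIDISynthProperty_EnablePreload),
                         AudioUnitScope(kAudioUnitScope_Global),
                         0, &enable, UInt32(MemoryLayout<UInt32>.size))

    // (2) Send a program change for each patch used by the file; in preload
    //     mode this only loads the instruments, it produces no sound.
    //     0xC0 is a program change on MIDI channel 1; OR in the channel number
    //     if your file uses other channels.
    for patch in patches {
        MusicDeviceMIDIEvent(midiSynth.audioUnit, 0xC0, UInt32(patch), 0, 0)
    }

    // (3) Leave preload mode *before* starting playback, as the header comment requires.
    enable = 0
    AudioUnitSetProperty(midiSynth.audioUnit,
                         AudioUnitPropertyID(kAUMIDISynthProperty_EnablePreload),
                         AudioUnitScope(kAudioUnitScope_Global),
                         0, &enable, UInt32(MemoryLayout<UInt32>.size))
}

// (4) Only then start the sequencer:
// try sequencer.start()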
EDIT: Frankly, I have some reservations about your link:
One strategy I haven’t tried would be to pitch shift the MIDI up one octave, play it back at 2x, record it at 88.2kHz, then downsample to 44.1kHz. AVAudioSession presumably can’t go past 48kHz though.
Clearly, the person who wrote that has a very poor understanding of audio and sampling. Playing a MIDI song transposed one octave up at double tempo is really not equivalent to playing the same song recorded as audio at double speed, whether you make the recording at 88.2 kHz or any other sample rate. As a simple example, what happens if the file contains a drum set? A snare drum (note 40) would become a Chinese cymbal (note 52) played at half speed?
As far as I understand that post, the sole purpose of the described hack is to make a recording. So if you simply want to play your MIDI file back, you can certainly find a simpler and better example.
I am using a CANCase VN1640A between 2 ECUs in order to falsify a CAN message. Below is the bridge simulation setup:
In my CAPL code, messages received on channel 1 are redirected to channel 3 and vice versa. (So far I am not falsifying any messages.)
variables {
  message can1.* msgCAN1;
  message can3.* msgCAN3;
}

on message can1.* {
  msgCAN3 = this;
  if (this.dir == rx)
    output(msgCAN3);
}

on message can3.* {
  msgCAN1 = this;
  if (this.dir == rx)
    output(msgCAN1);
}
But when I start CANoe, I get this error message:
This error means that CANoe is trying to send more than it can and the transmit buffer overflows. I have already increased the hardware Transmit Queue size to the maximum of 32768 messages and set the Receive Latency to "very fast", but unfortunately the error still occurs.
Does anyone have any hints that could help solve this problem? Thanks in advance.
The error message can mean that CANoe is trying to send more than the hardware can handle, so the transmit buffer overflows. This can have several causes:
- the bus is full of high-priority messages and therefore the CAN hardware cannot send;
- a program writes messages to the buffer faster than the card can send them (e.g. in for/while loops);
- error frames occur when sending, so the card cannot send.
Vector tools provide a loop test:
Send messages from CH1 to CH3. If this works fine, the problem is likely caused by your CANoe configuration.
The necessary test programs are part of the Vector Driver Setup files and are located in the Common folder. You can download the Driver Setup file from www.vector.com/driver-setup.
CAN Highspeed Looptest: http://kb.vector.com/entry/589/
CAN Low-speed Looptest: http://kb.vector.com/entry/590/
If the loop test works, you will see the time, the bus load, etc. If not, you will get a failure message.
Note:
Reduce the number of channels used in CANoe/CANalyzer under:
Configuration | Options | Measurement | General | Channel usage.
Are there more channels selected in the CANoe configuration than CANcabs assigned in the Vector Hardware Config?
(Start | Control Panel | Hardware and Sound | Vector Hardware)
Please check the channel and application assignment in the Vector Hardware Config.
Kindly check the hardware mapping in CANoe. This error mostly arises when the mapping is not correct or has been disturbed.
Go to Hardware -> Network Hardware Configuration -> Driver -> select the proper channel for the Vector hardware.
I hope this helps!
So this error does NOT mean that CANoe is trying to send more than it can.
It means instead:
There are (many) error frames on the CAN bus. CANoe tries to send messages, which does not work (for whatever reason), so error frames are the result. The CAN controller will retry sending the frame, which might again lead to an error frame. Over time the send requests accumulate and lead to further error frames. At some point the buffer for the error frames overflows, which leads to the message you see in the Write window.
Solution:
We have to check the Trace window to see what kind of error frames we get there (and then take suitable measures to prevent them).
I want to play recorded audio directly to the speaker when a headset is plugged into an iOS device.
What I do is call AudioUnitRender in my AURenderCallback so that the audio data is written to an AudioBuffer structure.
It works well if the "IO buffer duration" is not set or is set to 0.020 seconds. If the "IO buffer duration" is set to a smaller value (0.005, etc.) by calling setPreferredIOBufferDuration, AudioUnitRender() returns an error:
kAudioUnitErr_CannotDoInCurrentContext (-10863).
Can anyone help figure out why this happens and how to resolve it? Thanks.
Just wanted to add that changing the output-scope sample rate to match the input-scope sample rate of the OS X kAudioUnitSubType_HALOutput audio unit I was using fixed this error for me.
The buffer is full, so wait until a subsequent render pass or use a larger buffer.
This same error code is used by Audio Toolbox, Audio Units, and AUGraph, but it is only documented for AUGraph:
To avoid spinning or waiting in the render thread (a bad idea!), many of the calls to AUGraph can return kAUGraphErr_CannotDoInCurrentContext. This result is only generated when you call an AUGraph API from its render callback. It means that the lock that it required was held at that time, by another thread. If you see this result code, you can generally attempt the action again - typically the NEXT render cycle (so in the meantime the lock can be cleared), or you can delegate that call to another thread in your app. You should not spin or put-to-sleep the render thread.
https://developer.apple.com/reference/audiotoolbox/kaugrapherr_cannotdoincurrentcontext
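Applied to the question above, here is a hedged sketch (not the asker's code) of how the render callback can treat that status: names such as rioUnit, the assumption that the remote IO unit was passed as the refCon, and input bus 1 are placeholders for your own setup.

import AudioToolbox
import Foundation

let renderCallback: AURenderCallback = { inRefCon, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, ioData in
    // The remote IO unit was stashed in the refCon when the callback was registered.
    let rioUnit = unsafeBitCast(inRefCon, to: AudioUnit.self)

    // Pull the recorded samples from the input bus (bus 1 on a remote IO unit).
    let status = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData!)

    if status == kAudioUnitErr_CannotDoInCurrentContext {
        // Input isn't ready yet at this small IO buffer duration. Don't spin or
        // sleep on the render thread: output silence for this cycle and simply
        // try again on the next render pass.
        if let abl = UnsafeMutableAudioBufferListPointer(ioData) {
            for buffer in abl {
                memset(buffer.mData, 0, Int(buffer.mDataByteSize))
            }
        }
        ioActionFlags.pointee.insert(.unitRenderAction_OutputIsSilence)
        return noErr
    }
    return status
}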
I am trying to build a very simple audio effects chain using Core Audio for iOS. So far I have implemented an EQ - Compression - Limiter chain which works perfectly fine in the simulator. However on device, the application crashes when connecting nodes to the AVAudioEngine due to an apparent mismatch in the input and output hardware formats.
'com.apple.coreaudio.avfaudio', reason: 'required condition is false:
IsFormatSampleRateAndChannelCountValid(outputHWFormat)'
Taking a basic example, my Audio Graph is as follows.
Mic -> Limiter -> Main Mixer (and Output)
and the graph is populated using
engine.connect(engine.inputNode!, to: limiter, format: engine.inputNode!.outputFormatForBus(0))
engine.connect(limiter, to: engine.mainMixerNode, format: engine.inputNode!.outputFormatForBus(0))
which crashes with the above exception. If I instead use the limiter's format when connecting to the mixer
engine.connect(engine.inputNode!, to: limiter, format: engine.inputNode!.outputFormatForBus(0))
engine.connect(limiter, to: engine.mainMixerNode, format: limiter.outputFormatForBus(0))
the application crashes with a kAudioUnitErr_FormatNotSupported error
'com.apple.coreaudio.avfaudio', reason: 'error -10868'
Before connecting the audio nodes in the engine, the inputNode has 1 channel and a sample rate of 44,100 Hz, while the outputNode has 0 channels and a sample rate of 0 Hz (deduced using outputFormatForBus(0)). Could this be because no node is yet connected to the output mixer? Setting the preferred sample rate on AVAudioSession made no difference.
Is there something that I am missing here? I have microphone access (verified using AVAudioSession.sharedInstance().recordPermission()), and I have set the AVAudioSession category to record (AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryRecord)).
The limiter is an AVAudioUnitEffect initialized as follows:
let limiter = AVAudioUnitEffect(audioComponentDescription:
    AudioComponentDescription(
        componentType: kAudioUnitType_Effect,
        componentSubType: kAudioUnitSubType_PeakLimiter,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0))
engine.attachNode(limiter)
and engine is a global, class variable
var engine = AVAudioEngine()
As I said, this works perfectly fine in the simulator (using the Mac's default hardware), but continually crashes on various iPads on iOS 8 and iOS 9. I have a super basic example working which simply feeds the mic input through a player to the output mixer:
do {
    file = try AVAudioFile(forWriting: NSURL.URLToDocumentsFolderForName(name: "test", WithType: "caf")!, settings: engine.inputNode!.outputFormatForBus(0).settings)
} catch {}
engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
Here the inputNode has 1 channel and a 44,100 Hz sample rate, while the outputNode has 2 channels and a 44,100 Hz sample rate, but no mismatch seems to occur. Thus the issue must be the manner in which the AVAudioUnitEffect is connected to the output mixer.
Any help would be greatly appreciated.
This depends on some factors outside of the code you've shared, but it's possible you're using the wrong AVAudioSession category.
I ran into this same issue under slightly different circumstances. When I was using AVAudioSessionCategoryRecord as the AVAudioSession category and attempted to connect an audio tap, I not only received that error, but my AVAudioEngine inputNode showed an outputFormat with a 0.0 sample rate.
Changing it to AVAudioSessionCategoryPlayAndRecord, I received the expected 44,100 Hz sample rate and the issue was resolved.
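To make that concrete, here is a minimal sketch in current Swift syntax (not the asker's exact code): activate a play-and-record session before connecting any nodes, so that inputNode reports a real hardware format instead of 0 Hz / 0 channels. The limiter chain from the question would then be attached and connected in the same way.

import AVFoundation

func startEngine(_ engine: AVAudioEngine) throws {
    // Configure and activate the session first.
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, options: [.defaultToSpeaker])
    try session.setActive(true)

    // Only now does inputNode report the real hardware format (e.g. 44,100 Hz, 1 channel).
    let inputFormat = engine.inputNode.outputFormat(forBus: 0)
    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: inputFormat)
    try engine.start()
}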
I'm receiving a series of UDP packets from a socket containing encoded PCM buffers. After decoding them, I'm left with an int16 * audio buffer, which I'd like to immediately play back.
The intended logic goes something like this:
init() {
    initTrack(track, output, channels, sample_rate, ...);
}

onReceiveBufferFromSocket(NSData data) {
    // Decode the buffer
    int16 *buf = handle_data(data);

    // Play data
    write_to_track(track, buf, length_of_buf, etc);
}
I'm not sure about everything that has to do with playing back the buffers, though. On Android, I'm able to achieve this by creating an AudioTrack object, setting it up by specifying a sample rate, a format, channels, etc., and then just calling the "write" method with the buffer (like I wish I could in my pseudo-code above), but on iOS I'm coming up short.
I tried using Audio File Stream Services, but I'm guessing I'm doing something wrong since no sound ever comes out, and I feel like those functions by themselves don't actually do any playback. I also attempted to understand Audio Queue Services (which I think might be close to what I want), but I was unable to find any simple code samples for their usage.
Any help would be greatly appreciated, especially in the form of example code.
You need to use some type of buffer to hold your incoming UDP data. This is an easy and good circular buffer that I have used.
Then, to play back data from the buffer, you can use the Audio Unit framework. Here is a good example project.
Note: the first link also shows you how to play back using Audio Units.
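To illustrate the circular-buffer idea, here is a hedged, minimal Swift sketch of my own (not the implementation linked above): the socket thread writes decoded samples, the playback side reads them, and the oldest data is dropped when the buffer is full.

import Foundation

final class Int16RingBuffer {
    private var storage: [Int16]
    private var readIndex = 0
    private var writeIndex = 0
    private var count = 0
    private let lock = NSLock()

    init(capacity: Int) {
        storage = [Int16](repeating: 0, count: capacity)
    }

    // Append decoded samples from the network thread.
    func write(_ samples: [Int16]) {
        lock.lock(); defer { lock.unlock() }
        for s in samples {
            storage[writeIndex] = s
            writeIndex = (writeIndex + 1) % storage.count
            if count == storage.count {
                readIndex = (readIndex + 1) % storage.count   // overwrite oldest sample
            } else {
                count += 1
            }
        }
    }

    // Pull up to maxCount samples for playback; returns whatever is available.
    func read(maxCount: Int) -> [Int16] {
        lock.lock(); defer { lock.unlock() }
        let n = min(maxCount, count)
        var out = [Int16]()
        out.reserveCapacity(n)
        for _ in 0..<n {
            out.append(storage[readIndex])
            readIndex = (readIndex + 1) % storage.count
        }
        count -= n
        return out
    }
}

Note that an NSLock is fine if you drain the buffer from an ordinary thread, but if you read it from a real-time render callback, a lock-free design like the circular buffer linked above is the safer choice.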
You could use Audio Queue Services as well; make sure you're doing some kind of packet re-ordering. If you're using ffmpeg to decode the streams, there is an option for this.
Otherwise, audio queues are easy to set up.
https://github.com/mooncatventures-group/iFrameExtractor/blob/master/Classes/AudioController.m
You could also use Audio Units, though that is a bit more complicated.
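If Audio Queues or raw Audio Units feel heavyweight, another hedged option (my own sketch, not from either answer) is AVAudioEngine with an AVAudioPlayerNode, scheduling each decoded packet as an AVAudioPCMBuffer. The 44,100 Hz mono format is an assumption; use whatever your decoder actually outputs:

import AVFoundation

final class PCMPlayer {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    // Standard (deinterleaved Float32) format; 44,100 Hz mono is an assumption.
    private let format = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 1)!

    init() throws {
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: format)
        try engine.start()
        player.play()
    }

    // Call this from the socket callback with the decoded int16 samples.
    func play(samples: UnsafePointer<Int16>, count: Int) {
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                            frameCapacity: AVAudioFrameCount(count)),
              let channel = buffer.floatChannelData?[0] else { return }
        buffer.frameLength = AVAudioFrameCount(count)
        // Convert int16 samples to Float32 in the range [-1, 1].
        for i in 0..<count {
            channel[i] = Float(samples[i]) / Float(Int16.max)
        }
        // Buffers are queued and played back in the order they are scheduled.
        player.scheduleBuffer(buffer, completionHandler: nil)
    }
}

Here play(samples:count:) would take the place of write_to_track in the pseudo-code above; AVAudioPlayerNode queues scheduled buffers but does no jitter buffering, so keeping a small circular buffer as suggested earlier is still a good idea.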