AudioUnitSetParameter(appBufferContext->unit, kDynamicsProcessorParam_MasterGain, kAudioUnitScope_Output, 0, 0.5 , 0);
This line returns -50. What does that error mean? What I actually want to do is adjust the volume of streamed audio packets.
You have invoked this function with a bad/invalid parameter (-50 is kAudio_ParamError: an error in the parameter list of the function). Without knowing more about your graph, I recommend first double-checking your scopes and elements, per: https://developer.apple.com/documentation/audiotoolbox/1438454-audiounitsetparameter?language=occ
For example, is your scope == kAudioUnitScope_Output correct? What about the element == 0?
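In particular, effect-unit parameters such as the DynamicsProcessor's are normally addressed on the Global scope, element 0, so a minimal sketch of what I would try first (assuming appBufferContext->unit really is an initialized DynamicsProcessor instance) is:

OSStatus status = AudioUnitSetParameter(appBufferContext->unit,
                                        kDynamicsProcessorParam_MasterGain,
                                        kAudioUnitScope_Global,  // instead of kAudioUnitScope_Output
                                        0,                       // element 0
                                        0.5,                     // gain in dB
                                        0);                      // inBufferOffsetInFrames
// status should now be noErr (0); anything else still points at a scope/element/unit problem.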
If you don't know, use CAShow() to debug your graph, so you can see exactly how components are wired up.
In the future, you can check CoreAudio error codes here:
https://www.osstatus.com/search/results?platform=all&framework=all&search=-50
I verified that the video stream displays correctly in a QML video surface. Now I want to get the video frame data to do some processing of my own, but so far it doesn't seem to be working. I made a simple pipeline like the one below to focus the test.
nvarguscamerasrc - appsink
I used QGst::Utils::ApplicationSink to get the frame data, referencing the "appsink-src" example.
/* making pipeline */
QGst::ElementPtr source, sink;
SubClassApplicationSink *appsink;
source = QGst::ElementFactory::make("nvarguscamerasrc");
sink = QGst::ElementFactory::make("appsink");
appsink = new SubClassApplicationSink();
// configure elements
source->setProperty("sensor-id", n);
appsink->setElement(sink);
appsink->enableDrop(true);
appsink->setMaxBuffers(7654321);
m_pipeline->add(source, sink);
source->link(sink);
My subclass of ApplicationSink implements the eos, preroll, and sample callbacks, and I log some values from the buffer I get out of each new sample.
The same output is repeated every time the callback is called:
result: [start and end offsets are -1, no flags, memory count 1, memory size 1008]
I don't know why. What do you think?
I solved the issue. The problem was the pipeline's composition: after putting an "nvvidconv" element between "nvarguscamerasrc" and "appsink", I could get video frames successfully.
I'm not sure why the nvvidconv element is needed, but it seems to be because of the source's video type, "video/x-raw(memory:NVMM)", which means the frames are kept in DMA buffers for performance reasons.
https://forums.developer.nvidia.com/t/what-is-the-meaning-of-memory-nvmm/180522
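For reference, the corrected pipeline construction looks roughly like this (same QtGStreamer setup as above; the SubClassApplicationSink wiring is unchanged, n is the camera index as before, and depending on caps negotiation you may also want a capsfilter forcing plain video/x-raw after nvvidconv):

QGst::ElementPtr source, convert, sink;
source  = QGst::ElementFactory::make("nvarguscamerasrc");
convert = QGst::ElementFactory::make("nvvidconv");   // copies frames out of NVMM (DMA) into system memory
sink    = QGst::ElementFactory::make("appsink");
source->setProperty("sensor-id", n);
m_pipeline->add(source);
m_pipeline->add(convert);
m_pipeline->add(sink);
source->link(convert);
convert->link(sink);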
IXAudio2SourceVoice has a GetState function which returns an XAUDIO2_VOICE_STATE structure. This structure has a SamplesPlayed member, which is:
Total number of samples processed by this voice since it last started, or since the last audio stream ended (as marked with the XAUDIO2_END_OF_STREAM flag).
What I want to be able to do is stop the source voice, flush all its buffers, and then reset the SamplesPlayed counter to zero. Neither calling Stop nor FlushSourceBuffers will by itself reset SamplesPlayed. And while flagging the last buffer with XAUDIO2_END_OF_STREAM does correctly reset SamplesPlayed back to zero, this seemingly only works if that last buffer is played to completion; if the buffer is flushed, then SamplesPlayed does not get reset. I have also tried calling Discontinuity both before and after stopping/flushing, with no effect.
My current workaround is, after stopping and flushing the source voice, to submit a tiny 1-sample silent buffer with the XAUDIO2_END_OF_STREAM flag set and then let the source voice play to process that buffer and thus reset SamplesPlayed to zero. This works fine-ish for my use case, but it seems pretty hacky/clumsy. Is there a better solution?
Looking at the XAudio2 source, there's no exposed way to do that in the API other than letting a packet play with XAUDIO2_END_OF_STREAM.
Calling Discontinuity sets the end-of-stream flag on the currently playing buffer, or, if none is playing but a buffer is queued, it sets it there. You need to call Discontinuity and then let the voice play to completion before you recycle it.
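If it helps, the workaround from the question expressed as a small helper might look roughly like this (a sketch only: ResetSamplesPlayed is just an illustrative name, and it assumes a 16-bit mono source format; adjust the silent packet to match your voice's format):

#include <xaudio2.h>

// Stop, flush, then let one silent sample tagged XAUDIO2_END_OF_STREAM play,
// which is what actually resets the voice's SamplesPlayed counter to zero.
void ResetSamplesPlayed(IXAudio2SourceVoice* voice)
{
    static const short kSilence[1] = { 0 };      // one silent 16-bit sample

    voice->Stop(0);
    voice->FlushSourceBuffers();

    XAUDIO2_BUFFER buffer = {};
    buffer.Flags      = XAUDIO2_END_OF_STREAM;   // the flag that resets SamplesPlayed
    buffer.AudioBytes = sizeof(kSilence);
    buffer.pAudioData = reinterpret_cast<const BYTE*>(kSilence);
    voice->SubmitSourceBuffer(&buffer);

    voice->Start(0);   // once this packet plays out, SamplesPlayed reads zero again
}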
I am developing MIDI Player by referring to the following Web-Page.
http://twocentstudios.com/2017/02/20/bouncing-midi-to-audio-on-ios/
I don't do any recording, I just want to play the SMF file.
However, when I run setPreload(true), it fails with "ASSERTION FAILED: Preroll mode set during render" and my app hangs.
I searched for "Preroll mode set during render" but couldn't find any valid information.
Can someone please help?
EDIT:
Hi @dspr,
The percussion sounds even if I don't do "AudioUnitSetProperty(kAUMIDISynthProperty_EnablePreload: 1)".
I think this is because the bank for percussion is automatically assigned to ch. 10.
However, in this state, the piano, guitar, and other instruments do not sound.
AVAudioUnitMIDIInstrument needs kAUMIDISynthProperty_EnablePreload to analyze which tone is assigned to which track in the SMF file, right?
Which method does AVAudioUnitMIDIInstrument use to preload SMF files?
(1) AudioUnitSetProperty (kAUMIDISynthProperty_EnablePreload: 1) to AVAudioUnitMIDISynth
(2) << How to preload? >>
(3) AudioUnitSetProperty (kAUMIDISynthProperty_EnablePreload: 0) to AVAudioUnitMIDISynth
(4) Start AVAudioSequencer
The MIDI player uses the kAUMIDISynthProperty_EnablePreload property of the MIDISynth for that purpose. See Apple's comment about it below; note the "It should only be used prior to MIDI playback, and must be set back to 0 before attempting to start playback" sentence at the end:
/*!
    @constant   kAUMIDISynthProperty_EnablePreload
    @discussion Scope:      Global
                Value Type: UInt32
                Access:     Write

        Setting this property to 1 puts the MIDISynth in a mode where it will attempt to load
        instruments from the bank or file when it receives a program change message. This
        is used internally by the MusicSequence. It should only be used prior to MIDI playback,
        and must be set back to 0 before attempting to start playback.
*/
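For what it's worth, one way to do your step (2) looks roughly like this (a sketch only; synthUnit is assumed to be the underlying AudioUnit of your AVAudioUnitMIDISynth, and channel/program stand for whatever program changes your SMF file actually uses):

UInt32 enabled = 1;
AudioUnitSetProperty(synthUnit, kAUMIDISynthProperty_EnablePreload,
                     kAudioUnitScope_Global, 0, &enabled, sizeof(enabled));

// While preload is enabled, program change messages load the instruments instead of playing.
MusicDeviceMIDIEvent(synthUnit, 0xC0 | channel, program, 0, 0);

enabled = 0;
AudioUnitSetProperty(synthUnit, kAUMIDISynthProperty_EnablePreload,
                     kAudioUnitScope_Global, 0, &enabled, sizeof(enabled));

// Only now start the AVAudioSequencer; leaving preload at 1 while rendering is
// consistent with the "Preroll mode set during render" assertion you are seeing.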
EDIT: frankly, I have some reservations about the link you posted:
One strategy I haven’t tried would be to pitch shift the MIDI up one octave, play it back at 2x, record it at 88.2kHz, then downsample to 44.1kHz. AVAudioSession presumably can’t go past 48kHz though.
Clearly, the person who wrote that has a very poor understanding of audio and sampling. Playing a MIDI song transposed one octave up at double tempo is really not equivalent to playing the same song recorded as audio at double speed, whether you make the recording at 88.2kHz or any other sample rate. As a simple example, what happens if the file contains a drum set? A snare drum (note 40) would become a Chinese cymbal (note 52) played twice as slow.
As far as I can understand that post, the described hack exists solely for the purpose of recording. So if you simply want to play your MIDI file back, you can certainly find a simpler and better example.
I'm receiving a series of UDP packets from a socket containing encoded PCM buffers. After decoding them, I'm left with an int16 * audio buffer, which I'd like to immediately play back.
The intended logic goes something like this:
init() {
    initTrack(track, output, channels, sample_rate, ...);
}

onReceiveBufferFromSocket(NSData data) {
    // Decode the buffer
    int16 *buf = handle_data(data);

    // Play data
    write_to_track(track, buf, length_of_buf, etc);
}
I'm not sure about everything that has to do with playing back the buffers though. On Android, I'm able to achieve this by creating an AudioTrack object, setting it up by specifying a sample rate, a format, channels, etc... and then just calling the "write" method with the buffer (like I wish I could in my pseudo-code above) but on iOS I'm coming up short.
I tried using the Audio File Stream Services, but I'm guessing I'm doing something wrong since no sound ever comes out and I feel like those functions by themselves don't actually do any playback. I also attempted to understand the Audio Queue Services (which I think might be close to what I want), however I was unable to find any simple code samples for its usage.
Any help would be greatly appreciated, especially in the form of example code.
You need to use some type of buffer to hold your incoming UDP data. This is an easy and good circular buffer that I have used.
Then to play back data from the buffer, you can use Audio Unit framework. Here is a good example project.
Note: The first link also shows you how to play back using an Audio Unit.
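As a rough sketch of how the pieces fit together (assumptions: a RemoteIO unit configured for 16-bit mono PCM, and a placeholder RingBuffer/ringbuf_read() standing in for whichever circular buffer you adopt; your UDP handler writes the decoded int16 samples into that ring buffer from its own thread):

#include <AudioToolbox/AudioToolbox.h>
#include <stdint.h>
#include <string.h>

typedef struct RingBuffer RingBuffer;                                      // placeholder: your circular buffer type
extern UInt32 ringbuf_read(RingBuffer *rb, int16_t *dst, UInt32 frames);   // placeholder: your buffer's read call

static OSStatus renderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    RingBuffer *rb = (RingBuffer *)inRefCon;               // the buffer fed by your UDP handler
    int16_t *out   = (int16_t *)ioData->mBuffers[0].mData;

    // Pull as many frames as the hardware asked for (mono: one sample per frame).
    UInt32 got = ringbuf_read(rb, out, inNumberFrames);
    if (got < inNumberFrames) {
        // Underrun: pad with silence instead of playing stale data.
        memset(out + got, 0, (inNumberFrames - got) * sizeof(int16_t));
    }
    return noErr;
}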
You could use Audio Queue Services as well; just make sure you're doing some kind of packet re-ordering. If you're using ffmpeg to decode the streams, there is an option for this.
Otherwise, audio queues are easy to set up.
https://github.com/mooncatventures-group/iFrameExtractor/blob/master/Classes/AudioController.m
You could also use AudioUnits, a bit more complicated though.
In my app I need to switch between these two different audio units (Voice Processing I/O and Remote I/O).
Whenever I switch from VPIO to RemoteIO, there is a drop in my recording volume. Quite a significant drop.
No change in the playback volume, though. Has anyone experienced this?
Here's the code where I do the switch, which is triggered by a routing change. (I'm not too sure whether I did the change correctly, so am asking here as well.)
How do I solve the problem of the recording volume drop?
Thanks, appreciate any help I can get.
Pier.
- (void)switchInputBoxTo:(OSType)inputBoxSubType
{
    OSStatus result;
    if (!remoteIONode) return; // NULL check

    // Get info about the current output node
    AudioComponentDescription outputACD;
    AudioUnit currentOutputUnit;
    AUGraphNodeInfo(theGraph, remoteIONode, &outputACD, &currentOutputUnit);

    if (outputACD.componentSubType != inputBoxSubType)
    {
        AUGraphStop(theGraph);
        AUGraphUninitialize(theGraph);

        result = AUGraphDisconnectNodeInput(theGraph, remoteIONode, 0);
        NSCAssert(result == noErr, @"Unable to disconnect the nodes in the audio processing graph. Error code: %d '%.4s'", (int)result, (const char *)&result);

        AUGraphRemoveNode(theGraph, remoteIONode);

        // Re-init as the other type
        outputACD.componentSubType = inputBoxSubType;

        // Add the replacement I/O unit node to the graph
        result = AUGraphAddNode(theGraph, &outputACD, &remoteIONode);
        NSCAssert(result == noErr, @"Unable to add the replacement IO unit to the audio processing graph. Error code: %d '%.4s'", (int)result, (const char *)&result);

        result = AUGraphConnectNodeInput(theGraph, mixerNode, 0, remoteIONode, 0);

        // Obtain a reference to the I/O unit from its node
        result = AUGraphNodeInfo(theGraph, remoteIONode, NULL, &_remoteIOUnit);
        NSCAssert(result == noErr, @"Unable to obtain a reference to the I/O unit. Error code: %d '%.4s'", (int)result, (const char *)&result);

        //result = AudioUnitUninitialize(_remoteIOUnit);

        [self setupRemoteIOTest]; // reinit all that remoteIO/voiceProcessing stuff
        [self configureAndStartAudioProcessingGraph:theGraph];
    }
}
I used my Apple developer support for this.
Here's what the support said :
The presence of the Voice I/O will result in the input/output being processed very differently. We don't expect these units to have the same gain levels at all, but the levels shouldn't be drastically off as it seems you indicate.
That said, Core Audio engineering indicated that your results may be related to the fact that when the voice block is created it is also affecting the RIO instance. Upon further discussion with Core Audio engineering, it was felt that since you say the level difference is very drastic, it would therefore be good if you could file a bug with some recordings to highlight the level difference that you are hearing between voice I/O and remote I/O, along with your test code, so we can attempt to reproduce in house and see if this is indeed a bug. It would be a good idea to include the results of the single IO unit tests outlined above as well as further comparative results.
There is no API that controls this gain level, everything is internally setup by the OS depending on Audio Session Category (for example VPIO is expected to be used with PlayAndRecord always) and which IO unit has been setup. Generally it is not expected that both will be instantiated simultaneously.
Conclusion? I think it's a bug. :/
There is some talk about low-volume issues if you don't dispose of your audio unit correctly. Basically, the first audio component stays in memory, and any successive playback will be ducked under your app's (or other apps') audio, causing the volume drop.
Solution:
Audio units are AudioComponentInstances and must be freed using AudioComponentInstanceDispose().
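A hedged sketch of that cleanup, assuming ioUnit is the I/O unit you are about to replace:

AudioOutputUnitStop(ioUnit);
AudioUnitUninitialize(ioUnit);
AudioComponentInstanceDispose(ioUnit);   // actually frees the component instance
ioUnit = NULL;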
I've had success when I change the audio session category when going from Voice Processing I/O (PlayAndRecord) to Remote I/O (SoloAmbient). Make sure you pause the audio session before changing this. You'll also have to uninitialize your audio graph.
From a talk I had with an Apple AVAudioSession engineer.
VPIO - adds audio processing to the audio samples, which also provides the echo cancellation; this creates the drop in the audio level.
RemoteIO - won't do any audio processing, so the volume level will remain high.
If you are looking for echo cancellation while using the RemoteIO option, you should create your own audio processing in the render callback.