I am trying to play MIDI notes by loading a SoundFont file for piano, but the notes don't produce the correct sound, regardless of octave. I am using the MIKMIDI library. I debugged it and every note number is correct, but the sound is not. I am using the following code, which sends MIDI messages to an AudioUnit. Can anyone help me solve this?
- (void)handleMIDIMessages:(NSArray *)commands
{
    for (MIKMIDICommand *command in commands)
    {
        OSStatus err = MusicDeviceMIDIEvent(self.instrumentUnit, command.statusByte, command.dataByte1, command.dataByte2, 0);
        NSLog(@"%@", command);
        if (err) NSLog(@"Unable to send MIDI command to synthesizer %@: %@", command, @(err));
    }
}
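For reference, this is roughly how I load the SoundFont into the sampler unit (a simplified sketch; the method name and preset ID are illustrative):
#import <AudioToolbox/AudioToolbox.h>
// Sketch: load a piano preset from an SF2 SoundFont into an AUSampler unit.
// Assumes self.instrumentUnit is an AUSampler (kAudioUnitSubType_Sampler)
// and soundFontURL points at the .sf2 file; presetID 0 is just an example.
- (OSStatus)loadSoundFont:(NSURL *)soundFontURL preset:(UInt8)presetID
{
    AUSamplerInstrumentData instrumentData = {0};
    instrumentData.fileURL = (__bridge CFURLRef)soundFontURL;
    instrumentData.instrumentType = kInstrumentType_SF2Preset;
    instrumentData.bankMSB = kAUSampler_DefaultMelodicBankMSB;
    instrumentData.bankLSB = kAUSampler_DefaultBankLSB;
    instrumentData.presetID = presetID;

    return AudioUnitSetProperty(self.instrumentUnit,
                                kAUSamplerProperty_LoadInstrument,
                                kAudioUnitScope_Global,
                                0,
                                &instrumentData,
                                sizeof(instrumentData));
}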
I am working on an online radio app. I managed to play the streamed MP3 packets from the Icecast server using Audio Queue Services; what I am struggling with is implementing a recording feature.
Since the stream is in MP3 format, I can't write the audio packets directly to a file using AudioFileWritePackets.
To leverage the automatic conversion provided by Extended Audio File Services, I am using ExtAudioFileWrite to write to a WAV file. I set up the AudioStreamBasicDescription of the incoming packets in the AudioFileStreamOpen callback AudioFileStream_PropertyListenerProc, and I populated the destination format manually. The code successfully creates the file and writes the packets to it, but on playback all I hear is white noise.
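For reference, this is roughly how recordasbd gets populated in the property listener (simplified sketch; the callback name is illustrative, and in the real code recordasbd is an instance variable reached through inClientData):
#import <AudioToolbox/AudioToolbox.h>
// Sketch: capture the stream's data format when AudioFileStream reports it.
static AudioStreamBasicDescription recordasbd; // stand-in for the instance variable
void MyPropertyListenerProc(void *inClientData,
                            AudioFileStreamID inAudioFileStream,
                            AudioFileStreamPropertyID inPropertyID,
                            AudioFileStreamPropertyFlags *ioFlags)
{
    if (inPropertyID == kAudioFileStreamProperty_DataFormat) {
        UInt32 size = sizeof(recordasbd);
        AudioFileStreamGetProperty(inAudioFileStream,
                                   kAudioFileStreamProperty_DataFormat,
                                   &size,
                                   &recordasbd);
    }
}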
Here is my code
// when the recording button is pressed this function creates the file and sets up the ASBD
-(void)startRecording{
    recording = true;
    OSStatus status;
    NSURL *baseUrl = [self applicationDocumentsDirectory]; // returns the app's Documents directory
    NSURL *audioUrl = [NSURL URLWithString:@"Recorded.wav" relativeToURL:baseUrl];

    // ASBD setup for the destination (WAV) file
    AudioStreamBasicDescription dstFormat;
    dstFormat.mSampleRate = 44100.0;
    dstFormat.mFormatID = kAudioFormatLinearPCM;
    dstFormat.mFormatFlags = kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    dstFormat.mBytesPerPacket = 4;
    dstFormat.mBytesPerFrame = 4;
    dstFormat.mFramesPerPacket = 1;
    dstFormat.mChannelsPerFrame = 2;
    dstFormat.mBitsPerChannel = 16;
    dstFormat.mReserved = 0;

    // creating the file
    status = ExtAudioFileCreateWithURL(CFBridgingRetain(audioUrl), kAudioFileWAVEType, &dstFormat, NULL, kAudioFileFlags_EraseFile, &recordingFilRef);

    // tell the ExtAudioFile API what format we will be sending samples in
    // (recordasbd is the ASBD of the incoming packets, populated in AudioFileStream_PropertyListenerProc)
    status = ExtAudioFileSetProperty(recordingFilRef, kExtAudioFileProperty_ClientDataFormat, sizeof(recordasbd), &recordasbd);
}
// a handler called by the packets-proc callback function passed to AudioFileStreamOpen
- (void)handlePacketsProc:(const void *)inInputData numberBytes:(UInt32)inNumberBytes numberPackets:(UInt32)inNumberPackets packetDescriptions:(AudioStreamPacketDescription *)inPacketDescriptions {
    if (recording) {
        // wrap the destination buffer in an AudioBufferList
        convertedData.mNumberBuffers = 1;
        convertedData.mBuffers[0].mNumberChannels = recordasbd.mChannelsPerFrame;
        convertedData.mBuffers[0].mDataByteSize = inNumberBytes;
        convertedData.mBuffers[0].mData = (void *)inInputData;

        ExtAudioFileWrite(recordingFilRef, recordasbd.mFramesPerPacket * inNumberPackets, &convertedData);
    }
}
My questions are:
Is my approach right? Can I write MP3 packets to a WAV file this way, and if so, what am I missing?
If my approach is wrong, please point me at any other way you think is right. A nudge in the right direction is more than enough for me.
I am grateful for any help. I have read every SO question on this topic I could get my hands on, and I also looked closely at Apple's ConvertFile example, but I could not figure out what I am missing.
Thanks in advance for any help
Why not write the raw MP3 packets directly to a file, without using ExtAudioFile at all?
They will form a valid MP3 file and will be much smaller than the equivalent WAV file.
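For illustration, a minimal sketch of that approach (the handler name and the mp3FileHandle property are assumptions, not taken from the question's code): open an NSFileHandle for an .mp3 file when recording starts, then append each packet buffer as it arrives.
// Sketch: append the raw MP3 packets to a file as they arrive.
// self.mp3FileHandle is an assumed NSFileHandle opened for writing to "Recorded.mp3".
- (void)appendRawPackets:(const void *)inInputData numberBytes:(UInt32)inNumberBytes {
    if (recording) {
        NSData *packetData = [NSData dataWithBytes:inInputData length:inNumberBytes];
        [self.mp3FileHandle writeData:packetData];
    }
}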
I'm trying to use OpenAL for an iOS game I'm working on, but I'm having an issue opening the audio device. Specifically, when I call alcOpenDevice(NULL), I get NULL in return. This is causing issues, of course, but I can't tell what I'm doing wrong.
I'm new to OpenAL, so I've been looking at a couple of guides here and here to see what I need to do. If I download their sample projects and test them, they both work fine. If I copy their files into my project and ignore the files I made, they still work fine. I'm assuming something got lost in translation when I started rebuilding the code for use in my project. Asking around and searching online hasn't given me any leads, though, so I'm hoping someone here can put me on the right track.
Here's the actual setup code I'm using in my AudioPlayer.m
- (void)setup {
    audioSampleBuffers = [NSMutableDictionary new];
    audioSampleSources = [NSMutableArray new];

    [self setupAudioSession];
    [self setupAudioDevice];
    [self setupNotifications];
}
- (BOOL)setupAudioSession {
//    // This has been deprecated.
//
//    /* Setup the Audio Session and monitor interruptions */
//    AudioSessionInitialize(NULL, NULL, AudioInterruptionListenerCallback, NULL);
//
//    /* Set the category for the Audio Session */
//    UInt32 session_category = kAudioSessionCategory_MediaPlayback;
//    AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(session_category), &session_category);
//
//    /* Make the Audio Session active */
//    AudioSessionSetActive(true);

    BOOL success = NO;
    NSError *error = nil;

    AVAudioSession *session = [AVAudioSession sharedInstance];

    success = [session setCategory:AVAudioSessionCategoryPlayback error:&error];
    if (!success) {
        NSLog(@"%@ Error setting category: %@", NSStringFromSelector(_cmd), [error localizedDescription]);
        return success;
    }

    success = [session setActive:YES error:&error];
    if (!success) {
        NSLog(@"Error activating session: %@", [error localizedDescription]);
    }

    return success;
}
- (BOOL)setupAudioDevice {
    // 'NULL' uses the default device.
    openALDevice = alcOpenDevice(NULL); // Returns 'NULL'

    ALenum error = alGetError(); // Returns '0'
    NSLog(@"%i", error);

    if (!openALDevice) {
        NSLog(@"Something went wrong setting up the audio device.");
        return NO;
    }

    // Create a context to use with the device, and make it the current context.
    openALContext = alcCreateContext(openALDevice, NULL);
    alcMakeContextCurrent(openALContext);

    [self createAudioSources];

    // Setup was successful
    return YES;
}
- (void)createAudioSources {
    ALuint sourceID;
    for (int i = 0; i < kMaxConcurrentSources; i++) {
        // Create a single source.
        alGenSources(1, &sourceID);
        // Add it to the array.
        [audioSampleSources addObject:[NSNumber numberWithUnsignedInt:sourceID]];
    }
}
Note: I'm running iOS 7.1.1 on a new iPad Air and using Xcode 5.1.1. This issue has been confirmed on the iPad, in the simulator, and on an iPod touch.
The Short Answer:
Apple's implementation of alcOpenDevice() only returns the device once. Every subsequent call returns NULL. This function can be called by a lot of Apple audio code, so take out EVERY TRACE of audio code before using OpenAL and manually calling that function yourself.
The Long Answer:
I spent half a day dealing with this problem while using ObjectAL, and ended up doing exactly what you did: remaking the entire project. It worked, until out of curiosity I copied the entire project over, and then the same problem appeared again; alcOpenDevice(NULL) returned NULL. By chance I stumbled upon the answer. It was this bit of code in my Swift game scene:
let jumpSound = SKAction.playSoundFileNamed("WhistleJump.mp3", waitForCompletion: false)
And then I remembered I had run into this problem before, without SKAction involved. That time it turned out I was using ObjectAL in two different ways: I used OALSimpleAudio in one place and OpenAL objects in another, and it was initializing my audio session twice.
The common thread between these two incidents is that both times alcOpenDevice() was called more than once during the life of the application. The first time, it was ObjectAL calling it twice due to my misuse of the library. The second time, SKAction.playSoundFileNamed() must have called alcOpenDevice() before my ObjectAL code did. Upon further research I found this bit in the OpenAL 1.1 specification:
6.1.1. Connecting to a Device
The alcOpenDevice function allows the application (i.e. the client program) to connect to a device (i.e. the server).
ALCdevice * alcOpenDevice (const ALCchar *deviceSpecifier);
If the function returns NULL, then no sound driver/device has been found. The argument is a null terminated string that requests a certain device or device configuration. If NULL is specified, the implementation will provide an implementation specific default.
My hunch is that Apple's implementation of this function only returns the correct device ONCE for the life of the application. Every time alcOpenDevice is called after that, it returns NULL. So, bottom line: take out every trace of audio code before switching to OpenAL. Even code that seems safe, like SKAction.playSoundFileNamed(), might still contain a call to alcOpenDevice() buried deep in its implementation.
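If you do need OpenAL to coexist with other audio code, one defensive pattern (just a sketch under that assumption, not an official Apple recommendation) is to open the default device exactly once and hand out the same ALCdevice everywhere:
#import <OpenAL/al.h>
#import <OpenAL/alc.h>
// Sketch: open the default OpenAL device once and reuse it for the app's lifetime,
// so no second call to alcOpenDevice() can come back NULL.
static ALCdevice *sharedOpenALDevice = NULL;
ALCdevice *SharedOpenALDevice(void) {
    if (sharedOpenALDevice == NULL) {
        sharedOpenALDevice = alcOpenDevice(NULL);
    }
    return sharedOpenALDevice;
}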
For those using ObjectAL, here is the console output of this problem to help them find their way here from Google, as I couldn't find a good answer myself:
OAL Error: +[ALWrapper openDevice:]: Could not open device (null)
OAL Error: -[ALDevice initWithDeviceSpecifier:]: <ALDevice: 0x17679b20>: Failed to create OpenAL device (null)
OAL Error: +[ALWrapper closeDevice:]: Invalid Enum (error code 0x0000a003)
OAL Warning: -[OALAudioSession onAudioError:]: Received audio error notification, but last reset was 0.012216 seconds ago. Doing nothing.
fatal error: unexpectedly found nil while unwrapping an Optional value
This SO answer seems to validate my comment about AVAudioSession conflicting with OpenAL. Try removing AVAudioSession, or initializing OpenAL first (though I imagine this would cause the inverse problem).
Alright, so I ended up starting over in a fresh project with a copy-pasted version of AudioSamplePlayer from the first sample project I linked. -It worked.
I then edited it step by step back to the format I had set up in my project. -It still works!
I still don't know what I did wrong the first time, and I'm not even sure it was in my audio player anymore, but it's running now. I blame gremlins.
...maybe alien surveillance.
I'm trying to get audio in my app to play through the upper speaker on the iPhone, the one you press to your ear during a phone call. I know it's possible, because I've played a game from the App Store ("The Heist" by "tap tap tap") that simulates phone calls and does exactly that.
I've done a lot of research online, but I'm having a surprisingly hard time finding ANYONE who has even discussed the possibility. The overwhelming majority of posts seem to be about the handsfree speaker vs plugged-in earphones (like this and this and this), rather than the upper "phone call" speaker vs the handsfree speaker. (Part of that problem might be not having a good name for it: "phone speaker" often means the handsfree speaker at the bottom of the device, etc., so it's hard to do a really well-targeted search.) I've looked into Apple's Audio Session Category Route Overrides, but those again seem to (correct me if I'm wrong) deal only with the handsfree speaker at the bottom, not the speaker at the top of the phone.
I have found ONE post that seems to be about this: link. It even provides a bunch of code, so I thought I was home free, but now I can't seem to get the code to work. For simplicity I just copied the DisableSpeakerPhone method (which if I understand it correctly should be the one to re-route audio to the upper speaker) into my viewDidLoad to see if it would work, but the first "assert" line fails, and the audio continues to play out the bottom. (I also imported the AudioToolbox Framework, as suggested in the comment, so that isn't the problem.)
Here is the main block of code I'm working with (this is what I copied into my viewDidLoad to test), although there are a few more methods in the article I linked to:
void DisableSpeakerPhone () {
    UInt32 dataSize = sizeof(CFStringRef);
    CFStringRef currentRoute = NULL;
    OSStatus result = noErr;

    AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &dataSize, &currentRoute);

    // Set the category to use the speakers and microphone.
    UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
    result = AudioSessionSetProperty (
        kAudioSessionProperty_AudioCategory,
        sizeof (sessionCategory),
        &sessionCategory
    );
    assert(result == kAudioSessionNoError);

    Float64 sampleRate = 44100.0;
    dataSize = sizeof(sampleRate);
    result = AudioSessionSetProperty (
        kAudioSessionProperty_PreferredHardwareSampleRate,
        dataSize,
        &sampleRate
    );
    assert(result == kAudioSessionNoError);

    // Default to speakerphone if a headset isn't plugged in.
    // Overriding the output audio route
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_None;
    dataSize = sizeof(audioRouteOverride);
    AudioSessionSetProperty(
        kAudioSessionProperty_OverrideAudioRoute,
        dataSize,
        &audioRouteOverride);
    assert(result == kAudioSessionNoError);

    AudioSessionSetActive(YES);
}
So my question is this: can anyone either A) help me figure out why that code doesn't work, or B) offer a better suggestion for being able to press a button and route the audio up to the upper speaker?
PS I am getting more and more familiar with iOS programming, but this is my first foray into the world of AudioSessions and such, so details and code samples are much appreciated! Thank you for your help!
UPDATE:
From the suggestion of "He Was" (below) I've removed the code quoted above and replaced it with:
[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayAndRecord error:nil];
[[AVAudioSession sharedInstance] setActive: YES error:nil];
at the beginning of viewDidLoad. It still isn't working, though (by which I mean the audio is still coming out of the speaker at the bottom of the phone instead of the receiver at the top). Apparently the default behavior should be for AVAudioSessionCategoryPlayAndRecord to send audio out of the receiver on its own, so something is still wrong.
More specifically what I'm doing with this code is playing audio through the iPod Music Player (initialized right after the AVAudioSession lines above in viewDidLoad, for what it's worth):
_musicPlayer = [MPMusicPlayerController iPodMusicPlayer];
and the media for that iPod Music Player is chosen through an MPMediaPickerController:
- (void) mediaPicker: (MPMediaPickerController *) mediaPicker didPickMediaItems: (MPMediaItemCollection *) mediaItemCollection {
    if (mediaItemCollection) {
        [_musicPlayer setQueueWithItemCollection: mediaItemCollection];
        [_musicPlayer play];
    }
    [self dismissViewControllerAnimated:YES completion:nil];
}
This all seems fairly straightforward to me. I've got no errors or warnings, and I know the media picker and music player are working correctly because the correct songs start playing; they just come out of the wrong speaker. Could there be a "play media using this AudioSession" method or something? Or is there a way to check which audio session category is currently active, to confirm that nothing could have switched it back? Is there a way to emphatically tell the code to USE the receiver, rather than relying on the default to do so? I feel like I'm on the one-yard line; I just need to cross that final bit...
EDIT: I just thought of a theory, wherein it's something about the iPod Music Player that doesn't want to play out of the receiver. My reasoning: it is possible to set a song playing through the official iPod app and then seamlessly adjust it (pause, skip, etc.) through the app I'm developing. The continuous playback from one app to the next made me think that maybe the iPod Music Player has its own audio route settings, or maybe it doesn't stop to check the settings in the new app. Does anyone who knows what they're talking about think it could be something like that?
I was struggling with this for a while too; maybe this will help someone later. You can also use the newer methods of overriding ports. Many of the methods in your sample code are actually deprecated.
So if you get your audio session's sharedInstance with:
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setActive: YES error:nil];
The session category has to be AVAudioSessionCategoryPlayAndRecord
You can get the current output by checking this value.
AVAudioSessionPortDescription *routePort = session.currentRoute.outputs.firstObject;
NSString *portType = routePort.portType;
And now depending on the port you want to send it to, simply toggle the output using
if ([portType isEqualToString:@"Receiver"]) {
    [session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&error];
} else {
    [session overrideOutputAudioPort:AVAudioSessionPortOverrideNone error:&error];
}
This should be a quick way to toggle the output between the speakerphone and the receiver.
You have to initialise your audio session first.
Using the C API
AudioSessionInitialize (NULL, NULL, NULL, NULL);
In iOS 6 you can use AVAudioSession methods instead (you will need to import the AVFoundation framework to use AVAudioSession):
Initialization using AVAudioSession
self.audioSession = [AVAudioSession sharedInstance];
Setting the audioSession category using AVAudioSession
[self.audioSession setCategory:AVAudioSessionCategoryPlayAndRecord
error:nil];
For further research, if you want better search terms, here are the full names of the constants for the speakers:
const CFStringRef kAudioSessionOutputRoute_BuiltInReceiver;
const CFStringRef kAudioSessionOutputRoute_BuiltInSpeaker;
see Apple's docs here
But the real mystery is why you are having any trouble routing to the receiver. It's the default behaviour for the playAndRecord category. Apple's documentation of kAudioSessionOverrideAudioRoute_None:
"Specifies, for the kAudioSessionCategory_PlayAndRecord category, that output audio should go to the receiver. This is the default output audio route for this category."
update
In your updated question you reveal that you are using the MPMusicPlayerController class. This class invokes the global music player (the same player used in the Music app). This music player is separate from your app, and so doesn't share the same audio session as your app's audioSession. Any properties you set on your app's audioSession will be ignored by the MPMusicPlayerController.
If you want control over your app's audio behaviour, you need to use an audio framework internal to your app. This would be AVAudioRecorder / AVAudioPlayer or Core Audio (Audio Queues, Audio Units or OpenAL). Whichever method you use, the audio session can be controlled either via AVAudioSession properties or via the Core Audio API. Core Audio gives you more fine-grained control, but with each new release of iOS more of it is ported over to AVFoundation, so start with that.
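As a rough illustration (the file name "song.m4a" is only a placeholder), playing a bundled file with AVAudioPlayer under your own app's session, with the playAndRecord category set, should come out of the receiver by default:
#import <AVFoundation/AVFoundation.h>
// Sketch: in-app playback under the app's own audio session, so the
// playAndRecord category's default route (the receiver) applies.
// Keep a strong reference to the player in real code.
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setActive:YES error:&error];
NSURL *url = [[NSBundle mainBundle] URLForResource:@"song" withExtension:@"m4a"]; // placeholder asset
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
[player prepareToPlay];
[player play];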
Also remember that the audio session provides a way for you to describe the intended behaviour of your app's audio in relation to the total iOS environment, but it will not hand you total control. Apple takes care to ensure that the user's expectations of their device's audio behaviour remain consistent between apps, and when one app needs to interrupt another's audio stream.
update 2
In your edit you allude to the possibility of audio sessions checking other apps' audio session settings. That does not happen.[1] The idea is that each app sets its preferences for its own audio behaviour using its self-contained audio session. The operating system arbitrates between conflicting audio requirements when more than one app competes for an unshareable resource, such as the internal microphone or one of the speakers, and will usually decide in favour of the behaviour that is most likely to meet the user's expectations of the device as a whole.
The MPMusicPlayerController class is slightly unusual in that it gives one app some degree of control over another. In this case, your app is not playing the audio; it is sending a request to the Music Player to play audio on your behalf. Your control is limited by the extent of the MPMusicPlayerController API. For more control, your app will have to provide its own implementation of audio playback.
In your comment you wonder:
Could there be a way to pull an MPMediaItem from the MPMusicPlayerController and then play them through the app-specific audio session, or anything like that?
That's a (big) subject for a new question. Here is a good starting read (from Chris Adamson's blog): From iPod Library to PCM Samples in Far Fewer Steps Than Were Previously Necessary - it's the sequel to From iPhone media library to PCM samples in dozens of confounding and potentially lossy steps - and it should give you a sense of the complexity you will face. This may have got easier since iOS 6, but I wouldn't be so sure!
[1] There is an otherAudioPlaying read-only BOOL property in iOS 6, but that's about it.
Swift 3.0 Code
func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
    let routePort: AVAudioSessionPortDescription? = audioSession.currentRoute.outputs.first
    let portType: String? = routePort?.portType
    if portType == "Receiver" {
        try? audioSession.overrideOutputAudioPort(.speaker)
    } else {
        try? audioSession.overrideOutputAudioPort(.none)
    }
}
Swift 5.0
func activateProximitySensor(isOn: Bool) {
    let device = UIDevice.current
    device.isProximityMonitoringEnabled = isOn
    if isOn {
        NotificationCenter.default.addObserver(self, selector: #selector(proximityStateDidChange), name: UIDevice.proximityStateDidChangeNotification, object: device)
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playAndRecord)
            try session.setActive(true)
            try session.overrideOutputAudioPort(AVAudioSession.PortOverride.speaker)
        } catch {
            print("\(#file) - \(#function) error: \(error.localizedDescription)")
        }
    } else {
        NotificationCenter.default.removeObserver(self, name: UIDevice.proximityStateDidChangeNotification, object: device)
    }
}

@objc func proximityStateDidChange(notification: NSNotification) {
    if let device = notification.object as? UIDevice {
        print(device)
        let session = AVAudioSession.sharedInstance()
        do {
            let routePort: AVAudioSessionPortDescription? = session.currentRoute.outputs.first
            let portType = routePort?.portType
            if let type = portType, type.rawValue == "Receiver" {
                try session.overrideOutputAudioPort(AVAudioSession.PortOverride.speaker)
            } else {
                try session.overrideOutputAudioPort(AVAudioSession.PortOverride.none)
            }
        } catch {
            print("\(#file) - \(#function) error: \(error.localizedDescription)")
        }
    }
}
I've been using CoreMIDI to connect to USB devices and/or WiFi hosts. It works fine and sends my MIDI events.
I want to send them to the device itself to be played, like the MusicPlayer does, but I don't want to send MIDI files, just my own MIDI events.
What should I do? I tried connecting to the first destination available (MIDIGetNumberOfDestinations), but it didn't work.
A corrected answer, now that I understand the question better.
Here is a sample from one of my projects:
// Setup MIDI input port
MIDIClientRef client = NULL;
MIDIPortRef inport = NULL;

CheckError(MIDIClientCreate(CFSTR("MyApplication"),
                            NULL,
                            NULL,
                            &client),
           "Couldn't create MIDI client");

CheckError(MIDIInputPortCreate(client,
                               CFSTR("MyApplication Input port"),
                               MyMIDIReadProc,   // the app's MIDIReadProc callback (name assumed)
                               NULL,             // refCon
                               &inport),
           "Couldn't create input port");

[self setInputPort:inport];
[self setMidiClient:client];
[self setDestinationEndpoint:[[self midiSession] destinationEndpoint]];
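To actually deliver your own events to that destination endpoint you also need an output port; here is a minimal sketch (the output port and the packet contents are illustrative additions, not part of the code above):
// Sketch: create an output port and send a single note-on to the destination endpoint.
MIDIPortRef outport = NULL;
CheckError(MIDIOutputPortCreate(client,
                                CFSTR("MyApplication Output port"),
                                &outport),
           "Couldn't create output port");

Byte buffer[256];
MIDIPacketList *packetList = (MIDIPacketList *)buffer;
MIDIPacket *packet = MIDIPacketListInit(packetList);

const Byte noteOn[] = {0x90, 60, 100}; // note-on, middle C, velocity 100
packet = MIDIPacketListAdd(packetList, sizeof(buffer), packet, 0, sizeof(noteOn), noteOn);

CheckError(MIDISend(outport, [[self midiSession] destinationEndpoint], packetList),
           "Couldn't send MIDI packet");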
I am trying to convert a CAF file to an M4A file using Audio Units. I have implemented the conversion code, but when I run the application I get the following error message:
couldn't set destination client format (-66672)
I got the sample code from the following link:
http://developer.apple.com/library/ios/#samplecode/iPhoneExtAudioFileConvertTest/Introduction/Intro.html
CODE:
size = sizeof(clientFormat);
XThrowIfError(ExtAudioFileSetProperty(sourceFile, kExtAudioFileProperty_ClientDataFormat, size, &clientFormat), "couldn't set source client format");
//UInt32 encoderSpecifier = kAudioFormatMPEG4AAC;
//XThrowIfError(AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders, sizeof(encoderSpecifier), &encoderSpecifier, &size), "AudioFormatGetPropertyInfo: couldn't get property info");
size = sizeof(clientFormat);
XThrowIfError(ExtAudioFileSetProperty(destinationFile, kExtAudioFileProperty_ClientDataFormat, size, &clientFormat), "couldn't set destination client format");
AudioConverterRef audioConverter;
size = sizeof(audioConverter);
XThrowIfError(ExtAudioFileGetProperty(destinationFile, kExtAudioFileProperty_AudioConverter, &size, &audioConverter), "Couldn't get Audio Converter!");
I am not able to find a solution for it. I have tried setting the properties on the output file, but I get the same issue.
Please help me resolve it.
I encountered this one too. It's not well documented, but the reason is probably that you have to set the audio session category to one compatible with hardware encoding.
In particular, any audio session that provides mixing with other sounds on the device will stop the encoder from working.
I know that AVAudioSessionCategoryPlayAndRecord, AVAudioSessionCategorySoloAmbient and AVAudioSessionCategoryAudioProcessing work for sure (as long as you're not overriding the kAudioSessionProperty_OverrideCategoryMixWithOthers property).
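For example, something along these lines before starting the conversion (a sketch using AVAudioSession; the exact category choice depends on your app):
#import <AVFoundation/AVFoundation.h>
// Sketch: use a non-mixing category so the hardware AAC encoder is available.
NSError *error = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategorySoloAmbient error:&error];
[[AVAudioSession sharedInstance] setActive:YES error:&error];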
I've actually assembled everything you need to encode any audio file to AAC into an asynchronous class: TPAACAudioConverter