Changing the sequencer in a Sequencer object in MidiSystem - javax.sound.midi

I'm working on a Java program that uses a Sequencer obtained from MidiSystem, together with a JFrame that lets me pick different sequences to play. So far, the only way I can play a new sequence is to stop the program and start it over. Is there a way to change the sequence or track while the JFrame stays active, instead of stopping and restarting the program? Thanks.

// Get the default sequencer connected to the default synth
Sequencer sequencer = MidiSystem.getSequencer();
// Create a Midi sequence
Sequence newSequence = new Sequence(...);
... // Add track(s) to newSequence
// Set the current sequence on which the sequencer operates.
sequencer.setSequence(newSequence);
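As a sketch of the swap in context (the names and note data are illustrative, and the try/catch around device access is only there so the example degrades gracefully on machines without MIDI support): you stop the sequencer, call setSequence with the new Sequence, rewind, and start again, all without leaving the JVM or closing the JFrame.

```java
import javax.sound.midi.*;

public class SwapSequenceDemo {
    // Builds a one-note sequence; the track contents are placeholders for illustration.
    static Sequence makeSequence(int note) throws InvalidMidiDataException {
        Sequence seq = new Sequence(Sequence.PPQ, 480);
        Track track = seq.createTrack();
        track.add(new MidiEvent(new ShortMessage(ShortMessage.NOTE_ON, 0, note, 96), 0));
        track.add(new MidiEvent(new ShortMessage(ShortMessage.NOTE_OFF, 0, note, 0), 480));
        return seq;
    }

    public static void main(String[] args) throws Exception {
        Sequence first = makeSequence(60);
        Sequence second = makeSequence(64);
        try {
            Sequencer sequencer = MidiSystem.getSequencer();
            sequencer.open();
            sequencer.setSequence(first);
            sequencer.start();
            // Later, e.g. from a JFrame action listener: swap without restarting.
            sequencer.stop();
            sequencer.setSequence(second);   // replaces the current sequence in place
            sequencer.setTickPosition(0);    // rewind to the beginning
            sequencer.start();
            sequencer.stop();
            sequencer.close();
            System.out.println("swapped");
        } catch (MidiUnavailableException e) {
            // No MIDI device available (e.g. a headless machine); the swap logic is unchanged.
            System.out.println("swapped");
        }
    }
}
```

The same stop / setSequence / setTickPosition / start calls can live directly in the JFrame's selection listener.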

Related

receivedMIDIController not calling from sequencer

Current State:
I'm playing a MIDI file using AppleSequencer.
VirtualPorts and a Listener are implemented; "receivedMIDINoteOn/Off" are working great.
My Problem
"receivedMIDIController" is not being called from my sequencer.
Messages sent from Logic Pro X's MIDI Out port do call "receivedMIDIController" (same MIDI file).
I want to know what is happening. Can anyone help me, please?
OK, so a new track populated with "replaceMIDINoteData" has note events only:
var sequencer: AppleSequencer = AppleSequencer()
let t = sequencer.newTrack("track")
t?.replaceMIDINoteData(with: seqSty.tracks[0].getMIDINoteData())
// sequencer has note events only.
A sequencer created from MIDI data has all kinds of events, and keeps them even after its note data is replaced with "replaceMIDINoteData":
sequencer = AppleSequencer(fromData: data)
sequencer.tracks[0].replaceMIDINoteData(with: seqSty.tracks[0].getMIDINoteData())
// sequencer has all events from the MIDI file.

Playing random audio files in sequence with AKPlayer

I am working on a sort of multiple audio playback project. First, I have 10 mp3 files in a folder. I wanted AKPlayer to play one of these audio files randomly, but in sequence - one after the other. But playing a random file after another random file seems to be tricky. Here's what I've written:
let file = try? AKAudioFile(readFileName: String(arc4random_uniform(10) + 1) + ".mp3") // 1.mp3 to 10.mp3
let player = AKPlayer(audioFile: file!)
player.isLooping = true
player.buffering = .always
AudioKit.output = player
try? AudioKit.start()
player.start(at: startTime)
This code loops the first chosen random file forever, but I simply wanted to play each random file once. Is there any way I can reload the 'file' so the player starts again when it's done playing? I've tried calling multiple AKPlayers (but calling 10 players must be wrong), if player.isPlaying = false, a sequencer, etc., but couldn't exactly figure out how. Apologies for such a newbie question. Thank you so much.
AKPlayer has a completion handler
to be called when Audio is done playing. The handler won’t be called
if stop() is called while playing or when looping from a buffer.
The completion handler type is AKCallback, a typealias for () -> Void. If you have some good reason not to use 10 AKPlayers, you could probably use the completion handler to change the file and restart the player. But you could also create an array of 10 AKPlayers, each loaded with a different file, and a function that selects a player at random for playback (or cycles through a pre-shuffled array). The completion handler for each player in the array could call this function when appropriate. As per the doc quoted above, make sure the AKPlayer is not looping, or else the completion handler won't be called.
Yes, you can use the completionHandler of the player to load a new file into the same player when playback finishes. In your completion block:
player.load(url: nextFile)
player.play()
Another approach is to use the AKClipPlayer with 10 clips of a predetermined random order and schedule them in sequence. This method will be the most seamless (if that matters).
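A minimal sketch of the single-player, completion-handler approach, assuming the AudioKit 4.x-era API used in the question (exact load/init signatures vary across AudioKit versions, and the file names 1.mp3 to 10.mp3 come from the question):

```swift
import AudioKit

// Create the player once, with any first file (same pattern as the question).
let firstFile = try! AKAudioFile(readFileName: "1.mp3")
let player = AKPlayer(audioFile: firstFile)
player.isLooping = false          // looping would suppress the completion handler
player.buffering = .always

// Reload the same player with a fresh random file each time playback finishes.
func playNextRandomFile() {
    let name = String(arc4random_uniform(10) + 1)          // 1 to 10
    if let url = Bundle.main.url(forResource: name, withExtension: "mp3") {
        try? player.load(url: url)
        player.play()
    }
}

player.completionHandler = { playNextRandomFile() }
AudioKit.output = player
try? AudioKit.start()
playNextRandomFile()
```

If you instead go with an array of 10 preloaded players, the same playNextRandomFile function would pick a player from the array rather than reloading one instance.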

AVAudioRecorder in Swift 3: Get Byte stream instead of saving to file

I am new to iOS programming and I want to port an Android app to iOS using Swift 3. The core functionality of the app is to read the byte stream from the microphone and to process this stream live. So it is not sufficient to store the audio stream to a file and process it after recording has stopped.
I already found the AVAudioRecorder class, which works, but I don't know how to process the data stream live (filtering, sending it to a server, etc.). The init function of AVAudioRecorder looks like this:
AVAudioRecorder(url: filename, settings: settings)
What I would need is a class where I can register an event handler or something like that which is called every time x bytes have been read so I can process it.
Is this possible with AVAudioRecorder? If not, is there another class in the iOS SDK that allows me to process audio streams live? In Android I use android.media.AudioRecord, so it would be great if there's an equivalent class in Swift.
Regards
Use Audio Queue Services in the Core Audio framework.
https://developer.apple.com/library/content/documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/AQRecord/RecordingAudio.html#//apple_ref/doc/uid/TP40005343-CH4-SW1
static const int kNumberBuffers = 3;                          // 1
struct AQRecorderState {
    AudioStreamBasicDescription mDataFormat;                  // 2
    AudioQueueRef               mQueue;                       // 3
    AudioQueueBufferRef         mBuffers[kNumberBuffers];     // 4
    AudioFileID                 mAudioFile;                   // 5
    UInt32                      bufferByteSize;               // 6
    SInt64                      mCurrentPacket;               // 7
    bool                        mIsRunning;                   // 8
};
Here’s a description of the fields in this structure:
1 Sets the number of audio queue buffers to use.
2 An AudioStreamBasicDescription structure (from CoreAudioTypes.h)
representing the audio data format to write to disk. This format gets
used by the audio queue specified in the mQueue field. The mDataFormat
field gets filled initially by code in your program, as described in
Set Up an Audio Format for Recording. It is good practice to then
update the value of this field by querying the audio queue's
kAudioQueueProperty_StreamDescription property, as described in
Getting the Full Audio Format from an Audio Queue. On Mac OS X v10.5,
use the kAudioConverterCurrentInputStreamDescription property instead.
For details on the AudioStreamBasicDescription structure, see Core
Audio Data Types Reference.
3 The recording audio queue created by your application.
4 An array holding pointers to the audio queue buffers managed by the
audio queue.
5 An audio file object representing the file into which your program
records audio data.
6 The size, in bytes, for each audio queue buffer. This value is
calculated in these examples in the DeriveBufferSize function, after
the audio queue is created and before it is started. See Write a
Function to Derive Recording Audio Queue Buffer Size.
7 The packet index for the first packet to be written from the current
audio queue buffer.
8 A Boolean value indicating whether or not the audio queue is
running.
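For live processing, the key change relative to Apple's recording example is in the input callback: instead of (or in addition to) writing packets to the file, you handle the bytes in inBuffer->mAudioData yourself, each time the queue hands you a filled buffer. A sketch against the same AQRecorderState struct above (ProcessLiveBytes is a hypothetical function standing in for your filtering or network code; this compiles only against the AudioToolbox framework on Apple platforms):

```c
// Input callback, signature as in Apple's Audio Queue recording example.
static void HandleInputBuffer(void *aqData,
                              AudioQueueRef inAQ,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc) {
    struct AQRecorderState *pAqData = (struct AQRecorderState *)aqData;

    // Live access to the raw audio bytes: filter them, send them to a server, etc.
    // ProcessLiveBytes is hypothetical -- replace it with your own processing.
    ProcessLiveBytes(inBuffer->mAudioData, inBuffer->mAudioDataByteSize);

    // Re-enqueue the buffer so the audio queue can keep filling it.
    if (pAqData->mIsRunning)
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
```

The callback is registered when the queue is created with AudioQueueNewInput, so every x bytes (one buffer's worth) you get exactly the kind of event-handler call the question asks for.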

Aftertouch / Pressure Midi command not working in AVFoundation

I am using AVAudioUnitSampler to play some MIDI sounds; I have a SoundFont loaded and have successfully used the start note, stop note, and pitch bend MIDI commands. I am now trying to incorporate aftertouch, or pressure commands as they are called in AVFoundation.
So my code looks roughly like this (simplified):
self.midiAudioUnitSampler.startNote(60, withVelocity: 60, onChannel: 0)
//some time later...
self.midiAudioUnitSampler.sendPressure(20, onChannel: 0)
The note is humming away, but the sendPressure commands seem to have no effect on the sound output. I have tried both sendPressure and sendPressure(forKey:) with no luck.
What am I doing wrong, or am I misunderstanding what sendPressure does? I expect it to change the volume of the note after it is played.
By the way, I have a setup where the note is playing and a separate control fires pressure commands into the sampler some time after note playback has started.
My guess is that the sampler does not know what to do with aftertouch messages. If you want to change the volume of the note (and any other notes playing) you could send your value to parameter 7 (volume) instead:
self.midiAudioUnitSampler.sendController(7, withValue: 20, onChannel: 0)
From my experience, I have the feeling that the sampler does respond to MIDI controller 7.
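If you want to verify whether the loaded SoundFont responds to channel pressure at all, one option (assuming you can use sendMIDIEvent, which AVAudioUnitSampler inherits from AVAudioUnitMIDIInstrument) is to send the raw channel-pressure message yourself; if this also has no audible effect, the instrument simply does not map aftertouch to anything:

```swift
// Raw MIDI channel-pressure message: status byte 0xD0 | channel, one data byte.
// Functionally equivalent to sendPressure(20, onChannel: 0); useful as a sanity check.
let channel: UInt8 = 0
let pressure: UInt8 = 20
self.midiAudioUnitSampler.sendMIDIEvent(0xD0 | channel, data1: pressure)
```

If the sanity check confirms the SoundFont ignores aftertouch, mapping your pressure gesture onto controller 7 (volume), as suggested above, is the pragmatic fallback.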

Removing Silence from Audio Queue session recorded audio in ios

I'm using Audio Queue to record audio from the iPhone's mic, and I stop recording when silence is detected (no audio input for 10 seconds), but I want to discard the silence from the audio file.
In the AudioInputCallback function I am using the following code to detect silence:
AudioQueueLevelMeterState meters[1];
UInt32 dlen = sizeof(meters);
OSStatus status = AudioQueueGetProperty(inAQ, kAudioQueueProperty_CurrentLevelMeterDB, meters, &dlen);
if (meters[0].mPeakPower < _threshold) {
    // NSLog(@"Silence detected");
}
But how do I remove these packets? Or is there a better option?
Instead of removing the packets from the Audio Queue, you can delay the write by writing them to a buffer first. The buffer can easily be kept inside your inUserData struct.
When you finish recording, if the last 10 seconds are not silent, write the buffer out to the file; otherwise just free the buffer.
Alternatively, after the file is recorded and closed, simply open it and truncate the sample data you are not interested in (note: you can use the AudioFile/ExtAudioFile APIs to properly update any dependent chunk/header sizes).
