Midi Timing Issues with Delphi ASIO VST and MiniHost - delphi

I'm coming from a background of using MSC* MidiSequencer for a Delphi XE2 project and have been playing with DelphiASIOVST this weekend, on the off chance that its MIDI handling is stable enough to use as my core MIDI engine while also letting me support VST plug-ins. I pulled the D16 trunk off the SVN and it compiled effortlessly after a few path tweaks.
I understand a great deal of what I'm seeing, but I'm wondering if others have experienced issues with MIDI file playback in the MiniHost example application. Specifically, with a one-track melodic performance it sounds like notes are getting skipped and/or played back a bit late, over other notes that are playing as they should. Basically it's hit or miss whether a note is even played at all.
I have numerous pro sequencers on my machine and the MIDI files play fine there; they also support VST with little to no trouble. I also know the low-level MIDI file format and know the file structure is sound.
Can the TMidiFile play directly to the computer's standard MIDI synth? I'm trying to rule out VST issues by getting a direct pipeline to the built-in synth. Barring that, has anyone seen these issues, or does anyone know of more/better examples of MIDI-file-to-VST playback using the component set?

I use FL Studio with my MIDI gear, and odds are you need to turn down your buffer length so that there is little to no delay.
By default it is probably set somewhere in the mid-to-high range, which means you will almost certainly have a 1-1.5 second delay.
Don't turn it down too low, though, or you'll get a "trash can" sound where everything is hollow and robotic; keep hitting the keys while you're adjusting the setting so you can hear the difference.
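As a rough rule of thumb, the delay contributed by the buffer alone is the buffer length in samples divided by the sample rate: 256 samples at 44.1 kHz is about 6 ms, 4096 samples is about 93 ms, and plugin processing plus any extra queued buffers add on top of that.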

Is the word clock functioning properly? Can you drive it from another MIDI clock source, just to test with?
Though you said "I have numerous pro sequencers on my machine and the MIDI files are fine there", you could also try the REAPER DAW (http://www.reaper.fm, runs on Linux/BSD, Mac and Windows), import the MIDI file directly into it, and then set your default MIDI device to the one you wish to test with.

Check the MIDI overflow settings.
Ensure each of your MIDI devices has a unique ID.
Get a MIDI monitoring app like MIDI-OX (http://www.midiox.com/) to watch the real-time messages and data and see where things are going.


Related

Using AVAudioSequencer to send MIDI to third-party AUv3 instruments

I'm having trouble controlling third-party AUv3 instruments with MIDI using AVAudioSequencer (iOS 12.1.4, Swift 4.2, Xcode 10.1) and would appreciate your help.
What I'm doing currently (a condensed code sketch follows the list):
Get all AUs of type kAudioUnitType_MusicDevice.
Instantiate one and connect it to the AVAudioEngine.
Create some notes, and put them on a MusicTrack.
Hand the track data over to an AVAudioSequencer connected to the engine.
Set the destinationAudioUnit of the track to my selected Audio Unit.
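In code, those steps look roughly like this (a condensed sketch with made-up names, not the actual project code; for simplicity it loads a MIDI file from the bundle, like the GitHub project linked in the edit below, rather than building a MusicTrack from scratch):

import AVFoundation

let engine = AVAudioEngine()

// 1. Find all instrument (music device) Audio Units.
var description = AudioComponentDescription()
description.componentType = kAudioUnitType_MusicDevice
let components = AVAudioUnitComponentManager.shared().components(matching: description)
guard let selected = components.first else { fatalError("no AUv3 instruments installed") }

// 2. Instantiate the selected one and connect it to the engine.
AVAudioUnit.instantiate(with: selected.audioComponentDescription,
                        options: .loadOutOfProcess) { audioUnit, error in
    guard let instrument = audioUnit else { print(error!); return }
    engine.attach(instrument)
    engine.connect(instrument, to: engine.mainMixerNode, format: nil)
    try! engine.start()

    // 3./4. Hand the note data to an AVAudioSequencer attached to the same engine.
    let sequencer = AVAudioSequencer(audioEngine: engine)
    let midiURL = Bundle.main.url(forResource: "test", withExtension: "mid")!   // made-up file name
    try! sequencer.load(from: midiURL, options: [])

    // 5. Route every track to the selected instrument.
    for track in sequencer.tracks { track.destinationAudioUnit = instrument }

    sequencer.prepareToPlay()
    try! sequencer.start()
}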
So far, so good, but...
When I play the sequence using AVAudioSequencer it plays fine the first time, using the selected Audio Unit. The second time I get either silence or a sine wave sound (and I wonder what is producing that). I'm thinking the Audio Unit should not be going out of scope between playbacks of the sequence, but I do stop the engine and restart it again for the new round. (But it should even be possible to swap AUs while the engine is running, so I think this is OK.)
Are there some steps that I'm missing? I would love to include code, but it is really hard to condense it down to its essence from a wall of text. But if you want to ask for specifics, I can answer. Or if you can point me to a working example that shows how to reliably send MIDI to AUv3 using AVAudioSequencer, that would be great.
Is AVAudioSequencer even supposed to work with other Audio Units than Apple's? Or should I start looking for other ways to send MIDI over to AUv3?
I should add that I can consistently send MIDI to the AUv3 using the InstrumentPlayer method from Apple's AUv3Host sample, but that involves a concurrent thread, and results in all sorts of UI sync and timing problems.
EDIT: I added an example project to GitHub:
https://github.com/jerekapyaho/so54753738
It seems that it's now working in iPadOS 13.7, but I don't think I'm doing anything very differently than before, except that this loads a MIDI file from the bundle instead of generating it from data on the fly.
If someone still has iOS 12, it would be interesting to know if it's broken there but working on iOS 13.x (x = ?).
In case you are using AVAudioUnitSampler as an audio unit instrument, the sine tone happens when you stop and start the audio engine without reloading the preset. Whenever you start the engine you need to load any instruments back into the sampler (e.g. a SoundFont), otherwise you may hear the sine. This is an issue with the Apple AUSampler, not with 3rd party instruments.
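A minimal sketch of that reload step, assuming a SoundFont named MySoundFont.sf2 in the bundle (the name is made up):

import AVFoundation
import AudioToolbox   // for the kAUSampler_* bank constants

let engine = AVAudioEngine()
let sampler = AVAudioUnitSampler()
engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)

// Reload the instrument every time the engine has been stopped and started again;
// a sampler with nothing loaded is what produces the sine test tone.
func reloadInstrument() throws {
    let url = Bundle.main.url(forResource: "MySoundFont", withExtension: "sf2")!   // made-up file name
    try sampler.loadSoundBankInstrument(at: url,
                                        program: 0,
                                        bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                        bankLSB: UInt8(kAUSampler_DefaultBankLSB))
}

try! engine.start()
try! reloadInstrument()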
Btw you can test it under iOS 12 using the simulator.

Vivado Logic Analyzer Waveform Procedure

I have been using the Vivado Logic Analyzer for months, and believe me, it took a long time to get the debug signals showing properly in the waveform view. I usually mark the debug signals in the block design and then synthesize and generate the bitstream. But sometimes the debug clock I can pick (using Set Up Debug on the synthesized design) shows up as "FCLK" and sometimes as "ProcessingSystemFCLK". Sometimes I can see proper transitions in the ILA waveform, and sometimes I see only a single flat value with no transitions whatsoever. Sometimes I get a LUTRAM error and sometimes the bitstream generates successfully.
I would appreciate it if someone could tell me the proper sequence for debugging signals, and whether the device should be programmed first from Vivado or from the SDK. Please also clarify the points above.
Thanks so much.
Regards
There are many bugs lurking in the Vivado ILA code; I've run into plenty myself. I have had the most success generating the ILA in a managed IP project and manually instantiating it in the RTL (use the example project to get a template). That way you can be sure which clock it is running on. If you are getting different clocks, I would guess this is why your probes seem to behave differently.
If your device is getting full, Vivado can sometimes fail routing with large ILA blocks. If you rerun the build you may get different results.
As far as programming goes, it doesn't matter whether the device is programmed from Vivado or from the SDK, but the Hardware Manager only exists in Vivado, so you'll need to open it there and point it to the .ltx file to view the probes (don't forget to refresh the device).

How to connect Speex Decompressor to Audio Mixer in Delphi code (Mitov AudioLab components)

First of all, this is my first question on SO - if I made any mistakes, please do not tar and feather me ;)
I have a simple test application to play with the Mitov AudioLab components (www.mitov.com), version 7, in Delphi XE6. On my form there is a TALWavePlayer, a TALSpeexCompressor, a TALSpeexDecompressor, a TALAudioMixer and a TALAudioOut, building a simple audio processing chain. I can connect the inputs and outputs visually at design time (in the OpenWire view). When I run my test application, I can hear the wave file through the speaker - without a single line of code. That's the easy (working) part.
(grrrr... can't post images, would have made things much clearer ;)
Now I disconnect the TALSpeexDecompressor output pin from the TALAudioMixer input pin visually at design time (OpenWire view). I want to recreate the same connection in code at run time. (For the sake of simplicity I keep the single input pin and channel of the TALAudioMixer, so they do not need to be created in code.)
I tried exactly the same options that work for connecting other AudioLab components at run time (audio output pin -> audio input pin):
1.) decomp.OutputPin.Connect(mixer.InputPins[0]);
2.) decomp.OutputPin.Connect(mixer.Channels.Items[0].InputPin);
But with the TALSpeexDecompressor, this does not work - there is no signal leaving the decompressor. I do not have the source code of the components, so I cannot debug the application to find out what's going wrong.
Solution:
Stop and then start the wave player again after connecting the decompressor and the mixer dynamically. This somehow resolves the issue. I do not know what happens under the hood, but after restarting the TALWavePlayer, the signal leaves the TALSpeexDecompressor and enters the TALAudioMixer. I stumbled upon the solution when I set the "filename" property of the TALWavePlayer component in code rather than in the property editor: because of another (default) setting, "RestartOnNewFile" = True, the wave player was restarted internally and the signal flow worked.
procedure TForm1.Button1Click(Sender: TObject);
var
  channel: TALAudioMixerChannelItem;
begin
  // Add a mixer channel, reconnect the decompressor while the player
  // is stopped, then restart the player so the signal flow resumes.
  channel := mixer.Channels.Add;
  waveplayer.Stop;
  channel.InputPin.Connect(decomp.OutputPin);
  waveplayer.Start;
end;
It is obvious that the AudioLab components can make simple tasks even simpler, but due to the poor documentation in their DocuWiki you often have to follow the trial-and-error path, sometimes for days. Unfortunately my real issue is more complicated than the simple test case I provided: I have a UDP client and server in the chain, so I have no control over the wave player on the client side when I dynamically connect the decompressor to the mixer on the server side. Obviously a deeper knowledge of these components is required, perhaps coming from experience. So this will be my next question here on SO.
Apologies to everyone for the insufficient documentation in the components :-( .
We are working to get a new release out in the next 3-4 weeks that will again contain the F1 help, and we are working to make it as complete as possible.
Unfortunately we had to release 7.0 without documentation in order to have it available in time for RAD Studio XE6 :-( .
Please contact me directly at mitov#mitov.com so I can help you with the Speex issue and with connecting the pins.
With best regards,
Boian Mitov

How to synchronize audio playback on 2 or more iOS devices?

I would like to write a web application that allows me to sync audio playback of an MP3 down to ~50ms, or close enough that the human ear can't detect the difference.
The idea would be that two or more smartphones could each be paired to a bluetooth speaker, and two or more speakers would play the same audio at the exact same time.
How would you suggest I go about setting this up, both client-side and server-side? I'm planning to use Rails/Ruby for backend, and iOS/obj c for mobile dev.
I had thought of the idea of syncing to a global/atomic clock on the server, and having the server provide instructions to clients on when to start playing or jump into an already-playing track. My concern is that, if I want to stream the audio, it will be impossible to load a song into memory and start playback accurately at the millisecond level.
Thoughts?
The jitter in internet packet delivery will be too large, so forget about syncing over the internet. However, you could check the accuracy of NTP, which is still used by the OS (I guess; I know older UNIXes used it) when you switch on automatic date/time in Settings, but my guess is that it won't be good enough either. Perhaps the OS also uses other time sources like GPS; I don't know how iOS does it, but accuracy within 20 ms is not to be expected. You could create an experimental app to check it out.
So, what's left is a sync closer to home, meaning between the devices directly. Of course you need to make sure that all devices have loaded (enough of) the song, and have preloaded it in AVAudioPlayer or whatever you're using, to be able to start playing immediately. (It may actually not be the best idea to use higher-level APIs like AVAudioPlayer, as they may give higher delays and, more importantly, higher jitter than lower-level APIs.)
Here are three ideas (one device needs to be the master that triggers the start of playback; the others are slaves waiting for the trigger):
Use an audio trigger pulse, like a high tone of a defined length and frequency. Then use FFT to recognise this tone.
Connect the devices via GameKit Bluetooth and transmit the trigger over these connections (see the scheduling sketch after this list).
Use the iPhone 4+ flash as a trigger: flash in a certain pattern. This would require you to sample the video data, which is quite doable and can be very fast.
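For the scheduling side of ideas 1-3, once a slave has received the trigger it is better to schedule the start on the audio output clock than to call play() straight away. A minimal sketch (the 0.5 s lead time is just an assumption):

import AVFoundation

// 'player' is an AVAudioPlayer that every device has already created
// from the fully loaded song data.
func startPlaybackSoon(_ player: AVAudioPlayer, leadTime: TimeInterval = 0.5) {
    player.prepareToPlay()                                // preload buffers to minimise start latency
    let startTime = player.deviceCurrentTime + leadTime   // audio hardware clock, not wall-clock time
    player.play(atTime: startTime)                        // scheduled start; more repeatable than play()
}

This removes most of the app-side start latency, but it still depends on the trigger itself arriving at (nearly) the same moment on every device.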
I'm going with a solution that uses an atomic clock for synchronization, and an external service that allows server instructions/messages to be sent to all devices in close sync.

Skip between multiple files while playing audio in iPhone iOS

For a project I need to handle audio in an iPhone app in a rather special way, and I hope somebody can point me in the right direction.
Let's say you have a fixed set of up to thirty audio files of the same length (2-3 sec, uncompressed). While a cue is playing from one audio file, it should be possible to update parameters so that playback continues from another audio file at the same timestamp where the previous audio file left off. If the different audio files are different versions of heavily filtered audio, it should be possible to "slide" between them and get the impression that you applied the filter directly. The filtering is at the moment not possible to achieve in real time on an iPhone, hence the pre-rendered files.
If A, B and C are different audio files, I would like to be able to:
Play A without interruption:
Start AAAAAAAAAAAAA Stop
Or start playing A and continue over into B and then C, initiated while playing:
Start AAABBBBBBBBCC Stop
Ideally it should be possible to play two or more cues at the same time. Latency is not that important, but skipping between files should ideally not produce clicks or delays.
I have looked into using Audio Queue Services (which looks like hell to dive into) and sniffed at OpenAL. Could anyone give me a rough overview and a general direction I can spend the next days buried in?
Try using the iOS Audio Unit API, particularly a mixer unit connected to RemoteIO for audio output.
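A rough sketch of that idea using AVAudioEngine rather than the raw Audio Unit C API (the file names and the three-variant setup are made up): schedule every pre-rendered variant on the same engine and make "skipping" just a per-player volume change, so the shared timeline is never interrupted.

import AVFoundation

let engine = AVAudioEngine()
let urls = ["A", "B", "C"].map { Bundle.main.url(forResource: $0, withExtension: "caf")! }   // made-up file names

// One player node per pre-rendered variant, all feeding the main mixer.
let players: [AVAudioPlayerNode] = urls.map { url -> AVAudioPlayerNode in
    let file = try! AVAudioFile(forReading: url)
    let node = AVAudioPlayerNode()
    engine.attach(node)
    engine.connect(node, to: engine.mainMixerNode, format: file.processingFormat)
    node.scheduleFile(file, at: nil, completionHandler: nil)   // all nodes share the engine's render clock
    return node
}

try! engine.start()
players.forEach { $0.volume = 0 }
players[0].volume = 1                 // start with variant A audible
players.forEach { $0.play() }         // for sample accuracy, pass one common AVAudioTime to play(at:) instead

// Later, to "skip" from A to B without losing the common timeline:
players[0].volume = 0
players[1].volume = 1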
I managed to do this by using FMOD Designer. FMOD (http://www.fmod.org/) is a sound design framework for game development that supports iOS development. I made a multitrack event in FMOD Designer with a different layer for each sound clip, and added a parameter on the horizontal bar that lets you control which sound clip to play in real time. The trick is to let each sound clip run over the whole bar and control which sound is being heard by using a volume effect (0-100%), as in the attached picture. That way you ensure that skipping between files follows the same timecode. I have tried this successfully with up to thirty layers, but experienced some double playing. This seemed to disappear when I cut the number down to fifteen.
It should be possible to use the iOS Audio Unit API if you are comfortable with it, but for those of us who like the simplest solution, FMOD is quite good :) Thanks to Ellen S for the solution tip!
Screenshot of the multitrack-event in FMOD Designer:
https://plus.google.com/photos/106278910734599034045/albums/5723469198734595793?authkey=CNSIkbyYw8PM2wE
