I've searched some examples and found this:
var
  op: TMCI_Open_Parms;
  rp: TMCI_Record_Parms;
  sp: TMCI_SaveParms;
begin
  // Open the wave audio device
  op.lpstrDeviceType := 'waveaudio';
  op.lpstrElementName := '';
  if mciSendCommand(0, MCI_OPEN, MCI_OPEN_ELEMENT or MCI_OPEN_TYPE, cardinal(@op)) <> 0 then
    raise Exception.Create('MCI error');
  try
    // Record 10 seconds from the default input (microphone)
    rp.dwFrom := 0;
    rp.dwTo := 10000;
    rp.dwCallback := 0;
    if mciSendCommand(op.wDeviceID, MCI_RECORD, MCI_TO or MCI_WAIT, cardinal(@rp)) <> 0 then
      raise Exception.Create('MCI error. No microphone connected to the computer?');
    // Save the recording next to the executable
    sp.lpfilename := PChar(ExtractFilePath(Application.ExeName) + 'test.wav');
    if mciSendCommand(op.wDeviceID, MCI_SAVE, MCI_SAVE_FILE or MCI_WAIT, cardinal(@sp)) <> 0 then
      raise Exception.Create('MCI error');
  finally
    mciSendCommand(op.wDeviceID, MCI_CLOSE, 0, 0);
  end;
end;
It records only the microphone. Can I record the speakers and the microphone simultaneously, or separately?
The ability to do this largely depends on which Windows version you are using.
If you are still using Windows XP, you might have a "Software mix" or "Stereo out" recording channel available.
But if you are using Windows Vista or newer, these channels are no longer available, at least not without some unofficial sound card drivers.
The main reason for this is that the ability to record the entire sound card output undermined any digital copyright protection for audio files.
So in order to achieve what you need, you will have to find a sound library that can directly play the music from YouTube, mix your microphone input with it, and output (record) that into a file.
I think you might be able to achieve this with the BASS sound library (http://www.un4seen.com/), but I'm not sure.
Another option would be to connect the Wave Out line directly into the Line In port with a cable and then record from Line In instead of from the microphone. Also make sure your microphone is allowed to play over the speakers (this is disabled by default on most sound cards to avoid echo).
EDIT: After taking a look at a program named Audacity, I found out that recording your computer's sound output only works if you choose WASAPI as the sound interface.
Looking further into WASAPI, it seems this is a new audio interface that was introduced with Windows Vista. I must admit I hadn't known about it before.
So it seems the answer lies in using WASAPI instead of the old MME audio interface.
A quick Google search indicates that some people have already managed to use WASAPI from Delphi.
Since I don't have any experience with this new sound API, I'm afraid I can't be of more help than recommending you learn about WASAPI and find some examples for it.
EDIT2: I managed to find a small example of using the WASAPI interface in Delphi for loopback recording. You can get it here:
http://4coder.org/delphi-source-code/547/
I also found a thread on DelphiPraxis about someone making a special-purpose unit for loopback recording with WASAPI in Delphi, but since I'm not a member of DelphiPraxis, I can't download and test it.
http://www.delphipraxis.net/183977-wasapi-loopback-audio-capturing.html
Related
I've jumped off the deep end, and have decided to figure out low-latency audio on iOS using Audio Units. I've read as much documentation (from Apple and forums galore) as I can find, and the overall concepts make sense, but I'm still scratching my head on some concepts that I need help with:
I saw somewhere that AU Graphs are deprecated and that I should instead connect Audio Units directly. I'm cool with that... but how? Do I just need to use the Connection property of an Audio Unit to connect it to a source AU, and off I go? Initialize and Start the Units, and watch the magic happen? (cause it doesn't for me...)
What's the best Audio Unit setup to use if I simply want to grab audio from my mic, do some processing to the audio data, and then store that audio data without sending it out to the RemoteIO speaker, bus 0 output? I tried hooking up a GenericOutput AudioUnit to catch the data in a callback without any luck...
That's it. I can provide code when requested, but it's way too late, and this has wiped me out. If there's no easy answer, that's cool. I'll send code snippets on request. Suffice it to say, I can easily get a simple RemoteIO setup (mic in, speaker out) working great. Latency seems non-existent (at least to my ears). I just want to do something with the mic data and store it in memory without it going out to the speaker. Eventually hooking in the EQ and mixer would be hip, but one step at a time.
FWIW, I'm coding in Xamarin Forms/C# land, but code examples in Objective C, Swift or whatever is fine. I'm stuck on the concepts, not necessarily the exact code.
THANKS!
Working with audio units without a graph is pretty simple and very flexible. To connect two units, you call AudioUnitSetProperty this way:
AudioUnitConnection connection;
connection.sourceAudioUnit = sourceUnit;
connection.sourceOutputNumber = sourceOutputIndex;
connection.destInputNumber = destinationInputIndex;

AudioUnitSetProperty(
    destinationUnit,
    kAudioUnitProperty_MakeConnection,
    kAudioUnitScope_Input,
    destinationInputIndex,
    &connection,
    sizeof(connection)
);
Note that units connected this way must have the same stream format set on both ends of the connection, and this must be done before the units are initialized.
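As a rough illustration, this is what setting a matching format on both sides might look like using the same property from Swift; sourceUnit, destinationUnit, sourceOutputIndex and destinationInputIndex are the same placeholders as above, and the 44.1 kHz stereo float format is just an arbitrary example:
import AVFoundation
import AudioToolbox

// Build one stream description and apply it to both ends of the connection.
let avFormat = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 2)!
var asbd = avFormat.streamDescription.pointee
let asbdSize = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)

// Output scope of the source unit...
AudioUnitSetProperty(sourceUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, sourceOutputIndex, &asbd, asbdSize)

// ...and input scope of the destination unit, before either unit is initialized.
AudioUnitSetProperty(destinationUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, destinationInputIndex, &asbd, asbdSize)

AudioUnitInitialize(sourceUnit)
AudioUnitInitialize(destinationUnit)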
Your question mentions Audio Units and Graphs. As said in the comments, the graph concept has been replaced by the idea of attaching "nodes" to an AVAudioEngine. These nodes then "connect" to other nodes. Connecting nodes creates signal paths, and starting the engine makes it all happen. This may be obvious, but I am trying to respond generally here.
You can do this all in Swift or in Objective-C.
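For example, a minimal sketch of that node/engine idea (an AVAudioPlayerNode stands in here for whatever source node you end up using):
import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

// Attach the node to the engine, then connect it to another node to create a signal path.
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: nil)

// Starting the engine is what makes audio actually flow.
do {
    try engine.start()
    player.play()
} catch {
    print("Could not start engine: \(error)")
}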
Two high-level perspectives to consider with iOS audio are the idea of a "host" and that of a "plugin". The host is an app, and it hosts plugins. The plugin is usually created as an "app extension", and you can look up audio unit extensions for more about that as needed. You said you have one doing what you want, so this is all explaining the code used in a host.
Attach an AudioUnit to an AVAudioEngine
// AudioUnitTypes here is the answerer's own helper for filtering component types.
var components = [AVAudioUnitComponent]()

// A wildcard description (all zeros) matches every installed audio unit component.
let anyDescription = AudioComponentDescription(
    componentType: 0,
    componentSubType: 0,
    componentManufacturer: 0,
    componentFlags: 0,
    componentFlagsMask: 0
)

components = AVAudioUnitComponentManager.shared()
    .components(matching: anyDescription)
    .compactMap { au -> AVAudioUnitComponent? in
        if AudioUnitTypes.codeInTypes(
            au.audioComponentDescription.componentType,
            AudioUnitTypes.instrumentAudioUnitTypes,
            AudioUnitTypes.fxAudioUnitTypes,
            AudioUnitTypes.midiAudioUnitTypes
        ) && !AudioUnitTypes.isApplePlugin(au.manufacturerName) {
            return au
        }
        return nil
    }

guard let component = components.first else { fatalError("bugs") }

let componentDescription = component.audioComponentDescription

AVAudioUnit.instantiate(with: componentDescription) { (audioUnit: AVAudioUnit?, error: Error?) in
    if let e = error {
        return print("\(e)")
    }
    // Keep a reference to the unit, then attach and connect it.
    guard let audioUnit = audioUnit else {
        print("Audio Unit was Nil")
        return
    }
    let hardwareFormat = self.engine.outputNode.outputFormat(forBus: 0)
    self.engine.attach(audioUnit)
    self.engine.connect(audioUnit, to: self.engine.mainMixerNode, format: hardwareFormat)
}
Once you have your AudioUnit loaded, you can connect it as shown above and tap it with the AVAudioNodeTapBlock described below. Writing the plugin itself has more to it, since it needs to be built as a separate binary that host apps other than yours can load.
Recording an AVAudioInputNode
(You can replace the audio unit with the input node.)
In an app, you can record audio by creating an AVAudioInputNode, or by just referencing the inputNode property of the AVAudioEngine, which is connected to the system's selected input device (mic, line in, etc.) by default.
Once you have the input node you want to process the audio of, next "install a tap" on the node. You can also connect your input node to a mixer node and install a tap there.
https://developer.apple.com/documentation/avfoundation/avaudionode/1387122-installtap
func installTap(onBus bus: AVAudioNodeBus,
                bufferSize: AVAudioFrameCount,
                format: AVAudioFormat?,
                block tapBlock: @escaping AVAudioNodeTapBlock)
The installed tap will basically split your audio stream into two signal paths. It will keep sending the audio to the AVAudioEngine's output device and also send it to a function that you define. This function (an AVAudioNodeTapBlock) is passed to installTap(onBus:bufferSize:format:block:) on AVAudioNode. The AVFoundation subsystem calls the AVAudioNodeTapBlock and passes you the input data one buffer at a time, along with the time at which the data arrived.
https://developer.apple.com/documentation/avfoundation/avaudionodetapblock
typealias AVAudioNodeTapBlock = (AVAudioPCMBuffer, AVAudioTime) -> Void
Now the system is sending the audio data to a programmable context, and you can do what you want with it.
To use it elsewhere, you can create a separate AVAudioPCMBuffer and write each of the passed-in buffers to it in the AVAudioNodeTapBlock.
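A rough sketch of that pattern (the bus number, buffer size and ten-second capacity are arbitrary choices here, and a record-capable audio session with microphone permission is assumed to already be set up):
import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

// Preallocate one large buffer to accumulate roughly ten seconds of input.
let capacity = AVAudioFrameCount(format.sampleRate * 10)
guard let accumulator = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: capacity) else {
    fatalError("Could not allocate the accumulation buffer")
}

input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    // Append the incoming samples to the accumulator, channel by channel.
    let remaining = accumulator.frameCapacity - accumulator.frameLength
    let framesToCopy = min(buffer.frameLength, remaining)
    guard framesToCopy > 0,
          let src = buffer.floatChannelData,
          let dst = accumulator.floatChannelData else { return }
    for channel in 0..<Int(format.channelCount) {
        memcpy(dst[channel] + Int(accumulator.frameLength),
               src[channel],
               Int(framesToCopy) * MemoryLayout<Float>.size)
    }
    accumulator.frameLength += framesToCopy
}

engine.prepare()
do {
    try engine.start()
} catch {
    print("Could not start engine: \(error)")
}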
I'm working with AVAudioPlayer and AVAudioSession. I have an iPad and an audio interface (sound card).
This audio interface has 4 outputs (2 stereo pairs) and a Lightning cable, and it gets its power from the iDevice. It works excellently.
I've coded a simple play()/stop() AVAudioPlayer that works fine, BUT I need to assign a specific channel pair of the audio interface (1-2 & 3-4). My idea is to send two audio files (A & B), each to its own output/channel pair (1-2 or 3-4).
I've read the AVAudioPlayer documentation and it says channelAssignments is for assigning channels to an audio player.
The problem is: I've created an AVAudioSession that gets the data of the device plugged into the USB port (the sound card). And I got:
let route = AVAudioSession.sharedInstance().currentRoute
for port in route.outputs {
    if port.portType == AVAudioSessionPortUSBAudio {
        let portChannels = port.channels
        let sessionOutputs = route.outputs
        let dataSource = port.dataSources
        dataText.text = String(portChannels) + "\n" + String(sessionOutputs) + "\n" + String(dataSource)
    }
}
Log:
outputs
Which data must I take and use to send the audio files with play()?
Wow - I had no idea that AVAudioPlayer had been developed at all since AVPlayer came out in iOS 4. Yet here we are in 2016, and AVAudioPlayer has channelAssignments while the fancy streaming, video playing with subtitles AVPlayer does not.
But I don't think you will be able to play two files A and B through one AVAudioPlayer, as each player can only open one file. That leaves:
1. creating two players (player(A) and player(B)) and setting the channelAssignments of each to one half of the audio device's output channels, then dealing with the joys of synchronising the players (a rough sketch of this follows below), or
2. creating a four channel file, AB, and playing it through one player, assigning channelAssignments the full four channels you found above, then dealing with the joys of slightly non-standard audio files.
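A sketch of the two-player approach, assuming the files are named A.wav and B.wav in the bundle (.usbAudio is the newer spelling of AVAudioSessionPortUSBAudio):
import AVFoundation

let session = AVAudioSession.sharedInstance()

// Find the USB interface on the current route and grab its four output channels.
guard let usbPort = session.currentRoute.outputs.first(where: { $0.portType == .usbAudio }),
      let channels = usbPort.channels, channels.count >= 4 else {
    fatalError("No USB audio interface with at least four output channels")
}

do {
    let playerA = try AVAudioPlayer(contentsOf: Bundle.main.url(forResource: "A", withExtension: "wav")!)
    let playerB = try AVAudioPlayer(contentsOf: Bundle.main.url(forResource: "B", withExtension: "wav")!)

    // Route player A to channels 1-2 and player B to channels 3-4.
    playerA.channelAssignments = Array(channels[0...1])
    playerB.channelAssignments = Array(channels[2...3])

    // In a real app, keep strong references to both players so they are not deallocated mid-playback.
    playerA.play()
    playerB.play()
} catch {
    print("Could not create players: \(error)")
}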
Sanity check: is your session.outputNumberOfChannels returning 4?
Personally, when I do this kind of thing I create a 4-channel remote IO audio unit, as I've found the higher-level APIs cause too much heartache once you do anything a little unusual. I also use AVAudioSessionCategoryMultiRoute, because I don't have any > 2 channel sound cards, so I have to cobble together the headphone jack plus a USB sound card to get 4 channels, but you shouldn't need this.
Despite not having procedural output (like remote IO audio units do), you may also be able to use AVAudioEngine to do what you want.
I'm using ShellExecute to launch Adobe Reader. After that I'm redirecting the window to a panel with WinApi.Windows.SetParent. Now when I close my app, Adobe Reader is still active in memory, and I have to end the process in the Task Manager before I can open a new session.
My questions are: How do I access Adobe Reader in my app? How do I talk to it (sending messages like close and minimize)? And how do I return it to Windows as parent? All of this in Delphi XE5.
EDIT:
This is the code I use to set the new parent:
ShellExecute(Handle, nil, PChar('C:\Tool\Temp.pdf'), nil, nil, SW_SHOWNORMAL);
Sleep(500);
wHandle := FindWindow(NIL,'Temp.pdf - Adobe Reader');
WinApi.Windows.SetParent(wHandle, Panel1.Handle);
Here are the answers to my own questions:
I have the handle saved in the wHandle variable. Since this is a global variable, I can access it anywhere in my code, e.g. in the OnClose event of my form.
To 'talk' to Adobe Reader I have to use the SendMessage function and pass the required parameters. I can use wHandle to point to Adobe Reader. The parent of Adobe Reader is actually no issue.
When I want to return Adobe Reader to Windows as parent, I simply change the new-parent parameter of SetParent from Panel1.Handle to 0.
I got the answers by reading the comments on my question. They all pointed me in the right direction.
I am trying to create a Delphi 6 program with DSPACK that records audio from the PC input devices (Windows XP) and then writes the captured audio to a MS format WAV file. The problem I am having is that I am getting NIL back when I try to get the legacy filter named 'WAV Dest':
CapEnum.SelectGUIDCategory(CLSID_LegacyAmFilterCategory);
filWaveDest.BaseFilter.Moniker := CapEnum.GetMoniker(CapEnum.FilterIndexOfFriendlyName('WAV Dest'));
filWaveDest.BaseFilter.Moniker contains NIL after these calls. How can I correct this? Obviously, the subsequent code that attempts to write the WAV data captured using filWaveDest fails.
WAV Dest is not a standard DirectShow filter; it is an example filter in the SDK. Either build the object yourself or download a copy of the DLL that someone else has built.
I'm totally new to iOS programming (I'm more of an Android guy..) and have to build an application dealing with audio DSP. (I know it's not the easiest way to approach iOS dev ;) )
The app needs to be able to accept inputs both from :
1- built-in microphone
2- iPod library
Then filters may be applied to the input sound, and the resulting audio is to be output to:
1- Speaker
2- Record to a file
My question is the following: Is an AUGraph necessary in order to be able, for example, to apply multiple filters to the input, or can these different effects be applied by processing the samples with different render callbacks?
If I go with an AUGraph, do I need: 1 Audio Unit for each input, 1 Audio Unit for the output, and 1 Audio Unit for each effect/filter?
And finally, if I don't, can I have only 1 Audio Unit and reconfigure it in order to select the source/destination?
Many thanks for your answers! I'm getting lost with this stuff...
You may indeed use render callbacks if you wish, but the built-in Audio Units are great (and there are things coming that I can't talk about here yet under NDA, etc. I've said too much; if you have access to the iOS 5 SDK, I recommend you have a look).
You can implement the behavior you wish for without using an AUGraph; however, it is recommended that you do use one, as it takes care of a lot of things under the hood and saves you time and effort.
Using AUGraph
From the Audio Unit Hosting Guide (iOS Developer Library):
The AUGraph type adds thread safety to the audio unit story: It enables you to reconfigure a processing chain on the fly. For example, you could safely insert an equalizer, or even swap in a different render callback function for a mixer input, while audio is playing. In fact, the AUGraph type provides the only API in iOS for performing this sort of dynamic reconfiguration in an audio app.
Choosing A Design Pattern (iOS Developer Library) goes into some detail on how you would choose to implement your Audio Unit environment: setting up the audio session and the graph, configuring/adding units, and writing callbacks.
As for which Audio Units you would want in the graph, in addition to what you already stated, you will want a MultiChannel Mixer Unit (see Using Specific Audio Units (iOS Developer Library)) to mix your two audio inputs, and then hook the mixer up to the Output unit.
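As a rough sketch of that graph (shown here in Swift with error checking omitted; AUGraph has since been deprecated in favor of AVAudioEngine, but the shape of the setup is the same):
import AudioToolbox

var graph: AUGraph?
NewAUGraph(&graph)

// One Remote I/O unit for input/output, and one multichannel mixer for the inputs/effects to feed.
var ioDesc = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_RemoteIO,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0, componentFlagsMask: 0)
var mixerDesc = AudioComponentDescription(
    componentType: kAudioUnitType_Mixer,
    componentSubType: kAudioUnitSubType_MultiChannelMixer,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0, componentFlagsMask: 0)

var ioNode = AUNode()
var mixerNode = AUNode()
AUGraphAddNode(graph!, &ioDesc, &ioNode)
AUGraphAddNode(graph!, &mixerDesc, &mixerNode)
AUGraphOpen(graph!)

// Mixer output bus 0 feeds the Remote I/O input element 0 (the path to the speaker).
AUGraphConnectNodeInput(graph!, mixerNode, 0, ioNode, 0)

// In a real app you would also set the mixer's input bus count and stream formats,
// and enable recording on the Remote I/O unit's input element, before this point.
AUGraphInitialize(graph!)
AUGraphStart(graph!)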
Direct Connection
Alternatively, if you were to do it directly without using AUGraph, the following code is a sample of how to hook Audio Units together yourself. (From Constructing Audio Unit Apps (iOS Developer Library))
You can, alternatively, establish and break connections between audio units directly by using the audio unit property mechanism. To do so, use the AudioUnitSetProperty function along with the kAudioUnitProperty_MakeConnection property, as shown in Listing 2-6. This approach requires that you define an AudioUnitConnection structure for each connection to serve as its property value.
/*Listing 2-6*/
AudioUnitElement mixerUnitOutputBus = 0;
AudioUnitElement ioUnitOutputElement = 0;
AudioUnitConnection mixerOutToIoUnitIn;
mixerOutToIoUnitIn.sourceAudioUnit = mixerUnitInstance;
mixerOutToIoUnitIn.sourceOutputNumber = mixerUnitOutputBus;
mixerOutToIoUnitIn.destInputNumber = ioUnitOutputElement;
AudioUnitSetProperty (
    ioUnitInstance,                     // connection destination
    kAudioUnitProperty_MakeConnection,  // property key
    kAudioUnitScope_Input,              // destination scope
    ioUnitOutputElement,                // destination element
    &mixerOutToIoUnitIn,                // connection definition
    sizeof (mixerOutToIoUnitIn)
);