I am trying to create a simple driver for my PCI sound card, using the ALSA API. And of course I face a problem: I can't make my driver work.
So here are some details:
As I mentioned, my sound card is a PCI device. In order to start my driver I had to stop the originally running snd_intel8x0 driver, as it had occupied the device and it was not accessible (the probe function of my driver was never executed). So I blacklisted the snd_intel8x0 driver (added a line in /etc/modprobe.d/blacklist.conf).
From this moment on my driver can be started - the probe function is started.
Unfortunately, when the snd_intel8x0 driver is blacklisted, the ALSA API also seems to disappear. I observe the following:
when I start my driver I get these errors in dmesg:
[...] alsa: Unknown symbol snd_card_register (err 0)
[...] alsa: Unknown symbol snd_card_create (err 0)
[...] alsa: Unknown symbol snd_card_free (err 0)
[...] alsa: Unknown symbol snd_device_new (err 0)
in file /proc/kallsyms there are no snd* symbols (if the original snd_intel8x0 driver is running, all of the above-mentioned snd* functions are present in /proc/kallsyms)
there is no /proc/asound folder (if snd_intel8x0 is running, the asound folder is present)
So my questions:
How can I make my PCI audio card use my driver and not snd_intel8x0?
How do I make ALSA available for my driver?
In general: why does ALSA disappear when snd_intel8x0 is blacklisted?
Thank you in advance
Grts, Nedelin
The driver snd-intel8x0 is for Intel and compatible AC'97 controllers.
If you have such a controller, snd-intel8x0 is the correct driver to use.
If your device does require something new, extend the snd-intel8x0 driver.
If you really want to write a replacement for snd-intel8x0, putting the latter into blacklist.conf is the correct way.
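For reference, blacklisting is just a one-line entry in a modprobe configuration file (a sketch; the exact file name and the initramfs step vary by distribution):

```shell
# /etc/modprobe.d/blacklist.conf -- keep the in-tree driver from binding
blacklist snd_intel8x0

# On some distributions the blacklist must also be baked into the
# initramfs so it applies at early boot, e.g. on Debian/Ubuntu:
#   sudo update-initramfs -u
```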
You get "unknown symbol" errors when the modules that are currently loaded and the module you are trying to load are not compatible.
When you recompile ALSA, you should unload all snd* modules before loading a new one.
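As a sketch, the unload/reload sequence might look like this (the module names and the file my_driver.ko are examples; list the modules actually loaded on your system with lsmod):

```shell
# See which ALSA modules are currently loaded:
lsmod | grep '^snd'

# Unload them, dependents first, the snd core last:
sudo modprobe -r snd_intel8x0 snd_ac97_codec snd_pcm snd_timer snd

# Then load the freshly built core followed by your own driver:
sudo insmod ./snd.ko
sudo insmod ./my_driver.ko
```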
My application is a media player. While automating the application I need to control the media volume up and down.
I have tried adb commands in my program but it did not work; can anyone please help me with this?
code :
public void devicevolume() throws IOException, InterruptedException {
    // "adb - KEYCODE_VOLUME_DOWN" is not a valid adb invocation;
    // key events are sent with "adb shell input keyevent <KEYCODE>".
    Process p = Runtime.getRuntime()
            .exec("adb shell input keyevent KEYCODE_VOLUME_DOWN");
    p.waitFor();
    p.destroy();
}
You can give it a try through adb commands:
You can call setMasterVolume() with:
service call audio <code> i32 <volume>
The codes are version-specific. Let's say you want to set the volume to 50% on a KitKat device. The command will be:
service call audio 9 i32 50
Read ktnr74.blogspot.com/2014/09/… to find out the proper code for your Android version.
You can use the Android driver and pass the key event for increasing and decreasing the device volume.
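The key-event approach can also be exercised directly from the command line; the keycode names and numbers below are the standard Android KeyEvent constants (24 = VOLUME_UP, 25 = VOLUME_DOWN):

```shell
# Raise the media volume by one step:
adb shell input keyevent KEYCODE_VOLUME_UP    # or: adb shell input keyevent 24

# Lower it by one step:
adb shell input keyevent KEYCODE_VOLUME_DOWN  # or: adb shell input keyevent 25
```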
I've searched some examples and found this:
var
  op: TMCI_Open_Parms;
  rp: TMCI_Record_Parms;
  sp: TMCI_SaveParms;
begin
  // Open
  op.lpstrDeviceType := 'waveaudio';
  op.lpstrElementName := '';
  if mciSendCommand(0, MCI_OPEN, MCI_OPEN_ELEMENT or MCI_OPEN_TYPE, cardinal(@op)) <> 0 then
    raise Exception.Create('MCI error');
  try
    // Record
    rp.dwFrom := 0;
    rp.dwTo := 10000;
    rp.dwCallback := 0;
    if mciSendCommand(op.wDeviceID, MCI_RECORD, MCI_TO or MCI_WAIT, cardinal(@rp)) <> 0 then
      raise Exception.Create('MCI error. No microphone connected to the computer?');
    // Save
    sp.lpfilename := PChar(ExtractFilePath(Application.ExeName) + 'test.wav');
    if mciSendCommand(op.wDeviceID, MCI_SAVE, MCI_SAVE_FILE or MCI_WAIT, cardinal(@sp)) <> 0 then
      raise Exception.Create('MCI error');
  finally
    mciSendCommand(op.wDeviceID, MCI_CLOSE, 0, 0);
  end;
end;
It records only the microphone. Can I record speakers and microphone simultaneously? Or separately?
The ability to do this largely depends on which Windows version you are using.
If you are still using Windows XP, you might have "Software mix" or "Stereo out" recording channels available.
But if you are using Windows Vista or newer, these channels are no longer available - well, not without the use of some unofficial sound card drivers.
The main reason for this is that the ability to record the entire sound card output defeated any digital copyright protection for audio files.
So in order to achieve what you need, you will have to find some custom sound library which is able to directly play the music from YouTube, mix your microphone input with it, and output (record) that into some file.
I think you might be able to achieve this with the BASS sound library (http://www.un4seen.com/), but I'm not sure.
Another option would be to connect the Wave Out line directly into the Line In port using a cable and then record from Line In instead of from the microphone. Also make sure to allow your microphone to be played over the speakers (disabled by default on most sound cards to avoid possible echo).
EDIT: After taking a look at a program named Audacity, I found out that recording your computer's sound output only works if you choose WASAPI as the sound interface.
Looking further into WASAPI, it seems this is a new audio interface that was introduced with Windows Vista. I must admit that I hadn't known about it before.
So it seems the answer lies in using WASAPI instead of the old MME audio interface.
A quick search on Google indicates that some people have already managed to use WASAPI from Delphi.
Since I don't have any experience with this new sound API, I'm afraid I can't be of more help than recommending that you learn about WASAPI and find some examples for it.
EDIT2: I managed to find a small example of using the WASAPI interface in Delphi for loopback recording. You can get it here:
http://4coder.org/delphi-source-code/547/
I also found a thread on DelphiPraxis about someone making a special-purpose unit for loopback recording with WASAPI in Delphi, but since I'm not a member of DelphiPraxis I can't download and test it.
http://www.delphipraxis.net/183977-wasapi-loopback-audio-capturing.html
I'm a newbie to driver development in Linux. I want to trigger a DMA read operation at a specified target address, but I have no basic concept of how to do it. Should I write a new driver for my sound card? Or just invoke some APIs (if any) provided by the current sound card driver?
I can imagine that what I want looks like this (from LDD3 Ch15),
int dad_transfer(struct dad_dev *dev, int write, void *buffer,
                 size_t count)
{
    dma_addr_t bus_addr;

    /* Map the buffer for DMA */
    dev->dma_dir = (write ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
    dev->dma_size = count;
    bus_addr = dma_map_single(&dev->pci_dev->dev, buffer, count,
                              dev->dma_dir);
    dev->dma_addr = bus_addr;

    /* Set up the device (note: writeb/writel take the value first,
       then the register address) */
    writeb(DAD_CMD_DISABLEDMA, dev->registers.command);
    writeb(write ? DAD_CMD_WR : DAD_CMD_RD, dev->registers.command);
    writel(cpu_to_le32(bus_addr), dev->registers.addr);
    writel(cpu_to_le32(count), dev->registers.len);

    /* Start the operation */
    writeb(DAD_CMD_ENABLEDMA, dev->registers.command);
    return 0;
}
But what should this be, a user-space program or a module? And where can I grab more device-specific details in order to know which registers should be written, and how?
You have several questions buried in here, so I will take them one at a time:
Should I write a new driver or invoke some API function calls?
If the existing driver has such functions accessible from userspace, yes, you should use them - they will be the easiest option. If they do not already exist, you will have to write a driver, because you cannot directly access the kernel's DMA engine from userspace. You need a driver to help you along.
Should this be a userspace program or module?
It would have to be a module so that it can access low-level kernel features. Using your included code as an example, you cannot call "dma_map_single" from userspace or access a PCI device's device structure. You need to be in kernel space to do that, which requires either a driver module or static kernel driver.
Where can I get more device-specific details?
You will have to get hold of a programmer's guide for the device you want to access. Regular user's manuals won't have the level of detail you need (register addresses, bit patterns, etc.), so you may have to contact the manufacturer to get a driver writer's guide. You may also be able to find some examples in the kernel source code. Check http://lxr.free-electrons.com/ for a searchable, up-to-date listing of the entire kernel source. If you look in /drivers/, you may be able to find some examples to get you started.
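If the hardware is a PCI device, a good first step before hunting for a programmer's guide is to see what the kernel already knows about it. A sketch (the bus address 00:1b.0 is hypothetical; substitute the one lspci reports for your card):

```shell
# Find the device and its vendor:device IDs:
lspci -nn | grep -i audio

# Dump its BARs (register windows), IRQ, and capabilities:
sudo lspci -vv -s 00:1b.0

# The same BARs are exposed through sysfs:
ls /sys/bus/pci/devices/0000:00:1b.0/resource*
```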
I need to compress data sent over a secure channel in my iOS app, and I was wondering if I could use TLS compression for this. I am unable to figure out whether Apple's TLS implementation, Secure Transport, supports it.
Does anyone else know if TLS compression is supported in iOS or not?
I was trying to determine whether Apple's implementation of SSL/TLS supports compression, but I am afraid it does not.
At first, the existence of the errSSLPeerDecompressFail error code made me hopeful that there had to be a way to enable compression. But I could not find it.
The first obvious sign that Apple doesn't support compression came from several wire captures I did from my device (6.1) opening secure sockets on different ports. In all of them, the Client Hello packet reported only one compression method: null.
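You can reproduce this kind of check without a full packet capture: openssl s_client reports the compression that was actually negotiated (the hostname is a placeholder):

```shell
# "Compression: NONE" in the output means the null method was negotiated:
openssl s_client -connect example.com:443 < /dev/null 2>/dev/null | grep -i 'compression'
```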
Then I looked at the latest available code for libsecurity_ssl from Apple. This is the implementation from Mac OS X 10.7.5, but something tells me the iOS one will be very similar, if not the same; it surely will not be more powerful than the Mac OS X one.
You can find in the file sslHandshakeHello.c, lines 186-187 (SSLProcessServerHello):
if (*p++ != 0) /* Compression */
return unimpErr;
That error code sounds a lot like "if the server sends any compression other than null (0), we don't implement that, so fail".
Again, the same file, line 325 (SSLEncodeClientHello):
*p++ = 0; /* null compression */
And nothing else around (DEFLATE is method 1, according to RFC 3749).
Below, lines 469, 476 and 482-483 (SSLProcessClientHello):
compressionCount = *(charPtr++);
...
/* Ignore list; we're doing null */
...
/* skip compression list */
charPtr += compressionCount;
I think it is pretty clear that this implementation only handles null compression: it is the only one sent in the Client Hello, the only one accepted in the Server Hello, and the compression methods are ignored when the Client Hello is received (null must be implemented and offered by every client).
So I think both you and I will have to implement application-level compression. Good luck.
I have a Delphi 6 application that uses the DSPACK DirectShow component library. Currently I am getting the error "no combination of intermediate filters could be found" when I attempt to connect the Capture pin on an audio capture device to the Input pin of another filter. I believe I am setting the media formats correctly. I have an error trap, and in that trap I explicitly query both pins for the exact media format they are set to, in case there is an incongruity. When I do this, both pins report exactly the same WAV format:
format tag: 1
number of channels: 1
bits per sample: 16
sample rate: 8000
That matches what I set both filters to, yet I am getting an error that (as far as I know) usually indicates a format incompatibility. Has anyone run into this error before and know what I might be doing wrong, or what other kinds of tests/inspections I can do?
It turns out the error was being caused by the media format I was returning from my push source audio filter. I had the wrong sub-type, and that triggered the "no combination of intermediate filters could be found" error from DirectShow, since the sub-type my push source filter was using was not compatible with other filters in the graph, such as the Capture filter. See the "UPDATE" note in my thread on media formats for full details:
Correct Media Type settings for a DirectShow filter that delivers Wav audio data?