Sending a program change with a program number (patch number) - AudioKit

I have two Roland MIDI devices that behave the same when I try to send a bank and program change: they always switch to the first patch of the bank and never to the patch I choose within the bank. Pro Logic can, however, switch to different banks.
The following example causes the devices to change to the bank, but the program (patch) on the device defaults to the first in that bank instead of number 9.
var event = AKMIDIEvent(controllerChange: 0, value: 89, channel: 0) // CC 0: bank select MSB
midiOut.sendEvent(event)
event = AKMIDIEvent(controllerChange: 32, value: 64, channel: 0) // CC 32: bank select LSB
midiOut.sendEvent(event)
event = AKMIDIEvent(programChange: 9, channel: 0) // program (patch) change
midiOut.sendEvent(event)
Does anyone have experience with sending these MIDI messages?

I was going through the same issue and was about to go crazy. It turns out the Program Change values in various MIDI data specifications, from various vendors, are 1-based, not 0-based. Or perhaps it is the AudioKit implementation that is wrong?
So, instead of a programChange value of 9 you should use a value of 8. Here is my code for changing the current instrument on channel 0 to the Bösendorfer grand piano on a Yamaha Clavinova keyboard, where the programChange value in the MIDI data specification is designated as 1.
midiOut.sendControllerMessage(0, value: 108) // MSB sound bank selection
midiOut.sendControllerMessage(32, value: 0) // LSB sound bank selection
midiOut.sendEvent(AKMIDIEvent(programChange: 0, channel: 0)) // Initiate program change based on MSB and LSB selections
While reading various documentation about how MIDI works I also saw some forum posts describing keyboards that expect the LSB bank selection before the MSB bank selection. That is, however, not my understanding of how MIDI should work, but it is worth a try if you still cannot make it work with your Roland keyboards.
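Applied to the original Roland example in the question, the same off-by-one adjustment would look roughly like this (just a sketch; the bank select values 89/64 are copied from the question, and the 0-based program number is the only change):
var event = AKMIDIEvent(controllerChange: 0, value: 89, channel: 0) // bank select MSB (CC 0)
midiOut.sendEvent(event)
event = AKMIDIEvent(controllerChange: 32, value: 64, channel: 0) // bank select LSB (CC 32)
midiOut.sendEvent(event)
// Patch "9" in the device's manual is sent as 8, since program change numbers on the wire are 0-based
event = AKMIDIEvent(programChange: 8, channel: 0)
midiOut.sendEvent(event)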

Related

Wrong POSITION shown for SMF with multiple tempos (using MCI)

Is it not possible to get the proper millisecond POSITION value
in Standard MIDI Files having multiple Tempos?
This line
ret = mciSendString("status MCIMIDI position", 0, 0, 0)
works only if the MIDI file has ONE Tempo setting.
How do you deal with multiple Tempos?

AVAudioEngine reconcile/sync input/output timestamps on macOS/iOS

I'm attempting to sync recorded audio (from an AVAudioEngine inputNode) to an audio file that was playing during the recording process. The result should be like multitrack recording where each subsequent new track is synced with the previous tracks that were playing at the time of recording.
Because sampleTime differs between the AVAudioEngine's output and input nodes, I use hostTime to determine the offset of the original audio and the input buffers.
On iOS, I would assume that I'd have to use AVAudioSession's various latency properties (inputLatency, outputLatency, ioBufferDuration) to reconcile the tracks as well as the host time offset, but I haven't figured out the magic combination to make them work. The same goes for the various AVAudioEngine and Node properties like latency and presentationLatency.
On macOS, AVAudioSession doesn't exist (outside of Catalyst), meaning I don't have access to those numbers. Meanwhile, the latency/presentationLatency properties on the AVAudioNodes report 0.0 in most circumstances. On macOS, I do have access to AudioObjectGetPropertyData and can ask the system about kAudioDevicePropertyLatency, kAudioDevicePropertyBufferSize, kAudioDevicePropertySafetyOffset, etc, but am again at a bit of a loss as to what the formula is to reconcile all of these.
I have a sample project at https://github.com/jnpdx/AudioEngineLoopbackLatencyTest that runs a simple loopback test (on macOS, iOS, or Mac Catalyst) and shows the result. On my Mac, the offset between tracks is ~720 samples. On others' Macs, I've seen as much as 1500 samples offset.
On my iPhone, I can get it close to sample-perfect by using AVAudioSession's outputLatency + inputLatency. However, the same formula leaves things misaligned on my iPad.
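For reference, here's roughly what that iOS-side adjustment looks like (a sketch only, assuming an active AVAudioSession):
import AVFoundation

let session = AVAudioSession.sharedInstance()
// Round-trip hardware latency in seconds, converted to frames at the session's sample rate
let roundTripLatencySeconds = session.outputLatency + session.inputLatency
let roundTripLatencyFrames = roundTripLatencySeconds * session.sampleRate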
What's the magic formula for syncing the input and output timestamps on each platform? I know it may be different on each, which is fine, and I know I won't get 100% accuracy, but I would like to get as close as possible before going through my own calibration process.
Here's a sample of my current code (full sync logic can be found at https://github.com/jnpdx/AudioEngineLoopbackLatencyTest/blob/main/AudioEngineLoopbackLatencyTest/AudioManager.swift):
//Schedule playback of original audio during initial playback
let delay = 0.33 * state.secondsToTicks
let audioTime = AVAudioTime(hostTime: mach_absolute_time() + UInt64(delay))
state.audioBuffersScheduledAtHost = audioTime.hostTime

...

//in the inputNode's inputTap, store the first timestamp
audioEngine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (pcmBuffer, timestamp) in
    if self.state.inputNodeTapBeganAtHost == 0 {
        self.state.inputNodeTapBeganAtHost = timestamp.hostTime
    }
}

...

//after playback, attempt to reconcile/sync the timestamps recorded above
let timestampToSyncTo = state.audioBuffersScheduledAtHost
let inputNodeHostTimeDiff = Int64(state.inputNodeTapBeganAtHost) - Int64(timestampToSyncTo)
let inputNodeDiffInSamples = Double(inputNodeHostTimeDiff) / state.secondsToTicks * inputFileBuffer.format.sampleRate //secondsToTicks is calculated using mach_timebase_info

//play the original metronome audio at sample position 0 and try to sync everything else up to it
let originalAudioTime = AVAudioTime(sampleTime: 0, atRate: renderingEngine.mainMixerNode.outputFormat(forBus: 0).sampleRate)
originalAudioPlayerNode.scheduleBuffer(metronomeFileBuffer, at: originalAudioTime, options: []) {
    print("Played original audio")
}

//play the tap of the input node at its determined sync time -- this _does not_ appear to line up in the result file
let inputAudioTime = AVAudioTime(sampleTime: AVAudioFramePosition(inputNodeDiffInSamples), atRate: renderingEngine.mainMixerNode.outputFormat(forBus: 0).sampleRate)
recordedInputNodePlayer.scheduleBuffer(inputFileBuffer, at: inputAudioTime, options: []) {
    print("Input buffer played")
}
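Since secondsToTicks is only referenced in a comment above, here's roughly how it could be derived from mach_timebase_info (a sketch; the real project computes it in its own setup code):
import Darwin

var timebase = mach_timebase_info_data_t()
mach_timebase_info(&timebase)
// One host tick is numer/denom nanoseconds, so one second corresponds to 1e9 * denom / numer ticks
let secondsToTicks = 1_000_000_000.0 * Double(timebase.denom) / Double(timebase.numer)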
When running the sample app, the resulting file shows the tracks offset roughly as described above.
This answer is applicable to native macOS only
General Latency Determination
Output
In the general case the output latency for a stream on a device is determined by the sum of the following properties:
kAudioDevicePropertySafetyOffset
kAudioStreamPropertyLatency
kAudioDevicePropertyLatency
kAudioDevicePropertyBufferFrameSize
The device safety offset, stream, and device latency values should be retrieved for kAudioObjectPropertyScopeOutput.
On my Mac for the audio device MacBook Pro Speakers at 44.1 kHz this equates to 71 + 424 + 11 + 512 = 1018 frames.
Input
Similarly, the input latency is determined by the sum of the following properties:
kAudioDevicePropertySafetyOffset
kAudioStreamPropertyLatency
kAudioDevicePropertyLatency
kAudioDevicePropertyBufferFrameSize
The device safety offset, stream, and device latency values should be retrieved for kAudioObjectPropertyScopeInput.
On my Mac for the audio device MacBook Pro Microphone at 44.1 kHz this equates to 114 + 2404 + 40 + 512 = 3070 frames.
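For reference, a rough sketch of reading and summing these four properties with AudioObjectGetPropertyData for one scope (macOS 12+ constants, error handling omitted; deviceID is assumed to have been obtained elsewhere, e.g. via kAudioHardwarePropertyDefaultOutputDevice):
import CoreAudio

func latencyFrames(for deviceID: AudioObjectID, scope: AudioObjectPropertyScope) -> UInt32 {
    // Helper that reads a single UInt32 property from an audio object in a given scope
    func readUInt32(_ objectID: AudioObjectID, _ selector: AudioObjectPropertySelector,
                    _ scope: AudioObjectPropertyScope) -> UInt32 {
        var address = AudioObjectPropertyAddress(mSelector: selector,
                                                 mScope: scope,
                                                 mElement: kAudioObjectPropertyElementMain)
        var value: UInt32 = 0
        var size = UInt32(MemoryLayout<UInt32>.size)
        AudioObjectGetPropertyData(objectID, &address, 0, nil, &size, &value)
        return value
    }

    // Device-level properties in the requested scope
    let safetyOffset = readUInt32(deviceID, kAudioDevicePropertySafetyOffset, scope)
    let deviceLatency = readUInt32(deviceID, kAudioDevicePropertyLatency, scope)
    let bufferFrames = readUInt32(deviceID, kAudioDevicePropertyBufferFrameSize, scope)

    // Stream latency lives on the device's first stream in that scope
    var streamsAddress = AudioObjectPropertyAddress(mSelector: kAudioDevicePropertyStreams,
                                                    mScope: scope,
                                                    mElement: kAudioObjectPropertyElementMain)
    var streamsSize: UInt32 = 0
    AudioObjectGetPropertyDataSize(deviceID, &streamsAddress, 0, nil, &streamsSize)
    var streams = [AudioStreamID](repeating: 0, count: Int(streamsSize) / MemoryLayout<AudioStreamID>.size)
    AudioObjectGetPropertyData(deviceID, &streamsAddress, 0, nil, &streamsSize, &streams)
    let streamLatency = streams.first.map {
        readUInt32($0, kAudioStreamPropertyLatency, kAudioObjectPropertyScopeGlobal)
    } ?? 0

    return safetyOffset + deviceLatency + bufferFrames + streamLatency
}

// Usage, matching the sums above:
// let outputFrames = latencyFrames(for: outputDeviceID, scope: kAudioObjectPropertyScopeOutput)
// let inputFrames  = latencyFrames(for: inputDeviceID,  scope: kAudioObjectPropertyScopeInput)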
AVAudioEngine
How the information above relates to AVAudioEngine is not immediately clear. Internally AVAudioEngine creates a private aggregate device and Core Audio essentially handles latency compensation for aggregate devices automatically.
During experimentation for this answer I've found that some (most?) audio devices don't report latency correctly. At least that is how it seems, which makes accurate latency determination nigh impossible.
I was able to get fairly accurate synchronization using my Mac's built-in audio using the following adjustments:
// Some non-zero value to get AVAudioEngine running
let startDelay = 0.1
// The original audio file start time
let originalStartingFrame: AVAudioFramePosition = AVAudioFramePosition(playerNode.outputFormat(forBus: 0).sampleRate * startDelay)
// The output tap's first sample is delivered to the device after the buffer is filled once
// A number of zero samples equal to the buffer size is produced initially
let outputStartingFrame: AVAudioFramePosition = Int64(state.outputBufferSizeFrames)
// The first output sample makes it way back into the input tap after accounting for all the latencies
let inputStartingFrame: AVAudioFramePosition = outputStartingFrame - Int64(state.outputLatency + state.outputStreamLatency + state.outputSafetyOffset + state.inputSafetyOffset + state.inputLatency + state.inputStreamLatency)
On my Mac the values reported by the AVAudioEngine aggregate device were:
// Output:
// kAudioDevicePropertySafetyOffset: 144
// kAudioDevicePropertyLatency: 11
// kAudioStreamPropertyLatency: 424
// kAudioDevicePropertyBufferFrameSize: 512
// Input:
// kAudioDevicePropertySafetyOffset: 154
// kAudioDevicePropertyLatency: 0
// kAudioStreamPropertyLatency: 2404
// kAudioDevicePropertyBufferFrameSize: 512
which equated to the following offsets:
originalStartingFrame = 4410
outputStartingFrame = 512
inputStartingFrame = -2625
I may not be able to answer your question, but I believe there is a property not mentioned in your question that does report additional latency information.
I've only worked at the HAL/AUHAL layers (never AVAudioEngine), but in discussions about computing the overall latencies, some audio device/stream properties come up: kAudioDevicePropertyLatency and kAudioStreamPropertyLatency.
Poking around a bit, I see those properties mentioned in the documentation for AVAudioIONode's presentationLatency property (https://developer.apple.com/documentation/avfoundation/avaudioionode/1385631-presentationlatency). I expect that the hardware latency reported by the driver will be there. (I suspect that the standard latency property reports the latency for an input sample to appear in the output of a "normal" node, and that the IO case is special.)
It's not in the context of AVAudioEngine, but here's one message from the CoreAudio mailing list that talks a bit about using the low level properties that may provide some additional background: https://lists.apple.com/archives/coreaudio-api/2017/Jul/msg00035.html
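For completeness, reading those properties from an AVAudioEngine is straightforward (a sketch; as the other answer notes, they may simply report 0.0 on macOS):
import AVFoundation

let engine = AVAudioEngine()
// presentationLatency on the IO nodes should reflect the hardware latency reported by the driver
let inputPresentationLatency = engine.inputNode.presentationLatency   // seconds
let outputPresentationLatency = engine.outputNode.presentationLatency // seconds
// The generic latency property on any AVAudioNode is the "normal" node processing latency
let inputNodeLatency = engine.inputNode.latency
print(inputPresentationLatency, outputPresentationLatency, inputNodeLatency)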

Call Directory Extension CallKit doesn't recognize numbers with more than 9 digits

I'm working with CallKit and developing an app with a Call Directory Extension. I've followed this tutorial and I'm currently testing the capability to identify numbers that the user doesn't have in their contacts and show an ID from my app. Although it works perfectly with numbers of 1 to 9 digits, for example 123456, when I set numbers with 10 or more digits, iOS doesn't recognize the number. After a day and a half of googling, I have found no information about this. If anyone can help me I'll appreciate it. Thanks in advance.
The method that sets the phone numbers to recognize:
private func addAllIdentificationPhoneNumbers(to context: CXCallDirectoryExtensionContext) {
    // Retrieve phone numbers to identify and their identification labels from data store. For optimal performance and memory usage when there are many phone numbers,
    // consider only loading a subset of numbers at a given time and using autorelease pool(s) to release objects allocated during each batch of numbers which are loaded.
    //
    // Numbers must be provided in numerically ascending order.
    let allPhoneNumbers: [CXCallDirectoryPhoneNumber] = [ 123456789, 1_888_555_5555 ]
    let labels = [ "ID test", "Local business" ]

    for (phoneNumber, label) in zip(allPhoneNumbers, labels) {
        context.addIdentificationEntry(withNextSequentialPhoneNumber: phoneNumber, label: label)
    }
}
With this code, when I simulate a call with the number 123456789, iOS shows the tag "ID test" and that's correct, but if I add any digit, for example a 0 at the end (1234567890), iOS doesn't show anything when I simulate a call. I don't know if I'm missing something.
Well, after a bunch of tests I managed to make it work. The point is that the phone number must contain the full country code and the area code. So, for example, 00_52_55_4567_8932 or +52_55_4567_8932 will both work, but 55_4567_8932 and 4567_8932 will not. I hope this can help someone else in the future. Thank you all!
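For illustration, the working version of the method above would look something like this (the numbers here are placeholder examples; what matters is the full country code plus area code, still in numerically ascending order):
import CallKit

private func addAllIdentificationPhoneNumbers(to context: CXCallDirectoryExtensionContext) {
    // Full numbers: country code + area code + subscriber number, as Int64-backed CXCallDirectoryPhoneNumber
    let allPhoneNumbers: [CXCallDirectoryPhoneNumber] = [ 52_55_4567_8932, 52_81_1234_5678 ]
    let labels = [ "ID test", "Local business" ]

    for (phoneNumber, label) in zip(allPhoneNumbers, labels) {
        context.addIdentificationEntry(withNextSequentialPhoneNumber: phoneNumber, label: label)
    }
}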

SP605 Spartan 6 DDR3 addressing

The following post is quite long, but since I have had trouble making the SP605 board properly interact with the DDR3 for over a month now, hopefully it will be useful to others in the same situation I find myself in. I am pretty certain it's a simple configuration or conceptual error, but I would be more than happy to have it resolved soon.
=== SCENARIO ===
I have created a USB-UART interface to communicate with the FPGA and control the DDR3. Using the IP generator in ISE, I generated a MIG wrapper and then designed the memory interface controller. I have referenced manuals UG388 and UG416, but I have not been able to make the DDR3 behave as expected.
=== PROBLEM STATEMENT ===
Playing around with the burst lengths for write and read commands, I am able to get data back from the DDR3, yet the addressing scheme does not seem to be correct as data is duplicated in addresses 0 and 1, 2 and 3, 4 and 5, and so forth. Also, whenever I write into address 0, for example, nothing changes. Then, when I write into address 1, both addresses 0 and 1 are updated with the data value I just sent. It seems I am "losing" half of the memory space due to this coupled effect.
=== DDR3 IP CONFIGURATION ===
The setup for the DDR3 using the IP generator – considering the SP605 board scenario – is listed below. In sum, I activated the DDR3 Bank 3 and configured Port0 to be 32-bit bidirectional.
Memory selection:
Enable AXI interface: unchecked
Use extended MCB performance range: unchecked
Memory type for bank 3: DDR3 SDRAM
Memory type for bank 1: none
Options for C3 – DDR3 SDRAM
Frequency: 400 MHz
Memory part: MT41J64M16XX-187E
Memory options for C3 – DDR3 SDRAM
Output driver impedance control: RZQ/6
RTT (nominal) – ODT: RZQ/4
Auto self refresh: enabled
Port configuration for C3 – DDR3 SDRAM
Two 32-bit bi-directional and four 32-bit unidirectional ports
Port0: checked
Port1: unchecked
Port2: unchecked
Port3: unchecked
Port4: unchecked
Port5: unchecked
Memory address mapping selection: row-bank-column
FPGA options for C3 – DDR3 SDRAM
Memory interface pin termination: Calibrated input termination
Select RZQ pin location: R7
Select ZIO pin location: W4
Debug signals for memory controller: disable
System clock: differential
=== DATA STRUCTURE ===
From Matlab, I send in a 64-bit command which should write or read the DDR3 based on the address and data provided in this command.
wire [00:00] cmd_instruction = usb_data[63:63]; // '0' = write; '1' = read
wire [27:00] cmd_address = usb_data[62:37]; // 26-bit address
wire [31:00] cmd_data = usb_data[31:00]; // 32-bit data
In ug388, the following can be extracted:
Page 20: The address is 26 bits wide.
C_MEM_ADDR_WIDTH = 13
C_MEM_BANKADDR_WIDTH = 3
C_MEM_NUM_COL_BITS = 10
C_P0_DATA_PORT_SIZE = 32 // 32-bit data ports
C_P0_MASK_SIZE = 4 // 4 bytes = 32 bits (1 mask bit = 1 entire data byte)
Pages 26-27: Command data structure.
pX_cmd_addr[29:0]: 30-bit address; however, the last two bits should = "00" since every word (32 bits) is formed by 4 bytes.
pX_cmd_bl[5:0]: Burst length of 1 is obtained by setting this signal to 0.
pX_cmd_instr[2:0]: The only command instructions used are write="000" and read="001".
Page 28: Write data structure.
pX_wr_mask[PX_MASKSIZE-1:0]: 4-bit mask is set to “0000” so that all 4 bytes are always written into the memory.
=== SIGNAL ASSIGNMENTS ===
Using all this information, I assigned my signals in the following manner:
assign p0_mcb_cmd_instr = {2'b00, cmd_instruction};
assign p0_mcb_cmd_addr = {2'd0, cmd_address, 2'd0};
assign p0_mcb_cmd_bl = 6'd0;
assign p0_mcb_wr_data = cmd_data;
assign p0_mcb_wr_mask = 4'd0;
localparam C3_MEM_BURST_LEN = 8;
=== CONCLUSIONS ===
Based on the configuration, does anyone know what the expected behavior of my controller should be?
If any additional information is necessary for clarification, please let me know.
Thanks a lot,
Bruno.

Scapy - retrieving RSSI from WiFi packets

I'm trying to get RSSI or signal strength from WiFi packets.
I also want the RSSI from 'WiFi probe requests' (when somebody is searching for WiFi hotspots).
I managed to see it from kismet logs but that was only to make sure it is possible - I don't want to use kismet all the time.
For 'full time scanning' I'm using scapy. Does anybody know where I can find the RSSI or signal strength (in dBm) in the packets sniffed with scapy? I don't know how the whole packet is built - and there are a lot of 'hex' values which I don't know how to parse/interpret.
I'm sniffing on both interfaces - wlan0 (detecting when somebody connects to my hotspot), and mon.wlan0 (detecting when somebody is searching for hotspots).
The hardware (WiFi card) I use is based on a Prism chipset (ISL3886). However, the test with Kismet was run on Atheros (AR2413) and Intel iwl4965.
Edit1:
Looks like I need to somehow access the information stored in PrismHeader:
http://trac.secdev.org/scapy/browser/scapy/layers/dot11.py
line 92?
Does anybody know how to access this information?
packet.show() and packet.show2() don't show anything from this class/layer.
Edit2:
After more digging it appears that the interface just isn't set correctly and that's why it doesn't collect all necessary headers.
If I run kismet and then sniff packets from that interface with scapy there is more info in the packet:
###[ RadioTap dummy ]###
version= 0
pad= 0
len= 26
present= TSFT+Flags+Rate+Channel+dBm_AntSignal+Antenna+b14
notdecoded= '8`/\x08\x00\x00\x00\x00\x10\x02\x94\t\xa0\x00\xdb\x01\x00\x00'
...
Now I only need to set the interface correctly without using kismet.
Here is a valuable scapy extension that improves scapy.layers.dot11.Packet's parsing of fields that are present but not decoded.
https://github.com/ivanlei/airodump-iv/blob/master/airoiv/scapy_ex.py
Just use:
import scapy_ex
And:
packet.show()
It'll look like this:
###[ 802.11 RadioTap ]###
version = 0
pad = 0
RadioTap_len= 18
present = Flags+Rate+Channel+dBm_AntSignal+Antenna+b14
Flags = 0
Rate = 2
Channel = 1
Channel_flags= 160
dBm_AntSignal= -87
Antenna = 1
RX_Flags = 0
To summarize:
signal strength was not visible because something was wrong in the way that 'monitor mode' was set (not all headers were passed/parsed by the sniffers). This monitor interface was created by hostapd.
now I'm setting monitor mode on the interface with airmon-ng - tcpdump and scapy show these extra headers.
Edited: use scapy 2.4.1+ (or github dev version). Most recent versions now correctly decode the « notdecoded » part
For some reason the packet structure has changed. Now dBm_AntSignal is the first element in notdecoded.
I am not 100% sure of this solution, but I used sig_str = -(256 - ord(packet.notdecoded[-2:-1])) to reach the first element and I get values that seem to be dBm_AntSignal.
I am using OpenWRT on a TP-Link MR3020 with extroot and the Edward Keeble Passive Wifi Monitoring project with some modifications.
I use scapy_ex.py and I had this information:
802.11 RadioTap
version = 0
pad = 0
RadioTap_len= 36
present = dBm_AntSignal+Lock_Quality+b22+b24+b25+b26+b27+b29
dBm_AntSignal= 32
Lock_Quality= 8
If someone still has the same issue, I think I have found the solution:
I believe this is the right cut for the RSSI value:
sig_str = -(256-ord(packet.notdecoded[-3:-2]))
and this one is for the noise level:
noise_str = -(256-ord(packet.notdecoded[-2:-1]))
The fact that it says "RadioTap" suggests that the device may supply Radiotap headers, not Prism headers, even though it has a Prism chipset. The p54 driver appears to be a "SoftMAC driver", in which case it'll probably supply Radiotap headers; are you using the p54 driver or the older prism54 driver?
I have a similar problem: I set up monitor mode with airmon-ng and I can see the dBm level in tcpdump, but whenever I try sig_str = -(256-ord(packet.notdecoded[-4:-3])) I get -256 because the value returned from notdecoded is 0. The packet structure looks like this.
version = 0
pad = 0
len = 36
present = TSFT+Flags+Rate+Channel+dBm_AntSignal+b14+b29+Ext
notdecoded= ' \x08\x00\x00\x00\x00\x00\x00\x1f\x02\xed\x07\x05
.......