I need to hand-assemble 14-bit MIDI Pitch Bend values from raw UInt16 values in iOS. Has anybody out there come up with an elegant solution? Here's where I'm at. I'll probably get a chance to test this later today, but if I hear back before then, great:
First, some MIDI preliminaries for anybody curious.
A MIDI Pitch Bend message is broken up into one Status Byte followed by two Data Bytes (it's a 14-bit controller). The two Data Bytes are associated with their Status Byte by each leading with a zero status bit, and the MIDI spec has them appearing in the order MSB -> LSB.
(Edit: it's actually Status -> LSB -> MSB.)
(i.e. 1110 0000, 0111 1111, 0111 1111)
The challenge is how to break an ARM/Intel 16-bit UInt16 into two 7-bit segments on iOS in a way that makes sense for MIDI.
Please keep in mind that, because we're dealing with an unsigned integer, a value of 0 is NOT neutral pitch bend but full pitch down; neutral pitch bend is defined as 8192, and 16,383 is full pitch up.
So here's my best guess as to how to do this:
UInt16 msbAnd = base10ValueUInt16 & 16256;   // isolate the MSB bits (16256 == 0x3F80)
UInt16 msbAndShift = msbAnd << 1;            // shift into the leading byte, with a 0 status bit
UInt16 lsbAnd = base10ValueUInt16 & 127;     // isolate the LSB
UInt16 finalTwoBytePitchWord = msbAndShift | lsbAnd; // combine into one UInt16 word
UInt16 finalTwoBytePitchWordFlipped = CFSwapInt16HostToBig(finalTwoBytePitchWord); // endian tweak
This code runs fine and seems to create the two Data Bytes with the required zero status bits, and it flips them around from the little-endian Intel/ARM ordering, which seems necessary for MIDI (MIDI is Status -> MSB -> LSB). I can slap on the leading Status Byte with the appropriate MIDI channel later.
So, does this make sense? Has anybody come up with a more elegant solution? ( is there a Library I'm overlooking? ) ... I'll check back in later and also let folks know if this actually worked on the sampler I have to target it at.
Thanks
I think your code is close to right, but it's overly complicated. This question has nothing to do with iOS or endianness or ARM or Intel; it's just plain old C bit-twiddling. If you write the code correctly, it will work on any reasonable platform without modification. You don't need a library; it's only a couple lines of code.
It's best to work with MIDI on a byte-by-byte basis. You want a function that takes a 16-bit unsigned integer (which we'll trust has at most 14 bits worth of value) and returns two single-byte values, one with the most significant bits, one with the least significant bits.
Later on, when you send the message, you assemble the bytes in the appropriate order. According to the specification, pitch wheel messages are three bytes: STATUS, then LSB, then MSB. You have them backwards in your question!
The least-significant 7 bits are easy: just mask off those bits from the original value. The most-significant 7 bits are similar: mask off the next higher 7 bits from the original value, then shift them down.
It doesn't matter whether the 16-bit integers are little-endian or big-endian in memory on your machine; the compiler takes care of that.
Here's a function and a test tool.
#include <stdio.h>
#include <stdint.h>  // for C standard uint8_t and uint16_t

// or, if you prefer, use unsigned char and unsigned short, or Byte and UInt16;
// they'll all work, although some are more portable than others

void encode14BitValue(uint16_t value, uint8_t *out_msb, uint8_t *out_lsb)
{
    uint16_t mask = 0x007F; // low 7 bits on; "(1 << 7) - 1" is arguably clearer
    *out_lsb = value & mask;
    *out_msb = (value & (mask << 7)) >> 7;
}
int main(int argc, const char * argv[])
{
    typedef struct {
        uint16_t in;
        uint8_t expected_msb;
        uint8_t expected_lsb;
    } test_case;

    test_case cases[] = {
        { 0x0000, 0x00, 0x00 },
        { 0x0001, 0x00, 0x01 },
        { 0x0002, 0x00, 0x02 },
        { 0x0004, 0x00, 0x04 },
        { 0x0008, 0x00, 0x08 },
        { 0x0009, 0x00, 0x09 },
        { 0x000F, 0x00, 0x0F },
        { 0x0010, 0x00, 0x10 },
        { 0x0011, 0x00, 0x11 },
        { 0x001F, 0x00, 0x1F },
        { 0x0020, 0x00, 0x20 },
        { 0x0040, 0x00, 0x40 },
        { 0x0070, 0x00, 0x70 },
        { 0x007F, 0x00, 0x7F },
        { 0x0080, 0x01, 0x00 },
        { 0x0081, 0x01, 0x01 },
        { 0x008F, 0x01, 0x0F },
        { 0x0090, 0x01, 0x10 },
        { 0x00FF, 0x01, 0x7F },
        { 0x0100, 0x02, 0x00 },
        { 0x0200, 0x04, 0x00 },
        { 0x0400, 0x08, 0x00 },
        { 0x0800, 0x10, 0x00 },
        { 0x1000, 0x20, 0x00 },
        { 0x1FFF, 0x3F, 0x7F },
        { 0x2000, 0x40, 0x00 },
        { 0x2001, 0x40, 0x01 },
        { 0x3FFF, 0x7F, 0x7F },
    };

    int passed = 1;
    for (int i = 0, c = sizeof(cases) / sizeof(cases[0]); i < c; i++) {
        uint8_t msb, lsb;
        encode14BitValue(cases[i].in, &msb, &lsb);
        if (cases[i].expected_msb != msb || cases[i].expected_lsb != lsb) {
            printf("failed: 0x%04hX expected 0x%02hhX 0x%02hhX got 0x%02hhX 0x%02hhX\n", cases[i].in, cases[i].expected_msb, cases[i].expected_lsb, msb, lsb);
            passed = 0;
        }
    }
    return passed ? 0 : 1;
}
In your code, trying to pack the two bytes of the result into one 16-bit integer just adds confusion. I don't know why you're doing that, since you'll have to extract the individual bytes again whenever you send the MIDI anywhere else. That's where any worries about endianness come up, since your packing and unpacking code have to agree; you might as well not bother. I bet your code was incorrect, but your error in swapping MSB and LSB compensated for it.
I want to use the PayMaya EMV Merchant Presented QR Code Specification for Payment Systems. Everything is good except the CRC; I don't understand how to generate this code.
This is all that the spec says about it, and I still can't understand how to generate it:
The checksum shall be calculated according to [ISO/IEC 13239] using the polynomial '1021' (hex) and initial value 'FFFF' (hex). The data over which the checksum is calculated shall cover all data objects, including their ID, Length and Value, to be included in the QR Code, in their respective order, as well as the ID and Length of the CRC itself (but excluding its Value).
Following the calculation of the checksum, the resulting 2-byte hexadecimal value shall be encoded as a 4-character Alphanumeric Special value by converting each nibble to an Alphanumeric Special character.
Example: a CRC with a two-byte hexadecimal value of '007B' is included in the QR Code as "6304007B".
This converts a string to its UTF-8 representation as a sequence of bytes and computes the 16-bit Cyclic Redundancy Check of those bytes (CRC-16/CCITT-FALSE):
import 'dart:convert';
import 'dart:typed_data';

int crc16_CCITT_FALSE(String data) {
  int initial = 0xFFFF;    // initial value
  int polynomial = 0x1021; // x^16 + x^12 + x^5 + 1
  Uint8List bytes = Uint8List.fromList(utf8.encode(data));
  for (var b in bytes) {
    for (int i = 0; i < 8; i++) {
      bool bit = ((b >> (7 - i)) & 1) == 1;
      bool c15 = ((initial >> 15) & 1) == 1;
      initial <<= 1;
      if (c15 ^ bit) initial ^= polynomial;
    }
  }
  return initial & 0xffff;
}
The CRC specified by ISO/IEC 13239 is CRC-16/ISO-HDLC, per the notes in the CRC RevEng catalogue. This implements that CRC and prints the check value 0x906e:
import 'dart:typed_data';

int crc16ISOHDLC(Uint8List bytes) {
  int crc = 0xffff;
  for (var b in bytes) {
    crc ^= b;
    for (int i = 0; i < 8; i++)
      crc = (crc & 1) != 0 ? (crc >> 1) ^ 0x8408 : crc >> 1;
  }
  return crc ^ 0xffff;
}

void main() {
  Uint8List msg = Uint8List.fromList([0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39]);
  print("0x" + crc16ISOHDLC(msg).toRadixString(16));
}
I'm creating a Swift app in Xcode that sends a command to a BLE adapter in order to make the LEDs connected to it change to a different colour.
As I've established from a reply to a previous post on SO, I have to send the command as hex integers in an array. I'm using the following code to do this:
let bytes : [UInt8] = [ 0x52, 0x13, 0x00, 0x56, 0xFF, 0x00, 0x00, 0x00, 0xAA ]
let data = NSData(bytes: bytes, length: bytes.count)
Therefore, this requires a UInt8 form as suggested above.
However, I'm trying to use sliders as colour pickers on my Swift app in order to set the R, G, and B colours of the LED strip connected to the BLE receiver. In order to do this I have created three sliders for R, G and B respectively, setting the minimum value of each to 0 and the max to 255 (since 255 converts to FF in hex). I'm then using the following function to convert these to hex form for me to implement in the command above.
func colorToHex(input: Int) -> UInt8 {
var st = NSString(format: "%2X", input)
return st
}
The problem with this is that I must return a UInt8 value. Since 'st' is an NSString, Xcode throws the error 'NSString is not convertible to UInt8'.
I'm fairly new to Swift. The question here is: how do I get the function to return a UInt8 value?
Any help would be greatly appreciated!
There is no need to use NSString or Int. If redSlider is your UISlider with minimum value 0 and maximum value 255 then you can just compute
let redByte = UInt8(redSlider.value)
and use that in your bytes array:
var bytes : [UInt8] = [ 0x52, 0x13, 0x00, 0x56, 0xFF, 0x00, 0x00, 0x00, 0xAA ]
bytes[0] = redByte // Assuming that the first array element is for red.
Just
func colorToHex(input: Int) -> UInt8 {
    return UInt8(input % (Int(UInt8.max) + 1))
}
NSString(format: "%2X", colorToHex(25)) // "19"
NSString(format: "%2X", colorToHex(254)) // "FE"
NSString(format: "%2X", colorToHex(255)) // "FF"
NSString(format: "%2X", colorToHex(256)) // "0"
If I were you, I would use NSString(format: "%02X", colorToHex(25)) // "19"
In your case, "%2X" pads with a space when the number has only one digit; "%02X" pads with a zero instead.
I am working on printing an image to a thermal printer. The image needs to be converted into an unsigned char buffer. For example:
unsigned char buffer[10]={0x55,0x66,0x77,0x88,0x44, 0x1B,0x58,0x31,0x15,0x1D}
So far I can convert the image to a black-and-white version and then loop through it to get each hexadecimal value as an NSString stored in an NSMutableArray, like below when output in the NSLog:
( 0x00, 0x00, 0x00, 0x38, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 )
I need to be able to loop through the array and retrieve each hex value stored as a string and convert it into an unsigned char so I can add it to my buffer to print
Byte buffer[[hexStringArray count]];
for (int i = 0; i < [hexStringArray count]; i++) // add print content
{
    buffer[i] = [hexStringArray objectAtIndex:i]; // this needs to go from string to hex to unsigned char
}
[sessionController writeData:[NSData dataWithBytes:buffer length:[hexStringArray count]]];
How can I convert an NSString hex value into an actual hex value, which can then be converted into an unsigned char?
Either there's a serious bug in Apple's MIDI synthesis code, or I'm doing something wrong. Here's my understanding of it. When you send a pitch bend MIDI command, the range of the bend is -8192 to 8191, offset so that the actual range is 0 to 16383. This number is split up into two 7-bit fields, so really what this means is that you have 128 values of coarse control and 128 values of fine control.
Here's a pitch bend sample that I wrote, similar to the commands in Apple's LoadPresetDemo.
// 'ratio' is the % amount to bend in current pitch range, from -1.0 to 1.0
// 'note' is the MIDI note to bend
NSUInteger bendValue = 8191 + 1 + (8191 * ratio);
NSUInteger bendMSB = (bendValue >> 7) & 0x7F;
NSUInteger bendLSB = bendValue & 0x7F;
UInt32 noteNum = note;
UInt32 noteCommand = kMIDIMessage_PitchBend << 4 | 0;
OSStatus result = MusicDeviceMIDIEvent(self.samplerUnit, noteCommand, noteNum, bendMSB, bendLSB);
When the bendMSB (coarse control) changes, the pitch bends just fine. But when the bendLSB (fine control) changes, nothing happens. In other words, it seems that Apple's MIDI synth is ignoring the LSB, meaning that the note bends only in ugly-sounding discrete chunks.
Here's another way of doing the same thing:
// 'ratio' is the % amount to bend in current pitch range, from -1.0 to 1.0
AudioUnitParameterValue bendValue = 63 + 1 + (63 * ratio); // this is a CGFloat under the hood
AudioUnitSetParameter(self.samplerUnit,
kAUGroupParameterID_PitchBend,
kAudioUnitScope_Group,
0,
bendValue,
0);
This exhibits identical behavior to the previous example. What's extra-funny about this way of doing things is that the documentation for kAUGroupParameterID_PitchBend specifies that the value range should be -8192 to 8191, which totally doesn't work. The actual range appears to be 0 to 127, and the floating point (fine control) gets ignored.
Finally, if you make the following call to adjust the pitch bend range:
// 'semitones' is the number of semitones (100 cents) to set the pitch bend range to
// 'cents' is the additional number of cents to set the pitch bend range to
UInt32 status = 0xB0 | 0;
MusicDeviceMIDIEvent(self.samplerUnit, status, 0x64, 0x00, 0); // RPN pitch bend range.
MusicDeviceMIDIEvent(self.samplerUnit, status, 0x65, 0x00, 0);
MusicDeviceMIDIEvent(self.samplerUnit, status, 0x06, semitones, 0); // Data entry MSB
MusicDeviceMIDIEvent(self.samplerUnit, status, 0x26, cents, 0); // Data entry LSB (optional)
MusicDeviceMIDIEvent(self.samplerUnit, status, 0x64, 0x7F, 0); // RPN reset
MusicDeviceMIDIEvent(self.samplerUnit, status, 0x65, 0x7F, 0);
Can you guess what happens? That's right, the LSB message gets ignored and the pitch wheel range only changes by the provided number of semitones.
What's going on here? Is this an Apple bug or am I missing something? (A setup parameter, perhaps?) Or maybe it's not a bug at all? Maybe Apple's synth just doesn't have that level of detail by design? Is that sort of thing legal by the MIDI standard?! Help!
EDIT:
When the pitch bend range is set to 40 semitones, each coarse change makes an audible difference. When the pitch bend range is set to 10 semitones, only every second coarse change makes a difference. At 2 semitones (the default), it takes 4 or more coarse changes to make a difference.
In other words, not only is the LSB apparently ignored, but there also seems to be a minimum # of cents for the pitch to change. Can either of these limitations be fixed? And if not, are there any software synth frameworks for iOS with higher bend resolution?
Hmm... maybe applying kAudioUnitSubType_Varispeed or kAudioUnitSubType_NewTimePitch will yield better results...
Your pitch bend message is incorrect. Instead of this:
UInt32 noteCommand = kMIDIMessage_PitchBend << 4 | 0;
OSStatus result = MusicDeviceMIDIEvent(self.samplerUnit, noteCommand, noteNum, bendMSB, bendLSB);
do this:
UInt32 bendCommand = kMIDIMessage_PitchBend << 4 | 0;
OSStatus result = MusicDeviceMIDIEvent(self.samplerUnit, bendCommand, bendLSB, bendMSB, 0);
The note value isn't part of the pitch bend command. (Also, I changed the name of the variable noteCommand to bendCommand to more accurately reflect its purpose.)
In LoadPresetDemo I added a property to MainViewController.m:
@property (readwrite) NSInteger bendValue;
and this code:
- (void)sendBendValue:(NSInteger)bendValue {
    // bendValue in the range [-8192, 8191]
    const UInt32 bendCommand = kMIDIMessage_PitchBend << 4 | 0;
    bendValue += 8192;
    UInt32 bendMSB = (bendValue >> 7) & 0x7F;
    UInt32 bendLSB = bendValue & 0x7F;
    NSLog(@"MSB=%d, LSB=%d", (unsigned int)bendMSB, (unsigned int)bendLSB);
    OSStatus result = MusicDeviceMIDIEvent(self.samplerUnit, bendCommand, bendLSB, bendMSB, 0);
    NSAssert(result == noErr, @"Unable to send pitch bend message. Error code: %d '%.4s'", (int)result, (const char *)&result);
}
- (IBAction)bendDown:(id)sender {
    self.bendValue = MAX(-8192, self.bendValue - 0x20);
    [self sendBendValue:self.bendValue];
}

- (IBAction)bendCenter:(id)sender {
    self.bendValue = 0;
    [self setBendRange:50 cents:0];
    [self sendBendValue:self.bendValue];
}

- (IBAction)bendUp:(id)sender {
    self.bendValue = MIN(8191, self.bendValue + 0x20);
    [self sendBendValue:self.bendValue];
}

- (void)setBendRange:(UInt32)semitones cents:(UInt32)cents {
    MusicDeviceMIDIEvent(self.samplerUnit, 0xB0, 0x64, 0, 0);
    MusicDeviceMIDIEvent(self.samplerUnit, 0xB0, 0x65, 0, 0);
    MusicDeviceMIDIEvent(self.samplerUnit, 0xB0, 0x06, semitones, 0);
    MusicDeviceMIDIEvent(self.samplerUnit, 0xB0, 0x26, cents, 0);
    // The following two lines are not really necessary. They only matter if
    // additional controller 0x06 or 0x26 messages are sent.
    //MusicDeviceMIDIEvent(self.samplerUnit, 0xB0, 0x64, 0x7F, 0);
    //MusicDeviceMIDIEvent(self.samplerUnit, 0xB0, 0x65, 0x7F, 0);
}
I created three buttons and assigned them to bendDown:, bendCenter:, and bendUp:.
Run the program and press the bendCenter button. Then, with the trombone sound selected, press and hold the "Mid Note" button. While holding that down, press the bendUp or bendDown buttons. I can hear changes in pitch when the LSB changes and the MSB stays the same.
I am working on an OS project and I am wondering how a pointer is stored in memory. I understand that a pointer is 4 bytes, so how is the pointer spread among the 4 bytes?
My issue is that I am trying to store a pointer in a 4-byte slot of memory. Let's say the pointer is 0x7FFFFFFF. What is stored at each of the 4 bytes?
A pointer is stored the same way as any other multi-byte value: the 4 bytes are laid out according to the endianness of the system. Let's say the pointer is stored at addresses 0x1000 through 0x1003:
Big endian (most significant byte first):
Address Byte
0x1000 0x7F
0x1001 0xFF
0x1002 0xFF
0x1003 0xFF
Little endian (least significant byte first):
Address Byte
0x1000 0xFF
0x1001 0xFF
0x1002 0xFF
0x1003 0x7F
Btw, 4-byte addresses mean a 32-bit system; a 64-bit system has 8-byte addresses.
EDIT:
To reference each individual byte of the pointer, you need another pointer. :)
Say you have:
int i = 0;
int *pi = &i;    // say pi == 0x7fffffff
int **ppi = &pi; // from the above example, ppi == 0x1000
Casting ppi to unsigned char * and doing simple pointer arithmetic gets you a pointer to each byte.
You should read up on endianness. Normally, though, you wouldn't work with just one byte of a pointer at a time, so the order of the bytes isn't relevant.
Update: Here's an example of making a fake pointer with a known value and then printing out each of its bytes:
#include <stdio.h>
int main(int argc, char *argv[]) {
    int *p = (int *) 0x12345678;
    unsigned char *cp = (unsigned char *) &p;
    int i;
    for (i = 0; i < (int) sizeof(p); i++)
        printf("%d: %.2x\n", i, cp[i]);
    return 0;
}