How to List all OpenGL ES Compatible PixelBuffer Formats - iOS

Is there a way to list all CVPixelBuffer formats for CVPixelBufferCreate() that will not generate error -6683: kCVReturnPixelBufferNotOpenGLCompatible when used with CVOpenGLESTextureCacheCreateTextureFromImage()?
This lists all the supported CVPixelBuffer formats for CVPixelBufferCreate(), but does not guarantee that CVOpenGLESTextureCacheCreateTextureFromImage() will not return the error above.
I guess my desired list should be a subset of this one.

Based on the answer from Adi Shavit below, here's a full code snippet (Objective-C) that you can use to print all of the currently OpenGL ES compatible pixel formats:
+ (void)load {
    @autoreleasepool {
        printf("Core Video Supported Pixel Format Types:\n");
        CFArrayRef pixelFormatDescriptionsArray = CVPixelFormatDescriptionArrayCreateWithAllPixelFormatTypes(kCFAllocatorDefault);
        for (CFIndex i = 0; i < CFArrayGetCount(pixelFormatDescriptionsArray); i++) {
            CFNumberRef pixelFormatFourCC = (CFNumberRef)CFArrayGetValueAtIndex(pixelFormatDescriptionsArray, i);
            if (pixelFormatFourCC != NULL) {
                UInt32 value;
                CFNumberGetValue(pixelFormatFourCC, kCFNumberSInt32Type, &value);
                NSString *pixelFormat;
                if (value <= 0x28) {
                    // Small values are plain enum constants, not printable FourCC codes
                    pixelFormat = [NSString stringWithFormat:@"Core Video Pixel Format Type 0x%02x", (unsigned int)value];
                } else {
                    pixelFormat = [NSString stringWithFormat:@"Core Video Pixel Format Type (FourCC) %c%c%c%c", (char)(value >> 24), (char)(value >> 16), (char)(value >> 8), (char)value];
                }
                CFDictionaryRef desc = CVPixelFormatDescriptionCreateWithPixelFormatType(kCFAllocatorDefault, (OSType)value);
                CFBooleanRef OpenGLESCompatibility = (CFBooleanRef)CFDictionaryGetValue(desc, kCVPixelFormatOpenGLESCompatibility);
                printf("%s: Compatible with OpenGL ES: %s\n", pixelFormat.UTF8String, (OpenGLESCompatibility != NULL && CFBooleanGetValue(OpenGLESCompatibility)) ? "YES" : "NO");
                CFRelease(desc); // Create-rule object: we own it
            }
        }
        CFRelease(pixelFormatDescriptionsArray);
        printf("End Core Video Supported Pixel Format Types.\n");
    }
}
You can put this snippet anywhere in an Objective-C category or class to print which pixel formats are compatible and which aren't.
For completeness, here are all the pixel formats compatible with OpenGL ES as of iOS 10.2.1:
L565
5551
L555
2vuy
yuvs
yuvf
L008
L016
2C08
2C16
BGRA
420v
420f
420e
411v
411f
422v
422f
444v
444f
L00h
L00f
2C0h
2C0f
RGhA
RGfA

Answering myself... after further study of this link.
The function CVPixelFormatDescriptionArrayCreateWithAllPixelFormatTypes() provides all the supported formats on a particular device (and OS). Once the fourCC value is found as shown in the link, you can use CVPixelFormatDescriptionCreateWithPixelFormatType() to get a dictionary with various fields describing each format.
This looks something like this:
UInt32 value;
CFNumberGetValue(pixelFormatFourCC, kCFNumberSInt32Type, &value);
CFDictionaryRef desc = CVPixelFormatDescriptionCreateWithPixelFormatType(kCFAllocatorDefault, (OSType)value);
CFBooleanRef OpenGLCompatibility = (CFBooleanRef)CFDictionaryGetValue(desc, kCVPixelFormatOpenGLCompatibility);
CFBooleanRef OpenGLESCompatibility = (CFBooleanRef)CFDictionaryGetValue(desc, kCVPixelFormatOpenGLESCompatibility);
If OpenGLESCompatibility is non-NULL and true, then the format is OpenGL ES compatible.
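Wrapped up as a small helper, this looks like the sketch below (note that the description dictionary comes from a Create function, so we own it and must release it):

// Sketch: returns true if the Core Video format description reports
// OpenGL ES compatibility for the given pixel format.
static bool isOpenGLESCompatible(OSType pixelFormat) {
    CFDictionaryRef desc = CVPixelFormatDescriptionCreateWithPixelFormatType(kCFAllocatorDefault, pixelFormat);
    if (desc == NULL) return false;
    CFBooleanRef compat = (CFBooleanRef)CFDictionaryGetValue(desc, kCVPixelFormatOpenGLESCompatibility);
    bool result = (compat != NULL) && CFBooleanGetValue(compat);
    CFRelease(desc); // Create-rule object
    return result;
}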

Related

Correct decoding opus

I need to transmit sound over the network, and for this I chose the PortAudio and Opus libraries. I am new to working with sound, so there are many things I don't know, but I have read the documentation and looked at some examples. I still have some problems with encoding/decoding with Opus: I do not understand how to correctly restore the original PCM from the encoded data. My sequence of actions is:
Some consts
const int FRAMES_PER_BUFFER = 960;
const int SAMPLE_RATE = 48000;
int NUM_CHANNELS = 2;
int totalFrames = 2 * SAMPLE_RATE; /* Record for a few seconds. */
int numSamples = totalFrames * 2;
int numBytes = numSamples * sizeof(float);
float *sampleBlock = nullptr;
int bytesOfPacket = 0;
unsigned char *packet = nullptr;
I read PCM into sampleBlock
paError = Pa_ReadStream(stream, sampleBlock, totalFrames);
if (paError != paNoError) {
    cout << "PortAudio error : " << Pa_GetErrorText(paError) << endl;
    std::system("pause");
}
Encoding sampleBlock
OpusEncoder *encoder;
int error;
// opus_encoder_create() both allocates and initializes the encoder state,
// so the opus_encoder_get_size()/malloc()/opus_encoder_init() path is an
// alternative to it, not an additional step.
encoder = opus_encoder_create(SAMPLE_RATE, NUM_CHANNELS, OPUS_APPLICATION_VOIP, &error);
if (error != OPUS_OK) {
    return -1;
}
packet = new unsigned char[480];
bytesOfPacket = opus_encode_float(encoder, sampleBlock, FRAMES_PER_BUFFER, packet, 480);
opus_encoder_destroy(encoder);
OK, I received an encoded Opus packet.
Decoding
OpusDecoder *decoder;
int error;
// As with the encoder, opus_decoder_create() allocates and initializes in one call.
decoder = opus_decoder_create(SAMPLE_RATE, NUM_CHANNELS, &error);
opus_decode_float(decoder, packet, bytesOfPacket, sampleBlock, FRAMES_PER_BUFFER, 0);
opus_decoder_destroy(decoder);
Here I am trying to decode the Opus packet back to PCM and save the result into sampleBlock.
Playing the sound
paError = Pa_WriteStream(stream, sampleBlock, totalFrames);
if (paError != paNoError) {
    cout << "PortAudio error : " << Pa_GetErrorText(paError) << endl;
    std::system("pause");
}
I get silence. I don't really understand the subtleties of working with sound since I am new to this. Please help me understand what is wrong.
As for your settings: you're encoding 20 ms of audio per opus_encode_float call. I don't see any iteration over this call, so I suppose you don't hear anything because you encode only 20 ms of audio. You should pass opus_encode_float 20 ms worth of samples at a time, incrementing your sampleBlock pointer through the whole buffer.
Try to encode more audio, and remember that you have to add some sort of framing to decode it. You cannot just feed the whole buffer to the decoder: feed the decoder once for each encoder call, with exactly the data that encoder call produced.
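For illustration, here is a minimal sketch of that loop in plain C (the length-prefix transport is only hinted at in a comment, and MAX_PACKET is an assumption; sampleBlock and numSamples come from the question):

#define FRAME_SIZE 960    // 20 ms at 48 kHz
#define MAX_PACKET 4000   // generous upper bound for one Opus packet

int error;
OpusEncoder *enc = opus_encoder_create(48000, 2, OPUS_APPLICATION_VOIP, &error);
OpusDecoder *dec = opus_decoder_create(48000, 2, &error);

float out[FRAME_SIZE * 2];          // one decoded 20 ms stereo chunk
unsigned char packet[MAX_PACKET];

// Walk the interleaved stereo buffer 20 ms (960 frames = 1920 floats) at a time.
for (int off = 0; off + FRAME_SIZE * 2 <= numSamples; off += FRAME_SIZE * 2) {
    opus_int32 len = opus_encode_float(enc, sampleBlock + off, FRAME_SIZE, packet, MAX_PACKET);
    if (len < 0) break;             // negative return values are encoder errors
    // Transmit 'len' followed by 'len' packet bytes here; that length prefix
    // is the framing. The receiver then decodes exactly one packet at a time:
    opus_decode_float(dec, packet, len, out, FRAME_SIZE, 0);
}
opus_encoder_destroy(enc);
opus_decoder_destroy(dec);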
Damiano

How to determine if an image is a progressive JPEG (interlaced) in Objective-C

I need to determine in Objective-C if an image, which needs to be downloaded and shown, is a progressive JPEG. I did some searching but didn't find any obvious way. Can you help?
The JPEG format contains several markers.
The marker that signals progressive encoding is
\xFF\xC2
(Start of Frame 2, progressive DCT; baseline files use \xFF\xC0 instead). A progressive file also contains multiple \xFF\xDA (Start of Scan) markers, one per scan, whereas a baseline file has only one.
If you find these in the file, then you're dealing with a progressive JPEG.
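For instance, a minimal sketch in plain C that scans raw file bytes for the SOF2 marker (a robust parser should walk marker segments rather than scanning blindly, since this byte pair could in principle occur inside entropy-coded data):

#include <stdbool.h>
#include <stddef.h>

// Returns true if the buffer contains the SOF2 (progressive DCT) marker.
static bool jpegLooksProgressive(const unsigned char *bytes, size_t length) {
    for (size_t i = 0; i + 1 < length; i++) {
        if (bytes[i] == 0xFF && bytes[i + 1] == 0xC2)
            return true;
    }
    return false;
}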
After doing some more research I found some more info on the way it could be done in iOS.
Following what I read in Technical Q&A QA1654 - Accessing image properties with ImageIO I did it like this:
// Example of a progressive JPEG
NSString *imageUrlStr = @"http://cetus.sakura.ne.jp/softlab/software/spibench/pic_22p.jpg";
CFURLRef url = (__bridge CFURLRef)[NSURL URLWithString:imageUrlStr];
CGImageSourceRef myImageSource = CGImageSourceCreateWithURL(url, NULL);
CFDictionaryRef imagePropertiesDictionary = CGImageSourceCopyPropertiesAtIndex(myImageSource, 0, NULL);
The imagePropertiesDictionary dictionary looks like this for a progressive JPEG:
Printing description of imagePropertiesDictionary:
{
    ColorModel = RGB;
    Depth = 8;
    PixelHeight = 1280;
    PixelWidth = 1024;
    "{JFIF}" = {
        DensityUnit = 0;
        IsProgressive = 1;
        JFIFVersion = (
            1,
            0,
            1
        );
        XDensity = 1;
        YDensity = 1;
    };
}
IsProgressive = 1 means it's a progressive/interlaced JPEG. You can check the value like this:
CFDictionaryRef JFIFDictionary = (CFDictionaryRef)CFDictionaryGetValue(imagePropertiesDictionary, kCGImagePropertyJFIFDictionary);
CFBooleanRef isProgressive = (CFBooleanRef)CFDictionaryGetValue(JFIFDictionary, kCGImagePropertyJFIFIsProgressive);
if (isProgressive == kCFBooleanTrue)
    NSLog(@"It's a progressive JPEG");
CFRelease(imagePropertiesDictionary); // Copy-rule object: we own it
CFRelease(myImageSource);             // Create-rule object: we own it

Core Audio: Float32 to SInt16 conversion artefacts

I am converting from the following format:
const int four_bytes_per_float = 4;
const int eight_bits_per_byte = 8;
_stereoGraphStreamFormat.mFormatID = kAudioFormatLinearPCM;
_stereoGraphStreamFormat.mFormatFlags = kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
_stereoGraphStreamFormat.mBytesPerPacket = four_bytes_per_float;
_stereoGraphStreamFormat.mFramesPerPacket = 1;
_stereoGraphStreamFormat.mBytesPerFrame = four_bytes_per_float;
_stereoGraphStreamFormat.mChannelsPerFrame = 2;
_stereoGraphStreamFormat.mBitsPerChannel = eight_bits_per_byte * four_bytes_per_float;
_stereoGraphStreamFormat.mSampleRate = 44100;
to the following format:
interleavedAudioDescription.mFormatID = kAudioFormatLinearPCM;
interleavedAudioDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger;
interleavedAudioDescription.mChannelsPerFrame = 2;
interleavedAudioDescription.mBytesPerPacket = sizeof(SInt16)*interleavedAudioDescription.mChannelsPerFrame;
interleavedAudioDescription.mFramesPerPacket = 1;
interleavedAudioDescription.mBytesPerFrame = sizeof(SInt16)*interleavedAudioDescription.mChannelsPerFrame;
interleavedAudioDescription.mBitsPerChannel = 8 * sizeof(SInt16);
interleavedAudioDescription.mSampleRate = 44100;
Using the following code:
int32_t availableBytes = 0;
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytes);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytes);

// If we have no data in the buffer, we simply return
if (availableBytes <= 0)
{
    return;
}

// ========== Non-Interleaved to Interleaved (Plus Samplerate Conversion) ==========
// Get the number of frames available
UInt32 frames = availableBytes / this->mInputFormat.mBytesPerFrame;
pcmOutputBuffer->mBuffers[0].mDataByteSize = frames * interleavedAudioDescription.mBytesPerFrame;
struct complexInputDataProc_t data = (struct complexInputDataProc_t) { .self = this, .sourceL = tailL, .sourceR = tailR, .byteLength = availableBytes };

// Do the conversion
OSStatus result = AudioConverterFillComplexBuffer(interleavedAudioConverter,
                                                  complexInputDataProc,
                                                  &data,
                                                  &frames,
                                                  pcmOutputBuffer,
                                                  NULL);

// Tell the buffers how much data we consumed during the conversion so that it can be removed
TPCircularBufferConsume(inputBufferL(), availableBytes);
TPCircularBufferConsume(inputBufferR(), availableBytes);

// ========== Buffering Of Interleaved Samples ==========
// If we got converted frames back from the converter, we want to add them to a separate buffer
if (frames > 0)
{
    // Make sure we have enough space in the buffer to store the new data
    TPCircularBufferHead(&pcmCircularBuffer, &availableBytes);
    if (availableBytes > pcmOutputBuffer->mBuffers[0].mDataByteSize)
    {
        // Add the newly converted data to the buffer
        TPCircularBufferProduceBytes(&pcmCircularBuffer, pcmOutputBuffer->mBuffers[0].mData, frames * interleavedAudioDescription.mBytesPerFrame);
    }
    else
    {
        printf("No Space in Buffer\n");
    }
}
However, the output I am getting is wrong: it should be a perfect sine wave, but the recorded waveform shows periodic dropouts.
I have been working on this for days now and just can’t seem to find where it is going wrong.
Can anyone see something that I might be missing?
Edit:
Buffer initialisation:
TPCircularBuffer pcmCircularBuffer;
static SInt16 pcmOutputBuf[BUFFER_SIZE];
pcmOutputBuffer = (AudioBufferList*)malloc(sizeof(AudioBufferList));
pcmOutputBuffer->mNumberBuffers = 1;
pcmOutputBuffer->mBuffers[0].mNumberChannels = 2;
pcmOutputBuffer->mBuffers[0].mData = pcmOutputBuf;
TPCircularBufferInit(&pcmCircularBuffer, BUFFER_SIZE);
Complex input data proc:
static OSStatus complexInputDataProc(AudioConverterRef inAudioConverter,
                                     UInt32 *ioNumberDataPackets,
                                     AudioBufferList *ioData,
                                     AudioStreamPacketDescription **outDataPacketDescription,
                                     void *inUserData) {
    struct complexInputDataProc_t *arg = (struct complexInputDataProc_t*)inUserData;
    BroadcastingServices::MP3Encoder *self = (BroadcastingServices::MP3Encoder*)arg->self;
    if (arg->byteLength <= 0)
    {
        *ioNumberDataPackets = 0;
        return 100; //kNoMoreDataErr;
    }
    UInt32 framesAvailable = arg->byteLength / self->interleavedAudioDescription.mBytesPerFrame;
    if (*ioNumberDataPackets > framesAvailable)
    {
        *ioNumberDataPackets = framesAvailable;
    }
    ioData->mBuffers[0].mData = arg->sourceL;
    ioData->mBuffers[0].mDataByteSize = arg->byteLength;
    ioData->mBuffers[1].mData = arg->sourceR;
    ioData->mBuffers[1].mDataByteSize = arg->byteLength;
    arg->byteLength = 0;
    return noErr;
}
I see a few things that raise a red flag.
1) As mentioned in a comment above, you are overwriting availableBytes for the left input with that of the right:
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytes);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytes);
If the two input streams are changing asynchronously to this code then most certainly you have a race condition.
2) Truncation errors: availableBytes is not necessarily a multiple of the bytes per frame. If it isn't, the following bit of code could cause you to consume more bytes than you actually converted.
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytes);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytes);
...
UInt32 frames = availableBytes / this->mInputFormat.mBytesPerFrame;
...
TPCircularBufferConsume(inputBufferL(), availableBytes);
TPCircularBufferConsume(inputBufferR(), availableBytes);
3) If the output buffer is not ready to consume all of the input, you just throw the input away. That happens in this code:
if (availableBytes > pcmOutputBuffer->mBuffers[0].mDataByteSize)
{
...
}
else
{
printf("No Space in Buffer\n");
}
I'd be really curious whether you're seeing that printf output.
Here is how I would suggest doing it. It's going to be pseudo-code-ish, since I don't have everything necessary to compile and test it.
int32_t availableBytesInL = 0;
int32_t availableBytesInR = 0;
int32_t availableBytesOut = 0;
// figure out how many bytes are available in each buffer.
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytesInL);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytesInR);
TPCircularBufferHead(&pcmCircularBuffer, &availableBytesOut);
// figure out how many full frames are available
UInt32 framesInL = availableBytesInL / mInputFormat.mBytesPerFrame;
UInt32 framesInR = availableBytesInR / mInputFormat.mBytesPerFrame;
UInt32 framesOut = availableBytesOut / interleavedAudioDescription.mBytesPerFrame;
// figure out how many frames to process this time.
UInt32 frames = min(min(framesInL, framesInR), framesOut);
if (frames == 0)
    return;
int32_t bytesConsumed = frames * mInputFormat.mBytesPerFrame;
struct complexInputDataProc_t data = (struct complexInputDataProc_t) {
.self = this, .sourceL = tailL, .sourceR = tailR, .byteLength = bytesConsumed };
// Do the conversion
OSStatus result = AudioConverterFillComplexBuffer(interleavedAudioConverter,
complexInputDataProc,
&data,
&frames,
pcmOutputBuffer,
NULL);
int32_t bytesProduced = frames * interleavedAudioDescription.mBytesPerFrame;
// Tell the buffers how much data we consumed during the conversion so that it can be removed
TPCircularBufferConsume(inputBufferL(), bytesConsumed);
TPCircularBufferConsume(inputBufferR(), bytesConsumed);
TPCircularBufferProduceBytes(&pcmCircularBuffer, pcmOutputBuffer->mBuffers[0].mData, bytesProduced);
Basically what I've done here is figure out up front how many frames should be processed, making sure I'm only processing as many frames as the output buffer can handle. If it were me, I'd also add some checking for buffer underruns on the output and buffer overruns on the input. Finally, I'm not exactly sure of the semantics of AudioConverterFillComplexBuffer with respect to the frame parameter that is passed in and out. It's conceivable that the number of frames out would be less or more than the number of frames in, although since you're not doing sample rate conversion that's probably not going to happen. I've attempted to account for that condition by assigning bytesProduced after the conversion.
Hope this helps. If not, you have two other clues: the dropouts are periodic, and they all look to be about the same size. If you can figure out how many samples each one spans, you can look for those numbers in your code.
I don't think your output buffer, pcmCircularBuffer, is big enough.
Try replacing
TPCircularBufferInit(&pcmCircularBuffer, BUFFER_SIZE);
with
TPCircularBufferInit(&pcmCircularBuffer, sizeof(pcmOutputBuf));
Even if that is the solution, I think there are some problems with your code. I don't know exactly what you're doing; I guess encoding MP3 (which by itself is an uphill battle on iOS, so why not use hardware AAC?), but unless you have realtime demands on both input and output, why use ring buffers at all? Also, I recommend putting units in names to visually catch frame/byte size mismatches, e.g. BUFFER_SIZE_IN_FRAMES, as sketched below.
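For illustration, a sketch of that naming/sizing scheme (BUFFER_SIZE_IN_FRAMES is a hypothetical constant; the other names come from the question):

#define BUFFER_SIZE_IN_FRAMES 4096

// Two interleaved SInt16 channels per frame:
static SInt16 pcmOutputBuf[BUFFER_SIZE_IN_FRAMES * 2];

// TPCircularBufferInit takes a length in bytes; deriving it from the output
// format means the two buffers can never silently disagree about capacity.
TPCircularBufferInit(&pcmCircularBuffer,
                     BUFFER_SIZE_IN_FRAMES * interleavedAudioDescription.mBytesPerFrame);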
If it's not the solution, then I want to see the sine generator.

How to read Arduino float values on OSX with Bluetooth LE (BLE mini module)

The SimpleControls example of the Red Bear Labs BLE Mini module (https://github.com/RedBearLab/iOS/tree/master/Examples/SimpleControls_OSX) shows how to send analog readings (e.g. from a temperature sensor) from an Arduino to iOS / OSX with the following Arduino code:
uint16_t value = analogRead(ANALOG_IN_PIN);
BLEMini_write(0x0B);
BLEMini_write(value >> 8);
BLEMini_write(value);
However, when I try to convert the raw analog reading (e.g. 162) into an actual temperature (e.g. 27.15 degrees Celsius) and transmit the conversion to iOS / OSX, on OSX I just read strange values (e.g. 13414). The Arduino code I used is the following:
int reading = analogRead(ANALOG_IN_PIN);
float voltage = reading * 5.0;
float temp = (voltage - 0.5) * 100;
int tempINT = temp;
uint16_t value = tempINT;
BLEMini_write(0x0B);
BLEMini_write(value >> 8);
BLEMini_write(value);
The code-part of the OSX-app is following:
-(void) bleDidReceiveData:(unsigned char *)data length:(int)length
{
    NSLog(@"Length: %d", length);
    // parse data; all commands are in 3 bytes
    for (int i = 0; i < length; i += 3)
    {
        NSLog(@"0x%02X, 0x%02X, 0x%02X", data[i], data[i+1], data[i+2]);
        if (data[i] == 0x0A) // Digital In data
        {
            if (data[i+1] == 0x01)
                lblDigitalIn.stringValue = @"HIGH";
            else
                lblDigitalIn.stringValue = @"LOW";
        }
        else if (data[i] == 0x0B) // Analog In data
        {
            UInt16 Value;
            Value = data[i+2] | data[i+1] << 8;
            lblAnalogIn.stringValue = [NSString stringWithFormat:@"%d", Value];
        }
    }
}
It seems that the problem is with the float values, or the ints converted from them. If someone could help me solve this problem I would be really happy!
All characteristic data is just bytes. Once a characteristic's data has been read, it is up to the central app to convert the data to an appropriate format, as described by the peripheral's manufacturer (some characteristics also contain a format descriptor which describes how to interpret their data).
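As an illustration (the sensor math and the x100 scale factor are assumptions, not part of the RedBear library), one way to move a fractional reading over a byte-oriented link is to send it as a signed fixed-point value, i.e. degrees times 100, and divide again on the receiving side. Note also that analogRead() returns raw ADC counts (0-1023), so they must be divided by 1024 before the voltage conversion:

// Arduino side (sketch): convert ADC counts to degrees Celsius, then send
// the value as a signed 16-bit integer in hundredths of a degree.
int reading = analogRead(ANALOG_IN_PIN);
float voltage = reading * 5.0 / 1024.0;        // ADC counts -> volts
float temp = (voltage - 0.5) * 100.0;          // volts -> degrees C (TMP36-style)
int16_t fixedPoint = (int16_t)(temp * 100.0);  // 27.15 C -> 2715
BLEMini_write(0x0B);
BLEMini_write((uint8_t)(fixedPoint >> 8));
BLEMini_write((uint8_t)fixedPoint);

// OSX side (inside bleDidReceiveData:): reassemble as a SIGNED 16-bit value
// (UInt16 would mangle temperatures below zero), then undo the scaling.
int16_t fixedPoint = (int16_t)((data[i+1] << 8) | data[i+2]);
float degreesC = fixedPoint / 100.0f;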

Determine Number of Frames in a Core Audio AudioBuffer

I am trying to access the raw data of an audio file on the iPhone/iPad. Below is a basic start down the path I need; however, I am stumped as to what to do once I have an AudioBuffer.
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:urlAsset error:nil];
AVAssetReaderTrackOutput *assetReaderOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:[[urlAsset tracks] objectAtIndex:0] outputSettings:nil];
[assetReader addOutput:assetReaderOutput];
[assetReader startReading];

CMSampleBufferRef ref;
NSArray *outputs = assetReader.outputs;
AVAssetReaderOutput *output = [outputs objectAtIndex:0];
int y = 0;
while (ref = [output copyNextSampleBuffer]) {
    AudioBufferList audioBufferList;
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(ref, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
    for (y = 0; y < audioBufferList.mNumberBuffers; y++) {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        SInt16 *frames = audioBuffer.mData;
        for (int i = 0; i < 24000; i++) { // This sometimes crashes
            Float32 currentFrame = frames[i] / 32768.0f;
        }
    }
}
Essentially I don't know how to tell how many frames each buffer contains, so I can't reliably extract the data from them. I am new to working with raw audio data, so I'm open to any suggestions on how to best read the mData property of the AudioBuffer struct. I also haven't done much with void pointers in the past, so help with that in this context would be great too!
audioBuffer.mDataByteSize tells you the size of the buffer. Did you know this? Just in case you didn't: you can't have looked at the declaration of struct AudioBuffer. You should always look at the header files as well as the docs.
For mDataByteSize to make sense you must know the format of the data. The count of output values is mDataByteSize / sizeof(outputType). However, you seem confused about the format; you must have specified it somewhere. First of all you treat it as a 16-bit signed int:
SInt16 *frames = audioBuffer.mData;
then you treat it as a 32-bit float:
Float32 currentFrame = frames[i] / 32768.0f;
In between, you assume that there are 24000 values, and of course this will crash if there aren't at least 24000 16-bit values. Also, you refer to the data as 'frames', but what you really mean is samples. Each value you call 'currentFrame' is one sample of the audio; 'frame' would typically refer to a block of samples, like .mData.
So, assuming the data format is 32-bit float (and please note, I have no idea if it is; it could be 8-bit int or 32-bit fixed for all I know):
for (int y = 0; y < audioBufferList.mNumberBuffers; y++)
{
    AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
    int sampleCount = audioBuffer.mDataByteSize / sizeof(Float32);
    Float32 *samples = audioBuffer.mData;
    for (int i = 0; i < sampleCount; i++) {
        Float32 currentSample = samples[i];
    }
}
Note, sizeof(Float32) is always 4, but I left it in to be clear.
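Since the buffers in the question come from a CMSampleBufferRef, it is also worth noting (a sketch; 'ref' is the sample buffer variable from the question's loop) that Core Media can report the frame count and the data format directly, instead of you guessing:

// Query the frame count and the sample format straight from the sample buffer.
CMItemCount numFrames = CMSampleBufferGetNumSamples(ref);
CMAudioFormatDescriptionRef fmt = CMSampleBufferGetFormatDescription(ref);
const AudioStreamBasicDescription *asbd = CMAudioFormatDescriptionGetStreamBasicDescription(fmt);
// asbd->mFormatFlags says whether samples are float or signed integer;
// asbd->mBytesPerFrame converts between byte counts and frame counts.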
