Using zlib deflateBound() dynamically - dynamic-memory-allocation

I have been looking at how to use deflateBound() dynamically and have not found exactly what I am looking for.
I looked at the zlib manual and the examples included in the library, and found these:
Compression Libraries for ARM Cortex M3/4
Determine compressed/uncompressed buffer size for Z-lib in C
zlib, deflate: How much memory to allocate?
What I am missing is how to allocate after calling deflateBound().
E.g. this looks like it will cause problems:
z_stream defstream;
uint8_t *outBuf=NULL;
uint32_t outLen=0;
defstream.zalloc = Z_NULL;
defstream.zfree = Z_NULL;
defstream.opaque = Z_NULL;
defstream.avail_in = (uInt)inLen;
defstream.next_in = (Bytef *)inBuf;
defstream.avail_out = (uInt)0;
defstream.next_out = (Bytef *)outBuf;
deflateInit(&defstream, Z_DEFAULT_COMPRESSION);
uint32_t estimateLen = deflateBound(&defstream, inLen);
outBuf = malloc(estimateLen);
defstream.avail_out = (uInt)estimateLen;
deflate(&defstream, Z_FINISH);
deflateEnd(&defstream);
I see realloc being mentioned; does this mean starting with an initial (probably too small) buffer is recommended?
z_stream defstream;
uint8_t *outBuf=NULL;
uint32_t outLen=100;
outBuf = malloc(outLen);
defstream.zalloc = Z_NULL;
defstream.zfree = Z_NULL;
defstream.opaque = Z_NULL;
defstream.avail_in = (uInt)inLen;
defstream.next_in = (Bytef *)inBuf;
defstream.avail_out = (uInt)outLen;
defstream.next_out = (Bytef *)outBuf;
deflateInit(&defstream, Z_DEFAULT_COMPRESSION);
uint32_t estimateLen = deflateBound(&defstream, inLen);
outBuf = realloc(outBuf, estimateLen);
defstream.avail_out = (uInt)estimateLen;
deflate(&defstream, Z_FINISH);
deflateEnd(&defstream);
Since this is an embedded system, I am trying to keep things simple.
Update 02/08/2019
The following code works (note Mark's fix):
static uint8_t *compressBuffer(char *inBuf, uint32_t *outLen)
{
uint32_t inLen = strlen(inBuf)+1; // +1 so the null terminator will get encoded
uint8_t *outBuf = NULL;
int result;
uint32_t tmpLen=0;
// initialize zlib
z_stream defstream;
defstream.zalloc = Z_NULL;
defstream.zfree = Z_NULL;
defstream.opaque = Z_NULL;
defstream.avail_in = inLen;
defstream.next_in = (Bytef *)inBuf;
defstream.avail_out = 0;
defstream.next_out = (Bytef *)outBuf;
if ((result = deflateInit(&defstream, Z_DEFAULT_COMPRESSION)) == Z_OK)
{
// calculate actual output length and update structure
uint32_t estimateLen = deflateBound(&defstream, inLen);
outBuf = malloc(estimateLen+10);
if (outBuf != NULL)
{
// update zlib configuration
defstream.avail_out = (uInt)estimateLen;
defstream.next_out = (Bytef *)outBuf;
// do the compression
deflate(&defstream, Z_FINISH);
tmpLen = (uint8_t*)defstream.next_out - outBuf;
}
}
// do the following regardless of outcome, to leave things in a good state
deflateEnd(&defstream);
// return whatever we have
*outLen = tmpLen;
return outBuf;
}
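For reference, a quick usage sketch of compressBuffer() (the message contents here are made up):
char msg[] = "hello hello hello";
uint32_t compLen = 0;
uint8_t *comp = compressBuffer(msg, &compLen);
if (comp != NULL)
{
    // compLen now holds the size of the deflate stream
    free(comp); // the caller owns the returned buffer
}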

In both examples you are not setting next_out after the deflateBound(). You need defstream.next_out = (Bytef *)outBuf; after the malloc().
You do not need to do a realloc().
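Putting that together, a minimal sketch of the first example with the fix applied (error handling trimmed, names as in the question):
z_stream defstream;
defstream.zalloc = Z_NULL;
defstream.zfree = Z_NULL;
defstream.opaque = Z_NULL;
defstream.avail_in = (uInt)inLen;
defstream.next_in = (Bytef *)inBuf;
deflateInit(&defstream, Z_DEFAULT_COMPRESSION);
uint32_t estimateLen = deflateBound(&defstream, inLen); // upper bound, valid after deflateInit()
uint8_t *outBuf = (uint8_t *)malloc(estimateLen);
defstream.avail_out = (uInt)estimateLen;
defstream.next_out = (Bytef *)outBuf;   // must point at the freshly allocated buffer
deflate(&defstream, Z_FINISH);          // returns Z_STREAM_END when all input was compressed
uint32_t outLen = defstream.total_out;  // actual compressed size
deflateEnd(&defstream);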

Related

Creating Texture with initial data DirectX 11

I am trying to implement Variable Rate Shading in the app based on DirectX 11.
I am doing it this way:
UINT dwRtWidth = 2560;
UINT dwRtHeight = 1440;
D3D11_TEXTURE2D_DESC srcDesc;
ZeroMemory(&srcDesc, sizeof(srcDesc));
int sri_w = dwRtWidth / NV_VARIABLE_PIXEL_SHADING_TILE_WIDTH;
int sri_h = dwRtHeight / NV_VARIABLE_PIXEL_SHADING_TILE_HEIGHT;
srcDesc.Width = sri_w;
srcDesc.Height = sri_h;
srcDesc.ArraySize = 1;
srcDesc.Format = DXGI_FORMAT_R8_UINT;
srcDesc.SampleDesc.Count = 1;
srcDesc.SampleDesc.Quality = 0;
srcDesc.Usage = D3D11_USAGE_DEFAULT; //Optional
srcDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE; //Optional
srcDesc.CPUAccessFlags = 0;
srcDesc.MiscFlags = 0;
D3D11_SUBRESOURCE_DATA initialData;
UINT* data = (UINT*)malloc(sri_w * sri_h * sizeof(UINT));
for (int i = 0; i < sri_w * sri_h; i++)
data[i] = (UINT)0;
initialData.pSysMem = data;
initialData.SysMemPitch = sri_w;
//initialData.SysMemSlicePitch = 0;
HRESULT hr = s_device->CreateTexture2D(&srcDesc, &initialData, &pShadingRateSurface);
if (FAILED(hr))
{
LOG("Texture not created");
LOG(std::system_category().message(hr));
}
else
LOG("Texture created");
When I try to create the texture with initial data, it is not created, and the HRESULT gives the message 'The parameter is incorrect'. It doesn't say which one.
When I create texture without initial data it's created successfully.
What's wrong with the initial data? I also tried to use unsigned char instead of UINT, since the format has 8 bits per texel, but the result was the same: the texture was not created.
Please help.
After some time I found a solution to the problem. I needed to add the line:
srcDesc.MipLevels = 1;
With this change the texture was finally created with initial data.
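For reference, a sketch of the working setup with that line added. Note that I have also assumed one byte per texel here, since DXGI_FORMAT_R8_UINT is an 8-bit format and SysMemPitch is measured in bytes; the original UINT array with a pitch of sri_w would not match that format:
D3D11_TEXTURE2D_DESC srcDesc;
ZeroMemory(&srcDesc, sizeof(srcDesc));
srcDesc.Width = sri_w;
srcDesc.Height = sri_h;
srcDesc.MipLevels = 1;                              // the missing line
srcDesc.ArraySize = 1;
srcDesc.Format = DXGI_FORMAT_R8_UINT;
srcDesc.SampleDesc.Count = 1;
srcDesc.Usage = D3D11_USAGE_DEFAULT;
srcDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
uint8_t* data = (uint8_t*)calloc(sri_w * sri_h, 1); // zero-filled, 1 byte per R8_UINT texel
D3D11_SUBRESOURCE_DATA initialData = {};
initialData.pSysMem = data;
initialData.SysMemPitch = sri_w;                    // row pitch in bytes
HRESULT hr = s_device->CreateTexture2D(&srcDesc, &initialData, &pShadingRateSurface);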

iOS (OC) compression_decode_buffer() Returns a null value

I have a tool class with a method that decompresses LZ4 data, but the decompression fails and I don't know what is wrong (libcompression.tbd is linked and #include "compression.h" is present). Below is the code:
+ (NSData *)getDecompressedData:(NSData *)compressed
{
size_t dst_buffer_size = 168*217;
uint8_t *dst_buffer = (uint8_t *)malloc(dst_buffer_size);
uint8_t *src_buffer = (uint8_t *)malloc(compressed.length);
size_t compressResultLength = compression_decode_buffer(dst_buffer, dst_buffer_size, src_buffer, dst_buffer_size, NULL, COMPRESSION_LZ4);
NSData *decompressed = [[NSData alloc] initWithBytes:dst_buffer length:compressResultLength];
return decompressed;
}
compressResultLength is 0.
I chose the algorithm incorrectly: COMPRESSION_LZ4 should not be selected, COMPRESSION_LZ4_RAW should be. The destination size and the source size passed to the call were also wrong before. The correct code is below:
+ (NSData *)getDecompressedData:(NSData *)compressed{
size_t destSize = 217*168;
uint8_t *destBuf = malloc(sizeof(uint8_t) * destSize);
const uint8_t *src_buffer = (const uint8_t *)[compressed bytes];
size_t src_size = compressed.length;
size_t decompressedSize = compression_decode_buffer(destBuf, destSize, src_buffer, src_size,
NULL, COMPRESSION_LZ4_RAW);
MyLog(#"after decompressed. length = %d",decompressedSize) ;
NSData *data = [NSData dataWithBytesNoCopy:destBuf length:decompressedSize freeWhenDone:YES];
return data;
}
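For completeness, here is a sketch of the matching encode side. The key point is that the encoder and decoder must agree on COMPRESSION_LZ4_RAW; the buffer sizing below is illustrative, not part of the original answer:
size_t srcSize = 217 * 168;
uint8_t *src = (uint8_t *)malloc(srcSize);   // filled with your raw data elsewhere
uint8_t *dst = (uint8_t *)malloc(srcSize);
size_t written = compression_encode_buffer(dst, srcSize, src, srcSize,
                                           NULL, COMPRESSION_LZ4_RAW);
if (written == 0) {
    // dst was too small (incompressible input); grow the buffer and retry
}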

Core Audio: Float32 to SInt16 conversion artefacts

I am converting from the following format:
const int four_bytes_per_float = 4;
const int eight_bits_per_byte = 8;
_stereoGraphStreamFormat.mFormatID = kAudioFormatLinearPCM;
_stereoGraphStreamFormat.mFormatFlags = kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
_stereoGraphStreamFormat.mBytesPerPacket = four_bytes_per_float;
_stereoGraphStreamFormat.mFramesPerPacket = 1;
_stereoGraphStreamFormat.mBytesPerFrame = four_bytes_per_float;
_stereoGraphStreamFormat.mChannelsPerFrame = 2;
_stereoGraphStreamFormat.mBitsPerChannel = eight_bits_per_byte * four_bytes_per_float;
_stereoGraphStreamFormat.mSampleRate = 44100;
to the following format:
interleavedAudioDescription.mFormatID = kAudioFormatLinearPCM;
interleavedAudioDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger;
interleavedAudioDescription.mChannelsPerFrame = 2;
interleavedAudioDescription.mBytesPerPacket = sizeof(SInt16)*interleavedAudioDescription.mChannelsPerFrame;
interleavedAudioDescription.mFramesPerPacket = 1;
interleavedAudioDescription.mBytesPerFrame = sizeof(SInt16)*interleavedAudioDescription.mChannelsPerFrame;
interleavedAudioDescription.mBitsPerChannel = 8 * sizeof(SInt16);
interleavedAudioDescription.mSampleRate = 44100;
Using the following code:
int32_t availableBytes = 0;
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytes);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytes);
// If we have no data in the buffer, we simply return
if (availableBytes <= 0)
{
return;
}
// ========== Non-Interleaved to Interleaved (Plus Samplerate Conversion) =========
// Get the number of frames available
UInt32 frames = availableBytes / this->mInputFormat.mBytesPerFrame;
pcmOutputBuffer->mBuffers[0].mDataByteSize = frames * interleavedAudioDescription.mBytesPerFrame;
struct complexInputDataProc_t data = (struct complexInputDataProc_t) { .self = this, .sourceL = tailL, .sourceR = tailR, .byteLength = availableBytes };
// Do the conversion
OSStatus result = AudioConverterFillComplexBuffer(interleavedAudioConverter,
complexInputDataProc,
&data,
&frames,
pcmOutputBuffer,
NULL);
// Tell the buffers how much data we consumed during the conversion so that it can be removed
TPCircularBufferConsume(inputBufferL(), availableBytes);
TPCircularBufferConsume(inputBufferR(), availableBytes);
// ========== Buffering Of Interleaved Samples =========
// If we got converted frames back from the converter, we want to add it to a separate buffer
if (frames > 0)
{
// Make sure we have enough space in the buffer to store the new data
TPCircularBufferHead(&pcmCircularBuffer, &availableBytes);
if (availableBytes > pcmOutputBuffer->mBuffers[0].mDataByteSize)
{
// Add the newly converted data to the buffer
TPCircularBufferProduceBytes(&pcmCircularBuffer, pcmOutputBuffer->mBuffers[0].mData, frames * interleavedAudioDescription.mBytesPerFrame);
}
else
{
printf("No Space in Buffer\n");
}
}
However, the output I am getting is wrong: it should be a perfect sine wave, but as the captured waveform shows, it is not.
I have been working on this for days now and just can’t seem to find where it is going wrong.
Can anyone see something that I might be missing?
Edit:
Buffer initialisation:
TPCircularBuffer pcmCircularBuffer;
static SInt16 pcmOutputBuf[BUFFER_SIZE];
pcmOutputBuffer = (AudioBufferList*)malloc(sizeof(AudioBufferList));
pcmOutputBuffer->mNumberBuffers = 1;
pcmOutputBuffer->mBuffers[0].mNumberChannels = 2;
pcmOutputBuffer->mBuffers[0].mData = pcmOutputBuf;
TPCircularBufferInit(&pcmCircularBuffer, BUFFER_SIZE);
Complex input data proc:
static OSStatus complexInputDataProc(AudioConverterRef inAudioConverter,
UInt32 *ioNumberDataPackets,
AudioBufferList *ioData,
AudioStreamPacketDescription **outDataPacketDescription,
void *inUserData) {
struct complexInputDataProc_t *arg = (struct complexInputDataProc_t*)inUserData;
BroadcastingServices::MP3Encoder *self = (BroadcastingServices::MP3Encoder*)arg->self;
if ( arg->byteLength <= 0 )
{
*ioNumberDataPackets = 0;
return 100; //kNoMoreDataErr;
}
UInt32 framesAvailable = arg->byteLength / self->interleavedAudioDescription.mBytesPerFrame;
if (*ioNumberDataPackets > framesAvailable)
{
*ioNumberDataPackets = framesAvailable;
}
ioData->mBuffers[0].mData = arg->sourceL;
ioData->mBuffers[0].mDataByteSize = arg->byteLength;
ioData->mBuffers[1].mData = arg->sourceR;
ioData->mBuffers[1].mDataByteSize = arg->byteLength;
arg->byteLength = 0;
return noErr;
}
I see a few things that raise a red flag.
1) as mentioned in a comment above, the fact that you are overwriting availableBytes for the left input with that from the right:
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytes);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytes);
If the two input streams are changing asynchronously to this code then most certainly you have a race condition.
2) Truncation errors: availableBytes is not necessarily a multiple of bytes per frame. If not then the following bit of code could cause you to consume more bytes than you actually converted.
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytes);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytes);
...
UInt32 frames = availableBytes / this->mInputFormat.mBytesPerFrame;
...
TPCircularBufferConsume(inputBufferL(), availableBytes);
TPCircularBufferConsume(inputBufferR(), availableBytes);
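A minimal guard, reusing the question's names, is to round down to a whole number of frames and consume only that many bytes:
UInt32 frames = availableBytes / this->mInputFormat.mBytesPerFrame;
int32_t usableBytes = frames * this->mInputFormat.mBytesPerFrame; // whole frames only
// ... convert exactly `frames` frames ...
TPCircularBufferConsume(inputBufferL(), usableBytes);
TPCircularBufferConsume(inputBufferR(), usableBytes);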
3) If the output buffer is not ready to consume all of the input you just throw the input buffer away. That happens in this code.
if (availableBytes > pcmOutputBuffer->mBuffers[0].mDataByteSize)
{
...
}
else
{
printf("No Space in Buffer\n");
}
I'd be really curious whether you're seeing that print output.
Here is how I would suggest doing it. It's going to be pseudo-code-ish, since I don't have what's needed to compile and test it.
int32_t availableBytesInL = 0;
int32_t availableBytesInR = 0;
int32_t availableBytesOut = 0;
// figure out how many bytes are available in each buffer.
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytesInL);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytesInR);
TPCircularBufferHead(&pcmCircularBuffer, &availableBytesOut);
// figure out how many full frames are available
UInt32 framesInL = availableBytesInL / mInputFormat.mBytesPerFrame;
UInt32 framesInR = availableBytesInR / mInputFormat.mBytesPerFrame;
UInt32 framesOut = availableBytesOut / interleavedAudioDescription.mBytesPerFrame;
// figure out how many frames to process this time.
UInt32 frames = min(min(framesInL, framesInR), framesOut);
if (frames == 0)
return;
int32_t bytesConsumed = frames * mInputFormat.mBytesPerFrame;
struct complexInputDataProc_t data = (struct complexInputDataProc_t) {
.self = this, .sourceL = tailL, .sourceR = tailR, .byteLength = bytesConsumed };
// Do the conversion
OSStatus result = AudioConverterFillComplexBuffer(interleavedAudioConverter,
complexInputDataProc,
&data,
&frames,
pcmOutputBuffer,
NULL);
int32_t bytesProduced = frames * interleavedAudioDescription.mBytesPerFrame;
// Tell the buffers how much data we consumed during the conversion so that it can be removed
TPCircularBufferConsume(inputBufferL(), bytesConsumed);
TPCircularBufferConsume(inputBufferR(), bytesConsumed);
TPCircularBufferProduceBytes(&pcmCircularBuffer, pcmOutputBuffer->mBuffers[0].mData, bytesProduced);
Basically what I've done here is figure out up front how many frames should be processed, making sure I'm only processing as many frames as the output buffer can handle. If it were me, I'd also add some checking for buffer underruns on the output and buffer overruns on the input. Finally, I'm not exactly sure of the semantics of AudioConverterFillComplexBuffer with respect to the frame parameter that is passed in and out. It's conceivable that the number of frames out would be less or more than the number of frames in, although since you're not doing sample rate conversion that's probably not going to happen. I've attempted to account for that condition by assigning bytesProduced after the conversion.
Hope this helps. If not, you have two other clues. One is that the dropouts are periodic, and the other is that the dropouts all look to be about the same size. If you can figure out how many samples each one is, you can look for those numbers in your code.
I don't think your output buffer, pcmCircularBuffer, is big enough.
Try replacing
TPCircularBufferInit(&pcmCircularBuffer, BUFFER_SIZE);
with
TPCircularBufferInit(&pcmCircularBuffer, sizeof(pcmOutputBuf));
Even if that is the solution, I think there are some problems with your code. I don't know exactly what you're doing; I guess you're encoding MP3 (which by itself is an uphill battle on iOS, so why not use hardware AAC?), but unless you have realtime demands on both input and output, why use ring buffers at all? Also, I recommend putting units in names to visually catch frame/byte size mismatches, e.g. BUFFER_SIZE_IN_FRAMES.
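For example, something like this (the sizes here are assumed):
enum { BUFFER_SIZE_IN_FRAMES = 4096, NUM_CHANNELS = 2 };
static SInt16 pcmOutputBuf[BUFFER_SIZE_IN_FRAMES * NUM_CHANNELS];
TPCircularBufferInit(&pcmCircularBuffer, sizeof(pcmOutputBuf)); // size in bytes, not frames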
If it's not the solution, then I want to see the sine generator.

GSSendEvent - Inject Touch Event iOS

I want to inject touch event in iPhone. I get the coordinates of touch event via network socket. GSSendEvent seems to be good choice. However, it needs GSEventRecord as one of the inputs.
Does anyone know how to prepare GSEventRecord? I prepared it based on some examples but the app crashes after GSSendEvent call.
Appreciate any help.
-(void) handleMouseEventAtPoint:(CGPoint) point
{
static mach_port_t port_;
// structure of touch GSEvent
struct GSTouchEvent {
GSEventRecord record;
GSHandInfo handInfo;
} ;
struct GSTouchEvent *touchEvent = (struct GSTouchEvent *) malloc(sizeof(struct GSTouchEvent));
bzero(touchEvent, sizeof(touchEvent));
// set up GSEvent
touchEvent->record.type = kGSEventHand;
touchEvent->record.windowLocation = point;
touchEvent->record.timestamp = GSCurrentEventTimestamp();
touchEvent->record.infoSize = sizeof(GSHandInfo) + sizeof(GSPathInfo);
touchEvent->handInfo.type = getHandInfoType(0, 1);
touchEvent->handInfo.pathInfosCount = 1;
bzero(&touchEvent->handInfo.pathInfos[0], sizeof(GSPathInfo));
touchEvent->handInfo.pathInfos[0].pathIndex = 1;
touchEvent->handInfo.pathInfos[0].pathIdentity = 2;
touchEvent->handInfo.pathInfos[0].pathProximity = 1 ? 0x03 : 0x00;
touchEvent->handInfo.pathInfos[0].pathLocation = point;
port_ = GSGetPurpleSystemEventPort();
GSSendEvent((GSEventRecord*)touchEvent ,port_);
}
static GSHandInfoType getHandInfoType(int touch_before, int touch_now){
if (!touch_before) {
return (GSHandInfoType) kGSHandInfoType2TouchDown;
}
if (touch_now) {
return (GSHandInfoType) kGSHandInfoType2TouchChange;
}
return (GSHandInfoType) kGSHandInfoType2TouchFinal;
}
Only tested on iOS 6
You are actually on the right track. The problem is you have to figure out what values you should assign to these variables.
First of all, you need to import GraphicsServices.h. Then you can try the following code with the port, which you can get from How to find the purple port for the front most application in iOS 5 and above?
I am not an iOS expert and Apple doesn't provide any documentation so I can't explain much what's going on here. (It happens to work fine for me.)
Anyway, you can play with it using xcode debug mode to see what happens under the hood.
struct GSTouchEvent * touchEvent = (struct GSTouchEvent*) &gsTouchEvent;
bzero(touchEvent, sizeof(*touchEvent)); // zero the whole struct, not just the pointer size
touchEvent->record.type = kGSEventHand;
touchEvent->record.subtype = kGSEventSubTypeUnknown;
touchEvent->record.location = point;
touchEvent->record.windowLocation = point;
touchEvent->record.infoSize = sizeof(GSHandInfo) + sizeof(GSPathInfo);
touchEvent->record.timestamp = GSCurrentEventTimestamp();
touchEvent->record.window = winRef;
touchEvent->record.senderPID = 919;
bzero(&touchEvent->handInfo, sizeof(GSHandInfo));
bzero(&touchEvent->handInfo.pathInfos[0], sizeof(GSPathInfo));
GSHandInfo touchEventHandInfo;
touchEventHandInfo._0x5C = 0;
touchEventHandInfo.deltaX = 0;
touchEventHandInfo.deltaY = 0;
touchEventHandInfo.height = 0;
touchEventHandInfo.width = 0;
touchEvent->handInfo = touchEventHandInfo;
touchEvent->handInfo.type = handInfoType;
touchEvent->handInfo.deltaX = 1;
touchEvent->handInfo.deltaY = 1;
touchEvent->handInfo.pathInfosCount = 0;
touchEvent->handInfo.pathInfos[0].pathIndex = 1;
touchEvent->handInfo.pathInfos[0].pathIdentity = 2;
touchEvent->handInfo.pathInfos[0].pathProximity = (handInfoType == kGSHandInfoTypeTouchDown || handInfoType == kGSHandInfoTypeTouchDragged || handInfoType == kGSHandInfoTypeTouchMoved) ? 0x03: 0x00;
touchEvent->handInfo.x52 = 1;
touchEvent->handInfo.pathInfos[0].pathLocation = point;
touchEvent->handInfo.pathInfos[0].pathWindow = winRef;
GSEventRecord* record = (GSEventRecord*) touchEvent;
record->timestamp = GSCurrentEventTimestamp();
GSSendEvent(record, port);
To use this code, you have to call it multiple times. For one tap, there are touch-down, touch-drag and then touch-up.
Also note that pathProximity is 0 when touch is up.
As far as I remember, the winRef doesn't matter.
Hope this helps.
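For instance, a tap might be driven like this; sendTouch is a hypothetical wrapper around the event-building code above, and kGSHandInfoTypeTouchUp is assumed to exist alongside the constants used earlier:
// Hypothetical helper: void sendTouch(CGPoint point, GSHandInfoType type);
CGPoint p = CGPointMake(160.0f, 240.0f);
sendTouch(p, kGSHandInfoTypeTouchDown); // finger down
usleep(50000);                          // brief hold
sendTouch(p, kGSHandInfoTypeTouchUp);   // finger up (pathProximity becomes 0x00)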
Edit: From Bugivore's comment, the problem is:
The way I allocated touchEvent via malloc was wrong. It should be done as EntryLevelDev showed - "static uint8_t handJob[sizeof(GSEventRecord) + sizeof(GSHandInfo) + sizeof(GSPathInfo)];"
The answer from EntryLevelDev is correct, but some of the values are not that important. I got the code below from somewhere else and did some trial and error; here is my code (it worked up to the latest iOS 6).
Is anyone working on this for iOS 7 now? I could not get it to work; see my post here: With GSCopyPurpleNamedPort(appId) in GraphicsServices deprecated in iOS 7, what is the alternative approach?
static int prev_click = 0;
if (!click && !prev_click)
{
//which should never enter
NSLog(#"***error, postHandEvent cancel");
return;
}
CGPoint location = CGPointMake(x, y);
struct GSTouchEvent {
GSEventRecord record;
GSHandInfo handInfo;
} * event = (struct GSTouchEvent*) &touchEvent;
bzero(touchEvent, sizeof(touchEvent));
event->record.type = kGSEventHand;
event->record.windowLocation = location;
event->record.timestamp = GSCurrentEventTimestamp();
//NSLog(#"Timestamp GSCurrentEventTimestamp: %llu",GSCurrentEventTimestamp());
event->record.infoSize = sizeof(GSHandInfo) + sizeof(GSPathInfo);
event->handInfo.type = getHandInfoType(prev_click, click);
//must have the following line
event->handInfo.x52 = 1;
//below line is for ios4
//event->handInfo.pathInfosCount = 1;
bzero(&event->handInfo.pathInfos[0], sizeof(GSPathInfo));
event->handInfo.pathInfos[0].pathIndex = 2;
//following 2 lines, they are by default
event->handInfo.pathInfos[0].pathMajorRadius = 1.0;
event->handInfo.pathInfos[0].pathPressure = 1.0;
//event->handInfo.pathInfos[0].pathIdentity = 2;
event->handInfo.pathInfos[0].pathProximity = click ? 0x03 : 0x00;
//event->handInfo.pathInfos[0].pathProximity = action;
event->handInfo.pathInfos[0].pathLocation = location;
// send GSEvent
GSEventRecord *event1 = (GSEventRecord*) event;
sendGSEvent(event1);

EXC_BAD_ACCESS on iOS 6 but not 5 when changing bytes of a CFDataRef

I have an application that applies various filters to an image. It works great on iOS 5 but crashes on 6. Below is a sample of where it's crashing:
CGImageRef inImage = self.CGImage;
CFDataRef m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
UInt8 * m_PixelBuf = (UInt8 *) CFDataGetBytePtr(m_DataRef);
int length = CFDataGetLength(m_DataRef);
for (int i=0; i<length; i+=4)
{
if(filter == filterCurve){
int r = i;
int g = i+1;
int b = i+2;
int red = m_PixelBuf[r];
int green = m_PixelBuf[g];
int blue = m_PixelBuf[b];
m_PixelBuf[r] = SAFECOLOR(red); // <==== EXC_BAD_ACCESS (code = 2)
m_PixelBuf[g] = SAFECOLOR(green);
m_PixelBuf[b] = SAFECOLOR(blue);
}
}
Notice the bad access point when I try to assign a value back to m_PixelBuf. Does anybody have any idea why this is occurring? What in iOS 6 would cause this?
This solves the problem: http://www.iphonedevsdk.com/forum/iphone-sdk-development/108072-exc_bad_access-in-ios-6-but-not-in-ios-5.html
In iOS 6 you need to use CFDataCreateMutableCopy() (instead of CGDataProviderCopyData()), followed by CFDataGetMutableBytePtr() (instead of CFDataGetBytePtr()) if you're going to be manipulating the data's bytes directly.
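A sketch of that pattern applied to the question's code (error checking omitted):
CGImageRef inImage = self.CGImage;
CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
CFMutableDataRef mutableRef = CFDataCreateMutableCopy(kCFAllocatorDefault, 0, dataRef);
CFRelease(dataRef);
UInt8 *m_PixelBuf = CFDataGetMutableBytePtr(mutableRef); // writable bytes
CFIndex length = CFDataGetLength(mutableRef);
// ... run the same per-pixel loop as before over m_PixelBuf ...
// then build the output image from mutableRef and CFRelease(mutableRef)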
This is the URL where you can find the updated class that works with iOS 6: https://github.com/kypselia/ios-image-filters/blob/6ef9a937a931f32dd0b7b5e5bbdca6cce2f690dc/Classes/ImageFilter.m
