Getting unexpected pixels from raw image - iOS

I am trying to read the R, G and B values of some pixels on a game scene. For this I have created a bitmap mask image in black and white.
This image is loaded once in Init(); afterwards, on every sprite movement I check the mask to see whether the target position is really an available spot.
The thing is that I am getting unexpected data for R, G and B. I tried two bitmap images (8-bit and 24-bit). They both contain only black and white pixels, but the r, g and b values keep telling me these pixels are some other color. I think "no_of_channels" should be 3, as I am not working with the alpha channel, right? Any ideas?
App.h
// background mask
UIImage* bgmask;
CGImageRef aCGImageRef;
CFDataRef rawData;
UInt8 * bgmaskbuf;
Init():
// BG Mask
bgmask = [UIImage imageNamed:@"mask.bmp"];
aCGImageRef = bgmask.CGImage;
rawData = CGDataProviderCopyData(CGImageGetDataProvider(aCGImageRef));
bgmaskbuf = (UInt8 *) CFDataGetBytePtr(rawData);
Method to check Pixel's data:
-(BOOL) checkPixel: (CGFloat)x : (CGFloat)y{
BOOL result = FALSE;
//int length = CFDataGetLength(rawData);
//for(int i=0; i<length; i+=3)
//{
// int r = bgmaskbuf[i];
// int g = bgmaskbuf[i+1];
// int b = bgmaskbuf[i+2];
// NSLog(#"Ptr: %d, R: %d, G: %d, B: %d", i, r, g, b);
//}
int no_of_channels = 3;
int image_width = SCREEN_WIDTH();
unsigned long row_stride = image_width * no_of_channels; // 960 bytes in this case
unsigned long x_offset = x * no_of_channels;
/* assuming RGB byte order (as opposed to BGR) */
unsigned long next_pixel = row_stride * (int)y + x_offset;
int r = bgmaskbuf[next_pixel];
int g = bgmaskbuf[next_pixel + 1];
int b = bgmaskbuf[next_pixel + 2];
NSLog(#"Ptr: %d, R: %d, G: %d, B: %d",next_pixel r, g, b);
if((r==0)&&(g==0)&&(b==0)){
result = TRUE;
}
return result;
}
How to fix this?
Thanks.
Following up on this question, here's what I've done to try to solve it:
Inside the pixel check, I loop over every pixel:
int length = CFDataGetLength(rawData);
for(int i=0; i<length; i+=3)
{
int r = bgmaskbuf[i];
int g = bgmaskbuf[i+1];
int b = bgmaskbuf[i+2];
NSLog(#"Ptr: %d, R: %d, G: %d, B: %d", i, r, g, b);
}
Length is 786432, which makes sense (1024 * 768 pixels). I can see/read all of the pixels, in total, 2359296 bytes (R + G + B).
Now, what is weird is that, when dealing with the user's touches and movements, a data buffer index such as 793941 gives me EXC_BAD_ACCESS at address 0x13200555.
This happens when I try to read it like:
unsigned long next_pixel = row_stride * (int)y + x_offset;
int r = bgmaskbuf[next_pixel];
int g = bgmaskbuf[next_pixel + 1];
int b = bgmaskbuf[next_pixel + 2];
bgmaskbuf starts at 0x13240000.
So, address range from 0x13240000 through 0x13480000 should be readable.
But I have just read this same address a while ago!

You will need to check some values. The row stride may not actually be just the image width times the number of channels: rows are often padded so that each one starts on an aligned boundary. You should be able to get that information from the image itself. As a quick sanity check, see whether checkPixel returns correct values on the top and bottom rows (some images are also stored in memory upside down).
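For reference, the actual layout can be queried from the CGImage rather than assumed; a minimal sketch using the standard CGImage accessors (plain C, the function name is just illustrative):
#include <CoreGraphics/CoreGraphics.h>
#include <stdio.h>

// Log the real layout of a CGImage instead of assuming 3 bytes per pixel
// and a stride of width * channels. Row padding and an alpha channel are both common.
static void LogImageLayout(CGImageRef image)
{
    size_t width            = CGImageGetWidth(image);
    size_t height           = CGImageGetHeight(image);
    size_t bitsPerComponent = CGImageGetBitsPerComponent(image);
    size_t bitsPerPixel     = CGImageGetBitsPerPixel(image);
    size_t bytesPerRow      = CGImageGetBytesPerRow(image); // use this as the row stride when indexing
    printf("%zu x %zu, %zu bits/component, %zu bits/pixel, %zu bytes/row\n",
           width, height, bitsPerComponent, bitsPerPixel, bytesPerRow);
}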

What really worked for me:
I saved the bitmap image as 1-bit only (the simplest way to do this was MS Paint; I couldn't find a Mac app for it).
The generated mask was indeed rotated 180 degrees from the screen image.
With this I use only 1 channel:
-(BOOL) checkPixel: (CGFloat)x : (CGFloat)y{
BOOL result = FALSE;
int no_of_channels = 1;
int image_width = SCREEN_WIDTH();
unsigned long row_stride = image_width * no_of_channels; // 960 bytes in this case
unsigned long x_offset = x * no_of_channels;
unsigned long next_pixel = row_stride * (int)y + x_offset;
int pixie = bgmaskbuf[next_pixel];
if(pixie==0){
result = TRUE;
}
return result;
}
Instead of rotating the mask in code, I figured image editing was easier =)
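That said, the 180 degree rotation could also be handled at lookup time. A rough sketch, assuming the same single-channel buffer, a mask the size of the screen, and a SCREEN_HEIGHT() macro next to SCREEN_WIDTH() (that macro is my assumption, not from the original code):
// Read the mask as if it were rotated 180 degrees:
// screen pixel (x, y) maps to mask pixel (width-1-x, height-1-y).
unsigned long row_stride = SCREEN_WIDTH() * no_of_channels;
unsigned long flipped_x = (SCREEN_WIDTH() - 1) - (unsigned long)x;
unsigned long flipped_y = (SCREEN_HEIGHT() - 1) - (unsigned long)y;
unsigned long next_pixel = row_stride * flipped_y + flipped_x * no_of_channels;
int pixie = bgmaskbuf[next_pixel];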
Thanks to you all!

Related

Reading pixels from UIImage results in BAD_ACCESS

I wrote this code that is supposed to NSLog all non-white pixels as a test before going further.
This is my code:
UIImage *image = [UIImage imageNamed:@"image"];
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
if(!pixelData) {
return;
}
const UInt8 *buffer = CFDataGetBytePtr(pixelData);
CFRelease(pixelData);
for(int y = 0; y < image.size.height; y++) {
for(int x = 0; x < image.size.width; x++) {
int pixelInfo = ((image.size.width * y) + x) * 4;
UInt8 red = buffer[pixelInfo];
UInt8 green = buffer[(pixelInfo + 1)];
UInt8 blue = buffer[pixelInfo + 2];
UInt8 alpha = buffer[pixelInfo + 3];
if(red != 0xff && green != 0xff && blue != 0xff){
NSLog(#"R: %hhu, G: %hhu, B: %hhu, A: %hhu", red, green, blue, alpha);
}
}
}
For some reason, when I run the app, it iterates for a moment and then throws a BAD_ACCESS error on the line UInt8 red = buffer[pixelInfo];. What could be the issue?
Is this the fastest method to iterate through pixels?
I think the problem is a buffer size error.
buffer has the size of width x height, while pixelInfo uses a multiplier of 4.
I think you need an array four times bigger, and to save each pixel color from buffer into that new array. In any case, you have to be careful not to read past the end of the buffer.
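One way to guard against reading past the end is to bound-check the computed index against CFDataGetLength and to take the stride from the image itself instead of assuming width * 4. A rough sketch (the helper and its name are mine, not from the question):
#include <CoreGraphics/CoreGraphics.h>
#include <stdint.h>
#include <stdbool.h>

// Hypothetical helper: fills rgba[4] only if (x, y) lies inside the pixel data.
// CGImageGetBytesPerRow accounts for any row padding, unlike width * 4.
static bool ReadPixelRGBA(CGImageRef image, CFDataRef pixelData,
                          size_t x, size_t y, uint8_t rgba[4])
{
    const uint8_t *buffer = CFDataGetBytePtr(pixelData);
    size_t bytesPerRow    = CGImageGetBytesPerRow(image);
    size_t bytesPerPixel  = CGImageGetBitsPerPixel(image) / 8;
    size_t offset         = y * bytesPerRow + x * bytesPerPixel;

    if (offset + bytesPerPixel > (size_t)CFDataGetLength(pixelData))
        return false; // would read past the end of the buffer

    for (size_t c = 0; c < 4 && c < bytesPerPixel; c++)
        rgba[c] = buffer[offset + c];
    return true;
}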

Copy cv::Mat into CMSampleBufferRef

How can I copy cv::Mat data back into the sampleBuffer?
My scenario is as follows:
I create a cv::Mat from pixelBuffer for landmark detection and add the landmarks to cv::Mat image data. I'd like to copy this cv::Mat into the sample buffer to be shown with the landmark.
Is this possible?
I achieved this with dlib, but need to know how to do it with cv::Mat:
char *baseBuffer = (char *)CVPixelBufferGetBaseAddress(imageBuffer);
img.reset();
long position = 0;
while (img.move_next()) {
dlib::bgr_pixel& pixel = img.element();
long bufferLocation = position * 4; //(row * width + column) * 4;
char b = baseBuffer[bufferLocation];
char g = baseBuffer[bufferLocation + 1];
char r = baseBuffer[bufferLocation + 2];
dlib::bgr_pixel newpixel(b, g, r);
pixel = newpixel;
position++;
}
I am answering my own question.
First, you need to access the pixel data of the cv::Mat image; I followed this great solution.
Then you need to copy the pixels into the buffer, starting from the base buffer. The following code should help you achieve this:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
char *baseBuffer = (char *)CVPixelBufferGetBaseAddress(imageBuffer);
long position = 0;
uint8_t* pixelPtr = (uint8_t*)targetImage.data;
int cn = targetImage.channels();
cv::Scalar_<uint8_t> rgbPixel;
for(int i = 0; i < targetImage.rows; i++)
{
for(int j = 0; j < targetImage.cols; j++)
{
long bufferLocation = position * 4;
rgbPixel.val[0] = pixelPtr[i*targetImage.cols*cn + j*cn + 0]; // B
rgbPixel.val[1] = pixelPtr[i*targetImage.cols*cn + j*cn + 1]; // G
rgbPixel.val[2] = pixelPtr[i*targetImage.cols*cn + j*cn + 2]; // R
baseBuffer[bufferLocation] = rgbPixel.val[2];
baseBuffer[bufferLocation + 1] = rgbPixel.val[1];
baseBuffer[bufferLocation + 2] = rgbPixel.val[0];
position++;
}
}
Some things to take note of:
Make sure you call CVPixelBufferLockBaseAddress and CVPixelBufferUnlockBaseAddress before and after the operation.
I am doing this on CV_8UC3; you might want to check your cv::Mat type.
I haven't done the performance analysis but I am getting smooth output with this.
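For reference, the lock/unlock mentioned above wraps the copy roughly like this (a sketch; the 0 flag means read-write access):
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);   // lock before touching the base address
char *baseBuffer = (char *)CVPixelBufferGetBaseAddress(imageBuffer);
// ... copy the cv::Mat pixels into baseBuffer as shown above ...
CVPixelBufferUnlockBaseAddress(imageBuffer, 0); // unlock when done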

How to put B, G and R component values straight into a pixel of cv::Mat? [duplicate]

I have searched the internet and Stack Overflow thoroughly, but I haven't found an answer to my question:
How can I get/set (both) the RGB value of a certain pixel (given by x, y coordinates) in OpenCV? What's important: I'm writing in C++, and the image is stored in a cv::Mat variable. I know there is an IplImage() operator, but IplImage is not very comfortable to use; as far as I know it comes from the C API.
Yes, I'm aware that there was already this Pixel access in OpenCV 2.2 thread, but it was only about black and white bitmaps.
EDIT:
Thank you very much for all your answers. I see there are many ways to get/set the RGB value of a pixel. I got one more idea from my close friend (thanks, Benny!). It's very simple and effective. I think it's a matter of taste which one you choose.
Mat image;
(...)
Point3_<uchar>* p = image.ptr<Point3_<uchar> >(y,x);
And then you can read/write RGB values with:
p->x //B
p->y //G
p->z //R
Try the following:
cv::Mat image = ...do some stuff...;
image.at<cv::Vec3b>(y,x); gives you the RGB (it might be ordered as BGR) vector of type cv::Vec3b
image.at<cv::Vec3b>(y,x)[0] = newval[0];
image.at<cv::Vec3b>(y,x)[1] = newval[1];
image.at<cv::Vec3b>(y,x)[2] = newval[2];
The low-level way would be to access the matrix data directly. In an RGB image (which I believe OpenCV typically stores as BGR), and assuming your cv::Mat variable is called frame, you could get the blue value at location (x, y) (from the top left) this way:
frame.data[frame.channels()*(frame.cols*y + x)];
Likewise, to get B, G, and R:
uchar b = frame.data[frame.channels()*(frame.cols*y + x) + 0];
uchar g = frame.data[frame.channels()*(frame.cols*y + x) + 1];
uchar r = frame.data[frame.channels()*(frame.cols*y + x) + 2];
Note that this code assumes the stride is equal to the width of the image.
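If the Mat might be padded or be a non-continuous view (for example a ROI of a larger image), a safer variant of the same idea (a sketch) uses frame.step, the real number of bytes per row, instead of frame.cols * channels:
uchar b = frame.data[frame.step * y + frame.channels() * x + 0];
uchar g = frame.data[frame.step * y + frame.channels() * x + 1];
uchar r = frame.data[frame.step * y + frame.channels() * x + 2];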
A piece of code is easier for people who have this problem, so I am sharing my code; you can use it directly. Please note that OpenCV stores pixels as BGR.
cv::Mat vImage_;
if(src_)
{
cv::Vec3f vec_;
for(int i = 0; i < vHeight_; i++)
for(int j = 0; j < vWidth_; j++)
{
vec_ = cv::Vec3f((*src_)[0]/255.0, (*src_)[1]/255.0, (*src_)[2]/255.0);//Please note that OpenCV store pixels as BGR.
vImage_.at<cv::Vec3f>(vHeight_-1-i, j) = vec_;
++src_;
}
}
if(! vImage_.data ) // Check for invalid input
printf("failed to read image by OpenCV.");
else
{
cv::namedWindow( windowName_, CV_WINDOW_AUTOSIZE);
cv::imshow( windowName_, vImage_); // Show the image.
}
The current version allows the cv::Mat::at function to handle 3 dimensions. So for a Mat object m, m.at<uchar>(0,0,0) should work.
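Note that the three-argument at is meant for a Mat that genuinely has three dimensions, not for a 2-D, 3-channel one (for the latter, at<cv::Vec3b>(y,x)[c] as shown above is the usual form). A small sketch of creating and indexing a 3-D Mat (the sizes are made up):
#include <opencv2/core/core.hpp>

int sizes[] = {480, 640, 3};               // the three dimensions of the Mat
cv::Mat m(3, sizes, CV_8U, cv::Scalar(0)); // 3-dimensional, single channel
uchar v = m.at<uchar>(0, 0, 0);            // element access with three indices
m.at<uchar>(10, 20, 2) = 255;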
uchar *value = img2.data; // pointer to the first pixel; the data is one contiguous array of all channel values
int r = 2;
for (size_t i = 0; i < img2.cols* (img2.rows * img2.channels()); i++)
{
if (r > 2) r = 0;
if (r == 0) value[i] = 0;
if (r == 1)value[i] = 0;
if (r == 2)value[i] = 255;
r++;
}
const double pi = boost::math::constants::pi<double>();
cv::Mat distance2ellipse(cv::Mat image, cv::RotatedRect ellipse){
float distance = 2.0f;
float angle = ellipse.angle;
cv::Point ellipse_center = ellipse.center;
float major_axis = ellipse.size.width/2;
float minor_axis = ellipse.size.height/2;
cv::Point pixel;
float a,b,c,d;
for(int x = 0; x < image.cols; x++)
{
for(int y = 0; y < image.rows; y++)
{
auto u = cos(angle*pi/180)*(x-ellipse_center.x) + sin(angle*pi/180)*(y-ellipse_center.y);
auto v = -sin(angle*pi/180)*(x-ellipse_center.x) + cos(angle*pi/180)*(y-ellipse_center.y);
distance = (u/major_axis)*(u/major_axis) + (v/minor_axis)*(v/minor_axis);
if(distance<=1)
{
image.at<cv::Vec3b>(y,x)[1] = 255;
}
}
}
return image;
}

OpenCL: Access the proper index by using get_global_id()

Hi,
I am coding in OpenCL.
I am converting a C function that loops over a 2D array starting from i=1 and j=1. Please find the code below.
cv::Mat input; //Input :having some data in it ..
//Image input size is :input.rows=288 ,input.cols =640
cv::Mat output(input.rows-2,input.cols-2,CV_32F); //Output buffer
//Image output size is :output.rows=286 ,output.cols =638
This is the code which I want to convert to OpenCL:
for(int i=1;i<output.rows-1;i++)
{
for(int j=1;j<output.cols-1;j++)
{
float xVal = input.at<uchar>(i-1,j-1)-input.at<uchar>(i-1,j+1)+ 2*(input.at<uchar>(i,j-1)-input.at<uchar>(i,j+1))+input.at<uchar>(i+1,j-1) - input.at<uchar>(i+1,j+1);
float yVal = input.at<uchar>(i-1,j-1) - input.at<uchar>(i+1,j-1)+ 2*(input.at<uchar>(i-1,j) - input.at<uchar>(i+1,j))+input.at<uchar>(i-1,j+1)-input.at<uchar>(i+1,j+1);
output.at<float>(i-1,j-1) = xVal*xVal+yVal*yVal;
}
}
...
Host code :
//Input Image size is :input.rows=288 ,input.cols =640
//Output Image size is :output.rows=286 ,output.cols =638
OclStr->global_work_size[0] =(input.cols);
OclStr->global_work_size[1] =(input.rows);
size_t outBufSize = (output.rows) * (output.cols) * 4;//4 as I am copying all 4 uchar values into one float variable space
cl_mem cl_input_buffer = clCreateBuffer(
OclStr->context, CL_MEM_READ_ONLY | CL_MEM_USE_HOST_PTR ,
(input.rows) * (input.cols),
static_cast<void *>(input.data), &OclStr->returnstatus);
cl_mem cl_output_buffer = clCreateBuffer(
OclStr->context, CL_MEM_WRITE_ONLY| CL_MEM_USE_HOST_PTR ,
(output.rows) * (output.cols) * sizeof(float),
static_cast<void *>(output.data), &OclStr->returnstatus);
OclStr->returnstatus = clSetKernelArg(OclStr->objkernel, 0, sizeof(cl_mem), (void *)&cl_input_buffer);
OclStr->returnstatus = clSetKernelArg(OclStr->objkernel, 1, sizeof(cl_mem), (void *)&cl_output_buffer);
OclStr->returnstatus = clEnqueueNDRangeKernel(
OclStr->command_queue,
OclStr->objkernel,
2,
NULL,
OclStr->global_work_size,
NULL,
0,
NULL,
NULL
);
clEnqueueMapBuffer(OclStr->command_queue, cl_output_buffer, true, CL_MAP_READ, 0, outBufSize, 0, NULL, NULL, &OclStr->returnstatus);
kernel Code :
__kernel void Sobel_uchar (__global uchar *pSrc, __global float *pDstImage)
{
const uint cols = get_global_id(0)+1;
const uint rows = get_global_id(1)+1;
const uint width= get_global_size(0);
uchar Opsoble[8];
Opsoble[0] = pSrc[(cols-1)+((rows-1)*width)];
Opsoble[1] = pSrc[(cols+1)+((rows-1)*width)];
Opsoble[2] = pSrc[(cols-1)+((rows+0)*width)];
Opsoble[3] = pSrc[(cols+1)+((rows+0)*width)];
Opsoble[4] = pSrc[(cols-1)+((rows+1)*width)];
Opsoble[5] = pSrc[(cols+1)+((rows+1)*width)];
Opsoble[6] = pSrc[(cols+0)+((rows-1)*width)];
Opsoble[7] = pSrc[(cols+0)+((rows+1)*width)];
float gx = Opsoble[0]-Opsoble[1]+2*(Opsoble[2]-Opsoble[3])+Opsoble[4]-Opsoble[5];
float gy = Opsoble[0]-Opsoble[4]+2*(Opsoble[6]-Opsoble[7])+Opsoble[1]-Opsoble[5];
pDstImage[(cols-1)+(rows-1)*width] = gx*gx + gy*gy;
}
Here I am not able to get the expected output.
I have some questions:
My for loop starts from i=1 instead of zero, so how can I get the proper index by using get_global_id() in the x and y directions?
What is going wrong in my kernel code above? :(
I suspect there is a problem with the buffer stride, but I have not been able to pin it down even after breaking my head over it for a whole day :(
I have observed that with the logic below, the output skips one or two frames after a sequence of some 7/8 frames.
I have added a screenshot of my output compared with the reference output.
My logic above does only partial Sobel filtering on my input. I changed the width as:
const uint width = get_global_size(0)+1;
Your suggestions are most welcome !!!
It looks like you may be fetching values in (y, x) order in your OpenCL version. Also, you need to add 1 to the global id to replicate your for loops starting from 1 rather than 0.
I don't know why there is an unused iOffset variable; maybe your bug is related to this? I removed it in my version.
Does this kernel work better for you?
__kernel void simple(__global uchar *pSrc, __global float *pDstImage)
{
const uint i = get_global_id(0) +1;
const uint j = get_global_id(1) +1;
const uint width = get_global_size(0) +2;
uchar Opsoble[8];
Opsoble[0] = pSrc[(i-1) + (j - 1)*width];
Opsoble[1] = pSrc[(i-1) + (j + 1)*width];
Opsoble[2] = pSrc[i + (j-1)*width];
Opsoble[3] = pSrc[i + (j+1)*width];
Opsoble[4] = pSrc[(i+1) + (j - 1)*width];
Opsoble[5] = pSrc[(i+1) + (j + 1)*width];
Opsoble[6] = pSrc[(i-1) + (j)*width];
Opsoble[7] = pSrc[(i+1) + (j)*width];
float gx = Opsoble[0]-Opsoble[1]+2*(Opsoble[2]-Opsoble[3])+Opsoble[4]-Opsoble[5];
float gy = Opsoble[0]-Opsoble[4]+2*(Opsoble[6]-Opsoble[7])+Opsoble[1]-Opsoble[5];
pDstImage[(i-1) + (j-1)*width] = gx*gx + gy*gy ;
}
I am a bit apprehensive about posting an answer suggesting optimizations to your kernel, seeing as the original output has not been reproduced exactly as of yet. There is a major improvement available to be made for problems related to image processing/filtering.
Using local memory will help you out by reducing the number of global reads by a factor of eight, as well as grouping the global writes together for potential gains with the single write-per-pixel output.
The kernel below reads a block of up to 34x34 from pSrc, and outputs a 32x32(max) area of the pDstImage. I hope the comments in the code are enough to guide you in using the kernel. I have not been able to give this a complete test, so there could be changes required. Any comments are appreciated as well.
__kernel void sobel_uchar_wlocal (__global uchar *pSrc, __global float *pDstImage, uint2 dimDstImage)
{
//call this kernel with a 1-dimensional work group size: 32x1
//each work group calculates a 32x32 region of the output with 32 work items
const uint wid = get_local_id(0);
const uint wid_1 = wid+1; // corrected for the calculation step
const uint2 gid = (uint2)(get_group_id(0),get_group_id(1));
const uint localDim = get_local_size(0);
const uint2 globalTopLeft = (uint2)(localDim * gid.x, localDim * gid.y); //position in pDstImage this group writes to
const uint srcWidth = dimDstImage.x + 2; //row stride of pSrc: the input is two columns wider than the output
//dimLocalBuff is clamped for the right and bottom edges of the image, where the work group may run over the border
uint2 dimLocalBuff = (uint2)(localDim,localDim);
if(dimDstImage.x - globalTopLeft.x < dimLocalBuff.x){
dimLocalBuff.x = dimDstImage.x - globalTopLeft.x;
}
if(dimDstImage.y - globalTopLeft.y < dimLocalBuff.y){
dimLocalBuff.y = dimDstImage.y - globalTopLeft.y;
}
int i,j;
//save the (up to) 34x34 source region into local memory; srcBuff is indexed [column][row]
__local uchar srcBuff[34][34]; //34^2 uchar = 1156 bytes
for(j=0;j<(int)dimLocalBuff.y+2;j++){
for(i=(int)wid;i<(int)dimLocalBuff.x+2;i+=localDim){
srcBuff[i][j] = pSrc[(globalTopLeft.x+i) + (globalTopLeft.y+j)*srcWidth];
}
}
barrier(CLK_LOCAL_MEM_FENCE); //every work item must finish loading before the compute step
//compute output and store locally
__local float dstBuff[32][32]; //32^2 float = 4096 bytes
if(wid < dimLocalBuff.x){
for(i=0;i<(int)dimLocalBuff.y;i++){
float gx = srcBuff[wid_1-1][i]-srcBuff[wid_1+1][i]+2*(srcBuff[wid_1-1][i+1]-srcBuff[wid_1+1][i+1])+srcBuff[wid_1-1][i+2]-srcBuff[wid_1+1][i+2];
float gy = srcBuff[wid_1-1][i]-srcBuff[wid_1-1][i+2]+2*(srcBuff[wid_1][i]-srcBuff[wid_1][i+2])+srcBuff[wid_1+1][i]-srcBuff[wid_1+1][i+2];
dstBuff[wid][i] = gx*gx + gy*gy;
}
}
barrier(CLK_LOCAL_MEM_FENCE);
//copy results to the output image
for(j=0;j<(int)dimLocalBuff.y;j++){
for(i=(int)wid;i<(int)dimLocalBuff.x;i+=localDim){
pDstImage[(globalTopLeft.x+i) + (globalTopLeft.y+j)*dimDstImage.x] = dstBuff[i][j];
}
}
}
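On the host side, launching this kernel would look roughly like the sketch below (my assumptions: a local size of 32x1 to match the comment in the kernel, global sizes rounded up to whole 32x32 output tiles, and the queue/kernel/output names taken from the question's host code):
size_t local_work_size[2] = { 32, 1 };
size_t groupsX = (output.cols + 31) / 32; // round up to whole tiles
size_t groupsY = (output.rows + 31) / 32;
size_t global_work_size[2] = { groupsX * 32, groupsY * 1 };

cl_uint2 dimDstImage;
dimDstImage.s[0] = (cl_uint)output.cols;
dimDstImage.s[1] = (cl_uint)output.rows;
clSetKernelArg(OclStr->objkernel, 2, sizeof(cl_uint2), &dimDstImage);

clEnqueueNDRangeKernel(OclStr->command_queue, OclStr->objkernel, 2, NULL,
global_work_size, local_work_size, 0, NULL, NULL);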

Converting a 24-bit PNG image to an array of GLubytes

I'd like to do the following:
Read RGB color values from a 24 bit PNG image
Average the RGB values and store them into an array of GLubytes.
I have provided my function that I was hoping would perform these 2 steps.
My function returns an array of GLubytes; however, all elements have a value of 0.
So I'm guessing I'm reading the image data incorrectly.
What am I doing wrong in reading the image? (Perhaps my format is incorrect.)
Here is my function:
+ (GLubyte *) LoadPhotoAveragedIndexPNG:(UIImage *)image numPixelComponents: (int)numComponents
{
// Load an image and return byte array.
CGImageRef textureImage = image.CGImage;
if (textureImage == nil)
{
NSLog(#"LoadPhotoIndexPNG: Failed to load texture image");
return nil;
}
NSInteger texWidth = CGImageGetWidth(textureImage);
NSInteger texHeight = CGImageGetHeight(textureImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
GLubyte *indexedData = (GLubyte *)malloc(texWidth * texHeight);
GLubyte *rawData = (GLubyte *)malloc(texWidth * texHeight * numComponents);
CGContextRef textureContext = CGBitmapContextCreate(
rawData,
texWidth,
texHeight,
8,
texWidth * numComponents,
colorSpace,
kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(textureContext,
CGRectMake(0.0, 0.0, (float)texWidth, (float)texHeight),
textureImage);
CGContextRelease(textureContext);
int rawDataLength = texWidth * texHeight * numComponents;
for (int i = 0, j = 0; i < rawDataLength; i += numComponents)
{
GLubyte b = rawData[i];
GLubyte g = rawData[i + 1];
GLubyte r = rawData[i + 2];
indexedData[j++] = (r + g + b) / 3;
}
return indexedData;
}
Here is the test image I'm loading (RGB colorspace in PNG format):
Do check with some logging whether the parameters b, g and r hold normal values in the last for loop. Where you made a mistake is indexedData[j++] = (r + g + b) / 3;: those 3 variables are 1 byte each, and you cannot sum them up like that. Use a larger integer type, cast them, and cast the result back when storing it in the array. (You are most likely getting an overflow.)
Apart from your original problem there's a major problem here (maybe even related)
for (int i = 0, j = 0; i < rawDataLength; i += numComponents)
{
GLubyte b = rawData[i];
GLubyte g = rawData[i + 1];
GLubyte r = rawData[i + 2];
indexedData[j++] = (r + g + b) / 3;
}
Namely the expression
(r + g + b)
This expression will be performed with GLubyte-sized integer operations. If the sum of r + g + b is larger than the type GLubyte can hold, it will overflow. Whenever you're processing data through intermediary variables (good style!), choose variable types large enough to hold the largest value you can encounter. Another method would be casting the expression, like
indexedData[j++] = ((uint16_t)r + (uint16_t)g + (uint16_t)b) / 3;
But that's cumbersome to read. Also, if you're processing integers of a known size, use the types found in stdint.h; you know that you're expecting 8 bits per channel. You can also use the comma operator in the for increment clause:
uint8_t *indexedData = (GLubyte *)malloc(texWidth * texHeight);
/* ... */
for (int i = 0, j = 0; i < rawDataLength; i += numComponents, j++)
{
uint16_t b = rawData[i];
uint16_t g = rawData[i + 1];
uint16_t r = rawData[i + 2];
indexedData[j] = (r + g + b) / 3;
}
