I am using the following to turn image data from the camera into a UIImage. In order to save some time and memory, I'd like to crop the image data before I turn it into a UIImage.
Ideally I pass in a cropRect and get back a cropped UIImage. However, since the camera output could be sized differently depending on whether I am using a photo or video preset, I may not know what dimensions to use for the cropRect. I could use a cropRect, similar to the focus or exposure points, whose origin is a CGPoint between (0,0) and (1,1), and do the same for the CGSize of the cropRect. Or I could get the dimensions of the sampleBuffer before I call the following, and pass in an appropriate cropRect in pixels. I'd like some advice as to which I should use.
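If I go the normalized route, the conversion to pixel coordinates seems straightforward once the buffer dimensions are known. A minimal sketch of what I have in mind (the helper name is mine):

// Hypothetical helper: map a rect with components in [0,1] to pixel
// coordinates for a given pixel buffer.
static CGRect pixelRectFromNormalizedRect(CGRect normalizedRect, CVPixelBufferRef pixelBuffer)
{
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    return CGRectMake(normalizedRect.origin.x * width,
                      normalizedRect.origin.y * height,
                      normalizedRect.size.width * width,
                      normalizedRect.size.height * height);
}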
I also would like to know how best to crop, so that I don't have to create an entire UIImage and then crop it back down. Typically, I am only interested in keeping about 10-20% of the pixels. I assume I have to iterate through the pixels and copy the cropRect into a different pixel buffer until I have all the pixels I want.
And keep in mind that rotation may be applied depending on orientation.
+ (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer orientation:(UIImageOrientation)orientation
{
    // Create a UIImage from sample buffer data
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0 orientation:orientation];
    // Release the Quartz image
    CGImageRelease(quartzImage);
    return image;
}
In summary:
Should I pass in a cropRect that specifies normalized coordinates between (0,0,0,0) and (1,1,1,1), or one that specifies exact pixel locations like (50,50,100,100)?
How do I best crop the pixel buffer?
I think you should use pixel values for the cropRect, as you have to convert the float values to pixel values at some point anyway.
The following code is not tested, but should give you the idea.
CGRect cropRect = CGRectMake(50, 50, 100, 100); // cropRect
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVReturn lock = CVPixelBufferLockBaseAddress(pixelBuffer, 0);
if (lock == kCVReturnSuccess) {
    size_t w = CVPixelBufferGetWidth(pixelBuffer);
    size_t h = CVPixelBufferGetHeight(pixelBuffer);
    size_t r = CVPixelBufferGetBytesPerRow(pixelBuffer); // source row stride, may include padding
    size_t bytesPerPixel = r / w;                        // rough estimate, assumes little or no row padding
    unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);
    UIGraphicsBeginImageContext(cropRect.size); // create context for image storage, use cropRect as size
    CGContextRef c = UIGraphicsGetCurrentContext();
    unsigned char *data = CGBitmapContextGetData(c);
    if (data != NULL) {
        size_t destBytesPerRow = CGBitmapContextGetBytesPerRow(c); // destination stride may be padded too
        // iterate over the pixels in cropRect
        for (int y = cropRect.origin.y, yDest = 0; y < CGRectGetMaxY(cropRect); y++, yDest++) {
            for (int x = cropRect.origin.x, xDest = 0; x < CGRectGetMaxX(cropRect); x++, xDest++) {
                size_t offset = (r * y) + (bytesPerPixel * x);                            // source offset, using the row stride
                size_t offsetDest = (destBytesPerRow * yDest) + (bytesPerPixel * xDest);  // destination offset
                for (size_t i = 0; i < bytesPerPixel; i++) {
                    data[offsetDest + i] = buffer[offset + i];
                }
            }
        }
    }
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0); // don't forget to unlock
}
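Since the pixels of each cropped row are contiguous in memory, a possible refinement of the above (equally untested) is to copy whole rows with memcpy instead of byte by byte. This reuses the variable names from the snippet above and assumes a packed format such as 32BGRA:

// Copy cropRect one row at a time instead of one byte at a time.
size_t cropWidthBytes = (size_t)cropRect.size.width * bytesPerPixel;
for (int y = cropRect.origin.y, yDest = 0; y < CGRectGetMaxY(cropRect); y++, yDest++) {
    unsigned char *srcRow = buffer + (r * y) + ((size_t)cropRect.origin.x * bytesPerPixel);
    unsigned char *destRow = data + (destBytesPerRow * yDest);
    memcpy(destRow, srcRow, cropWidthBytes);
}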
My use case is that a user takes a photo of themselves on their phone and uploads it to an image hosting service as a JPEG. Other users can then download that image, and it is then mapped to a Metal texture for use in a game.
My issue is that if I download that image and simply display it in a UIImageView, it looks correct, but when I take the downloaded image and turn it into a Metal texture, it gets mirrored and rotated 90 degrees clockwise. I understand the mirroring is due to Metal having a different coordinate system, but I don't understand the rotation issue. When I print the details of the image that has been passed into my function, it has all the same orientation details as the image the UIImageView is displaying correctly, so I have no idea where the issue is. Attached is my function that gives me my MTLTexture.
- (id<MTLTexture>)createTextureFromImage:(UIImage *)image device:(id<MTLDevice>)device
{
    image = [UIImage imageWithCGImage:[image CGImage]
                                scale:[image scale]
                          orientation:UIImageOrientationLeft];
    NSLog(@"orientation and size and stuff %ld %f %f", (long)image.imageOrientation, image.size.width, image.size.height);
    CGImageRef imageRef = image.CGImage;
    size_t width = self.view.frame.size.width;
    size_t height = self.view.frame.size.height;
    size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(imageRef);
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    // NSLog(@"%@ %u", colorSpace, alphaInfo);
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | alphaInfo;
    // NSLog(@"bitmap info %u", bitmapInfo);
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, bitsPerComponent, (bitsPerPixel / 8) * width, colorSpace, bitmapInfo);
    if (!context)
    {
        NSLog(@"Failed to load image, probably an unsupported texture type");
        return nil;
    }
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
    MTLPixelFormat format = MTLPixelFormatRGBA8Unorm;
    MTLTextureDescriptor *texDesc = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:format
                                                                                       width:width
                                                                                      height:height
                                                                                   mipmapped:NO];
    id<MTLTexture> texture = [device newTextureWithDescriptor:texDesc];
    [texture replaceRegion:MTLRegionMake2D(0, 0, width, height)
               mipmapLevel:0
                 withBytes:CGBitmapContextGetData(context)
               bytesPerRow:4 * width];
    return texture;
}
In Metal, the coordinate system is flipped. However, you now have a much simpler way to load textures with MTKTextureLoader:

import MetalKit

let textureLoader = MTKTextureLoader(device: device)
let texture: MTLTexture = try textureLoader.newTextureWithContentsOfURL(filePath, options: nil)

This will create a new texture for you with the appropriate coordinates, using the image located at filePath. If you don't want to use an NSURL, you also have the newTextureWithData and newTextureWithCGImage options.
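If the rest of your code is Objective-C, the equivalent call (sketched from the same MTKTextureLoader API; the error handling is mine) looks roughly like this:

#import <MetalKit/MetalKit.h>

MTKTextureLoader *textureLoader = [[MTKTextureLoader alloc] initWithDevice:device];
NSError *error = nil;
id<MTLTexture> texture = [textureLoader newTextureWithContentsOfURL:filePath
                                                            options:nil
                                                              error:&error];
if (!texture) {
    NSLog(@"Texture load failed: %@", error);
}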
I want to ask about an image processing mechanism. I am developing an iOS app which uses OpenGL ES for hand-writing on a view. I have a save function that converts the view, with all its drawing, to an image and saves it to the Photo Library.
I can properly convert the content of the view to an image using the code below.
(Note: the following code is not the problem. Its purpose is just to convert the content of the view to an image, and it works fine; I show it here for reference.)
// Get the size of the backing CAEAGLLayer
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel;
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != &UIGraphicsBeginImageContextWithOptions) {
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // Set the scale parameter to your OpenGL ES view's contentScaleFactor
    // so that you get a high-resolution snapshot when its value is greater than 1.0
    CGFloat scale = self.contentScaleFactor;
    widthInPoints = width / scale;
    heightInPoints = height / scale;
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
} else {
    // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
    widthInPoints = width;
    heightInPoints = height;
    UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
The problem is that I want to determine whether the view has any drawing on it. If there is no drawing, it shouldn't save, because saving a blank image is useless. So my idea is to check whether the image has any non-transparent pixel.
My solution:
Convert my drawing view to an image (its pixels have an alpha channel)
Check whether the image has any pixel with a non-zero alpha channel
If yes, the user has drawn something -> save
If no, the user has not drawn anything, or has erased everything -> don't save
I know the brute-force algorithm that goes through all the pixels, but it seems like the worst approach, to be used only if there is no more efficient way.
So, is there an efficient way to check this?
I found that the brute-force algorithm is not as slow as I thought. It takes less than about 200 milliseconds to go through all the pixel data of a screen-sized image on both an iPad Pro and an iPad mini 2, so I think brute force is acceptable.
The following is the code I use to check:
CGImageRef imageRef = [selfImage CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSUInteger total = width * height * 4;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char *)calloc(total, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef tempContext = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(tempContext, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(tempContext);
// Now rawData contains the image data in the RGBA8888 pixel format
BOOL empty = YES;
for (NSUInteger i = 0; i < total; i += bytesPerPixel) {
    CGFloat alpha = ((CGFloat)rawData[i + 3]) / 255.0f;
    // CGFloat red   = ((CGFloat)rawData[i])     / alpha;
    // CGFloat green = ((CGFloat)rawData[i + 1]) / alpha;
    // CGFloat blue  = ((CGFloat)rawData[i + 2]) / alpha;
    if (alpha != 0) {
        empty = NO;
        break;
    }
}
free(rawData);
if (empty) {
    // Do something
} else {
    // Do other thing
}
If there are any improvements or other efficient algorithms, please post them here; I'd really appreciate it.
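One cheap improvement, assuming the same RGBA8888 buffer as above: read the buffer as 32-bit words and test the alpha byte with a mask, which avoids the per-byte float conversion. A minimal, untested sketch to replace the loop:

// In RGBA8888 memory order on a little-endian CPU, the alpha byte is the
// most significant byte of each 32-bit word.
const uint32_t *pixels = (const uint32_t *)rawData;
NSUInteger pixelCount = width * height;
BOOL empty = YES;
for (NSUInteger i = 0; i < pixelCount; i++) {
    if (pixels[i] & 0xFF000000) { // any non-zero alpha
        empty = NO;
        break;
    }
}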
I am working with AVFoundation and attempting to save a particular output CMSampleBufferRef as a UIImage in some variable. I am using the Manatee Works sample code, which uses kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange for kCVPixelBufferPixelFormatTypeKey:

NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];

But when I save the image, the output is just nil, or whatever the background of the ImageView is. I also tried not setting the output settings and just using the default, but to no avail; the image is still not rendered. I also tried setting kCVPixelFormatType_32BGRA, but then Manatee Works stops detecting the bar code.
I am using the context settings from the sample code provided by Apple on the developer website:
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(NULL,
                                             CVPixelBufferGetWidth(imageBuffer),
                                             CVPixelBufferGetHeight(imageBuffer),
                                             8,
                                             0,
                                             CGColorSpaceCreateDeviceRGB(),
                                             kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
Can anybody help me with what is going wrong here? It should be simple, but I don't have much experience with the AVFoundation framework. Is this a color space problem, since the context uses CGColorSpaceCreateDeviceRGB()?
I can provide more info if needed. I searched Stack Overflow and there were many entries regarding this, but none solved my problem.
Is there a reason you are passing 0 for bytesPerRow to CGBitmapContextCreate?
Also, you are passing NULL as the buffer instead of the base address of the sample buffer's image buffer.
Creating the bitmap context should look approximately like this, where sampleBuffer is your CMSampleBufferRef:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress,
                                             CVPixelBufferGetWidth(imageBuffer),
                                             CVPixelBufferGetHeight(imageBuffer),
                                             8,
                                             CVPixelBufferGetBytesPerRow(imageBuffer),
                                             colorSpace,
                                             kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CGContextRelease(context);
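Note that a device-RGB bitmap context only makes sense when the pixel buffer actually holds packed RGB data; with kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, plane 0 is just the luma plane. The simplest route, if you can afford it, is to request BGRA frames from the capture output (the asker notes this breaks the barcode scanning, so a second data output or an explicit YUV-to-RGB conversion may be needed instead):

NSDictionary *videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[captureOutput setVideoSettings:videoSettings];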
Here is how I used to do it. The code is written in Swift, but it works.
Notice the orientation parameter on the last line; it depends on the video settings.
extension UIImage {
    /**
     Creates a new UIImage from the video frame sample buffer passed.
     @param sampleBuffer the sample buffer to be converted into a UIImage.
     */
    convenience init?(sampleBuffer: CMSampleBufferRef) {
        // Get a CMSampleBuffer's Core Video image buffer for the media data
        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        // Lock the base address of the pixel buffer
        CVPixelBufferLockBaseAddress(imageBuffer, 0)
        // Get the base address of the pixel buffer
        let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
        // Get the number of bytes per row for the pixel buffer
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
        // Get the pixel buffer width and height
        let width = CVPixelBufferGetWidth(imageBuffer)
        let height = CVPixelBufferGetHeight(imageBuffer)
        // Create a device-dependent RGB color space
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        // Create a bitmap graphics context with the sample buffer data
        let bitmap = CGBitmapInfo(CGBitmapInfo.ByteOrder32Little.rawValue | CGImageAlphaInfo.PremultipliedFirst.rawValue)
        let context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                            bytesPerRow, colorSpace, bitmap)
        // Create a Quartz image from the pixel data in the bitmap graphics context
        let quartzImage = CGBitmapContextCreateImage(context)
        // Unlock the pixel buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0)
        // Create an image object from the Quartz image
        self.init(CGImage: quartzImage, scale: 1, orientation: UIImageOrientation.LeftMirrored)
    }
}
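With the extension in place, usage from a captureOutput:didOutputSampleBuffer:fromConnection: callback is just UIImage(sampleBuffer: sampleBuffer), with the usual handling of the optional result.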
I use this regularly:
UIImage *image = [UIImage imageWithData:[self imageToBuffer:sampleBuffer]];

- (NSData *)imageToBuffer:(CMSampleBufferRef)source {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);
    NSData *data = [NSData dataWithBytes:src_buff length:bytesPerRow * height];
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return data;
}
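A caveat on the first line: -[UIImage imageWithData:] expects encoded image data (PNG, JPEG, and so on), so passing it the raw pixel bytes produced by this method will typically yield nil. To rebuild a UIImage from raw pixel data you would instead go through CGDataProviderCreateWithData and CGImageCreate, as in the glToUIImage example below.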
I followed along with this tutorial (http://www.bit-101.com/blog/?p=1861) and noticed that upon saving the same image multiple times, the quality slowly degraded.
Aside from the memory leaks, what's going wrong here? It should be pulling 4 bytes (RGBA) for each pixel. Where's the loss if each pixel is accounted for?
----------------- EDIT -----------------
I'm saving a new image from the pixel data each time there's a vertex position transformation, then loading this altered image into my texture buffer and resetting the vertex/index buffers. That way I can keep my changes persistent and ultimately make a less choppy warp. See my other SO question: OpenGL ES 2.0 Vertex Transformation Algorithms.
----------------- EDIT -----------------
(Before and after screenshots were attached here; images omitted.)
Here's the code from the tutorial:
- (UIImage *)glToUIImage {
    NSInteger myDataLength = 320 * 480 * 4;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    for (int y = 0; y < 480; y++)
    {
        for (int x = 0; x < 320 * 4; x++)
        {
            buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
        }
    }
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 320;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    return myImage;
}

- (void)captureToPhotoAlbum {
    UIImage *image = [self glToUIImage];
    UIImageWriteToSavedPhotosAlbum(image, self, nil, nil);
}
Every time you render the altered image, it is necessarily resampled: that is, converted to a bitmap where the original pixels (texels) align with the screen grid on other than a 1:1 basis. This is inherently lossy, in that you have lost some of the detail of the original image, so you will get worse results if you distort that image again, compared to transforming the original image once with different parameters.
I have some problems with access to camera images (or even images from the photo album).
After resizing the UIImage (I tested several different resize methods; they all lead to the same error), I want to access every individual pixel for handing over to a complex algorithm.
The problem is that there is often a bytesPerRow value that doesn't match the image size (e.g. width * 4) when accessing the raw pixel data with CGImageGetDataProvider, resulting in an EXC_BAD_ACCESS error.
Maybe we have an iOS bug here…
Nonetheless, here is the code:
// UIImage capturedImage from camera
CGImageRef capturedImageRef = capturedImage.CGImage;
// getting bits per component from capturedImage
size_t bitsPerComponentOfCapturedImage = CGImageGetBitsPerComponent(capturedImageRef);
CGImageAlphaInfo alphaInfoOfCapturedImage = CGImageGetAlphaInfo(capturedImageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// calculate new size from interface data,
// with respect to aspect ratio
// ...
// newWidth = XYZ;
// newHeight = XYZ;
CGContextRef context = CGBitmapContextCreate(NULL, newWidth, newHeight, bitsPerComponentOfCapturedImage, 0, colorSpace, alphaInfoOfCapturedImage);
// I also tried to use getBytesPerRow for CGBitmapContextCreate, resulting in the same error
// if image was rotated
if (capturedImage.imageOrientation == UIImageOrientationRight) {
    CGContextRotateCTM(context, -M_PI_2);
    CGContextTranslateCTM(context, -newHeight, 0.0f);
}
// draw on new context with new size
CGContextDrawImage(context, CGRectMake(0, 0, newWidth, newHeight), capturedImage.CGImage);
CGImageRef scaledImage = CGBitmapContextCreateImage(context);
// release
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
theImage = [UIImage imageWithCGImage:scaledImage];
CGImageRelease(scaledImage);
After that, I want to access the scaled image with:
CGImageRef imageRef = theImage.CGImage;
NSData *data = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
unsigned char *pixels = (unsigned char *)[data bytes];
// create a new image from the modified pixel data
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, [data length], NULL);
NSLog(@"bytesPerRow: %f ", (float)bytesPerRow);
NSLog(@"Image width: %f ", (float)width);
NSLog(@"Image height: %f ", (float)height);
// manipulate the individual pixels
for (int i = 0; i < [data length]; i += 4) {
    // accessing (float) pixels[i];
    // accessing (float) pixels[i+1];
    // accessing (float) pixels[i+2];
}
So, for example, when I access an image of 511x768 pixels and scale it down to 290x436, I get the following output:
Image width: 290.000000
Image height: 436.000000
bitsPerComponent: 8.000000
bitsPerPixel: 32.000000
bytesPerRow: 1184.000000
and you can clearly see that the bytesPerRow (although chosen automatically by Cocoa) does not match the image width.
Any help would be appreciated.
Using iOS SDK 4.3 on Xcode 4.
You are ignoring possible row padding, hence receiving invalid results. Add the following code and replace your loop:
size_t bytesPerPixel = bitsPerPixel / bitsPerComponent;
// calculate the padding just to see what is happening
size_t padding = bytesPerRow - (width * bytesPerPixel);

size_t offset = 0;
// manipulate the individual pixels
while (offset < [data length])
{
    // x is a byte offset within the row, so it runs to width * bytesPerPixel
    for (size_t x = 0; x < width * bytesPerPixel; x += bytesPerPixel)
    {
        // accessing (float) pixels[offset+x];
        // accessing (float) pixels[offset+x+1];
        // accessing (float) pixels[offset+x+2];
    }
    offset += bytesPerRow;
}
Addendum: The underlying reason for the row padding is to optimize memory access by aligning each row to a 32-byte boundary. It is indeed very common and is done for performance.
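Checking this against the numbers in the question: 290 pixels x 4 bytes per pixel = 1160 bytes of actual pixel data per row; rounding up to the next multiple of 32 bytes gives 1184, which is exactly the bytesPerRow that was reported, leaving 24 padding bytes at the end of each row.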