Replace specific color in CGBitmapContext - ios

How can I replace a specific color (RGB value) in a CGBitmapContext that has already been drawn?
Is there any easy way?
Thanks in advance.

You'll want to get a pointer to the pixels and information about their format by doing something like this:

// This assumes the data is RGBA format, 8 bits per channel.
// You'll need to verify that by calling CGBitmapContextGetBitsPerPixel(), etc.
typedef struct RGBA8 {
    UInt8 red;
    UInt8 green;
    UInt8 blue;
    UInt8 alpha;
} RGBA8;

RGBA8* pixels = CGBitmapContextGetData(context);
UInt32 height = CGBitmapContextGetHeight(context);
UInt32 width = CGBitmapContextGetWidth(context);
UInt32 rowBytes = CGBitmapContextGetBytesPerRow(context);
UInt32 x, y;

for (y = 0; y < height; y++)
{
    RGBA8* currentRow = (RGBA8*)((UInt8*)pixels + y * rowBytes);
    for (x = 0; x < width; x++)
    {
        if ((currentRow->red == replaceRed) && (currentRow->green == replaceGreen) &&
            (currentRow->blue == replaceBlue) && (currentRow->alpha == replaceAlpha))
        {
            currentRow->red = newRed;
            currentRow->green = newGreen;
            currentRow->blue = newBlue;
            currentRow->alpha = newAlpha;
        }
        currentRow++;
    }
}

Related

How can I set the stride of an Image properly?

While converting from double[,] to Bitmap,
Bitmap image = ImageDataConverter.ToBitmap(new double[,]
{
    { .11, .11, .11, },
    { .11, .11, .11, },
    { .11, .11, .11, },
});
the routine gives
data.Stride == 4
Where does this value come from?
Since the double[,] is 3x3, stride should be 5. Right?
How can I fix this not only for this one, but also for any dimension?
Relevant Source Code
public class ImageDataConverter
{
    public static Bitmap ToBitmap(double[,] input)
    {
        int width = input.GetLength(0);
        int height = input.GetLength(1);

        Bitmap output = Grayscale.CreateGrayscaleImage(width, height);

        BitmapData data = output.LockBits(new Rectangle(0, 0, width, height),
                                          ImageLockMode.WriteOnly,
                                          output.PixelFormat);

        int pixelSize = System.Drawing.Image.GetPixelFormatSize(output.PixelFormat) / 8;
        int offset = data.Stride - width * pixelSize;

        double Min = 0.0;
        double Max = 255.0;

        unsafe
        {
            byte* address = (byte*)data.Scan0.ToPointer();
            for (int y = 0; y < height; y++)
            {
                for (int x = 0; x < width; x++)
                {
                    double v = 255 * (input[x, y] - Min) / (Max - Min);
                    byte value = unchecked((byte)v);
                    for (int c = 0; c < pixelSize; c++, address++)
                    {
                        *address = value;
                    }
                }
                address += offset;
            }
        }
        output.UnlockBits(data);
        return output;
    }
}
I don't know how you arrived at 5.
The stride is the width of a single row of pixels (a scan line) in bytes, rounded up to a four-byte boundary.
Link
Since the image is 3x3 and the grayscale format here uses 1 byte per pixel, a row is 3 bytes, and 3 rounded up to a four-byte boundary is 4.

Reading pixels from UIImage results in BAD_ACCESS

I wrote this code that is supposed to NSLog all non-white pixels as a test before going further.
This is my code:
UIImage *image = [UIImage imageNamed:@"image"];
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
if (!pixelData) {
    return;
}
const UInt8 *buffer = CFDataGetBytePtr(pixelData);
CFRelease(pixelData);
for (int y = 0; y < image.size.height; y++) {
    for (int x = 0; x < image.size.width; x++) {
        int pixelInfo = ((image.size.width * y) + x) * 4;
        UInt8 red = buffer[pixelInfo];
        UInt8 green = buffer[pixelInfo + 1];
        UInt8 blue = buffer[pixelInfo + 2];
        UInt8 alpha = buffer[pixelInfo + 3];
        if (red != 0xff && green != 0xff && blue != 0xff) {
            NSLog(@"R: %hhu, G: %hhu, B: %hhu, A: %hhu", red, green, blue, alpha);
        }
    }
}
For some reason, when I build an app, it iterates for a moment and then throws BAD_ACCESS error on line:
UInt8 red = buffer[pixelInfo];. What could be the issue?
Is this the fastest method to iterate through pixels?
I think the problem is a buffer access error.
You index buffer as if it held exactly width x height x 4 bytes, but you can't assume that layout: the image may have row padding (check the bytes-per-row) and a different scale than image.size reports, so you have to be careful not to read past the end of the buffer.
Note also that you call CFRelease(pixelData) before the loop, which frees the very data that buffer points into; release it only after you have finished reading from buffer.

Full range of Hues: HSV to RGB color conversion for OpenCV

The following code runs without exception on iOS (Xcode v6.2 and OpenCV v3.0 beta). But for some reason the image the function returns is "black"!
The code is adapted from this link! I tried to replace the old-style "IplImage*" with the more modern "cv::Mat" matrices. Does anybody know if my function still has a mistake, or why it would return a completely "black" image instead of a colored image in HSV format?
By the way, the reason I want to use this function [instead of cvtColor(cv_src, imgHSV, cv::COLOR_BGR2HSV)] is that I would like to get the full 0-255 range of Hue values (...since OpenCV otherwise only allows Hues up to 180 instead of 255).
// Create a HSV image from the RGB image using the full 8-bits, since OpenCV only allows Hues up to 180 instead of 255.
cv::Mat convertImageRGBtoHSV(cv::Mat imageRGB) {
    float fR, fG, fB;
    float fH, fS, fV;
    const float FLOAT_TO_BYTE = 255.0f;
    const float BYTE_TO_FLOAT = 1.0f / FLOAT_TO_BYTE;

    // Create a blank HSV image
    cv::Mat imageHSV(imageRGB.rows, imageRGB.cols, CV_8UC3);
    int rowSizeHSV = (int)imageHSV.step;  // Size of row in bytes, including extra padding.
    char *imHSV = (char*)imageHSV.data;   // Pointer to the start of the image pixels.

    if (imageRGB.depth() == 8 && imageRGB.channels() == 3) {
        std::vector<cv::Mat> planes(3);
        cv::split(imageRGB, planes);
        cv::Mat R = planes[2];
        cv::Mat G = planes[1];
        cv::Mat B = planes[0];

        for (int y = 0; y < imageRGB.rows; ++y)
        {
            // get pointers to each row
            cv::Vec3b* row = imageRGB.ptr<cv::Vec3b>(y);
            // now scan the row
            for (int x = 0; x < imageRGB.cols; ++x)
            {
                // Get the RGB pixel components. NOTE that OpenCV stores RGB pixels in B,G,R order.
                cv::Vec3b pixel = row[x];
                int bR = pixel[2];
                int bG = pixel[1];
                int bB = pixel[0];

                // Convert from 8-bit integers to floats.
                fR = bR * BYTE_TO_FLOAT;
                fG = bG * BYTE_TO_FLOAT;
                fB = bB * BYTE_TO_FLOAT;

                // Convert from RGB to HSV, using float ranges 0.0 to 1.0.
                float fDelta;
                float fMin, fMax;
                int iMax;

                // Get the min and max, but use integer comparisons for slight speedup.
                if (bB < bG) {
                    if (bB < bR) {
                        fMin = fB;
                        if (bR > bG) {
                            iMax = bR;
                            fMax = fR;
                        }
                        else {
                            iMax = bG;
                            fMax = fG;
                        }
                    }
                    else {
                        fMin = fR;
                        fMax = fG;
                        iMax = bG;
                    }
                }
                else {
                    if (bG < bR) {
                        fMin = fG;
                        if (bB > bR) {
                            fMax = fB;
                            iMax = bB;
                        }
                        else {
                            fMax = fR;
                            iMax = bR;
                        }
                    }
                    else {
                        fMin = fR;
                        fMax = fB;
                        iMax = bB;
                    }
                }

                fDelta = fMax - fMin;
                fV = fMax;  // Value (Brightness).

                if (iMax != 0) {  // Make sure it's not pure black.
                    fS = fDelta / fMax;  // Saturation.
                    float ANGLE_TO_UNIT = 1.0f / (6.0f * fDelta);  // Make the Hues between 0.0 to 1.0 instead of 6.0
                    if (iMax == bR) {  // between yellow and magenta.
                        fH = (fG - fB) * ANGLE_TO_UNIT;
                    }
                    else if (iMax == bG) {  // between cyan and yellow.
                        fH = (2.0f/6.0f) + (fB - fR) * ANGLE_TO_UNIT;
                    }
                    else {  // between magenta and cyan.
                        fH = (4.0f/6.0f) + (fR - fG) * ANGLE_TO_UNIT;
                    }
                    // Wrap outlier Hues around the circle.
                    if (fH < 0.0f)
                        fH += 1.0f;
                    if (fH >= 1.0f)
                        fH -= 1.0f;
                }
                else {
                    // color is pure Black.
                    fS = 0;
                    fH = 0;  // undefined hue
                }

                // Convert from floats to 8-bit integers.
                int bH = (int)(0.5f + fH * 255.0f);
                int bS = (int)(0.5f + fS * 255.0f);
                int bV = (int)(0.5f + fV * 255.0f);

                // Clip the values to make sure they fit within the 8 bits.
                if (bH > 255) bH = 255;
                if (bH < 0)   bH = 0;
                if (bS > 255) bS = 255;
                if (bS < 0)   bS = 0;
                if (bV > 255) bV = 255;
                if (bV < 0)   bV = 0;

                // Set the HSV pixel components.
                uchar *pHSV = (uchar*)(imHSV + y*rowSizeHSV + x*3);
                *(pHSV+0) = bH;  // H component
                *(pHSV+1) = bS;  // S component
                *(pHSV+2) = bV;  // V component
            }
        }
    }
    return imageHSV;
}
cv::Mat's M.depth() of a CV_8UC3-type matrix unfortunately does not return 8 - instead it returns 0.
Please have a look at the file "types_c.h":
#define CV_8U 0
#define CV_CN_SHIFT 3
#define CV_MAKETYPE(depth,cn) (CV_MAT_DEPTH(depth) + (((cn)-1) << CV_CN_SHIFT))
#define CV_8UC3 CV_MAKETYPE(CV_8U,3)
depth() doesn't return the actual bit depth, but the symbolic constant that represents the depth (CV_8U, which is defined as 0)!
After replacing .depth() by .type() in the if-statement, it all works:
if (imageRGB.type() == CV_8UC3 && imageRGB.channels() == 3) {...}

Create ColorCube CIFilter

I want to create a ColorCube CIFilter for my app, and I found documentation on Apple's site here: https://developer.apple.com/library/ios/documentation/GraphicsImaging/Conceptual/CoreImaging/ci_filer_recipes/ci_filter_recipes.html .
I also post the code here:
**// Allocate memory**
**const unsigned int size = 64;**
**float *cubeData = (float *)malloc(size * size * size * sizeof(float) * 4);**
float rgb[3], hsv[3], *c = cubeData;

// Populate cube with a simple gradient going from 0 to 1
for (int z = 0; z < size; z++){
    rgb[2] = ((double)z)/(size-1); // Blue value
    for (int y = 0; y < size; y++){
        rgb[1] = ((double)y)/(size-1); // Green value
        for (int x = 0; x < size; x++){
            rgb[0] = ((double)x)/(size-1); // Red value
            // Convert RGB to HSV
            // You can find publicly available rgbToHSV functions on the Internet
            rgbToHSV(rgb, hsv);
            // Use the hue value to determine which to make transparent
            // The minimum and maximum hue angle depends on
            // the color you want to remove
            float alpha = (hsv[0] > minHueAngle && hsv[0] < maxHueAngle) ? 0.0f : 1.0f;
            // Calculate premultiplied alpha values for the cube
            c[0] = rgb[0] * alpha;
            c[1] = rgb[1] * alpha;
            c[2] = rgb[2] * alpha;
            c[3] = alpha;
            c += 4; // advance our pointer into memory for the next color value
        }
    }
}
I want to know why they take size = 64, and what is the meaning of the bold lines in the code?
Any help appreciated...

How to check if a uiimage is blank? (empty, transparent)

Which is the best way to check whether a UIImage is blank?
I have a painting editor which returns a UIImage; I don't want to save this image if there's nothing on it.
Try this code:
BOOL isImageFlag = [self checkIfImage:image];
And the checkIfImage method:
- (BOOL)checkIfImage:(UIImage *)someImage {
    CGImageRef image = someImage.CGImage;
    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    GLubyte *imageData = malloc(width * height * 4);
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * width;
    int bitsPerComponent = 8;

    CGContextRef imageContext =
        CGBitmapContextCreate(
            imageData, width, height, bitsPerComponent, bytesPerRow, CGImageGetColorSpace(image),
            kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big
        );
    CGContextSetBlendMode(imageContext, kCGBlendModeCopy);
    CGContextDrawImage(imageContext, CGRectMake(0, 0, width, height), image);
    CGContextRelease(imageContext);

    int byteIndex = 0;
    BOOL imageExist = NO;
    for ( ; byteIndex < width * height * 4; byteIndex += 4) {
        CGFloat red = ((GLubyte *)imageData)[byteIndex] / 255.0f;
        CGFloat green = ((GLubyte *)imageData)[byteIndex + 1] / 255.0f;
        CGFloat blue = ((GLubyte *)imageData)[byteIndex + 2] / 255.0f;
        CGFloat alpha = ((GLubyte *)imageData)[byteIndex + 3] / 255.0f;
        if (red != 1 || green != 1 || blue != 1 || alpha != 1) {
            imageExist = YES;
            break;
        }
    }
    free(imageData);
    return imageExist;
}
You will have to add OpenGLES framework and import this in the .m file:
#import <OpenGLES/ES1/gl.h>
One idea would be to call UIImagePNGRepresentation to get an NSData object, then compare it with a pre-defined 'empty' version - i.e. call:
- (BOOL)isEqualToData:(NSData *)otherData
to test.
I haven't tried this on large data; you might want to check performance if your image data is quite large. If it's small, it's probably just like calling memcmp() in C.
Something along these lines:
1. Create a 1 px square CGContext
2. Draw the image so it fills the context
3. Test the one pixel of the context to see if it contains any data. If it's completely transparent, consider the picture blank.
Others may be able to add more details to this answer.
Here's a solution in Swift that does not require any additional frameworks.
Thanks to answers in a related question here:
Get Pixel Data of ImageView from coordinates of touch screen on xcode?
func imageIsEmpty(_ image: UIImage) -> Bool {
    guard let cgImage = image.cgImage,
          let pixelData = cgImage.dataProvider?.data,
          let data = CFDataGetBytePtr(pixelData) else {
        return true
    }
    // Use the CGImage's pixel dimensions, not image.size (which is in
    // points and can differ by the screen scale). This still assumes
    // 4 bytes per pixel and no row padding.
    let imageWidth = cgImage.width
    let imageHeight = cgImage.height
    for x in 0..<imageWidth {
        for y in 0..<imageHeight {
            let pixelIndex = ((imageWidth * y) + x) * 4
            let r = data[pixelIndex]
            let g = data[pixelIndex + 1]
            let b = data[pixelIndex + 2]
            let a = data[pixelIndex + 3]
            if a != 0 {
                if r != 0 || g != 0 || b != 0 {
                    return false
                }
            }
        }
    }
    return true
}
I'm not at my Mac, so I can't test this (and there are probably compile errors). But one method might be:
//The pixel format depends on what sort of image you're expecting. If it's RGBA, this should work
typedef struct
{
    uint8_t red;
    uint8_t green;
    uint8_t blue;
    uint8_t alpha;
} MyPixel_T;

UIImage *myImage = [self doTheThingToGetTheImage];
CGImageRef myCGImage = [myImage CGImage];

//Get a bitmap context for the image
CGContextRef bitmapContext =
    CGBitmapContextCreate(NULL, CGImageGetWidth(myCGImage), CGImageGetHeight(myCGImage),
        CGImageGetBitsPerComponent(myCGImage), CGImageGetBytesPerRow(myCGImage),
        CGImageGetColorSpace(myCGImage), CGImageGetBitmapInfo(myCGImage));

//Draw the image into the context
CGContextDrawImage(bitmapContext, CGRectMake(0, 0, CGImageGetWidth(myCGImage), CGImageGetHeight(myCGImage)), myCGImage);

//Get pixel data for the image
MyPixel_T *pixels = CGBitmapContextGetData(bitmapContext);
size_t pixelCount = CGImageGetWidth(myCGImage) * CGImageGetHeight(myCGImage);
for (size_t i = 0; i < pixelCount; i++)
{
    MyPixel_T p = pixels[i];
    //Your definition of what's blank may differ from mine
    if (p.red > 0 && p.green > 0 && p.blue > 0 && p.alpha > 0)
        return NO;
}
return YES;
I just encountered the same problem. I solved it by checking the dimensions:
Swift example:
let image = UIImage()
let height = image.size.height
let width = image.size.width
if height > 0 && width > 0 {
    // We have an image
} else {
    // ...and we don't
}
Note that this only detects an image with no pixels at all, not a non-empty image that happens to be fully transparent.