iPhone 6 compare two UIColors - ios

I have two colors; here is a log of the instances:
(lldb) po acolor
UIDeviceRGBColorSpace 0.929412 0.133333 0.141176 1
(lldb) po hexColor
UIDeviceRGBColorSpace 0.929412 0.133333 0.141176 1
I have this code that works on the iPhone 4s and 5, but not on the iPhone 6:
if ([acolor isEqual:hexColor])
{
    // other code here.
}
Additionally, I create acolor from a pixel of an image:
CGFloat red = (rawData[byteIndex] * 1.0) / 255.0;
CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
CGFloat blue = (rawData[byteIndex + 2] * 1.0) / 255.0;
CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
But it seems there are some differences in the values between the iPhone 4 and iPhone 6.
For example, if I print red on the iPhone 6 I get:
(lldb) po red
0.92941176470588238
and for iPhone 4:
(lldb) po red
0.929411768
The values differ because of the 32-bit vs. 64-bit architectures, I think.
But as we can see from the colors above, they appear to have the right values and appear to be rounded, yet the comparison never succeeds: acolor is never equal to hexColor. It only works on the iPhone 4 and 5.
Of course I could use float instead of CGFloat, but I just noticed that the rounding seems to work and the UIColors print the same values on the different devices; the comparison still fails.
I get hexColor using these methods:
+ (UIColor *)colorWithHexString:(NSString *)hexString
{
    const char *cStr = [hexString cStringUsingEncoding:NSASCIIStringEncoding];
    long x = strtol(cStr, NULL, 16);
    return [UIColor colorWithHex:(UInt32)x];
}

+ (UIColor *)colorWithHex:(UInt32)col {
    unsigned char r, g, b;
    b = col & 0xFF;
    g = (col >> 8) & 0xFF;
    r = (col >> 16) & 0xFF;
    return [UIColor colorWithRed:(float)r/255.0f
                           green:(float)g/255.0f
                            blue:(float)b/255.0f
                           alpha:1];
}

Did you try rounding both to the same number of decimal places and then comparing them?
For example, what happens if you round both to 9 decimals?
float rounded = roundf (original * 1000000000) / 1000000000.0;
This piece of code is adapted from alastair's post in this question: Round a float value to two post decimal positions
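Another way around it, since the mismatch comes from floating-point precision on the 64-bit devices, is to skip -isEqual: entirely and compare the components with a small tolerance. A minimal sketch, assuming both colors can be read back as RGBA (the helper name and the 1e-6 epsilon are arbitrary choices for the example, not from the original post):

#import <UIKit/UIKit.h>
#include <math.h>

// Compare two colors component-by-component with a small tolerance instead of
// relying on exact floating-point equality. Assumes both colors can be
// converted to RGBA; the epsilon is an arbitrary example value.
static BOOL ColorsAreApproximatelyEqual(UIColor *colorA, UIColor *colorB) {
    CGFloat r1, g1, b1, a1, r2, g2, b2, a2;
    if (![colorA getRed:&r1 green:&g1 blue:&b1 alpha:&a1] ||
        ![colorB getRed:&r2 green:&g2 blue:&b2 alpha:&a2]) {
        return NO; // one of the colors is not in a convertible color space
    }
    const CGFloat epsilon = 1e-6;
    return fabs(r1 - r2) < epsilon &&
           fabs(g1 - g2) < epsilon &&
           fabs(b1 - b2) < epsilon &&
           fabs(a1 - a2) < epsilon;
}

The differences shown in the question are around the 9th decimal place, so a tolerance of 1e-6 is more than enough to treat the two colors as equal.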

Related

Removing R from RGB Color

Is there a way to remove the red channel of an RGB pixel in a way such that in the resulting picture the red color goes to white not to black? I need to distinguish between red color and blue/black color, but in different light the RGB value varies. If I simply remove the R channel, darker red colors become black and I want the opposite result.
Thanks!
If I understand you correctly -
You need to normalize the red channel value and then use it as a mixing value:
mix = R / 255
Then mix white with the normal color minus the red channel using the mix factor:
       original minus red    white
R' =   0                   + 255 * mix
G' =   G * (1 - mix)       + 255 * mix
B' =   B * (1 - mix)       + 255 * mix
Just note that this will also wash out yellows and magentas, since those of course use the red channel too; the more red, the more white is mixed in.
You should be able to get around this using the CMYK color model, or a combination of both, so you can separate out all the main components, and then override the mix with e.g. the yellow/magenta components from CMYK.
The mixing process should be the same as described though.
Conceptual demo
var ctx = c.getContext("2d");
var img = new Image;
img.onload = function() {
  c.width = img.width;
  c.height = img.height;
  ctx.drawImage(this, 0, 0);

  var idata = ctx.getImageData(0, 0, c.width, c.height),
      data = idata.data, len = data.length, i, mix;

  /* mix = R / 255
     R = 0 + 255 * mix
     G = G * (1 - mix) + 255 * mix
     B = B * (1 - mix) + 255 * mix
  */
  for (i = 0; i < len; i += 4) {
    mix = data[i] / 255;                                // mix using red
    data[i]     = 255 * mix;                            // red channel
    data[i + 1] = data[i + 1] * (1 - mix) + 255 * mix;  // green channel
    data[i + 2] = data[i + 2] * (1 - mix) + 255 * mix;  // blue channel
  }
  ctx.putImageData(idata, 0, 0);
};
img.crossOrigin = "";
img.src = "//i.imgur.com/ptOPQZx.png";
document.body.appendChild(img);
<h4>Red removed + to white</h4><canvas id=c></canvas><h4>Original:</h4>
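For an iOS/Objective-C context, the same mix can be sketched over a raw RGBA byte buffer; this is only an illustration of the formula above, with the R,G,B,A byte order assumed (adjust the offsets to your bitmap layout):

// Same red-to-white mix as the canvas demo, over a raw RGBA byte buffer.
// Assumes 4 bytes per pixel in R,G,B,A order; modifies the buffer in place.
static void RemoveRedToWhite(unsigned char *data, size_t pixelCount) {
    for (size_t p = 0; p < pixelCount; p++) {
        unsigned char *px = data + p * 4;
        double mix = px[0] / 255.0;                                  // mix factor from red

        px[0] = (unsigned char)(255.0 * mix);                        // red
        px[1] = (unsigned char)(px[1] * (1.0 - mix) + 255.0 * mix);  // green
        px[2] = (unsigned char)(px[2] * (1.0 - mix) + 255.0 * mix);  // blue
        // alpha (px[3]) is left untouched
    }
}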

UIView alpha value issues

I have a small method that I am calling to draw stars progressively as a game moves on. Here is the code:
-(void)stars{
    for (int i = 0; i < (ScoreNumber * 3); i++){
        int starX = ((arc4random() % (320 - 0 + 1)) + 0);
        int starY = ((arc4random() % (640 - 0 + 1)) + 0);
        int starSize = ((arc4random() % (1 - 0 + 1)) + 1);
        UIView *stars = [[UIView alloc] initWithFrame:CGRectMake(starX, starY, starSize, starSize)];
        stars.alpha = (i / 5);
        stars.backgroundColor = [UIColor whiteColor];
        [self.view addSubview:stars];
    }
}
The stars do show, but each iteration through the loop bugs out another UIImageView (the main character) and resets its position. Also, the alpha values appear not to work at all; it seems to only use the value of 1 (fully visible). Any advice (for a new programmer) would be appreciated.
i is an integer in this case, so the integer division i / 5 is truncated to a whole number: 0 while i < 5, then 1, 2, 3, etc. Instead you might want:
stars.alpha = (CGFloat)i / 5.0;
Although alpha will still be 1.0 or more after i >= 5.
Maybe you meant something like:
stars.alpha = 0.20 + (CGFloat)(i % 5) / 5.0;
That will give your stars alpha values between 0.2 and 1.0.
The problem is that only the first 5 stars will have an alpha less than one:
-(void)stars{
    for (int i = 0; i < (ScoreNumber * 3); i++){
        int starX = ((arc4random() % (320 - 0 + 1)) + 0);
        int starY = ((arc4random() % (640 - 0 + 1)) + 0);
        int starSize = ((arc4random() % (1 - 0 + 1)) + 1);
        UIView *stars = [[UIView alloc] initWithFrame:CGRectMake(starX, starY, starSize, starSize)];
        stars.alpha = (i / 5); // ONCE THIS IS 5 (LIKELY WON'T TAKE LONG), ALPHA WILL BE 1 FOR ALL YOUR STARS
        stars.backgroundColor = [UIColor whiteColor];
        [self.view addSubview:stars];
    }
}
Also, if a star is added to the superview on top of an existing star and its alpha is actually less than 1, it will appear to have a higher alpha than it really does.
One fix might be to change 5 to something bigger, like 25 or 50. It's hard to know what would be appropriate without knowing how big ScoreNumber can be.
Edit:
Also, I just realized another problem: you're dividing an int by an int, so the result will be an int (not what you want). If you change the 5 to 5.0 (or 25.0 or 50.0), you'll get a floating-point result.
Hope it helps!
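Putting the two answers together, a corrected sketch of the method could look like this. It is not the original poster's code: arc4random_uniform is used here just to tidy the random ranges, and the 0.2 to 1.0 alpha cycle follows the first answer.

-(void)stars{
    for (int i = 0; i < (ScoreNumber * 3); i++){
        int starX = arc4random_uniform(321);       // 0..320
        int starY = arc4random_uniform(641);       // 0..640
        int starSize = 1 + arc4random_uniform(2);  // 1 or 2, as before
        UIView *star = [[UIView alloc] initWithFrame:CGRectMake(starX, starY, starSize, starSize)];
        // floating-point division, cycling alpha through 0.2 ... 1.0
        // instead of truncating the int division to 0 or 1
        star.alpha = 0.20 + (CGFloat)(i % 5) / 5.0;
        star.backgroundColor = [UIColor whiteColor];
        [self.view addSubview:star];
    }
}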

Generate ordered colors programmatically

How can I generate an ordered color range programmatically from one color to another?
If we have this color range we need to cover:
0 0 255 255 255
179 255 0 255 58
255 255 0 0 0
Those are blue, light blue, green, yellow, orange-reddish.
So far I have found a bunch of questions about generating a random color range in the HSV color scheme. I need an ordered, evenly distributed RGB color range.
My previous solution was:
NSInteger input = (510.0f / 100.0f) * progressAmount;
input < 256 ? (_redColor = input) : (_redColor = 255);
input > 255 ? (_blueColor = 255 - (input - 255)) : (_blueColor = 255);
_indicatorColor = [UIColor colorWithRed:(CGFloat) _redColor / 255.0f green:0.0f blue:(CGFloat) _blueColor / 255.0f alpha:1.0f];
but now I need colors from a more complex color range, not just three colors like I had.
Look at an image of the RGB color space; maybe you will find a simple path between the desired colors.
I have duplicated the functionality of CGGradient.
I wrote this function to get a color for a specific range.
// function: get color depending on min and max range for progress control
// param progress: current progress in the progress range; e.g. with range [-20, 60], progress can be any value in between
// param colors: array of float values representing color RGBA components in the [0, 255] range
// param locationRange: array of float values representing the location of each color in the [0.0, 1.0] range
// param numElements: number of colors and number of locations
// note: the min and max range can be any combination of positive and negative values: [-20, 50], [0, 100], [30, 90], [-60, -30]
+ (UIColor*) colorForProgress:(CGFloat)progress lowerRange:(CGFloat)lowerRange upperRange:(CGFloat)upperRange colorRGBAArray:(CGFloat*)colors locationRange:(CGFloat*)locationRange numElements:(NSUInteger)numElements
{
    NSAssert(colors != NULL, @"color array is NULL");
    NSAssert(locationRange != NULL, @"locationRange is NULL");
#ifdef DEBUG
    for (int i = 0; i < numElements; i++)
    {
        NSAssert2(locationRange[i] >= 0.0f && locationRange[i] <= 1.0f, @"locationRange %d %6.2f not in range [0.0, 1.0]", i, locationRange[i]);
    }
#endif
    UIColor *resultColor;

    // convert user range to local range
    // normalize range to [0-n]
    CGFloat rangeNormalized = upperRange - lowerRange;
    NSLog(@"rangeNormalized: %6.2f", rangeNormalized);

    // normalize input to range [0, range_normalized_max]
    // r = progress - lowerRange
    CGFloat progressNormalized = progress - lowerRange;
    NSLog(@"progressNormalized: %6.2f", progressNormalized);

    // map normalized range to [0, 100] percent
    CGFloat progressPercent = (100.0f / rangeNormalized) * progressNormalized;
    NSLog(@"progressPercent: %6.2f", progressPercent);

    // map normalized progress to [0.0, 1.0]
    CGFloat progressLocal = progressPercent / 100.0f;
    NSLog(@"progress_01: %6.2f", progressLocal);
    NSAssert(progressLocal >= 0.0f && progressLocal <= 1.0f, @"progress_01 not in range [0.0, 1.0]");

    // find the two colors that bracket progressLocal
    CGFloat b1 = 0, b2 = 0, *color1 = NULL, *color2 = NULL;
    if (progressLocal < 1.0f)
    {
        for (int i = 0; i < numElements - 1; i++) // ignore last
        {
            if (progressLocal >= locationRange[i] && progressLocal < locationRange[i+1])
            {
                b1 = locationRange[i];
                b2 = locationRange[i+1];
                color1 = colors + (4 * i); // iterate colors
                color2 = colors + (4 * (i+1));
                break;
            }
        }
    }
    else if (progressLocal == 1.0f)
    {
        b1 = locationRange[numElements - 2];
        b2 = locationRange[numElements - 1];
        color1 = colors + (4 * (numElements - 2));
        color2 = colors + (4 * (numElements - 1));
    }
    NSLog(@"b1: %6.2f, b2: %6.2f", b1, b2);
    NSLog(@"color1: %6.2f, %6.2f, %6.2f, %6.2f", color1[0], color1[1], color1[2], color1[3]);
    NSLog(@"color2: %6.2f, %6.2f, %6.2f, %6.2f", color2[0], color2[1], color2[2], color2[3]);

    CGFloat localRange = b2 - b1;
    NSLog(@"localRange: %6.2f", localRange);
    CGFloat localAmount = progressLocal - b1;
    NSLog(@"localAmount: %6.2f", localAmount);

    // colors
    CGFloat newColors[4];
    // interpolate each color component between the two colors
    for (int i = 0; i < 3; i++)
    {
        printf("\n");
        NSLog(@"color1[%d]: %6.2f", i, color1[i]);
        NSLog(@"color2[%d]: %6.2f", i, color2[i]);
        newColors[i] = color1[i];
        NSLog(@"newColors[%d]: %6.2f", i, newColors[i]);
        CGFloat color_range = color1[i] > color2[i] ? (color1[i] - color2[i]) : (color2[i] - color1[i]);
        NSLog(@"color_range: %6.2f", color_range);
        // map color_range onto localRange
        CGFloat incr = color_range / localRange;
        NSLog(@"incr: %6.2f", incr);
        CGFloat result = incr * localAmount;
        if (color1[i] > color2[i])
        {
            result = -result;
        }
        NSLog(@"result: %6.2f", result);
        newColors[i] += result;
        NSLog(@"newColors[%d]: %6.2f", i, newColors[i]);
    }
    resultColor = [UIColor colorWithRed:newColors[0] / 255.0f green:newColors[1] / 255.0f blue:newColors[2] / 255.0f alpha:1.0f];
    printUIColor(resultColor);
    return resultColor;
}
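For illustration, a hypothetical call could look like this; the class name, colors and locations below are made up for the example, not taken from the post:

// Three illustrative RGBA colors (components in [0, 255]) and their
// locations across [0.0, 1.0]; progress runs over the range [-20, 60].
// MyColorHelper stands in for whichever class declares colorForProgress:...
CGFloat colors[] = {
      0,   0, 255, 255,   // blue  at location 0.0
      0, 255,   0, 255,   // green at location 0.5
    255,   0,   0, 255    // red   at location 1.0
};
CGFloat locations[] = { 0.0f, 0.5f, 1.0f };

UIColor *indicatorColor = [MyColorHelper colorForProgress:10.0f
                                               lowerRange:-20.0f
                                               upperRange:60.0f
                                           colorRGBAArray:colors
                                            locationRange:locations
                                              numElements:3];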
I'll leave this question unanswered for some time in case anyone proposes a better solution.

Implementing Ordered Dithering (24 bit RGB to 3 bit per channel RGB)

I'm writing an image editing programme, and I need functionality to dither any arbitrary 24-bit RGB image (I've taken care of loading it with CoreGraphics and such) to an image with 3 bit colour channels, then displaying it. I've set up my matrices and such, but I've not got any results from the code below besides a simple pattern that is applied to the image:
- (CGImageRef) ditherImageTo16Colours:(CGImageRef)image withDitheringMatrixType:(SQUBayerDitheringMatrix) matrix {
    if(image == NULL) {
        NSLog(@"Image is NULL!");
        return NULL;
    }

    unsigned int imageWidth = CGImageGetWidth(image);
    unsigned int imageHeight = CGImageGetHeight(image);

    NSLog(@"Image size: %u x %u", imageWidth, imageHeight);

    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 imageWidth,
                                                 imageHeight,
                                                 8,
                                                 4 * (imageWidth),
                                                 CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB),
                                                 kCGImageAlphaNoneSkipLast);

    CGContextDrawImage(context, CGRectMake(0, 0, imageWidth, imageHeight), image); // draw it
    CGImageRelease(image); // get rid of the image, we don't want it anymore.

    unsigned char *imageData = CGBitmapContextGetData(context);

    unsigned char ditheringModulusType[0x04] = {0x02, 0x03, 0x04, 0x08};
    unsigned char ditheringModulus = ditheringModulusType[matrix];

    unsigned int red;
    unsigned int green;
    unsigned int blue;

    uint32_t *memoryBuffer;
    memoryBuffer = (uint32_t *) malloc((imageHeight * imageWidth) * 4);

    unsigned int thresholds[0x03] = {256/8, 256/8, 256/8};

    for(int y = 0; y < imageHeight; y++) {
        for(int x = 0; x < imageWidth; x++) {
            // fetch the colour components, add the dither value to them
            red = (imageData[((y * imageWidth) * 4) + (x << 0x02)]);
            green = (imageData[((y * imageWidth) * 4) + (x << 0x02) + 1]);
            blue = (imageData[((y * imageWidth) * 4) + (x << 0x02) + 2]);

            if(red > 36 && red < 238) {
                red += SQUBayer117_matrix[x % ditheringModulus][y % ditheringModulus];
            } if(green > 36 && green < 238) {
                green += SQUBayer117_matrix[x % ditheringModulus][y % ditheringModulus];
            } if(blue > 36 && blue < 238) {
                blue += SQUBayer117_matrix[x % ditheringModulus][y % ditheringModulus];
            }

            // memoryBuffer[(y * imageWidth) + x] = (0xFF0000 + ((x >> 0x1) << 0x08) + (y >> 2));
            memoryBuffer[(y * imageWidth) + x] = find_closest_palette_colour(((red & 0xFF) << 0x10) | ((green & 0xFF) << 0x08) | (blue & 0xFF));
        }
    }

    //CGContextRelease(context);

    context = CGBitmapContextCreate(memoryBuffer,
                                    imageWidth,
                                    imageHeight,
                                    8,
                                    4 * (imageWidth),
                                    CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB),
                                    kCGImageAlphaNoneSkipLast);

    NSLog(@"Created context from buffer: %@", context);

    CGImageRef result = CGBitmapContextCreateImage(context);
    return result;
}
Note that find_closest_palette_colour doesn't do anything besides returning the original colour right now for testing.
I'm trying to implement the example pseudocode from Wikipedia, and I don't really get anything out of that right now.
Anyone got a clue on how to fix this up?
Use the code that I have provided here: https://stackoverflow.com/a/17900812/342646
This code converts the image to a single-channel gray-scale first. If you want the dithering to be done on a three-channel image, you can just split your image into three channels and call the function three times (once per channel).
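On the find_closest_palette_colour side (which the question says is currently a no-op), a minimal 3-bit-per-channel quantizer could look like the sketch below. It is not from the linked answer, just the usual nearest-level rounding, kept compatible with the 0xRRGGBB packing used in the question's loop:

#include <stdint.h>

// Hypothetical stand-in for find_closest_palette_colour: snaps each 8-bit
// channel to the nearest of 8 levels (3 bits per channel) and re-expands it
// to 0..255, keeping the 0xRRGGBB packing used above.
static uint32_t find_closest_palette_colour(uint32_t rgb) {
    uint32_t r = (rgb >> 16) & 0xFF;
    uint32_t g = (rgb >> 8) & 0xFF;
    uint32_t b = rgb & 0xFF;

    r = ((r * 7 + 127) / 255) * 255 / 7;   // round to the nearest of 8 levels
    g = ((g * 7 + 127) / 255) * 255 / 7;
    b = ((b * 7 + 127) / 255) * 255 / 7;

    return (r << 16) | (g << 8) | b;
}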

How is a sepia tone created?

What are the basic operations needed to create a sepia tone? My reference point is the Perl ImageMagick library, so I can easily use any basic operation. I've tried to quantize (making it grayscale), colorize, and then enhance the image, but it's still a bit blurry.
Sample code of a sepia converter in C# is available in my answer here: What is wrong with this sepia tone conversion algorithm?
The algorithm comes from this page; each input pixel color is transformed in the following way:
outputRed = (inputRed * .393) + (inputGreen *.769) + (inputBlue * .189)
outputGreen = (inputRed * .349) + (inputGreen *.686) + (inputBlue * .168)
outputBlue = (inputRed * .272) + (inputGreen *.534) + (inputBlue * .131)
If any of these output values is greater than 255, you simply set it
to 255. These specific values are the values for sepia tone that are
recommended by Microsoft.
This is in C#, however, the basic concepts are the same. You will likely be able to convert this into perl.
private void SepiaBitmap(Bitmap bmp)
{
    Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    System.Drawing.Imaging.BitmapData bmpData = bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadWrite,
        System.Drawing.Imaging.PixelFormat.Format32bppRgb);
    IntPtr ptr = bmpData.Scan0;

    int numPixels = bmpData.Width * bmp.Height;
    int numBytes = numPixels * 4;
    byte[] rgbValues = new byte[numBytes];
    System.Runtime.InteropServices.Marshal.Copy(ptr, rgbValues, 0, numBytes);

    for (int i = 0; i < rgbValues.Length; i += 4)
    {
        // read the original channel values before overwriting them (BGRA byte layout)
        double blue = rgbValues[i + 0];
        double green = rgbValues[i + 1];
        double red = rgbValues[i + 2];

        // Math.Min already clamps each result to 255, so no further check is needed
        rgbValues[i + 2] = (byte)Math.Min((.393 * red) + (.769 * green) + (.189 * blue), 255.0); // red
        rgbValues[i + 1] = (byte)Math.Min((.349 * red) + (.686 * green) + (.168 * blue), 255.0); // green
        rgbValues[i + 0] = (byte)Math.Min((.272 * red) + (.534 * green) + (.131 * blue), 255.0); // blue
    }

    System.Runtime.InteropServices.Marshal.Copy(rgbValues, 0, ptr, numBytes);
    this.Invalidate();
    bmp.UnlockBits(bmpData);
}
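For the iOS-centric context of this page, the same matrix can also be applied to a raw RGBA buffer in Objective-C/C. This is only a translation sketch of the idea above, with the R,G,B,A byte order assumed:

#include <math.h>

// Apply the sepia matrix to a raw RGBA byte buffer in place.
// Assumes 4 bytes per pixel in R,G,B,A order.
static void ApplySepiaRGBA(unsigned char *data, size_t pixelCount) {
    for (size_t p = 0; p < pixelCount; p++) {
        unsigned char *px = data + p * 4;
        double r = px[0], g = px[1], b = px[2];

        px[0] = (unsigned char)fmin(0.393 * r + 0.769 * g + 0.189 * b, 255.0); // red
        px[1] = (unsigned char)fmin(0.349 * r + 0.686 * g + 0.168 * b, 255.0); // green
        px[2] = (unsigned char)fmin(0.272 * r + 0.534 * g + 0.131 * b, 255.0); // blue
        // alpha (px[3]) is left unchanged
    }
}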
It's easy if you use the ImageMagick command line.
http://www.imagemagick.org/script/convert.php
Use the "-sepia-tone threshold" argument when converting.
Strangely enough, the PerlMagick API doesn't seem to include a method for doing this directly:
http://www.imagemagick.org/script/perl-magick.php
...and no reference to any Sepia method.
Take a look at how it's implemented in the AForge.NET library; the C# code is here.
The basics seem to be:
transform it to the YIQ color space
modify it
transform back to RGB
The full algorithm is in the source code, and the RGB -> YIQ and YIQ -> RGB transformations are explained there as well.
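For reference, the usual NTSC YIQ conversion looks like the sketch below (these are the commonly quoted approximate coefficients, not copied from the AForge source). A sepia effect along these lines typically keeps the luma Y and replaces the chroma (I, Q) with fixed warm values before converting back:

#include <math.h>

// Common NTSC RGB <-> YIQ coefficients (approximate; the exact values used by
// AForge.NET may differ slightly). RGB components are in [0, 255].
static void RGBToYIQ(double r, double g, double b, double *y, double *i, double *q) {
    *y = 0.299 * r + 0.587 * g + 0.114 * b;
    *i = 0.596 * r - 0.274 * g - 0.322 * b;
    *q = 0.211 * r - 0.523 * g + 0.312 * b;
}

static void YIQToRGB(double y, double i, double q, double *r, double *g, double *b) {
    *r = fmin(fmax(y + 0.956 * i + 0.621 * q, 0.0), 255.0);
    *g = fmin(fmax(y - 0.272 * i - 0.647 * q, 0.0), 255.0);
    *b = fmin(fmax(y - 1.106 * i + 1.703 * q, 0.0), 255.0);
}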
