iOS: Finding the total difference between two colors' RGB values? - ios

I was wondering if there is an existing math algorithm, that could help me get to the solution I need.
I have an input color:
(229, 67, 99).
I have 10 predefined colors and I want to find the color that closely matches the input color.
So far I have three NSArrays (reds, greens, blues), each holding 10 values. I'm currently using binary search to round each of my input color's channels to the closest value that exists in the corresponding array.
For now I've gotten r = 230, g = 126, b = 34, but together these don't make one of the predefined colors; instead they make a new color that is not predefined.
So what I did is find the three predefined colors that each contain one of those values (the color closest in the red value, the color closest in the green value, and the color closest in the blue value):
A. (230, 126, 34)
B. (142, 68, 173)
C. (39, 174, 96)
The problem is that, of my predefined colors, none of those three happens to be the closest. In fact, overall it's closest to another predefined color:
D. (231, 76, 60)
The absolute per-channel differences from each of these colors are:
A. 1, 59, 65
B. 87, 1, 74
C. 190, 107, 3
D. 2, 9, 39
Total difference: (A=125, B=162, C=300, D=50)
So my question is: is there an existing algorithm to arrive at the smallest total difference? What's the best way to approach it?
I know how to compute the total for a single color, but how can I compare the total differences across all predefined colors?
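What the question describes is a nearest-neighbor search over the palette: compute the total difference against every predefined color and take the minimum. A minimal sketch in Python (the palette values here are just the A-D colors from above):

```python
# Nearest predefined color by total absolute (L1) difference per channel.
predefined = [
    (230, 126, 34),
    (142, 68, 173),
    (39, 174, 96),
    (231, 76, 60),
]

def total_difference(c1, c2):
    # Sum of absolute per-channel differences.
    return sum(abs(a - b) for a, b in zip(c1, c2))

def closest_color(target, palette):
    # Take the palette entry with the smallest total difference.
    return min(palette, key=lambda c: total_difference(target, c))

print(closest_color((229, 67, 99), predefined))  # -> (231, 76, 60)
```

The same structure works with squared Euclidean distance instead of the L1 sum; for perceptual matching, converting to Lab and comparing with a delta E formula gives better results than raw RGB distances.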

Related

How to extract colors into RGB values from a color checker chart

I have a color checker chart. I would like to extract values from each square and export a CSV with cell (row/column) information along with RGB (0-255) values for each color.
Example photos are below:
24 color example
large color example
How would I do this in a simple way?
Thank you for any and all help or suggestions.
So from the 24-patch example, my idea for the output would be:
A1 165, 42, 42 (brown example)
B1 255, 206, 177 (orange example)
I tried a few things found online with OpenCV (cv2) and Pillow, but had no success.
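One simple approach, assuming the photo is already cropped to the chart and the patches form a regular grid, is to split the image into cells and average the center of each one. A rough NumPy sketch (the 4x6 layout and A1/B1-style labels follow the 24-patch example; real photos may need perspective correction first):

```python
import csv
import numpy as np

def sample_chart(image, rows=4, cols=6):
    """Average the central region of each grid cell.

    Assumes `image` is an RGB array already cropped to the chart,
    laid out as a regular rows x cols grid (24-patch charts are 4x6).
    """
    h, w = image.shape[:2]
    ch, cw = h // rows, w // cols
    results = []
    for r in range(rows):
        for c in range(cols):
            cell = image[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            # Sample only the inner half of the cell to avoid patch borders.
            inner = cell[ch // 4:3 * ch // 4, cw // 4:3 * cw // 4]
            mean = inner.reshape(-1, 3).mean(axis=0).round().astype(int)
            label = f"{chr(ord('A') + c)}{r + 1}"   # A1, B1, C1, ...
            results.append((label, *mean))
    return results

def write_csv(rows, path="chart.csv"):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["cell", "R", "G", "B"])
        writer.writerows(rows)
```

Cropping the chart out of a larger photo (the "large color example") is the harder part; a contour or corner-marker detection step with OpenCV would typically come before this.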

detecting wheel color and print it out in the console

I have images of wheels, and I need to print out the color of the wheel from the image. The colors to be detected are Black, Gray, Silver, White, Bronze, Blue, Red and Green.
I started by defining color boundaries, detecting color ranges, and displaying the values on the console. But now I'm looking to print out the wheel color only, and I can't just take the highest pixel count, because the highest count belongs to the image background.
The wheels are always large and located in the center, and the background is always solid: never patterned, striped, or random, and only ever white or gray.
import numpy as np
import cv2

image = cv2.imread('/home/devop/Pictures/11.jpg')

color_boundaries = {
    "Red":    ([0, 0, 255], [127, 0, 255]),
    "Green":  ([0, 255, 0], [0, 255, 0]),
    "Blue":   ([255, 38, 0], [255, 38, 0]),
    "White":  ([255, 255, 255], [255, 255, 255]),
    "Black":  ([0, 0, 0], [0, 0, 0]),
    "Bronze": ([205, 127, 50], [205, 127, 50]),
    "Gray":   ([160, 160, 160], [160, 160, 160]),
    "Silver": ([192, 192, 192], [192, 192, 192])
}

for color_name, (lower, upper) in color_boundaries.items():
    # create NumPy arrays from the boundaries
    lower = np.array(lower, dtype=np.uint8)
    upper = np.array(upper, dtype=np.uint8)
    # find the colors within the specified boundaries and apply the mask
    mask = cv2.inRange(image, lower, upper)
    output = cv2.bitwise_and(image, image, mask=mask)
    if mask.any():
        print(f"{color_name}: {mask.sum()}")
What I got after executing the program :
White: 50822520
Black: 1020
Gray: 8925
Silver: 11985
Sample Image :
To get a robust solution, I'd suggest avoiding checking every pixel and instead using the background color (which is supposed to be solid) to isolate the area of interest. Here is the concept:
use some insight to identify the background
isolate the wheel area
do the required analysis
Sounds simple, and actually it is:
unless the wheels are designed for a bumpy ride, you can use any corner of the image to identify the background
use color thresholding to get everything but the background
(optional step) you might want to adjust the threshold so that secondary parts are excluded
verify colors within the identified area (the target color will dominate); use delta E to find the closest one
Here is a quick PoC:
1. Get the background
2. Color-based segmentation
3. Optional tuning
Step 4 seems to be pretty straightforward.
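Shape-wise, the four steps might look like the following plain-NumPy sketch (the palette RGB anchors and the `tol` threshold are illustrative assumptions; a production version would convert to Lab and use a proper delta E instead of RGB distance):

```python
import numpy as np

# Illustrative RGB anchors for the named colors; these are assumptions,
# not calibrated values.
PALETTE = {
    "Black":  (20, 20, 20),
    "Gray":   (128, 128, 128),
    "Silver": (192, 192, 192),
    "White":  (245, 245, 245),
    "Bronze": (205, 127, 50),
    "Blue":   (40, 60, 200),
    "Red":    (200, 40, 40),
    "Green":  (40, 160, 60),
}

def wheel_color(image_rgb, tol=40):
    """Steps 1-4: sample a corner for the background color, threshold the
    background away, average what remains, and match the mean against the
    palette."""
    img = np.asarray(image_rgb, dtype=float)
    background = img[0, 0]                        # step 1: corner pixel
    dist = np.linalg.norm(img - background, axis=-1)
    foreground = img[dist > tol]                  # step 2: drop the background
    if foreground.size == 0:
        return None                               # nothing but background
    mean = foreground.mean(axis=0)                # step 4: dominant color
    return min(PALETTE,
               key=lambda name: np.linalg.norm(mean - np.array(PALETTE[name])))
```

With OpenCV in the loop, `cv2.inRange` around the sampled corner color plus a morphological open would replace the simple distance threshold, and the optional step 3 becomes tuning `tol` until spokes and valve stems drop out.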

How can a series of signals (a bunch of signals, not a single sequential input) be classified into several groups of labels with DL4J?

I have 60 signal-sequence samples of length 200, each labeled in 6 label groups; each label takes one of 10 values. I'd like to get a prediction for each label group when feeding a 200-length (or even shorter) sample to the network.
I tried to build my own network based on the https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/recurrent/seqclassification/UCISequenceClassificationExample.java example, which, however, provides label padding. I use no padding for the labels, and I get an exception like this:
Exception in thread "main" java.lang.IllegalStateException: Sequence lengths do not match for RnnOutputLayer input and labels:Arrays should be rank 3 with shape [minibatch, size, sequenceLength] - mismatch on dimension 2 (sequence length) - input=[1, 200, 12000] vs. label=[1, 1, 10]
In fact, the labels are required to have a time dimension of length 200, the same as the features, so I have to apply some technique like zero padding in all 6 label channels. On the other hand, my input was also wrong: I put all 60*200 values there, whereas it should be [1, 200, 60], while the 6 labels are [1, 200, 10] each.
The open question is in which part of the 200-length label sequence I should place the real label value: at [0], at [199], or maybe at the typical parts of the signals they are associated with? My training runs to check this are still in progress. And which kind of padding is better, zero padding or padding with the label value? It's still not clear, and I can't find a paper explaining what is best.
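The zero-padding option under discussion can at least be illustrated shape-wise with NumPy (a sketch of the label tensor layout only, not of the DL4J API; `position` picks where the real label lands in the 200-step sequence):

```python
import numpy as np

def pad_label(one_hot, seq_len=200, position=-1):
    """Expand a [num_classes] one-hot label to [num_classes, seq_len],
    zero everywhere except at `position` (e.g. the last timestep, which
    matches aligning the label with the end of the sequence)."""
    num_classes = one_hot.shape[0]
    padded = np.zeros((num_classes, seq_len))
    padded[:, position] = one_hot
    return padded
```

In DL4J itself the usual alternative to manual padding is a labels mask array marking the single timestep that carries the real label, so the loss ignores the padded steps entirely.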

can not detect color in opencv inrange? [duplicate]

Currently I'm making an app where the user detects green colors. I use this photo for testing:
My problem is that I can't detect any green pixel. Before this I worked with blue and everything worked fine. Now I can't detect anything, though I tried different combinations of RGB. I wanted to know whether the problem was with green itself or with my detection range, so I made an image in Paint using (0, 255, 0), and that worked. Why can't it see this circle, then? I use this code for detection:
Core.inRange(hsv_image, new Scalar([I change this value]), new Scalar(60, 255, 255), ultimate_blue);
It could be that I set the wrong range, but I used Photoshop to get the color of one of the green pixels and converted its RGB value into HSV. Yet it doesn't work. It doesn't even detect the pixel that I sampled. What's wrong? Thanks in advance.
Using Miki's answer:
Green in HSV space has H = 120, with H in the range [0, 360].
OpenCV halves the H values to fit them into 8 bits, so instead of being in [0, 360], H is in [0, 180].
S and V are still in range [0, 255].
As a consequence, the value of H for green is 60 = 120 / 2.
Your upper and lower bounds should be:
// sensitivity is an int, typically set to 15 - 20
[60 - sensitivity, 100, 100]
[60 + sensitivity, 255, 255]
UPDATE
Since your image is quite dark, you need to use a lower bound for V. With these values:
sensitivity = 15;
[60 - sensitivity, 100, 50] // lower bound
[60 + sensitivity, 255, 255] // upper bound
the resulting mask would be like:
You can refer to this answer for the details.
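The same bounds can be expressed as a small NumPy check (a sketch that assumes `hsv` already uses OpenCV's halved-H convention, e.g. the output of cv2.cvtColor with COLOR_BGR2HSV):

```python
import numpy as np

def green_mask_hsv(hsv, sensitivity=15):
    """Boolean mask for green in OpenCV's HSV convention (H in [0, 180]).

    Uses the updated bounds from above: the lower V of 50 keeps the
    darker greens that a V >= 100 bound would miss.
    """
    lower = np.array([60 - sensitivity, 100, 50])
    upper = np.array([60 + sensitivity, 255, 255])
    return np.all((hsv >= lower) & (hsv <= upper), axis=-1)
```

In OpenCV this is exactly what `cv2.inRange(hsv, lower, upper)` computes, just returning 0/255 instead of booleans.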

Corona SDK setFillColor not coloring when mix colors

I'm just a new user of Corona SDK, and I'm following some exercises from a book. I tried to create a rectangle and color it: if I use setFillColor(255, 0, 0), or put 255 in green or blue, it works. The problem is when I try to mix colors like setFillColor(100, 129, 93); it just paints a white rectangle.
This is my main.lua:
rect_upperBackground = display.newRect(150, 150, 100, 50)
rect_upperBackground:setFillColor(49, 49, 49)
According to the documentation, setFillColor requires color components in the range [0, 1] rather than [0, 255]. So, for example, you might try something like this instead:
rect_upperBackground:setFillColor(100 / 255, 129 / 255, 93 / 255)
rect_upperBackground:setFillColor(0.4, 0.2, 0.5)
object:setFillColor() used to take values from 0-255, but in the latest release of the SDK they changed it to 0-1 (floating-point components, which can represent finer gradations than integer 0-255).
That means all books, video tutorials, etc., created before mid-November 2013 are wrong on this point.
You'll also need to watch out for object:setReferencePoint(), because it has been deprecated. You now use object.anchorX and object.anchorY instead (these default to the center of the object, so if that's what you want, no tweaking is needed).
Here's an article someone wrote explaining three big changes you'll need to watch out for:
http://www.develephant.net/3-things-you-need-to-know-about-corona-sdk-graphics-2-0/
Those changes are as of build 2013.2076 of Corona SDK.
