How to extract RGB values from a color checker chart - opencv

I have a color checker chart. I would like to extract a value from each square and export a CSV with cell (row/column) information along with RGB (0-255) values for each color.
Example photos are below:
24 color example
large color example
How would I do this in a simple way?
Thank you for any and all help or suggestions.
So, from the 24-color example, my idea for the output would be:
A1 165, 42, 42 (brown example)
B1 255, 206, 177 (orange example)
I tried a few approaches I found online with cv2 and Pillow, but had no success.

Related

How can I convert bounding box pixels of an image to white, and the background to black?

I have a set of images similar to this one:
And for each image, I have a text file with bounding box regions expressed in normalized pixel values, YOLOv5 format (a text document with rows of type: class, x_center, y_center, width, height). Here's an example:
3 0.1661542727623449 0.6696164480452673 0.2951388888888889 0.300925925925926
3 0.41214353459362196 0.851908114711934 0.2719907407407405 0.2961837705761321
I'd like to obtain a new dataset of masked images, where the bounding box area from the original image gets converted into white pixels, and the rest gets converted into black pixels. This would be an example of the output image:
I'm sure there is a way to do this in PIL (Pillow) in Python, but I just can't seem to find a way.
Would somebody be able to help?
Kindest thanks!
So here's the answer:
import os
import numpy as np
from PIL import Image

# labPath is the folder with the YOLOv5 label files, filename the current file
label = open(os.path.join(labPath, filename), 'r')
lines = label.read().split('\n')
label.close()

square = np.zeros((1152, 1152))              # the images are 1152 x 1152 pixels
for line in lines:
    if line != '':
        line = line.split()                  # line: class, x_center, y_center, w, h
        left = int((float(line[1]) - 0.5 * float(line[3])) * 1152)
        bot = int((float(line[2]) + 0.5 * float(line[4])) * 1152)
        top = int(bot - float(line[4]) * 1152)
        right = int(left + float(line[3]) * 1152)
        square[top:bot, left:right] = 255    # white inside the boxes, black elsewhere

square_img = Image.fromarray(square).convert("L")
Let me know if you have any questions!

Color contrast formula? (ImageMagick)

I reduced an image to 12 colors and annotated it with some text (here it's color saturation, sorted):
The text color is %[pixel:p{10,10}*2] (background *2) (I made a little script that I can share if you're interested).
As you can see, the text is not very readable (low contrast) against some colors. Is there a smarter formula than simple linear scaling to make the text pop in all/most cases?
As per Fred's suggestion, using luminosity works much better. Using black or white text depending on whether luminosity > 56:
And for a not colorful image :
The text represents L component of HSL value. Notice the change from black to white when value crosses 56.
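The rule described above can be sketched in a few lines. The 56 threshold is taken from the answer; `text_color_for` is my own name, and HSL lightness is computed here with the simple (max+min)/2 formula on a 0-100 scale:

```python
def text_color_for(r, g, b, threshold=56):
    """Choose black or white text based on HSL lightness (0-100 scale)."""
    lightness = (max(r, g, b) + min(r, g, b)) / 2 / 255 * 100
    return "black" if lightness > threshold else "white"
```

On a dark background this returns "white", on a light one "black", which matches the black/white switch visible in the annotated images.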

detecting wheel color and print it out in the console

I have images of wheels and I need to print out the color of the wheel from the image. The colors to be detected are Black, Gray, Silver, White, Bronze, Blue, Red and Green.
I started by defining color boundaries, detecting color ranges and displaying the values on the console. But now I'm looking to print out the wheel color only, and I can't simply take the highest pixel count, because the highest count belongs to the image background.
The wheels are always large and located in the center. The background is always solid white or gray; it is never patterned, striped or random.
import numpy as np
import cv2
import sys

image = cv2.imread('/home/devop/Pictures/11.jpg')   # note: OpenCV loads images in BGR order

color_boundaries = {
    "Red":    ([0, 0, 255], [127, 0, 255]),
    "Green":  ([0, 255, 0], [0, 255, 0]),
    "Blue":   ([255, 38, 0], [255, 38, 0]),
    "White":  ([255, 255, 255], [255, 255, 255]),
    "Black":  ([0, 0, 0], [0, 0, 0]),
    "Bronze": ([205, 127, 50], [205, 127, 50]),
    "Gray":   ([160, 160, 160], [160, 160, 160]),
    "Silver": ([192, 192, 192], [192, 192, 192])
}

for color_name, (lower, upper) in color_boundaries.items():
    # create NumPy arrays from the boundaries
    lower = np.array(lower, dtype=np.uint8)
    upper = np.array(upper, dtype=np.uint8)
    # find the colors within the specified boundaries and apply the mask
    mask = cv2.inRange(image, lower, upper)
    output = cv2.bitwise_and(image, image, mask=mask)
    if mask.any():
        print(f"{color_name}: {mask.sum()}")
What I got after executing the program:
White: 50822520
Black: 1020
Gray: 8925
Silver: 11985
Sample Image :
To get a robust solution, I'd suggest avoiding checking every pixel and instead using the background color (which is supposed to be solid) to isolate the area of interest. Here is the concept:
use some insight to identify the background
isolate the wheel area
do the required analysis
Sounds simple, and actually it is:
unless the wheels are designed for a bumpy ride, you can use any corner of the image to identify the background color
use color thresholding to get everything but the background
(optional step) you might want to adjust the threshold so that secondary parts are excluded
verify the colors within the identified area (the target color will dominate), and use delta E to find the closest predefined color
Here is a quick PoC:
1. Get the background
2. Color-based segmentation
3. Optional tuning
Step 4 seems to be pretty straightforward.

Parameter to isolate frames with colored lines

I'm writing code that should detect frames in a video that contain colored lines. I'm new to OpenCV and would like to know whether I should evaluate saturation, entropy, RGB intensity, etc. The lines, as shown in the pictures, come in every color and density (some are black and white), but they are all the same color within a given frame. Any advice?
Regular frame:
Example 1:
Example 2:
You can use something like this to get the mean Saturation and see that it is lower for your greyscale image and higher for your colour ones:
#!/usr/bin/env python3
import cv2

# Open image
im = cv2.imread('a.png', cv2.IMREAD_UNCHANGED)
# Convert to HSV
hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)
# Get mean Saturation - index "1" because Hue is index "0" and Value is index "2"
meanSat = hsv[..., 1].mean()
Results
first image (greyish): meanSat = 78
second image (blueish): meanSat = 162
third image (reddish): meanSat = 151
If it is time-critical, I guess you could just calculate for a small extracted patch since the red/blue lines are all over the image anyway.

iOS: Finding the total difference between 2 color's rgb values?

I was wondering if there is an existing math algorithm, that could help me get to the solution I need.
I have an input color:
(229, 67, 99).
I have 10 predefined colors and I want to find the color that closely matches the input color.
So far I have 3 NSArrays (reds, blues, greens), with 10 values in each. I'm currently using binary search to round my input color's channels to the closest values that exist in each array.
For now I've gotten: r = 230, g = 126, b = 34, but together these don't form one of the predefined colors; instead they make a new color that is not predefined.
So what I did is find the 3 predefined colors that contain one of those values (the color closest in red, the color closest in green, and the color closest in blue):
A. (230, 126, 34)
B. (142, 68, 173)
C. (39, 174, 96)
The problem is that, of my predefined colors, none of those 3 happens to be the closest. In fact, overall it's closest to another predefined color:
D. (231, 76, 60)
The absolute per-channel differences from these colors are
A. 1, 59, 65
B. 87, 1, 74
C. 190, 107, 3
D. 2, 9, 39
Total difference: (A=125, B=162, C=300, D=50)
So my question is: is there an existing algorithm to arrive at the smallest total difference? What's the best way to approach it?
I know how to compute the total difference for a single color, but how can I compare total differences across all the predefined colors?
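Yes: treat each colour as a point and pick the palette entry minimising the distance to the input, rather than rounding each channel independently. A sketch of the sum-of-absolute-differences metric used in the question (`nearest_color` is my own name; for perceptual matching, Euclidean distance in a Lab colour space is usually preferred over raw RGB differences):

```python
def nearest_color(target, palette):
    """Return the palette colour with the smallest total channel difference."""
    return min(palette, key=lambda c: sum(abs(a - b) for a, b in zip(target, c)))
```

With the four colours above, `nearest_color((229, 67, 99), palette)` picks D, since its total difference is the smallest.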
