I did object detection with OpenCV by loading a pre-trained MobileNet SSD model, following this post.
It reads a video and detects objects without any problem. But I would like to use readNet (or readNetFromDarknet) instead of readNetFromCaffe
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])
because I have pre-trained weights and a cfg file for my own objects only in the Darknet framework. Therefore I simply changed readNetFromCaffe into readNet in the above post and got this error:
Traceback (most recent call last):
File "people_counter.py", line 124, in <module>
for i in np.arange(0, detections.shape[2]):
IndexError: tuple index out of range
Here detections is the output of
blob = cv2.dnn.blobFromImage(frame, 1.0/255.0, (416, 416), True, crop=False)
net.setInput(blob)
detections = net.forward()
Its shape is the tuple (1, 1, 100, 7) (when using readNetFromCaffe).
I was kind of expecting that it wouldn't work just by changing the model. I then decided to look for object-detector code where readNet was used, and I found it here. I read through the code and found the following similar lines:
blob = cv2.dnn.blobFromImage(image, scale, (416,416), (0,0,0), True, crop=False)
net.setInput(blob)
outs = net.forward(get_output_layers(net))
Here, the shape of outs is (1, 845, 6) (a list). But in order for me to be able to use it right away (here), outs should have the same shape as detections. I've gotten to this point and have no clue how to proceed.
If something isn't clear: I just need help using readNet (or readNetFromDarknet) instead of readNetFromCaffe in this post.
If we look at the code closely, we can see that everything depends on the outputs of detections (line 121), and we need to tweak those outputs to match them with the outs of this, line 63. After spending almost a day, I came to a reasonable (not perfect) solution. Basically, it is all about the output blobs of readNetFromCaffe and readNetFromDarknet: they output blobs of shape 1x1xNx7 and NxC, respectively, where in both cases N is the number of detections but the per-detection vectors differ. In 1x1xNx7 every detection is a vector of values [batchId, classId, confidence, left, top, right, bottom], while in NxC every detection is a vector of C = number of classes + 5 values, where the first 4 numbers are [center_x, center_y, width, height] and the 5th is the box confidence. After analyzing these, we may replace lines 124-130
for i in np.arange(0, detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > args["confidence"]:
        idx = int(detections[0, 0, i, 1])
        if CLASSES[idx] != "person":
            continue
        box = detections[0, 0, i, 3:7] * np.array([W, H, W, H])
        (startX, startY, endX, endY) = box.astype("int")
with the equivalent lines
for i in np.arange(0, detections.shape[0]):
    scores = detections[i][5:]
    classId = np.argmax(scores)
    confidence = scores[classId]
    if confidence > args["confidence"]:
        idx = int(classId)
        if CLASSES[idx] != "person":
            continue
        center_x = int(detections[i][0] * 416)
        center_y = int(detections[i][1] * 416)
        width = int(detections[i][2] * 416)
        height = int(detections[i][3] * 416)
        left = int(center_x - width / 2)
        top = int(center_y - height / 2)
        right = width + left - 1
        bottom = height + top - 1
        box = [left, top, width, height]
        (startX, startY, endX, endY) = box
This way we can keep track of the "person" class using Darknet's cfg and weights and count people moving up/down across a visualization line.
Again, there may be simpler ways of handling the detections from a Darknet weights file, but this works for this particular case.
A reference: more about the blobs output by readNetFromCaffe and readNetFromDarknet.
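For reference, a minimal sketch of how to inspect the NxC layout yourself (the cfg, weights, and image file names below are placeholders, not from the post):

import cv2

# Placeholder file names; substitute your own Darknet cfg/weights pair and a test frame
net = cv2.dnn.readNetFromDarknet("yolo-obj.cfg", "yolo-obj.weights")
frame = cv2.imread("frame.jpg")

blob = cv2.dnn.blobFromImage(frame, 1.0 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outs = net.forward(net.getUnconnectedOutLayersNames())

for out in outs:
    # Each row is [center_x, center_y, width, height, box confidence, class scores...]
    print(out.shape)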
Related
My problem is that I want my program to announce only the new objects that appear in front of the camera, instead of repeating the objects it already sees. For example, it should say 'person' just once when it sees a person, and when something new comes into the frame, like a cup, it should say 'cup' just once. I have been told that to do this I should store the last detected object of the last frame in a variable, so that only new objects get spoken, but it is not clear to me how to do that in code. I am using opencv, yolov3, and pyttsx3.
import cv2
import numpy as np
import pyttsx3

net = cv2.dnn.readNet('yolov3-tiny.weights', 'yolov3-tiny.cfg')

classes = []
with open("coco.names.txt", "r") as f:
    classes = f.read().splitlines()

cap = cv2.VideoCapture(0)
font = cv2.FONT_HERSHEY_PLAIN
colors = np.random.uniform(0, 255, size=(100, 3))

while True:
    _, img = cap.read()
    height, width, _ = img.shape

    blob = cv2.dnn.blobFromImage(img, 1/255, (416, 416), (0,0,0), swapRB=True, crop=False)
    net.setInput(blob)
    output_layers_names = net.getUnconnectedOutLayersNames()
    layerOutputs = net.forward(output_layers_names)

    boxes = []
    confidences = []
    class_ids = []

    for output in layerOutputs:
        for detection in output:
            scores = detection[5:]
            class_id = np.argmax(scores)
            confidence = scores[class_id]
            if confidence > 0.2:
                center_x = int(detection[0]*width)
                center_y = int(detection[1]*height)
                w = int(detection[2]*width)
                h = int(detection[3]*height)
                x = int(center_x - w/2)
                y = int(center_y - h/2)
                boxes.append([x, y, w, h])
                confidences.append((float(confidence)))
                class_ids.append(class_id)

    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.2, 0.4)

    if len(indexes) > 0:
        for i in indexes.flatten():
            x, y, w, h = boxes[i]
            label = str(classes[class_ids[i]])
            confidence = str(round(confidences[i], 2))
            color = colors[i]
            cv2.rectangle(img, (x, y), (x+w, y+h), color, 2)
            cv2.putText(img, label + " " + confidence, (x, y+20), font, 2, (255, 255, 255), 2)
            engine = pyttsx3.init()
            engine.say(label)
            engine.runAndWait()

    cv2.imshow('Image', img)
    key = cv2.waitKey(1)
    if key == 27:
        break

cap.release()
cv2.destroyAllWindows()
There are whole PhD dissertations about this, and not just a few. It is almost a whole research domain by itself.
You need to track objects. And yolo works on images, not on sequences of images. From yolo's point of view, there is no relation at all between what happens in one frame and what happens in another. The frames could be random, unrelated images grabbed from a photo collection, and it would be the same.
So you need a second stage after yolo, way more complicated, to track objects.
And since yolo is not a sure thing (nor is any other CNN detector), that second stage cannot be a naive update of a dictionary of detected objects. Some objects will be detected in only half of the frames, and you need to track them anyway. Others (for example, 2 pedestrians walking side by side) could be confused, because pedestrian B at frame t+1 is at the same position as pedestrian A was at frame t. And yet, you want to track them.
I can't just give you some code in an answer. It is way bigger than what fits in a Q&A like this.
But I can point you to one algorithm, often used in conjunction with YOLO, for tracking objects: SORT, one implementation of which is this one.
You can easily find many descriptions of how to combine YOLO+SORT, like this one (a very recent one; YOLO+SORT was already a classic combination well before 2020, so I don't know what is new in this specific 2022 article. Maybe nothing; not all articles are new, even if they should be. I just picked one at random).
Or, since nowadays people often prefer videos to written documentation, you can watch this tutorial.
But, well, however you go about learning this, you'll see that it is bigger than just asking on SO.
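To give a feel for how the two pieces fit together, here is a very rough sketch that assumes the abewley/sort implementation (a Sort class whose update() takes rows of [x1, y1, x2, y2, score] and returns rows of [x1, y1, x2, y2, track_id]); it announces something only the first time a track id appears:

import numpy as np
import pyttsx3
from sort import Sort  # assumes https://github.com/abewley/sort is on the path

tracker = Sort()
engine = pyttsx3.init()
announced_ids = set()

def handle_detections(boxes, confidences):
    # boxes / confidences as built in the YOLO loop above, for one frame
    if len(boxes) == 0:
        dets = np.empty((0, 5))
    else:
        dets = np.array([[x, y, x + w, y + h, c]
                         for (x, y, w, h), c in zip(boxes, confidences)])
    tracks = tracker.update(dets)  # rows of [x1, y1, x2, y2, track_id]
    for *_, track_id in tracks:
        track_id = int(track_id)
        if track_id not in announced_ids:
            announced_ids.add(track_id)
            # Mapping a track id back to a class label still has to be done,
            # e.g. by matching the track box to the detection boxes by overlap.
            engine.say("new object")
            engine.runAndWait()

Even this sketch glosses over the hard parts described above (missed detections, identity switches), which is why the SORT paper and implementation are worth reading in full.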
Problem Statement: I have generated a video from images produced from ANSYS simulation data of vortices formed by a plunging flat plate. The video contains vortices (in simpler terms, blobs), which are constantly evolving (dissociating and merging).
Sample video
Objective: The vortices need to be identified and labelled such that label consistency is maintained. If a certain vortex was given a label in the previous frame, its label remains the same. If it dissociates, the larger component (the parent) should retain the same label, whereas the smaller component gets a new label. If two vortices merge, the label of the larger of the two should be given to the merged vortex.
Attempt: I have written a function which detects object boundaries (contour detection) and then finds the center of each identified contour. It then maps each centroid to the closest centroid in the next frame, provided the distance is less than a certain threshold.
Attempted tracking video
Tracking algorithm:
import math

class EuclideanDistTracker:
    def __init__(self):
        # Store the center positions of the objects
        self.center_points = {}
        # Keep the count of the IDs
        # each time a new object id detected, the count will increase by one
        self.id_count = 0

    def update(self, objects_rect):
        # Objects boxes and ids
        objects_bbs_ids = []

        # Get center point of new object
        for rect in objects_rect:
            x, y, w, h = rect
            cx = (x + x + w) // 2
            cy = (y + y + h) // 2

            # Find out if that object was detected already
            same_object_detected = False
            for id, pt in self.center_points.items():
                dist = math.hypot(cx - pt[0], cy - pt[1])
                if dist < 20:  # Threshold
                    self.center_points[id] = (cx, cy)
                    print(self.center_points)
                    objects_bbs_ids.append([x, y, w, h, id])
                    same_object_detected = True
                    break

            # New object is detected we assign the ID to that object
            if same_object_detected is False:
                self.center_points[self.id_count] = (cx, cy)
                objects_bbs_ids.append([x, y, w, h, self.id_count])
                self.id_count += 1

        # Clean the dictionary by center points to remove IDS not used anymore
        new_center_points = {}
        for obj_bb_id in objects_bbs_ids:
            _, _, _, _, object_id = obj_bb_id
            center = self.center_points[object_id]
            new_center_points[object_id] = center

        # Update dictionary with IDs not used removed
        self.center_points = new_center_points.copy()
        return objects_bbs_ids
Implementing the tracking algorithm on the sample video:
import cv2
import numpy as np
from tracker import *

# Create tracker object
tracker = EuclideanDistTracker()

cap = cv2.VideoCapture("Video Source")
count = 0

while True:
    ret, frame = cap.read()
    print("\n")
    if(count != 0):
        print("Frame Count: ", count)
    frame = cv2.resize(frame, (0, 0), fx = 1.5, fy = 1.5)
    height, width, channels = frame.shape

    # 1. Object Detection
    hsvFrame = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    sensitivity = 20
    white_lower = np.array([0, 0, 255-sensitivity])
    white_upper = np.array([255, sensitivity, 255])
    white_mask = cv2.inRange(hsvFrame, white_lower, white_upper)

    # Morphological Transform, Dilation
    kernal = np.ones((3, 3), "uint8")
    c = 1

    contours_w, hierarchy_w = cv2.findContours(white_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    detections = []
    for contour_w in contours_w:
        area = cv2.contourArea(contour_w)
        if area > 200 and area < 100000:
            cv2.drawContours(frame, [contour_w], -1, (0, 0, 0), 1)
            x, y, w, h = cv2.boundingRect(contour_w)
            detections.append([x, y, w, h])

    # 2. Object Tracking
    boxes_ids = tracker.update(detections)
    for box_id in boxes_ids:
        x, y, w, h, id = box_id
        cv2.putText(frame, str(id), (x - 8, y + 8), cv2.FONT_HERSHEY_PLAIN, 2, (0, 0, 0), 2)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 0), 1)

    cv2.imshow("Frame", frame)
    count += 1

    key = cv2.waitKey(0)
    if key == 27:
        break

cap.release()
cv2.destroyAllWindows()
Problem: I was able to implement continuous labelling, but the objective of retaining the label of the parent vortex is not met. (As seen from time t = 0 s to 9 s, the largest vortex is given label = 3, whereas from t = 9 s onward it is given label = 9. I want it to remain label = 3 in the attempted tracking video.) Any suggestions would be helpful, along with guidance on whether I am on the right track or whether I should use some other standard tracking algorithm or a deep learning approach.
PS: Sorry for the excessive text, but Stack Overflow did not allow me to post just the links to the code.
I've tried reading about bilinear interpolation on the Wikipedia page https://en.wikipedia.org/wiki/Bilinear_interpolation, and I have implemented one of the algorithms. I wanted to know if I'm doing it right or not.
This is the algorithm I implemented from the page:
This is what I tried to implement in code:
for(int i = 0; i < w; i++) {
    for( int j = 0; j < h; j++) {
        new_img(i,j,0) = old_img(i,j,0)*(1-i)(1-j) + old_img(i+1,j,0)*i*(1-j) + old_img(i,j+1)*(1-i)*j + old_img(i+1,j+1,0)*i*j;
    }
}
Is that how to implement it?
Bilinear interpolation deals with resampling an image (e.g. upsampling it). It tries to estimate the value of an array (e.g. an image) between the known values. For example, if I have an image and I want to estimate its value at the location (10.5, 24.5), the value will be a weighted average of the values at the four neighbors (10, 24), (10, 25), (11, 24), (11, 25), which are known. The formula is the equation you posted in your question.
This is the python code:
import numpy as np

def do_interpolation(f, x, y):
    # f is the 2x2 neighborhood around the mapped point,
    # (x, y) are the fractional offsets inside that neighborhood
    return np.array([1 - x, x]).dot(f).dot([[1 - y], [y]])

# image: the input (grayscale) image as a numpy array
scale_factor = 0.7
shape = np.array(image.shape[:2])
new_shape = np.ceil(shape * scale_factor).astype(int)

grid = np.meshgrid(np.linspace(0, 1, new_shape[1]), np.linspace(0, 1, new_shape[0]))
grid_mapping = [i * s for i, s in zip(grid, shape[::-1])]

resized_image = np.zeros(tuple(new_shape))
for x_new_im in range(new_shape[1]):
    for y_new_im in range(new_shape[0]):
        mapping = [grid_mapping[i][y_new_im, x_new_im] for i in range(2)]
        x_1, y_1 = np.floor(mapping).astype(int)
        try:
            resized_image[y_new_im, x_new_im] = do_interpolation(
                f=image[y_1: y_1 + 2, x_1: x_1 + 2],
                x=mapping[0] - x_1, y=mapping[1] - y_1)
        except ValueError:
            pass
Explanation:
new_shape = np.ceil(shape * scale_factor).astype(int) - calculate the new image shape after rescaling.
grid = np.meshgrid(np.linspace(0, 1, new_shape[1]), np.linspace(0, 1, new_shape[0]))
grid_mapping = [i * s for i, s in zip(grid, shape[::-1])] - generate an x, y grid, where x goes from 0 to the width of the resized image and y goes from 0 to the height of the new image. Now we have the inverse mapping from the new image to the original one.
In the for loop we look at every pixel in the resized image and find where its source lies in the original image. The source coordinates will generally not be integers but floats (with a fractional part).
Now we take the four pixels in the original image that surround the mapped x and y. For example, if the mapped x, y is (17.3, 25.7), we take the four values of the original image at the pixels (17, 25), (17, 26), (18, 25), (18, 26). Then we apply the equation you posted.
Note: I added a try-except because I did not want to deal with the image boundaries, but you can edit the code to handle them.
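As a quick sanity check of do_interpolation, here is a hypothetical 2x2 patch evaluated at its centre, which should give the mean of the four corners:

import numpy as np

def do_interpolation(f, x, y):
    return np.array([1 - x, x]).dot(f).dot([[1 - y], [y]])

# Made-up 2x2 neighborhood; (x, y) are the fractional offsets inside it
patch = np.array([[10.0, 20.0],
                  [30.0, 40.0]])
print(do_interpolation(patch, x=0.5, y=0.5))  # [25.] -- the average of the four corners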
Problem description
We are trying to match a scanned image onto a template image:
Example of a scanned image:
Example of a template image:
The template image contains a collection of hearts varying in size and contour properties (closed, open left, and open right). Each heart in the template is a Region of Interest for which we know the location, size, and contour type. Our goal is to match the scanned image onto the template so that we can extract these ROIs from the scanned image. In the scanned image, some of these hearts are crossed, and they will be presented to a classifier that decides whether they are crossed or not.
Our approach
Following a tutorial on PyImageSearch, we have attempted to use ORB to find matching keypoints (code included below). This should allow us to compute a perspective transform matrix that maps the scanned image on the template image.
We have tried some preprocessing steps such as thresholding and/or blurring the scanned image. We have also tried to increase the maximum number of features as much as possible.
The problem
The method fails to work for our image set. This can be seen in the following image:
It appears that a lot of keypoints are mapped to the wrong part of the template image, so the transform matrix is not calculated correctly.
Is ORB the right technique to use here, or are there parameters of the algorithm that could be fine-tuned to improve performance? It feels like we are missing something simple that should make it work, but we really don't know how to move forward with this approach :).
We are trying out an alternative technique where we cross-correlate the scan with individual heart shapes. This should give an image with peaks at the heart locations. By drawing a bounding box around these peaks we hope to map that bounding box onto the bounding box of the template (I can elaborate on this upon request).
Any suggestions are greatly appreciated!
import cv2 as cv
import matplotlib.pyplot as plt
import numpy as np

# Preprocessing parameters
THRESHOLD = True
BLUR = False

# ORB parameters
MAX_FEATURES = 4048
KEEP_PERCENT = .01
SHOW_DEBUG = True

# Convert both the input image and template to grayscale
scan_file = r'scan.jpg'
template_file = r'template.jpg'

scan = cv.imread(scan_file)
template = cv.imread(template_file)

scan_gray = cv.cvtColor(scan, cv.COLOR_BGR2GRAY)
template_gray = cv.cvtColor(template, cv.COLOR_BGR2GRAY)

if THRESHOLD:
    _, scan_gray = cv.threshold(scan_gray, 127, 255, cv.THRESH_BINARY)
    _, template_gray = cv.threshold(template_gray, 127, 255, cv.THRESH_BINARY)

if BLUR:
    scan_gray = cv.blur(scan_gray, (5, 5))
    template_gray = cv.blur(template_gray, (5, 5))

# Use ORB to detect keypoints and extract (binary) local invariant features
orb = cv.ORB_create(MAX_FEATURES)
(kps_template, desc_template) = orb.detectAndCompute(template_gray, None)
(kps_scan, desc_scan) = orb.detectAndCompute(scan_gray, None)

# Match the features
#method = cv.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING
#matcher = cv.DescriptorMatcher_create(method)
#matches = matcher.match(desc_scan, desc_template)
bf = cv.BFMatcher(cv.NORM_HAMMING)
matches = bf.match(desc_scan, desc_template)

# Sort the matches by their distances
matches = sorted(matches, key = lambda x : x.distance)

# Keep only the top matches
keep = int(len(matches) * KEEP_PERCENT)
matches = matches[:keep]

if SHOW_DEBUG:
    matched_visualization = cv.drawMatches(scan, kps_scan, template, kps_template, matches, None)
    plt.imshow(matched_visualization)
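For completeness, a sketch of the homography step that would typically follow the matching above (it reuses the variable names from the snippet; the RANSAC reprojection threshold is an assumption, not a tuned value):

# Coordinates of the kept matches in both images
pts_scan = np.float32([kps_scan[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts_template = np.float32([kps_template[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Estimate the perspective transform with RANSAC to reject bad matches
H, inlier_mask = cv.findHomography(pts_scan, pts_template, cv.RANSAC, 5.0)

# Warp the scan into the template's coordinate frame
h, w = template_gray.shape
aligned = cv.warpPerspective(scan, H, (w, h))

if SHOW_DEBUG:
    plt.imshow(cv.cvtColor(aligned, cv.COLOR_BGR2RGB))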
Based on the clarifications provided by @it_guy, I have attempted to find all the crossed hearts using just the scanned image. I would have to try the algorithm on more images to check whether this approach generalizes or not.
Binarize the scanned image.
gray_image = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray_image, 180, 255, cv2.THRESH_BINARY_INV)
Perform dilation to close small gaps in the outline of the hearts and in the curves representing crosses. Note - the structuring element np.ones((1,2), np.uint8) can be changed by running the algorithm on multiple images and finding the most suitable structuring element.
closing_original = cv2.morphologyEx(original_binary, cv2.MORPH_DILATE, np.ones((1,2), np.uint8))
Find all the contours in the image. The contours include all hearts and the triangle at the bottom. We eliminate other contours, like dots, by placing constraints on the height and width of the contours. Further, we also use contour hierarchies to eliminate the inner contours of crossed hearts.
contours_original, hierarchy_original = cv2.findContours(closing_original, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
We iterate through each of the filtered contours.
Contour with normal heart -
Contour with crossed heart -
Let us observe the difference between these two types of hearts. If we look at the white-to-black and black-to-white pixel transitions (from top to bottom) inside the normal heart, we see that for the majority of the image columns the number of such transitions is 4 (top border - 2 transitions, bottom border - 2 transitions).
white-to-black pixel - (255, 255, 0, 0, 0)
black-to-white pixel - (0, 0, 255, 255, 255)
But in the case of the crossed heart, the number of transitions for the majority of the columns must be 6 (or more), since the crossing curve / line adds two more transitions inside the heart (black-to-white first, then white-to-black). Hence, among all image columns which have at least 4 such transitions, if more than 40% of the columns have at least 6 transitions, then the given contour represents a crossed heart. Result -
Code -
import cv2
import numpy as np

def convert_to_binary(rgb_image):
    gray_image = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2GRAY)
    ret, thresh = cv2.threshold(gray_image, 180, 255, cv2.THRESH_BINARY_INV)
    return gray_image, thresh

original = cv2.imread('original.jpg')
height, width = original.shape[:2]

original_gray, original_binary = convert_to_binary(original)  # Get binary image
cv2.imwrite("binary.jpg", original_binary)

closing_original = cv2.morphologyEx(original_binary, cv2.MORPH_DILATE, np.ones((1,2), np.uint8))  # Close small gaps in the binary image
cv2.imwrite("closed.jpg", closing_original)

contours_original, hierarchy_original = cv2.findContours(closing_original, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)  # Get all the contours
bounding_rects_original = [cv2.boundingRect(c) for c in contours_original]  # Get all contour bounding boxes

orig_boxes = list()
all_contour_image = original.copy()

for i, (x, y, w, h) in enumerate(bounding_rects_original):
    if h > height / 2 or w > width / 2:  # Eliminate extremely large contours
        continue
    if h < w / 2 or w < h / 2:  # Eliminate vertical / horizontal lines
        continue
    if w * h < 200:  # Eliminate small area contours
        continue
    if hierarchy_original[0][i][3] != -1:  # Eliminate contours created by heart crosses
        continue
    orig_boxes.append((x, y, w, h))
    cv2.rectangle(all_contour_image, (x, y), (x + w, y + h), (0, 255, 0), 3)

# cv2.imshow("warped", closing_original)
cv2.imwrite("all_contours.jpg", all_contour_image)

final_image = original.copy()
for x, y, w, h in orig_boxes:
    cropped_image = closing_original[y - 2: y + h + 2, x: x + w]  # Get the heart binary image

    col_pixel_diffs = np.abs(np.diff(cropped_image.T.astype(np.int16)) / 255)  # Obtain all consecutive pixel differences in all the columns

    column_sums = np.sum(col_pixel_diffs, axis=1)  # Get the sum of each column's transitions. This results in an array of size equal
    # to the number of columns, each element representing the number of black-white and white-black transitions.

    percent_crosses = np.sum(column_sums >= 6) / np.sum(column_sums >= 4)  # Percentage of columns with 6 transitions among columns with 4 transitions

    if percent_crosses > 0.4:  # Crossed heart criterion
        cv2.rectangle(final_image, (x, y), (x + w, y + h), (0, 255, 0), 3)
        cv2.imwrite("crossed_heart.jpg", cropped_image)
    else:
        cv2.imwrite("normal_heart.jpg", cropped_image)

cv2.imwrite("all_crossed_hearts.jpg", final_image)
This approach can be tested on more images to find its accuracy.
I would like to detect these points in this graph, and I would also like to detect the lines.
I searched for edge detection and corner detection (such as the Harris corner detector), but I don't know how to handle such a graph. I only need a pseudo-algorithm, or the steps for working through such a problem.
Detect the vertices - segment by color (r >> max(g, b)) and then apply a median or minimum filter of the appropriate size, or simply binary-erode a few times. Then just label the remaining connected blobs.
Detect the lines - use a simplified Hough transform. Basically, draw a virtual line from the center of each vertex to all the others and count the red pixels along the line. If there are plenty of them, the line exists; otherwise the two vertices are not connected.
Something like that:
import numpy as np
from scipy.misc import imshow, imsave, imread
from scipy.ndimage import filters, morphology, measurements
from skimage.draw import line

img = imread("laGK6.jpg")
r = img[:,:, 0]
g = img[:,:, 1]
b = img[:,:, 2]

mask = (r.astype(np.float)-np.maximum(g,b) ) > 20

mask2 = morphology.binary_erosion(mask)
mask2 = morphology.binary_erosion(mask2)
mask2 = morphology.binary_erosion(mask2)
mask2 = morphology.binary_erosion(mask2)
mask2 = morphology.binary_dilation(mask2)

label, numfeatures = measurements.label(mask2)
mc = measurements.center_of_mass(mask2, label, range(1,numfeatures+1) )

mask3 = np.zeros_like(mask2)
for p in mc:
    mask3[p[0], p[1]] = 255

arr = range(numfeatures)
connections = []
for i in range(numfeatures):
    arr.remove(i)
    for j in arr:
        rr, cc = line(mc[i][0], mc[i][1], mc[j][0], mc[j][1])
        mask3[rr, cc] = 255
        ms = np.sum(mask[rr, cc]).astype(np.float)/len(rr)
        if ms > 0.9:
            connections.append((i, j))

print "vertices: ", mc
print "connections: ", connections
This outputs the following:
vertices: [(76.551724137931032, 288.72413793103448),
 (76.568181818181813, 613.61363636363637),
 (138.72727272727272, 126.04545454545455),
 (139.33333333333334, 450.33333333333331),
 (265.18181818181819, 207.5151515151515),
 (264.96666666666664, 369.53333333333336),
 (265.41379310344826, 694.51724137931035),
 (265.51724137931035, 45.379310344827587),
 (327.57692307692309, 532.42307692307691)]
connections: [(0, 4), (0, 5), (1, 6), (1, 8), (2, 4), (2, 7), (3, 5), (3, 8)]
I am also working on a project that detects shapes in a drawing. I am not sure if it will solve your problem as well, but here is what I have done for such problems.
I am assuming that you need the X and Y coordinates of those edge points.
The first thing you need is the X and Y values of the complete shape.
Next, inside a loop, put an if condition saying "get this point if Y[i] < Y[i+1] and Y[i] < Y[i-1]", i.e. a point whose next and previous points both have a Y value greater than the current Y.
This condition will give you the X and Y values of the edge points (see the small sketch after this answer).
Good Luck
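A rough sketch of that check (X and Y here are hypothetical lists of the traced shape's coordinates, in order):

# Hypothetical ordered coordinates of the traced shape
X = [0, 1, 2, 3, 4, 5, 6]
Y = [5, 3, 1, 2, 4, 2, 5]

edge_points = []
for i in range(1, len(Y) - 1):
    # Keep points whose neighbors both have a larger Y (a local minimum in Y)
    if Y[i] < Y[i - 1] and Y[i] < Y[i + 1]:
        edge_points.append((X[i], Y[i]))

print(edge_points)  # [(2, 1), (5, 2)]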
If the graph is always the same color, and the vertices are always marked with squares, you can threshold the image by its color to detect the lines and vertices. Then look for connected sets of pixels whose width and height match those of the square markers, which you can simply measure from the image.
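A minimal sketch of that idea (the HSV bounds for the graph color and the expected marker size are assumptions you would measure from your own image):

import cv2
import numpy as np

img = cv2.imread("graph.jpg")  # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Assumed HSV range for the red-ish graph color; adjust to your image
mask = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))

n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
for i in range(1, n):  # label 0 is the background
    x, y, w, h, area = stats[i]
    # Assumed marker size of roughly 10-25 px; long thin blobs are lines instead
    if 10 <= w <= 25 and 10 <= h <= 25:
        print("vertex at", centroids[i])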