OpenCV - detect components of certain width

I have an image with detected components. From this I need to detect the components that form a "polyline" of a certain width (white and red in the image below).
What algorithm is best for this in OpenCV? I have tried separating all components one by one and using morphological operations, but that was quite slow and not entirely accurate.
Note: the image below is downsampled. The original image has 8K resolution and the border thickness is approx. 30-40 px.

I like your question - it is kind of like granulometry of lines instead of grains.
My approach is to find the unique colours in your image and then, for each colour:
isolate that colour as white on black
repeatedly erode by 3 pixels till nothing is left
Note that 20-30% of the code below is just for debug and explanation; it could also be sped up with multi-processing and a little tweaking.
#!/usr/bin/env python3

import cv2
import numpy as np
from skimage.morphology import erosion, disk

def getColoursAndCounts(im):
    """Returns list of unique colours in an image and their counts."""
    # Make a single 24-bit number for each pixel - it's faster
    f = np.dot(im.astype(np.uint32), [1, 256, 65536])
    # Count unique colours in image and how often they occur
    colours, counts = np.unique(f, return_counts=True)
    # Convert found colours back from 24-bit number to BGR
    return np.dstack((colours & 255, (colours >> 8) & 255, colours >> 16)).reshape((-1, 3)), counts

if __name__ == "__main__":
    # Load image and get colours present and their counts
    im = cv2.imread('classes_fs.png', cv2.IMREAD_COLOR)
    colours, counts = getColoursAndCounts(im)

    # Iterate over unique colours/classes - this could easily be multi-processed
    for index, colour in enumerate(colours):
        b, g, r = colour
        count = counts[index]
        print(f'DEBUG: Processing class {index}, colour ({b},{g},{r}), area {count}')

        # Generate this class in white on a black background for processing
        m = np.where(np.all(im == colour, axis=-1), 255, 0).astype(np.uint8)

        # Create debug image - can be omitted
        cv2.imwrite(f'class-{index}.png', m)

        # DEBUG only - show progression of erosion
        out = m.copy()

        # You could trim the excess black around the shape here to speed up morphology

        # Erode repeatedly with a disk of radius 3 to determine line width
        radius = 3
        selem = disk(radius)
        for j in range(1, 7):
            # Erode again, see what's left
            m = erosion(m, selem)
            c = cv2.countNonZero(m)
            percRem = int(c * 100 / count)
            print(f'   Iteration: {j}, nonZero: {c}, %remaining: {percRem}')
            # DEBUG only
            out = np.hstack((out, m))
            if c == 0:
                break

        # DEBUG only
        cv2.imwrite(f'erosion-{index}.png', out)
So, the 35 unique colours in your image give rise to these classes once isolated:
Here is the output:
DEBUG: Processing class 0, colour (0,0,0), area 629800
Iteration: 1, nonZero: 390312, %remaining: 61
Iteration: 2, nonZero: 206418, %remaining: 32
Iteration: 3, nonZero: 123643, %remaining: 19
Iteration: 4, nonZero: 73434, %remaining: 11
Iteration: 5, nonZero: 40059, %remaining: 6
Iteration: 6, nonZero: 21975, %remaining: 3
DEBUG: Processing class 1, colour (10,14,0), area 5700
Iteration: 1, nonZero: 2024, %remaining: 35
Iteration: 2, nonZero: 38, %remaining: 0
Iteration: 3, nonZero: 3, %remaining: 0
Iteration: 4, nonZero: 0, %remaining: 0
...
...
DEBUG: Processing class 22, colour (174,41,180), area 3600
Iteration: 1, nonZero: 1501, %remaining: 41
Iteration: 2, nonZero: 222, %remaining: 6
Iteration: 3, nonZero: 17, %remaining: 0
Iteration: 4, nonZero: 0, %remaining: 0
DEBUG: Processing class 23, colour (241,11,185), area 200
Iteration: 1, nonZero: 56, %remaining: 28
Iteration: 2, nonZero: 0, %remaining: 0
DEBUG: Processing class 24, colour (247,23,185), area 44800
Iteration: 1, nonZero: 38666, %remaining: 86
Iteration: 2, nonZero: 32982, %remaining: 73
Iteration: 3, nonZero: 27904, %remaining: 62
Iteration: 4, nonZero: 23364, %remaining: 52
Iteration: 5, nonZero: 19267, %remaining: 43
Iteration: 6, nonZero: 15718, %remaining: 35
DEBUG: Processing class 25, colour (165,142,185), area 33800
Iteration: 1, nonZero: 30506, %remaining: 90
Iteration: 2, nonZero: 27554, %remaining: 81
Iteration: 3, nonZero: 24970, %remaining: 73
Iteration: 4, nonZero: 22603, %remaining: 66
Iteration: 5, nonZero: 20351, %remaining: 60
Iteration: 6, nonZero: 18206, %remaining: 53
DEBUG: Processing class 26, colour (26,147,198), area 2100
Iteration: 1, nonZero: 913, %remaining: 43
Iteration: 2, nonZero: 152, %remaining: 7
Iteration: 3, nonZero: 12, %remaining: 0
Iteration: 4, nonZero: 0, %remaining: 0
DEBUG: Processing class 27, colour (190,39,199), area 18500
Iteration: 1, nonZero: 6265, %remaining: 33
Iteration: 2, nonZero: 0, %remaining: 0
DEBUG: Processing class 28, colour (149,210,201), area 2200
Iteration: 1, nonZero: 598, %remaining: 27
Iteration: 2, nonZero: 0, %remaining: 0
DEBUG: Processing class 29, colour (188,169,216), area 10700
Iteration: 1, nonZero: 9643, %remaining: 90
Iteration: 2, nonZero: 8664, %remaining: 80
Iteration: 3, nonZero: 7763, %remaining: 72
Iteration: 4, nonZero: 6932, %remaining: 64
Iteration: 5, nonZero: 6169, %remaining: 57
Iteration: 6, nonZero: 5460, %remaining: 51
DEBUG: Processing class 30, colour (100,126,217), area 5624300
Iteration: 1, nonZero: 5565713, %remaining: 98
Iteration: 2, nonZero: 5511150, %remaining: 97
Iteration: 3, nonZero: 5464286, %remaining: 97
Iteration: 4, nonZero: 5420125, %remaining: 96
Iteration: 5, nonZero: 5377851, %remaining: 95
Iteration: 6, nonZero: 5337091, %remaining: 94
DEBUG: Processing class 31, colour (68,238,237), area 2100
Iteration: 1, nonZero: 1446, %remaining: 68
Iteration: 2, nonZero: 922, %remaining: 43
Iteration: 3, nonZero: 589, %remaining: 28
Iteration: 4, nonZero: 336, %remaining: 16
Iteration: 5, nonZero: 151, %remaining: 7
Iteration: 6, nonZero: 38, %remaining: 1
DEBUG: Processing class 32, colour (131,228,240), area 4000
Iteration: 1, nonZero: 3358, %remaining: 83
Iteration: 2, nonZero: 2788, %remaining: 69
Iteration: 3, nonZero: 2290, %remaining: 57
Iteration: 4, nonZero: 1866, %remaining: 46
Iteration: 5, nonZero: 1490, %remaining: 37
Iteration: 6, nonZero: 1154, %remaining: 28
DEBUG: Processing class 33, colour (0,0,255), area 8500
Iteration: 1, nonZero: 6046, %remaining: 71
Iteration: 2, nonZero: 3906, %remaining: 45
Iteration: 3, nonZero: 2350, %remaining: 27
Iteration: 4, nonZero: 1119, %remaining: 13
Iteration: 5, nonZero: 194, %remaining: 2
Iteration: 6, nonZero: 18, %remaining: 0
DEBUG: Processing class 34, colour (255,255,255), area 154300
Iteration: 1, nonZero: 117393, %remaining: 76
Iteration: 2, nonZero: 82930, %remaining: 53
Iteration: 3, nonZero: 51625, %remaining: 33
Iteration: 4, nonZero: 24842, %remaining: 16
Iteration: 5, nonZero: 6967, %remaining: 4
Iteration: 6, nonZero: 2020, %remaining: 1
If we look at class 34 - the one you are interested in - the successive erosions look like this. You can see the shape disappearing completely at a radius of around 15 pixels, which corresponds to losing 15 pixels on the left and 15 pixels on the right of your 30-pixel-wide shape:
If you plot the percentage of pixels remaining after each successive erosion, you can easily see the difference between class 34, where it goes to zero after 5-6 erosions of 3 pixels each (i.e. 15-18 pixels), and class 25, where it doesn't:
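If it helps, here is a minimal matplotlib sketch of that plot; the numbers are simply the per-iteration percentages copied from the debug output above for classes 34 and 25 (in practice you would collect percRem inside the loop):

import matplotlib.pyplot as plt

# Percentage of pixels remaining after each 3-px erosion,
# copied from the debug output above for classes 34 and 25
remaining = {
    34: [76, 53, 33, 16, 4, 1],
    25: [90, 81, 73, 66, 60, 53],
}

for cls, percs in remaining.items():
    plt.plot(range(1, len(percs) + 1), percs, marker='o', label=f'class {cls}')

plt.xlabel('erosion iteration (3 px each)')
plt.ylabel('% of pixels remaining')
plt.legend()
plt.show()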
Notes:
For anyone wishing to run my code, note that I up-scaled the input image (nearest-neighbour resampling) to 10x its current size with ImageMagick:
magick classes.png -scale 1000%x classes_fs.png
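As mentioned above, each colour class is processed independently, so the per-colour loop parallelises trivially. A rough multi-processing sketch (processClass is a hypothetical helper wrapping the erosion loop above, and getColoursAndCounts is the function defined earlier in this answer) might look like this:

from multiprocessing import Pool

import cv2
import numpy as np
from skimage.morphology import erosion, disk

def processClass(args):
    """Hypothetical helper: run the erosion loop from above for one colour class."""
    index, colour, count, im = args
    # Note: passing the full 8K image to every worker is wasteful -
    # cropping to the class's bounding box first would help
    m = np.where(np.all(im == colour, axis=-1), 255, 0).astype(np.uint8)
    selem = disk(3)
    percs = []
    for j in range(1, 7):
        m = erosion(m, selem)
        c = cv2.countNonZero(m)
        percs.append(int(c * 100 / count))
        if c == 0:
            break
    return index, percs

if __name__ == "__main__":
    im = cv2.imread('classes_fs.png', cv2.IMREAD_COLOR)
    colours, counts = getColoursAndCounts(im)   # defined earlier in this answer
    work = [(i, colour, counts[i], im) for i, colour in enumerate(colours)]
    with Pool() as pool:
        for index, percs in pool.map(processClass, work):
            print(index, percs)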

I mentioned this concept in my comment. One inelegant way to achieve this could be something like this:
import cv2
import numpy as np

# img is assumed to be a binary, single-channel mask of one class
# (three return values is the OpenCV 3.x signature; OpenCV 4.x returns (contours, hierarchy))
_, ctrs, hierarchy = cv2.findContours(img, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)

out = np.zeros(img.shape[:2], dtype="uint8")
epsilon = 1.0
desired_width = 30.0

for i in range(len(ctrs)):
    # Skip inner contours (holes), i.e. contours that have a parent
    if hierarchy[0][i][3] != -1:
        continue
    a = cv2.contourArea(ctrs[i])
    p = cv2.arcLength(ctrs[i], True)
    print(a, p)
    # For a long, thin strip: area ~ width * length and perimeter ~ 2 * length + 2 * width,
    # so area / ((perimeter - 2 * width) / 2) approximates the width
    if a != 0 and p != 0 and abs(a / ((p - 2 * desired_width) / 2) - desired_width) < epsilon:
        cv2.drawContours(out, [ctrs[i]], -1, 255, -1)
A few parameters might need adjusting based on how OpenCV calculates area and perimeter.
EDIT: Adding a test image which has 4 squiggly lines 14-16 px wide. Of course, these are way too simplistic compared to the images you are dealing with.

You can try this (a rough sketch of the steps follows below):
Convert the image to a two-colour (binary) image: objects are white, borders are black.
Erode all objects by 15-20 pixels. This gives the marker.
Morphologically reconstruct the original image from the marker. You get an image without the narrow lines.
Bitwise XOR of the results of steps 1 and 3.
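Here is that sketch in Python, assuming scikit-image is available and that 'mask.png' is a hypothetical binarised version of the image:

import cv2
import numpy as np
from skimage.morphology import reconstruction

# Step 1: binary image - objects white, borders black ('mask.png' is hypothetical)
binary = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(binary, 127, 255, cv2.THRESH_BINARY)

# Step 2: erode by roughly half the expected line width (15 px radius here)
# so that the narrow lines vanish but the wide regions keep a seed
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (31, 31))
marker = cv2.erode(binary, kernel)

# Step 3: morphological reconstruction grows the marker back inside the
# original mask only - the narrow lines never come back
reconstructed = reconstruction(marker, binary, method='dilation').astype(np.uint8)

# Step 4: XOR the original and the reconstruction - only the narrow lines remain
narrow = cv2.bitwise_xor(binary, reconstructed)
cv2.imwrite('narrow_lines.png', narrow)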

Related

How to make a predictive model using time-series data consisting of binary information?

How would you do regression to predict the state at a future time:
   Series  Year  Month  Day  State
0       1  2019     12   13  [1, 0, 0, 1, 0, 0]
1       2  2019     12   17  [0, 1, 0, 0, 1, 0]
2       3  2019     12   20  [0, 0, 1, 0, 1, 0]
3       4  2019     12   24  [0, 1, 0, 1, 0, 0]
4       5  2019     12   27  [0, 1, 0, 0, 1, 0]
5       6  2019     12   31  [0, 0, 0, 1, 0, 1]
6       7  2020      1    3  [1, 0, 0, 0, 0, 1]
...
        ?  (some future date)  ?
Basically, I want to know the state at some future time, in the form of a binary list.
NOTE:
Every single row has its own unique state that is not the same as any other row.

Yolov5 model not able to train

I'm making a model to detect potholes in an image. I've done everything right or so it seems to me, but I can't train the model for some reason. What might be the problem here?
!python train.py --img 640 --cfg yolov5m.yaml --hyp data/hyps/hyp.scratch-med.yaml --batch 20 --epochs 300 --data data/potholeData.yaml --weights yolov5m.pt --workers 4 --name yolo_pothole_det_m
This is the final line of the code, which outputs the following.
train: weights=yolov5m.pt, cfg=yolov5m.yaml, data=data/potholeData.yaml, hyp=data/hyps/hyp.scratch-med.yaml, epochs=300, batch_size=20, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=4, project=runs/train, name=yolo_pothole_det_m, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest
github: up to date with https://github.com/ultralytics/yolov5 ✅
YOLOv5 🚀 v7.0-23-g5dc1ce4 Python-3.9.13 torch-1.13.0 CPU
hyperparameters: lr0=0.01, lrf=0.1, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.3, cls_pw=1.0, obj=0.7, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.9, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.1, copy_paste=0.0
ClearML: run 'pip install clearml' to automatically track, visualize and remotely train YOLOv5 🚀 in ClearML
Comet: run 'pip install comet_ml' to automatically track and visualize YOLOv5 🚀 runs in Comet
TensorBoard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/
Overriding model.yaml nc=80 with nc=1
from n params module arguments
0 -1 1 5280 models.common.Conv [3, 48, 6, 2, 2]
1 -1 1 41664 models.common.Conv [48, 96, 3, 2]
2 -1 2 65280 models.common.C3 [96, 96, 2]
3 -1 1 166272 models.common.Conv [96, 192, 3, 2]
4 -1 4 444672 models.common.C3 [192, 192, 4]
5 -1 1 664320 models.common.Conv [192, 384, 3, 2]
6 -1 6 2512896 models.common.C3 [384, 384, 6]
7 -1 1 2655744 models.common.Conv [384, 768, 3, 2]
8 -1 2 4134912 models.common.C3 [768, 768, 2]
9 -1 1 1476864 models.common.SPPF [768, 768, 5]
10 -1 1 295680 models.common.Conv [768, 384, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 2 1182720 models.common.C3 [768, 384, 2, False]
14 -1 1 74112 models.common.Conv [384, 192, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 2 296448 models.common.C3 [384, 192, 2, False]
18 -1 1 332160 models.common.Conv [192, 192, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 2 1035264 models.common.C3 [384, 384, 2, False]
21 -1 1 1327872 models.common.Conv [384, 384, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 2 4134912 models.common.C3 [768, 768, 2, False]
24 [17, 20, 23] 1 24246 models.yolo.Detect [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [192, 384, 768]]
Isn't it supposed to train the model after that? What am I doing wrong for it to stop right here?
In the output you can see that it didn't read any image dataset, so make sure that your potholeData.yaml file is located correctly. In this file you need something like:
train: ../train/images  # path to train images
val: ../valid/images    # path to valid images
nc: 1                   # number of classes
names: ['pothole']      # class names (use your own class name here)
After this you can run it again and your training will continue.
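A quick, hypothetical sanity check of those paths in Python (PyYAML is already a YOLOv5 dependency) could look like the sketch below; note that, depending on the YOLOv5 version, relative paths may be resolved against the yaml file's folder rather than the working directory:

from pathlib import Path
import yaml  # PyYAML

# Load the dataset yaml and check that the train/val folders actually exist
cfg = yaml.safe_load(Path('data/potholeData.yaml').read_text())
for key in ('train', 'val'):
    p = Path(cfg[key])
    print(key, '->', p.resolve(), 'exists:', p.exists())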

Q: sorting a table of numbers in Lua

I've looked around for a bit to find a solution to my problem but I haven't found anything that completely fixes it. Essentially the function does sort, but it doesn't sort the numbers in the table - just the keys 1 through 10.
local numbers = {18, 45, 90, 77, 65, 18, 3, 57, 81, 10}

local function selectionSort(t) -- t is the table to be sorted
    local t = {18, 45, 90, 77, 65, 18, 3, 57, 81, 10}
    local tkeys = {}
    for k in pairs(t) do table.insert(tkeys, k) end
    table.sort(tkeys)
    for _, k in ipairs(tkeys) do print(k, t[k]) end
    return t -- return the sorted table
end
list = selectionSort(list)
and this is what comes out
1 18
2 45
3 90
4 77
5 65
6 18
7 3
8 57
9 81
10 10
and what I want is
3 18
10 45
18 90
18 77
45 65
57 18
65 3
77 57
81 81
90 10
any solutions?
You are taking the key from your input, but you want the value.
You can change it to:
local numbers = {18, 45, 90, 77, 65, 18, 3, 57, 81, 10}

local function selectionSort(t) -- t is the table to be sorted
    local tSorted = {}
    for _, v in pairs(t) do
        table.insert(tSorted, v)
    end
    table.sort(tSorted)
    for i = 1, #t do
        print(tSorted[i], t[i])
    end
    return tSorted -- return the sorted table
end
list = selectionSort(numbers)
and you will get:
sorted original
3 18
10 45
18 90
18 77
45 65
57 18
65 3
77 57
81 81
90 10

Use a custom kernel / image filter to find a specific pattern in a 2d array

Given an image im,
>>> np.random.seed(0)
>>> im = np.random.randint(0, 100, (10,5))
>>> im
array([[44, 47, 64, 67, 67],
[ 9, 83, 21, 36, 87],
[70, 88, 88, 12, 58],
[65, 39, 87, 46, 88],
[81, 37, 25, 77, 72],
[ 9, 20, 80, 69, 79],
[47, 64, 82, 99, 88],
[49, 29, 19, 19, 14],
[39, 32, 65, 9, 57],
[32, 31, 74, 23, 35]])
what is the best way to find a specific segment of this image, for instance
>>> im[6:9, 2:5]
array([[82, 99, 88],
[19, 19, 14],
[65, 9, 57]])
If the specific combination does not exist (maybe due to noise), I would like to have a similarity measure, which searches for segments with a similar distribution and tells me for each pixel of im, how good the agreement is. For instance something like
array([[0.03726647, 0.14738364, 0.04331007, 0.02704363, 0.0648282 ],
[0.02993497, 0.04446428, 0.0772978 , 0.1805197 , 0.08999 ],
[0.12261269, 0.18046972, 0.01985607, 0.19396181, 0.13062801],
[0.03418192, 0.07163043, 0.15013723, 0.12156613, 0.06500945],
[0.00768509, 0.12685481, 0.19178985, 0.13055806, 0.12701177],
[0.19905991, 0.11637007, 0.08287372, 0.0949395 , 0.12470202],
[0.06760152, 0.13495046, 0.06344035, 0.1556691 , 0.18991421],
[0.13250537, 0.00271433, 0.12456922, 0.97 , 0.194389 ],
[0.17563869, 0.10192488, 0.01114294, 0.09023184, 0.00399753],
[0.08834218, 0.19591735, 0.07188889, 0.09617871, 0.13773224]])
The example code is Python.
I think there should be a solution correlating a kernel with im. This will have the issue, though, that a segment with the same values, but scaled, will give a sharper response.
Template matching would be one of the ways to go about it. Of course, deep learning/ML can also be used for more complicated matching.
Most image processing libraries support some sort of matching function which compares a pair of images - the reference and the one to match. In OpenCV it returns a score which can be used to determine a match. The matching methods use various functions that support scale- and/or rotation-invariant matching. Beware of licensing constraints in the method you plan to use.
In case the images may not always be exact, you can use the standard deviation (StdDev) to allow for a permissible deviation and still classify them into buckets. Histogram matching may also be used depending on the condition of the image to be matched (lighting and colour can be important, unless you use specific channels). Using a histogram will avoid matching the template in its entirety.
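As an illustration of the histogram idea, a small sketch (the bin count and comparison method here are just assumptions) could be:

import cv2
import numpy as np

def histogram_similarity(patch, template, bins=32):
    """Compare grey-level histograms of two equally sized patches (sketch only)."""
    h1 = cv2.calcHist([patch.astype(np.float32)], [0], None, [bins], [0, 100])
    h2 = cv2.calcHist([template.astype(np.float32)], [0], None, [bins], [0, 100])
    cv2.normalize(h1, h1)
    cv2.normalize(h2, h2)
    # Correlation: 1.0 means identical histograms, lower means less similar
    return cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)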
Ref for Template Matching:
OpenCV - https://docs.opencv.org/master/d4/dc6/tutorial_py_template_matching.html
scikit-image - https://scikit-image.org/docs/dev/auto_examples/features_detection/plot_template.html
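For reference, a minimal OpenCV template-matching sketch on the array from the question (TM_CCOEFF_NORMED is just one of the available methods):

import cv2
import numpy as np

np.random.seed(0)
im = np.random.randint(0, 100, (10, 5)).astype(np.float32)
template = im[6:9, 2:5].copy()

# Result has shape (im.h - t.h + 1, im.w - t.w + 1); each entry scores how well
# the template fits with its top-left corner at that position
result = cv2.matchTemplate(im, template, cv2.TM_CCOEFF_NORMED)

# The best match is the global maximum of the score map
_, max_val, _, max_loc = cv2.minMaxLoc(result)
print('best score', max_val, 'at (x, y)', max_loc)  # expect (2, 6)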
Thanks to banerjk for the great answer - template matching is exactly the solution!
some backup method
Considering my correlating-with-a-kernel idea, there is some progress:
When one correlates the image with the template (i.e. what I called the target segment in the question), chances are high that the most intense point in the correlated image (relative to the mean intensity) matches the template position (see im and m in the example). It seems I am not the first to come up with this idea, as can be seen in these lecture notes on page 39.
However, this is not always true. This method, more or less, just detects weight at the largest values in the template. In the example, im2 is constructed such that it tricks this concept.
Maybe it gets more reliable if one applies some filter (for instance a median filter) to the image beforehand.
I just wanted to mention it here, as it might have advantages in certain situations (it should be more performant than the Wikipedia implementation of template matching).
example
import numpy as np
from scipy import ndimage
np.random.seed(0)
im = np.random.randint(0, 100, (10,5))
t = im[6:9, 2:5]
print('t', t, sep='\n')
m = ndimage.correlate(im, t) / ndimage.correlate(im, np.ones(t.shape))
m /= np.amax(m)
print('im', im, sep='\n')
print('m', m, sep='\n')
print("this can be 'tricked', however")
im2 = im.copy()
im2[6:9, :3] = 0
im2[6,1] = 1
m2 = ndimage.correlate(im2, t) / ndimage.correlate(im2, np.ones(t.shape))
m2 /= np.amax(m2)
print('im2', im2, sep='\n')
print('m2', m2, sep='\n')
output
t
[[82 99 88]
[19 19 14]
[65 9 57]]
im
[[44 47 64 67 67]
[ 9 83 21 36 87]
[70 88 88 12 58]
[65 39 87 46 88]
[81 37 25 77 72]
[ 9 20 80 69 79]
[47 64 82 99 88]
[49 29 19 19 14]
[39 32 65 9 57]
[32 31 74 23 35]]
m
[[0.73776208 0.62161208 0.74504705 0.71202601 0.66743979]
[0.70809611 0.70617161 0.70284942 0.80653741 0.67067733]
[0.55047727 0.61675268 0.5937487 0.70579195 0.74351706]
[0.7303857 0.77147963 0.74809273 0.59136392 0.61324214]
[0.70041161 0.7717032 0.69220064 0.72463532 0.6957257 ]
[0.89696894 0.69741108 0.64136612 0.64154719 0.68621613]
[0.48509474 0.60700037 0.65812918 0.68441118 0.68835903]
[0.73802038 0.83224745 0.87301124 1. 0.92272565]
[0.72708573 0.64909142 0.54540817 0.60859883 0.52663327]
[0.72061572 0.70357846 0.61626289 0.71932261 0.75028955]]
this can be 'tricked', however
im2
[[44 47 64 67 67]
[ 9 83 21 36 87]
[70 88 88 12 58]
[65 39 87 46 88]
[81 37 25 77 72]
[ 9 20 80 69 79]
[ 0 1 0 99 88]
[ 0 0 0 19 14]
[ 0 0 0 9 57]
[32 31 74 23 35]]
m2
[[0.53981867 0.45483201 0.54514907 0.52098765 0.48836403]
[0.51811216 0.51670401 0.51427317 0.59014141 0.49073293]
[0.40278285 0.4512764 0.43444444 0.51642621 0.54402958]
[0.5344214 0.56448972 0.54737758 0.43269951 0.44870774]
[0.51248943 0.56465331 0.50648148 0.53021386 0.50906076]
[0.78923691 0.56633529 0.51641414 0.44336403 0.50210263]
[0.88137788 0.89779614 0.63552189 0.55070797 0.50367059]
[0.88888889 1. 0.75544508 0.75694003 0.67515605]
[0.43965976 0.48492221 0.37490287 0.48511085 0.38533625]
[0.30754918 0.32478065 0.27066895 0.46685032 0.548985 ]]
Maybe someone can contribute on the background of the lecture notes.
Update: it is discussed in J. P. Lewis, "Fast Normalized Cross-Correlation", Industrial Light and Magic, on the very first page.
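For completeness, the normalised cross-correlation Lewis describes is available off the shelf, for example in scikit-image; a small sketch on the same data:

import numpy as np
from skimage.feature import match_template

np.random.seed(0)
im = np.random.randint(0, 100, (10, 5))
t = im[6:9, 2:5]

# match_template computes the (fast) normalised cross-correlation; scores lie
# in [-1, 1] and the peak marks the best top-left position of the template
scores = match_template(im.astype(float), t.astype(float))
row, col = np.unravel_index(np.argmax(scores), scores.shape)
print('best match top-left (row, col):', (row, col))  # expect (6, 2)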

What is correct implementation of LDA (Linear Discriminant Analysis)?

I found that the result of LDA in OpenCV is different from other libraries. For example, the input data was
DATA (13 data samples with 4 dimensions)
7 26 6 60
1 29 15 52
11 56 8 20
11 31 8 47
7 52 6 33
11 55 9 22
3 71 17 6
1 31 22 44
2 54 18 22
21 47 4 26
1 40 23 34
11 66 9 12
10 68 8 12
LABEL
0 1 2 0 1 2 0 1 2 0 1 2 0
The OpenCV code is
Mat data = (Mat_<float>(13, 4) <<
    7, 26, 6, 60,
    1, 29, 15, 52,
    11, 56, 8, 20,
    11, 31, 8, 47,
    7, 52, 6, 33,
    11, 55, 9, 22,
    3, 71, 17, 6,
    1, 31, 22, 44,
    2, 54, 18, 22,
    21, 47, 4, 26,
    1, 40, 23, 34,
    11, 66, 9, 12,
    10, 68, 8, 12);

Mat mean;
reduce(data, mean, 0, CV_REDUCE_AVG);
mean.convertTo(mean, CV_64F);

Mat label(data.rows, 1, CV_32SC1);
for (int i = 0; i < label.rows; i++)
    label.at<int>(i) = i % 3;

LDA lda(data, label);
Mat projection = lda.subspaceProject(lda.eigenvectors(), mean, data);
The matlab code is (used Matlab Toolbox for Dimensionality Reduction)
cd drtoolbox\techniques\
load hald
label=[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
[projection, trainedlda] = lda(ingredients, label)
The eigenvectors are
OpenCV (lda.eigenvectors())
0.4457 4.0132
0.4880 3.5703
0.5448 3.3466
0.5162 3.5794
Matlab Toolbox for Dimensionality Reduction (trainedlda.M)
0.5613 0.7159
0.6257 0.6203
0.6898 0.5884
0.6635 0.6262
Then the projections of data are
OpenCV
1.3261 7.1276
0.8892 -4.7569
-1.8092 -6.1947
-0.0720 1.1927
0.0768 3.3105
-0.7200 0.7405
-0.3788 -4.7388
1.5490 -2.8255
-0.3166 -8.8295
-0.8259 9.8953
1.3239 -3.1406
-0.5140 4.2194
-0.5285 4.0001
Matlab Toolbox for Dimensionality Reduction
1.8030 1.3171
1.2128 -0.8311
-2.3390 -1.0790
-0.0686 0.3192
0.1583 0.5392
-0.9479 0.1414
-0.5238 -0.9722
1.9852 -0.4809
-0.4173 -1.6266
-1.1358 1.9009
1.6719 -0.5711
-0.6996 0.7034
-0.6993 0.6397
The eigenvectors and projections are different even though these LDAs have the same data. I believe there are 2 possibilities.
One of the libraries is wrong.
I am doing it wrong.
Thank you!
The difference is because eigenvectors are not normalized.
The normalized (L2 norm) eigenvectors are
OpenCV
0.44569 0.55196
0.48798 0.49105
0.54478 0.46028
0.51618 0.49230
Matlab Toolbox for Dimensionality Reduction
0.44064 0.55977
0.49120 0.48502
0.54152 0.46008
0.52087 0.48963
They look similar now, although they have quite different eigenvalues.
Even though PCA in OpenCV returns normalized eigenvectors, LDA does not. My next question is: 'Is normalizing eigenvectors in LDA not necessary?'
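For anyone who wants to reproduce the comparison, here is a minimal NumPy sketch that L2-normalises the OpenCV eigenvectors column by column (the matrix is the one printed above):

import numpy as np

# Eigenvectors as printed by OpenCV above (each column is one eigenvector)
opencv_vecs = np.array([[0.4457, 4.0132],
                        [0.4880, 3.5703],
                        [0.5448, 3.3466],
                        [0.5162, 3.5794]])

# Divide each column by its L2 norm so the two libraries can be compared directly
normalised = opencv_vecs / np.linalg.norm(opencv_vecs, axis=0)
print(normalised)  # matches the normalised values listed above (up to rounding)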
