Overlapping raw thermal image and embedded image from FLIR - opencv

I have two images: a thermal image from a FLIR camera and an embedded image which I extract from the metadata embedded in the thermal image.
The full metadata available in the thermal image is:
File Name: FLIR2721.jpg
File Size: 1000 kB
File Type: JPEG
File Type Extension: jpg
Mime Type: image/jpeg
Jfif Version: 1.01
Exif Byte Order: Little-endian (Intel, II)
Make: FLIR Systems AB
Model: FLIR T620
Orientation: Horizontal (normal)
X Resolution: 72
Y Resolution: 72
Resolution Unit: inches
Software: 4.8.2
Modify Date: 2017:11:29 08:34:08
Y Cb Cr Positioning: Centered
Exposure Time: 1/46
Exif Version: 220
Create Date: 2017:11:29 08:34:08
Components Configuration: -, Cr, Cb, Y
Subject Distance: 1 m
Focal Length: 41.3 mm
Image Temperature Max: 332
Image Temperature Min: 235
Flashpix Version: 100
Color Spaces: RGB
Exif Image Width: 640
Exif Image Height: 480
Digital Zoom Ratio: 1
Image Unique Id: DF9BB2EBB15922366AB9059A1D8D53A9
Gps Version Id: 2.2.0.0
Gps Altitude Ref: Above Sea Level
Gps Time Stamp: 11:48:03
Gps Satellites: 0
Gps Img Direction Ref: Magnetic North
Gps Img Direction: 160
Compression: JPEG (old-style)
Thumbnail Offset: 2172
Thumbnail Length: 5973
Emissivity: 0.75
Object Distance: 1.00 m
Reflected Apparent Temperature: 20.0 C
Atmospheric Temperature: 28.0 C
Ir Window Temperature: 20.0 C
Ir Window Transmission: 1
Relative Humidity: 60.0 %
Planck R1: 115965.437
Planck B: 1417.1
Planck F: 1
Atmospheric Trans Alpha1: 0.006569
Atmospheric Trans Alpha2: 0.01262
Atmospheric Trans Beta1: -0.002276
Atmospheric Trans Beta2: -0.00667
Atmospheric Trans X: 1.9
Camera Temperature Range Max: 150.0 C
Camera Temperature Range Min: -40.0 C
Camera Temperature Max Clip: 160.0 C
Camera Temperature Min Clip: -60.0 C
Camera Temperature Max Warn: 150.0 C
Camera Temperature Min Warn: -40.0 C
Camera Temperature Max Saturated: 180.0 C
Camera Temperature Min Saturated: -60.0 C
Camera Model: FLIR T620
Camera Part Number: 55903-5022
Camera Serial Number: 55906523
Camera Software: 25.0.0
Lens Model: FOL41
Lens Part Number: T197524
Lens Serial Number: 56702775
Field Of View: 15.0 deg
Planck O: -3949
Planck R2: 20.01401533
Raw Value Range Min: 5427
Raw Value Range Max: 56178
Raw Value Median: 13350
Raw Value Range: 10060
Date Time Original: 2017:11:29 08:34:08.131+00:00
Focus Step Count: 5925
Focus Distance: 8.9 m
Frame Rate: 30
Palette Colors: 224
Above Color: 170 128 128
Below Color: 50 128 128
Overflow Color: 67 216 98
Underflow Color: 41 110 240
Isotherm1 Color: 100 128 128
Isotherm2 Color: 100 110 240
Palette Method: 0
Palette Stretch: 2
Palette File Name: \FlashBFS\system\iron.pal
Palette Name: Iron
Palette: (Binary data 672 bytes)
Raw Thermal Image Width: 640
Raw Thermal Image Height: 480
Raw Thermal Image Type: PNG
Raw Thermal Image: (Binary data 362104 bytes)
Real2 Ir: 3.7806735038757
Offset X: -35
Offset Y: -131
Pi Px1: 160
Pi Px2: 479
Pi Py1: 120
Pi Py2: 359
Gps Valid: Yes
Gps Latitude Ref: South
Gps Longitude Ref: West
Gpsdop: 0.53
Gps Map Datum: WGS84
Embedded Image Width: 2592
Embedded Image Height: 1944
Embedded Image Type: JPG
Embedded Image: (Binary data 600954 bytes)
Image Width: 640
Image Height: 480
Encoding Process: Baseline DCT, Huffman coding
Bits Per Sample: 8
Color Components: 3
Y Cb Cr Sub Sampling: YCbCr4:2:0 (2 2)
Gps Altitude: 0 m Above Sea Level
Gps Latitude: 26 deg 18' 1.14" S
Gps Longitude: 48 deg 51' 2.82" W
Gps Position: 26 deg 18' 1.14" S, 48 deg 51' 2.82" W
Image Size: 640x480
Megapixels: 0.307
Peak Spectral Sensitivity: 10.2 um
Shutter Speed: 1/46
Thumbnail Image: (Binary data 5973 bytes)
Focal Length35Efl: 41.3 mm
Category: image
Raw Header: FF D8 FF E0 00 10 4A 46 49 46 00 01 01 00 00 01 00 01 00 00 FF E1 1F BC 45 78 69 66 00 00 49 49 2A 00 08 00 00 00 0B 00 0F 01 02 00 10 00 00 00 92 00 00 00 10 01 02 00 0A 00 00 00 A2 00 00 00
I want to "overlap" the thermal image in the embedded image but I don't know how to do this since the thermal image is not centered with the embedded image.
I've tried to use matchTemplate form OpenCV but the result is not quite good.
Any suggestions about how to solve it?

I think this is all the data you need:
Raw Thermal Image Width: 640
Raw Thermal Image Height: 480
Raw Thermal Image Type: PNG
Raw Thermal Image: (Binary data 362104 bytes)
Real2 Ir: 3.7806735038757
Offset X: -35
Offset Y: -131
The Real2 Ir value looks like the zoom ratio between the raw IR image and the JPEG photo: 640x480 * 3.7806735038757 = 2420x1815, which is very close to 2592x1944. So you just need to test some combinations of what the Pi Px/Py coordinates and Offset X/Y mean. Enlarge the IR image (or shrink the JPEG), center both images manually in a graphics editor, and check what the X/Y distance between the upper-left corners of the two images is. I think it will match some of these values.
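As a concrete starting point, here is a minimal OpenCV sketch of that test, assuming Offset X/Y are pixel shifts applied after centering the upscaled IR image on the photo; the file names are placeholders, and the offset interpretation is exactly what needs verifying:

import cv2
import numpy as np

# Placeholder file names; the scale and offsets are the metadata values above.
photo = cv2.imread('embedded.jpg')       # 2592x1944 visible photo
ir = cv2.imread('raw_thermal.png')       # 640x480 raw IR, loaded as 8-bit BGR
scale, off_x, off_y = 3.7806735038757, -35, -131

ir_big = cv2.resize(ir, None, fx=scale, fy=scale)    # ~2420x1815
# Top-left corner that centers ir_big on the photo, shifted by the offsets.
x = (photo.shape[1] - ir_big.shape[1]) // 2 + off_x
y = (photo.shape[0] - ir_big.shape[0]) // 2 + off_y
# Clip to the photo bounds in case the shifted IR sticks out over an edge.
x0, y0 = max(x, 0), max(y, 0)
x1 = min(x + ir_big.shape[1], photo.shape[1])
y1 = min(y + ir_big.shape[0], photo.shape[0])

overlay = photo.copy()
roi = np.ascontiguousarray(overlay[y0:y1, x0:x1])
ir_crop = np.ascontiguousarray(ir_big[y0 - y:y1 - y, x0 - x:x1 - x])
overlay[y0:y1, x0:x1] = cv2.addWeighted(ir_crop, 0.5, roi, 0.5, 0)
cv2.imwrite('overlay.jpg', overlay)

If the images then line up, that interpretation of Offset X/Y is the right one; if not, try the other combinations (offsets scaled by Real2 Ir, or applied to the photo instead).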

I have found a solution but have no idea why it works.
Sample code:
import math
import cv2

# Load the embedded (visible) image and find its center.
emb_img = cv2.imread(<path_to_embedded_image>, cv2.IMREAD_COLOR)
emb_height, emb_width, emb_channels = emb_img.shape
emb_center = {
    'x': math.floor(emb_width / 2),
    'y': math.floor(emb_height / 2)
}

# Load the raw thermal image and find its center.
raw_img = cv2.imread(<path_to_raw_image>, cv2.IMREAD_GRAYSCALE)
raw_height, raw_width = raw_img.shape
raw_center = {
    'x': math.floor(raw_width / 2),
    'y': math.floor(raw_height / 2)
}

# Offset X / Offset Y from the metadata.
j = {
    'OffsetX': <some number>,
    'OffsetY': <some number>
}

# Top-left corner of the crop in the embedded image.
x = emb_center.get('x') - raw_center.get('x') + j.get('OffsetX') - 45
y = emb_center.get('y') - raw_center.get('y') + j.get('OffsetY') - 45

cropped_img = emb_img[y:y+540, x:x+720, :]
cropped_img = cv2.resize(cropped_img, (640, 480))
cv2.imwrite('path_you_want.jpg', cropped_img)
I have no idea why it has to be cropped_img = emb_img[y:y+540, x:x+720, :] instead of cropped_img = emb_img[y:y+480, x:x+640, :], since the raw size is 640x480. Another thing: where does the 45 come from? I just know it worked.

I've been playing with Photoshop and the embedded image, and I believe this may work:
import cv2
import flirimageextractor

flir = flirimageextractor.FlirImageExtractor()
# Raw strings keep the backslashes in the Windows path literal.
flir.process_image(r'C:\Temp\FLIR14107.jpg', RGB=True)
c = flir.get_metadata(r'C:\Temp\FLIR14107.jpg')  # same file as above
img = flir.extract_embedded_image()

Scale = c['Real2IR']
OffsetX = c['OffsetX']
OffsetY = c['OffsetY']
PiPX1 = c['PiPX1']
PiPX2 = c['PiPX2']
PiPY1 = c['PiPY1']
PiPY2 = c['PiPY2']
Width = c['EmbeddedImageWidth']
Height = c['EmbeddedImageHeight']

# Upscale the embedded photo by the Real2IR ratio.
scaled_img = cv2.resize(img.astype('uint8'),
                        (int(Width * Scale), int(Height * Scale)),
                        interpolation=cv2.INTER_AREA)

# Move the center by the scaled offsets, then crop back to the photo size.
ct = [int(scaled_img.shape[1] / 2), int(scaled_img.shape[0] / 2)]
cx = ct[0] + OffsetX * Scale
cy = ct[1] + OffsetY * Scale
img1 = scaled_img[int(cy - Height/2):int(cy + Height/2),
                  int(cx - Width/2):int(cx + Width/2)]

# PiPX1+PiPX2 by PiPY1+PiPY2 (639x479 with the metadata above) is
# essentially the IR image size, so this shows the region the IR covers.
cv2.imshow('1', cv2.resize(img1, (PiPX1 + PiPX2, PiPY1 + PiPY2),
                           interpolation=cv2.INTER_AREA))
cv2.waitKey(0)

Related

What is the average mobile rating on Google PageSpeed Insights?

We use Google PageSpeed Insights as a marketing tool to compare the download speed of the websites we build with those of our competitors. But so many mobile sites are rated in the 30s that I wondered whether that's the average mobile rating. Does anyone know? Thanks.
Short Answer
The average mobile rating is 31.
Long Answer
An article I found after writing the answer below that addresses the question:
This article from tunetheweb has done the hard work for us and gathered the data from HTTP Archive (give the article a read, it has a wealth of interesting information!).
The table below, taken from that article, answers your question: the average (50th percentile) Performance score is 31.
Percentile  Performance  Accessibility  Best Practices  SEO  PWA
10           8           56             64              69   14
25          16           69             64              80   25
50          31           80             71              86   29
75          55           88             79              92   36
90          80           95             86              99   54
95          93           97             93             100   54
99          99          100             93             100   64
I have left my original answer below as the information may be useful to somebody, but the above answers the question much better. At least my guess of 35 wasn't a million miles away from the actual answer. hehe.
My original Answer
You would imagine that a score of 50 would be the average, right? Nope!
Lighthouse uses a log-normal curve to map metric values to scores.
The two key control points on that curve are the 25th percentile of real-world results, which maps to a score of 50 (so a score of 50 effectively means you are in the top 25%), and the 8th percentile, which maps to a score of 90.
The numbers used to determine these points are derived from HTTP Archive data.
You can explore the curve used for Time To Interactive scoring here as an example.
Now I am sure someone who is a lot better at maths than me could use that data to calculate the average score for a site, but I would estimate it to be around 35 for a mobile site, which is pretty close to what you have observed.
One thing I can do is show how the scoring works based on those control points, so you can see the various cutoff points for each metric.
The code below is taken from the maths module at https://github.com/paulirish/lh-scorecalc/tree/190bed715a3589601f314b3c8a50fb0fb147c121
I have also included the median and falloff values currently used in this calculation in the scoring variable.
To play with it, use the VALUE_AT_QUANTILE function to get the value you need to achieve a certain percentile. For example, to see the value for the 90th percentile of Time To Interactive you would use VALUE_AT_QUANTILE(7300, 2900, 0.9): take the median (7300) and falloff (2900) from TTI in the scoring variable, and enter the desired percentile as a decimal (90 -> 0.9).
Similarly, the QUANTILE_AT_VALUE function does the reverse (shows the percentile that a particular value would fall at). E.g. to see what percentile a First CPU Idle time of 3200 gets you, use QUANTILE_AT_VALUE(6500, 2900, 3200).
Anyway, I have gone off on a bit of a tangent, but hopefully the above and below will give someone cleverer than me the info needed to work it out (I have included the weightings for each item as well in the weights variable).
const scoring = {
  FCP: {median: 4000, falloff: 2000, name: 'First Contentful Paint'},
  FMP: {median: 4000, falloff: 2000, name: 'First Meaningful Paint'},
  SI: {median: 5800, falloff: 2900, name: 'Speed Index'},
  TTI: {median: 7300, falloff: 2900, name: 'Time to Interactive'},
  FCI: {median: 6500, falloff: 2900, name: 'First CPU Idle'},
  TBT: {median: 600, falloff: 200, name: 'Total Blocking Time'}, // mostly uncalibrated
  LCP: {median: 4000, falloff: 2000, name: 'Largest Contentful Paint'},
  CLS: {median: 0.25, falloff: 0.054, name: 'Cumulative Layout Shift', units: 'unitless'},
};

const weights = {
  FCP: 0.15,
  SI: 0.15,
  LCP: 0.25,
  TTI: 0.15,
  TBT: 0.25,
  CLS: 0.05
};

// Abramowitz & Stegun style approximation of the error function.
function internalErf_(x) {
  // erf(-x) = -erf(x);
  var sign = x < 0 ? -1 : 1;
  x = Math.abs(x);
  var a1 = 0.254829592;
  var a2 = -0.284496736;
  var a3 = 1.421413741;
  var a4 = -1.453152027;
  var a5 = 1.061405429;
  var p = 0.3275911;
  var t = 1 / (1 + p * x);
  var y = t * (a1 + t * (a2 + t * (a3 + t * (a4 + t * a5))));
  return sign * (1 - y * Math.exp(-x * x));
}

// Winitzki approximation of the inverse error function.
function internalErfInv_(x) {
  // erfinv(-x) = -erfinv(x);
  var sign = x < 0 ? -1 : 1;
  var a = 0.147;
  var log1x = Math.log(1 - x*x);
  var p1 = 2 / (Math.PI * a) + log1x / 2;
  var sqrtP1Log = Math.sqrt(p1 * p1 - (log1x / a));
  return sign * Math.sqrt(sqrtP1Log - p1);
}

// Metric value needed to land at a given quantile of the log-normal curve.
function VALUE_AT_QUANTILE(median, falloff, quantile) {
  var location = Math.log(median);
  var logRatio = Math.log(falloff / median);
  var shape = Math.sqrt(1 - 3 * logRatio - Math.sqrt((logRatio - 3) * (logRatio - 3) - 8)) / 2;
  return Math.exp(location + shape * Math.SQRT2 * internalErfInv_(1 - 2 * quantile));
}

// Quantile (i.e. score as a fraction) that a given metric value falls at.
function QUANTILE_AT_VALUE(median, falloff, value) {
  var location = Math.log(median);
  var logRatio = Math.log(falloff / median);
  var shape = Math.sqrt(1 - 3 * logRatio - Math.sqrt((logRatio - 3) * (logRatio - 3) - 8)) / 2;
  var standardizedX = (Math.log(value) - location) / (Math.SQRT2 * shape);
  return (1 - internalErf_(standardizedX)) / 2;
}

console.log("Time To Interactive (TTI) 90th Percentile Time:", VALUE_AT_QUANTILE(7300, 2900, 0.9).toFixed(0));
console.log("First CPU Idle time of 3200 score / percentile:", (QUANTILE_AT_VALUE(6500, 2900, 3200).toFixed(3)) * 100);

How to calculate png size based on dimension and bit depth

I'm trying to generate white PNG (JPG and GIF as well) files using ImageMagick. I have to calculate the image's dimensions based on a target file size (kB) and bit depth (1).
I'm using this command on my Windows machine:
magick -size <width>x<height> canvas:white white.png
I'm getting the following results:
1 x 1 = 258 bytes
2 x 2 = 260 bytes
9 x 9 = 262 bytes
17 x 17 = 263 bytes
33 x 33 = 264 bytes
40 x 40 = 263 bytes
41 x 41 = 265 bytes
65 x 65 = 267 bytes
66 x 66 = 268 bytes
What I understood from the results above is that the minimal size is 256 + 1 (width) + 1 (height), so a 1 x 1 file would be 258 bytes and a 2 x 2 file 260. The results after these two don't seem logical to me: why is 33 x 33 bigger than 40 x 40?
I have read the PNG specification but couldn't figure out a formula to calculate the PNG (or other formats') file size.
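For what it's worth, there is no closed-form formula: the pixel data inside a PNG is deflate-compressed, so the file size depends on the compressor's behaviour, not just on width x height x bit depth. Below is a rough Python sketch that rebuilds a minimal 1-bit white PNG by hand; the absolute numbers won't match ImageMagick's (which writes extra ancillary chunks), but it shows the fixed structural overhead and how the compressed payload grows in irregular steps rather than linearly:

import zlib

def minimal_png_size(width, height):
    # One filter byte per scanline, then the 1-bit white pixels packed into bytes.
    row = b'\x00' + b'\xff' * ((width + 7) // 8)
    idat_payload = zlib.compress(row * height, 9)
    size = 8                         # PNG signature
    size += 12 + 13                  # IHDR: 12 bytes chunk framing + 13 bytes data
    size += 12 + len(idat_payload)   # IDAT chunk
    size += 12                       # IEND chunk
    return size

for n in (1, 2, 9, 17, 33, 40, 41, 65, 66):
    print(n, 'x', n, '=', minimal_png_size(n, n), 'bytes')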

xarray: rolling mean of dask array conflicting sizes for data and coordinate in rolling operation

I am trying to take a rolling mean of a dask array within xarray. My issue may lie in the rechunking before the rolling mean. I am getting a ValueError about conflicting sizes between data and coordinates. However, this arises within the rolling operation, as I don't think there are conflicts between the data and coords of the array before going into the rolling operation.
Apologies for not creating sample data to test with, but my project data is quick to play with:
import xarray as xr
remote_data = xr.open_dataarray('http://iridl.ldeo.columbia.edu/SOURCES/.Models'\
'/.SubX/.RSMAS/.CCSM4/.hindcast/.zg/dods',
chunks={'L': 1, 'S': 1})
da = remote_data.isel(P=0,L=0,M=0,X=0,Y=0)
da_day_clim = da.groupby('S.dayofyear').mean('S')
print(da_day_clim)
#<xarray.DataArray 'zg' (dayofyear: 366)>
#dask.array<shape=(366,), dtype=float32, chunksize=(1,)>
#Coordinates:
# L timedelta64[ns] 12:00:00
# Y float32 -90.0
# M float32 1.0
# X float32 0.0
# P int32 500
# * dayofyear (dayofyear) int64 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 ...
# Do a 31-day rolling mean
# da_day_clim.rolling(dayofyear=31, center=True).mean()
# This brings up:
#ValueError: The overlapping depth 30 is larger than your
#smallest chunk size 1. Rechunk your array
#with a larger chunk size or a chunk size that
#more evenly divides the shape of your array.
# Read http://xarray.pydata.org/en/stable/dask.html
# and found http://xarray.pydata.org/en/stable/generated/xarray.Dataset.chunk.html#xarray.Dataset.chunk
# I could make a little PR to add the .chunk() suggestion to the ValueError message. Thoughts?
# Rechunk. Played around with a few values but decided on
# the len of dayofyear
da_day_clim2 = da_day_clim.chunk({'dayofyear': 366})
print(da_day_clim2)
#<xarray.DataArray 'zg' (dayofyear: 366)>
#dask.array<shape=(366,), dtype=float32, chunksize=(366,)>
#Coordinates:
# L timedelta64[ns] 12:00:00
# Y float32 -90.0
# M float32 1.0
# X float32 0.0
# P int32 500
# * dayofyear (dayofyear) int64 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 ...
# Rolling mean on this
da_day_clim_smooth = da_day_clim2.rolling(dayofyear=31, center=True).mean()
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-57-6acf382cdd3d> in <module>()
4 da_day_clim = da.groupby('S.dayofyear').mean('S')
5 da_day_clim2 = da_day_clim.chunk({'dayofyear': 366})
----> 6 da_day_clim_smooth = da_day_clim2.rolling(dayofyear=31, center=True).mean()
~/anaconda/envs/SubXNAO/lib/python3.6/site-packages/xarray/core/rolling.py in wrapped_func(self, **kwargs)
307 if self.center:
308 values = values[valid]
--> 309 result = DataArray(values, self.obj.coords)
310
311 return result
~/anaconda/envs/SubXNAO/lib/python3.6/site-packages/xarray/core/dataarray.py in __init__(self, data, coords, dims, name, attrs, encoding, fastpath)
224
225 data = as_compatible_data(data)
--> 226 coords, dims = _infer_coords_and_dims(data.shape, coords, dims)
227 variable = Variable(dims, data, attrs, encoding, fastpath=True)
228
~/anaconda/envs/SubXNAO/lib/python3.6/site-packages/xarray/core/dataarray.py in _infer_coords_and_dims(shape, coords, dims)
79 raise ValueError('conflicting sizes for dimension %r: '
80 'length %s on the data but length %s on '
---> 81 'coordinate %r' % (d, sizes[d], s, k))
82
83 if k in sizes and v.shape != (sizes[k],):
ValueError: conflicting sizes for dimension 'dayofyear': length 351 on the data but length 366 on coordinate 'dayofyear'
The length 351 is related to 366 - 351 = 15 (half the window).
This turned out to be a bug in xarray and was fixed in https://github.com/pydata/xarray/pull/2122
The fix will be in xarray 0.10.4, which is slated for imminent release.
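For anyone who wants to check their installed version against this, here is a self-contained sketch with synthetic data (random values standing in for the climatology), so no remote server is needed:

import numpy as np
import xarray as xr

# Synthetic stand-in for da_day_clim: one value per day of year, one dask chunk.
da_day_clim = xr.DataArray(
    np.random.rand(366),
    dims='dayofyear',
    coords={'dayofyear': np.arange(1, 367)},
).chunk({'dayofyear': 366})

# On xarray >= 0.10.4 this returns a length-366 array (NaN-padded at the
# edges); on affected versions it raises the ValueError shown above.
da_day_clim_smooth = da_day_clim.rolling(dayofyear=31, center=True).mean()
print(da_day_clim_smooth)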

Extract x-axis value using y-axis data in R

I have a time-series dataset in this format:
Time Val1 Val2
0 0.68 0.39
30 0.08 0.14
35 0.12 0.07
40 0.17 0.28
45 0.35 0.31
50 0.14 0.45
100 1.01 1.31
105 0.40 1.20
110 2.02 0.57
115 1.51 0.58
130 1.32 2.01
Using this dataset I want to extract (not predict) the Time at which FC1 = 1 and FC2 = 1. Here is a plot that I created with the annotated points I would like to extract.
I am looking for a solution using an interpolation function to extract the intercepts. For example, if I draw a horizontal line at fold change 1 on the y-axis, I want to extract all the points on the x-axis where that line intercepts the curves.
Looking forward to suggestions, and thanks in advance!
You can use approxfun to do the interpolation and uniroot to find single roots (places where the line crosses). You would need to run uniroot multiple times to find all the crossings; the rle function may help choose the starting points.
The FC values in your data never get close to 1, let alone cross it, so you must either have a lot more data than shown or mean a different value.
If you can give more detail (possibly including a plot showing what you want) then we may be able to give more detailed help.
Edit
OK, here is some R code that finds where the lines cross:
con <- textConnection('Time Val1 Val2
0 0.68 0.39
30 0.08 0.14
35 0.12 0.07
40 0.17 0.28
45 0.35 0.31
50 0.14 0.45
100 1.01 1.31
105 0.40 1.20
110 2.02 0.57
115 1.51 0.58
130 1.32 2.01')
mydat <- read.table(con, header=TRUE)

# Plot both series and a horizontal reference line at 1.
with(mydat, {
  plot(Time, Val1, ylim=range(Val1, Val2), col='green', type='l')
  lines(Time, Val2, col='blue')
})
abline(h=1, col='red')

# Interpolating functions shifted down by 1, so crossings become roots.
afun1 <- approxfun(mydat$Time, mydat$Val1 - 1)
afun2 <- approxfun(mydat$Time, mydat$Val2 - 1)

# Indices where the sign of (value - 1) changes, via run-length encoding.
points1 <- cumsum(rle(sign(mydat$Val1 - 1))$lengths)
points2 <- cumsum(rle(sign(mydat$Val2 - 1))$lengths)

# Find one root in each sign-change interval.
xval1 <- numeric(length(points1) - 1)
xval2 <- numeric(length(points2) - 1)
for (i in seq_along(xval1)) {
  tmp <- uniroot(afun1, mydat$Time[points1[c(i, i + 1)]])
  xval1[i] <- tmp$root
}
for (i in seq_along(xval2)) {
  tmp <- uniroot(afun2, mydat$Time[points2[c(i, i + 1)]])
  xval2[i] <- tmp$root
}

# Mark the crossing times on the plot.
abline(v=xval1, col='green')
abline(v=xval2, col='blue')

Calculate the minimum and maximum in cvInRangeS

cvCvtColor(frame, hsv_frame, CV_BGR2HSV);
cvInRangeS(hsv_frame, hsv_min, hsv_max, thresholded);
I am trying to track a blue ball. To determine the maximum and minimum values, I opened a picture I took with the camera in MS Paint and sampled the ball's color. MS Paint reports Hue/Sat/Lum on a 0-240 scale, so I scaled the Hue by (180/240) and the S and L values by (255/240). That gave me the following values:
H: 108 113 115 112 105
S: 145  40 107 129 143
L:  97 129  96 102 124
So I chose the following values:
CvScalar hsv_min = cvScalar( 105, 40, 96 );
CvScalar hsv_max = cvScalar( 115, 140, 130);
But when I try to track the ball, it is hardly ever detected.
Is my calculation wrong? What can I do to improve the result?
First of all, why do you convert your image to HSV and then talk about HSL? If I'm not mistaken, they are different color spaces.
To detect blue using the HSV color space, use this range:
Min (H/S/V): 90, 50, 50
Max (H/S/V): 130, 255, 255
Also this online converter should help you.
And don't forget that the Hue value after converting an image to HSV using the CV_BGR2HSV code is in the range [0..180], while using CV_BGR2HSV_FULL will give you the range [0..255].
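For reference, here is a minimal sketch of that suggested range using the modern OpenCV Python API (cv2.inRange is the counterpart of cvInRangeS; the image path is a placeholder):

import cv2
import numpy as np

frame = cv2.imread('ball.jpg')                 # placeholder input image
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)   # H is in [0..180] here
mask = cv2.inRange(hsv, np.array([90, 50, 50]), np.array([130, 255, 255]))
cv2.imshow('blue mask', mask)
cv2.waitKey(0)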
