Linear Interpolation - shrinking a line - image-processing

Suppose we have a 1D array, Source, that consists of 9 elements:
Source[0 to 8].
Using "Linear Interpolation" we want to shrink it into a smaller 4 point array: Destination [0 to 3].
This is how I understand the Algorithm:
Calculate the ratio between both array lengths: 9/4 = 2.5
Iterate over the destination coordinates and find the appropriate source coordinate:
Destination [0] = 0 * 2.5 = Source [0] -> Success! use this exact value.
Destination [1] = 1 * 2.5 = Source [2.5] -> No such element! Calculate the average of Source[2] and Source[3].
Destination [2] = 2 * 2.5 = Source [5] -> Success! use this exact value.
Destination [3] = 3 * 2.5 = Source [7.5] -> No such element! Calculate the average of Source[7] and Source[8].
Is this correct ?

Almost correct. 9/4 = 2.25. ;-)
Anyway, if you want to preserve the endpoint values, you should calculate the ratio as (9-1)/(4-1) = 2.666... (between points 0, 1, 2, 3 there are only three segments, so the length equals 3; the same applies to 0...8).
If you don't hit an exact index, remember to compute a weighted mean, e.g.
Destination[1] = 1 * 2.667 -> (3-2.667)*Source[2] + (2.667-2)*Source[3]
This is from the equation
y = y0*(x1 - x) + y1*(x - x0)
(the general form divides by (x1 - x0), which here equals 1), where, in this case,
x = 2.667
x0 = 2
x1 = 3
y0 = Source[2]
y1 = Source[3]
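To make this concrete, here is a minimal sketch in Python of the endpoint-preserving scheme just described (the function name shrink_linear and the plain-list input are only for illustration):

def shrink_linear(source, new_len):
    # Endpoint-preserving ratio: (len - 1) / (new_len - 1), so Destination[0]
    # maps onto Source[0] and Destination[new_len - 1] onto Source[-1].
    ratio = (len(source) - 1) / (new_len - 1)
    dest = []
    for i in range(new_len):
        x = i * ratio
        x0 = int(x)                              # lower neighbour
        x1 = min(x0 + 1, len(source) - 1)        # upper neighbour (clamped)
        if x0 == x1:                             # landed exactly on the last sample
            dest.append(float(source[x0]))
        else:
            # weighted mean: y = y0*(x1 - x) + y1*(x - x0), with x1 - x0 == 1
            dest.append(source[x0] * (x1 - x) + source[x1] * (x - x0))
    return dest

print(shrink_linear(list(range(9)), 4))          # [0.0, 2.666..., 5.333..., 8.0]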

Related

Shaping input for a coremlmodel

I have a coremlmodel taking an input of shape MultiArray (Float32 67 x 256 x 320)
I am having a hard time shaping the input to this model.
Currently, I am trying to achieve it like so,
var m = try! MLMultiArray(shape: [67,256,320], dataType: .double)
for i in 0...66 {
    var cur_cost = rand((256,320)) // this is coming from swix
    memcpy(m.dataPointer+i*256*320, &cur_cost.flat.grid, 256*320)
}
I will have to replace the rand with matrices of that size later. I am using this for testing purposes first.
Any pointers on how to mould the input to fit the volume would be greatly appreciated.
What seems to be wrong in your code is that you're copying bytes instead of doubles. A double is 8 bytes, so your offset should be i*256*320*MemoryLayout<Double>.stride and the amount you're copying should be 256*320*MemoryLayout<Double>.stride.
Note that you can also use the MLMultiArray's strides property to compute the offset for a given data element in the array:
let offset = i0 * strides[0].intValue + i1 * strides[1].intValue + i2 * strides[2].intValue
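For readers who want to check the arithmetic outside Swift, here is a small NumPy sketch of the same idea; the array m, the index i and the cur_cost matrix merely mirror the names in the question, and this is not the Core ML API itself:

import numpy as np

# Stand-in for the MLMultiArray's backing buffer: 67 x 256 x 320 doubles,
# laid out contiguously the way dataPointer exposes them.
m = np.zeros((67, 256, 320), dtype=np.float64)
itemsize = m.itemsize                       # 8 bytes per Double
flat_bytes = m.view(np.uint8).reshape(-1)   # raw byte view, what memcpy sees

cur_cost = np.random.rand(256, 320)         # stand-in for the swix matrix
i = 5
# Offset and length of plane i in BYTES (elements * itemsize) -- the
# correction described in the answer above.
byte_offset = i * 256 * 320 * itemsize
byte_count = 256 * 320 * itemsize
flat_bytes[byte_offset:byte_offset + byte_count] = np.frombuffer(cur_cost.tobytes(), np.uint8)
assert np.array_equal(m[i], cur_cost)

# The stride-based formula from the answer, with strides already in bytes:
s0, s1, s2 = m.strides
i0, i1, i2 = 5, 100, 200
offset = i0 * s0 + i1 * s1 + i2 * s2
assert flat_bytes[offset:offset + itemsize].view(np.float64)[0] == m[i0, i1, i2]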

SystemVerilog constraint for mapping between two 2D arrays

There are two MxN 2D arrays:
rand bit [M-1:0] src [N-1:0];
rand bit [M-1:0] dst [N-1:0];
Both of them will be randomized separately so that each has exactly P bits set to 1'b1 and the rest are 1'b0.
A third MxN array of integers named 'map' establishes a one to one mapping between the two arrays 'src' and 'dst'.
rand int [M-1:0] map [N-1:0];
I need a constraint for 'map' such that, after randomization, for each element src[i][j] where src[i][j] == 1'b1, map[i][j] == M*k+l where dst[k][l] == 1. The k and l must be unique for each non-zero element of map.
To give an example:
Let M = 3 and N = 2.
Let src be
[1 0 1
0 1 0]
Let dst be
[0 1 1
1 0 0]
Then one possible randomization of 'map' will be:
[3 0 1
0 2 0]
In the above map:
3 indicates pointing from src[0,0] to dst[1,0] (3 = 1*M+0)
1 indicates pointing from src[0,2] to dst[0,1] (1 = 0*M+1)
2 indicates pointing from src[1,1] to dst[0,2] (2 = 0*M+2)
This is very difficult to express as a SystemVerilog constraint because there is no way to conditionally select elements of an array to be unique, and you cannot have random variables as part of an index expression to an array element.
Since you are randomizing src and dst separately, it might be easier to compute the pointers and then randomly choose the pointers to fill in the map.
module top;
  parameter M=3, N=4, P=4;
  bit [M-1:0] src [N];
  bit [M-1:0] dst [N];
  int map [N][M];
  int pointers[$];
  initial begin
    assert( randomize(src) with {src.sum() with ($countones(item)) == P;} );
    assert( randomize(dst) with {dst.sum() with ($countones(item)) == P;} );
    // collect a pointer for every set bit in dst, then hand them out in random order
    foreach(dst[K,L]) if (dst[K][L]) pointers.push_back(K*M+L);
    pointers.shuffle();
    // only positions where src is 1 receive a pointer; everything else stays 0
    foreach(map[I,J]) if (src[I][J]) map[I][J] = pointers.pop_back();
    $displayb("%p\n%p", src, dst);
    $display("%p", map);
  end
endmodule

st_buffer multipoint with different distance

I have an sfc_multipoint object and want to use st_buffer, but with a different distance for every single point in the multipoint object.
Is that possible?
The multipoint object is built from coordinates.
Every coordinate point (columns "lon" and "lat" in the table) should get a buffer of a different size. This buffer size is contained in the column "dist".
The table is called data.
This is my code:
library(sf)
coords <- matrix(c(data$lon,data$lat), ncol = 2)
tt <- st_multipoint(coords)
sfc <- st_sfc(tt, crs = 4326)
dt <- st_sf(data.frame(geom = sfc))
web <- st_transform(dt, crs = 3857)
geom <- st_geometry(web)
buf <- st_buffer(geom, dist = data$dist)
But it uses just the first dist (0.100).
This is the result: just really small buffers (see the "small buffer" screenshot).
For visualization see the "example result" picture; it's just an example to show that the buffers should get bigger.
I think that the problem here is in how you are "creating" the points dataset.
Replicating your code with dummy data, doing this:
library(sf)
data <- data.frame(lat = c(0,1,2,3), lon = c(0,1,2,3), dist = c(0.1,0.2,0.3, 0.4))
coords <- matrix(c(data$lon,data$lat), ncol = 2)
tt <- st_multipoint(coords)
does not give you multiple points, but a single MULTIPOINT feature:
tt
#> MULTIPOINT (0 0, 1 1, 2 2, 3 3)
Therefore, only a single buffer distance can be "passed" to it and you get:
plot(sf::st_buffer(tt, data$dist))
To solve the problem, you probably need to build the point dataset differently. For example, using:
tt <- st_as_sf(data, coords = c("lon", "lat"))
gives you:
tt
#> Simple feature collection with 4 features and 1 field
#> geometry type: POINT
#> dimension: XY
#> bbox: xmin: 0 ymin: 0 xmax: 3 ymax: 3
#> epsg (SRID): NA
#> proj4string: NA
#> dist geometry
#> 1 0.1 POINT (0 0)
#> 2 0.2 POINT (1 1)
#> 3 0.3 POINT (2 2)
#> 4 0.4 POINT (3 3)
You see that tt is now a simple feature collection made of 4 points, on which buffering with multiple distances will indeed work:
plot(sf::st_buffer(tt, data$dist))
HTH!

image shuffling and slicing

This is my code for slicing my 512*512 image into a cube of 64*64*64 dimensions, but when I reshape it back into a 2D array, why is it not giving me the original image? Am I doing something incorrect? Please help.
clc;
im = ind2gray(y,ymap);
% im = imresize(im,0.125);
[rows, columns, colbands] = size(im)
% ... loop that slices im into the 64*64*64 array image3d (not shown) ...
end
image3d = reshape(image3d,512,512);
figure, imshow(uint8(image3d));
Just a small hint.
P(:,:,1) = [0,0;0,0]
P(:,:,2) = [1,1;1,1]
P(:,:,3) = [2,2;2,2]
P(:,:,4) = [3,3;3,3]
B = reshape(P,4,4)
B =
0 1 2 3
0 1 2 3
0 1 2 3
0 1 2 3
So you might change the slicing or do the reshaping on your own.
If I have understood your question right, you can look into the code below to perform the same operation.
% Random image of the provided size 512X512
imageX = rand(512,512)
imagesc(imageX)
% Converting the image "imageX" into the cube of 64X64X64 dimension
sliceColWise = reshape(imageX,64,64,64)
size(sliceColWise)
% Reshaping the cube to obtain the image original that was "imageX",
% in order to observe that they are identical the difference is plotted
imageY = reshape(sliceColWise,512,512);
imagesc(imageX-imageY)
N.B.: from the MATLAB help you can see that reshape works column-wise:
reshape(X,M,N) or reshape(X,[M,N]) returns the M-by-N matrix
whose elements are taken columnwise from X. An error results
if X does not have M*N elements.

List comprehensions with float iterator in F#

Consider the following code:
let dl = 9.5 / 11.
let min = 21.5 + dl
let max = 40.5 - dl
let a = [ for z in min .. dl .. max -> z ] // should have 21 elements
let b = a.Length
"a" should have 21 elements but has got only 20 elements. The "max - dl" value is missing. I understand that float numbers are not precise, but I hoped that F# could work with that. If not then why F# supports List comprehensions with float iterator? To me, it is a source of bugs.
Online trial: http://tryfs.net/snippets/snippet-3H
Converting to decimals and looking at the numbers, it seems the 21st item would 'overshoot' max:
let dl = 9.5m / 11.m
let min = 21.5m + dl
let max = 40.5m - dl
let a = [ for z in min .. dl .. max -> z ] // should have 21 elements
let b = a.Length
let lastelement = List.nth a 19
let onemore = lastelement + dl
let overshoot = onemore - max
That is probably due to lack of precision in let dl = 9.5m / 11.m?
To get rid of this compounding error, you'll have to use another number system, e.g. Rational. The F# PowerPack comes with a BigRational class that can be used like so:
let dl = 95N / 110N
let min = 215N / 10N + dl
let max = 405N / 10N - dl
let a = [ for z in min .. dl .. max -> z ] // Has 21 elements
let b = a.Length
Properly handling float precision issues can be tricky. You should not rely on float equality (that's what the list comprehension implicitly does for the last element). List comprehensions on floats are useful when you generate an infinite stream. In other cases, you should pay attention to the last comparison.
If you want a fixed number of elements, and include both lower and upper endpoints, I suggest you write this kind of function:
let range from to_ count =
    assert (count > 1)
    let count = count - 1
    [ for i = 0 to count do yield from + float i * (to_ - from) / float count ]
range 21.5 40.5 21
When I know the last element should be included, I sometimes do:
let a = [ for z in min .. dl .. max + dl*0.5 -> z ]
I suspect the problem is with the precision of floating point values. F# adds dl to the current value each time and checks if current <= max. Because of precision problems, it might jump over max and then check if max+ε <= max (which will yield false). And so the result will have only 20 items, and not 21.
After running your code, if you do:
> compare a.[19] max;;
val it : int = -1
It means max is greater than a.[19]
If we do calculations the same way the range operator does but grouping in two different ways and then compare them:
> compare (21.5+dl+dl+dl+dl+dl+dl+dl+dl) ((21.5+dl)+(dl+dl+dl+dl+dl+dl+dl));;
val it : int = 0
> compare (21.5+dl+dl+dl+dl+dl+dl+dl+dl+dl) ((21.5+dl)+(dl+dl+dl+dl+dl+dl+dl+dl));;
val it : int = -1
In this sample you can see that summing 21.5 and eight copies of dl in two different groupings gives exactly the same value, but with nine copies the result changes depending on the grouping.
You're doing it 20 times.
So if you use the range operator with floats you should be aware of the precision problem.
But the same applies to any other calculation with floats.
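Because this is plain IEEE double arithmetic, the effect is easy to reproduce outside F# as well. Here is a small Python sketch assuming, as described above, that the range is built by repeatedly adding dl and testing current <= max:

dl = 9.5 / 11.0
lo = 21.5 + dl            # the question's min
hi = 40.5 - dl            # the question's max; mathematically lo + 20*dl == hi

items = []
current = lo
while current <= hi:      # the comparison the float range effectively performs
    items.append(current)
    current += dl

print(len(items))         # 20, matching the question: the accumulated value has overshot hi
print(current - hi)       # the tiny positive error that keeps max out of the list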
