I have a distributed dask array with shape (2400, 2400) and chunksize (100, 100). I thought I could use topk(-n) to find the smallest n values. However, it appears to return an array of shape (2400, n), so it looks like it finds the smallest n in each row. Is there a way to use topk to get the smallest n values across all rows (the entire array)?
One idea is to call topk twice, once for each axis.
>>> dist
dask.array<pow, shape=(2400, 2400), dtype=float64, chunksize=(100, 100)>
>>> dist.topk(-5,axis=0).topk(-5,axis=1).compute()
array([[   0.        , 2620.09503644, 2842.15200157, 2955.08409356,
        3163.49458669],
       [3660.67698657, 3670.4457495 , 3700.09837707, 3717.09052889,
        4002.86497399],
       [4125.89820524, 4139.44658137, 4250.50420539, 4331.01304547,
        4402.14606754],
       [4328.22966119, 4378.25193428, 4507.94409903, 4522.4913488 ,
        4555.06860541],
       [4441.58755402, 4560.95625938, 4576.39333974, 4682.06215251,
        4765.11531865]])
One idea is to call topk twice, once for each axis.
Sounds good to me!
You might consider flattening the array first, but I can't see an advantage to this over what you've already found:
x.flatten().topk(...)
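Putting the two together, a minimal sketch of taking the n smallest values of the whole array (same method-style API as above; in newer dask this is spelled dask.array.topk(x, k)):

# Flatten to 1-D, then ask topk for the 5 smallest (negative k) along the only axis
smallest5 = dist.flatten().topk(-5).compute()

# Or reduce the 5x5 result of the double-topk call above once more:
smallest5 = dist.topk(-5, axis=0).topk(-5, axis=1).flatten().topk(-5).compute()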
After digging through my code over and over again, I am desperate enough to ask online, hoping that someone can help me. I am trying to develop a Python-Fu script, and one of its essential parts is mapping the image to an object. But whenever I try to call pdb.plug_in_map_object(), the console says
File "<input>", line 29, in <module> TypeError: wrong parameter type.
My current code looks like this:
pdb.plug_in_map_object(
#image, drawable, maptype=sphere
gimp.Image,gimp.Layer,1,
#viewpoint x, y, z
0.5,0.5,1,
#position x, y, z
0.5,0.5,0,
#first-axis x, y, z
1,0,0,
#second-axis x, y, z
0,1,0,
#rotation-angle x, y, z
0,0,0,
#lighttype=none
2,
#light color (r,g,b)
(0,0,0),
#light position x, y, z
-0.5,-0.5,2,
#light direction x, y, z
-1,-1,1,
#ambientintensity, diffuseintensity, diffusereflectivity, specularreflectivity
0.3,1,0.5,0.5,
#highlight, antialiasing, tiled, newimage, transparentbackground, radius
27,1,0,0,1,0.25,
#scale x, y, z
0.5,0.5,0.5,
#cylinderlength, 8 drawables for cylinders & boxes
0,gimp.Layer,gimp.Layer,gimp.Layer,gimp.Layer,gimp.Layer,gimp.Layer,gimp.Layer,gimp.Layer
);
(Note that this is not the code I use in my script; I use these ugly and senseless gimp.Layer's to make the Python console accept it. I want to be able to call the function correctly before filling in the right values.)
Line 29, which is mentioned in the error, is the last one, containing nothing but one PF_INT32 and eight PF_DRAWABLEs. This is exactly the way those parameters are declared in the oldest as well as in the latest (GIMP git) source code I found (if you don't want to download the latest GIMP code, I uploaded the relevant file here).
Can someone please tell me what I am doing wrong?
It works now, and the only thing that changed was that I used some random (existing) layer instead of passing gimp.Layer. So in my case, I just use the currently active layer via image.active_layer. It doesn't matter when mapping to a sphere anyway; these extra layers are only used when mapping to a box or cylinder.
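A minimal sketch of the corrected call, reusing the values from the question (assumes at least one image is open; gimp.image_list()[0] and active_layer stand in for whatever image and drawable you actually use):

image = gimp.image_list()[0]    # any existing image
layer = image.active_layer      # any existing drawable works for sphere mapping

pdb.plug_in_map_object(
    image, layer, 1,           # image, drawable, maptype=sphere
    0.5, 0.5, 1,               # viewpoint x, y, z
    0.5, 0.5, 0,               # position x, y, z
    1, 0, 0,                   # first-axis x, y, z
    0, 1, 0,                   # second-axis x, y, z
    0, 0, 0,                   # rotation-angle x, y, z
    2,                         # lighttype=none
    (0, 0, 0),                 # light color (r, g, b)
    -0.5, -0.5, 2,             # light position x, y, z
    -1, -1, 1,                 # light direction x, y, z
    0.3, 1, 0.5, 0.5,          # ambient intensity, diffuse intensity,
                               # diffuse reflectivity, specular reflectivity
    27, 1, 0, 0, 1, 0.25,      # highlight, antialiasing, tiled, newimage,
                               # transparent background, radius
    0.5, 0.5, 0.5,             # scale x, y, z
    0,                         # cylinder length
    layer, layer, layer, layer,    # 8 drawables, only used when mapping
    layer, layer, layer, layer)    # to a box or cylinder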
I am trying to extract Rotation matrix and Translation vector from the essential matrix.
SVD svd(E,SVD::MODIFY_A);
Mat svd_u = svd.u;
Mat svd_vt = svd.vt;
Mat svd_w = svd.w;
Matx33d W(0, -1, 0,
          1,  0, 0,
          0,  0, 1);
Mat_<double> R = svd_u * Mat(W).t() * svd_vt; //or svd_u * Mat(W) * svd_vt;
Mat_<double> t = svd_u.col(2); //or -svd_u.col(2)
However, when I use R and T (e.g. to obtain rectified images), the result does not seem right (black images or some obviously wrong outputs), even though I tried every combination of the possible R and T.
I suspect E. According to the textbooks, my calculation is right if we have:
E = U*diag(1, 1, 0)*Vt
In my case svd.w, which is supposed to be diag(1, 1, 0) (at least up to scale), is not. Here is an example of my output:
svd.w = [21.47903827647813; 20.28555196246256; 5.167099204708699e-010]
Also, two of the eigenvalues of E should be equal and the third one should be zero. In the same case the result is:
eigenvalues of E = 0.0000 + 0.0000i, 0.3143 +20.8610i, 0.3143 -20.8610i
As you see, two of them are complex conjugates.
Now, the questions are:
Is the decomposition of E and the calculation of R and T done the right way?
If the calculation is right, why are the internal constraints of the essential matrix not satisfied by the results?
If everything about E, R, and T is fine, why are the rectified images obtained from them not correct?
I get E from the fundamental matrix, which I believe to be correct. The epipolar lines I draw on both the left and right images all pass through the corresponding points (for all 16 points used to calculate the fundamental matrix).
Any help would be appreciated.
Thanks!
I see two issues.
First, discounting the negligible value of the third diagonal term, your E is about 6% off the ideal one: err_percent = (21.48 - 20.29) / 20.29 * 100. That sounds small, but translated into pixel error it may be a much larger amount.
So I'd start by replacing E with the ideal one after SVD decomposition: Er = U * diag(1,1,0) * Vt.
Second, the textbook decomposition admits 4 solutions, only one of which is physically plausible (i.e. with the 3D points in front of the camera). You may be hitting one of the non-physical ones. See http://en.wikipedia.org/wiki/Essential_matrix#Determining_R_and_t_from_E .
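A minimal sketch of both fixes in plain NumPy (mirroring the OpenCV snippet above; E stands for your 3x3 essential matrix):

import numpy as np

# Project E onto an ideal essential matrix: force singular values (1, 1, 0)
U, s, Vt = np.linalg.svd(E)
Er = U @ np.diag([1.0, 1.0, 0.0]) @ Vt

# Enumerate the four candidate (R, t) decompositions of Er
W = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
U, _, Vt = np.linalg.svd(Er)
candidates = []
for R in (U @ W @ Vt, U @ W.T @ Vt):
    if np.linalg.det(R) < 0:   # keep only proper rotations (det = +1)
        R = -R
    for t in (U[:, 2], -U[:, 2]):
        candidates.append((R, t))
# Keep the one candidate whose triangulated points land in front of both cameras.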
I have a series of lat/long coordinates. I need to transform them into X, Y coordinates. I've read about UTM, but the problem is that UTM coordinates are relative to a single zone.
For example, these two UTM coordinates have the same easting (x) and northing (y) but different zone codes, so each points to a completely different location (one in Spain and one in Italy):
UTM: 33T 292625m E 4641696m N
UTM: 30U 292625m E 4641696m N
I need a method to automatically transform those zone-relative coordinates into absolute X, Y coordinates. Any ideas?
Does it have to be UTM? If not, you can also use Mercator, which is a simpler projection that doesn't rely on zones.
See, for example, the Bing Maps system.
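For instance, a minimal sketch of the spherical ("Web") Mercator projection used by systems like Bing Maps (plain Python; the function name is illustrative):

import math

R = 6378137.0  # WGS84 equatorial radius in metres

def latlon_to_mercator(lat_deg, lon_deg):
    # One global x/y plane, no zones; valid for latitudes up to about 85 degrees
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y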
You should be able to use the ProjNET library.
What you need is to find the WKT (well-known text) that defines your projections; then you should be able to convert between them.
var utm33NCoordinateSystem = CoordinateSystemWktReader.Parse("WKT for correct utm zone") as IProjectedCoordinateSystem;
var wgs84CoordinateSystem = CoordinateSystemWktReader.Parse(MappingTransforms.WGS84) as IGeographicCoordinateSystem;
var ctfac = new CoordinateTransformationFactory();
var utmToWgsTransformation = ctfac.CreateFromCoordinateSystems(utm33NCoordinateSystem, wgs84CoordinateSystem);
double[] transformed = utmToWgsTransformation.MathTransform.Transform(new double[] { y, x });
Note: you have to find the correct WKTs, but that can be found on the project site.
Also you may have to flip the order of the inputs, depending on the transforms.
If you want to compute the bearing and distance between two points, you can use the polar (POL) method on a scientific calculator: press the POL button, then the opening bracket.
For the distance: POL(N2 - N1, E2 - E1)
For the bearing: POL(N2 - N1, E2 - E1), then recall the stored angle (Rcl tan⁻¹)
Now you have the correct bearing.
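In code, a minimal sketch of what the calculator's POL() function computes (plain Python; assumes grid northings/eastings and bearings measured clockwise from north):

import math

def distance_and_bearing(n1, e1, n2, e2):
    # POL(dN, dE): the distance is the vector length, the bearing its angle
    dn, de = n2 - n1, e2 - e1
    distance = math.hypot(dn, de)
    bearing = math.degrees(math.atan2(de, dn)) % 360
    return distance, bearing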
I'm trying to develop an application that uses a SOM to analyze data. However, after finishing training, I cannot find a way to visualize the result. I know that the U-Matrix is one such method, but I cannot understand it properly. Hence, I'm asking for a specific and detailed example of how to construct a U-Matrix.
I also read an answer at U-matrix and self organizing maps, but it only refers to a 1-row map. What about a 3x3 map? I know that for a 3x3 map:
m(1) m(2) m(3)
m(4) m(5) m(6)
m(7) m(8) m(9)
a 5x5 matrix must be created:
u(1) u(1,2) u(2) u(2,3) u(3)
u(1,4) u(1,2,4,5) u(2,5) u(2,3,5,6) u(3,6)
u(4) u(4,5) u(5) u(5,6) u(6)
u(4,7) u(4,5,7,8) u(5,8) u(5,6,8,9) u(6,9)
u(7) u(7,8) u(8) u(8,9) u(9)
but I don't know how to calculate the u-weights u(1,2,4,5), u(2,3,5,6), u(4,5,7,8) and u(5,6,8,9).
Finally, after constructing the U-Matrix, is there any way to visualize it using color, e.g. a heat map?
Thank you very much for your time.
Cheers
I don't know if you are still interested in this, but I found this link:
http://www.uni-marburg.de/fb12/datenbionik/pdf/pubs/1990/UltschSiemon90
It explains very specifically how to calculate the U-matrix.
Hope it helps.
By the way, the site where I found the link has several other resources on SOMs; I leave it here in case anyone is interested:
http://www.ifs.tuwien.ac.at/dm/somtoolbox/visualisations.html
The essential idea of a Kohonen map is that the data points are mapped to a
lattice, which is often a 2D rectangular grid.
In the simplest implementations, the lattice is initialized by creating a 3D
array with these dimensions:
width * height * number_features
This is the U-matrix.
Width and height are chosen by the user; number_features is just the number
of features (columns or fields) in your data.
Intuitively, this just creates a 2D grid of dimensions w * h
(e.g., if w = 10 and h = 10 then your lattice has 100 cells), then
places into each cell a random 1D array (sometimes called a "reference tuple")
whose size and values are constrained by your data.
The reference tuples are also referred to as weights.
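A minimal sketch of that initialization (NumPy; the names are illustrative):

import numpy as np

w, h, number_features = 10, 10, 3   # user-chosen grid size; 3 features for rgb data
# one random reference tuple ("weight") per lattice cell
lattice = np.random.randint(0, 256, size=(h, w, number_features))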
How is the U-matrix rendered?
In my example below, the data is comprised of rgb tuples, so the reference tuples
have a length of three, and each of the three values must lie between 0 and 255.
It's with this 3D array ("lattice") that you begin the main iterative loop.
The algorithm iteratively positions each data point so that it is closest to others similar to it.
If you plot it over time (iteration number) then you can visualize cluster
formation.
The plotting tool I use for this is the brilliant Python library Matplotlib,
which plots the lattice directly, just by passing it into the imshow function.
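For example (assuming lattice is the h x w x 3 array described above, with values in 0-255):

import matplotlib.pyplot as plt

plt.imshow(lattice.astype('uint8'))   # each cell's rgb tuple becomes one pixel
plt.show()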
Below are eight snapshots of the progress of a SOM algorithm, from initialization to 700 iterations. The newly initialized (iteration_count = 0) lattice is rendered in the top left panel; the result from the final iteration, in the bottom right panel.
Alternatively, you can use a lower-level imaging library (in Python, e.g., PIL) and transfer the reference tuples onto the 2D grid, one at a time:
for y in range(h):
    for x in range(w):
        # putpixel expects a tuple of integer rgb values
        img.putpixel((x, y), (
            int(SOM.Umatrix[y, x, 0]),
            int(SOM.Umatrix[y, x, 1]),
            int(SOM.Umatrix[y, x, 2])))
Here img is an instance of PIL's Image class. The image is created by iterating over the grid one pixel at a time; for each pixel, putpixel is called once on img, passing the rgb tuple whose three values come from the corresponding cell of the lattice.
From the matrix that you create:
u(1) u(1,2) u(2) u(2,3) u(3)
u(1,4) u(1,2,4,5) u(2,5) u(2,3,5,6) u(3,6)
u(4) u(4,5) u(5) u(5,6) u(6)
u(4,7) u(4,5,7,8) u(5,8) u(5,6,8,9) u(6,9)
u(7) u(7,8) u(8) u(8,9) u(9)
The elements with a single number, like u(1), u(2), ..., u(9), as well as the elements with more than two numbers, like u(1,2,4,5), u(2,3,5,6), ..., u(5,6,8,9), are calculated using something like the mean, median, min or max of the values in their neighborhood.
It's a good idea to calculate the elements with two numbers first; one possible implementation is:
for i in range(self.h_u_matrix):
    for j in range(self.w_u_matrix):
        nb = (0, 0)
        if not (i % 2) and (j % 2):
            nb = (0, 1)
        elif (i % 2) and not (j % 2):
            nb = (1, 0)
        self.u_matrix[(i, j)] = np.linalg.norm(
            self.weights[i // 2, j // 2] - self.weights[i // 2 + nb[0], j // 2 + nb[1]],
            axis=0)
In the code above, self.h_u_matrix = self.weights.shape[0]*2 - 1 and self.w_u_matrix = self.weights.shape[1]*2 - 1 are the dimensions of the U-Matrix. That said, to calculate the other elements it is necessary to obtain a list of their neighbors and apply, for example, a mean over them. The following code implements that idea:
for i in range(self.h_u_matrix):
    for j in range(self.w_u_matrix):
        if not (i % 2) and not (j % 2):
            nodelist = []
            if i > 0:
                nodelist.append((i - 1, j))
            if i < self.h_u_matrix - 1:
                nodelist.append((i + 1, j))
            if j > 0:
                nodelist.append((i, j - 1))
            if j < self.w_u_matrix - 1:
                nodelist.append((i, j + 1))
            meanlist = [self.u_matrix[u_node] for u_node in nodelist]
            self.u_matrix[(i, j)] = np.mean(meanlist)
        elif (i % 2) and (j % 2):
            nodelist = [
                (i - 1, j),
                (i + 1, j),
                (i, j - 1),
                (i, j + 1)]
            # average the u-values at those positions, not the index tuples
            meanlist = [self.u_matrix[u_node] for u_node in nodelist]
            self.u_matrix[(i, j)] = np.mean(meanlist)
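Once the U-Matrix is filled in, Matplotlib can render it as the heat map the question asks about; a minimal sketch (assuming u_matrix is the 2D array built above):

import matplotlib.pyplot as plt

plt.imshow(u_matrix, cmap='hot', interpolation='nearest')   # bright = large distances (cluster borders)
plt.colorbar(label='distance between neighbouring units')
plt.show()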