Power BI X and Y axes - histogram

I am new to Power BI.
I want to create a histogram and define the axes:
X -> Function (Term, R&D...)
Y -> Method (Cost, Benefit...)
Value -> Sum(amount)
and within each Method have a vertical histogram.
I don't see where I can define the X and Y axes to build my histogram.
In Tableau Software and Spotfire I can define several dimensions in the columns, but not in Power BI.
Thank you.

Did you give the histogram custom visual a try?

Related

Combine two datasets, each with their own X and Y axes, in the same plot in Tableau

We have two datasets that each consist of an X and a Y axis. The two X axes and the two Y axes have the same scaling (millimeters), but the values of course differ, so for the X values in dataset 1 there are no corresponding values in dataset 2.
If we just put the plots into one plot with dual X and dual Y axes, the two datasets are combined into four different plots, one for each combination of the X and Y axes. We want the plots for X1/Y1 and X2/Y2, but we also get X1/Y2 and X2/Y1, which do not make any sense at all.
How do we correctly combine the two datasets into a single plot where they share the same X and Y axes but do not mix like that?
The easiest solution is to combine and reshape your data to have three columns: X, Y, Type, where Type distinguishes between the data sets (it could be actual vs. predicted, for example). Then just put X (as a continuous dimension) on Columns, Y on Rows, and Type on Color or Detail.
You can reshape the data like this using the UNION feature when defining your data source.
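For comparison, here is the same reshape sketched outside Tableau with Python/pandas (the column and dataset contents are illustrative; Tableau's UNION feature does the equivalent stacking):

    import pandas as pd

    # Two datasets, each with its own X and Y columns (values are hypothetical)
    ds1 = pd.DataFrame({"X": [0.0, 1.0, 2.0], "Y": [1.0, 2.0, 3.0]})
    ds2 = pd.DataFrame({"X": [0.5, 1.5, 2.5], "Y": [2.0, 1.0, 0.5]})

    # Tag each row with its source, then stack -- this mirrors a UNION
    ds1["Type"] = "dataset 1"
    ds2["Type"] = "dataset 2"
    combined = pd.concat([ds1, ds2], ignore_index=True)
    # 'combined' now has the three columns X, Y, Type described above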

How to generate a probability distribution on an image

I have a question as follows:
Suppose I have an image (size = 360x640, rows by columns), and I have a center coordinate, let's say (20, 100). What I want is to generate a probability distribution that has its highest value at that center (20, 100), lower values in the neighborhood, and much lower values farther from the center.
All I have figured out is to use a multivariate Gaussian (since the space is 2D) and set the mean to the center (20, 100). But is that correct, and how do I design the covariance matrix?
Thanks!!
You could do it in 2D by generating polar coordinates (radius and angle).
Along the lines of (in Python):
    import math
    import random

    def sample_gaussian_point(cx=20, cy=100, scale=1.0):
        # Box-Muller: radius and angle of a sample from a unit 2D Gaussian
        r = math.sqrt(-2.0 * math.log(1.0 - random.random()))
        a = 2.0 * math.pi * random.random()
        x = scale * r * math.cos(a)
        y = scale * r * math.sin(a)
        return (x + cx, y + cy)
Here scale is a parameter that converts the unitless Gaussian into units applicable to your problem, and U(0,1) (random.random() above) is a uniform random value in [0, 1).
Reference: Box-Muller sampling.
If you want a generic 2D Gaussian, meaning an ellipse in 2D, then you'll have to use different scales for X and Y and rotate the (x, y) vector by a predefined angle using the well-known rotation matrix.
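A minimal sketch of that elliptical variant, building on the sampler above (scale_x, scale_y, and the angle theta are parameters you would choose for your problem):

    import math
    import random

    def sample_elliptical_point(cx, cy, scale_x, scale_y, theta):
        # unit 2D Gaussian via Box-Muller, as above
        r = math.sqrt(-2.0 * math.log(1.0 - random.random()))
        a = 2.0 * math.pi * random.random()
        x = scale_x * r * math.cos(a)
        y = scale_y * r * math.sin(a)
        # rotate (x, y) by theta using the standard 2D rotation matrix
        xr = x * math.cos(theta) - y * math.sin(theta)
        yr = x * math.sin(theta) + y * math.cos(theta)
        return (xr + cx, yr + cy)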

Alternating Least Squares Derivative

I'm trying to understand recommender systems that use ALS by reading some content here: https://blog.insightdatascience.com/explicit-matrix-factorization-als-sgd-and-all-that-jazz-b00e4d9b21ea
I don't understand how the second line of the equation follows from the first line.
Moreover, x_u^T seems to have dimensions k x 1, and Y^T seems to have dimensions k x m.
In that case, how can these matrices be multiplied?
Thanks!
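For the dimension question, here is a minimal NumPy sketch of the standard ALS user-factor update, assuming the usual convention from such derivations where Y stacks the item factors as rows (so Y is m x k and Y^T is k x m):

    import numpy as np

    k, m = 3, 5                       # latent dimension, number of items
    Y = np.random.rand(m, k)          # item-factor matrix, one row per item (m x k)
    r_u = np.random.rand(m)           # user u's ratings over all m items
    lam = 0.1                         # regularization weight

    # Standard ALS normal-equation update: x_u = (Y^T Y + lam*I)^(-1) Y^T r_u
    x_u = np.linalg.solve(Y.T @ Y + lam * np.eye(k), Y.T @ r_u)
    print(x_u.shape)                  # (k,): x_u is a k-vector

One way to reconcile the shapes: if x_u is k x 1, then its transpose x_u^T is 1 x k (not k x 1), so the product x_u^T Y^T is (1 x k)(k x m) = 1 x m, which is well-defined.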

Implementing convolutional neural network backprop in ArrayFire (gradient calculation)

I modified equation 9.12 in http://www.deeplearningbook.org/contents/convnets.html to center the MxN convolution kernel.
That gives the following expression (take it on faith for now) for the gradient, assuming 1 input and 1 output channel (to simplify):
dK(krow, kcol) = sum(G(row, col) * V(row+krow-M/2, col+kcol-N/2); row, col)
To read the above: the single element of dK at (krow, kcol) equals the sum, over all rows and cols, of the product of G with a shifted V. Note that G and V have the same dimensions; reads outside V are defined to be zero.
For example, in one dimension, if G is [a b c d], V is [w x y z], and M is 3, then the first sum is dot(G, [0 w x y]), the second sum is dot(G, [w x y z]), and the third sum is dot(G, [x y z 0]).
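(Not ArrayFire, but a minimal NumPy sketch of that sum may help pin down the indexing; zero padding implements the reads-outside-V-are-zero rule:

    import numpy as np

    def grad_kernel(G, V, M, N):
        # dK(krow, kcol) = sum over (row, col) of
        #   G(row, col) * V(row + krow - M/2, col + kcol - N/2), zeros outside V
        Vp = np.pad(V, ((M // 2, M // 2), (N // 2, N // 2)))
        R, C = G.shape
        dK = np.zeros((M, N))
        for krow in range(M):
            for kcol in range(N):
                dK[krow, kcol] = np.sum(G * Vp[krow:krow + R, kcol:kcol + C])
        return dK

Treating the 1D example as a 1 x 4 image with a 1 x 3 kernel, grad_kernel reproduces the three dot products above.)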
ArrayFire has a shift operation, but it does a circular shift rather than a shift with zero insertion. Also, the kernel sizes MxN are typically small, e.g., 7x7, so it seems a more optimal implementation would read G and V only once and accumulate over the kernel.
For that 1D example, we would read in a and w, x and start with [a*0 a*w a*x]. Then we read in b, y and add [b*w b*x b*y]. Then we read in c, z and add [c*x c*y c*z]. Then we read in d and finally add [d*y d*z d*0].
Is there a direct way to compute dK in ArrayFire? I can't help but think this is some kind of convolution, but I've been unable to wrap my head around what the convolution would look like.
Ah so. For a 3x3 dK array, I use unwrap to convert my MxN input arrays into two MN x 1 column vectors. Then I do 9 dot products of shifted subsets of the two column vectors. No, that doesn't work, since the shift is in 2 dimensions.
So I need to create intermediate arrays of size 1 x (MN) and (MN) x 9, where each column of the latter is a shifted MxN window of the original with a zero-pad border of size 1, and then do a matrix multiply.
Hmm, that (sometimes) requires too much memory. So the final solution is to do a gfor over the 3x3 output and, in each iteration, take a dot product of the once-unwrapped G and the repeatedly-unwrapped V.
Agreed?

T and R estimation from essential matrix

I created a simple test application to perform translation (T) and rotation (R) estimation from the essential matrix:
Generate 50 random points.
Calculate projection pointSet1.
Transform the points via the matrix (R|T).
Calculate the new projection pointSet2.
Then calculate the fundamental matrix F.
Extract the essential matrix as E = K2^T F K1 (K1, K2 are the internal camera matrices).
Use SVD to get UDV^T.
Calculate restoredR1 = UWV^T and restoredR2 = UW^T V^T, and see that one of them equals the initial R.
But when I calculate the translation vector, restoredT = UZU^T, I get a normalized T:
restoredT * max(T.x, T.y, T.z) = T
How do I restore the correct translation vector?
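For reference, here is a minimal NumPy sketch of the standard decomposition (following Hartley & Zisserman); note that the translation comes out of the SVD only as a unit-length direction, which is why the scale is missing:

    import numpy as np

    def decompose_essential(E):
        # SVD of the essential matrix: E = U D V^T
        U, _, Vt = np.linalg.svd(E)
        # Enforce proper rotations (determinant +1)
        if np.linalg.det(U) < 0:
            U = -U
        if np.linalg.det(Vt) < 0:
            Vt = -Vt
        W = np.array([[0., -1., 0.],
                      [1.,  0., 0.],
                      [0.,  0., 1.]])
        R1 = U @ W @ Vt           # restoredR1 = U W V^T
        R2 = U @ W.T @ Vt         # restoredR2 = U W^T V^T
        t = U[:, 2]               # translation direction only: unit length, sign ambiguous
        return R1, R2, t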
I understand! I don't need a real length estimate at this step.
When I get the first image, I must set a metric transformation (scale factor), or estimate it from calibration against a known object. Afterwards, when I receive the second frame, I calculate the normalized T, use the known 3D coordinates from the first frame to solve the equation (s*x2, s*y2, 1) = K(R|lambda*T)(X, Y, Z, 1), and find lambda; then lambda*T is the correct metric translation.
I checked it, and this is true. So... maybe someone knows a simpler solution?
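That lambda equation is linear, so here is a hedged NumPy sketch of it under the assumptions above (names are illustrative: X is a known 3D point in the first camera's frame, (x2, y2) its pixel in the second image):

    import numpy as np

    def solve_lambda(K, R, t, X, x2, y2):
        # K (R X + lambda t) = s [x2, y2, 1]^T is linear in (lambda, s):
        #   lambda * (K t) - s * [x2, y2, 1]^T = -K R X
        b = K @ (R @ X)
        A = np.stack([K @ t, -np.array([x2, y2, 1.0])], axis=1)
        (lam, s), *_ = np.linalg.lstsq(A, -b, rcond=None)
        return lam                # lambda * t is the metric translation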
