How does the predict_proba() function work internally? - machine-learning

Can anyone help me understand the intuition and math behind predict_proba() in the KNN algorithm, with the help of an example?
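In scikit-learn's KNeighborsClassifier with uniform weights, predict_proba returns, for each class, the fraction of the k nearest neighbours that belong to that class. A minimal NumPy sketch of that idea (toy data and function name are my own, not the library's internals):

```python
import numpy as np

def knn_predict_proba(X_train, y_train, x_query, k=3):
    """Class probabilities for one query point, KNN-style.

    With uniform weights, the probability of each class is simply the
    fraction of the k nearest neighbours that belong to it.
    """
    # Euclidean distance from the query to every training point
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k closest training points
    nearest = np.argsort(dists)[:k]
    classes = np.unique(y_train)
    # Count how many of the k neighbours fall in each class
    counts = np.array([(y_train[nearest] == c).sum() for c in classes])
    return counts / k

X = np.array([[0.0], [0.5], [1.0], [5.0], [5.5]])
y = np.array([0, 0, 0, 1, 1])
print(knn_predict_proba(X, y, np.array([0.8]), k=3))  # → [1. 0.]
print(knn_predict_proba(X, y, np.array([4.0]), k=3))  # → [0.33333333 0.66666667]
```

For the second query, the three closest points are at 5.0, 5.5 (class 1) and 1.0 (class 0), hence the 2/3 vs 1/3 split.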

Related

Why is PyTorch MultiheadAttention considered an activation function?

When scrolling through the activation functions available in the PyTorch package (here), I found that nn.MultiheadAttention is listed among them. Can you please explain why it's considered an activation function? Maybe I'm misunderstanding something, but multi-head attention has its own learnable weights, so it seems more suited to being a layer, not an activation function. Please correct me and give me some insight into what I'm missing.
Thank you!
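The asker's intuition can be checked directly: attention carries learnable projection matrices, whereas a classic activation like ReLU is a fixed element-wise function with no parameters. A single-head NumPy sketch (all names are mine, not PyTorch's internals):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                          # model dimension
x = rng.normal(size=(3, d))    # 3 tokens of dimension d

# Attention owns learnable parameters: the Q/K/V projection matrices.
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Scaled dot-product attention with learned projections
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = softmax(q @ k.T / np.sqrt(d))
    return scores @ v

def relu(x):
    # A classic activation: element-wise, nothing to learn.
    return np.maximum(x, 0.0)

print(self_attention(x).shape)  # → (3, 4)
```

So the asker's point stands: the only "activation-like" piece inside attention is the softmax over the scores; the rest behaves like a layer with weights.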

How to apply a genetic algorithm to 2D or multidimensional images for optimisation

I am trying to code a genetic algorithm in Matlab, but I don't really know how it works on images or how to proceed. Is there a basic tutorial that can help me understand how to apply a GA to images (starting from 2D and going up to multidimensional images)?
That would be a great help for me.
Thanks in advance.
Kind regards.
For GA you need two things: a fitness function that can evaluate any solution and tell how good it is, and a representation of your solution so that you can do crossover and mutation. Once you have these, you are good to go. I'm not an expert on image processing so I can't help you with that exactly.
Look at the book Essentials of Metaheuristics, which is a very good resource for getting started with evolutionary computation (and not only that) in general. It's free.
There is a paper on this subject which you can find at the IEEE library. I believe it solves the problem you vaguely describe.
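As an illustration of the two ingredients named above (a fitness function plus a representation that supports crossover and mutation), here is a minimal bit-string GA sketch in Python; the fitness function is a placeholder for whatever image-quality measure the problem actually needs:

```python
import random

random.seed(42)

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 60

def fitness(genome):
    # Placeholder fitness: count of 1-bits. For images this would be
    # whatever quality measure you want to maximise.
    return sum(genome)

def crossover(a, b):
    # Single-point crossover
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome, rate=0.02):
    # Flip each bit with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

# Random initial population of bit strings
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP_SIZE // 2]          # truncation selection
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    pop = survivors + children

best = max(pop, key=fitness)
print(fitness(best))
```

For images, the genome would typically be a flattened pixel array or a parameter vector describing a transform, and the loop structure stays the same.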

How to choose the right normalization method for the right dataset?

There are several normalization methods to choose from: L1/L2 norm, z-score, min-max. Can anyone give some insight into how to choose the proper normalization method for a dataset?
I didn't pay much attention to normalization before, but I just had a small project where performance was heavily affected not by the parameters or the choice of ML algorithm, but by the way I normalized the data. That was kind of a surprise to me, but it may be a common problem in practice. So, could anyone provide some good advice? Thanks a lot!
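For concreteness, the three methods named in the question behave quite differently on the same data; a small NumPy comparison (toy vector of my choosing, with one outlier to show the sensitivity of min-max):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 10.0])  # note the outlier at 10

# Min-max: rescales to [0, 1]; sensitive to outliers,
# which compress the rest of the range.
min_max = (x - x.min()) / (x.max() - x.min())

# Z-score: zero mean, unit variance; a common default when
# features are roughly Gaussian.
z_score = (x - x.mean()) / x.std()

# L2: scales the vector to unit Euclidean length; common when
# direction matters more than magnitude (e.g. cosine similarity).
l2 = x / np.linalg.norm(x)

print(min_max)
print(z_score)
print(l2)
```

A reasonable rule of thumb is to match the method to the downstream algorithm's assumptions (distance-based methods care about scale; tree-based methods mostly don't), and to treat the choice as a hyperparameter to validate, as the asker discovered.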

Solving an eigenproblem with sparse matrices?

I'm trying to compute the eigenvectors and eigenvalues of a fairly big matrix. I expect the matrix will be sparse, though. Can anyone recommend a way to solve this efficiently?
I've tried cv::eigen, but the process takes way too long. If anyone has an efficient implementation or can point me to a library that solves this, I'd really appreciate it.
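One common route, assuming Python/SciPy is an option alongside OpenCV: cv::eigen works on dense matrices, whereas an ARPACK-style sparse solver computes only the few eigenpairs you actually need. A sketch with scipy.sparse.linalg.eigsh for the symmetric case (the matrix here is a stand-in 1-D Laplacian):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n = 2000
# Sparse symmetric tridiagonal matrix (a 1-D Laplacian),
# stored in CSR format: only ~3n nonzeros instead of n^2 entries.
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1],
          shape=(n, n), format="csr")

# Shift-invert mode (sigma=0) finds the 5 eigenvalues nearest zero,
# i.e. the smallest ones, without a full dense decomposition.
vals, vecs = eigsh(A, k=5, sigma=0)
print(np.sort(vals))
```

If you need all n eigenpairs rather than a few extreme ones, sparsity helps much less and a dense LAPACK routine may be the honest answer; the big win from sparse solvers is when k << n.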

Algorithms used in AChartEngine

I'd like to ask what algorithms have been used in AChartEngine. Is Dijkstra's algorithm used here, or any other algorithms? I'm learning algorithms right now, and if you could give me recommendations on where to start, I'd greatly appreciate it. Thank you.
No fancy algorithm has been used in AChartEngine. The only one is for transforming real data to screen coordinates.
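That data-to-screen transform is just a linear mapping between the data range and the pixel range; a sketch of the idea (my own code, not AChartEngine's actual implementation):

```python
def data_to_screen(x, x_min, x_max, screen_left, screen_right):
    """Linearly map a data value into pixel coordinates.

    t is the data value's position within [x_min, x_max] as a
    fraction in [0, 1]; it is then stretched onto the pixel range.
    """
    t = (x - x_min) / (x_max - x_min)
    return screen_left + t * (screen_right - screen_left)

# A data value of 5 on a 0..10 axis lands in the middle of a
# 400-pixel-wide plot area.
print(data_to_screen(5.0, 0.0, 10.0, 0, 400))  # → 200.0
```

The y-axis version is the same formula with the pixel endpoints swapped, since screen y usually grows downward.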
