Maximum a Posteriori Derivation - machine-learning

Can anyone please tell me how the author got from equation 1 to equation 2? I've applied Bayes' rule, but I'm not able to arrive at equation 2 directly. Thanks so much in advance.

These two are equal by the definition of Bayes' theorem :)
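The original equations aren't reproduced here, but the step this kind of question usually asks about is the one below: apply Bayes' theorem, then drop the denominator because it does not depend on the parameter being maximized. (The symbols θ and D are generic stand-ins, not necessarily the author's notation.)

```latex
\hat{\theta}_{\mathrm{MAP}}
  = \arg\max_{\theta} \, p(\theta \mid D)
  = \arg\max_{\theta} \, \frac{p(D \mid \theta)\, p(\theta)}{p(D)}
  = \arg\max_{\theta} \, p(D \mid \theta)\, p(\theta)
```

The last equality holds because $p(D)$ is a constant with respect to $\theta$, so it does not change where the maximum occurs.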

Related

Autocorrelation interpretation

Hi, I am trying to understand how to interpret autocorrelation:
By looking at the graph, how can we describe the autocorrelation with the previous period (t + 1 in relation to t)? TIA.
The following section from the Forecasting: Principles and Practice book may help you understand how to interpret autocorrelation plots.
Thanks.
[1] https://otexts.com/fpp2/autocorrelation.html
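To make the "previous period" idea concrete, here is a minimal sketch (not from the book) of the sample autocorrelation at a given lag, which is what each bar in an ACF plot shows:

```python
import numpy as np

def autocorr(x, lag):
    """Sample autocorrelation at a given lag: correlation of the series
    with a copy of itself shifted back by `lag` periods."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # denominator uses the full-series variance, as standard ACF plots do
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# a steadily rising series is strongly correlated with its previous period
series = [1, 2, 3, 4, 5, 6, 7, 8]
r1 = autocorr(series, 1)  # lag-1 autocorrelation, close to 1
```

A bar near 1 at lag 1 means each value closely tracks the value one period earlier; a bar near 0 means knowing the previous period tells you little about the current one.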

Some questions about SAMME.R

The picture of the algorithm
The paper about SAMME.R algorithm
First, in step 2a, it fits a classifier T(x) to the training data using the weights, but I don't understand how the algorithm uses the classifier T(x) in the following steps.
Second, in step 2b, I don't know how to obtain the weighted class probability estimates. It just says we can use a decision tree to estimate the probabilities, but I don't know how to do that.
Thanks in advance. My English is poor and my question may be vague; I am sorry for this. If you can't understand my question, just comment and I will do my best to state it more clearly. Thank you very much!
I also found myself contemplating the same problem not long ago. For whatever it's worth, here is my opinion on the matter:
Step 2a
Train a DecisionTree or any other classifier that can supply probability estimates. You can find an interesting article on estimating probabilities with DecisionTrees here. This classifier will be used in step 2b.
Step 2b
One way to look at this is to expand the formula:
In words: to compute the weighted probability for some label i, you multiply the probability estimated in the previous step for label i by the sum of the weights of the samples that have label i. In practice, the classifier in step 2a may use the weights in some other way and simply supply the weighted probability estimates at the end. A nice post on this for decision trees is here.
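The interpretation above can be sketched in a few lines. Note this is a hypothetical illustration of that reading of step 2b, not the SAMME.R paper's exact procedure; the function name and the renormalization step are my own:

```python
import numpy as np

def weighted_class_probabilities(proba, y, w):
    """Sketch of step 2b as interpreted above: scale each class's estimated
    probability by the total weight of the training samples in that class,
    then renormalize so the result sums to 1.

    proba : (n_classes,) probability estimates from the step-2a classifier
    y     : (n_samples,) training labels, encoded 0..n_classes-1
    w     : (n_samples,) current sample weights
    """
    class_weight = np.array([w[y == k].sum() for k in range(len(proba))])
    p = proba * class_weight
    return p / p.sum()

# toy example: two classes, the second carries more total training weight
proba = np.array([0.5, 0.5])
y = np.array([0, 0, 1, 1, 1])
w = np.array([0.1, 0.1, 0.2, 0.3, 0.3])
p_weighted = weighted_class_probabilities(proba, y, w)
```

The effect is that classes whose samples currently carry more boosting weight get their probability estimates pushed up accordingly.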
I hope you find this answer helpful!

forward backward algorithm pseudocode clarification

I had a question regarding the pseudocode on the Wikipedia page (https://en.wikipedia.org/wiki/Forward%E2%80%93backward_algorithm#Python_example) for the forward–backward algorithm. Specifically, what is the purpose of this section of code:
# merging the two parts
posterior = []
for i in range(L):
    posterior.append({st: fwd[i][st] * bkw[i][st] / p_fwd for st in states})
This might be a little late.
The goal of this algorithm is to do inference on the data given the model. This loop merges the forward and backward passes: for each time step i it computes the posterior probability P(x_i | o_1, ..., o_L) of each hidden state given the whole observation sequence, normalizing by the total observation probability p_fwd. That posterior is what you then use in the rest of the procedure.
Hope that helps.
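For context, here is a self-contained sketch of the whole algorithm on a toy two-state HMM, with made-up model parameters; the variable names (fwd, bkw, p_fwd, posterior) mirror the Wikipedia pseudocode quoted above so the "merging" loop can be seen in place:

```python
# toy 2-state HMM (assumed illustrative parameters, not from the question)
states = ("Healthy", "Fever")
start_p = {"Healthy": 0.6, "Fever": 0.4}
trans_p = {"Healthy": {"Healthy": 0.7, "Fever": 0.3},
           "Fever":   {"Healthy": 0.4, "Fever": 0.6}}
emit_p = {"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
          "Fever":   {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}}
obs = ["normal", "cold", "dizzy"]
L = len(obs)

# forward pass: fwd[i][st] = P(o_1..o_i, state_i = st)
fwd = []
for i, o in enumerate(obs):
    cur = {}
    for st in states:
        prev = start_p[st] if i == 0 else sum(
            fwd[i - 1][ps] * trans_p[ps][st] for ps in states)
        cur[st] = emit_p[st][o] * prev
    fwd.append(cur)
p_fwd = sum(fwd[-1][st] for st in states)  # P(o_1..o_L)

# backward pass: bkw[i][st] = P(o_{i+1}..o_L | state_i = st)
bkw = [dict.fromkeys(states, 1.0) for _ in range(L)]
for i in range(L - 2, -1, -1):
    for st in states:
        bkw[i][st] = sum(trans_p[st][ns] * emit_p[ns][obs[i + 1]] * bkw[i + 1][ns]
                         for ns in states)

# merging the two parts: posterior[i][st] = P(state_i = st | o_1..o_L)
posterior = []
for i in range(L):
    posterior.append({st: fwd[i][st] * bkw[i][st] / p_fwd for st in states})
```

Each entry of posterior sums to 1 across states, which is a quick sanity check that the merge and the normalization by p_fwd are correct.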

ruby-on-rails gems to determine probability curve

Is there a gem that lets you provide 3 to 4 known probability values and generates an equation for the lognormal curve of best fit? The curve would then be used to determine values of y for a given x.
thanks Pierre
I didn't appreciate how hard this would be. A third party is providing JavaScript code to solve this problem. Thanks - Pierre
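The underlying math is not Ruby-specific, so here is a sketch in Python (I'm not aware of a gem that does this directly). It relies on the fact that the log of a lognormal density is a quadratic in log x, so three or more (x, y) points can be fit with an ordinary polynomial fit and the parameters recovered from the coefficients; the function names and the test values are my own:

```python
import numpy as np

def lognormal_pdf(x, mu, sigma):
    """Lognormal density with parameters mu, sigma."""
    return (np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2))
            / (x * sigma * np.sqrt(2 * np.pi)))

def fit_lognormal(x, y):
    """Fit mu, sigma so that y ≈ lognormal_pdf(x; mu, sigma).

    Taking logs: ln y = a (ln x)^2 + b ln x + c, with
    a = -1/(2 sigma^2) and b = mu/sigma^2 - 1, so a quadratic
    fit in log-log space recovers the parameters directly.
    """
    a, b, c = np.polyfit(np.log(x), np.log(y), 2)
    sigma2 = -1.0 / (2.0 * a)
    mu = sigma2 * (b + 1.0)
    return mu, np.sqrt(sigma2)

# 3-4 known points sampled from a lognormal with mu = 0.5, sigma = 0.8
xs = np.array([0.5, 1.0, 2.0, 4.0])
ys = lognormal_pdf(xs, 0.5, 0.8)
mu, sigma = fit_lognormal(xs, ys)
# with mu and sigma recovered, lognormal_pdf(x, mu, sigma) gives y for any x
```

The same least-squares-on-logs approach would translate to Ruby (e.g. with a matrix library) since it only needs a degree-2 polynomial fit.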

Difference between SVM and MPM (example)

I know the theoretical difference between these methods, but could someone give an example where SVM and MPM produce different results? They seem like the same thing to me.
An image would be awesome.
SVM: maximizes the margin between the two classes, so its boundary is determined only by the support vectors near the margin.
MPM: minimizes the worst-case probability of misclassification given each class's mean and covariance, so its boundary depends on class-level statistics. With classes of very different spread, the two boundaries can differ noticeably.
Thank you.
Here is a link to the picture.
