My takeaways from the 5th lecture of the Stanford machine learning course.
The algorithms we have studied in the lectures so far are all discriminative learning algorithms. Formally, algorithms that try to learn p(y|x) directly are called discriminative learning algorithms. This lecture is about generative learning algorithms, where we instead model p(x|y) and p(y), and then obtain p(y|x) using Bayes' theorem.
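To make that concrete, here is a minimal sketch (not from the lecture notes) of how a generative model turns p(x|y) and p(y) into p(y|x) via Bayes' theorem. The 1-D Gaussian class-conditionals and the prior below are made-up illustrative choices.

```python
from scipy.stats import norm

prior = {0: 0.7, 1: 0.3}                      # p(y), an assumed prior
cond = {0: norm(loc=-1.0, scale=1.0),          # p(x|y=0), illustrative
        1: norm(loc=2.0, scale=1.5)}           # p(x|y=1), illustrative

def posterior(x):
    # p(y|x) = p(x|y) p(y) / sum over y' of p(x|y') p(y')
    joint = {y: cond[y].pdf(x) * prior[y] for y in prior}
    z = sum(joint.values())
    return {y: joint[y] / z for y in joint}

print(posterior(0.5))   # posterior probabilities for both classes at x = 0.5
```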
Next, the multivariate normal distribution (or multivariate Gaussian distribution) is explained, along with how its shape and size change as you change the covariance matrix.
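A small sketch of that effect: the density of a 2-D Gaussian evaluated at the same point under different covariance matrices. The specific matrices below are just illustrative choices.

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.zeros(2)
covs = {
    "identity (circular contours)": np.eye(2),
    "scaled (wider spread)": 2.0 * np.eye(2),
    "correlated (tilted elliptical contours)": np.array([[1.0, 0.8],
                                                         [0.8, 1.0]]),
}

x = np.array([1.0, 1.0])
for name, sigma in covs.items():
    density = multivariate_normal(mean=mu, cov=sigma).pdf(x)
    print(f"{name}: p(x) = {density:.4f}")
```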
Then, Gaussian Discriminant Analysis (GDA) is explained. It is used to solve classification problems (where y can take multiple values, not just two as in logistic regression) in which the features x are continuous-valued random variables, and it assumes that p(x|y) has a multivariate normal distribution. The lecture also shows that if p(x|y) is multivariate normal, then p(y|x) turns out to be logistic (the converse is not true). So if p(x|y) really is approximately normal, you should use GDA as it gives better results, but if the form of p(x|y) is completely unknown, logistic regression might give better results, because p(y|x) turns out to be logistic in many more cases, for example when p(x|y) is Poisson.
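Here is a minimal sketch of GDA in the simplest, binary case, using the maximum-likelihood parameter estimates from the lecture: a Bernoulli prior phi, per-class means mu_0 and mu_1, and a shared covariance Sigma. The function and variable names are mine; X is an (m, n) feature matrix and y a length-m 0/1 label vector.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gda_fit(X, y):
    phi = y.mean()                          # p(y = 1)
    mu0 = X[y == 0].mean(axis=0)            # mean of class-0 examples
    mu1 = X[y == 1].mean(axis=0)            # mean of class-1 examples
    centered = X - np.where(y[:, None] == 1, mu1, mu0)
    sigma = centered.T @ centered / len(y)  # shared covariance matrix
    return phi, mu0, mu1, sigma

def gda_predict(X, phi, mu0, mu1, sigma):
    # pick the class maximizing p(x|y) p(y)
    p0 = multivariate_normal(mu0, sigma).pdf(X) * (1 - phi)
    p1 = multivariate_normal(mu1, sigma).pdf(X) * phi
    return (p1 > p0).astype(int)

# Toy usage with made-up data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
params = gda_fit(X, y)
print(gda_predict(X, *params))
```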
Next, the Naive Bayes classifier is explained. It is also used for classification problems; it assumes the features x are discrete random variables and makes the Naive Bayes assumption that the features are conditionally independent given y. Then, Laplace smoothing is introduced to handle feature values that are absent from the training data, so the classifier never assigns them zero probability.
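A minimal sketch of a Naive Bayes classifier with Laplace smoothing for binary features (for example, word present/absent): the +1 in the numerator and +2 in the denominator are the smoothing, so a feature value never seen with a given class still gets a nonzero probability. Again, X is an (m, n) 0/1 matrix and y a length-m 0/1 label vector; the names are mine.

```python
import numpy as np

def nb_fit(X, y):
    phi_y = y.mean()                                         # p(y = 1)
    # Laplace-smoothed estimates of p(x_j = 1 | y)
    phi_j1 = (X[y == 1].sum(axis=0) + 1) / (np.sum(y == 1) + 2)
    phi_j0 = (X[y == 0].sum(axis=0) + 1) / (np.sum(y == 0) + 2)
    return phi_y, phi_j0, phi_j1

def nb_predict(X, phi_y, phi_j0, phi_j1):
    # sum log-probabilities instead of multiplying, for numerical stability
    log_p1 = np.log(phi_y) + (X * np.log(phi_j1) + (1 - X) * np.log(1 - phi_j1)).sum(axis=1)
    log_p0 = np.log(1 - phi_y) + (X * np.log(phi_j0) + (1 - X) * np.log(1 - phi_j0)).sum(axis=1)
    return (log_p1 > log_p0).astype(int)

# Toy usage with made-up data
X = np.array([[1, 0, 1], [1, 1, 0], [0, 0, 1], [0, 1, 0]])
y = np.array([1, 1, 0, 0])
params = nb_fit(X, y)
print(nb_predict(X, *params))
```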