Saturday, December 18, 2010

stanford machine learning course - lecture #6

My takeaways from the 6th lecture of the Stanford machine learning course.

In this lecture the naive Bayes classifier, introduced in the previous lecture, is extended to the case where each feature ($x_i$) can take one of $k$ discrete values (for text, an index into the vocabulary) rather than just 0 or 1. For this reason, the earlier model is called the multivariate Bernoulli event model, while this one is called the multinomial event model. It is commonly used for text classification problems and hence is also known as the event model for text classification.
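As a quick sketch of what this looks like (using the notation I remember from the lecture notes, where $x_j^{(i)}$ is the $j$-th word of the $i$-th training document, $n_i$ is that document's length, $m$ is the number of documents, and $|V|$ is the vocabulary size), the parameters of the multinomial event model with Laplace smoothing are estimated as:

$$\phi_{k|y=1} = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n_i} 1\{x_j^{(i)} = k \wedge y^{(i)} = 1\} + 1}{\sum_{i=1}^{m} 1\{y^{(i)} = 1\}\, n_i + |V|}, \qquad \phi_y = \frac{\sum_{i=1}^{m} 1\{y^{(i)} = 1\}}{m}$$

with $\phi_{k|y=0}$ defined analogously. Intuitively, $\phi_{k|y=1}$ is just the fraction of word positions in positive-class documents that are word $k$, smoothed so that no word is assigned zero probability.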
Then neural networks are briefly mentioned, with two videos showing early milestone projects. One was about recognizing a digit from a picture and the other about a text-to-speech system; both of them used neural networks. The point was that neural networks were considered a very good solution for a wide range of classification problems until the support vector machine, a supervised learning algorithm, was invented. The SVM is considered among the best off-the-shelf supervised learning algorithms nowadays.
The remaining lecture sets the foundation for understanding the support vector machine, to be explained in later lectures. A new notation is introduced, and the concepts of functional and geometric margins are explained. The maximum margin classifier is also briefly mentioned: it can perform roughly on par with logistic regression, but more importantly it is the basis of the support vector machine, which can handle feature spaces of infinite dimension.
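For reference, in the notation the lecture sets up (a classifier $h_{w,b}(x) = g(w^T x + b)$ with labels $y \in \{-1, +1\}$), the functional and geometric margins of a training example $(x^{(i)}, y^{(i)})$ are:

$$\hat{\gamma}^{(i)} = y^{(i)}\left(w^T x^{(i)} + b\right), \qquad \gamma^{(i)} = y^{(i)}\left(\left(\frac{w}{\|w\|}\right)^T x^{(i)} + \frac{b}{\|w\|}\right)$$

and the margin with respect to the whole training set is the minimum over all examples. The functional margin can be made arbitrarily large just by rescaling $(w, b)$, while the geometric margin is invariant to such rescaling, which is why the maximum margin classifier maximizes the geometric margin.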
