Thursday, December 30, 2010

stanford machine learning course - lecture#9

My takeaways from the 9th lecture of the Stanford machine learning course.

Up until now, all the lectures focused on supervised learning algorithms, and that covers most of them. This lecture talks about "learning theory", i.e. how to analyze when to use or not to use a particular algorithm, and this is what separates an expert from an average practitioner. In particular, this lecture introduces high bias (underfitting) and high variance (overfitting). It talks about empirical error and generalization error. For the case where the set of all hypotheses is finite, it proves the uniform convergence result. Based on it, a bound is derived on the generalization error of the best hypothesis (obtained by empirical risk minimization), and that bound shows a bias-variance trade-off in model selection.
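
To make this concrete, here is a sketch of the main results as I understood them, in the notation of the CS229 lecture notes: m training examples, a finite hypothesis class H with |H| = k, and failure probability \delta (the exact constants are from my notes, so treat them as approximate):

empirical (training) error:  \hat{\epsilon}(h) = \frac{1}{m} \sum_{i=1}^{m} 1\{ h(x^{(i)}) \neq y^{(i)} \}

generalization error:  \epsilon(h) = P_{(x,y) \sim \mathcal{D}}\big( h(x) \neq y \big)

uniform convergence:  with probability at least 1 - \delta, for every h \in H,
  |\epsilon(h) - \hat{\epsilon}(h)| \leq \sqrt{ \tfrac{1}{2m} \log \tfrac{2k}{\delta} }

ERM bound:  for \hat{h} = \arg\min_{h \in H} \hat{\epsilon}(h), with probability at least 1 - \delta,
  \epsilon(\hat{h}) \leq \Big( \min_{h \in H} \epsilon(h) \Big) + 2 \sqrt{ \tfrac{1}{2m} \log \tfrac{2k}{\delta} }

The first term is the "bias" part (the error of the best hypothesis available in H) and the second term is the "variance" part (it grows with k, i.e. with a richer hypothesis class, and shrinks with more data m); that is the trade-off in model selection.

To see the underfitting/overfitting behaviour itself, here is a minimal toy sketch of my own (not code from the lecture), assuming only numpy: it fits polynomials of increasing degree by least squares and compares training error with held-out error.

```python
import numpy as np

# Toy illustration of high bias (underfitting) vs. high variance (overfitting):
# fit polynomials of increasing degree to noisy samples of a quadratic and
# compare empirical (training) error with error on a large held-out set.
rng = np.random.default_rng(0)

def sample(n):
    x = rng.uniform(-1, 1, n)
    y = 1.5 * x**2 - x + rng.normal(0, 0.1, n)   # quadratic signal + noise
    return x, y

x_train, y_train = sample(12)      # small training set
x_test, y_test = sample(2000)      # large held-out set approximates generalization error

for degree in (1, 2, 10):
    coeffs = np.polyfit(x_train, y_train, degree)                      # least-squares fit (ERM for squared loss)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)  # empirical error
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)     # ~ generalization error
    print(f"degree {degree:2d}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

Typically degree 1 underfits (both errors are high), degree 2 is about right, and degree 10 drives the training error toward zero while the held-out error gets worse.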
The next lecture will cover the case where the hypothesis class is not finite.
