week 1: reflections


Week 1 of our machine learning “course” entailed reading the introduction and chapter 2 of An Introduction to Statistical Learning with Applications in R. In these opening chapters, the authors wove together statistical concepts, many of which were covered in our required graduate-level statistics courses, with the basics of machine learning. Though I was familiar with some of the introductory content, I enjoyed how the authors framed important statistical and machine learning concepts as dichotomous pairs. Several are listed below:

  • Regression vs. classification
  • Supervised vs. unsupervised learning
  • Variance vs. bias
  • Inference vs. prediction
  • Interpretability vs. flexibility
  • Parametric vs. non-parametric

The introduction provided a great review, but one message really rose above the rest:

“There is no free lunch in statistics; no one method dominates all others over all possible data sets.” (pg. 29)

I suppose one could say this is the art of statistical modeling. Choosing an appropriate model for a data set is a constant balancing act between model flexibility and interpretability – an idea to keep in mind as we progress through the course.
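That flexibility–interpretability trade-off is easy to see on toy data. Below is a minimal sketch (my own hypothetical example, not from the book) that fits a rigid degree-1 polynomial and a very flexible degree-15 polynomial to noisy nonlinear data. The flexible model always fits the training points at least as well, but that alone says nothing about how it will fare on new data — which is exactly the balancing act.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: a nonlinear truth (a sine wave) plus noise.
x = np.sort(rng.uniform(0, 1, 50))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 50)

# An independent test sample drawn from the same process.
x_test = np.sort(rng.uniform(0, 1, 50))
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.3, 50)

def poly_mse(degree):
    """Fit a polynomial of the given degree to the training data;
    return (training MSE, test MSE)."""
    coefs = np.polyfit(x, y, degree)
    train_mse = np.mean((np.polyval(coefs, x) - y) ** 2)
    test_mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    return train_mse, test_mse

rigid_train, rigid_test = poly_mse(1)         # inflexible: high bias
flexible_train, flexible_test = poly_mse(15)  # flexible: high variance

# The flexible fit nests the rigid one, so its training error is lower --
# but low training error is not the same thing as low test error.
print(rigid_train, flexible_train)
print(rigid_test, flexible_test)
```

Comparing the two test errors (rather than the training errors) is what actually tells you whether the extra flexibility paid off for this data set.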
