I am now planning the next offering of a Generalized Linear Mixed Models course that I sometimes teach to our graduate students; it runs next spring. All our graduate students are clamoring for a course in R, so I am sure I'll get a lot of pressure to teach this course using R.
So if you're not a statistics geek like me, you need not read further. But if you are, have you ever thought about why it's OK to use ordinary least squares methods to fit curves to data – for example, polynomials?
One of the oldest and most revered methods for fitting statistical models to data is the method of ordinary least squares. Under a set of assumptions (e.g., normally distributed errors with constant variance), this method yields the least squares estimates of a model's parameters: the parameter values that minimize the sum of squared deviations of the observed values from the values predicted by the model.
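As a concrete illustration, here is a minimal sketch of an ordinary least squares polynomial fit in Python with NumPy (the data and the quadratic trend are made up purely for illustration; the idea carries over directly to `lm()` in R):

```python
import numpy as np

# Simulated data for illustration: a quadratic trend plus normal noise
# with constant variance, matching the OLS assumptions described above.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x - 0.3 * x**2 + rng.normal(scale=1.0, size=x.size)

# Design matrix for a quadratic polynomial: columns are 1, x, x^2.
# A polynomial is nonlinear in x but linear in its coefficients,
# which is why ordinary least squares applies.
X = np.vander(x, N=3, increasing=True)

# The least squares estimates minimize the sum of squared residuals
# (y - X @ beta)^T (y - X @ beta); lstsq solves this directly.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta)  # estimated (intercept, linear, quadratic) coefficients
```

At the minimizing estimates, the residuals are orthogonal to every column of the design matrix (the normal equations), which is one quick way to check that a fit is indeed the least squares solution.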
Statistics is as much moral philosophy and epistemology as it is mathematical analysis of data. One must make judgments about what one knows, what one thinks one knows, what the data represent, and how one thinks the world works. One must also have a strong moral sense of right and wrong, fair and unfair. The act of data analysis involves a huge set of judgments and decisions that must be made along the way, and almost all of these go unstated when the answer is presented to others. Lying with statistics is real, and most frequently one is really lying to oneself. Most importantly, I think every statistical analysis should be capable of telling the analyst that she or he is completely wrong – in other words, that none of the models being considered fit the data.