Theory and Practice
by rrusczyk, Aug 5, 2008, 7:11 PM
For a few days I've been intending to unify some of my earlier thoughts on various books with a post about theory and empirical observation. joml just inspired me to finally write these ideas down by forwarding me this article, which notes that in a particular ceramics class, the works of highest quality came from the students graded on quantity. This is not terribly surprising to me: through quantity, we explore lots of ideas and try lots of things. Through an obsession with "quality", with perfecting each individual piece, a great many ideas and possibilities are left unexplored.
In earlier posts on poverty, medicine, education, physics, and probably a number of other places, I've noted that one of our biggest shortcomings in pretty much all these areas is an over-reliance on theory, instead of on empirical observation.
But now, we are rolling in ideas, and, with a little effort, we can be rolling in data to evaluate them. However, for the most part, we don't do so -- we go with what seems right (particularly in politics!) rather than investigating and evaluating. Why? For one, it's much easier to spout off theories than it is to test them. In most areas, it's also much cooler to be a producer of ideas than an evaluator or an aggregator of them. (Though the latter is proving to be more lucrative -- just ask the folks at Google.) Moreover, it's much more lucrative to come up with ideas you can sell (figuratively or literally) than it is to *test* those ideas -- what do you do if you find out you're wrong?!?
I think joml's article hints at why this is a problem in all the aforementioned fields. Each individual practitioner usually operates like a person graded on quality, doggedly pursuing their pet idea. Only in the aggregate, taking all the practitioners together, do we have a system that is somewhat graded on quantity. What we need in many of these areas is an effective evaluator, to filter the ideas produced in such quantity and identify those of quality.
To be fair, we're not immune to this problem at AoPS. We haven't done any scientific testing of our curriculum; we'd never get off the ground if we did that first (we're not alone -- show me a realistic study of any curriculum). We're in the process of building a tool that will allow me to test some of my pet theories (more on that later . . . ), but at this point, we're operating more on intuition and our observations of students than on empirically proven results. Pretty much like almost everyone else everywhere in every field. (Quantitative trading *may* be a notable exception, which may be the leading non-financial reason so many top math folks like that field.) I'm not sure how to globally change this mindset and bring the scientific method more to bear in many areas where it's somewhat or entirely absent (including possibly some areas of science) -- it's in very few people's interest to test ideas and find out what's going on in practice. What works, what doesn't. It's so much easier to just stick with the theory. But . . .
Yogi Berra wrote:
In theory there is no difference between theory and practice. In practice there is.