One of the biggest, oldest questions in the philosophy of science could be paraphrased as: “Why is Occam’s Razor justified? When we find simple descriptions of past events, why do we have any grounds whatsoever to expect those descriptions to predict future events?” This, I think, is the core of Hume’s “problem of induction.” Now, I think theoretical computer science has contributed major insights to this question — including Leslie Valiant’s Probably Approximately Correct (PAC) learning model, for which he recently won the Turing Award; the notion of Vapnik-Chervonenkis (VC) dimension; and the notion of the universal prior from algorithmic information theory. In essence, these ideas all give you various formal models where Occam’s Razor provably works — where you can give “simplicity” a precise definition, and then see exactly why simple hypotheses are more likely to predict the future than complicated ones. Of course, a skeptic about induction could still ask: OK, but why are the assumptions behind these formal models justified? But to me, this represents progress! The whole discussion can now start from a more sophisticated place than before.
Scott Aaronson on Philosophical Progress | Machine Intelligence Research Institute, December 15, 2013
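To make the PAC-learning claim concrete, here is the textbook “Occam’s Razor” bound for a finite hypothesis class, in standard notation (a sketch under the usual assumptions of i.i.d. examples and a target realizable in the class $H$). Any single hypothesis with true error greater than $\varepsilon$ survives $m$ independent examples with probability at most $(1-\varepsilon)^m$, so by a union bound over $H$:

$$\Pr\big[\exists\, h \in H : h \text{ consistent with all } m \text{ examples and } \mathrm{err}(h) > \varepsilon\big] \;\le\; |H|\,(1-\varepsilon)^m \;\le\; |H|\,e^{-\varepsilon m}.$$

Setting the right-hand side to $\delta$ and solving for $\varepsilon$, with probability at least $1-\delta$ every hypothesis consistent with the sample satisfies

$$\mathrm{err}(h) \;\le\; \frac{\ln|H| + \ln(1/\delta)}{m}.$$

The smaller the class of candidate “simple” descriptions, the tighter the guarantee from the same data; for infinite classes the VC dimension takes over the role of $\ln|H|$, and the universal prior weights each hypothesis by the length of its shortest program. This is one precise sense in which simplicity provably buys predictive power.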