the question of how we can ever know for sure that something is “random.” That is, even if a string of bits passes every statistical test for randomness that we throw at it, how could we ever rule out that there’s some complicated regularity we simply failed to find? In the 1960s, the theory of Kolmogorov complexity offered one possible answer to that question, but a rather abstract and inapplicable one: roughly speaking, it said we can consider a string “random enough for our universe” if it has no computable regularities, that is, if there’s no program shorter than the string itself that outputs it.

More recently, a much more practical answer has come from the Bell inequality, and in particular from the realization that the experimental violation of that inequality can be used to produce so-called “Einstein-certified random numbers.” These are numbers that are provably random, assuming only (a) that they were produced by two separated devices that produced such-and-such outputs in response to challenges, and (b) that there was no faster-than-light communication between the devices.

But it’s only within the last few years that computer scientists figured out how to implement this striking idea in such a way that you get out more randomness than you put in. (Recently, two MIT grad students proved that, starting from a fixed “seed” of, let’s say, 100 random bits, you can produce unlimited additional random bits in this Einstein-certified way.)
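For reference, here is one standard way to write down the Kolmogorov-complexity criterion mentioned above; the notation (a fixed universal machine U and the complexity K(x)) is my own gloss, not anything from the interview itself:

```latex
% Kolmogorov complexity of a string x, relative to a fixed universal machine U:
% the length, in bits, of the shortest program p that outputs x and halts.
\[
  K(x) \;=\; \min\{\, |p| \;:\; U(p) = x \,\}
\]
% The criterion from the passage: x is "random" (incompressible) exactly when
% no program shorter than x itself outputs it.
\[
  x \text{ is incompressible} \;\iff\; K(x) \;\ge\; |x|
\]
% The catch that makes this criterion abstract and inapplicable in practice is
% that K is uncomputable, so no procedure can certify a given string this way.
```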
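And here is a rough sketch, in Python, of the kind of statistical check that sits behind Einstein-certified randomness: the verifier looks only at the two devices' challenge/response transcript and accepts it as containing genuine randomness when the empirical CHSH value exceeds the classical Bell bound of 2. Everything in this sketch is illustrative rather than taken from the interview or from any specific protocol: the simulated devices, the win-fraction statistic, and the acceptance threshold of 2.2 are stand-ins, not a real security analysis.

```python
import math
import random

# Probability that an ideal quantum (Tsirelson-optimal) strategy wins one CHSH
# round; any classical (local hidden variable) strategy wins at most 0.75.
P_QUANTUM_WIN = math.cos(math.pi / 8) ** 2   # about 0.8536


def run_chsh_rounds(n_rounds, p_win=P_QUANTUM_WIN, rng=random):
    """Simulate n_rounds of the CHSH game and return a transcript:
    a list of (x, y, a, b) tuples, where x, y are the challenges sent to the
    two devices and a, b are their one-bit outputs.

    This is only a statistical stand-in for two real, spatially separated
    devices: each round is "won" (a XOR b == x AND y) with probability p_win,
    which is all the certification test below actually looks at.
    """
    transcript = []
    for _ in range(n_rounds):
        x = rng.randint(0, 1)          # challenge to device A
        y = rng.randint(0, 1)          # challenge to device B
        win = rng.random() < p_win
        a = rng.randint(0, 1)
        # Make a ^ b equal x & y on a win, and differ from it on a loss.
        b = (a ^ (x & y)) if win else (a ^ (x & y) ^ 1)
        transcript.append((x, y, a, b))
    return transcript


def chsh_value(transcript):
    """Estimate the CHSH correlation S from a transcript.

    S = 8 * (win fraction) - 4; the classical bound is 2 and the quantum
    maximum is 2 * sqrt(2) (about 2.828).
    """
    wins = sum(1 for (x, y, a, b) in transcript if (a ^ b) == (x & y))
    return 8 * wins / len(transcript) - 4


def certify(transcript, threshold=2.2):
    """Accept the devices' outputs as containing genuine randomness only if
    the observed CHSH value clearly exceeds the classical bound of 2.
    (Threshold and statistics here are illustrative, not a security proof.)"""
    return chsh_value(transcript) > threshold


if __name__ == "__main__":
    t = run_chsh_rounds(100_000)
    print("CHSH value:", round(chsh_value(t), 3))
    print("certified random?", certify(t))
```

The point the sketch tries to capture is the one in the passage: the certification rests only on the observed correlations between the two devices plus the no-faster-than-light-communication assumption, not on trusting anything about the devices' internals.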