Classical probability theory gives all sequences of fair coin tosses of the same length the same probability. On the other hand, when considering sequences such as
none but the most contrarian among us would deny that the second (obtained by the first author by tossing a coin) is more random than the first. Indeed, we might well want to say that the second sequence is entirely random, whereas the first one is entirely nonrandom. But what are we to make in this context of, say, the sequence obtained by taking our first sequence, tossing a coin for each bit, and if the coin comes up heads, replacing that bit by the corresponding one in the second sequence? There are deep and fundamental questions involved in trying to understand why some sequences should count as "random," or "partially random," and others as "predictable," and how we can transform our intuitions about these concepts into meaningful mathematical notions.
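The mixing procedure described above, replacing each bit of the first sequence by the corresponding bit of the second whenever an independent fair coin comes up heads, can be sketched as follows. This is only an illustrative sketch: the sequences `predictable` and `coin_tosses` are hypothetical stand-ins for the two sequences discussed in the text, not the actual examples.

```python
import random

def mix(first, second):
    """For each position, toss a fair coin; on heads, take the bit
    from the second sequence, otherwise keep the bit of the first."""
    assert len(first) == len(second)
    return [b if random.random() < 0.5 else a
            for a, b in zip(first, second)]

# Hypothetical stand-ins for the two sequences discussed above.
predictable = [0, 1] * 8                                  # entirely regular
coin_tosses = [random.randint(0, 1) for _ in range(16)]   # obtained by coin tossing

mixed = mix(predictable, coin_tosses)
print(mixed)
```

On average, half the bits of the result come from each source, which is exactly what makes its intuitive degree of randomness hard to pin down.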