Why does randomness exist?

Quantum mechanics predicts only the probabilities of the different outcomes when an experiment is repeated many times with the same preparation. The typical example is the decay of a radioactive atom. A widespread interpretation of these results is the Copenhagen interpretation, which considers QM a complete theory and this type of probability an inherent property of nature.

But today this interpretation is continually questioned. Quantum randomness is indeed different from classical randomness, where uncertainty can be reduced to a lack of information about an underlying full picture, and a description of that full picture is hypothetically available.

The mathematical formalism of quantum mechanics does not provide these two levels: there is a probabilistic wave function, but there is no underlying full picture. Some mathematical theorems show that providing such a deterministic underlying picture comes at a high price. In particular, many symmetries are lost, and faster-than-light motion has to be allowed for the hidden variables, even though no such possibility remains once everything is translated into the actual predictions of quantum mechanics.

Because of this feature, no deterministic interpretation of quantum mechanics has so far been successfully extended to quantum field theory, which supersedes quantum mechanics and special relativity, despite decades-long efforts by Bohm and others. Perhaps part of the aversion to quantum randomness comes from an analogy to the classical situation, in which one imagines some underlying picture with causal gaps that account for the seemingly miraculous randomness.

This is indeed how indeterminism enters the classical mechanics of non-Lipschitz systems, like the n-body problem for gravity, but it is not how it enters quantum theory. On the Bohmian interpretation of quantum mechanics, the probability distribution comes from uncertainty about the initial distribution of Bohmian particles, the hidden variables of the theory, which is unknowable because individual particles are undetectable in principle; only the wave function is detectable. Since the luminiferous ether, most physicists have been wary of such objects.

Mainstream interpretations (Feynman's, for example) ask us instead to give up the idea that there exists a mathematically describable "absolute precision" reality about which one can reason classically in the first place. And there is no more reason to presume that the reality we face is completely lawless than to presume that it is completely deterministic: there is regularity expressed in probabilistic laws, but no classical certainty.

All that physics can predict are the probabilities of future events conditioned on past events; randomness does not enter "miraculously", it is built in, so to speak. How satisfactory is such an answer? Clearly, we would like more, and classical physics "spoiled" us into expecting more. But classical physics breaks down at small scales, and there is no reason why intuitions acquired from experience with macroscopic objects should have any bearing on how the world behaves at very small scales, or very large ones for that matter.

But the "miracle" of randomness presupposes that we first project classical intuition onto the underlying reality. For example, asking for an explanation of a random event presupposes some form of the principle of sufficient reason (no little detail happens without a cause), but that is more or less determinism itself.

So why should we presuppose it? Again, it is more or less an extrapolation of classical experience, since in practice we were never really able to predict anything "in every detail", not even the motion of the planets. Our best evidence about behavior at small scales comes not from intuition but from empirical evidence summarized in the mathematics of quantum theory.

So far this mathematics cannot be fully forced into deterministic interpretations, and for the parts that can, the result does not retain the regularities manifest in the probabilistic picture. Here is a historical analogy. Early examples of waves were mechanical waves in a medium, so when it was discovered that light was a wave, the natural assumption was that it spreads in a medium too.

But eventually the picture of this medium became so unattractive that the idea of a medium was given up, even though the idea of waves without a medium is counterintuitive. And there is no a priori reason to think that intuitions and expectations formed classically should be a good guide to describing an aspect of reality that was never encountered classically.

When one actually looks at various cosmologies, as opposed to the modern classical one based on the Newtonian paradigm, the concept of randomness has clear irreducible precursors. For example, Genesis, the opening book of the Bible, has "the earth was without form and void, and darkness was upon the face of the deep". Notably, Hegel, like the Tao, equivocates between the two: pure indeterminate Being is equivalent but not identical to pure Nothingness.

It's only really the success of the mechanistic philosophy following the success of Newtonian physics that made necessity and determinateness the irreducible starting point of physical phenomena. This is why, when uncertainty was discovered in the early twentieth century, first empirically in radioactivity and then theoretically established in QM, it was seen as a new phenomenon rather than as the reintroduction and re-establishment of an old and antique idea.

Looking over these answers, the question of whether or not there is true randomness in the universe seems logically equivalent to whether or not there is free will. They're the same question: how many things in the universe can be said to happen but not be caused by anything external to them? Spinoza calls that thing "God". The problem with getting any farther with either randomness or free will is that, although there are events or acts for which we do not know the causes, that does not prove the causes do not exist.

We very often find that new people, events, and ideas explain things that had heretofore been unexplainable. In fact, in order to pursue any line of thought, we must say, "Until such time as new information changes our perspective, we think x."

So a random event must always mean something like "an event that we currently understand to be not determined by anything external to it, although we might find later that it was". Neither randomness nor free will is a solvable problem or an answerable question (a definitive proof is something you ain't getting anytime soon), but both ideas are practically useful. I don't believe in either actual randomness or free will (or, rather, I don't believe that I could prove definitively that any individual event or decision is uncaused), but I still use both concepts: to me, they simply mean the events or decisions whose causes I don't currently know, but which I still wish to make use of in my thinking.

One way of looking at things is the "greenness disappears" form of argument: it is question-begging to explain something in terms of itself. A green thing ultimately must be made up of colorless things, like atoms, or you have not explained why it IS green. So the only way that things can ultimately be deterministic is by being based on uncaused (whether you label that "random" is up to you) events below that.

Otherwise you can keep digging with your deterministic shovel until you fall out the other side of the Earth. When people realize this, they should know to be happy with a sufficient explanation, and not go on asking "why, why, why" like a child. The successive digits of a transcendental number are fully determined for anyone with the knowledge to calculate that number with, e.g., a series formula.

However, if one starts at a random place in a transcendental, the successive digits are not predictable without knowing the preceding digits, or the offset, and having a formula for calculating the succeeding digits. And we do see transcendentals in nature, leading to some strange means of prediction. So the phenomena we observe in reality, such as quantum mechanics, could derive from an arbitrarily determined precision of a transcendental.
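
As a toy illustration of this idea, here is a minimal sketch (it assumes the third-party mpmath package, and uses π purely as an example transcendental): a block of digits read from an arbitrary, undisclosed offset passes a crude frequency check, even though every digit is fixed by a formula.

```python
# Minimal sketch: digits of pi, sampled from an "unknown" offset, look
# statistically patternless despite being completely determined by a formula.
from collections import Counter
from mpmath import mp

mp.dps = 2100                         # compute roughly 2100 decimal digits
digits = str(+mp.pi)[2:]              # unary + forces evaluation; drop "3."

offset = 1234                         # the undisclosed starting point
sample = digits[offset:offset + 500]  # 500 digits read from that offset

counts = Counter(sample)
for d in sorted(counts):
    # each digit should occur with relative frequency close to 0.1
    print(d, round(counts[d] / len(sample), 3))
```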

In such cases, the randomness is a consequence of universal, fundamental, and immutable laws of mathematics.

First of all, I am extremely glad to see this post and people discussing randomness and free will. Secondly, I am glad about the kind of explanation and opinion put forward by Zane.

For many years now, I have understood that science, in simple words, is the study of cause and effect. Everything in the universe abides by cause and effect. Nothing is causeless. Randomness is just our inability to observe a process or phenomenon, or to predict its outputs, because of our own limitations or the limitations of our present technologies.

However, things in reality are not random or causeless. Moreover, randomness or causeless events would lead to the supernatural or magic, which do not exist. Since our brains are also part of this physical world, the knowledge and memories in their neurons, the past, our moods, circumstances, priorities, and so on all matter in our decisions; our so-called free will is also an illusion, and we do not actually possess free will or the ability to think randomly.

Things are flowing automatically as set in the first event of creation. All of this points towards the infinitely infinite creator or source. Thank you, Swaraj Singh.

I think these words should be left to religion, not science. Science operates under the assumptions that every effect has a cause and that rigorously identical systems evolve in exactly the same way; as a consequence, reproducibility of a result is what makes it scientifically valid, which is not the same as being true, unless we can prove that the underlying assumptions of science are valid in every situation.

In other words, the conceptual impossibility of causa sui inherent in science sends the question of the origin of our universe into an infinite regress, and since it would require an original uncaused event, it constitutes a scientific impossibility; yet here we are.

So much, then, for the scientific model! If you believe that all our actions derive from the motion of matter in our brains, but also that an intangible element can meddle with it, then you have to explain how this element is not matter despite being able to impart momentum to matter. Dualism makes more unverifiable assumptions than physicalism or metaphysical solipsism, for the same result (the same subjective experience).

Just pause for a moment and think of any random world city. Stop now, think of a city, then keep reading. So when you were thinking of your random city, did you consider the city of Kyoto in Japan? If not, then, I ask, did you know Kyoto was a city?

According to the modern scientific view, there is simply no room at all for freedom of the human will. Everything, including that which happens in our brains, depends on these and only on these: fixed, deterministic laws on the one hand, and purely random accidents on the other.

There is no room on either side for any third alternative. Whatever actions we may choose, they cannot make the slightest change in what might otherwise have been — because those rigid, natural laws already caused the states of mind that caused us to decide that way.

And if that choice was in part made by chance, it still leaves nothing for us to decide. We sometimes understand a few of these processes, but most lie far beyond our ken. But none of us enjoys the thought that what we do depends on processes we do not know; we prefer to attribute our choices to volition, will, or self-control. We like to give names to what we do not know, and instead of wondering how we work, we simply talk of being free.

Perhaps it would be more honest to say, "My decision was determined by internal forces I do not understand." But no one likes to feel controlled by something else. Yet in order to achieve any long-range goals, effective difference-engines must also learn to resist whatever other processes attempt to make them change those goals. In childhood, everyone learns to recognize, dislike, and resist various forms of aggression and compulsion. In any case, both alternatives are unacceptable to self-respecting minds.

No one wants to submit to laws that come to us like the whims of tyrants who are too remote for any possible appeal. There remains only one thing to do: we add another region to our model of our mind. We imagine a third alternative, one easier to tolerate; we imagine a thing called freedom of will, which lies beyond both kinds of constraint.

I personally think this is fine tuning. Or at the cosmological scale?

Also modern science does not preclude the discovery of other things that are fundamental. Consequently, I think pure reductionism is a nonsense viewpoint. There is no evidence that randomness does NOT emerge at higher levels of physics. In fact, there is an awful lot of circumstantial evidence that it does.

Sooner or later we will discover why. For randomness to exist, you need infinity; but can we prove infinity? We can select random numbers from nature in any finite range we choose, so I do not see how infinity is involved at all. Does the live or dead state of a cat in a box become random depending on the release of a poisonous gas, triggered or not by the decay of a radioactive atom?

Is that the result of a quantum event but expressed at the macroscopic scale? Can you confidently say that the atomic counts in the bounds of each experiment are identical? This is especially true in light of a recent argument by Porter. For each sense of randomness (ML randomness, Schnorr randomness, computable randomness, etc.) there is a corresponding property of being a typical sequence. Most importantly, none of these properties looks overwhelmingly more natural or canonical than the others.

Unfortunately, these notions of typical sequence diverge from one another. That conclusion may well be justified. But we can largely sidestep the dispute over whether there is a single precise notion of randomness that answers perfectly to our intuitive conception of random sequence.

KML randomness is adopted here as a useful working account of randomness for sequences. None of the difficulties and problems I raise below for the connection between random sequences and chance turns in any material way on the details of which particular set of sequences gets counted as random; most are to do with the mismatch between the process notion of chance and any algorithmic conception of randomness, with differences amongst the latter being relatively unimportant.

So while the observations below are intended to generalise to Schnorr randomness and other proposed definitions of random sequences, I will explicitly treat only KML randomness in what follows. The notions of chance and randomness discussed and clarified in the previous two sections are those that have proved scientifically and philosophically most fruitful.

Whatever misfit there may be between ordinary-language uses of these terms and these scientific precisifications is made up for by the usefulness of these concepts. On these conceptions, randomness is fundamentally a product notion, applying in the first instance to sequences of outcomes, while chance is a process notion, applying in the single case to the process or chance setup which produces a token outcome.

Of course the sample thus selected may not be random in some intuitive sense; nevertheless, it will not be biased because of any defect in the selection procedure, but rather only due to bad luck. With these precise notions in mind, we can return to the Commonplace Thesis (CT) connecting chance and randomness.

Two readings make themselves available, depending on whether we take single outcomes or sequences of outcomes to be primary. But a problem presents itself if we consider chancy outcomes in possible situations in which only very few events ever occur. The problem arises because the outcomes may be too few or too orderly to properly represent the randomness of the entire sequence of which they are part (all infinite random sequences have at least some non-random initial subsequences).

The most common solution in the case of frequentism was to opt for a hypothetical outcome sequence: a sequence of outcomes produced under the same conditions with a stable limit frequency (von Mises, 14–15).

Likewise, we may refine the commonplace thesis (call the refined version RCT). Here the idea is that chancy outcomes would, if repeated often enough, produce an appropriately homogeneous sequence of outcomes which is random.

If the trial is actually repeated often enough, this sequence should be the actual sequence of outcomes; the whole point of Kolmogorov randomness was to permit finite sequences to be random. RCT is intuitively appealing, even once we distinguish process and product randomness in the way sketched above.

It receives significant support from the fact that fair coins, tossed often enough, do in our experience invariably give rise to random sequences, and that the existence of a random sequence of outcomes is compelling evidence for chance. The truth of RCT explains this useful constraint on the epistemology of chance, since if we saw an actual finite random sequence, we could infer that the outcomes constituting that sequence happened by chance.

However, in the next two sections, we will see that there are apparent counterexamples even to RCT, posing grave difficulties for the Commonplace Thesis. A fundamental problem with RCT seems to emerge when we consider the fate of hypothetical frequentism as a theory of chance. However, there is some reason to think that RCT will fare better than hypothetical frequentism in this respect.

In particular, RCT does not propose to analyse chance in terms of these hypothetical sequences, so we can rely on the law of large numbers to guide our expectation that chance processes produce certain outcome frequencies, in the limit, with probability 1; this at least may provide some reason for thinking that the outcome sequences will behave as needed for RCT to turn out true.

Even so, one might suspect that many difficulties for hypothetical frequentism will recur for RCT. However, these difficulties stem from general issues with merely possible evidential manifestations of chance processes, and have nothing specifically to do with randomness. The objections canvassed below are, by contrast, specifically concerned with the interaction between chance and randomness. It is possible for a fair coin (i.e., one with an equal chance of landing heads or tails) to land heads on every toss of an infinite series of tosses. An infinite sequence of heads has, on the standard probability calculus, zero chance of occurring.

Indeed, it has zero chance even on most non-standard views of probability (Williamson). Nevertheless, if such a sequence of outcomes did occur, it would have happened by chance, assuming, plausibly, that if each individual outcome happens by chance, the complex event composed of all of them also happens by chance.

But in that case we would have an outcome that happened by chance and yet the obvious suitable sequence of outcomes is not KML-random.

This kind of example exploits the fact that while random sequences are a measure one set of possible outcome sequences of any process, chancy or otherwise, measure one does not mean every. This counterexample can be resisted. For while it is possible that a fair coin lands only heads when tossed infinitely many times, it may not be that this all heads outcome sequence is a suitable sequence.

For if we consider the counterfactual involved in RCT (what would happen if a fair coin were tossed infinitely many times), we would say: it would land heads about half the time. That is, on the standard, though not uncontroversial, Lewis-Stalnaker semantics for counterfactuals (Lewis), though the all-heads outcome sequence is possible, it does not occur at any of the nearest possibilities in which a fair coin is tossed infinitely many times. If we adopt a non-reductionist account of chance, this line of resistance is quite implausible.

For there is nothing inconsistent on such views about a situation where the statistical properties of the occurrent sequence of outcomes and the chance diverge arbitrarily far, and it seems that such possibilities are just as close in relevant respects as those where the occurrent outcome statistics reflect the chances.

In particular, as the all-heads sequence has some chance of coming to pass, there is by the BCP a physical possibility sharing history and laws with our world in which all-heads occurs. This looks like a legitimately close possibility to our own. Prospects look rather better on a reductionist view of chance (Supplement A). On such a view, we can say that worlds where an infinite sequence of heads does occur at some close possibility will look very different from ours; they differ in law or history from ours.

In such worlds, the chance of heads is much closer to 1 (reflecting the fact that if a coin were tossed infinitely many times, it might well land heads on each toss); the coin is not, after all, fair.

The response, then, is that in any situation where the reductionist chance of heads really is 0.5, the sequences of outcomes at the nearest worlds will reflect that chance. That is to say, they at least satisfy the property of large numbers; and arguably they can be expected to meet other randomness properties also. So, on this view, there is no counterexample to RCT from the mere possibility of these kinds of extreme outcome sequences. This response depends on the success of the reductionist similarity metrics for chancy counterfactuals developed by Lewis and Williams; the latter construction, in particular, invokes a close connection between similarity and randomness.

For the same phenomenon exists with almost any unrepresentative outcome sequence. A fair coin, tossed a large number of times, has a positive chance of landing heads on the overwhelming majority of tosses. But any outcome sequence containing such an excess of heads will be compressible: long runs of heads are common enough to be exploited by an efficient coding algorithm, and the sequence is long enough to swamp the constants involved in defining the universal prefix-free Kolmogorov complexity. So any such outcome sequence will not be random, even though it quite easily could come about by chance.
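
A rough way to see the compressibility claim, using zlib as a crude, computable stand-in for Kolmogorov complexity (which is uncomputable); the 95% bias and the sequence length are arbitrary choices for illustration.

```python
# Sketch: a heavily heads-biased sequence of 1000 tosses compresses much
# better than a fair one, so it cannot count as (Kolmogorov) random.
import random
import zlib

random.seed(0)
fair = "".join(random.choice("01") for _ in range(1000))
heads_heavy = "".join("1" if random.random() < 0.95 else "0" for _ in range(1000))

for name, seq in [("fair", fair), ("heads-heavy", heads_heavy)]:
    print(name, len(seq), "->", len(zlib.compress(seq.encode())), "bytes")
```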

The only way to resist this counterexample is to refuse to acknowledge that such a sequence of outcomes can be an appropriate sequence in RCT. This is implausible, for such sequences can be actual, and can be sufficiently long to avoid the analogue of the problem of the single case, certainly long enough for the Kolmogorov definition of randomness to apply. The only reason to reject such sequences as suitable is to save RCT, but that is clearly question begging in this context.

Suppose Lizzie tosses a coin on Tuesday; this particular coin toss may be considered as a coin toss, a coin toss on a Tuesday, a coin toss by Lizzie, an event caused by Lizzie, and so on. Each of these ways of typing the outcome gives rise to a different outcome sequence, some of which may be random, while others are not. Each of these outcome sequences is unified by a homogeneous kind of trial; as such, they may all be suitable sequences to play a role in RCT.

This is not a problem if chance too is relative to a type of trial, for we may simply make the dependence on choice of reference class explicit in both sides of RCT. If chances were relative frequencies, it would be easy enough to see why chance is relative to a type of trial.

We naturally speak of the chance that this coin will land heads on its next toss, with the chance taken to be a property of the possible outcome directly, and not mediated by some particular description of that outcome as an instance of this or that kind of trial.

Indeed, the inability of frequentists to single out a unique reference class (the frequency in which would give the chance) was taken to be a decisive objection to frequentism. On the standard understanding of chance, then, there is a mismatch between the left and right sides of the RCT. And this gives rise to a counterexample to RCT if we take an event with a unique non-trivial single-case chance, but such that at least one way of classifying the trial which produced it is such that the sequence of outcomes of all trials of that kind is not random.

The trivial case might be this: a coin is tossed and lands heads. The natural response, and the response most frequentists offered, with the possible exception of von Mises [14], was to narrow the available reference classes.

Salmon appeals to objectively homogeneous reference classes: those which cannot be partitioned, by any relevant property, into subclasses which differ in attribute frequency from the original reference class. In the present context, this will amount to a number of sequences long enough to make for reliable judgements of their randomness or lack thereof. This objection requires the chance of an event to be insensitive to reference class.

For a related view of relativised chance, though motivated by quite different considerations, see Glynn.

The two previous problems notwithstanding, many have found the most compelling cases of chance without randomness to be situations in which there is a biased chance process. A sequence of unfair coin tosses will have an unbalanced number of heads and tails, and such a sequence cannot be random.

But such a sequence, and any particular outcome in that sequence, happens by chance. As we have already seen, a biased sequence will be more compressible than an unbiased sequence, if the sequence is long enough, because an efficient coding will exploit the fact that biased sequences typically have longer runs of consecutive identical digits; hence it will not be random.
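
For a sense of how strong the effect is, here is a small sketch computing the Shannon entropy of a biased coin, the usual bound, in bits per toss, on how far long outcome sequences can be compressed; the probabilities chosen are arbitrary examples.

```python
# Sketch: entropy in bits per toss of a coin that lands heads with probability p.
from math import log2

def binary_entropy(p: float) -> float:
    """Shannon entropy (bits) of a biased coin."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

for p in (0.5, 0.75, 0.95):
    print(p, round(binary_entropy(p), 3))  # prints 1.0, 0.811, 0.286
```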

So on the standard account of randomness, no sequences of outcomes of a biased chance process are random, but of course these outcomes happened by chance. One response to this problem is to try and come up with a characterisation of randomness which will permit the outcomes of biased chances to be random.

This account is able to handle any value for the frequency, not only the case where the two outcomes are equifrequent. This measure is not the standard Lebesgue measure, but rather a measure defined by the chance function in question. We can similarly re-interpret the other effective statistical tests of randomness. There are some potential pitfalls, suggesting that perhaps the generalisation to an arbitrary computable measure is an overgeneralisation: see Supplement B.

While the above approach, with the modifications suggested in the supplement taken on board, does permit biased random sequences, it comes at a cost.

While the Lebesgue measure is a natural one that is definable on the Cantor space of sequences directly, the generalisation of ML-randomness requires an independent computable probability measure on the space of sequences to be provided. While this may be done in cases where we antecedently know the chances, it is of no use in circumstances where the existence of chance is to be inferred from the existence of a random sequence of outcomes, in line with RCT—for every sequence there is some chance measure according to which it is random, which threatens to trivialise the inference from randomness to chance.

As Earman also emphasises, this approach to randomness seems to require essentially that the chanciness of the process producing the random sequence is conceptually prior to the sequence being random.

By contrast, the Lebesgue measure has the advantage of being intrinsically definable from the symmetries of the Cantor space, a feature other computable measures lack.

The generalisation above shows that we can define a notion of disorderliness that is relative to the probabilities underlying the sequence, but that is not intrinsic to the sequence itself independently of whatever measure we are considering.

As Earman puts it (in slightly misleading terminology), and as a remark from Dasgupta also suggests, it is very intuitive to take that biasedness (the increased orderliness of the sequence) to contrast with randomness: as the bias in a chance process approaches extremal values, it is very natural to reject the idea that the observed outcomes are random.

Moreover, there is a relatively measure-independent notion of disorder or incompressibility of sequences, such that biased sequences really are less disorderly. We can define a measure-dependent notion of disorder for biased sequences only by ignoring the availability of better compression techniques that really do compress biased sequences more than unbiased ones. To generalise the notion of randomness, as proposed above, permits highly non-random sequences to be called random as long as they reflect the chances of highly biased processes.

So there is at least some intuitive pull towards the idea that if randomness does bifurcate as Earman suggests, the best deserver of the name is Kolmogorov randomness in its original sense. But this will be a sense that contrasts with the natural generalisation of ML-randomness to deal with arbitrary computable probability measures, and similarly contrasts with the original sense of randomness that von Mises must have been invoking in his earliest discussions of randomness in the foundations of probability.

In light of the above discussion, while there has been progress on defining randomness for biased sequences in a general and theoretically robust way, there remain difficulties in using that notion in defence of any non-trivial version of RCT, and difficulties in general with the idea that biased sequences can be genuinely disorderly.

But the generalisation invoked here does give some succour to von Mises, for a robust notion of randomness for biased sequences is a key ingredient of his form of frequentism.

A further counterexample to RCT, related to the immediately previous one, is that randomness is indifferent to history, while chance is not: chance is history-dependent. The simplest way in which chance is history-dependent is when the conditions that may produce a certain event change over time.

But there are more complicated types of history-dependence; in particular, there are cases where the property which changes is a previous outcome of the very same process. Indeed, any process in which successive outcomes of repeated trials are not probabilistically independent will have this feature. One example of chance without randomness involves an unbiased urn from which balls are drawn without replacement.

Each draw (with the exception of the last) is an event which happens by chance, but the sequence of outcomes will not be random, because the first half of the sequence carries significant information about the composition of the second half, which may aid compressibility. A more compelling example is found in stochastic processes in which the chances of future outcomes depend on past outcomes.

One well-known class of such processes is Markov chains, which produce a discrete sequence of outcomes with the property that the value of each outcome depends on the value of the immediately prior outcome, but that immediately prior outcome screens off the rest of the history.

If a Markov chain is the correct model of a process, then even when the individual trial outcomes happen by chance, we should expect the entire sequence of repeated trials to be non-random. In a simple weather model of this kind, for instance, we should expect a sunny day to be followed by a sunny day, and a rainy day by a rainy one.

In our notation, 11 and 00 should be more frequent than 10 or 01. But the condition of Borel normality, which all random sequences obey, entails that all finite blocks of outcomes of equal length should occur with equal frequency in the sequence. So no Borel normal sequence, and hence no random sequence, can model the sequence of outcomes of such a Markov chain, even though each outcome happens by chance.
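
A small simulation makes the point concrete (a hedged sketch; the transition probability is an arbitrary illustrative value): a two-state "persistent weather" Markov chain produces far more 00 and 11 blocks than 01 and 10, so its output fails Borel normality even though each step is chancy.

```python
# Sketch: block frequencies in the output of a persistent two-state Markov chain.
import random
from collections import Counter

random.seed(1)
stay = 0.8                     # probability of repeating yesterday's weather
state = 0
seq = []
for _ in range(100_000):
    seq.append(state)
    state = state if random.random() < stay else 1 - state

pairs = Counter(f"{a}{b}" for a, b in zip(seq, seq[1:]))
total = sum(pairs.values())
for block in ("00", "01", "10", "11"):
    print(block, round(pairs[block] / total, 3))   # 00 and 11 near 0.4; 01 and 10 near 0.1
```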

At least some non-random sequences satisfy many of the measure-one properties required of random sequences. For example, the Champernowne sequence, consisting of the binary numerals for every non-negative integer listed consecutively (0, 1, 10, 11, 100, 101, ...), is computable and hence not Kolmogorov random. But it looks like it satisfies at least some desiderata for random sequences. Such a sequence is an attempt at producing a pseudorandom sequence: one that passes at least some statistical tests for randomness, yet can be easily produced. The main impetus behind the development of pseudorandom number generators has been the need to efficiently produce numbers which are random for all practical purposes, for use in cryptography or statistical sampling.
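
A minimal sketch of the construction (the helper name champernowne_binary is mine): the sequence is produced by a couple of lines of code, which is precisely why it cannot be Kolmogorov random.

```python
# Sketch: the binary Champernowne sequence, obtained by concatenating the
# binary numerals of 0, 1, 2, ... Trivial to compute, hence highly compressible.
def champernowne_binary(n_integers: int) -> str:
    """Concatenate the binary representations of 0 .. n_integers - 1."""
    return "".join(format(i, "b") for i in range(n_integers))

print(champernowne_binary(10))  # 0 1 10 11 100 101 110 111 1000 1001, run together
```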

Much better examples than the Champernowne sequence exist, which meet more stringent randomness properties. A pseudorandom number generator of this kind expands a short seed into a long sequence of outputs. Obviously this is useless if the seed is known, or can in some way be expected to be correlated with the events to which one is applying these pseudorandom numbers.

But in practical applications, the seed is often chosen in a way that we do expect it to carry no information about the application: in simple computer pseudorandom number generators, the seed may be derived in some way from the time at which it is called for.

With a finite seed, this sequence will obviously repeat after some period. A symbol shift is the simplest possible function from seed to outcome sequence; better algorithms use a more complicated but still efficiently computable function of the seed to generate outcome sequences with a period much longer than the length of the seed.
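
As a purely illustrative example of such an algorithm, here is a tiny linear congruential generator; the constants are standard textbook values rather than anything recommended for real use, and seeding from the clock mirrors the scenario discussed next.

```python
# Sketch: a linear congruential generator. The whole output stream is an
# efficiently computable function of a short seed, so the sequence is highly
# compressible however the seed happens to be chosen.
import time

def lcg(seed: int, n: int, a: int = 1664525, c: int = 1013904223, m: int = 2**32):
    """Yield n pseudorandom integers fully determined by the seed."""
    x = seed
    for _ in range(n):
        x = (a * x + c) % m
        yield x

seed = int(time.time())          # a "chancy" seed: whenever the program happens to run
print(seed, list(lcg(seed, 5)))  # rerunning with the same seed reproduces the stream
```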

If the seed is not fixed, but is chosen by chance, we can have chance without randomness. For example, suppose the computer has a clock representing the external time; the time at which the algorithm is started may be used as a seed. But if it is a matter of chance when the algorithm is started, as it well may be in many cases, then the particular sequence produced by the efficient pseudorandom sequence generator will have come about by chance, but not be random: there is a program which runs the same algorithm on an explicitly given seed (since the seed is finite, there will be such a program), and since the algorithm is efficient, the length of the sequence before it repeats will be longer than the code of the program plus the length of the seed, making the produced sequence compressible.

Whether the seed is produced by chance or explicitly represented in the algorithm, the sequence of outcomes will be the same—one more way in which it seems that the chanciness of a sequence can vary while whether or not it is random remains constant. Much the same point could have been made, of course, with reference to any algorithm which may be fed an input chosen by chance, and so may produce an outcome by chance, but where the output is highly compressible.

One way in which pseudorandom sequence generators are nice in this respect is that they are designed to produce highly compressible sequences, though non-obviously highly compressible ones. The other interesting thing about those algorithms which produce pseudorandom sequences is that they provide another kind of counterexample to the epistemic connection between chance and randomness. For our justification in thinking that a given sequence is random will be based on its passing only finitely many tests; we can be justified in believing a pseudorandom sequence to be random (in some respectable sense of justification, as long as justification is weaker than truth), and justified in making the inference to chance via RCT.

But then we might think that this poses a problem for RCT to play the right role epistemically, even if it were true. Suppose one sees a genuinely random sequence and forms the justified belief that it is random. The existence of pseudorandom sequences entails that things might seem justificatorily exactly as they are and yet the sequence not be random.

But such a scenario, arguably, defeats my knowing that the sequence is random, and thus defeats my knowing the sequence to have been produced by chance and presumably undermines the goodness of the inference from randomness to chance.

Yet when we widen our gaze to encompass a fuller range of chance processes, the appeal of the right-to-left direction of RCT is quite diminished. It is now time to examine potential counterexamples to the other direction of RCT. There are a number of plausible cases where a random sequence potentially exists without chance. Many of these cases involve interesting features of classical physics, which is apparently not chancy, and yet which gives rise to a range of apparently random phenomena.

Unfortunately some engagement with the details of the physics is unavoidable in the following. One obvious potential counterexample involves coin tossing.

Some have maintained that coin tossing is a deterministic process, and as such entirely without chances, and yet one which produces outcome sequences we have been taking as paradigms of random sequences.

For many short sequences, even the most efficient prefix-free code will be no shorter than the original sequence: since prefix-free codes contain information about the length of the sequence as well as its content, if the sequence is very short the most efficient code may well be the sequence itself prefixed with its length, which is longer than the sequence.

So all short sequences will be Kolmogorov random. This might seem counterintuitive, but if randomness indicates a lack of pattern or repetition, then sequences which are too short to display pattern or repetition must be random.
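
A rough illustration of the overhead point, using a general-purpose compressor rather than an actual prefix-free Kolmogorov code: very short strings do not compress at all, while a long patterned string shrinks dramatically.

```python
# Sketch: generic compressors carry fixed overhead, so short strings come out
# no shorter (often longer), while long repetitive strings compress well.
import zlib

for s in (b"01", b"0110100110", b"01" * 500):
    print(len(s), "->", len(zlib.compress(s)), "bytes")
```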

Of course it will not usually be useful to say that such sequences are random, mostly because in very short sequences we are unlikely to talk of the sequence at all, as opposed to talking directly about its constituent outcomes. But for events which are unrepeatable or seldom repeatable, even the merely possible suitable reference classes will be small.

And such unrepeatable events do exist: consider the Big Bang which began our universe, or your birth (your birth, not the birth of a qualitatively indistinguishable counterpart), or the death of Ned Kelly. These events are all part of outcome sequences that are necessarily short, and hence these events are part of Kolmogorov random sequences. But it is implausible to say that all of these events happened by chance; no probabilistic theory need be invoked to predict or explain any of them.

So there are random sequences—those which are essentially short—in which each outcome did not happen by chance. The natural response is to reject the idea that short sequences are apt to be random.

The right-hand side of RCT makes room for this, for we may simply insist that unrepeatable events cannot be repeated often enough to give rise to an adequate sequence (whether or not the inadequate sequence they do in fact give rise to is random).

The problem here is that we can now have chance without randomness, if there is a single-case unrepeatable chance event. Difficulties in fact seem unavoidable. If we consider the outcomes alone, either all short sequences are random or none of them are; there is no way to differentiate on the basis of any product-based notion between different short sequences.

But as some single unrepeatable events are chancy, and some are not, whichever way we opt to go with respect to the randomness of the singleton sequences of such events, we will discover counterexamples to one direction or another of RCT. Yet there also seem to be physical situations in which a symbol-shift dynamics is an accurate way of representing the physical processes at work; the standard example is the baker's transformation, which corresponds to transforming the unit square into a rectangle twice as wide and half the height, chopping off the right half, and stacking it back on top to fill the unit square again.

This is easily seen, as the symbol shift dynamics on the basic sets of infinite binary sequences is measure-preserving, and each coordinate can be represented as an infinite binary sequence. If the RCT is true, then a system which behaves exactly like a chance process should have as its product a random sequence. So while the product produced is random, as random as a genuine chance process, these outcomes do not happen by chance; given the prior state of the system, the future evolution is not at all a matter of chance.
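
Here is a hedged sketch of the dynamics being described, the baker's transformation with the left and right halves of the square coarse-grained as 0 and 1: the observed symbol sequence simply reads out the binary digits of the initial x-coordinate, fully deterministic yet as patternless as that expansion.

```python
# Sketch: the baker's map on the unit square. Observing which half the point
# is in at each step (0 = left, 1 = right) reads out successive binary digits
# of the initial x-coordinate, i.e. a symbol shift, with no chance anywhere.
def bakers_map(x: float, y: float) -> tuple[float, float]:
    """One step of the baker's transformation."""
    if x < 0.5:
        return 2 * x, y / 2
    return 2 * x - 1, (y + 1) / 2

x, y = 0.6180339887498949, 0.5    # arbitrary initial condition
symbols = []
for _ in range(20):               # float precision limits us to a few dozen steps
    symbols.append(0 if x < 0.5 else 1)
    x, y = bakers_map(x, y)
print(symbols)
```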

So we have a random sequence without chance outputs. To be perfectly precise, the trial in this case is sampling the system at a given time point, and seeing which cell of the coarse grained partition it is in at each time. This is a sequence of arbitrarily repeated trials which produces a random sequence; yet none of these outcomes happens by chance. And if probability plays no role, it is very difficult to see how chance could play a role, since there is no probability function which serves as norm for credences, governs possibility, or is non-trivial and shared between intrinsic duplicate trials.

In short, no probability function that has the features required of chance plays a role in the dynamics of this system, and that seems strong reason for thinking there is no chance in this system. It is a question of considerable interest whether there are physically more realistic systems which exhibit the same features.

The evolution of the system over time is characterised by its Hamiltonian, a representation of the energetic and other properties of the system. Yet for closed systems, in which energy is conserved over time, such strongly random, Bernoulli-like behaviour is not generally possible. Indeed, for closed systems it is not generally possible to satisfy even a very weak randomness property, ergodicity.

A system is ergodic just in case, in the limit, with probability one, the amount of time the system spends in a given state is equal to the standard measure of the region of state space that corresponds to that state (Earman; Sklar; Albert). While a Bernoulli system is ergodic, the converse entailment does not hold; if the system moves only slowly from state to state, it may be ergodic while the state at one time is strongly dependent on past history (Sklar).

While ergodicity has been shown to hold of at least one physically interesting system (Yakov Sinai showed that the motion of hard spheres in a box is ergodic, a result of great significance for the statistical mechanics of ideal gases), a great many physically interesting systems cannot be ergodic.

This is the upshot of the so-called KAM theorem, which says that for almost all closed systems in which there are interactions between the constituent particles, there will be stable subregions of the state space: regions of positive measure such that if a system is started in such a region, it will always stay in such a region (Sklar). Such systems obviously cannot be ergodic. There are, however, physically interesting systems to which the KAM theorem does not apply.

Open or dissipative systems, those which are not confined to a state-space region of constant energy, are one much-studied class, because such systems are paradigms of chaotic systems, typically combining an attractor with sensitive dependence on initial conditions. The combination of these two features permits very interesting behaviour: while the existence of an attractor means that over time the states of the system will converge to the region of the attractor, the sensitive dependence on initial conditions means that close states, at any time, will end up arbitrarily far apart.

For this to happen the attractor must have a very complex shape: it will be a region of small measure, but most of the state space will lie in its neighbourhood.

More importantly for our purposes, a system with these characteristics, supposing that the divergence under evolution of close states happens quickly enough, will yield behaviour close to Bernoulli: it will yield rapid mixing (Luzzatto et al.).

This is weaker than Bernoulli (since the states of a Bernoulli system are probabilistically independent if there is any time between them), but still strong enough to plausibly yield a random sequence of outcomes from a coarse-grained partition of the state space, sampled infrequently enough.

So we do seem to have physically realistic systems that yield random behaviour without chance. See also the systems discussed in Frigg. Indeed, the behaviour of a chaotic system will be intuitively random in other ways too.

It all appears completely deterministic. A lack of true randomness would be a huge problem, just as it was for the Germans during World War II with their revered but ultimately doomed Enigma enciphering machine. With its quintillions of possible settings, many Allied cryptologists believed the code was unbreakable.

Yet, because it was a mere matter of rotor settings and circuitry, or put simply, completely deterministic, the Allies were able to crack the code. Since Newtonian physics has proven resistant to true randomness, cryptologists have since looked to quantum physics, the rules that govern subatomic particles, which are completely different from Newtonian physics. Radioactive materials spontaneously throw off particles in a probabilistic manner, and the exact time when each particle will be emitted is inherently random.

We think. So, given a small window of time, the number of radioactive particles emitted can act as the seed for a random number generator.

This is where the difference between random and pseudo-random becomes vastly important. Pseudo-random patterns, like the ones created by the Enigma machine, are messages begging to be read.
