
Gödel’s confusions— METALOGIC B

Decision processes: Metalogic B

With glosses on Turing’s approach to the Entscheidungsproblem [1]

Decision processes is one in a series of documents showing how to reason clearly, and so to function more effectively in society.

Index
Introductory review  
Closed systems  
Intentions, decision processes and acts  
Do computers ‘decide’ and ‘act’?  
Time 2 ‘Past’  
    Statistical notions  
Loop breaking  
Stopping and changing  
Computers, decision processes and Turing  
Bibliography

Introductory review

  1. There is not some single ‘decision problem’. We make decisions minute by minute. It could even be said that elephants, atoms and lesser particles make constant decisions as they bustle about their business.

  2. While I much prefer Turing’s engineering approach to the Entscheidungsproblem to that of Gödel, it is my judgement that Turing’s approach still remains insufficiently empiric. This is aggravated by
    1. a too easy acceptance of the Cantorian ideas of ‘infinity’ and the diagonal argument [2]. You may see from the section on Cantor’s diagonalisation that I regard the diagonal argument as fundamentally unsound.
    2. an insufficient care in distinguishing ‘observer’ from ‘observed’.

  3. It can be seen that Turing was well aware that his appeal was primarily to ‘intuition’,[3] whereas my appeal is always to rigorous empiricism, see Metalogic A2, paragraphs 99 and 100. However, I do come to a rather similar view, that ‘the problem’ cannot be ‘solved’. I regard ‘the problem’ as a matter that involves maybe the estimation of probabilities, or perhaps a direct observation, or even some strange uncomfortable mixture of the ‘two’. I do not regard ‘the problem’ as a matter of Aristotelian dogma or ‘certainties’.
  4. The great human progress made via deconstructive Aristotelian ‘logic’ does not make the ‘logic’ empirically sound. Aristotelian ‘logic’ is, in reality, deeply flawed and unsound, however useful it may have proved.

  5. A considerable problem with the development of Aristotelian ‘logic’ is insufficient attention to temporal issues. There is an unacknowledged, illegitimate assumption of simultaneity, which is also inherent in the relativity of Einstein.

  6. The concept of ‘perfect’ is empirically unsound, as it suggests a countable (infinite?) ‘complete’. The ‘concept’ of ‘completeness’ discords with the continuous nature of reality, and substitutes for that reality, arbitrary and empirically unverifiable factors that are inherent in human communication. The ‘concept’ of ‘perfect’ attempts to establish an end point, a termination, a halt or stop. The world just is not like that; the world goes on regardless.[4]

  7. To suggest an ‘endless’ iteration is to import perpetual motion.

  8. To suggest that a programme may be fed to itself is a nonsense; only a copy may be entered.

Closed systems

By predefining a box with a set ‘number’ of items, it is possible to count until the last of those items is reached. It is possible to count ‘all’ the ‘items’, but it must be kept firmly in mind that, as an ‘individual’ person (‘object’) counts those ‘objects’, both the person and the ‘objects’ are undergoing change.

The next person counting, or the previous person recounting, the ‘objects’ will be counting different real objects. The words they use, if they are counting aloud, will be new and different movements of the air as their ever-changing vocal cords set the air vibrating.

No, you cannot step in the same river twice.[5]

Only by relaxing the rigour of our expressions in such a manner, when ‘same’ comes to mean “I do not at this moment care about the very real differences”, can we count the ‘same’ ‘objects’. It is possible to ‘complete’ ‘the’ count, if and only if we relax rigour.

Fortunately for us, there is sufficient practical similarity in reality that we can indeed communicate and survive. But this is an empiric pragmatic finding; it is not a statement that ‘two’ ‘objects’ can be ‘the same’.

Intentions, decision processes and acts

After screwing up a game of chess, we may decide that in future, when our king is in check, we will be particularly attentive and cautious. Perhaps we may decide that, when our bank account falls below some predetermined level, we will carefully review our situation before further actions. In the latter case, we may decide that in future, after we have written one cheque too many, we will look for a ‘job’ to redress the difficulty. We may even lay out the steps in obtaining such a job, that is: buy appropriate journals, write applications, go to interviews, and repeat until the medicine is effective. Such a planned procedure is termed ‘a decision process’ or an algorithm.
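The planned procedure described above can be sketched as a small loop. This is a minimal illustrative sketch only: the function name, the starting balance, the threshold and the pay-per-round figure are all assumptions made here for the example, not taken from the text.

```python
def seek_job(balance, threshold, pay_per_round=100):
    """Repeat the planned steps 'until the medicine is effective',
    i.e. until the balance is back above the predetermined level."""
    steps = ["buy appropriate journals",
             "write applications",
             "go to interviews"]
    rounds = 0
    while balance < threshold:
        for step in steps:
            pass  # each step is a real-world act, taking time
        rounds += 1
        balance += pay_per_round  # assume a round of applications eventually pays
    return rounds, balance

print(seek_job(balance=0, threshold=250))  # -> (3, 300)
```

Note that the code is only the decision process; actually running it, let alone acting on its result, is a separate matter, which is the distinction the following paragraphs insist upon.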

When the nuisance next occurs, however, we may ‘forget’ our ‘intentions’ and fail to take the previously ‘planned’ action. Yet, if we do apply the decision process, the decision process now becomes an act [6] or a choice in the real world; it is no longer just a ‘plan’ or a ‘good intention’. Carrying out the decision process does not imply acting upon the indications of that process. Performing the act indicated by the decision process can be called another act.

It is essential that each of these steps remain clear in the mind, for it is a lack of such care that leads to the interminable muddle of much misunderstood or misapplied ‘mathematics’.

A decision is a future intention, a planned act, a mental state.

A decision procedure, or algorithm, is a series of instructions that can then be interpreted.

A choice is a real-world act, right now in ‘the’ present.

All acts take time, including decisions and chosen acts.

A mental act is a real world choice. Thus, the mental act of choosing to apply a decision process at a future time is a present act! That act is ‘separate’ from putting the decision process into action, or even acting on the ‘results’ of that decision process. It is vital to grasp these ‘distinctions’.


  1. As decision procedures are so fundamental to human existence, we tend to have lots of different words meaning approximately the ‘same’. Some terms meaning ‘decision process’ are: algorithm, or series of instructions, or rules or recursion, or iteration, or feedback. For more detail, see list of instructions.

  2. A decision procedure means a step by step method, determined now, to be applied in the future in order to reach a ‘result’. The instruction, “Light blue touch paper and retire”, which is often seen on fireworks, is a well-known example of a decision procedure or algorithm.

  3. It is essential not to confuse a decision procedure with a decision.

  4. At the point when an act is performed, what was until then a future event, becomes (changes to) a present event. That is, the decision gives way to a choice/act.

  5. A decision is not the ‘same’ as an act. A decision is an intention to act. A decision may be considered to be a mental act, but must not be confused with the act decided upon, herein called a choice.

  6. It is essential not to confuse a decision with a choice.

  7. We may ‘decide’ now that we will use a decision process at a future time, but that is really an intention or mental act, or perhaps a prediction. It only becomes a choice when we actually carry out the indication of the decision process, or act upon a decision. The application of the decision process in time, by a series of steps, may guide us to a decision.

  8. We may then enact that decision in the form of a chosen act in the outside world, or we may change our mind.

  9. All acts happen through time, including carrying out a recursion (‘decision procedure’). No act occurs without extension in time. There is no such ‘thing’ as a ‘precise point in time’.

  10. Decisions can be in arrears, or in advance, of outcomes. That is, while we are carrying out a decision procedure, the real world moves on pretty well regardless. We may be forced to act before we are content that we have calculated/iterated [7] ‘enough’. For example, the spear flying towards you may reach you before you have decided which way to jump, or even whether to move at all. Thus, you may leap in panic to the left and help the spear to bury itself in your leg; whereas if you had not moved it would have landed at your side, leaving you the option of tossing it back.

Do computers ‘decide’ and ‘act’?

  1. To communicate consistently about the world, in my view, it is useful to treat all chosen ‘objects’ with similar semantics. To say that a human, or a car, ‘moves’ uses a consistent term, that is, ‘moves’. To say that a human is alive ‘because’ it moves, and then to say that a car is not ‘alive’, requires further distinctions than that of ‘movement’. Defining ‘life’ is no simple trick!

  2. To define ‘decision’ or ‘act’ is likewise inclined to cause confusion, if it is attempted to restrict the definition to live things, or even to humans. Most certainly, other animals make decisions.

  3. What is called ‘an action’ is an individual choice. On watching a bicycle race, one person may call the race an act, another each thrust of the pedals, another every change of gear, yet another some pre-determined stage in the race. Actions are not ‘out there’; they are defined by individual observers. One potential classifier of acts is the individual who is doing the acting.

  4. Keeping in mind the arbitrary nature of acts, I now give as examples: a mud pie hitting a wall; an armadillo eating lunch; and a computer stopping at the end of a calculation. Consider a car ‘stopping’:

    Remember: all acts, even ‘stopping’, take finite time!

  5. By these definitions, computers act. Do computers ‘decide’? It would be easy enough to wonder about whether a decision was ‘conscious’ or not, but I have dealt with the concept of ‘consciousness’ elsewhere and will not confuse that issue here. See the consciousness boxes in franchise by examination; education and intelligence, and in feedback and crowding.

  6. Does a computer (or even a person) decide to ‘stop’ before it acts (to stop)? Clearly, it does; on ‘deciding’ that a calculation is ‘complete’, a computer hands control elsewhere, for example to the ‘operating system’.[8] M. L. Minsky attempts to view the functioning human brain in terms of a series of systems that hand control around according to requirements.[9] While this form of analysis may have uses, unlike Minsky, I do not find it a convincing explanation of ‘consciousness’.
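The hand-over of control described above can be illustrated concretely. In this minimal sketch (the child calculation and its exit test are assumptions for illustration, not from the text), a child process ‘decides’ its calculation is ‘complete’ and hands control back, the operating system recording its return code for the parent to read:

```python
import subprocess
import sys

# The child runs an arbitrary calculation, 'decides' it is complete,
# and hands control elsewhere by exiting; the operating system then
# reports its return code to the parent process.
child = "total = sum(range(10)); raise SystemExit(0 if total == 45 else 1)"
result = subprocess.run([sys.executable, "-c", child])
print(result.returncode)  # 0: the child reported a completed calculation
```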

  7. Our ability to observe the actions of our ‘own’ brains is still extremely limited. Our tendency to link consciousness with the notion of a ‘decision’ is more likely to obfuscate than to clarify at our present level of understanding. I will therefore limit myself to the definition above:

I define a decision as a mental act prior to a planned external act. An example would be deciding to lob a ripe tomato at a politician and then the act of launching the missile.

  1. In this sense, a computer can be said to ‘decide’, as long as such a distinction is definable in particular circumstances. A human may be said to come with a set of genetically-coded instructions, telling the body how to grow and, even to an extent, how to see and think and walk. Likewise a computer is programmed with instructions telling it how to calculate, to think and to shut down for sleep.



  1. Nothing in reality ‘stops’. The world is constant change. It is humans that describe the world in terms of stops or starts for their convenience.

  2. Humans attend to various arrangements of matter as ‘changes’. When an arrangement of matter ceases to serve a particular human purpose, the time of such a change is regarded as an end point or a ‘stop’. Some examples of attributed changes are:

a number being reached,
a person ‘dying’ or
a table rotting, or being chopped, to a state that we no longer define as ‘table’

  1. Consider that stopping may involve two ‘different’ ‘types’ of observer:
    1. I have stopped running
    2. That person over there has stopped running

  2. These ‘two’ definitions of ‘stop’ will not be the ‘same’. Clear awareness of this difference is missing from Turing’s analysis of the stopping problem.

  3. Turing proposed a ‘machine’ that decides whether other logic ‘machines’ will stop. Such a ‘machine’ must have a meaning for the term ‘stop’. We observers must also agree the state we are going to label as ‘stop’.

  4. Consider ‘two’ possibilities:
    1. That the machine will stop.
    2. That the machine to be tested will not ‘stop’. (In the real world that is not possible, for any machine will eventually fail.)

  5. To ‘stop’ is an act in the now; while to not stop is a prediction of the future. Stopping and not stopping are different in kind.

Time 2 [10]

There is only one world and that is right now; the river is in constant motion. Only the present is fact. Only the present is accessible to direct examination.

The ‘past’ is but a memory; it is encoded in our present state of mind. The ‘past’ was a previous arrangement of matter, now only ‘remembered’. The past has gone; to travel in ‘time’ would require resetting every atom back to a previous state. Physical ‘time travel’ is meaningless unrealism.

The human widely tends to reconstruct the past in terms that ‘make sense’ to the individual. Human memory is extremely unreliable, but humans are not too keen to acknowledge this, nor are many aware of the difficulties. A great deal of what goes on in law courts is a telling of stories, often mutually incompatible stories, flavoured with much self-serving, intentional dissembling. This dissembling and vanity pervades all actors: accused, witnesses, judges and lawyers.

The past does not exist. Human vanity has built institutional monuments upon a foundation of quicksand. The very most law can achieve is a pragmatic attenuation of friction and disruption; it cannot achieve ‘abstracts’ such as ‘justice’ with any reasonable reliability. For law to pretend to what it cannot achieve is more likely to cause injustice than to forward peace and civilisation (for more see Abelard’s teaching on ethics). Yet, without the weak reed of rule of law, we are all lost.

“This country’s planted thick with laws from coast to coast—man’s laws, not God’s—and if you cut them down [...] do you really think you could stand upright in the winds that would blow then?” [11]

The future will be a new arrangement of matter. Time is movement or physical change, nothing more. The future we may only attempt to predict as well as we can, that is, as well as our experience allows. We cannot control the future, we can do no more than attempt to predict it or to manipulate ‘the’ future by our present acts.

Statistical notions:
1) Prediction: this means we have enough experience to forecast probabilities or likelihoods of ‘outcomes’. That is, we make guesses, based upon a leavening of experience. In general, the ‘further’ into ‘the’ future we attempt to guess the less our probable accuracy: consider for example weather ‘forecasting’.

2) There is no way we can guess at all the details of the future. In fact, we are very bad at it, else we would soon all be coining fortunes on the stock exchanges or the geegees. We would certainly make far fewer errors of judgement. To imagine that one may predict the future with any great facility is more foolish human vanity. The most we can ever hope for is to lower the odds against us, by assiduously accumulating and applying our experiences in limited fields of endeavour.

3) If we say that an outcome is a matter of chance or that it is random, we suggest that we do not have ‘enough’ experience to make a useful guess at the outcome. The ‘idea’ of ‘choosing’ at ‘random’ tends to involve a confused mind; for if you make a choice, your choice cannot then comfortably be said to be ‘random’ (see also chance and choice in ethics).

Only the present is real.

  1. There is a fundamental difference between the questions:

    1. Is the sun in the sky?
    2. Will the sun be in the sky tomorrow?

  2. or if you prefer,
    1. Am I alive now?
    2. Will I be alive tomorrow?

  3. In each case, the first question is a matter of observing the fact right now;[12] while the second question requires a guess, a forecast, an estimate.

  4. With the first pair of questions, while the chance of the sun not being in the sky tomorrow is extremely small, keep in mind that ‘you’ being there to observe this fact is rather less ‘certain’.


Data is counting or measuring, but we always count and indicate our ‘measures’ in monadic (individuated) numbers. So all measuring is ‘approximate’; for space is continuous to the best of our, or my, perception. Our brains, however, seem to function monadically to a great degree. That is, either/or, on or off, yes or no etc.; but the world is just not like that. Our approximate or digital methods have often served us quite well, but they also obscure and confuse and cause us much pain and idiocy.

Apparently, Laplace had the notion that, given knowledge of the complete (information) state of the universe, it would be possible to predict all future states. The impossibility of gaining such ‘complete’ information aside, our current understanding of the quantum rules and of non-linear systems has somewhat revised such optimism. Concepts of a clockwork universe and predictions of the end of science were overrun, only to re-appear recently in such superstitious vanities as ‘theories of everything’.

Consider the digital nature of language, of food and of neurones. Other animals use the equivalent of words. As an example, some monkeys have different calls to distinguish snakes, eagles and leopards. They and we distinguish ‘food’ and ‘non-food’, and much else, in pragmatic ‘categories’, the brain using digital methods for storage and for decision processes. (See also loop breaking.)

We are creatures that function with much digitality ‘in’ a world that is continuous; we widely confuse our useful digital methods with the reality. Language is monadic by nature, we respond to language in bits. Language is widely used by humans to describe a maelstrom. My purpose is to show that the methods that have grown from our pre-history into the present have not explicitly recognised this mismatch at the heart of our experience.

It is my intent to make the language function in greater accord with the real world, and to make explicit the difficulties. Only by constant awareness and attention may we avoid the worst of the problems posed by language in our strange human condition. Space and time do not arrive chopped into pieces!

See also digitisation and continuity, and quality.


  1. Whether a process is deemed to have stopped is a choice that is made by an individual.

  2. Is 1>2?
    Consider a computer that is set to keep checking the above statement,
    “is one greater than two?”
    and to ‘stop’ when that condition occurs. Now convention has it that the computer will ‘never’ ‘stop’. But that is not so. As stated, the computer will eventually break down or fail. Landauer[13] intelligently insists that computers are real-world artefacts constructed out of material. Their accuracy and durability are entirely dependent upon the materials and the robustness, or lack of same, involved in the construction of the ‘machine’.
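The point can be put in code. In this minimal sketch, the cycle budget is an assumption standing in for the real-world constraints just mentioned: breakdown, loss of electricity, material failure.

```python
def check_until(condition, max_cycles):
    """Keep checking a condition, 'stopping' when it holds. The
    max_cycles budget models the fact that a real machine always
    breaks down or fails eventually."""
    cycles = 0
    while not condition():
        cycles += 1
        if cycles >= max_cycles:
            return "broke down", cycles
    return "stopped", cycles

print(check_until(lambda: 1 > 2, max_cycles=1000))  # ('broke down', 1000)
print(check_until(lambda: 2 > 1, max_cycles=1000))  # ('stopped', 0)
```

In the real world the ‘endless’ check is not endless at all; some constraint always intervenes, which is the substance of the Landauer point.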

  3. Further, it is the case that an individual must check the machine regularly, in order to decide whether the machine has stopped or not. It is conceivable that another judge may make a differing choice, or may shuffle off this mortal coil at some critical ‘point’. One is, then, ‘always’ reliant upon the judgement of the judge.

  4. At all times and everywhere, both a decision process and the carrying out of a decision process involve objects or acts in the real world. The decision process rests in a person’s head, or in instructions expressed in symbols upon a sheet of paper or some such. Carrying out the process involves a series of real-world acts by a computer or individual, or some other ‘entity’.

  5. Clearly, if a human is going to solve a stopping problem, even with the intermediary help of a Turing ‘machine’, that human would be well advised to decide in advance just how much of their life they wish to devote to the problem.

  6. Remembering the table mentioned above, it would also help if a pretty clear definition of ‘the’ parameters were to be considered when deciding just when the table ceased to be regarded as still a table. Again, recall that ‘all’ acts take up time, that all is movement and that movement is what we call time.

  7. Turing sanely writes of the computer as an entity which makes decisions, for instance in On computable numbers..., section 9 III.[13a]

  8. Regarding Turing’s question (see computing, decision procedures and Turing), keep in mind that, quite apart from the human observer, there is both an object ‘machine’ (or ‘program’) and an observer machine. I am not at all sure that Turing was clearly aware of this very necessary distinction. This is a failure of relativisation.

Loop breaking

  1. In the thought experiment of Buridan, the ass was placed half-way between two attractive bales of hay. The unfortunate animal, being rather simple, starved to death because it could not decide in which direction to go and which bale of hay to eat.

  2. But consider that, while the ass’s brain may be functioning digitally, the distance between the two bales of hay is continuous. Therefore, the very slightest movement would make one or the other bale the nearer, thus solving the problem for the animal. In fact, the notion of ‘equal’ distances is most dubious. Further, the animal saccades,[14] or vibrates, as a function of living matter, as all is in motion. Therefore, the animal is quickly likely not to be ‘equidistant’ from either of the two life-saving meals. Thus, a decision and an ability to act are enabled.
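The loop-breaking argument above can be sketched as a tiny simulation. The bale positions and the jitter magnitude are assumed figures for illustration only:

```python
import random

def buridan(pos=0.0, bale_a=-1.0, bale_b=1.0, jitter=1e-6):
    """The ass saccades: tiny random displacements soon make one bale
    strictly nearer, so the digital either/or decision is enabled."""
    while abs(pos - bale_a) == abs(pos - bale_b):
        pos += random.uniform(-jitter, jitter)  # living matter is never still
    return "A" if abs(pos - bale_a) < abs(pos - bale_b) else "B"

print(buridan())  # "A" or "B": the ass does not starve
```

The exact ‘equidistant’ condition is a measure-zero state in a continuous world; any physical vibration leaves it almost immediately.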

  3. I return to the Turing ‘machine’ designed to decide whether another (digital) ‘machine’ will halt or not. I am now in a position to formulate the Entscheidungsproblem with more clarity.
    A well-programmed decision program will know that ‘all’ things eventually stop (change-attributed state over time). Thus, the notion of decision theory as analysed by Turing is muddled in the details.

  4. Clearly, the halting problem cannot be ‘solved’ if it rests on a notion of predicting the future. It cannot always/reliably be known with ‘certainty’ when a computation will ‘stop’. As stated above, any computation will stop if machine breakdown or other reality constraints are encountered. Only with the unrealistic notions of ‘infinity’ or ‘going on forever’ does the idea of a halting problem gain an imagined but unrealistic ‘meaning’.

Stopping and Changing

  1. ‘Stopping’ and ‘change’ are effective synonyms.

  2. In rational language usage there is no difference between doing and being. Doing and being are human descriptions, oriented to human purpose. That person is doing running, I am being energetic, I am doing sums, that person is being silly. This last example translates into, “that person is behaving in a manner that I label ‘silly’”.

  3. ‘Stopping’ is a human reaction to differences in the real world that a human regards as relevant. That person has stopped being ‘alive’ is equivalent to that person has changed from ‘being alive’ to ‘being dead’. Or more carefully, I am changing how I label that object from ‘alive’ to ‘dead’.

  4. Stopping is sometimes regarded as an absence. But the matter remains, just in a ‘different’ state or form. What changes is the manner in which some individual expresses themselves, according to their shifting concerns.

  5. Counting is a continuous ‘repetitive’ movement. Running is a continuous ‘repetitive’ movement.

  6. ‘Stops’ or ‘starts’ are in real time, they do not ‘occur at a precise point in time’. ‘Precise points’ do not ‘exist’ in the real world. All change takes time, time is matter changing.

  7. A web reference to this section will serve as ‘words’ on any monument to my future possible ‘changes’ of state.

  8. In the main, I prefer the Turing formulation to the Gödelian reasoning. After all, Turing is no romantic Platonist. I also prefer an unfashionable constructivist concretism (for instance, see Piaget).

  9. Turing is often claimed[15] to have shown an isometric[16] idea to Gödel by using the ‘property’ called ‘stopping’. Turing however states that his work is “superficially similar to that of Gödel”,[17] and “what I shall prove is quite different from the well-known results of Gödel”.[18] I remain unsure; I am not entirely clear what was in the mind of either Turing or Gödel. Turing did not have access to the mind of Gödel, and neither do those who comment freely upon the work of these two. Without such access, I see no possible way of resolving this.

  10. In Turing, as in Gödel, there are such impossibilities as an ‘infinity’ of steps, and the tail-swallowing error/non-sense that a copy of an algorithm (remember the copy is not the original, it is a new object) is being fed recursively to ‘the’ original algorithm, in the illusion that it is, in some sense, the ‘same’ as the original algorithm! This is an example of a fundamental error at the heart of mathematics, the ‘equals’ concept.

  11. Therefore, now there are more impossible ‘things’ to add to the error of ‘contradiction’, this being founded upon the ‘law’ of the excluded middle. This problem was clearly within Turing’s understanding as an engineer by 1950.[19] Turing also refers to “the undistributed middle is glaring” in a passage associated with rules,[20] although it is unclear to what he is referring! However it is clear that he did not understand that this error still heavily clouded his own thinking.

Computing, decision procedures and Turing

  1. A common computer is a linear device that steps through states or ‘instructions’, one by one. A Turing ‘machine’ is supposedly a theoretical analogue of such a device.

  2. There are considerable problems with the concept of ‘contradiction’; I have written of these difficulties in several places.[21] A pair of ‘opposites’ is at the heart of Turing’s paper on the decision or halting problem (Entscheidungsproblem): these ‘opposites’ are ‘stopping’ (halting) and not stopping. This is a very insecure dichotomy, in that the state of being ‘stopped’ is a state in which ‘nothing’ is happening, whereas ‘not stopping’ is a state of activity or movement. Compare this with the Asymmetry of not.

  3. Stopping is the cessation of ‘activity’, whereas ‘not stopping’ involves a continuation of movement or ‘change’. A ‘stop’ is a ‘completion’ of activity, whereas with a ‘non-stopping’ we are still awaiting a state we may choose to regard as a stopping. We are waiting for the table to stop being called a table.

  4. Recalling that the computer continues through steps until it (or a sub-part) of it stops; it is clear that, without further information, we have no idea of when the computer is liable to ‘stop’. It could step once or twice more, or it could go on for millions or billions of steps before we are content to decide that it has ‘stopped’.

  5. Turing posited a computing ‘machine’ or programme, designed to take another computer and to decide whether that other computer would stop or not. I shall call the supposed programme ‘the stop detector’.

  6. By some sleight of hand, Turing then concluded that such a machine could not be built because, in his terms, such a project would involve a ‘contradiction’. I cannot pretend to be convinced, nor even to be sure just what he had in mind. I don’t think I am alone in this because I have seen dozens of attempts to describe this ‘contradiction’, and they all look rather muddled to me. But, as I have done with Gödel, I shall attempt to plumb what Turing had in mind.[22]

  7. Orthodox mathematicians often assert that there are decision processes that do not terminate.
    As an example: a computer is instructed to stop when 1>2. Clearly the computer will go on checking such an instruction, round and around, until it breaks down, runs out of electricity, or makes an error.

  8. If the programme, “Stop when 1>2”, were to be fed into the stop detector ‘machine’, it is clear that the stop detector program would be imagined to return the ‘answer’, “the input programme does not stop”.

  9. A programme that continues in this manner with an unsatisfiable condition, such as “stop when 1>2”, is said by programmers to be ‘in a loop’.

  10. It is clear that we need a better-determined definition of the term ‘stop’. The only such rule that makes any sense to me is that we decide on an arbitrary number of cycles, at which time, if the computer has not yet stopped, we will decide that it is not going to stop (for instance, after a googolplex of cycles). In other words, the input programme will not stop in a reasonable time-frame. The problem with the Turing version of the stopping problem is much related to a failure to make such a pragmatic engineering decision or condition, prior to thinking out the nature of the halting problem. Without such a condition, the halting problem becomes smudgily defined.[23]
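The pragmatic, bounded detector argued for above can be sketched in code. The `step` interface here, a function returning `(new_state, halted)`, is an illustrative assumption of this sketch, not Turing's formalism, and the cycle limit stands in for the googolplex bound:

```python
def stop_detector(step, state, cycle_limit):
    """A pragmatic stop detector: rather than predicting the future,
    it runs the examined programme for at most cycle_limit steps and
    reports what it actually observed."""
    for cycles in range(cycle_limit):
        state, halted = step(state)
        if halted:
            return "stopped", cycles + 1
    return "did not stop in the time-frame", cycle_limit

# A countdown that halts at zero:
countdown = lambda n: (n - 1, n - 1 == 0)
print(stop_detector(countdown, 5, 100))  # ('stopped', 5)

# 'Stop when 1 > 2': the condition is never satisfied, so the
# pre-agreed cycle budget makes the decision instead.
looper = lambda s: (s, 1 > 2)
print(stop_detector(looper, 0, 100))  # ('did not stop in the time-frame', 100)
```

With the bound agreed in advance, ‘stop’ and ‘not stop’ both become observations made in finite time, rather than one observation and one prediction.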

  11. Remember that the notion of ‘stopping’ exists in individual minds; it does not have some stable external real world meaning. It is important that those discussing any particular notion of ‘stopping’ have clear definitions among themselves as to just what they will agree to define as a ‘stop’. Once the ‘stop’ state is thought to ‘exist’, it is then necessary to check that all participants ‘agree’ that the state is then pertaining. Thus, we tend to require at least two medics to agree that a person is ‘dead’, that is, has ‘stopped’ as in the sense of a clock stopping.

  12. Digital arithmetic is used widely in order to crudely model the real, continuous world in which we find ourselves. A ‘stop’ on a computer is a crude conventional convenience. We use this convenience for analysing and understanding the world, and for communicating about that world in terms simple enough for our limited minds.

  13. At this point, I make the assumption that we can agree on what we mean by a computer programme coming to a stop, and that we can agree whether the stop detector programme indicates that an examined programme has come to a stop in a googolplex of steps. To attempt to vaguely ‘contrast’ ‘stopping’ with ‘not stopping’ (see § 34 and § 70), or to thence declare these states somehow ‘contradictory’ (§ 62), is not, in my judgement, well defined.

  14. Now to proceed with the Turing argument, in as far as I can give it meaning. From paragraph 70 above, if we feed a programme to the stop-detector programme (§ 65), the stop-detector programme will either declare that the tested programme will stop or, if the tested programme is still running after a googolplex of steps, declare that the input programme will not stop.

  15. The next step in the Turing ‘proof’ suggests making a modified version of the stop-detector machine/programme; a version that itself goes into a loop when a tested programme stops, and stops if the tested programme goes into a loop. In other words, it does the opposite of what the input programme does. This is rather like the trick that is found repeatedly in the so-called ‘paradoxes’ reviewed in Metalogic A3, the prime effect of which was to confuse the listener! Another way of looking at this version of the detector machine is to regard it as a liar machine!
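Under the bounded reading of ‘stop’ argued for in §10, the liar construction itself can be written down. Again this is my own sketch with my own names (a step budget of 100 stands in for the googolplex); the detector here uses the arbitrary-budget rule rather than Turing’s unbounded machine.

```python
def bounded_stop_detector(programme, data, budget=100):
    """Report 'stop' if `programme` (a generator function) halts
    within `budget` steps, otherwise 'not stop'."""
    machine = programme(data)
    for _ in range(budget):
        try:
            next(machine)
        except StopIteration:
            return "stop"
    return "not stop"

def liar(prog_and_data):
    """The inverting ('liar') machine: loop when the tested
    programme is reported to stop; stop when it is not."""
    prog, data = prog_and_data
    if bounded_stop_detector(prog, data) == "stop":
        while True:
            yield None     # tested programme stops -> the liar loops
    # tested programme does not stop -> the liar stops at once

def halts_quickly(data):
    yield 1                # stops after one step

def loops_forever(data):
    while True:
        yield None         # never stops
```

Because the budget is definite, the liar is itself just another programme that either stops or does not, and the bounded detector reports on it without difficulty: fed the liar testing `halts_quickly` it reports ‘not stop’ (the liar loops), and fed the liar testing `loops_forever` it reports ‘stop’.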

  16. The story continues, “let’s see what happens if we feed this lying machine to itself”. Remember that we cannot really do this, so we make a copy, or clone, of the liar machine and feed that to the original liar machine. Then the magician asks us what the liar machine will do next. The magician then tells us (what he says it will do!), possibly hoping that we will not ask too many probing questions.

  17. First, a little review:

    1. Consider that we have already decided that all machines will stop eventually, if only through breakdown or some such.
    2. The detector programme will either report a ‘stop’ or (after a googolplex of steps) a ‘not stop’ condition.

  18. So, we are intending to feed the liar machine to the copy of the liar machine. The liar machine will still either stop or not stop. Remember, the liar machine is not designed to report whether the input programme has stopped or not; we are no longer dealing with a genuine stop-detector machine.

  19. Consider that we fed the liar machine to the stop-detector machine. If we regard the liar machine and the programme it was given to test as one unit, it is clear that the modified stop-detector ‘machine’ will simply tell a lie about its input programme. To be long-winded, you might like to call the modified stop-detector machine ‘the lying stop-detector programme’ or machine.

  20. Therefore, a liar-detector ‘machine’/programme which tested the whole system (liar and input programme) will give out a result opposite to the result that the simple liar-detector programme would have found, had it alone been testing the input programme. Thus, the detector programme versions each continue to behave logically, or if you prefer it, rationally.

  21. Continuing to be careful, we note that the second (copy of the) liar programme is not testing some other programme, but a version of the liar programme plus an input programme. As the liar programme under test will lie about (do the ‘opposite’ of) the state of its input programme, the liar programme to which it is being input will in turn reverse the ‘output’ that it gives. Again the lying detector programme will have no serious difficulty reporting effectively.
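The self-feeding step of §§ 16–21 can also be traced concretely under the bounded model. This is my own sketch (the names, the generator model and the budget of 100 are all assumptions of the illustration, not part of Turing’s construction); the point it makes is the one made above: with each programme kept distinct and each ‘stop’ bounded, every verdict comes out determinately.

```python
def bounded_stop_detector(programme, data, budget=100):
    """'stop' if `programme` halts within `budget` steps, else 'not stop'."""
    machine = programme(data)
    for _ in range(budget):
        try:
            next(machine)
        except StopIteration:
            return "stop"
    return "not stop"

def liar(prog_and_data):
    """Invert the detector's verdict on the tested programme."""
    prog, data = prog_and_data
    if bounded_stop_detector(prog, data) == "stop":
        while True:
            yield None      # loop if the tested programme stops
    # otherwise stop at once

def halts_quickly(data):
    yield 1

# Feed a copy of the liar (itself testing halts_quickly) to the liar.
# Inner liar: halts_quickly stops, so the inner liar loops ('not stop');
# the outer liar therefore stops; the detector duly reports 'stop'.
verdict = bounded_stop_detector(liar, (liar, (halts_quickly, None)))
```

Each machine here simply does what its definition says. No oscillation between ‘stop’ and ‘not stop’ arises, because the bounded rule gives every test, including a test of the liar by a copy of the liar, a definite answer.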

  22. By confusing the various programmes and by not keeping absolutely clear what each programme is doing, commentators seem to imagine the liar programme will somehow hunt between ‘stop’ and ‘not stop’, never making a decision or, as it is often presented, continually ‘changing its mind’. This imagined situation is somehow regarded as a ‘contradiction’, and this is presented as a reason why the original detector programme cannot ‘exist’ or ‘work’. I am very unconvinced, but then I am not prepared

    1. to accept the vague idea of ‘going on forever’ nor
    2. to lose sight of just which ‘machine’ is the subject and which the object.[24]

  23. As long as each item and each action is kept clear and well defined, I see no reason prohibiting a stop-detector machine being built in theory, but of course as the programmers say, “garbage in, garbage out”. I am certainly prepared to accept that, if you define conditions with insufficient clarity, you may arrive at all sorts of muddled conclusions.

  24. I most certainly cannot guarantee that the above analysis will remove all doubts from the minds of the tens of thousands of people who have been persuaded to accept the original Turing ‘logic’, for I am quite unable to see precisely where the muddle lies in each individual mind. However, I am willing to assert that, by means of the type of analysis that I am proposing in these Metalogic documents, I will predictably be able to trace the sources of confusion in the mind of any person who still believes that the Entscheidungsproblem is a done deal.

  25. In the sense that the future cannot be predicted (for the simple reason that the future is an arrangement of matter that has not happened yet), I am prepared to agree that we cannot reliably predict whether a complex programme will stop without testing that programme. This is due to limitations in our ability to run the complexity in our minds and limitations on our ability to predict the future.

  26. I do, therefore, accept that the decision problem will, in some sense, remain; but not on the basis of a simplistic Aristotelian model of logic. Nor do I accept that the Turing formulation is empirically reliable on the basis of either of his attempts:
    1. by using the flawed logic of Cantor’s diagonalisation or
    2. by means of ‘contradiction’, via the liar construction above.

  27. As usual, it all depends upon what each individual may think they understand by any given decision problem.

Related further reading
why Aristotelian logic does not work
The logic of ethics
the confusions of Gödel (in four parts)
Feedback and crowding
Decision processes

For related psycho-logical documents, start with
Intelligence: misuse and abuse of statistics


1. Entscheidungsproblem - decision problem, often referred to as the halting problem. Turing’s paper, On computable numbers with an application to the Entscheidungsproblem, is available at this site.
2. Note that at the end of On computable numbers..., section 8, Turing mentions expressing a formula as a Gödel number, thus allowing the application of diagonalisation in order to produce new formulae. This is rather akin to the Richardian approach mentioned in Metalogic A3, §197. Remember also that Gödelian ‘numbers’ do not include all naturals (see Metalogic A2, §150). I am, therefore, unsure whether this comment of Turing’s makes useful sense, as it is based on what I regard as insecure reasoning.
3. See the start of On computable numbers..., section 9.

4. “The glass is falling hour by hour, the glass will fall for ever,
But if you break the bloody glass you won't hold up the weather.”

Louis MacNeice, 1938, from Bagpipe Music.

5. “Upon those who step into the same rivers different and ever different waters flow down.”

Heraclitus, c. 540 – c. 480 BC, born Ephesus, now Selçuk, Turkey.
6. Or series of acts, according to how you choose to count.
7. Yet more words meaning recursion or feedback. (Go to Feedback and crowding for much more detail on this concept.)
8. MS Windows, for instance, is an operating system.
9. See, for example, Marvin Minsky, The Society of Mind.
Minsky’s home page can be found at http://www.media.mit.edu/people/minsky/.
In his latest work, Minsky is attempting to integrate ideas of emotion into his model of the functioning brain; see the documents on his site page listed above, under the entry The Emotion Machine (draft).
10. See time in Why Aristotelian logic does not work,
and cause, chance and choice in The logic of ethics.
11. Robert Bolt, A Man for all Seasons [page no. not currently available]

Margaret: “Father, the man is bad.”
More: “There’s no law against that.”
Roper: “There is a law against it. God’s law.”
More: “Then God can arrest him.”
Roper: “Sophistication upon sophistication!”
More: “No. Sheer simplicity. The law, Roper, the law. I know what’s legal, but I don't always know what’s right. And I'm sticking with what’s legal.
Roper: “Then you set man’s law against God’s?”
More: “No. Far below. But let me draw your attention to a fact. I am not God. The currents and eddies of right and wrong, which you find such plain sailing, I can't navigate. I'm no voyager. But in the thickets of the law, there I am a forester. I doubt if there’s a man alive who could follow me there, thank God.”
Alice: “While you talk, he is gone.”
More: “And go he should, if he was the Devil himself, until he broke the law.”
Roper: “So now you'd give the Devil the benefit of law!”
More: “Yes. What would you do? Cut a great road through the law to get to the Devil?”
Roper: “I'd cut down every law in England to do that!”
More: “Oh? And when the last law was down, and the Devil turned round on you -- where would you hide, Roper, the laws all being flat. This country’s planted thick with laws from coast to coast -- man’s laws, not God’s -- and if you cut them down -- and you're just the man to do it -- do you really think you could stand upright in the winds that would blow then? Yes, I'd give the Devil benefit of the law, for my own safety’s sake.”
12. Ignoring the time signals take to travel.
13. See Landauer.
13a. His ‘machine’ is also an idealised human calculator and thus Turing refers to it as ‘he’.
14. Saccade: usually describes brief rapid movement of the eye between fixation points.
15. See, for instance, Penrose.
16. Synonymous
17. On computable numbers..., p.230.
18. On computable numbers..., p.259.
19. See, for instance, “ ‘discrete state machines’...strictly speaking there are no such machines” ; Computing machinery and intelligence, p.439.
20. See Computing machinery and intelligence, p.452
21. See The logic of ethics.
22. See The confusions of Gödel.
23. This is strange, considering Turing’s close attention to the finitistic requirements for his definitions elsewhere in his 1936 paper. To check Turing’s comments on the finite elements of his ‘machine’, the best way is to search for the word ‘finite’ in the paper On computable numbers...
24. One possible source of confusion seems to be to imagine that the stop detector somehow interacts with the input copy of the stop detector. But this is far from clear in the very many versions attempting to explain the stopping problem that I have seen. I am convinced, sufficiently for myself, that those attempting to describe ‘the problem’ are not thinking it through with any great clarity.


Robert Bolt, A Man for all Seasons [1st ed. 1960] 1990, Vintage Books
ISBN-10: 0679728228
ISBN-13: 978-0679728221
$9.95 [amazon.com]

[Play first performed in 1954]
Marvin Minsky, The Society of Mind, Simon & Schuster, 1st ed. 1986, pbk; reprinted 1988
ISBN-10: 0671657135
ISBN-13: 978-0671657130
$16.64 [amazon.com]

Andrew Hodges, The enigma of intelligence, HarperCollins Publishers Ltd, 1985, pbk
ISBN-10: 0045100608
ISBN-13: 978-0045100606

A workman-like biography.


Words in single quotes emphasise a less than close meaning for those words, or a dubious usage.
Words in italics are words being used as labels for other words, for instance the word word.

email abelard: email_abelard [at] abelard.org

© abelard, 2001, 31 july

all rights reserved

the address for this document is https://www.abelard.org/metalogic/metalogicB1.htm

6706 words
prints as 16 A4 pages (on my printer and set-up)
