
Anticipation and Computation: Is Anticipatory Computing Possible? (Part 2)

Anticipation Across Disciplines (M. Nadin, Ed.) Cognitive Science Monographs. Cham CH: Springer. Vol. 29, pp. 283-339. September 2015

2 Practical Considerations
Pursuing the enticing goal of making everything behave like a machine—and paying the price for it—stands in sharp contrast to a vision of acknowledging the living and its definitory anticipation. One writer put it in quite expressive terms: “Think of the economy as being more like a cat than a washing machine,” (Taleb [45]). Evidently, becoming servants to robots, as Shannon cavalierly conceded, goes in the opposite direction. With this note, we are back to the preliminaries to the broader question of whether anticipatory computing is possible.
2.1 The “Why?” Question
Actually, “Why anticipatory computing?” would be a better question than simply questioning its feasibility. The reason for entertaining the question is straightforward: computation, of any nature, is nothing other than counting or measuring. The digital computer, as opposed to the person whose work was to calculate, i.e., to be a “living computer,” provides automated calculation. (The first documented use of the word computer dates to 1613; it refers to persons performing calculations, to which we shall return.) It is not surprising that the human associates with calculation certain desired capabilities: the ability to make distinctions (large, small, wide, narrow), to compare, to proceed in a logical manner, to guide one’s activity. From the stones (calculi) used yesteryear to describe property, effort, and sequences of all kinds to the alphabet of zeroes and ones (or Yes and No) of the new electronic abacus, the change was merely in scale, scope, speed, and variety of calculations, but not in its nature. The reason for calculations, and, implicitly, for measurement, remains the same: to cope with change, to account for it, to impact change.
Considering computer industry claims (e.g., Cigna Compass℠, the MindMeld™ iPad app, among others), the question of whether anticipatory computing is possible appears to be meaningless. The public and the major users of computation (banks, the military, healthcare, education, the justice system, etc.) are enticed by gadgets supposedly able to perform anticipations. Leaving aside the marketing gags—“We know where you will be on February 2, 2017 and with whom,” states the Nostradamus bot—what remains, as we shall see, are computations with predictive features. As respectable as one or the other is, their performance does not have identifiable anticipation features. It is still hard to believe that the computer community, made up of presumably smart individuals, simply falls prey to the seductive misrepresentation methods characteristic of marketing. This is an example of lack of knowledge, of incompetence. Setting goals (to anticipate) not connected to what is actually offered—to extract information from patterns of behavior with the aim of predicting—does not qualify as competence. To define human-level control in terms of computer game proficiency is just as misleading (regardless of the unreserved blessing of being published in Nature [6]).
The reason for this extended study of whether anticipatory computing is possible is to set the record straight and inform future work that reflects the understanding of anticipatory processes.
2.1.1 Counting
Stones, or knots on a rope (Fig. 11), are a form of record keeping: so many sheep, so many slaves, so many arrows, whatever; but also number of days, of bricks, of containers (for water, oil, wine, etc.), of anything that is traded. Such measurements translate as the basis for transactions: those entitled to a portion of the exchange (yes, change of ownership, currency of reference related to the days and weeks it took to hunt, process, make, preserve, etc.) will exercise their rights. To know ahead of time what and how things will change always afforded an edge in the economy of survival (as in any subsequent economy, including the transaction economy characteristic of our time). Ahead of time (ante is Latin for “before”) is ahead of others. Actions ahead of time are anticipatory. They are conducive to higher performance.
Therefore, anticipation as the “sense of the ever-changing context” is co-substantial with the preoccupation of describing change, either in words or in numbers, or, more generally, in any form of representation. Hence: representing change, as image (in the prehistoric cave paintings, for instance), as words trying to describe it, as numbers, equations, visualization, etc., is indicative of anticipation—the never-ending wager against change.

Fig. 11 The Incan quipu


The fact that some descriptions (i.e., representations) are more adequate than others for certain activities is a realization of the nature of observations leading to learning. Where quantitative distinctions are more effective, numbers become more important than images, sounds, or words. With numbers—very much derived from the geometry of the human body (single head; pair of eyes, nostrils, ears; set of fingers and toes; myriad strands of hair)—comes the expectation of capturing change in operations easy to understand and reproduce. Counting emerges as a fundamental cognitive activity. Leibniz went so far as to state that music is the pleasure that the human mind experiences from counting without being aware that it is counting. (Poetry would easily qualify for the same view, so would dance.) This statement can be generalized, although temporal aspects (music unfolding over time/duration/interval, as expression of rhythm) and spatial aspects are rather complementary. Still, counting involves the most basic forms of perception: the visual and the aural. To count implies the abstraction of the number, but also the abstraction of point, line, surface, and volume. A straight line is a set of adjacent points that can be counted (and the result is the length of the line); a surface is the collection of all lines making it up; and volume is represented by all the elements needed to arrive at it.
Just for illustration purposes, consider a more than 500-year-old woodcut (The Allegory of Arithmetic, Gregor Reisch, 1504). This image is part of the Margarita Philosophica, describing the mapping from a counting board (for some reason, Pythagoras was chosen as a model) to the emergent written calculation (in which, for some even more obscure reason, Boethius is depicted). In this image, the Hindu representation of numbers is used in what emerges as the art and science of mathematics.
Fig. 12 The Allegory of Arithmetic. The abacist uses the abacus; the algorist is involved in formulae


Indeed, the calculating table (counting board) is a machine—conceptual at that stage—and so are the abacus and all the contraptions that make counting, especially of numbers of a different scale than that of the immediate reality, easier, faster, cheaper. In the image (Pythagoras competing with Boethius), an abacist (one who knows how to count using an abacus) and an algorist (one who calculates using formulae) are apparently competing (Fig. 12). The abacist is a computer, that is, a person who calculates for a living. It is appropriate to point out that there are many other forms of computing, such as counting the elements that make up a volume of liquid, or a mass (of stone, wood). Under these circumstances, counting becomes an analog measurement. With the abacus, after moving the appropriate beads, you only have to align the result, and it will fall in place. This holds even more with the attempt to measure, i.e., to introduce a unit of reference (describing volumes, or weights).
2.2 Measuring
The act of measuring most certainly implies numbers. It also implies the conventions of measuring units, i.e., a shared understanding of what the numbers represent, based on the science behind their definition. Indeed, the numbers as such are data, their meaning results from associating the numbers to the measuring process, such as “under the influence of gravity” (Fig. 13).
Fig. 13 Measurement as pouring medicine into beaker (The pharmacist “computes” the quantities prescribed)


Quantitative distinctions are associated with numbers. Qualitative distinctions are associated with words, or any other means for representing them (e.g., sounds, colors, shapes). In the final analysis, there are many relations between quantitative and qualitative distinctions. Of course, numbers can be represented through words as well, or visually, or through sounds.
It should be clear at this time that counting numbers (or associating qualities, such as small, round, soft, smelly, etc.) is a discrete process, while “falling in place” (measuring, actually) is based on analogies and is a continuous process. This distinction pretty much defines the numbering procedure—one in which representations are processed sequentially, according to some rules that correspond to the mapping from the questions to be answered to what it takes to answer it. Example: If you have 50 sticks of different length, how does one order them from shortest to longest? Of course, you can “count” the length of each (using a measuring stick that contains the “counted” points corresponding to the units of measurement) and painstakingly arrange them in the order requested. Or you can use a “recipe,” a set of instructions, for doing the same, regardless of how many there are and how long each one.
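The “recipe” alluded to above can be sketched as a sorting procedure. This is a minimal illustration, not a method from the text; the stick lengths are hypothetical values, and selection sort is chosen only because it is among the simplest sets of instructions that works regardless of how many sticks there are.

```python
# A sketch of the "recipe" for ordering sticks: selection sort, a set of
# instructions that works for any number of sticks of any lengths.
# The 50 lengths below are hypothetical illustration values.

def order_sticks(lengths):
    """Return the stick lengths ordered from shortest to longest."""
    sticks = list(lengths)  # work on a copy
    for i in range(len(sticks)):
        # find the shortest remaining stick...
        shortest = min(range(i, len(sticks)), key=lambda j: sticks[j])
        # ...and move it into position i
        sticks[i], sticks[shortest] = sticks[shortest], sticks[i]
    return sticks

fifty_sticks = [(37 * k) % 101 + 1 for k in range(50)]  # 50 distinct lengths
ordered = order_sticks(fifty_sticks)
```

The point of the sketch is exactly the one made in the text: the instructions do not change with the number or the lengths of the sticks.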
2.2.1 The Early Meaning of Algorithm
Let us recall the calculating table from the Margarita Philosophica allegory of arithmetic: to the right, the human “computer,” checking each stick, keeping a record of the length of each, comparing them, etc.; to the left, the algorist—a person using a counting method by writing numbers in a place-value form and applying memorized rules to these numbers. Before that, once the scale changed (say, from tens to hundreds to thousands and tens of thousands), the representations used, i.e., the symbols, such as Roman numerals, changed. LVI means 56. A native of Khwarizm (a region in what today is Uzbekistan) gave his name (al-Khwarizmi arrived in Latin as Algoritmi) to a treatise on the number system of the Indians (Algoritmi de Numero Indorum). Actually, he provided the decimal number system for counting.1 The fact that the notion of the algorithm, which has characterized the dominant view of computation since Turing, is associated with his name is rather indicative of the search for simple rules in counting. Those implicit in the decimal number system and in the place-value form are only an example. Algorithm (on the model of the word logarithm, in French) maintains the connection to arithmos, ancient Greek for number. Numbers are easier to use in describing purposeful operations, in particular, means and methods for measuring. Such means and methods replace guessing, the raw estimate that experienced traders knew how to make, and which was accepted by all parties involved. The ruler and the scale are “counting” devices; instructions for using them are algorithms.
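The example LVI = 56 can be checked with a short conversion routine, a sketch of the representational shift from Roman numerals to place-value decimal numbers. The symbol table is the standard one; the routine itself is an illustration, not taken from the source.

```python
# A sketch of converting Roman numerals into place-value decimal
# numbers -- the kind of memorized rule the algorist applied.

ROMAN = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_decimal(numeral):
    total = 0
    # look at each symbol together with the one that follows it
    for symbol, next_symbol in zip(numeral, numeral[1:] + ' '):
        value = ROMAN[symbol]
        # subtractive principle: a smaller value before a larger one counts negatively
        if next_symbol != ' ' and ROMAN[next_symbol] > value:
            total -= value
        else:
            total += value
    return total
```

For example, `roman_to_decimal('LVI')` reproduces the 56 mentioned in the text.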
Historic accounts are always incomplete. Examples related to the need and desire to make counting, and thus measurement, more like machine operations are usually indicative of the dominant knowledge metaphor of the time (the clock, the pneumatic pump, the steam engine, etc.). The machines of those past times embody algorithms. Leibniz used the label machine in describing the rules of differential calculus, which he translated into the mechanical parts (gears) of his machine. Machines such as clocks, water wheels, and even the simple balancing scale (embodying the physics of the lever) were used for various purposes. The balancing scale was used to estimate weight, or the outcome of applying force in order to move things (the most elementary form of change, i.e., in position), or change their appearance. Recalling the various contributions to computation—Blaise Pascal and his Pascaline device; Leibniz and his computer; Schickard and the calculating clock associated with his name; Babbage’s analytical engine inspired by the loom, etc.— means to recall the broader view of nature they embody. It also suggests that calculations and measurements can be performed basically in either an analog manner or a digital manner (Fig. 14).
2.2.2 Why Machines for Calculations?
To this question Leibniz provided a short answer: “…it is unworthy of excellent men to lose hours like slaves in the labor of calculation which could be safely relegated to anyone else if machines were used.” This was written 12 years after he built (in 1673) a hand-cranked machine that could perform arithmetic operations.
1He also wrote the famous Istikhraj Ta’rikh al-Yahud (Extraction of the Jewish Era), a lunar cycle-based calendar for the Jewish holidays. This elaborate calendar, still in use, is a suggestion of his possible Jewish roots. (At the time of his activity, he was a Muslim scholar of the early stages of the creed, after he gave up his Zoroastrian identity.) Inspired by the Hebrew Mishnat ha-Middot (Treatise on Measures), he suggested elements of geometry well aligned with those of Euclid. That was a world of practiced diversity, including diversity of ideas.
Fig. 14 The Pascaline, Leibniz’s machine, Schickard’s calculating clock


What he wrote is of more than documentary relevance. Astronomy, and applications related to it, required lots of calculations in his time. Today, mathematics is automated, and thus every form of activity with a mathematical foundation, or which can be mathematically described, benefits from high-efficiency data processing. Excellent men and women (to paraphrase Leibniz) program machines that process more data and faster, because almost every form of human activity involves calculations. Still, the question of why machines for calculation does not go away, especially in view of realizations that have to do with a widely accepted epistemological premise: mathematics is the way to acquire knowledge. The reasonable (or unreasonable) effectiveness of mathematics in physics justifies the assumption. But there is no proof for it. Can this hypothesis be “falsified”?
It is at this juncture that our understanding of what computation is and the many forms it takes becomes interesting. Eugene Wigner’s article of 1960 [46] contrasts the “miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics” to the “more difficult” task of establishing a “theory of the phenomena of consciousness, or of biology.” Less than shy about the subject, Gelfand and Tsetlin [47] went so far as to state, “There is only one thing which is more unreasonable than the unreasonable effectiveness of mathematics in physics, and this is the unreasonable ineffectiveness of mathematics in biology.” Leibniz would have seconded this formulation. Velupillai [48] uses the same formulation with respect to economics.

2.3 Analog and Digital: Algorithmic and Non-algorithmic

The analog corresponds to the continuity of phenomena in nature. Pouring water or milk into a measuring cup in order to determine the volume, and thus, indirectly, the weight, is indicative of what analog calculations are. It is counting more than one molecule at a time, obviously. Similarity defines the domain. The lever on a scale automates the counting. It functions in the analog domain: add two or three more ounces until the scale is level, and that is the outcome of the calculation. Of course, to model simple phenomena and scale them is easy. But to reconfigure the analog—from calculating the volume of a liquid to that of a gas or solid—is more difficult.
Moreover, it is hard to distinguish between what is actually processed and the noise that interferes with the data. By way of counter-example, when you count beans in a bag, by hand, you can easily notice the stone among them.
The digital is focused on sampling, on making the continuous discrete. At low rates of sampling, much relevant data is lost. The higher the rate, the better the approximation. But there is a cost involved in higher rates, and there are physical limitations to how fast a sampling machine can go (Fig. 15).
Fig. 15 Sampling: comparisons of rate and data retention in sampling

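The trade-off between sampling rate and data retention can be sketched numerically. The signal frequency and the two rates below are illustrative choices, not values from the figure: a 5 Hz sine sampled below the Nyquist rate (10 Hz) loses the oscillation, while a much higher rate retains it.

```python
# A minimal sketch of sampling a continuous signal at two rates.
import math

def sample(frequency_hz, rate_hz, duration_s=1.0):
    """Sample a unit-amplitude sine of the given frequency."""
    n = int(rate_hz * duration_s)
    return [math.sin(2 * math.pi * frequency_hz * t / rate_hz) for t in range(n)]

coarse = sample(5, 8)     # below the Nyquist rate: aliasing, detail lost
fine = sample(5, 1000)    # well above the Nyquist rate: good approximation

# the finely sampled signal actually swings over nearly its full range
fine_range = max(fine) - min(fine)
```

The higher rate buys a better approximation at the cost of 1000 values per second instead of 8, which is the cost-versus-fidelity point made in the text.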

In the context of interest in machines of all kind (for conducting wars, for successful wagers, for calculating the position of stars, for navigation, for making things, etc.), the theoretic machine called automaton was the most promising. For a while, what happened in the box (how the gears moved in Leibniz’s machine, for example) and what rules were applied—which is the same as saying which algorithm was used—was not subject to questioning. Heinz von Foerster took the time to distinguish between trivial and non-trivial machines (Fig. 16).
Fig. 16 Heinz von Foerster: trivial and non-trivial machines (his own drawings)

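Von Foerster’s distinction can be sketched in a few lines. The doubling function and the state-update rule below are arbitrary illustrations, not his examples: a trivial machine maps input to output by a fixed function, while a non-trivial machine also updates an inner state, so the same input can yield different outputs over time.

```python
# A sketch of von Foerster's trivial vs. non-trivial machines.

def trivial_machine(x):
    return x * 2  # same input, always the same output

class NonTrivialMachine:
    def __init__(self):
        self.state = 0
    def step(self, x):
        out = x * 2 + self.state  # output depends on the inner state...
        self.state += 1           # ...and the inner state changes with each step
        return out

m = NonTrivialMachine()
first = m.step(3)   # 6
second = m.step(3)  # 7: same input, different output
```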

His distinction proved to be more consequential than initially assumed, once the model of the neuron (more precisely, its deterministic reduction) was adopted (Fig. 17).
Fig. 17 The neuron machine


It is important to understand that input values are no longer a given, and that in the calculation scheme of neuronal networks, the machine is “taught” (through training) what it has to do. This applies from the simplest initial applications of the idea (McCulloch and Pitts, 1943) to the most recent deep Q-network (DQN), which combines reinforcement learning with deep neural networks (in the case of mimicking feed-forward processing in the early visual cortex; Hubel and Wiesel [49]).
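The McCulloch-Pitts unit mentioned above can be sketched as a deterministic threshold element. The weights and threshold below are illustrative choices (not from the 1943 paper), picked so that the unit realizes the logical AND of two binary inputs:

```python
# A sketch of a McCulloch-Pitts neuron: a deterministic threshold unit.

def mcculloch_pitts(inputs, weights, threshold):
    """Fire (1) if the weighted sum of inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# illustrative parameters realizing logical AND
AND_WEIGHTS, AND_THRESHOLD = [1, 1], 2

# truth table for inputs (0,0), (0,1), (1,0), (1,1)
outputs = [mcculloch_pitts([a, b], AND_WEIGHTS, AND_THRESHOLD)
           for a in (0, 1) for b in (0, 1)]
```

In the trained networks the text goes on to discuss, the weights are not chosen by hand but adjusted through training; the unit itself remains this deterministic reduction of the neuron.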
Evidently, the subject of interest remains the distinction between reaction-based processes—the theoretic machine has input, a number of inner states, and an output that is the outcome of the calculation—and predictive performance. There is no anticipatory dimension to account for. The “non-trivial machine” (von Foerster and Poerksen [50]) is essentially reactive: part of the calculation implies a dynamic dimension of the inner state connections. It is conceivable that along this line of an autonomic function associated with the inner state, anticipation could be defined as the result of the self-organization of such a machine. The DQN, like the professional human game testers, acquires a good understanding of the game algorithm, outperforming other reinforcement learning methods (including the training of living gamers).
2.3.1 The a-Machine
With the Turing machine, the real beginning of automated calculation was reached. Interestingly enough, behind his theoretic machine lies the same problem of automatic operations, in this case, the making and testing of mathematical statements. Hilbert was convinced that calculations were the basis for them. The meta-level of the enterprise is very relevant:
(a) objects in the reality of existence → representations → acts upon representations → new knowledge inferred from representations
(b) objects → numbers → counting → measurement → ideas about objects → ideas about ideas
The Turing saga has been written so many times (and filmed with increasing frequency) that it is hardly conceivable that its most important aspects have not yet been made public. Still, to understand the type of computation associated with his name—moreover, whether it is a possible path to anticipatory computing—a closer look is called for. Hilbert’s conjecture that mathematical theories from propositional calculus could be decided—Entscheidung is the German for decision, as in proven true-or-false—by logical methods performed automatically was rejected. Indeed, Turing (after Gödel and Alonzo Church) disappointed Hilbert, the mathematician who had challenged the community with quite a number of hard problems (some not yet elucidated).
First and foremost: Turing provided the mathematical proof that machines cannot do what mathematicians perform as a matter of routine: developing mathematical statements and validating them. This is the most important, and most neglected, contribution. Nevertheless, the insight into what machines can do, which we gain from Turing’s analysis, is extremely important. Wittgenstein [51], recalling a conversation with Turing (in 1947), wrote: “ ‘Turing’s machines’: these machines are humans who calculate. And one might express what he says also in the form of games.” Indeed, the idea behind digital computers is machines intended to execute any operation that could be done by a human computer. (Remember: initially, as of 1613, “computer” applied to a person employed to calculate, what Gregor Reisch meant by an algorist.) Turing [52] himself wrote, “A man provided with paper and pencil and rubber, and subject to strict discipline, is in effect a universal machine.” At a different juncture, he added: “disciplined but unintelligent” [53]. Gödel would add, “mind, in its use, is not static, but constantly developing” [54]. “Strict discipline” means “following instructions.” Instructions are what, by consensus, became the algorithm. Intelligence at work often means shortcuts, new ways of performing an operation, even a possibly wrong decision. Therefore, non-algorithmic means not subject to pre-defined rules, but rather discovered as the process advances. For those who fail to take notice of Turing’s own realization that not every computation is algorithmic, non-algorithmic computation does not exist.
Automatic machines (a-machines, as Turing labeled them) can carry out any computation that is based on complete instructions; that is, they are algorithmic. One, and only one, problem remains: the machine’s ability to recognize the end of the calculation, or that there is no end. This means that the halting problem turned out to be undecidable. This characterization comes from Gödel’s work, where the undecidable names an entity that cannot be described completely and consistently. Turing’s a-machine consists of an infinite tape on which symbols can be stored, and a read/write tape head that can move left or right (along the tape), retrieve (read) symbols from the tape, or store (write) symbols to the tape. The machine has a transition table and a control mechanism. The initial state (one from among many in the transition table) is followed by what the control mechanism (checking the transition table) causes the machine to do. This machine takes the input values, obviously defined in advance; it operates on a finite amount of memory (from the infinite tape) during a limited interval. The machine’s behavior is pre-determined; it also depends on the time context. Examining the design and functioning rules of the a-machine, one can conclude the following: whatever can be fully described as a function of something else with a limited amount of representations (numbers, words, symbols, etc.) can be “measured,” i.e., completed on an algorithmic machine. The algorithm is the description.
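The functioning rules just described can be sketched in a few lines of code. The machine below is a hypothetical example, not one of Turing’s: its transition table increments a binary number written on the tape (least significant bit at the right); the blank symbol `_` and the starting position of the head are conventions chosen here for illustration.

```python
# A minimal sketch of an a-machine: tape, read/write head, transition
# table, control mechanism, and a halting state.

def run_a_machine(tape, transitions, state='carry', blank='_', max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = max(tape)              # convention: start at the rightmost symbol
    for _ in range(max_steps):
        if state == 'halt':
            break
        symbol = tape.get(head, blank)
        # the control mechanism consults the transition table
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    return ''.join(tape[i] for i in sorted(tape)).strip(blank)

# transition table: (state, read symbol) -> (write symbol, move, next state)
INCREMENT = {
    ('carry', '1'): ('0', 'L', 'carry'),  # 1 plus carry gives 0, carry on
    ('carry', '0'): ('1', 'L', 'halt'),   # 0 plus carry gives 1, done
    ('carry', '_'): ('1', 'L', 'halt'),   # ran off the left edge: new digit
}

result = run_a_machine('1011', INCREMENT)  # binary 11 + 1 = 12, i.e., '1100'
```

Everything the machine does is fully described in advance by the transition table: the algorithm is, literally, the description.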
With the a-machine, a new science is established: the knowledge domain of decidable descriptions of problems. In some sense, the a-machine is no more than the embodiment of a physics-based view of all there is. This view ascertains that there are no fundamental differences between physical and living entities. This is a drastic epistemological reduction. It ascertains that there is a machine that can effectively measure all processes—physical or biological, reactive or anticipatory— as long as they are represented through a computational function.
2.3.2 Choice, Oracle, and Interactive Machines
Turing knew better than his followers. (Albeit, there is no benefit in making him the omniscient scientist that many proclaim him to be, reading into incidental notes ideas of a depth never reached.) In the same paper [53], Turing suggested different kinds of computation (without developing them). Choice machines, i.e., c-machines, involve the action of an external operator. The a-machines were his mathematical proof for the Hilbert challenge. Therefore, they are described in detail. The c-machine is rather a parenthesis. Even less defined is the o-machine (the oracle machine advanced in 1939), which is endowed with the ability to query an external entity while executing its operations. The c-machine entrusts the human being with the ability to interact on-the-fly with a computation process. The o-machine is rather something like a knowledge base, a set subject to queries, and thus used to validate the computation in progress. Turing insisted that the oracle is not a machine; therefore the oracle’s dynamics is associated with sets. Through the c-machine and the o-machine, the reductionist a-machine is opened up. Interactions are made possible—some interactions with a living agent, others with a knowledge representation limited to its semantic dimension. Predictive computation is attained; anticipation becomes possible.
The story continues. Actually, the theoretic construct known as the Turing machine—in its a-, c-, and o-embodiments—will eventually become a machine proper within the ambitious Automatic Computing Engine (ACE) project. (In the USA, the EDVAC at the University of Pennsylvania and the IAS at Princeton University are its equivalents.) “When any particular problem has to be handled, appropriate instructions…are stored in the memory…and the machine is ‘set up’ for carrying out the computation,” (Turing [55]). Furthermore, Turing diversified the family of his machines with the n-machine (unorganized machine of two different types), leading to what is known today as neural networks computation (the B-type n-machines having a finite number of neurons), which is different in nature from the algorithmic machine.
Von Neumann (who contributed not only to the architecture of the Turing machine-based computer, but also to the neural networks processing of data) asserted that “…everything that can be described with a finite number of words, could be represented using a neural network” (Siegelmann and Sontag [56]). This is part of the longer subject of the Turing completeness of recurrent neural nets. Its relevance to the issue of anticipatory computing is indirect, via all processes pertinent to learning.
One more detail regarding Turing’s attempt to define a test for making the distinction between computation-based intelligence and human intelligence possible: human intelligence corresponds to the anticipatory nature of the living. Therefore, to distinguish between machine and human intelligence (the famous “Turing test”) is quite instructive for our understanding of anticipation. It is well established by now that imitation, which was Turing’s preferred game, is by no means indicative of intelligence.
Machines were programmed to answer questions in a manner that would make them indistinguishable from humans doing the same. This became the standard for winning in competitions meant to showcase progress in artificial intelligence (AI). To state that some entity—machine, person, simulation, or whatever else—can think is of low relevance, unless the thinking is about change, i.e., unless it involves awareness of the future. The number of words necessary for describing such awareness is not finite; the number increases with each new self-realization of awareness. Creativity, in its broadest sense—to originate something (a thought, a melody, a theorem, a device, etc.)—is, of course, better suited to qualify as the outcome of thinking. However, at this level of the challenge, it should be clear that thinking alone is a necessary but not sufficient condition for creativity. Anticipation is the aggregate expression, in action, of all that makes up the living. Turing was not aware of this definitory condition of anticipation. It is difficult to speculate on the extent to which he would have subscribed to it.
Not to be outdone by Google DeepMind, Facebook’s AI research focused on understanding language (conversing with a human) [6]. Learning algorithms in this domain are as efficient as those in playing games—provided that the activity is itself algorithmic. Unfortunately, the lack of understanding of anticipation undermines the effort to the extent that the automated grammar deployed and the memory networks become subjects in themselves. The circularity of the perspective is its main weakness. One more observation: Imagine that you were to count the number of matchsticks dropped from a matchbox (large or small). It is a sequential effort: one stick after another. There are persons who know at once what the total number is. The label savant syndrome (from the French idiot savant, which would mean “learned idiot”) is used to categorize those who are able to perform such counting (or other feats, such as multiplication in their head, or remembering an entire telephone directory). Machines programmed to perform at this level are not necessarily different, in either ability or degree of “autism”—impaired interaction, limited developmental dynamics.
But let us not lose sight of interactivity, of which Turing was aware: on the one hand, Turing computation is captive to the reductionist-deterministic premises within which only the reaction component of interactivity is expressed; on the other, interaction computing (Eberbach et al. [57]) is not reducible to algorithmic computation. The most recent developments in the area of quantum computation, evolutionary computation, and even more so in terms of computational ubiquity, in mobile computing associated with sensory capabilities, represent a grounding for the numerous interrogations compressed in the question: Is anticipatory computation possible? Moreover, the “Internet of Everything” (IoE) clearly points to a stage in computation that integrates reactive and anticipatory dimensions.

2.4 What Are the Necessary Conditions for Anticipatory Computing?

For a computation to qualify as anticipatory, it would have to be couched in the complexity corresponding to the domain of the living. Elsewhere [12], I argued that descriptions of objects and phenomena, natural or artificial, that correspond to the intractable make up the realm of G-complexity. Anything else corresponds to the physical.
2.4.1 Beyond Determinism
Anticipation comes to expression within G-complexity entities. Quantum processes transcend the predictable; they are non-deterministic. Consequently, their descriptions entail the stochastic, which is one possible way to describe non-deterministic processes. To the extent that such quantum-based computers are embodied in machines (I am personally aware only of the functioning of D-Wave, and there is some question whether it is a real quantum machine), one cannot expect them to output the same result all the time (Fig. 18).
Fig. 18 (a) Quantum computation used in image recognition: apples, (b) a moving car


Rather, such a computer has no registers or memory locations; to execute an instruction means to generate samples from a distribution. There is a collection of qubit values—a qubit being a quantum variable whose measurement yields a value in {0, 1}. A certain minimum value has to be reached. The art of programming is to set the weights and strengths that influence the process analyzed. Instructions are not deterministic; the results have a probabilistic nature. One case: Is the object in the analyzed frame a moving car? The answer is more like “It could be!” than “It is!” or “It’s not!”
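The “It could be!” kind of answer can be sketched, classically, as sampling from a distribution. The probability and the decision thresholds below are arbitrary illustrations; the sketch only mimics the sampling character of such machines, not any actual quantum hardware.

```python
# A sketch of instruction-as-sampling: instead of a deterministic answer,
# the "computation" draws samples and reports a degree of belief.
import random

def is_moving_car(p_true=0.7, samples=10000, seed=42):
    """Sample repeatedly; answer with a hedged verdict and a confidence."""
    rng = random.Random(seed)
    hits = sum(rng.random() < p_true for _ in range(samples))
    confidence = hits / samples
    if confidence > 0.95:
        return 'It is!', confidence
    if confidence < 0.05:
        return "It's not!", confidence
    return 'It could be!', confidence

verdict, confidence = is_moving_car()
```

Running the same instruction again with a different seed would yield a slightly different confidence: the result has a probabilistic, not a deterministic, nature.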
Predictive calculations are in some form or another inferences from data pertinent to a time of reference (t0) to data of the same phenomenon (or phenomena) at a later time (t1 > t0). Phenomena characteristic of the physical can be precisely described. Therefore, even if non-linearity is considered (a great deal of what happens in physical reality is described through non-linear dependencies), the inference is never of a higher order of complication than that of the process of change itself. In quantum phenomena, the luxury of assuming that precise measurements are possible is no longer available. Even the attempt to predict a future state affects the dynamics, i.e., the outcome. It is important to understand not only how sensitive the process is to initial conditions, but also how the attempt to describe the path of change is affected in the process. (For more details, the reader should consult Elsasser’s Theory of Quantum Mechanical Description [58].) One more observation: the living is at least as sensitive to observation (representation, measurement), without necessarily qualifying as having a quantum nature.
Although very few scientists pursue this thought, it is significant to understand that Feynman argued for quantum computation in order to facilitate a better understanding of quantum mechanics, not for treating what are called “intractable problems.” Factoring numbers, which is a frequent example of what quantum computation could provide, is important (for cryptography, for instance). However, it is much more relevant to better understand quantum phenomena. Paul Benioff and Richard Feynman (independently, in 1982) suggested that a quantum system can perform computations. Their focus was not on how long it takes to factor a 130-digit number (the subject of Shor’s algorithm), not even the relation between time and the size of the input (the well-known P vs. NP problem of computer science).
In computations inspired by theories of evolution or genetics, the situation is somewhat different. Without exception, such theories have been shaped by the determinism of physics. Therefore, they can only reproduce the epistemological premise. But the “computations” we experience in reality—the life of bacteria, plants, animals, etc.—are not congruent with those of the incomplete models of physics upon which they are based. Just one example: the motoric expression (underlying the movement of humans and animals) might be regarded as an outcome of computation. Consider the classic example of touching the nose with the tip of the index finger (or any other finger, for that matter [59]). The physics of the movement (3 coordinates for the position of the nose) and kinematic redundancy (a wealth of choices given the 7 axes of joint rotation: 3 axes of shoulder rotation, elbow rotation, wrist rotation, etc.) lead to a situation in which we have three equations and seven unknowns. Of course, the outcome is indeterminate. The central nervous system, of extreme plasticity, can handle the richness of choices, since its own configuration changes as the action advances. However, those who perform computations in artificial muscles do not have the luxury of a computer endowed with plasticity. They usually describe the finite, and at most predict the way in which the physics of the artificial muscle works. There are, of course, many attempts to overcome such limitations. But similar to computer science, where computation is always Turing computation (i.e., embodied in an a-machine), biology-based computation, as practiced in our days, is more anchored in physics, despite the vocabulary. In reality, if we want to get closer to understanding the living, we need to generate a new language (Gelfand and Tsetlin [47, pp. 1–22]). Anticipation is probably the first word in this language.
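The indeterminacy of three equations with seven unknowns can be made concrete in a short sketch. The Jacobian values below are invented, and the minimum-norm choice stands in for just one of the infinitely many strategies a plastic nervous system could take:

```python
import random

# Underdetermined system: 3 task-space equations, 7 joint unknowns,
# as in the nose-touching example. J is a hypothetical 3x7 Jacobian
# (invented numbers) mapping joint motion to fingertip motion.
random.seed(0)
J = [[random.uniform(-1, 1) for _ in range(7)] for _ in range(3)]
target = [0.1, -0.2, 0.05]  # desired fingertip displacement

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Minimum-norm solution dq = J^T (J J^T)^{-1} target: one choice among
# infinitely many joint motions that reach the same target.
JJt = [[sum(a * b for a, b in zip(r1, r2)) for r2 in J] for r1 in J]
lam = solve3(JJt, target)
dq = [sum(J[k][j] * lam[k] for k in range(3)) for j in range(7)]
```

Any vector in the four-dimensional null space of J could be added to dq without changing the fingertip position; the living system exploits exactly that freedom.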
2.4.2 An Unexpected Alternative
Mobile computing, which is actually the outgrowth of cellular telephony—i.e., not at all a computing discipline, by virtue of its intrinsic human-machine hybrid nature—offers an interesting alternative. From the initial computer-telephone integration (CTI) to its many current embodiments (tablets, note- and netbooks, smartphones, etc.), mobile computing evolved into a new form of computation. First and foremost, it is interactive: somewhere between the c-machine and o-machine envisaged by Turing. Things get even more interesting as soon as we realize that the computer sine qua non telephone is also the locus of sensor interactions. In other words, we have a computer that is a telephone in the first place, but actually a video scanner with quite a number of functions in addition to communication. Before focusing on the ubiquity of mobile computation, it is worth defining, in reference to the first part of this study, various forms of computation that make possible forecasting, prediction, planning, and even some anticipatory processes.
Regardless of the medium in which probability-based computing is attempted—any physical substratum (such as the artificial muscle mentioned above) can be used for computational purposes—what defines this kind of calculation is the processing of probabilities. Probability values can be inputted to a large array and processed according to a functional description. A probability distribution describes past events and takes the form of a statistical set of data. In this data domain, inductions (from some sample to a larger class), or deductions (from some principle to concrete instantiations), or both, serve as operations based upon which we infer from the given to the future. The predictive path can lead to anticipation. When the process leads from regularities associated with larger classes of observed phenomena to singularities, the inference is based on abduction (or, to be faithful to Peirce’s terminology, retroduction), which is history dependent. Indeed, new ideas associated with hypotheses (yet another name for retroduction) are not predictions, but an expression of anticipation (Fig. 19).
Fig. 19 Probability computer: the input values are probabilities of events. The integration of many probability streams makes possible dynamic modeling


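The inductive leg of this inference, from past frequencies to a projected future, can be sketched minimally; the event history below is invented for illustration:

```python
from collections import Counter

def empirical_distribution(events):
    """Probability distribution (relative frequencies) over past observations."""
    counts = Counter(events)
    total = sum(counts.values())
    return {e: c / total for e, c in counts.items()}

def predict_next(events):
    """Inductive inference: project the most frequent past event forward.
    This is prediction from regularities, not anticipation."""
    dist = empirical_distribution(events)
    return max(dist, key=dist.get)

history = ["rain", "sun", "rain", "rain", "sun"]  # invented observations
forecast = predict_next(history)
```

Abduction (retroduction), by contrast, would generate a new hypothesis not contained in the frequency table, which is why it escapes this kind of calculation.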
Alternatively, we can consider the interplay of probability and possibility. This is relevant in view of the fact that information—i.e., data associated with meaning that results from being referenced to the knowledge it affords or is based upon—can be associated with probability distributions (of limited scope within the [0,1] interval), or with the infinite space of possibilities corresponding to the nature of open-ended systems. Zadeh [60] takes note of the fact that in Shannon’s data-transmission theory (misleadingly called “information” theory), information is equated with a reduction in entropy—and not with form (morphology). He understands this reduction to be the source of associating information with probability. But he also calls attention to possibilistic information, orthogonal to the probabilistic: one cannot be derived from the other. In his view (widely adopted in the scientific community), possibility refers to the distribution of meaning associated with a membership function. In more illustrative terms (suggested by Chin-Liang Chang), possibility corresponds to the answer to the question, “Can it happen?” (in respect to an event). Probability (here limited to frequency, which, as we have seen, is one view of it) would be the answer to “How often?” (Clearly, frequency, underlying probability, and the conceivable, as the expression of possibility, are not interdependent.) (Fig. 20)
Fig. 20 Computing with probabilities and possibilities, computing with perceptions


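The orthogonality of the two notions can be sketched in a few lines. The breakfast events, membership values, and frequencies below are invented, in the spirit of Zadeh's illustrations:

```python
def possibility(event, membership):
    """'Can it happen?' -- degree of possibility from a membership function."""
    return membership.get(event, 0.0)

def probability(event, frequencies):
    """'How often?' -- relative frequency from observed occurrences."""
    total = sum(frequencies.values())
    return frequencies.get(event, 0) / total if total else 0.0

# Eating four eggs for breakfast is fully possible, yet (in these
# invented observations) it rarely happens: the two measures diverge.
membership = {"four eggs": 1.0, "two eggs": 1.0}
observed = {"four eggs": 1, "two eggs": 99}
```

Both events are equally possible (membership 1.0), while their probabilities differ by two orders of magnitude; neither value can be computed from the other.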
One particular form of anticipative evaluation is computing with perceptions (Zadeh [61]). Anticipation from a psychological viewpoint is the result of processing perceptions, most of the time not in a sequential, but in a configurational manner (in parallel). For instance, facial expression is, as we suggested, an expression of anticipation (like/dislike, etc., expressed autonomously) based on perception. Soundscapes are yet another example (often of interest in virtual reality applications).
2.4.3 Integrated Computing Environment
In the area of mobile computation, the meeting of many computational processes, some digital, some analog (more precisely, some manner of signal processing), is the most significant aspect. Signal processing, neural network computation, telemetry, and algorithmic computation are seamlessly integrated. The aspect pertinent to anticipation is specifically this integration, including integration of the human as a part of the interactive process.
In this sense we need to distinguish between actions initiated purposely by the person (let’s say, taking a photo or capturing a video sequence) and actions triggered automatically by the behavior of the person carrying the device (sensing of emotional state, evaluating proximity, issuing orientation cues pertinent to navigation). It is not only the “a-machine” on board (the computer integrated in the “smartphone”), but the mobile sensing connected to various forms of machine learning based on neuronal networks and the richness of interactions facilitated, which make up more than an algorithmic machine. The execution of mobile applications using cloud resources qualifies this as an encompassing information processing environment. Taken independently, what is made available is a ubiquitous calculation environment. The various sensors and the data they generate are of little, if any, significance to anticipation. If they could afford a holistic description, that would be conducive to anticipation (Nadin [62]). In this ever-expanding calculation environment, we encounter context sensing, which neither the desktop machine nor any other computer provides, or considers relevant for their performance. Motion tracking, object recognition, interpretation of data, and the attempt to extract meaning—all part of the calculation environment—are conducive to a variety of inferences. What emerge are characteristics reminiscent of cognitive processes traditionally associated with thinking. This is an embodied interactive medium, not a black box for calculations transcending the immediate. The model of the future, still rudimentarily limited to predictable events, reflects an “awareness” of location, of weather, of some environmental conditions, of a person’s posture or position. 
A pragmatic dimension can be associated with the interpreted c- and o-machines: “What does the user want to do?”—find a theater, take a train, reserve a ticket, dictate a text, initiate a video conference, etc. Inferring usage (not far from guessing the user’s intentions) might still be rudimentary. Associated with learning and distribution of data over the cloud, inference allows for better guessing, forecast, and prediction, and becomes a component of the sui generis continuous planning process. The interconnectedness between the human and the device is extended to interconnectedness over the network, i.e., the cloud.
These are Internet devices that share data, knowledge, experiences. In traffic, for instance, this sharing results in collision avoidance.
From a technological perspective, what counts in this environment is the goal of reaching close-to-real-time requirements. For this, a number of methods are used: sampling (instead of reaching a holistic view, focus on what might be more important in the context), load-shedding (do less without compromising performance), sketching, aggregation, and the like. A new category of algorithms, dedicated to producing approximations and choosing granularity based on significance, is developed for facilitating the highest interaction at the lowest cost (in terms of computation).
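One of the sampling strategies mentioned, reservoir sampling, can be sketched as follows; it keeps a fixed-size uniform sample of a data stream in bounded memory (a generic sketch, not any particular device's implementation):

```python
import random

def reservoir_sample(stream, k, seed=1):
    """Maintain a uniform random sample of size k from a stream of
    unknown length, using O(k) memory -- one of the approximation
    strategies used to meet close-to-real-time requirements."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randrange(i + 1)     # replace with decreasing probability
            if j < k:
                sample[j] = item
    return sample
```

Sampling, load-shedding, and sketching all trade exactness for responsiveness in this way: the device processes a principled approximation instead of every sensor reading.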
It is quite possible that newer generations of such integrated devices will avoid the centralized model in favor of a distributed block chain process. Once issues of trust (of extreme interest in a context of vulnerability) are redefined among those who make up a network of reciprocal interest, anticipation and resilience will bind. The main reason to expend effort in dealing with a few aspects of this new level of computation is that it embodies the possibility of anticipatory computing. This is not to say that it is the only way to achieve anticipation performance.
In the evolution from portable wireless phones to what today is called a “smartphone,” these interactive mobile computing devices “learned” how to distinguish commuting, resting, driving, jogging, or sleeping, and even how to differentiate between the enthusiasm of scoring in a game and the angry reaction (game-related or not). A short list (incomplete, alas!) suggesting the level of technological performance will help in further seeing how integration of capabilities scales to levels comparable to those of anticipatory performance. From GPS connection (and thus access to various dynamic knowledge bases), to sensors (accelerometers, gyroscope, etc.), to communication protocols (facilitating WiFi, Bluetooth, near-field communication), everything is in place to locate the user, the device, the interconnected subjects, and the actions meaningful within the context. Multi-core processors, large memories (not the infinite Turing machine tape, but, by extension to the cloud, close to it), and high-performance input and output devices (cameras, microphones, touch screens, temperature-sensitive surfaces) work in concert to support the generation of a user profile that captures stable as well as changing aspects (identity and dynamic profile). Models connect raw sensed data in order to interface (the ambient interface) the subject in the world and the mobile station. Information harvested by a variety of sensors (multimodal level of sensing) is subject to disambiguation. It is exactly in respect to this procedure of reducing ambiguity that the mobile device distinguishes between the motorics of running, walking, climbing stairs, or doing something else (still within a limited scope). Example: attempts to deploy physical therapy based on the mobile device rely on this level. The habit component compounds “historical” data—useful when the power supply has to be protected from exhaustion. Actions performed on a routine basis do not have to be re-computed.
Similar strategies inform the use of the GPS facility (path tracking, but only as the device moves, i.e., the user is on a bike, in a car, on a train, etc.). Overall, the focus is on the minima (approximate representations). Instead of geo-location proper, location is inferred from data (as in the person’s calendar: restaurant, doctor, meeting, etc.). In some ways, the mobile device becomes an extension of the perception dimension of the living.
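The motoric disambiguation described above can be hinted at with a toy classifier over accelerometer magnitudes; the variance thresholds and activity labels are invented, not those of any real handset:

```python
def classify_motion(accel_magnitudes):
    """Toy motoric disambiguation from accelerometer magnitudes (m/s^2).
    Thresholds are illustrative only: low variance suggests rest,
    moderate variance walking, high variance running."""
    mean = sum(accel_magnitudes) / len(accel_magnitudes)
    var = sum((a - mean) ** 2 for a in accel_magnitudes) / len(accel_magnitudes)
    if var < 0.05:
        return "resting"
    if var < 2.0:
        return "walking"
    return "running"
```

Real devices fuse several sensor streams and learned models, but the principle is the same: reduce ambiguity in raw data down to a small set of motoric categories.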
Although there is nothing that this kind of aggregated computation has in common with quantum computation, the focus on minima is relevant. As we have seen, there is no need for excessive precision in the performance of most mobile applications. (This is why sampling, load-shedding, aggregation, etc. are used.) Nevertheless, the user taking advantage of the on-the-fly translation of a phone/video conversation easily makes up the missing details (where sketching is important), or corrects the sentence. Images are also subject to such corrections. The metaphors of quantum computation, in particular the non-locality aspect, quite appropriately describe interactive processes that no closed algorithmic computation could perform. It is at this level that the once-upon-a-time classic texts of Bennett [63, 64], Bennett and Landauer [65], and others make evident the limits of an understanding of computation within the Turing machine model embodied in physical devices. Truth be told, no one has come up with a reassessment of the new context for open forms of computation.
I am inclined to doubt a statement such as “A computation, whether it is performed by electronic machinery, on an abacus, or in a biological system such as the brain, is a physical process” [65, p. 58]. My position is that meaning is more important in the living than the outcome of any calculation is (should any take place). If there is computation within the living, chances are that it takes place differently from that on the abacus or in silicon. Moreover, I have doubts that the question “How much energy must be expended to perform a particular computation?” is very meaningful in respect to interactive computations. In an information processing environment, energy is not only that of the battery powering the device, but also that of the interactions. Interactions in the living, as Niels Bohr suggested, continuously change the system. The neat distinctions of physics (often applied to living processes despite the fact that they are only partially relevant when it comes to life) are simply inadequate. Anticipation expressed in action has a specific energy condition corresponding to the fact that entropy decreases as a result of activity (Elsasser [58]).
Shannon’s data transmission theory (improperly called “information theory”) describes the cumulative effect of noise upon data. If a word or an image is transmitted over a channel, its initial order is subject to change; that is, it loses its integrity; or, in Shannon’s view, its entropy increases. The Second Law of Thermodynamics (Boltzmann) contains a formalism (i.e., a mathematical description) similar to that of Shannon’s law. But having the same description does not make the two the same, nor does it establish a causal relation. If we consider the genetic code, we’d better acknowledge that genetic messages do not deteriorate. The Laws of Thermodynamics apply, but not Shannon’s law of data transmission. Information stability in processes of heredity, as Elsasser points out, makes the notion of information generation within the living necessary. This generated information guides predictive, as well as anticipatory, action.
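The entropy increase that noise induces can be checked with a small computation; a binary symmetric channel is assumed here as the noise model, and the source and flip probabilities are invented:

```python
from math import log2

def entropy(dist):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * log2(p) for p in dist if p > 0)

def through_bsc(p_one, flip):
    """Output distribution of a binary symmetric channel that flips
    each transmitted bit with probability `flip`."""
    q = p_one * (1 - flip) + (1 - p_one) * flip
    return [1 - q, q]

h_source = entropy([0.9, 0.1])               # skewed, low-entropy source
h_received = entropy(through_bsc(0.1, 0.2))  # noise pushes it toward uniform
```

The received distribution is closer to uniform than the source, so its entropy is higher: the transmitted message loses integrity, exactly the cumulative effect the theory describes.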
2.4.4 No Awareness of the Future
Together with the statement that computation can be performed in any physical system comes the understanding that computers are, in the final analysis, subject to the laws of physics. This applies to energy aspects—how much energy it takes to perform a computation—as well as to computer dynamics. Stepney [66, p. 674] delivers a description of how machines “iteratively compute from the inputs to determine the outputs.” Newton’s physics and Lagrange’s mathematics of the “principle of least action” are invoked, and the outcome of the analysis is relatively straightforward: imperative languages (in Watt’s sense [67], i.e., “based on commands that update variables held in storage”) support Newton-based computations; logic languages (implementing relations) are Lagrangian. As such, the distinction does not really lead to significant knowledge from which computer science could benefit. But there is in the argument one aspect of relevance to anticipation: the time aspect. The underlying physical embodiment is, of course, described through physical laws. This applies to conventional computation as well as to quantum computation. To achieve even a stationary condition, the computer would require awareness of the future (at least in terms of recognizing the end of execution time, if not the halting problem). Of course, no program is atemporal. For that matter, algorithms also introduce a time sequence (obviously different from that of programs).
When Turing modeled a calculation with pencil on paper on his abstract machine, his intention could not have been to ascertain a reductionist view. Rather, he focused on what it would take to transfer a limited human form of calculation—based on algorithms—to a machine. But outside that limited form remains a very large space of possibilities. Analog computation corresponds to another subset of calculations, with ad hoc rules reflecting a different heuristic. Neural networks, cellular automata, microfluidic processors, “wet computing,” optical computing, etc. cover other aspects, and sometimes suggest calculations that might take place in the living (membrane computing, for instance) without being subject to self-control, or even being reflected in one’s awareness. For all we know, neural network dynamics, partially reflected in neural network computation, might even explain awareness and consciousness, but are not subject to introspective inquiry.
With all this in mind, we can, again making reference to our understanding of the difference between expectation, prediction, forecasting, etc., address the relation between computation in a physical substratum and that in a living substratum. A computer can predict its own outcome, or it can even forecast it. Everything driven by probability, i.e., generalizing from the past to the future, is physically computable. A physical machine can predict the functioning of another machine; it can simulate it, too. As a physical entity, such a machine is subject to the laws of physics (descriptions of how things change over time). A machine cannot anticipate the outcome of its functioning. If it could, it would choose the future state according to a dynamics characteristic of the living (evolution), not to that of physical phenomena (the minima principle). A machine, as opposed to a living medium of calculations, is infinitely reducible to its parts (the structure of matter down to its finest details, some of which are not yet fully described). Nothing living is reducible to parts without giving up exactly the definitory characteristic: self-dynamics. Each part of a living entity is of a complexity similar to that of the entity from which it was reduced. Within the living, there is no identity as we know it from physics. All electrons are the same, but no two cells are the same.
The Law of Non-Identity: The living is free of identity.
The living describes the world and itself in awareness of the act of describing.
The living continuously remakes itself.
2.4.5 The Mobile Paradigm and Anticipatory Computing
But let’s continue with more details of the mobile paradigm and the latter’s relevance to anticipatory computing. The first aspect to consider is the integration of a variety of sensors from which data supporting rich interactions originate (Fig. 21).
Fig. 21 Sensor integration with the purpose of facilitating rich interactions


Distinct levels of processing are dedicated to logical inferences (while driving, one is far from the desktop; or, while jogging, one is not in the office, unless walking on a treadmill) with the purpose of minimizing processing. Technical details—the physics, so to say—are important, although for our concerns the embodied nature of interaction between user and device is much more relevant. Anticipation is expressed in action pertinent to change (adapt or avoid are specific actions that everyone is aware of). It seems trivial that under stress anticipation is affected. It is less trivial to detect the degree and the level of stress from motoric expression (abrupt moves, for instance) or from speech data. Still, a utility such as StressSense delivers useful information, which is further associated with blood pressure, heart rhythm, possibly EMG; what results can assist the individual in mitigating danger. The spelling out of specific procedures—such as Gaussian Mixture Models (GMM) for distinguishing between stressed and neutral pitch—is probably of interest to those technically versed, but less so for the idea we discuss.
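A drastically simplified two-Gaussian classifier suggests the flavor of such a procedure; the pitch means and variances below are invented for illustration, not StressSense parameters:

```python
from math import exp, pi, sqrt

def gauss_pdf(x, mu, sigma):
    """Density of a univariate Gaussian at x."""
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def classify_pitch(pitch_hz):
    """Toy two-class model for 'neutral' vs. 'stressed' speech pitch.
    Real GMMs mix several Gaussians per class and use many features;
    here one invented Gaussian per class conveys the principle."""
    p_neutral = gauss_pdf(pitch_hz, mu=120.0, sigma=20.0)
    p_stressed = gauss_pdf(pitch_hz, mu=180.0, sigma=30.0)
    return "stressed" if p_stressed > p_neutral else "neutral"
```

The classifier answers with whichever class makes the observed pitch more likely, which is all a likelihood-based detector of stress ultimately does.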
El Kaliouby (whose work was mentioned previously) developed a similar facility for reading facial expression. In doing so, she facilitates the anticipatory dimension of emotions to the degree that this facility makes available information on attention—the most coveted currency in the world of computer-supported interactions. During a conversation we had (at SIGGRAPH 2010, Boston, when she was just starting her activity at MIT), she realized that MindReader—her program at the time—was merely making predictions under the guidance of a Bayesian model of probability inferences. Since that time, at Affectiva, her focus has been more and more on associating emotional states and future choices. It is easy to see her system integrated in mobile devices. Important is the realization that the description of physical processes (cause-and-effect sequences) and that of the living process, with its anticipatory characteristics, fuse into one effective model. This is a dynamic model, subject to further change as learning takes place and adaptive features come into play.
In the physical realm, data determines the process (Landauer [68]). For instance, in machine learning, the structure of classifiers—simple, layered, complicated—is partially relevant. What counts is the training data, because once it is identified as information pertinent to a certain action, it will guide the performance. However, the curse of dimensionality does not spare mobile computing. Data sets scale exponentially with the expectation of more features. Many models excel in the number of features exactly because their designers never understood that the living, as opposed to the physical, is rather represented by sparse, not big, data. This is the result of the fact that living processes are holistic (Chaitin [69]).
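The exponential scaling behind the curse of dimensionality is easy to make explicit; the grid-coverage view below is one standard way to quantify it (function name and resolution are illustrative):

```python
def grid_cells(n_features, bins_per_feature=10):
    """Number of cells needed to cover a feature space at a fixed
    resolution: it grows exponentially with the number of features,
    so data requirements explode as features are added."""
    return bins_per_feature ** n_features
```

With ten bins per axis, two features need 100 cells, but six features already need a million; each added feature multiplies the data demand, which is why sparse data defeats feature-hungry models.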
At this time in the evolution of computation, the focus is changing from data processing to proving the thesis that all behavior, of physical entities and of organisms (the living), is either the outcome of calculations or can be described through calculations. This is no longer the age of human computers or of computers calculating the position of stars, or helping the military to hit targets with their missiles. Routine computation (ledgers, databases, and the like) is complemented by intelligent control procedures. Self-driving cars, boats, and airplanes come after the smart rockets (and everything else that the military commissioned to scientists). It is easy to imagine that the deep Q-network will soon give way to even higher-performing means and methods that outperform not only the algorithms of games, but also those of the spectacular intelligent weapons.
The Law of Outperforming Algorithms: For each algorithm, there is an alternative that will outperform it.
All it takes is more data, higher computer performance, and improved methods for extracting knowledge from data.
Thesis 3: Anticipatory computation implies the realization of necessary data (the minima principle).
Working only on necessary data (and no more) gives anticipatory computation an edge. It does not depend on technology (e.g., more memory, faster cycles) as does algorithmic computation.
Corollary: Predictive computation, as a hybrid of algorithmic and anticipatory computation, entails the integration of computation and learning.
Human level control is achieved not by outperforming humans playing algorithmic games, but by competing with humans in conceiving games driven by anticipation. Human-like understanding of questions in natural language is relevant to language at a certain moment in time (synchronic perspective) but lacks language dynamics (diachronic perspective).

2.4.6 Community Similarity Networks: How Does the Block Chain Model Scale up?

Without the intention of exhausting the subject, I will discuss a few issues pertinent to the anticipation potential of interactive mobile devices. The tendency is to scale from individuals to communities. Autonomous Decentralized Peer-to-Peer Telemetry (the ADEPT concept that IBM devised in partnership with Samsung) integrates proof-of-work (related to functioning) and proof-of-stake (related to reciprocal trust) in order to secure transactions. At this level, mobile computation (the smartphone in its many possible embodiments) becomes part of the ecology of billions of devices, some endowed with processing capabilities, others destined to help in the acquisition of significant data. Each device—phone, objects in the world, individuals, animals, etc.—can autonomously maintain itself. Devices signal operational problems to each other and retrieve software updates as these become available, or order some as needed. There is also a barter level for data (each party might want to know ahead of time “What’s behind the wall?”), or for energy (“Can I use some of your power?”). There is no central authority; therefore one of the major vulnerabilities of digital networks supporting algorithmic computation is eliminated. Contracts are issued as necessary: deliver supplies (e.g., for house cleaning, or for a 3D print job). The smartphone can automatically post the bid (“Who has the better price?”), but so could any other device on the Internet-of-Everything. Peer-to-Peer in this universe allows for the establishment of dynamic communities: they come into existence as necessary and cease to be when their reason for being no longer exists.
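The proof-of-work ingredient can be sketched minimally; this is a generic hash puzzle of the kind used in block chain designs, not the ADEPT protocol itself (the payload and difficulty are invented):

```python
import hashlib

def proof_of_work(data: bytes, difficulty: int = 2) -> int:
    """Minimal proof-of-work: find a nonce whose SHA-256 hash of
    data+nonce starts with `difficulty` leading zero hex digits.
    Finding the nonce is costly; verifying it is a single hash."""
    nonce = 0
    prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1
```

The asymmetry between search and verification is what lets peers accept a transaction without a central authority: any device can check the work in microseconds.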
So-called community similarity networks (CSN) associate users—individuals or anything else—who share in similar behavior. A large user base (such as the Turing o-machine would suggest) constitutes over time an ecosystem of devices. Fitbit™ (a digital armband) already generates data associated with physical activities (e.g., exercise, rest, diet). A variety of similar contraptions (a chip in the shoe, a heart monitor, hearing- or seeing-aid devices) also generates data. The Apple Watch™, or any other integrating artifact, scales way further, as a health monitoring station. To quote a very descriptive idea from one of the scholars I invited to the upcoming conference on Anticipation and Medicine (Delmenhorst, Germany, September 2015):

Real time physiological monitoring with reliable, long-term memory storage, via sophisticated “physiodiagnostics” devices could result in a future where diseases are diagnosed and treated before they even present detectable symptoms (McVittie and Katz [70]).

The sentence could be rewritten to apply to economic processes and transactions, to political life, to art, to education. The emphasis is on before, characterizing anticipation. Of course, understanding language would be a prerequisite. (As already mentioned, the AI group at Facebook [6] is trying to achieve exactly this goal.)
On this note I would argue with Pejovic and Musolesi [71] that neither MindMeld™ (enhancing online video conferencing), nor Google Now™, nor Microsoft’s Cortana™ (providing functionality without the user asking for it) justifies the qualifier anticipatory. Nevertheless, I am pretty encouraged by their project (anticipatory mobile dBCI, i.e., behavior change interventions), not because my dialog with them is reflected in the concept, but rather because they address current needs from an anticipatory perspective. Indeed, behavior change, informed by a “smart” device, is action, and anticipation is always expressed in action.
Just as a simple example: few realize that posture (affecting health in many ways) depends a lot on respiration. Upon inspiration (breathing in), the torso is deflected backward, and the pelvis forward. It is the other way around during expiration (breathing out). Anticipation is at work in maintaining proper posture as an integrative process. Behavior change interventions could become effective if this understanding is shared with the individual assisted by integrated mobile device facilities—not another app (there are too many already), but rather a dialog facility.
I would hope that similar projects could be started for the domains mentioned above (economy, social and political life, education, art, etc.). Indeed, instead of reactive algorithmic remedies to crises (stock market crash, bursting of economic bubbles, inadequate educational policies, ill-advised social policies, etc.), we could test anticipatory ideas embodied in new forms of computation, such as those described so far. The progress in predictive computation (confusingly branded as anticipatory) is a promising start.
2.4.7 Robots Embody Predictive Computation (and Even Anticipatory Features)
Anticipatory computation conjures up the realm of science fiction. However, neither prediction nor anticipation implies prescience or psychic powers. The premise of predictive or anticipatory performance is the perception of reality. Data about it, acquired through sensors, as well as generated within the subject, drive the predictive effort or inform anticipatory action couched in complexity. Specifically: complexity corresponds to the variety and intensity of forms of interaction, not to the material or structural characteristics of the system. The interaction of the mono-cell (the simplest form of the living) with the environment by far exceeds that of any kind of machine. This interactive potential explains the characteristics of the living.
Spectacular progress in the field of robotics comes close to what we can imagine when approaching the issue of anticipatory computation. If the origin of a word has any practical significance to our understanding of how it is used, then robot tells the story of machines supposed to work (robota, the Czech word for drudgery, inspired Karel Čapek to coin the term). Therefore, like the human being, they ought to have predictive capabilities: when you hit a nail with a hammer, your arm seems to know what will happen. From the many subjects of robotics, only predictive and anticipatory aspects, as they relate to computation, will interest us here.
The predictive abilities of robots pose major computational challenges. To the living, the world, in its incessant change, appears relatively stable. For the robot to adapt to a changing world, it needs a dynamic refresh of the environment in which it operates. Motor control relies on rich sensor feedback and feed-forward processes. Guiding a robot (towards a target) is not trivial, given the ambiguities involved: How far is the target? How fast is it moving? In which direction? What is relevant data and what is noise? Extremely varied sensory feedback—a requirement similar to that of the living—is a necessary, but not sufficient, condition. The living does not passively receive data; it also contributes predictive assessments—feed forward—ahead of sensor feedback. This is why robot designers provide a forward model together with feedback. Forward kinematics (predicting how the robot moves) and inverse kinematics (determining how to achieve a desired motion) are connected to path planning. The uncertainty of the real world has to be addressed predictively: advancing on a flat surface is different from moving while avoiding obstacles (Fig. 22).
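The pairing of forward and inverse kinematics can be made concrete with a small sketch. Everything here is an illustrative assumption (a two-link planar arm with invented link lengths), not the model of any actual robot; the point is that the forward model lets the controller predict where a commanded pose will put the end effector before any sensor feedback arrives.

```python
import math

# Hypothetical 2-link planar arm; link lengths in meters (illustrative values).
L1, L2 = 0.5, 0.3

def forward_kinematics(theta1, theta2):
    """Forward model: predict the end-effector position from joint angles."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse_kinematics(x, y):
    """Recover joint angles that reach (x, y), if the target is within reach."""
    d2 = x * x + y * y
    c2 = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)  # elbow-down solution
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return theta1, theta2

# Feed-forward: predict the outcome of a commanded pose ahead of feedback.
t1, t2 = inverse_kinematics(0.4, 0.4)
px, py = forward_kinematics(t1, t2)
# The round trip recovers the commanded target: (px, py) is (0.4, 0.4).
```

In practice such a forward model runs alongside the controller; its prediction is compared against the (delayed) sensor feedback, and the discrepancy drives correction.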

Fig. 22 Interaction is the main characteristic of robots. The robot displayed serves only as an illustration. It is a mobile manipulation robot, Momaro, designed to meet the requirements of the DARPA Robotics Challenge. It consists of an anthropomorphic upper body on a flexible hybrid mobile base. It was an entry from the Bonn University team NimbRo Rescue, qualified to participate in the DARPA Robotics Challenge taking place from June 5–6, 2015 at Fairplex, in Pomona, California


Intelligent decisions also require data from the environment. Therefore, sensors of all kinds are deployed (to adaptively control movement, but also to make choices). To make sense of the data, sensor fusion becomes critical: the multitude of sensory channels and the variety of data formats demand effective fusion procedures. As has been pointed out (Makin, Holmes & Ehrsson [72], Nadin [73]), the position of arms, legs, fingers, etc. corresponds to sensory information from skin, joints, muscles, tendons, eyes, ears, nostrils, tongue. Redundancy, which in other fields is considered a shortcoming (costly in terms of performance), helps eliminate errors due to inconsistencies or sensor data loss, and compensates for variances. The technology embodied in neuro-robots endowed with predictive and partial anticipatory properties (e.g., “Don’t perform an action if the outcome will be harmful”) integrates recurrent neural networks (RNN), multilayered networks, Kalman filters (for sensor fusion), and, most recently, deep learning architectures for distinguishing among images, sounds, etc., and for context awareness (Schilling and Cruse [74]). Robots require awareness of their state and of their surroundings in order to behave in a predictive manner. (The same holds for wearable computers.) Of course, robots can be integrated in the computational ecology of networks and thus made part of the Internet-of-Everything (IoE).
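The fusion of redundant sensors by Kalman filtering, mentioned above, can be illustrated in one dimension. The sensors and noise figures below are invented for the example; the point is that fusing two redundant estimates always lowers the uncertainty, which is why redundancy is an asset here rather than mere overhead.

```python
# Minimal sketch of sensor fusion by inverse-variance weighting -- the
# one-dimensional core of a Kalman filter's update step. Sensor names and
# noise figures are illustrative assumptions, not taken from any robot.

def fuse(est_a, var_a, est_b, var_b):
    """Fuse two redundant estimates of the same quantity.

    The fused variance is smaller than either input variance:
    redundancy reduces uncertainty.
    """
    k = var_a / (var_a + var_b)          # Kalman-style gain
    fused_est = est_a + k * (est_b - est_a)
    fused_var = (1 - k) * var_a
    return fused_est, fused_var

# A joint angle reported by an encoder and, redundantly, by vision.
encoder = (0.52, 0.0004)   # (estimate in radians, variance)
vision  = (0.50, 0.0025)
est, var = fuse(*encoder, *vision)
# The fused estimate stays close to the more reliable encoder,
# and its variance drops below both inputs.
```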
2.5 Computation as Utility
Based on the foundations of anticipatory systems, the following are necessary, but not sufficient, conditions for anticipatory computation.
• Self-organization (variable configurations)
• Multiplicity of outcome
• Learning: performance depends on the historic record
• Abductive logic
• Internal states that can affect themselves through recursive loops
• Generation of information corresponding to history, context, goal
• Operation in the non-deterministic realm
• Open-endedness
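Two of the conditions above (internal states that affect themselves through recursive loops, and performance depending on the historic record) can be illustrated with a deliberately minimal sketch. The agent below is entirely hypothetical, and no claim is made that it amounts to anticipation; it merely shows a response to identical stimuli that changes with accumulated history.

```python
# A toy agent whose response to the same stimulus changes with its history:
# the internal state feeds back into itself through a recursive loop, so
# performance depends on the historic record. Purely illustrative.

class HistoryDependentAgent:
    def __init__(self, learning_rate=0.5):
        self.expectation = 0.0   # internal state, updated recursively
        self.lr = learning_rate

    def observe(self, stimulus):
        # Surprise = deviation from what the accumulated history predicts.
        surprise = stimulus - self.expectation
        # The state feeds back into itself: the new expectation is a
        # function of the previous expectation and the current input.
        self.expectation += self.lr * surprise
        return surprise

agent = HistoryDependentAgent()
surprises = [agent.observe(1.0) for _ in range(5)]
# The identical stimulus elicits shrinking surprise:
# 1.0, 0.5, 0.25, 0.125, 0.0625
```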
In practical terms, anticipatory computing would have to be embodied (in effective agents, robots, artifacts, etc.) in order to be expressed in action. A possible configuration would have to integrate adaptive properties, an efficient expression of experience, and, most important, unlimited interaction modalities (integrating language, image, sound, and all possible means of representations of somato-sensory relevance) (Fig. 23).
Fig. 23 Adaptive dynamics, embodied experience, and rich interactivity are premises for anticipatory performance


In view of newly acquired awareness of decentralized interaction structures—i.e., pragmatic dimensions of computation—it can be expected that computation as a utility, not as an application (the ever-expanding domain of apps is rather telling of their limitations), would be part of the complex process of forming, expressing, and acting in anticipation. Achieving an adaptive open system is most important. Outperforming humans in playing closed-system games is not a performance to be scorned. But it is only a first step. The same can be said of conceiving and implementing a so-called “intelligent dialog agent” as a prerequisite for understanding natural language. It is not the language that is alive, but those who constitute themselves in language (Nadin [41]). Memory Networks might deliver within a closed discourse universe, but not in an open pragmatic context. Understanding what anticipation is could spare us wasted energy and talent, as well as the embarrassment of claims indicative of advancing ever deeper into a one-way street of false assumptions. Instead, we could make real progress in understanding where the journey should take us.
Cigna CompassTM is a trademark of the Cigna Corporation
MindMeldTM is a trademark of Expert Labs
FitbitTM is a trademark of Fitbit, Inc.
AppleWatchTM is a trademark of Apple, Inc.
1. Mitchell, M.: Is the universe a universal computer? Science 298(5591), 65–68 (2002)
2. Deutsch, D.: Quantum theory, the Church-Turing principle and the universal quantum computer. Proc. R. Soc. London A 400, 97–117 (1985)
3. Hawking, S.: Stephen Hawking warns artificial intelligence could end mankind. Interview with Rory Cellan-Jones, BBC. Accessed 2 Dec 2014
4. Kurzweil, R.: The Singularity is Near. Viking Press, New York (2005)
5. Shannon, C.: Interview: Father of the Electronic Information Age by Anthony Liversidge, Omni, August (1987)
6. Mnih, V., Kavukcuoglu, K., Silver, D., et al.: Human-level control through deep reinforcement learning. Nature 518, 529–533 (2015). Accessed 27 Feb 2015
7. Weston, J., Bordes, A., Chopra, S, Mikolov, T.: Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks (2015). Accessed 3 March 2015
8. Monfort, N.: Twisty Little Passages: An Approach to Interactive Fiction. MIT Press, Cambridge (2005)
9. Wolfram, S.: A New Kind of Science. Wolfram Media, Champaign (2002)
10. Zuse, K.: Rechnender Raum. Friedrich Vieweg & Sohn, Braunschweig. English: Calculating space, An MIT Technical Translation Project MAC, MIT, 1970. AZTEC School of Languages, Inc., Cambridge (1969)
11. Gödel, K.: Über formal unentscheidbare Sätze der Principia Mathematica und verwandte Systeme, Monatshefte für Mathematik und Physik. 38, 173–198 (1931). The first incompleteness theorem originally appeared as Theorem VI
12. Nadin, M.: G-Complexity quantum computation and anticipatory processes. Comput. Commun. Collab. 2(1), 16–34 (2014)
13. England, J.L.: Statistical physics of self-replication. J. Chem. Phys. 139, 12, 121293 (2013)
14. Carroll, L.: Through the Looking Glass. MacMillan, London (1871).
15. Newton, I.: Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy) (1687)
16. Windelband, W.: Aufsätze und Reden zur Philosophie und ihrer Geschichte, 5. Präludien, Tübingen (1915)
17. Montgomery, R.: Gauge theory of the falling cat. In: Enos, M.J. (ed.) Dynamics and Control of Mechanical Systems, American Mathematical Society, pp. 193–218 (1993).
18. Mehta, R.: Mathematics of the Falling Cat (2012).
19. Kane, T.R., Scher, M.: A dynamical explanation of the falling of the cat phenomenon. Int. J. Solids Struct. 5, 663–670 (Pergamon Press, London, 1969)
20. Heisenberg, W.: Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik [On the intuitive content of quantum-theoretical kinematics and mechanics]. Z. Phys. 43(3–4), 172–198 (1927)
21. Fletcher, P.C., Anderson, J.M., Shanks, D.R., Honey, R., Carpenter, T.A., Donovan, T., Papadakis, N., Bullmore, E.T.: Responses of human frontal cortex to surprising events are predicted by formal associative learning theory. Nat. Neurosci. 4(10), 1043–1048 (2001)
22. Nadin, M.: Anticipation in Knowledge Media Design (2nd edn). Knowledge Media Design (Stefan, P., (ed.)), pp. 71–90. Oldenburg, Berlin (2006)
23. Kahneman, D. and Tversky A.: On the reality of cognitive illusions. Psychol. Rev. 103(3), 582–591 (1996)
24. Delfabbro, P.: The stubborn logic of regular gamblers: obstacles and dilemmas in cognitive gambling research. J. Gambl. Stud. 20(1), 1–21 (2004)
25. Gigerenzer, G., Muir Gray, J.A. (eds.): Better Doctors, Better Patients, Better Decisions: Envisioning Health Care 2020. MIT Press, Cambridge, (2011)
26. Sedlmeier, P., Gigerenzer, G.: Do studies of statistical power have an effect of the power of studies? Psychol. Bull. 105(2), 309–316 (1989)
27. Tversky, A., Gilovich, T.: The “Hot Hand”: Statistical reality or cognitive illusion? Chance, vol. 2, no. 4, pp. 31–34. Taylor & Francis, London (1989, published online 2012)
28. Miller, J.B., Sanjurjo, A.: A Cold Shower for the Hot Hand Fallacy. Social Science Research Network (Working Paper No. 518) (2014)
29. Hertwig, R., Ortmann, A.:The Cognitive Illusion Controversy: A Methodological Debate in Disguise that Matters to Economists. Experimental Business Research, pp. 113–130. Springer, New York (2005)
30. Tsetsos, K., Chater, N., Usher, M.: Salience driven value integration explains decision biases and preference reversal. Proc. Natl. Acad. Sci. U.S.A. 109(24), 9659–9664 (2012)
31. Bernoulli, J.: Ars Conjectandi (The Art of Conjecturing, in Latin) (1713)
32. von Glasersfeld, E.: Radical Constructivism: A Way of Knowing and Learning. The Falmer Press, London/Washington (1995)
33. Salimpoor, V., Benovoy, M., Larcher, K., Dagher, A., Zatorre, R.J.: Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nat. Neurosci. 14, 257–262 (2011)
34. Ekman, P., Rosenberg, E.L. (eds.): What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS). Oxford University Press, New York (1997)
35. Ekman, P.: Emotions Revealed. Times Books, New York (2003)
36. Gladwell, M.: The Naked Face. Can experts really read your thoughts? The New Yorker, pp. 38–49 (2002)
37. Arsham, H.: Time Series Analysis and Forecasting Techniques (2002).
38. Dayan, P., Kakade, S., Montague, P.R.: Learning and selective attention. Nat. Neurosci. 3, 1218–1223 (2000)
39. Nicolelis, M.A.: Actions from thoughts. Nature 409(6818), 403–407 (2001)
40. Nicolelis, M.A.L., Lebedev, M.A.: Principles of neural ensemble physiology underlying the operation of brain–machine interfaces. Nat. Rev. Neurosci. 10(7), 530–540 (2009)
41. Nadin, M.: The Civilization of Illiteracy. Dresden University Press, Dresden (1997)
42. Nadin, M.: Antecapere ergo sum. What price knowledge? AI & Society 25th Anniversary Volume: A Faustian Exchange: What is To Be Human in the Era of Ubiquitous Technology, vol. 28, pp. 39–50. (Springer, London, 2013)
43. Nadin, M.: Anticipation and risk—from the inverse problem to reverse computation. In: Nadin, M. (ed.) Risk and Decision Analysis vol. 1, no. 2, pp. 113–139. IOS Press, Amsterdam (2009)
44. Nadin. M.: Anticipatory computing: from a high-level theory to hybrid computing implementations. Int. J. Appl. Res. 1(1), 1–37 (2010) (Image based on earlier work: Hybrid Anticipatory Control Mechanisms (2002))
45. Taleb, N.N.: Learning to love volatility. Wall Street Journal (2012)
46. Wigner, E.: The unreasonable effectiveness of mathematics in the natural sciences. Richard courant lecture in mathematical sciences delivered at New York University, May 11, 1959. Commun. Pure Appl. Math. 13, 1–14 (1960)
47. Gelfand, I.M., Tsetlin, M.L.: Mathematical Modeling of Mechanisms of the Central Nervous System. Models of the Structural-Functional Organization of Certain Biological Systems (I.M. Gelfand, Editor; C.R. Beard, Trans.) MIT Press, Cambridge (1971)
48. Velupillai, K.V.: The Unreasonable Ineffectiveness of Mathematics in Economics. Working Paper No. 80 (2004). [(Retrieved February 28, 2015) and in Camb. J. Econ. (Spec. Issue Econ. Future), 29(6), 849–872 (2005)]
49. Hubel, D.H., Wiesel, T.N.: Receptive fields of single neurons in the cat’s striate cortex. J. Physiol. 165(3), 559–568 (1963)
50. von Foerster, H., Poerksen, B.: Understanding Systems: Conversations on Epistemology and Ethics. Kluwer, New York (2002)
51. Wittgenstein, L.: Remarks on the Philosophy of Psychology, Vol. 1, G. E. M. Anscombe and G. H. von Wright (Eds.), G. E. M. Anscombe (Trans.); Vol. 2, G. H. von Wright and H. Nyman (Eds.), C. G. Luckhardt and M. A. E. Aue (Trans.). Blackwell, Oxford. (1980) Cited in J. Floyd: Wittgenstein’s Diagonal Argument: A Variation on Cantor and Turing.
52. Turing, A.M.: Intelligent machinery [technical report]. Teddington: National Physical Laboratory (1948) cf. (Copeland, B.J., Ed.), The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life Plus the Secrets of Enigma. Oxford University Press, Oxford (2004)
53. Turing, A.M.: Programmers’ Handbook for Manchester electronic computer. Mark II (1951)
54. Gödel, K.: Some remarks on the undecidability results, Collected Works II, pp. 305–306. Oxford University Press, Oxford (1972)
55. Turing, A.M.: The ACE Report. A.M. In: Carpenter, B.E., Doran, R.W. (eds.) Turing’s Ace Report of 1946 and Other Papers. MIT Press, Cambridge (1986)
56. Siegelmann, H.T., Sontag, E.D.: On the computational power of neural nets. J. Comput. Syst. Sci. 50(1), 132–150 (1995)
57. Eberbach, E., Goldin, D., Wegner, P.: Turing’s ideas and models of computation. In: Teuscher, C. (ed.) Alan Turing. Life and Legacy of a Great Thinker. Springer, Berlin/Heidelberg (2004)
58. Elsasser, W.M.: Theory of quantum-mechanical description. Proc. Natl. Acad. Sci. 59(3), 738–744 (1968)
59. Latash, M.L.: Synergy. Oxford University Press, New York (2008)
60. Zadeh, L.A.: Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets Syst. 1, 3–28 (1978). (Elsevier, Amsterdam)
61. Zadeh, L.A.: Foreword. Anticipation—the end is where we start from (M. Nadin). Lars Müller Verlag (2003)
62. Nadin, M.: Quantifying Anticipatory Characteristics. The AnticipationScope and the Anticipatory Profile. In: Iantovics, B., Kountchev, R. (eds.) Advanced Intelligent Computational Technologies and Decision Support Systems, Studies in Computational Intelligence, vol. 486, pp. 143–160. Springer, New York/London/Heidelberg (2013)
63. Bennett, C.H.: Undecidable dynamics. Nature 346, 606–607 (1990)
64. Bennett, C.H.: Universal computation and physical dynamics. Physica D Nonlinear Phenomena 86(1–2), 268–273 (1995). (Elsevier, Amsterdam)
65. Bennett, C.H., Landauer, R.: The fundamental physical limits of computation. Sci. Am. 253 (1), 48–56 (1985)
66. Stepney, S.: Local and global models of physics and computation. Int. J. Gen Syst 43(7), 673–681 (2014)
67. Watt, D.A.: Programming Language Design Concepts. Wiley, Hoboken (2004)
68. Landauer, R.: The physical nature of information. Phys. Lett. A 217, 188–193 (1996)
69. Chaitin, G.: What is life? ACM SICACT News 4(January), 12–18 (1970)
70. MacVittie, K., Katz, E.: Biochemical flip-flop memory systems: essential additions to autonomous biocomputing and biosensing systems. Int. J. Gen Syst 43(7–8), 722–739 (2014)
71. Pejovic, V., Musolesi, M.: Anticipatory mobile computing: a survey of the state of the art and research challenges. ACM Comput. Surv. V(N) (Association for Computing Machinery, New York, 2015)
72. Makin, T.R., Holmes, N.P., Ehrsson, H.H.: On the other hand: dummy hands and peripersonal space. Behav. Brain Res. 191, 1–10 (2008)
73. Nadin, M.: Variability by another name: “repetition without repetition.” In: Anticipation: Learning from the Past. The Russian/Soviet Contributions to the Science of Anticipation, pp. 329–336. Springer (Cognitive Systems Monographs), Cham (2015)
74. Schilling, M., Cruse, H.: What’s next: recruitment of a grounded predictive body model for planning a robot’s actions. Frontiers Psychol 3, 1–19 (2013)
