
Semiotic Engineering – An Opportunity or an Opportunity Missed?

© Springer International Publishing AG 2017
S.D.J. Barbosa, K. Breitman (eds.), Conversations Around Semiotic Engineering, DOI 10.1007/978-3-319-56291-9_6

Abstract

Semiotics has to be understood as the conceptual undergirding of any form of design and engineering. It does not provide operational means; rather, it demands an understanding of design and engineering aspects in a broad sense. Without the underlying semiotics, design and engineering remain mere problem-solving activities, and therefore fall short of achieving their formative function. Through design, interaction languages pertinent to engineering contribute to shaping culture. In the end, semiotics contributes to making such languages available. The world before the computer and the world after the computer, including the ubiquitous smartphone, are not only technologically different, but also essentially culturally different. With the smartphone we progress from mere data processing to interactive computation based on machine learning. The hybrid of human and interactive computation has anticipatory characteristics.

Introduction

With the advent of the digital computer—in particular, the embodiment of the Turing algorithmic machine in the von Neumann architecture—the notion of human-machine interaction took on a new dimension. The transition from the physical knob to the virtual (i.e., the interactive visual representation) was different from that experienced during former changes of interaction modalities. The lever did not need an interface: it was the extension of the human arm. Once it morphed into the pulley, it lost some of its immediateness and transparency: you needed to imagine an arrow representing the place where force would be applied in order to lift a weight. In the progression of machines, the language of interaction became more elaborate. The clock—at one point the “poster image” for the machine—had a semiotic interface between the gears and pulleys and the user. This interface translated the gravitation-based measurement of intervals. It indexed how long it took a cause (gravitational attraction) to have an effect (the fall of a body, dead or alive). Semiotics, in the form of the clock face, created the illusion of time, much as today it creates the illusions of a variety of modeling and simulation applications. With the computer, the clock became a synchronizing mechanism. Of course, the digital display of time is quite different from that of the famous clocks (in the Old Town Square of Prague or the Rathaus Glockenspiel in Munich), associated with images of stars moving in cosmic space. You drive, and the smartphone knows where you are through your coordinates in time and space—and what you are supposed to do and what not. During the night you’d better have your headlights on. Handwriting and driving are incompatible (day and night). And here you have the deus ex machina taking care of you. The machine talks to you.

These preliminary illustrative remarks are intended as the background for discussing the extent to which semiotics, as we know it from de Saussure, Hjelmslev, Peirce et al., is significant and effective—or whether we need a better semiotics, adapted to the dynamics of human-machine interaction in the age of ever faster computations. More and richer interactions elicit better, i.e., adequate, semiotics. Most of the time, this is an implicit semiotics, to which designers and engineers have contributed—but not semioticians. Clarisse Sieckenius de Souza, who has remained dedicated to semiotic engineering, and who has been celebrated for her achievements, would argue that I am, if not wrong, at least not well informed. We always parted ways in our understanding of semiotics (see Nadin 2011), but not in the realization that semiotics is essential when approaching interactions with machines. For me, this opportunity to celebrate one of our own who practices semiotic engineering is not for restating incompatibilities, but rather for highlighting how her views, anchored in the semiotic system of natural language, eventually succeeded.

Believe it or not, ontology engineering—a field of extreme significance in the new phase of computation that recently began—is the victory à rebours (against the grain, we would say) of de Souza’s semiotic engineering. Indeed, in our days, a dedicated group of computer scientists is practicing ontology engineering in order to “open access to meaning” (in a way of speaking, of course) to machines meant to do more than data processing. Like Sieckenius de Souza, the ontology engineers operate in the language domain; and what makes their effort so impressive is the algorithmic computation of the activity of building digital encyclopedias (some for medical applications, some for energy management, some for financial transactions, etc.). The focus is on knowledge processing, often understood by them as independent of our many forms of representation (words, sounds, images, media aggregates, etc.). For them an image is what we see. On the smartphone, an image is processed in the knowledge domain of visual expression.

Machines, among them algorithmic computers, operate at the syntactic level. Therefore, in order to tell them what we mean when we program them, using artificial languages, to execute a certain command, ontology engineers reach back to definitions that are sui generis ontologies, i.e., they describe the existent. They work, for instance, within predicate logic, which of course is not the same as the logic of vagueness within which Peirce defined his semiotics. The dedicated effort of ontology engineers led to Siri (a personal assistant), Cortana, S-Voice, Google Now, and the like (too many to mention them all, some better than others). Voice Attack freed the hands of gamers looking for game immersiveness (the voice initiates keystrokes). None of these speech recognition utilities, interfaced with applications (e.g., the e-mail program or the weather prediction app), understands what it means when we ask a smartphone “to do” something. They translate (via databases, for example) what they do not understand, checking against accepted definitions (actual uses of the word), and utilizing computable functions for this purpose. This particular form of semiotic engineering is, at closer examination, as primitive as programming at the machine language level—but very effective. To argue with success is at best comical. Of course, if ontology engineers understood semiotics—John Sowa (2000) is trying hard to convince them—the entire effort would be more successful by many orders of magnitude. In line with this observation, one can state that the smartphone is really a “dumb phone,” to which we attach, via ontology engineering, powerful semiotic functions facilitated by machine learning—even if those who do so are, most of the time, ignorant of semiotics. (Pat Hayes and his followers come to mind.) But let us not get to the end of the track before running a short marathon to prove these statements—as do those who never run (the commentators and critics) but are always ahead of everyone, the first to explain how they would have won! (As a practitioner of semiotics, Sieckenius de Souza would smile at this.) Having concretely applied semiotics (working on Apple’s Lisa computer, and for IBM, Siemens, and DaimlerChrysler on applications different from smartphones), I am, as much as she is, suspicious of those who claim credit for using semiotic terminology without really understanding semiotics. This is a good place to sketch a possible genuine semiotic application as it pertains to the ubiquitous cellular phone elevated to the rank of smart device.
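Before turning to that, a minimal sketch of the “translation without understanding” just described: an utterance is checked against accepted definitions (a hand-built ontology) and dispatched to a computable function. The toy ontology and all names here are hypothetical assumptions, not any vendor’s implementation.

```python
# A hypothetical assistant that "translates what it does not understand":
# word overlap with accepted definitions triggers a computable function.
ONTOLOGY = {
    "weather": {"synonyms": {"weather", "forecast", "rain", "temperature"},
                "action": lambda: "Fetching the forecast for your location..."},
    "email":   {"synonyms": {"email", "mail", "message", "inbox"},
                "action": lambda: "Opening the e-mail program..."},
}

def interpret(utterance: str) -> str:
    """Purely syntactic matching: no meaning is 'understood' anywhere."""
    words = set(utterance.lower().split())
    for concept, entry in ONTOLOGY.items():
        if words & entry["synonyms"]:    # set overlap, nothing more
            return entry["action"]()
    return "Sorry, I did not get that."  # no accepted definition matched

print(interpret("Will it rain tomorrow?"))  # Fetching the forecast...
print(interpret("Read my mail"))            # Opening the e-mail program...
```

Primitive, as noted—yet effective, which is exactly the point.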

From the Analytical to the Generative Level

Semiotics can be a powerful analytical tool. It has already been deployed in the evaluation of the interaction between a user and a machine, as well as in the evaluation of the interaction of machines. Nokia—for those who remember the innovative company from Finland—“knew” the value of semiotics. It used to conceive, design, and produce its mobile phones—75% of them were manufactured in its own factories. Other companies used semiotics in the evaluation of process interfaces within a machine. In the particular case of computers, which are conceptual artifacts rather than physical machines, a variety of means can be deployed in order to facilitate the interactions between the human being and the particular digital device. Way back in the history of computation, means and methods pertinent to interaction with other machines were taken over and tested. A whole lot of knobs, sliders, and dials were used in the first computers to help the user “tell” the machine what was expected. (Initially, many military applications, in which targeting implied fine tuning, dominated.) In our days, Engelbart’s “mouse”—nothing more than an interrupt device—in a variety of embodiments, is still present on the desktop, with which users interact via a language of visual commands. But other than that, the computer was emancipated from methods of physical control and tuning. Sutherland’s pointer (the one of the Sketchpad of 1963, dedicated to engineering tasks; Sutherland 1963) and the touchscreen (with a long history, going back to the 1970s) made anything that could be displayed (a pixel, an image) a potential “inter-face.” The graphical user interface (famously known as GUI) replaced line commands. The history is sufficiently well known (Shneiderman 1983) so as not to be repeated here. However, windows, icons, menus, pointer—what became the WIMP paradigm, especially in personal computers—cannot be ignored when referring to the new devices that dominate our time. WIMP-based interactions use a virtual input device to control the visual space of commands, almost all compiled in menus. Actions can be performed even through gestures. A window manager, i.e., a semiotic interface, facilitates the interactions between windows and applications. Mobile devices, such as personal digital assistants (PDAs) and smartphones, use the WIMP elements with their own particular metaphors. Constraints in space, and especially the availability of sensors as input devices, led to a whole lot of new interaction techniques, labelled post-WIMP user interfaces. Touchscreen-based operating systems such as Apple’s iOS (iPhone) and Android use the post-WIMP class of GUIs. They support styles of interaction using more than one finger in contact with a display.

With or without semioticians (most of the time without), the computer morphed from forms of rudimentary interaction via interfaces to being a semiotic entity—representations of objects and actions made into effective forms of interacting with the machine. The smartphone is a rather elaborate artifact, in which the embedded sensors constitute interfaces to the world in which we live and work. Through sensors, users are positioned in space and referenced to a timeline. Moreover, in a mapped world—including its history (if we consider the pretty astonishing EarthTimeLapse™ just released by Google)—we realize how the context changes. Even further, in a world of real stores, restaurants, schools, self-driving cars, etc., the interactive machine (no longer an algorithmic device!) becomes the locus of many transactions. Through sensors embedded in the wearable device, an individual is identified as a human in action (walking, running, playing games, cooking, typing, driving, and much more). In the digital world, the user is a “simile” of him/herself, empowered to perform certain actions, but at the same time “incarcerated” in the world of competing opportunities. (The golden cage of the consumption economy!) If we take a strict sign-based semiotic perspective, the iPhone, or the Galaxy, or any competing brand (Blackberry, LG, Nokia, Huawei, etc.) can be seen as a sign—actually a supersign: a semiotic aggregate of a very large number of interrelated signs (Fig. 1).

Fig. 1 Smartphones—a large variety of semiotic applications as means of identification and interaction with the user

In what follows, we will provide details of a possible analytic approach based on semiotics, independent of the smartphone manufacturer.

Semiotics Applied to HCI – An Evaluation Tool

We already stated that within a sign-focused semiotics, the analytic dimension dominates. In other words, this is semiotics applied after the design and implementation, not as a guide to it.

HCI Is an Example of Peircean Semiotics at Work

The sign is the unity of what is represented (the object), how it is represented (the representamen), and the open-ended process of interpretation (the interpretant). Let us now examine the signs involved in HCI. In other words, how do we understand design interaction informed by semiotic awareness? Years back (Nadin 1988), when I introduced semiotics to those seeking some help in addressing the issue of human-computer interaction (at the tutorial Interface Design: A Semiotic Paradigm, Applications on the Leading Edge, 4th Annual Pacific Northwest Computer Graphics Conference, University of Oregon, Eugene, OR, October 27–29, 1985—the first on record regarding the subject), the take was, although methodic, intuitive. The sign definition, adopted from C.S. Peirce, guided the entire approach: A sign is something, A, which brings something, B, its interpretant sign determined or created by it, into the same sort of correspondence with something, C, its object, as that in which itself stands to C (Peirce 1902).

I do not wish to rehash the example I used at that time (working as a consultant for Apple, focused on the machine called Lisa). Instead, I shall take the smartphone as the new “patient” seeking advice from a “doctor specialized in HCI.”

In human-computer interaction, we can consider the smartphone as the object (what is represented) and the operating system (choices are limited to Apple’s proprietary OS, Google’s Android, BlackBerry, and Microsoft Windows Phone) as the representamen. The desktop metaphor—appropriated from the semiotics-inspired, icon-driven Xerox Star machine of 1981 for the Lisa (1984), then for the Mac (and from there for every other machine)—is an example of a representamen. It stood for the office (files, folders, file cabinets, garbage can, etc.); and it stands today for the “housing” of applications (called apps, for the sake of abbreviation), ranging from text and data processing, to telecommunication (what used to be the function of a telephone), taking pictures and making videos, finding a location, calling up a service provider (such as Uber or Lyft). The computer had a limited number of programs that performed desired functions. The smartphone is a social housing facility under siege: everyone has a new app to offer—for banking, reading, interactive newspapers and magazines, music listening, movie watching, game playing, etc. Finally, the most important aspect is the deployment of apps (almost 80% of them are never or rarely used). A specific use—check blood pressure or cardiac rhythm, play a game, get a wake-up call, etc.—is a possible interpretation. In the act of doing something, which is the process of interpretation, the sign comes to life, acquiring meaning. Speech recognition is such an interpretation. Obviously, in such an interpretation (one application from many), a lot is left out—for instance, how speech commands turn into operations, how they call up associated programs, how learning (patterns of activity) takes place. The smartphone is not a mere computer with more functions and components (sensors, for example) than the desktop machine in the office. It became a smart typewriter that “understands” speech and drives a text-processing program with associated layout functions and self-correction utilities (spelling, grammar, linking, etc.). Taking a picture (selfie or not) is also an interpretation from a very large number of possibilities associated with a digital camera, to which video and sound recording belong as well. Editing on the fly is also possible, as are various encodings and large file sharing.

The same smartphone offers its interpretation as a data-processing device, as a database management tool, and as a multimedia console. It can function as a game console or as a medical evaluation platform (communicating with the physician’s office), based on the specific apps a user interacts with. By the same token, the object can be an application: Photoshop (the metaphor of the darkroom carried over to the digital realm), database, text processing, visualization, e-commerce, among others. The representamen is the “representation” of the “language” one must command in order to achieve the desired performance. And the interpretation is the performance actually achieved. Sign processes—also called semioses (singular: semiosis) in the jargon of semiotics—are nothing other than the coming to life of the manufactured piece of hardware, enticing more and more users, facilitating richer and more creative interactions. Their use is semiotics at work—even if those who designed the device, those who made it, those who market it, those who provide the infrastructure for their networking, etc., never heard of semiotics (as is usually the case). As designed artifacts, smartphones are the output of an activity—to design—which means to express in signs (de-sign, as in visual representations of such a device). Ontology engineers actually focus on designing tasks at the level of conceiving new meaningful entities.
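To make the triad concrete, here is a minimal sketch of the sign as a data structure, populated with the smartphone mappings just described. It is illustrative only—an assumption about how one might encode the triad, not a formalization Peirce proposed.

```python
from dataclasses import dataclass

@dataclass
class Sign:
    obj: str            # what is represented (the object)
    representamen: str  # how it is represented
    interpretant: str   # the open-ended process of interpretation

# One semiosis among the many the same device supports:
camera = Sign(
    obj="smartphone (a computer endowed with sensors)",
    representamen="camera app icon on the touchscreen",
    interpretant="taking a picture, editing on the fly, sharing the file",
)

# The same object under another representamen yields another semiosis:
dictation = Sign(
    obj="smartphone (a computer endowed with sensors)",
    representamen="microphone icon / speech-recognition utility",
    interpretant="dictating a letter, with self-correction utilities",
)
print(camera, dictation, sep="\n")
```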

One Sign – Three Functions

The unity of what is represented, how or through what that representation takes place (medium), and interpretation (the operation desired or actually performed, e.g., I want to process an image, write a letter, buy a car online, etc.) constitutes the sign. The three functions of the sign—representation, communication, and signification—can be understood only together (i.e., as an ensemble) (Fig. 2).

Fig. 2 The Smartphone as a sign: three semiotic functions define its future operations

If we choose to consider the smartphone as a sign, it will stand for the design through which it eventually became the smartphone—iPhone, Galaxy, Blackberry, etc. There is a lot of high technology to account for, but also a lot of interaction design. In this representation, the smartphone’s aesthetic qualities are part of the semiosis. Remember when Apple sued Samsung for theft of aesthetic identity (the round corners, for example)? In reality, Apple is a marketing company: 189 suppliers, working at 789 locations, none owned by Apple, translate design specs into what became the success story known as the iPhone. Proprietary refers to uniqueness, to protected means that give a product its edge over others. Smartphone manufacturers often use similar chips (such as those Samsung sells to Apple suppliers) and sensors, but almost never embody the same interaction specifications. Each generates its own space of potential meanings (Fig. 3).

Fig. 3 The “edge” of a competing smartphone. The Supreme Court of the USA involved in the dispute over claims of design and uniqueness (translated into money, which is a poor definition of their meaning)

Representation

A caveat: An unfortunate, simplified model of Peirce’s semiotics (due mainly to Charles Morris, but since then adopted by many pseudo-semioticians) popularized three forms of representation—iconic, indexical, symbolic—as three different types of signs (Fig. 4). Removed from the context of sign definition, these forms are mistakenly called signs—even by practicing semioticians. Why do I say mistakenly? The error is evident. In respect to space, you cannot speak of volume, for instance, without acknowledging its three dimensions in the measurement. It is wrong to simply say that the volume of a room is five square yards (meters for non-USA readers); you need to define the three dimensions of volume: width, depth, height. Accordingly, it is just as wrong to say that a particular form of representation is a sign without identifying object, representamen, and interpretant. You cannot characterize a sign only by how it represents the object without relating to the other two aspects: the kind of representation and the kind of interpretant.

Let’s progress from the definition level to the practical level. The actual embodiment of the smartphone is the representamen for the object represented by a computer endowed with many sensors and capable of interconnection (the post-WIMP mode of interaction). What kind of representamen is appropriate?

Fig. 4 The diagram explains the three distinct forms of representation (iconic, indexical, symbolic) characteristic of Peirce’s semiotics

You have to relate the characteristics of the object—WHAT is represented—to HOW these characteristics are represented, and to their open-ended interpretation. Are we looking at an object’s qualities (e.g., softness, color)? Are we looking at its necessary condition? (For example, water is necessarily the combination of hydrogen and oxygen; gravity will cause objects heavier than air to fall to Earth.) Are we focusing on its singular nature (i.e., unique, such as the uniqueness of each individual)? In the case of the smartphone, we could start at the intuitive level: make it pleasant, make it look like something we are familiar with. What should it be? The old telephone? Probably not. A circular form (like a big button)? A little animal? To find out what guides the decision-making process of the designer, we need to define what is represented. The adopted form corresponds to studies in ergonomics (focused on what better fits in our hands) but makes variations possible (corresponding to individual characteristics, such as differences in vision, left- or right-handedness, etc.).

What Is Represented?

In Peirce’s semiotics, based on a broader view of what objects (including here actions) are, we deal with uniqueness, formal qualities, and necessary character. One is chosen as representative. For the sake of clarity, here is the diagram indicating the triadic-trichotomic structure of any sign—be it part of visual representations, actions, abstract signs, etc., or any other representation (e.g., sonification).

The diagram is self-explanatory: the object represented and the interpretation of the representation are connected. If we choose a certain characteristic of the object as relevant to the action performed, this characteristic is acknowledged in the representation. For all practical purposes, the smartphone is a larger post-it, good for writing notes, but also for storing information, manipulating it, and displaying it. Ideally, it should be customizable, and chances are good that this will eventually happen (Fig. 6).

Of course, before this should happen, we need to better understand what makes the sign definition necessary. I shall take each possibility defined in Fig. 5 (the triadic-trichotomic structure of the sign) and see how it applies to the smartphone (of course, this is more explanatory than procedural at this juncture).

Fig. 5 The triadic-trichotomic structure of the Peirce-defined sign

Fig. 6 A future of customizable smartphones

Sin-sign A sin-sign is an object of a singular nature—exemplified through a signature; think of a password or a fingerprint. It can be imitated: you look at a signature and try to match the handwriting and the type of pen and ink used. When dealing with a signature, what you probably want is to make only one interpretation possible. For example, if someone wants to cash a check, the banker has to be sure that the signature of the endorser belongs to the appropriate person. If someone wants to access a file, that person should be entitled to do so, and the HCI characteristic of the validation should be designed to make this clear to everyone: “Don’t even try if you are not entitled!” On the smartphone, the sin-sign aspect (i.e., singularity) is very important: the device has to know in “whose hand” it is, i.e., whether that individual is entitled to use the many functions (including bank operations) or not. It actually learns to distinguish between the legitimate user and any accidental users. In our days, when identity theft is so prevalent, smartphone identification is probably more important than other functions.

Quali-sign A certain quality (e.g., softness, friendliness, pinkness) of an object or an action might stand for the entire object. Take the smiley :). It suggests that the object or action semiotically identified through this quality supports an interaction that is “friendly” (easy to use). The design of such an element involves understanding how, from among many sign characteristics, one can be selected to stand for the entire object or action it represents. Apple made the quali-sign its branding trademark—the company is more a design corporation than a science-and-technology-driven production facility. But typically quali-signs cannot be protected. This explains the many imitations (from the desktop metaphor it took over from Xerox, to the Windows desktop, and more recently to smartphone designs) that follow in the company’s footsteps.

Legi-sign The legi-sign says that something has to take place. If you triggered a shutdown procedure (on a PC, on a UNIX machine, or on a Macintosh), the semiotics of the process has to be simple and direct: no more and no less heavy than the semiotics of a switch (ON/OFF). Shutting down the smartphone, in no matter which of its many variations available in the market, is not a luxury. Battery life and power availability (when you need the smartphone most, it should be functional) are important considerations. Other actions that deserve the same attention: Does the “conversation” with Siri, S-Voice or Cortana, or Google Now end on its own? Does the use of the microphone or the recording device intelligently end and reconvene when necessary? These are only two examples of questions that need to be addressed.

Important: the three aspects of the object are independent and not reducible one to another. But at times we would like to have all of them represented in the sign, because each has a different role to play during use. It is at this level that semiotics becomes critical: how to establish a hierarchy of aspects when resources are limited, and moreover when the user’s ability to navigate the huge space of possibilities becomes a major issue. The choice is also informed by the type of representations that will be used.

How Do We Represent? Indexically? Iconically? Symbolically?

Semiotic awareness involves understanding the different characteristics of the object or action represented. It also involves understanding the types of representation: indexical, iconic, symbolic. Please take note: These are types of representation, not types of signs. I explained this aspect in reference to the Lisa computer, on whose semiotic evaluation I worked. For the sake of clarity, I will repeat the visual argument as it pertains to the calculator omnipresent on computers and smartphones (Fig. 7).

Fig. 7 Types of representation adequate to the desktop

Indexical: The Marks Left by an Object

The definition is important for understanding that a language of interaction will have to provide integrated indexical signs. On the smartphone, the fingerprint is now a feature (payment via the smartphone is one application where the fingerprint is used) (Figs. 8 and 9).

Fig. 8 Examples of indexical representations: fingerprint, compass, weathervane

Fig. 9 The indexical fingerprint; face recognition as unique identifier

There are also smartphones with eye scanners—not convenient enough to be widely accepted—or biometric fingerprint scanners, where convenience also suffers. Recently, as augmented reality (AR) makes it into the app world of smartphones, facial recognition is offered as a feature. Semiotically, these alternatives are valid; but in the end it depends on the degree to which the additional security justifies the operational overhead of achieving it.

Iconic: Resemblance to an Object

The famous garbage can icon became part of our visual language decades ago. I shall not forget the experience I had with Steve Jobs. The garbage can on Lisa came with a slanted lid on it. Explaining to the mercurial manager why a slanted lid was not necessary proved to be an exercise in futility (“Lisa likes it with a lid!”—Lisa was Jobs’ girlfriend at that time). Jobs took a long time to understand the significance of semiotics. Paul Rand, the famous graphic designer who worked on the NeXT computer identity, helped in the process. He understood the value of semiotics. On many smartphones, if not a slanted lid, some other awkward icons populate the iconic world, making interpretation more difficult (Fig. 10).

Fig. 10 Lisa computer trash can with useless lid

Let me add one example from the smartphone universe. In the Apple world, there is the wastebasket; on the Windows side, a recycle bin. On Android smartphones, the “dumping” of data takes a different approach: the user is asked whether the data should be erased or not. This is, of course, a different semiotics. In reality, the iconicity is no longer meaningful; but since there is a “culture” of the operation of discarding data (files of all kinds), designers build upon it (Fig. 11).

Fig. 11 Iconic variations

Symbolic

Example: the convention of numerals (Roman, Arabic) standing for quantities: I, II, III, IV…; 1, 2, 3, 4…. To be clear: most of the time, we are in the domain of symbolic representation. We share the meaning of most of our representations: words (which stand for objects), symbols, sounds, etc. The smartphone is the symbolic aggregate of many represented objects and actions. In the end, what distinguishes the various embodiments of the smartphone is the symbolic domain, i.e., the language of representations and actions they facilitate (Fig. 12).

Fig. 12 The symbolic is the dominant form of representation

Interpretant Process. Or Sign Closure

In other words: What do we do with such devices? This is the goal of the entire attempt to consider the semiotics of the smartphone (or any other device we conceive of or use). We can now combine what is represented and how it is represented.

What: the place where we dispose of what we do not need or desire. What we represent is neither a unique characteristic nor a singular characteristic.

How: It is most commonly represented iconically. It looks like a wastebasket. But it can be represented symbolically, in the action called Erase/Throwaway/Discard. However, the semiotics of the interaction makes sense only in the context of its specific use. Based on my evaluation of smartphones, this is the weakness of almost every device I have had in my hands. The awareness of context, made possible by sensors, is ignored in the design. The smartphone always appears cluttered, way too busy, regardless of what we do with it. It is evident that smartphone designers are rarely aware of the semiotic principle according to which less is more. The user usually fills in what is missing, what is suggested.

Interpreting the Interpretant

Semiotic awareness necessitates defining a desired interpretation (Fig. 13): the “Aha!”—the taking notice of something, the awareness of the consequences of our actions (such as with the ERASE function), and the possibility of individualizing available possibilities (customization as an advanced interpretation) (Fig. 14).

Fig. 13 Representations are interpreted. The pragmatic aspect of GUI

Fig. 14 Individualizing the smartphone—still a rudimentary understanding of the pragmatics of individualized use

As opposed to the reactive model of usability tests (addressing “which user,” “the statistical average,” “the focus group”), semiotics suggests the possibility of achieving semiotic adequacy. This is the interplay among various kinds of signs (visual, verbal, tactile, etc.) (Fig. 15)—that is, the sign process conceived with a clear cognitive goal and evaluated in cognitive terms: proactive, as opposed to reactive, usability measurements.

Fig. 15 The various languages of representation. Sonification, the new kid on the block, is making progress

Rhema: a realization—the Aha! effect of something we realize spontaneously, such as the iconic representation of the garbage can. Once in use, things get a bit more complicated. On the Macintosh desktop, one could take the icon of a floppy disk and place it in the garbage can! The result was the Eject function, semiotically inadequate, but which, through use, became part of the Macintosh “language.”

Dicent: the calculator as an “icon” on the iconic interface. You use the representation of buttons as you would work with real buttons. This second level of iconic representation is a description of a description, etc. Semiotically, it is a primitive concept. But once computers without keyboards emerged (recall the great IBM Aha! moment of attaching typewriters to computers), it proved to be quite an efficient means for HCI on pocket-sized gadgets. The virtual keyboard, available when needed, is the continuation for inputting text. So is the microphone, for speech commands.

On the smartphone, this tendency is usually abused. When youngsters have their thumbs surgically reshaped in order to play better, something is clearly wrong with how we make interactions with the device possible. Voice Attack is an inspired alternative. Given the potential of interactive computation, it becomes a challenge to the design and semiotic community to go beyond the icon of an icon to new representations, probably embodied in some simple hardware interaction devices. The pen is one good example, although not always properly understood. The microphone, mentioned above, is yet another choice.

Argument: the level at which computations result from their own knowledge domain (visualization, simulation, etc.). Computational sciences are not some discourse about computers (including mathematics and logic), but are expressed in computational form. The HCI of this domain no longer limits itself to applications, but becomes part of the computation. Here HCI is dynamic, growing with the computational inquiry, and becomes part of the result. The entire area of adaptive interfaces (to which the gestural belongs) is a good example of this semiotic level. The smartphone definitely leads the development in this area.
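Pulling the trichotomies together: a minimal sketch of the triadic-trichotomic structure (Fig. 5) as three independent coordinates. Just as a volume is not given by one dimension, a sign is not characterized by one coordinate alone; the example pairing is an assumption drawn from the fingerprint discussion above.

```python
from dataclasses import dataclass
from enum import Enum

class ObjectAspect(Enum):
    QUALISIGN = "quality (e.g., the friendliness of the smiley)"
    SINSIGN = "singularity (e.g., signature, fingerprint)"
    LEGISIGN = "law/convention (e.g., the ON/OFF shutdown)"

class RepresentamenType(Enum):
    ICONIC = "resemblance to the object (wastebasket icon)"
    INDEXICAL = "mark left by the object (fingerprint)"
    SYMBOLIC = "convention (numerals, words)"

class InterpretantType(Enum):
    RHEMA = "spontaneous realization (the Aha! effect)"
    DICENT = "description of a description (on-screen calculator)"
    ARGUMENT = "computation within its own knowledge domain"

@dataclass
class SignCharacterization:
    aspect: ObjectAspect
    representamen: RepresentamenType
    interpretant: InterpretantType

# Unlocking the device with a fingerprint, characterized on all three axes:
unlock = SignCharacterization(
    ObjectAspect.SINSIGN,
    RepresentamenType.INDEXICAL,
    InterpretantType.RHEMA,
)
print(unlock)
```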

Semiotic Adequacy

Semiotic means of all kinds are integrated in the process we call HCI—regardless of whether it pertains to supercomputers, neural networks, or smartphones.

Machine learning has improved not only speech recognition (around a 4% margin of error), but also image recognition. In the sound domain, progress is even more impressive.

In order to evaluate the result of semiotic choices (what kind of semiotic processes should be considered) and the effectiveness of the semiosis we designed, we need to “run” the HCI “program,” not unlike the way we test various software solutions. Adequacy is a qualitative measurement—it is focused on meaning. Semiotic adequacy is established through basic semiotic operations.

Substitution, i.e., variation of the representamen: the photographic camera shutter replaced by the image of an eye (Fig. 16).

Fig. 16 Example of substitution

Insertion, i.e., an addition of representamina (plural of representamen) until the object is adequately represented: horizontal reference and indicator of functioning (Fig. 17).

Fig. 17 Example of insertion

Omission, i.e., leaving aside or removing sign interpretations that obscure the semiosis. In one example, the arrow is removed but the subsequent change of function is indicated; in the second, indicator of functioning, dial, and horizontal index are omitted (Fig. 18).

Fig. 18 Example of the semiotic operation of omission

Of course, these examples are merely illustrations of how the semiotic operations (substitution, insertion, omission) can be systematically pursued in order to optimize the interaction.
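One way to read this systematic pursuit: the three operations as transformations over the list of representamina making up an interface element. The element names below are illustrative stand-ins for the camera example of Figs. 16, 17, and 18.

```python
def substitute(representamina, old, new):
    """Substitution: vary one representamen (shutter -> image of an eye)."""
    return [new if r == old else r for r in representamina]

def insert(representamina, addition):
    """Insertion: add representamina until the object is adequately shown."""
    return representamina + [addition]

def omit(representamina, obscuring):
    """Omission: remove representations that obscure the semiosis."""
    return [r for r in representamina if r != obscuring]

camera_ui = ["shutter", "dial", "arrow", "horizontal index"]
camera_ui = substitute(camera_ui, "shutter", "eye")
camera_ui = insert(camera_ui, "functioning indicator")
camera_ui = omit(camera_ui, "arrow")
print(camera_ui)
# ['eye', 'dial', 'horizontal index', 'functioning indicator']
```

Each variant would then be judged for adequacy—qualitatively, against meaning—rather than merely measured for speed of use.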

As opposed to the reactive model of measuring user performance—which is still the dominant evaluation method—semiotic adequacy is a method of fine tuning the semiotic elements involved in HCI. Adequacy reflects individual choices and can inform design decisions in the direction of individualization.

Aspects of Communication

The smartphone is the result of the progressive morphing of the telephone and the computer: landline (a connected device), mobile phone (a phone with an emitter and a receiver), cellular phone (part of an interconnected world of cells), and smartphone—a work still in progress. Around 2.5 billion smartphones are in use in our days; their number will double within the next 10 years. 60% of all time spent online in the USA involves the smartphone. The category of devices called personal digital assistants (PDAs)—mobile units of all kinds, functioning as personal information managers (some with phone functions integrated)—are different in nature from smartphones but related to them in terms of the technology used. Each of them is meant to support an intuitive interaction between user and device, i.e., straightforward communication. And each involves more and more machine learning—the capabilities associated with adaptive functions. Instead of requesting the user to perform certain operations, they detect patterns, make inferences, and often reproduce desired operations. Let us examine here only the variety of situations occurring in smartphone or PDA computation:

(a) The user and the smartphone/PDA in problem-solving interaction: let’s say the SCAN application. The image of the document will eventually become a word-processed file that can be further used in other integrated functions.

(b) The smartphone/PDA and the user in an interaction focused on what is computable and what is not. We can get weather reports on the device, but we cannot process data associated with weather prediction. (For this type of application, supercomputers are still necessary.) The same holds true for earthquakes. But given machine learning, the device can associate data from weather centers and suggest levels of danger.

(c) Communication of results: the outcome of smartphone computation can take the form of commands, such as the remote control of appliances in a home; or the form of data for training a neural network; or the information (data associated with meaning) underlying forms of learning. Indeed, applications extending into AI are rapidly spreading into smartphone uses. Examples related to medical diagnostics, to diagnostics in general (what’s wrong with my car?), to evaluating alternative routes in logistics, to selecting stocks or other investment possibilities are no longer an exception.

Semantics vs. Pragmatics

The smartphone is a very good example for understanding all the effort put into achieving the semantic level of communication. When Sieckenius de Souza and her group refer to how well designed the communicative aspects of a pull-down menu are, they make us aware of the fact that semantics plays an important role. On the smartphone, the pull-down menu (still available) is not really progress. We’d better start looking into distributed means of communication to replace the tree structure in use today.

On the smartphone, more than on the Internet, users can define their goals without having to translate meaning (“I want Sicilian pizza!”) into the garble of syntactic approximations. This is, after all, the goal of ontology engineers. And they delivered. Communication is a theme of semantics; syntax refers to representation. Pragmatics integrates expression, representation, and communication, and results in knowledge. Indeed, meaning—what we do is the result of seeking meaning in our activity—is the domain in which computation, most certainly not in algorithmic form, will eventually mature. At that time, we will no longer deal with devices, but with cognitive energy as a resource (similar to electric energy—the Mark Weiser metaphor from 1993, “The world is not a desktop,” discussed in Nadin 1997). Designing agency, as Lockton suggested—involving Veronica Ranner, Gyorgyi Galik, Delfina Fantini van Ditmar, and Laura Ferrarello in his argument (Lockton 2015)—sounds good, but is more suggestive of what computation might one day become than directional. In reality, through devices accepting a rich variety of sensor input, the user becomes part of a hybrid human-machine entity: human intelligence plus the facility to process data as the situation (context) requires. Sometimes huge amounts of information (as in financial analysis), at other times small but significant amounts—and most of the time very fast processes. The semiotics of such hybrid entities is a challenge that transcends technical feasibility. However, semiotics becomes relevant only if the perspective is pragmatic: Why do we enter into interaction with a computer? Based on this assessment, we can define the semantics of HCI and design based on a syntax that allows for a clear “language” of interactions.

The task of semiotic engineering in respect to the smartphone—a step in the direction of ubiquitous computing—goes beyond what semiotics, in the sense it is practiced today, can actually provide. Users do not want “computerese,” i.e., complicated operations and a lot of memorized commands, between them and the service they need. Here is where semiotics can be of immediate help. There is no doubt that the sign-based semiotics documented above can be a powerful analytical tool, but it will not help in doing away with the “in-between” (i.e., buttons, commands, monitors, etc.) separating users and machines. Not much semiotic competence is needed to take note of the fact that smartphones are the success story of a technology to which semiotics contributed close to nothing—Nokia was impatient when it got rid of its semiotics experts. Most of the time, the iconic interface of the desktop simply expanded into this new world of interactions. Nevertheless, the failure of semiotics in respect to the design of the smartphone is actually its opportunity.

In order to remain viable, semiotics must remain focused on meaning—which in the end is the reason why devices such as smartphones are used. They enable human activities not possible without them. Just think about a real-time navigation system, about social interaction, about new forms of monitoring health, finances, etc.; about staying connected to the world in whatever form one might wish; about a very rich assortment of peer-to-peer transactions. Traditional business models compete with real-time shared services. New efficiencies are facilitated through the smartphone. They help in overcoming handicaps, as well as in the effort to augment interaction. With smartphones, even relying on ontology engineering (covering for semiotics), the syntactic level of computation was transcended. The interconnected smartphone is a medium for richer forms of pragmatic expression, for human self-constitution through activities never before possible. The future hybrid entity—the human-interactive computer—uniting the living and data processing, is pragmatically relevant and becomes necessary on account of its pragmatic dimension.

A Challenge to Semiotics

I have argued in favor of a new foundation for semiotics (Nadin 2012), one that builds upon what we know about the sign, but which focuses on semiotic dynamics, not on sign typology. The sign and sign theories as we know them (de Saussure, Peirce, etc.) are no longer adequate for engineering meaningful semiotic experiences. The level at which these take place is that of the time series, that is, sequences of signs making up a real-time narrative. As such, these narrations—how to perform operations, how to integrate applications—constitute quite a number of interrelated languages, each with a precise focus, and all together able to reach expressivity.

Medicine, which is one of the activities within which semiotics emerged—just think about symptoms and the art and science of diagnostics—is a good example of why narration and story became necessary. It is also pretty much related to how the smartphone became an unavoidable link in what is called eMedicine. There are quite a number of parallel streams of data—such as blood pressure, heart rate, body temperature. And there are physiological data—such as cholesterol and glucose levels, blood count, creatinine, bilirubin, etc. The aggregate expression, that is, what physicians would interpret as the health condition, abnormalities, or deficiencies, is the meaningful story. Indeed, the narrations represented by the streams of data make up the story to be associated with means and methods for healing or correcting imbalances. There is the analytical level: establish the condition at a certain moment in time (identified in the narration). And there is an anticipatory dimension: what can and should be done to avoid the story called obesity, hypertension, type-2 diabetes, lower back pain, dyspepsia, or any of the maladies that can be prevented.

Take the example of how sequences of signs (the streams of data) become the narration for a medical condition and, when interpreted by the physician, result in the story (e.g., diabetes means a metabolic disease in which the body’s inability to produce any or enough insulin causes elevated levels of glucose in the blood). Apply this understanding of semiotics to the human-computer hybrid for which we chose the example of the smartphone. There are parallel and concurrent streams of data, afforded either by sensors or by interactions (local or through the network). To proceed with the search for a jazz concert in town, to reserve tickets, to find a ride (with a friend, with Uber, or with the subway), to frame the music played within your memory of similar events, and, finally, to share it—all of this would make up the story of a smartphone-supported activity. We can already address the digital assistant (Fig. 19).

Fig. 19 Ontology engineering makes possible a pseudo-semantic level of interaction. Of course, the device does not know what a restaurant is and even less what a Chinese restaurant means
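The narration-to-story distinction can be sketched in a few lines: parallel streams of data (the narrations) are aggregated and interpreted into a meaningful story. The thresholds below are illustrative placeholders, not medical criteria.

```python
from statistics import mean

# Parallel streams of data: the narrations afforded by sensors.
streams = {
    "systolic_bp": [128, 142, 139, 151, 147],
    "heart_rate": [72, 75, 71, 78, 74],
    "glucose": [101, 98, 104, 99, 102],
}

def interpret_story(streams: dict) -> str:
    """Aggregate the parallel narrations into one interpreted story."""
    findings = []
    if mean(streams["systolic_bp"]) > 140:  # hypothetical cutoff
        findings.append("elevated blood pressure trend")
    if mean(streams["heart_rate"]) > 100:   # hypothetical cutoff
        findings.append("elevated resting heart rate")
    if mean(streams["glucose"]) > 125:      # hypothetical cutoff
        findings.append("elevated glucose")
    return "; ".join(findings) or "no abnormality in the monitored streams"

print(interpret_story(streams))  # elevated blood pressure trend
```

The anticipatory dimension would extend this sketch: not only establishing the condition at a given moment, but projecting the streams forward toward a story still avoidable.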

Another example refers to performing certain operations that, on hardware devices (such as video cameras), were pre-determined in the material components (gears, speed control, etc.). With the smartphone, we can produce a slow-motion image (Fig. 20) by following a sequence of commands (the syntax of the operation) and expecting other commands, such as the slow-motion effect, to be executed by the device.

Using an app such as Office Lens, we can take an image and generate a document, or even a whiteboard. The user-computer hybrid is involved in some of the operations. The rest is performed according to specifications corresponding to our experience with certain activities (Fig. 21).

Essential is the understanding that in this new phase of semiotic engineering, we conceive of effective interaction languages. Interfacing among various integrated functions means, after all, reaching the pragmatic level of semiotics. There was never a better time for ascertaining the need for semiotics, in a world where way too often we focus on data but fail to realize the meaning of what we do. Anticipatory characteristics, reflecting the individuality of each person, become possible within the frame of interactive computation.

Fig. 20 The syntax of the operation of filming in slow motion integrates button operations not related to the activity

Fig. 21 A complete design procedure is the result of integrating the semiotics of the layout. This in itself is a visual language

References

Lockton D (2015) Let’s see what we can do: designing agency. Medium, December 2015. https://www.medium.com

Nadin M (2011) Information and semiotic processes. The semiotics of computation (review article). In: Pearson C (guest ed), Brier S et al (eds) Cybernetics and human knowing. A journal of second-order cybernetics, autopoiesis and cyber-semiotics 18(1–2):153–175

Nadin M (1988) Interface design and evaluation. In: Hartson R, Hix D (eds) Advances in human-computer interaction, vol 2. Ablex Publishing, Norwood, pp 45–100

Nadin M (2012) Reassessing the foundations of semiotics: preliminaries. Int J Signs Semiot Syst 2(1):1–31

Nadin M (1997) The civilization of illiteracy. Dresden University Press, Dresden

Peirce CS (1902) On the definition of logic. In: New elements of mathematics, vol 4, pp 20–21. (See also: https://inquiryintoinquiry.com/2012/06/01/c-s-peirce-%E2%80%A2-on-the-definition-of-logic/)

Shneiderman B (1983) Direct manipulation: a step beyond programming languages. IEEE Computer 16(8):57–69

Sowa JF (2000) Knowledge representation: logical, philosophical, and computational foundations. Brooks Cole Publishing Co., Pacific Grove

Sutherland IE (1963) Sketchpad: a man-machine graphical communication system. PhD dissertation, Massachusetts Institute of Technology, January 1963

