
Introduction

Designing a system requires both creativity and rationalization. These two processes are antagonistic. The former is about divergent thinking (i.e., generation of ideas and brainstorming processes when it is done by a group of people). The latter is about convergent thinking (i.e., analysis of ideas and synthesis into concepts, evaluation and prioritization of concepts). Generating concepts requires formalizing them in order to share them, and this is where modeling enters into play. We need to have the right conceptual tools in order to share ideas and concepts. This chapter introduces a few of these conceptual tools, such as models.

What is a model? A model is a simplification of the reality (e.g., a system or an environment) we try to represent. Therefore a model is a simplified representation that puts forward a few salient elements and their relevant interconnections. If the model takes care of the interconnections part of the system, we need the simulation to take care of the interaction part. In other words, simulation brings the model to life.

Modeling and simulation (M&S), as a discipline, enables understanding of the purposeful elements of a system under investigation, their interconnections and their interactions. When developing a model, you always face a first tradeoff in choosing the right level of granularity (i.e., what is meaningful against what is unnecessary). If you stay at too high a level, you might miss interesting interactions during the simulation process. Conversely, if you want to model every detail, you might run into very complicated interactions that are extremely difficult to understand. The model of everything does not exist and will never exist. Stay focused on the purpose! There are various kinds of models, and we will see the main distinctions that need to be understood in order to use and/or combine them appropriately.

Simulation is typically computer-supported and generally interactive, but not necessarily so (e.g., it could be paper-based or part of a brainstorming session run as a role-playing game). As already said, simulation is used to improve the understanding of interactions among the various elements of the model it implements and simulates. It is also used to improve the model itself and eventually modify it (Fig. 7.1). Simulators enable people to be engaged in the activity that the system being modeled enables them to perform (e.g., driving a car simulator). Simulation can be useful for learning about a domain, such as flying, and simulators are typically upgraded as we learn more about this domain. In addition, simulation enables human operators to “experience” situations that cannot be experienced in the real world, such as possible accidents. As shown in Fig. 7.1, modeling a system also takes into account the people involved and the interaction with other relevant systems. The modeling process produces a model that can be run on a simulator, which in turn produces experimental data. These data are usually analyzed. Data analysis enables potential identification of emerging properties, which enables learning about system use. The M&S design cycle shows that modeling is a closed-loop process that in turn enables system re-design, modification of people’s practices/profiles and potential re-definition of the other systems involved.

Fig. 7.1

The M&S design cycle
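
To make the closed-loop character of the cycle in Fig. 7.1 concrete, here is a minimal, illustrative Python sketch. All names (Model, simulate, analyze, ms_design_cycle) are hypothetical placeholders, not an actual tool; the stubs simply show where modeling, simulation, analysis and redesign plug into each other.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Model:
    """Simplified representation of a system: salient elements and their interconnections."""
    elements: List[str]
    interconnections: List[tuple]
    version: int = 1

def simulate(model: Model, scenario: str) -> dict:
    """Run the model on a simulator for one scenario and return experimental data (stub)."""
    return {"scenario": scenario, "observations": []}

def analyze(data: dict) -> List[str]:
    """Analyze experimental data and return emergent properties, if any (stub)."""
    return []

def ms_design_cycle(model: Model, scenarios: List[str], max_iterations: int = 3) -> Model:
    """Closed-loop M&S design cycle: model -> simulate -> analyze -> learn -> redesign."""
    for _ in range(max_iterations):
        emergent = []
        for scenario in scenarios:
            emergent += analyze(simulate(model, scenario))
        if not emergent:
            break              # nothing new emerged: model is mature enough for its purpose
        model.version += 1     # redesign the system, people's practices, or other systems
    return model
```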

Simulation is imitation. I remember the first time I discovered the Concorde simulator at Aeroformation in Toulouse, France. The view was generated from a small camera moving over a giant model landscape that was fixed to the wall in a big hangar. The images from this camera were projected onto large screens in front of the cockpit windows. The original simulation setup was obviously limited to a single airport! Then came computer-generated images, which totally changed the possibilities of flight simulation. Note that this kind of simulator was only used for training.

Simulation can also be used to explain and aid difficult decision-making processes. For example, M&S was used extensively at Kennedy Space Center to figure out launch pad configurations prior to a shuttle launch. Current M&S tools, such as CATIA and DELMIA for example, enable the visualization of complex structures and functions (Coze et al. 2009). NASA engineers and management use the resulting simulations as mediating representations and decision support tools.

Should I mention Disney and Universal, for example, who use simulation for a totally different purpose (i.e., entertainment)? In addition to attempting to provide natural sensations and a “real-world” experience, they also create fiction. It is amazing how people can manage both simulated “natural” things and fictitious objects brought to life through simulation. Computers and software make this kind of mix possible, but it requires creativity and experience to make this kind of technology acceptable, and even invisible, to people and to provide them with a memorable experience.

There are several kinds of simulation that involve both real and simulated people and systems interacting with each other. We typically use the term human-in-the-loop simulation in this case. Various kinds of effects can be simulated, such as visual scenes, sounds, motion, and odors. The videogame industry is progressing very fast at integrating these sensations.

M&S principles, techniques and tools for human-centered design are introduced in this chapter. An HCD framework called the AUTOS pyramid will support them.

An HCD Framework: The AUTOS Pyramid

Whenever you want to design an artifact, you need to fully understand three entities: Artifact, User and Task. Artifacts may be systems, devices and parts for example. Users may be novices, experienced or experts, coming from and evolving in many cultures. They may be tired, stressed, making errors, old or young, as well as in very good shape. Tasks may be required at various levels including regular manipulation, process control, repairing, designing, supplying, or managing a team or an organization. Each task corresponds to one or several cognitive functions that users must learn and use. The AUT triangle (Fig. 7.2) presents three edges:

Fig. 7.2

The AUT triangle

  • Task and activity analyses (U-T): task analysisFootnote 1 is probably the first design initiative that enables a designer to grasp the kinds of things users will be able to do using the artifact; in contrast, activity analysis only becomes possible when a first concrete prototype of the artifact is available.

  • Information requirements and technological limitations (T-A): task analysis typically results in a task model that contributes to the definition of information requirements for the artifact to be designed; however, there are always technological limitations that need to be taken into account—when they cannot be overcome, the task model needs to be modified.

  • Ergonomics and training (procedures) (T-U): working on the ergonomics of an artifact means adapting this artifact to the user; conversely, defining training processes and operational procedures contributes to adapting the user to the artifact.

In addition to the AUT triangle, one must also take care of the organizational environment in which an artifact will be used. The organizational environment includes all agents that interact with the user performing a task using the artifact (Fig. 7.3). Three more edges are then added to form the AUTO tetrahedron:

Fig. 7.3

The AUTO tetrahedron

  • Role and job analyses (T-O): introduction of a new artifact in the organization changes roles (i.e., cognitive functions) and consequently jobs, which need to be analyzed.

  • Social issues (U-O): when changes induced by the introduction of a new artifact are drastic, social issues are likely to occur.

  • Emergence and evolution of artifacts in the organizational environment (A-O): technology inevitably evolves and people get used to it; therefore, a designer needs to take this evolution into account. At the same time, the introduction of the new artifact leads to the emergence of new practices that need to be identified (i.e., emerging cognitive functions have to be discovered).

Of course, the AUTO tetrahedron needs to be tested in various situations. Consequently, the “Situation” is an important dimension taken into account in the AUTOS pyramid (Fig. 7.4). Four more edges are then considered:

Fig. 7.4

The AUTOS pyramid

  • Usability/usefulness (A-S): the artifact being designed and developed has to be tested in a large variety of situations;

  • Situation awareness (U-S): people need to be aware of what is going on when they are using the artifact;

  • Situated actions (T-S): the task is not context-free and has to be consolidated in a broad set of relevant and purposeful situations;

  • Cooperation/coordination (O-S): agents need to communicate, cooperate and coordinate within the organization according to a broad set of relevant and purposeful situations.

The AUTOS pyramid is proposed as a framework for categorization and development of HCD criteria and appropriate empirical methods. Each couple {criteria; methods} is associated with an edge of the AUTOS pyramid (e.g., the edge U-T is associated with the task and activity analyses appropriate for analysis of the perceived complexity of a system, device or user interface).
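
One way to read the framework is as a small data structure: five vertices and ten edges, each edge carrying a {criteria; methods} couple. The sketch below is illustrative only; the criteria and method labels follow the edges described above where the text names them, and are otherwise plausible placeholders rather than the book's official taxonomy.

```python
# AUTOS pyramid as a lookup table of {criteria; methods} couples per edge (illustrative labels).
AUTOS_VERTICES = {"A": "Artifact", "U": "User", "T": "Task", "O": "Organization", "S": "Situation"}

AUTOS_EDGES = {
    ("U", "T"): {"criteria": "perceived task complexity", "methods": ["task analysis", "activity analysis"]},
    ("T", "A"): {"criteria": "information requirements",  "methods": ["task modeling", "technology assessment"]},
    ("T", "U"): {"criteria": "operational adaptation",    "methods": ["ergonomics", "training/procedures"]},
    ("T", "O"): {"criteria": "role/job change",           "methods": ["role analysis", "job analysis"]},
    ("U", "O"): {"criteria": "social acceptability",      "methods": ["social issue analysis"]},
    ("A", "O"): {"criteria": "emerging practices",        "methods": ["emergence analysis"]},
    ("A", "S"): {"criteria": "usability/usefulness",      "methods": ["scenario-based testing"]},
    ("U", "S"): {"criteria": "situation awareness",       "methods": ["SA assessment"]},
    ("T", "S"): {"criteria": "situated actions",          "methods": ["contextual task analysis"]},
    ("O", "S"): {"criteria": "cooperation/coordination",  "methods": ["team/organization simulation"]},
}

def methods_for(edge: tuple) -> list:
    """Look up the empirical methods associated with an AUTOS edge, e.g. ('U', 'T')."""
    return AUTOS_EDGES[edge]["methods"]

print(methods_for(("U", "T")))  # ['task analysis', 'activity analysis']
```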

Human-in-the-Loop Simulation (HITLS)

By definition, HITLS requires human interaction with a simulator or a simulation facility. HITLS enables the investigation of situations that could not be investigated in the real world, such as incidents and accidents. When we think about HITLS, we immediately have a flight simulator in mind, but HITLS can involve any kind of tool, including driving simulators, fashion design simulators, nuclear submarine simulators and so on. The development of HITLS requires methods and techniques for addressing challenging systems and circumstances of the future (Boy 2011).

We will make a distinction between HITLS and simulated-human-in-the-loop systems (SH-ITLS). The former directly involves real people in the simulation environment; the latter involves simulated people, avatars or simulated human agents. Of course, HITLS can be a combination of both.

A primary purpose of HITLS is to effectively allocate functions, roles and responsibilities among various human and machine agents. Cognitive functions can be allocated to people or machines (typically software, as already described in this book). They are defined by their roles, contexts of validity, and resources, which are cognitive or physical. For example, the goal of the PAUSA project (authority sharing in the airspace) was to gather insight into fundamental problems of human/automation integration and allocation of roles and responsibilities (Boy and Grote 2011) required to achieve the significant capacity increases targeted for SESAR (the Single European Sky Air Traffic Management Research program of the European Commission). This type of study is crucial to determine the future role of the various agents in an airspace system. The focus was typically on trajectory-based operations at triple the traffic density and safety of today’s airspace system (Straussberger et al. 2008). The Multi-Sector Planning (MSP) concept provides a spectrum of redistributed roles and responsibilities among air traffic management team members, including physical relocation. There is an urgent need for a methodology to evaluate ATM concepts and their variations, especially in high-density traffic environments. We need to develop methods and tools that enhance the determination of the roles and responsibilities of the MSP in relation to current actors (i.e., aircrews, controllers, and traffic management teams). The main problem is that future operations are underspecified. Prior best practices are typically not useful for predicting feasible performance issues for future technology, organizations, and people, which makes HITLS an essential tool for evaluating future alternatives.
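
The definition above — a cognitive function characterized by a role, a context of validity and resources, allocated to either human or machine agents — lends itself to a simple data structure. The following Python sketch is a minimal illustration under those assumptions; the example values are hypothetical and merely inspired by the trajectory-based operations discussion, not taken from PAUSA.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CognitiveFunction:
    """A cognitive function defined by its role, context of validity, and resources."""
    role: str                       # e.g., "monitor trajectory conflicts"
    context_of_validity: List[str]  # situations in which the function applies
    resources: List[str]            # cognitive or physical resources it requires

@dataclass
class Agent:
    """A human or machine agent to which cognitive functions can be allocated."""
    name: str
    kind: str                       # "human" or "machine"
    functions: List[CognitiveFunction] = field(default_factory=list)

def allocate(function: CognitiveFunction, agent: Agent) -> None:
    """Allocate a cognitive function to an agent; HITLS is then used to evaluate the allocation."""
    agent.functions.append(function)

# Hypothetical example:
separation = CognitiveFunction(
    role="monitor trajectory conflicts",
    context_of_validity=["high-density en-route sector"],
    resources=["traffic display", "conflict-probe software"],
)
controller = Agent(name="sector controller", kind="human")
allocate(separation, controller)
print(controller)
```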

Prevot et al. (2010) developed an HITLS facility that enabled function allocation for ground-based automated separation assurance in NextGen. Other studies demonstrated the great utility of HITLS. For example, the FAA Free Flight Program successfully deployed the User Request Evaluation Tool (URET), Traffic Management Advisor (TMA), and Controller-Pilot Data Link Communications (CPDLC) to a limited number of Air Route Traffic Control Centers (ARTCCs) (Sollenberger et al. 2005). Enhanced HITLS capabilities can enable effective development of decision support tools (e.g., Traffic Management Advisor), procedural changes (e.g., Reduced Vertical Separation Minima), advanced concepts (e.g., Dynamic Resectorization), new software/hardware (e.g., Standard Terminal Automation Replacement System, Display System Replacement), and advanced technology (e.g., Global Positioning System).

HITLS enables understanding and development of relationships among technologies, organizations and people, and incremental refinement of configurations such as the Dynamic Airspace Configuration (DAC). DAC is an operational paradigm that represents a migration from static to dynamic airspace, capable of adapting to user demand and a variety of changing constraints (Kopardekar et al. 2007). DAC envisions future sectors that are substantially more dynamic and evolve fluidly with changes in traffic, weather, and resource demands. Traffic increase, mixed traffic (commercial, corporate, general aviation, unmanned aerial vehicles), and higher safety objectives require airspace to be globally and locally reorganized. The constantly evolving airspace system is one example of a complex multi-agent system, where emergence issues cannot be studied without a new generation of HITLS.

Simulated-human in the loop systems, such as MIDAS (Corker and Smith 1993), support exploration of computational representations of human-machine performance to aid designers of interactive systems by identifying and modeling human/automation interactions with flexible representations of human-machine functions. Designers can work with computational representations of the human and machine performance, rather than relying solely on hardware simulators with humans-in-the-loop, to discover problems and ask “what-if” questions about the projected mission, equipment, or environment. The advantages of this approach are reduced development time and costs, early identification of human performance limits, plus support for training system requirements and development. This is achieved by providing designers accurate information early in the design process so impact and cost of changes are minimal.

Multi-agent models and simulations are not new, even if many are still mostly created as ad hoc software. They have been used to study processes and phenomena such as communication, cooperation, coordination, negotiation, distributed problem-solving, robotics, organization setups, dependability and fault-tolerance. Such models enable exploration of alternatives needed in at least three major categories: (1) technology (to support such evolution and requirements), (2) organization (roles, context and resources, and multi-agent interaction models ranging from supervision to mediation and cooperation by mutual understanding), and (3) people (in terms of human capabilities, assets, and limitations).

Design Life Cycle

The design of a system is never finished, even if at some point delivery is required. This is why maturity has to be assessed (Boy 2011, p. 432). The more a system is being used, the more new cognitive functions emerge and need to be taken into account, whether in system redesign, organization redesign and/or training and operations support. Nevertheless, focusing on the early design phase and on high-level requirements before development, four main processes need to be described and taken into account: creativity, rationalization, information sharing, and evaluation/validation.

Facilitating Creativity and Design Thinking

“The great pleasure and feeling in my right brain is more than my left brain can find the words to tell you.” This statement by Nobel Laureate Roger Sperry illustrates the spirit of creativity. Innovative design is sometimes irrational and subjective, strongly based on intuition and emotion. This requires an open mind and holistic thinking. For that matter, design is an artistic activity.

Tim Brown defines “design thinking ” as “a discipline that uses the designer’s sensibility and methods to match people’s needs with what is technologically feasible and what a viable business strategy can convert into customer value and market opportunity.” (Brown 2008, p. 86). Brown, in his book “Change by Design”, states that design thinking is human-centered innovation that uses people’s needs in a technologically feasible and commercially viable way (Brown 2009).

Creativity involves taking risks since it is based on intuition, expressivity, least commitment and cultural blocks. According to Dreyfus (2002), intuition is a process that enables a person to do the appropriate thing without rationalization (i.e., without providing any explanation). In genetics, expressivity refers to variations of a phenotype in individuals carrying a particular genotype; in other words, creativity is expressed in terms of phenomena that are deeply rooted in the background of the people generating this creativity. The “least commitment” attribute of creativity deals with the way ideas and concepts are generated; they can be described breadth-first (a layered series of progressively refined shallow global descriptions) or depth-first (an assembled series of deeper local descriptions). Least commitment deals with both boldness and prudence. Creativity should tell you the right story at the right time. However, the analytical engineering culture may block storytelling. Constant searches for objectivity may block subjective thinking, and therefore design thinking.

Storyboarding usefully supports creativity, storytelling and design thinking. Storyboarding provides explicit visual thinking and enables one to share ideas concretely with others and refine them cooperatively. This is why cartoonists or similar professionals should be part of design teams and support design thinking.

When such creative design is done by a group of people, it has to be orchestrated. The Group Elicitation Method (GEM) was designed to this end (Boy 1996a, b, 1997). Creativity was exploited in GEM by promoting innovation (the true original thinking), synthesis (combining information from different sources into a new pattern), extension (extrapolating an idea beyond current constraints and limitations), and duplication (improving or reusing an idea often in a new area or domain).

GEM is a brainwriting technique that can be computer-supported and enables contradictory elicitation of viewpoints from various (field) domain experts, augmented with a classification method that enables categorization of these viewpoints into structured concepts. Participants in a GEM session then score these concepts (i.e., each participant assigns a priority and an averaged consensus is computed for each concept). Ordered concepts are shared for a final debriefing where participants verify and discuss their judgments. A typical GEM session optimally involves seven experts (or end-users) and contributes to the generation of 10–30 concepts. Sometimes GEM is criticized because it provides very basic viewpoints from field people that cannot be abstracted into high-level concepts. It is important to note that an expert in the field who is familiar with abstraction-making facilitates the concept categorization phase. In addition, a structuring framework or model such as the AUTOS pyramid is very useful to further refine categorization of the produced concepts.
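
The scoring step described above — each participant assigns a priority to each concept and an averaged consensus is computed — can be sketched in a few lines of Python. This is only an illustration of that averaging and ordering step, not the full GEM protocol; concept names, participant labels and scores are hypothetical.

```python
from statistics import mean

def gem_consensus(scores: dict) -> list:
    """Order GEM concepts by averaged consensus: the mean of the priority
    scores assigned by each participant is computed per concept."""
    consensus = {concept: mean(by_participant.values())
                 for concept, by_participant in scores.items()}
    return sorted(consensus.items(), key=lambda item: item[1], reverse=True)

# Hypothetical session with three participants scoring three elicited concepts:
scores = {
    "reduce alarm clutter":       {"P1": 9, "P2": 8, "P3": 7},
    "shared trajectory display":  {"P1": 6, "P2": 7, "P3": 9},
    "simplify mode annunciation": {"P1": 5, "P2": 4, "P3": 6},
}
for concept, value in gem_consensus(scores):
    print(f"{value:4.1f}  {concept}")
```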

In the DGACFootnote 2 research effort on novelty complexity in aircraft cockpits (Boy 2008), we carried out three GEM sessions involving 11 airline pilots and 7 cognitive engineers, plus two structured interviews of two certification test pilots. Forty-nine raw concepts were generated. They were refined into a concept map (CMap) presented later in the chapter. A CMap is not only a graphical presentation of interrelated concepts of a domain (Cañas et al. 2001), but also a very good integrating framework for relating concepts to each other. In this approach to novelty complexity analysis, even if a CMap is generated from an extensive user-experience-gathering effort, it is still only an initial contribution that provides useful and meaningful directions for further research and development. We used it to derive high-level criteria for the analysis of novelty complexity.

Alternative methods have been used, such as the Blue Sky approach developed by IHMC (the Florida Institute for Human and Machine Cognition), which is based on creativity-based workshops. For example, on March 2–4, 2009, IHMC ran a Blue Sky workshop for the NASA Exploration Systems Mission Directorate (ESMD) at Johnson Space Center (JSC), one of many meetings of that kind devoted to the development of operational concepts and designs of the Lunar Electric Rover (LER), now renamed the Space Exploration Vehicle (SEV). The goal of that meeting was to “visualize innovative LER displays and controls that handle information and activities in a connected manner.” The main goal was to improve situation awareness and control of the LER by an astronaut. Based on experience driving the LER, and more specifically watching an experienced astronaut driving it, I proposed the “virtual camera” concept during that Blue Sky workshop; it was very much discussed and subsequently developed (Boy et al. 2010). It took a while before the virtual camera became a crisp and acceptable concept; it took many graphical descriptions, papers and simulations to establish its main attributes and its relevance for actual implementation. The virtual camera concept is described in Chap. 8.

Modeling for Rationalization

Design thinking also has to be rationalized in order to be sustainable. The conceptual model of a system being designed then needs to be rational and objective, strongly based on facts and argumentation. This requires designers to be structured and logical. For that matter, design is a scientific activity. Consequently, modeling and simulation take on a different face.

Discrete event methods are used to model and simulate chronologies and sequences of events (Robinson 2004). Modeling approaches range from finite-state machines, Markov chains in particular, to stochastic processes and queuing theory. They use computational techniques such as computer simulation, the Monte Carlo method, variance reduction and pseudo-random number generators. Discrete event methods have been used in industrial engineering and network simulation.
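
As a minimal sketch of what a discrete-event simulation looks like in practice, the following Python code simulates a single-server queue (M/M/1) by processing arrival and departure events in chronological order. It is an illustrative toy under the stated assumptions (exponential inter-arrival and service times), not a production simulation tool.

```python
import heapq, random

def mm1_simulation(arrival_rate: float, service_rate: float, horizon: float) -> float:
    """Discrete-event simulation of an M/M/1 queue: return the mean waiting time in queue."""
    random.seed(42)
    events = [(random.expovariate(arrival_rate), "arrival")]  # event list ordered by time
    queue_len, busy, served, total_wait = 0, False, 0, 0.0
    waiting_since = []                                        # arrival times of queued customers
    while events:
        clock, kind = heapq.heappop(events)
        if clock > horizon:
            break
        if kind == "arrival":
            heapq.heappush(events, (clock + random.expovariate(arrival_rate), "arrival"))
            if busy:
                queue_len += 1
                waiting_since.append(clock)
            else:
                busy = True
                heapq.heappush(events, (clock + random.expovariate(service_rate), "departure"))
        else:  # departure: server finishes one customer
            served += 1
            if queue_len > 0:
                queue_len -= 1
                total_wait += clock - waiting_since.pop(0)
                heapq.heappush(events, (clock + random.expovariate(service_rate), "departure"))
            else:
                busy = False
    return total_wait / served if served else 0.0

print(f"mean wait ≈ {mm1_simulation(arrival_rate=0.8, service_rate=1.0, horizon=10_000):.2f}")
```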

Multi-agent methods have been used for a few decades to computationally simulate interactions among agents. It is important to understand that, when the number of agents is very large, interaction between agents typically generates emergent behaviors at the macroscopic level. The main goal of multi-agent simulations is often to identify these emergent behaviors.Footnote 3 These methods use game theory, complex system approaches, complexity science, homeostasis, adaptation, autopoiesis and nonlinear system theories (Myerson 1991; Prigogine 1997; Mitchell 2008; Cannon 1932; Maturana and Varela 1980). Multi-agent methods contribute to the simulation of complex phenomena and the emergence of new behaviors. The chapter on complexity analysis provides useful inputs to better understand what these methods are based on.
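
A tiny example makes the idea of macroscopic emergence from purely local interactions concrete: agents on a ring repeatedly adopt the majority opinion of their immediate neighborhood, and clusters of agreement emerge that no single agent planned. This is a deliberately minimal, hypothetical illustration, far simpler than the multi-agent simulations discussed above.

```python
import random

def multi_agent_step(opinions: list) -> list:
    """Each agent adopts the majority opinion of its local neighborhood (itself and two neighbors)."""
    n = len(opinions)
    return [1 if sum(opinions[(i + d) % n] for d in (-1, 0, 1)) >= 2 else 0
            for i in range(n)]

random.seed(1)
state = [random.randint(0, 1) for _ in range(40)]   # random initial opinions
for _ in range(10):                                  # local adaptation, repeated
    state = multi_agent_step(state)
print("".join("#" if x else "." for x in state))     # emergent clusters at the macroscopic level
```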

When safety is at stake, reliability engineering is crucial. We always want to assess the risk (generally in terms of expectations of what can go wrong) and the consequences of a faulty life-critical system. Safety and dependability analyses are usually developed by taking into account failure scenarios, assessing probabilities of failure, ranking components according to their contribution to risk, and assessing the magnitude of the consequences. Several methods can be used, such as fault trees, states/events formalisms, Markov graphs, and Petri nets. These methods are very well-defined, easy to use, and supported by software as well as graphical representations. Issues arise when systems are very complex, because these kinds of models become too complicated and difficult to maintain. Note that new solutions have been investigated; for example, Rauzy introduced a new states/events formalism, the so-called Guarded Transition Systems, which generalize both Block Diagrams and Petri nets and handle looped systems (Rauzy 2008).
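
To illustrate the fault-tree style of reasoning mentioned above, here is a minimal sketch that evaluates the probability of a top event from AND/OR gates over independent basic events. The structure and numbers are hypothetical and purely illustrative; real dependability analyses handle common-cause failures, repair and much more.

```python
def and_gate(*probs: float) -> float:
    """Probability that all independent input events occur (fail)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs: float) -> float:
    """Probability that at least one independent input event occurs (fails)."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

# Hypothetical top event: a function is lost if both redundant channels fail,
# or if their common power supply fails.
p_channel = 1e-4
p_power = 1e-6
p_top = or_gate(and_gate(p_channel, p_channel), p_power)
print(f"P(top event) ≈ {p_top:.2e} per demand")
```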

Modeling for rationalization necessarily meets the “system approach that is both a way of thinking and a standardized international engineering practice, whose objective is to master industrial systems complexity in order to optimize their quality, their cost, their time to market and their performance” (Krob 2011). For Krob, the term “system” refers to both the industrial object realized through an industrial process and the highest point of view one can have when dealing with this industrial object. Associated with human-centered design, this system approach is in the broader sense sustainable.

Standardization is the ultimate challenge of the engineering, usability and usefulness disciplines. More specifically, our fully distributed worldwide economy stresses the need for interoperability. However, what would be left to creativity if everything were standardized? Where would the local niches be that are so precious and that determine the various cultures on this planet?

Modeling for Information Sharing

Good design requires information to be shared among various kinds of actors in both time and space, across various kinds of professions and corporate cultures.

Computer-supported cooperative work (CSCW) is now a discipline that enables the development of environments that support people in both their individual and cooperative work (Grudin 1994). Collaborative M&S methods associate technology, organization and people (TOP) as other M&S methods do, with a greater emphasis on communication, cooperation and coordination. Examples are e-mail, instant messaging, application sharing (document interoperability in particular), videoconferencing, collaborative workspaces, task and workflow management, blogging (which supports knowledge capture and management), and wiki sites (which progressively synthesize knowledge on various topics).

Traceability methods enable documentation of a verifiable design history, including chronological, multi-location and multi-actor links. When we want to master the life cycle of a product, traceability is an important technique; in particular, requirements traceability is key. Traceability typically includes cross-referencing, standardization of typical specialized documents, and restructuring capabilities in case of requirement changes. These methods should enable linking requirements to system elements, in both directions; capturing design rationale, including domain content, responsibility and accountability; verifying consistency; verifying requirements; and tracking requirements change history.

Modeling for Evaluation and Validation

Workload is a model that has supported a large set of evaluations in human factors and ergonomics. I began to learn and work with workload models during the early eighties, when I was working on the two-crewmember cockpit certification together with DGAC and Airbus Industrie (Boy 1985). At ONERA, I developed the MESSAGE approach, based on a multi-agent model of the aircrew, aircraft and ground systems. MESSAGE was a French acronym for Model of Crew and Aircraft Sub-Systems for the Management of the Equipments. The resulting model helped us characterize various information densities that were useful in providing an interpretation in terms of workload and performance. Indeed, workload and performance models have been developed and used in experimental flight tests to derive appropriate human factors measurements useful for certification purposes.

Situation awareness (SA) is a broad concept that tries to represent a high-level cognitive function. Despite a lot of work performed in the human factors field, especially by Endsley (1995a, b), SA still needs to be further defined and refined. Endsley derived the SAGAT method to assess situation awareness (Endsley 1988). This method is useful in domains where experience feedback is high. The relationship between the competence and knowledge of human operators and the real situation at work is very important to understand. Expected SA is related to many things, such as motivation, engagement, competence (skill set) and professionalism (airmanship), domain knowledge, regulatory rules, company culture, time pressure, physiological and cognitive conditions of human operators, unpredictability of environment changes and so on. Of course, there is always the “Black Swan” (Taleb 2007): who could have predicted the 1972 accident of an Eastern Airlines Lockheed Tristar, whose autopilot was inadvertently disengaged and which descended into the Everglades, killing 101 of the passengers and crew (Flight Safety Foundation, Aviation Safety Network)?Footnote 4 The aircrew focused on the nose gear issue without noticing a severe altitude problem. This tunneling problem is not common but may happen, and it is crucial that Technology and Organization be designed for People (i.e., the TOP model must be used). Safety is usually expressed in terms of probability, based on frequency of occurrence. We know that when some situations are not frequent at all, this kind of safety approach will fail. In terms of probability distribution, the Gauss curve typically applies. This is great when situations are very well known in advance and are included in the analysis and subsequent models, but when a situation has never happened before, what should we do? The only possibility is to anticipate possible futures and simulate them as best we can.

Decision-making is another cognitive function that deserves attention in cognitive engineering, human-centered design and, more generally, in our socio-technical world. Making a good decision is always a matter of stating a clear objective, framing an answerable question, using an analytic model whenever possible (and simulation otherwise), performing the analysis at an appropriate moment, determining the appropriate level of complexity, stating the best assumptions in the model, analyzing outputs exhaustively, and finally establishing a sustainable application. Decision-making can be supported by appropriate simulation methods such as discrete-event simulations (Thesen and Travis 1989).

More generally, modeling and simulation enable the elicitation of working processes that cannot be analyzed using paper, pencil and brainstorming. Graphical interfaces enable designers to quickly figure out, through M&S, the main problems that are at stake in the current design processes. Incremental re-modeling is a good practice that supports collaborative work and design convergence. In addition, simulation enables the incorporation of measurable indices that can guide the design and development process. These indices are also models.

Evaluating the use of a system can be done with various kinds of methods. Evaluation methods typically belong to two categories: objective and subjective methods. Both categories lead to the definition, development and use of models. Subjective evaluation methods can be based on subjective scales, for example the Cooper-Harper scale (Harper and Cooper 1986). The Cooper-Harper scale was initially developed to assess the handling qualities of aircraft by experimental test pilots during flight tests. In this case, the model behind the evaluation method is a scale ranging from 1 to 10, each level being defined and subjectively assessed by the pilots or an external observer. NASA’s TLX (Task Load Index) is another example of a subjective method, in this case for workload assessment (Hart and Staveland 1988; Hart 2006). NASA-TLX enables subjective workload assessments of operator(s) working with various human-machine systems. NASA-TLX is a multi-dimensional rating procedure that derives an overall workload score based on a weighted average of ratings on six subscales.

These subscales are Mental Demands, Physical Demands, Temporal Demands, Own Performance, Effort and Frustration. It can be used to assess workload in various human-machine environments such as aircraft cockpits; command, control, and communication (C3) workstations; supervisory and process control environments; and simulations and laboratory tests. Other similar methods are also used, such as time-line analysis (Gunning and Manning 1980), based on a simple model of workload calculated as the ratio of the time required tR to execute a task to the time available tA (WL = tR/tA).
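
The weighted-average computation behind the overall NASA-TLX score is simple enough to sketch directly. The ratings and weights below are purely illustrative (in the standard procedure the weights come from pairwise comparisons of the six subscales and sum to 15); the time-line workload ratio at the end matches the formula just given.

```python
def nasa_tlx(ratings: dict, weights: dict) -> float:
    """Overall NASA-TLX workload score: weighted average of the six subscale ratings (0-100)."""
    total_weight = sum(weights.values())
    return sum(ratings[s] * weights[s] for s in ratings) / total_weight

ratings = {"mental": 70, "physical": 20, "temporal": 60,
           "performance": 40, "effort": 65, "frustration": 30}   # illustrative ratings
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}      # illustrative weights (sum = 15)
print(f"TLX ≈ {nasa_tlx(ratings, weights):.1f}")

def timeline_workload(time_required: float, time_available: float) -> float:
    """Time-line analysis workload ratio; values above 1 indicate overload."""
    return time_required / time_available
```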

In contrast with subjective evaluation methods, which are based on assessments performed by people, objective evaluation methods are based on physical measurements such as electroencephalograms, electrocardiograms, and eye gaze positions obtained with eye-tracking technology. However, even if these measurements are said to be objective, experimental results need to be interpreted to provide meaningful assessments in terms of workload, situation awareness or decision-making, for example. Such interpretations are based on models. More specifically, eye-tracking data can be interpreted using a cognitive model, which states that longer fixations on an object of the visual scene reflect the subject’s difficulty in understanding that object (or the object’s features), or a design hypothesis, such as: users will more easily understand colored objects than gray ones (Stephane 2011).

Interaction, Complexity and Dynamics: Models and Theories

We saw earlier in this book that life-critical systems support our lives, whether for safety, efficiency and/or comfort reasons. Now, if we want to model and simulate them, we need to know more about their main attributes. I choose to focus on three of these attributes: interaction, complexity, and dynamics. In this section, I emphasize how these attributes are already seriously grounded in various theories, models and representations, mostly in cognitive science, artificial intelligence, psychology, anthropology, philosophy and mathematics (Merleau-Ponty 1962; Dreyfus 1972; Gibson 1979; Maturana and Varela 1980; Winograd and Flores 1986; Suchman 1987; Varela et al. 1991; Hutchins 1995; Beer, to appear).

Interaction

There is always a degree of interaction with a given life-critical system. We interact with our car when we drive, and we interact with our clothes when we wear them; this is direct interaction. Direct interaction involves some kind of embodiment. The philosophy of mind emphasizes the role that the body plays in shaping the mind. However, there are life-critical systems that suffer from a lack of such embodiment, or more precisely are remote and not accessible for most of us, and most importantly, cannot be controlled by us. Nuclear power plants and commercial aircraft belong to this category because they have a great impact on our lives, but we have no access to their control and management. We then need to trust the people and organizations that design them, build them, control them and eventually dismantle them. This delegation in the interaction can cause doubts when their design and use rationale are not clearly explained and understood. This is another case where modeling and simulation are important (i.e., providing information and knowledge in an accessible and participatory way).

Reason started a discussion on cognitive aids in process environments as prostheses or tools (Reason 1987). What is the level of embodiment of prostheses versus tools? It is interesting to notice that tools extend human capabilities (e.g., the hammer is an extension of our arm when we are hammering a nail, and we forget that it exists when we become proficient at hammering). The embodiment is perfect. Heidegger talked about zuhanden (“ready-to-hand”) to characterize such a phenomenon (Heidegger 1962). However, when the handle breaks, for example, we cannot use the hammer normally and we need to figure out how to use the broken tool to perform the task (i.e., hammering the nail); the broken hammer is not embodied anymore. The task becomes more cognitive. A problem has to be solved. Heidegger talked about vorhanden (“present-at-hand”) to characterize such a phenomenon. Expertise is a matter of continuous adaptation of technology and people. Some people are called “experts” because they have mastered specific tasks using appropriate specialized tools (usually technology, but these could be conceptual tools such as languages or methods and procedures). Hammers used by glaziers are not the same as hammers used by smiths! The symbiosis of people’s skill sets and specialized technology is incrementally compiled and refined from interactions between people and technology.

Interaction usually happens between at least two entities (e.g., molecules interact with each other, people interact with their environments, and more generally agents interact with each other). This leads to the definition of an agent as an entity who/that interacts with its immediate environment, which is usually characterized by other agents. In addition, the concept of agent is recursive (i.e., an agent can be a society of agents in Minsky’s sense (1985)). Agent structures are therefore mutually inclusive, like Russian dolls. The agent model fits very well with the systems engineering approach of systems of systems (Luzeau and Ruault 2008). Beer describes an agent and its environment as coupled dynamical systems; the agent in turn is composed of coupled nervous-system and body dynamical systems (Beer, to appear). Kauffman (1993) proposed a seven-level model of self-organization and selection in evolution that helps structure our living world. It starts at the chemical level, where multiple molecules end up as enzymes, and goes to the biological level, where multiple genes lead to cells, which themselves lead to organs (development level). The neurological level creates the emergence of concepts and maps from multiple modules. Then the psychological level generates specialists from multiple minds, up to the sociological level, where multiple cultures lead to organizations, and finally to the ecological level, where multiple species lead to niches.

The environment of an agent has properties that determine the possible actions of this agent. Gibson (1977, 1979) coined the term “affordance” to denote the relationship between an agent (a human being) and its environment (i.e., some environment properties “suggest” specific actions to the agent). The related field of study is called “ecological psychology”. An object’s affordances are defined as innate relationships (i.e., they are independent of the agent’s experience, knowledge, culture, or ability to perceive them, and they exist whether or not they are perceived). Doorknobs and door handles are common examples used to explain affordances. A horizontal door handle suggests pushing because it is located at the level of both hands of the person reaching it and in the direction of the movement of that person; therefore its affordance action property is “push”. A vertical handle on the side of a door suggests pulling because, when the person grabs it, pulling is the logical movement following the gesture of the arm of this person; therefore its affordance action property is “pull”. In this case, there are no conventions, rules or procedures; affordances are innate relationships between a person and an object. However, in many cases, affordances can be learned (e.g., we learn how to stop at a red light). This is what Norman called “perceived affordances”, which are related to an object’s utility and can depend on the experience, knowledge, or culture of the human agent (Norman 1988). The perception-action nature of affordances typically specifies the skill-based behavior described by Rasmussen (1986). Of course, when this interaction level does not work, usually because there is no embodiment, the agent has to learn and develop appropriate skills to better handle his/her/its environment in the future.

Complexity Modeling

We know that it is extremely difficult, and most of the time impossible, to predict evolution of the behavior of a complex system. What kinds of modeling and simulation means can we use to represent, analyze, understand and ultimately master a complex system? Formal models and theories of complexity have been developed over the last few decades. I recommend reading a good survey presented in the air traffic management domain (Xing and Manning 2005). In this chapter, I concentrate on usable models, theories, methods and techniques that enable modeling and simulation of life-critical systems.

Pylyshyn (1985) referred to the equivalence between cognitive complexity and computational complexity, and compared cognitive processes to generic algorithms. The choice of an algorithm is often made under contradictory requirements, such as the understandability, transferability and modifiability of an algorithm. For example, Card introduced KLM (Keystroke-Level Model) and GOMS (Goals, Operators, Methods and Selection rules) to study text processing in office automation (Card et al. 1983). KLM and GOMS enable the prediction of the time required to perform a specific task. They assume task linearity (i.e., tasks can be hierarchically decomposed into sequences). GOMS belongs to the class of analytical models, and works well in very closed worlds. Kieras and Polson (1985) developed the Cognitive Complexity Theory (CCT) as an evolution of GOMS. They proposed several measures of interaction complexity, such as the number of necessary production rules and the learning time, as well as the number of items momentarily kept in working memory in order to predict the probability of errors.
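
As a concrete illustration of how KLM predicts task time, the sketch below sums standard operator times over a sequence of operators. The operator values are the classic estimates usually quoted for KLM (keystroke, pointing, homing, mental preparation); treat them as illustrative defaults rather than definitive constants, since the appropriate values depend on the user population and device.

```python
# Approximate KLM operator times in seconds (classic published estimates; illustrative only).
KLM_OPERATORS = {
    "K": 0.20,   # keystroke (skilled typist)
    "P": 1.10,   # point at a target with a mouse
    "H": 0.40,   # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation
}

def klm_predict(sequence: str) -> float:
    """Predict execution time (seconds) for a linear sequence of KLM operators, e.g. 'MPKKK'."""
    return sum(KLM_OPERATORS[op] for op in sequence)

# Example: mentally prepare, point at a field, home to the keyboard, type four characters.
print(f"predicted time ≈ {klm_predict('MPH' + 'K' * 4):.2f} s")
```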

Rasmussen (1986) proposed the SRK model to capture three types of behavior (i.e., Skills, Rules and Knowledge). He also developed an ecological approach based on a five-level abstraction hierarchy. Vicente (1999) used this approach to develop the Cognitive Work Analysis (CWA) approach. CWA supports ecological interface design and emphasizes design for adaptation. Javaux and De Keyser (1997) defined the cognitive complexity of a human-machine situation (in which specific tasks are performed) as the quantity of cognitive resources that a human operator must mobilize to make sure that tasks are executed with an acceptable level of performance. However, the quantity of cognitive resources is a very limited way to assess cognitive complexity without taking into account qualitative issues. Van Daele (1993) made another distinction between situation complexity and the complexity of tasks and operational goals. He relates complexity to constraints blocking task execution, the remoteness of goals, multiple goals to be satisfied at the same time, interdependent goals and environment attributes, multi-determination, uncertainty and risks.

Norman (1986) proposed a generic model that takes into account human actions, learning, usability and the possibility of errors. He proposed the following concepts: physical versus psychological variables; physical versus mental states; goal as a mental state; and intention as a decision to act to reach a goal. He expressed interaction complexity in terms of the execution gulf and the evaluation gulf. In particular, the distinction between physical and psychological variables reveals complexity factors related to the interaction induced by the use of the physical system and the task to be performed.

Amalberti (1996) analyzed complexity by making a distinction between nominal and non-nominal situations. He related interaction complexity to action reversibility and effect predictability, the dynamics of underlying processes, time pressure, the number of systems to be managed at the same time, resource management when the execution of a task requires several actors, artifact representation, risk, factors coming from the insertion of safety-critical systems in cooperative macro-systems, and factors related to the human-machine interface, users’ expertise and situation awareness.

In the MESSAGE approach (Boy and Tessier 1985), interaction complexity was assessed as information-processing difficulty in early glass cockpit developments. Several difficulty indices were developed, including visibility, observability, accessibility, operability and monitorability. These indices were combined with tolerance functions, which were expressed as possibility distributions of relevant user-interface parameters. Subsequent work led to the development of interaction blocks to model interaction chains between agents in order to better analyze and understand the emerging interaction complexity of underlying operations (Boy 1998a). Interaction-block development requires the elicitation and specification of various interaction contexts, and therefore the structuring of various relevant situations. Five generic interaction-block structures were proposed, including sequence, parallel blocks, loop, deviation, hierarchy, and interaction blocks leading to either weak or strong abnormal conditions (Boy 1998b).

Today, the emphasis is on systems of systems, where a multi-agent approach is mandatory in order to understand various phenomena such as the emergence of new properties, cognitive and socio-cognitive attractors, various kinds of singularities and, more generally, non-linearities. If a complex system is represented as a society of agents interacting with each other, the impossibility of prediction comes from the agents’ constant adaptation and the nonlinear interactions among them. Analyzing complexity can be done by visualizing concepts such as the number of basic elements, components or agents, their variety, and the internal structure of the system.

Causality is crucial, but we need to make a distinction between direct causality, related to linear systems, and systemic causality, related to nonlinear systems (Batten 2009). Small causes may produce large effects. Consequently, we will be looking for modeling and simulation tools that enable the visualization of agents’ status and their relationships with each other. Complex systems have to be explored constantly in order to detect anomalies, symptoms, irregularities with respect to expected behaviors and so on. The job of handling a complex life-critical system consists in observing appropriate cues, diagnosing, and testing hypotheses. Epidemiologists have developed methods to identify biological and behavioral causes of diseases, for example methods that isolate single causes. However, it is recognized that there are multiple causes, and integrated models are often necessary.

People view complexity differently with respect to their background, domain competence and knowledge, and experience. For that matter, sharing and combining expertise is key. M&S tools that enable experts to share their educated common sense are good. Educated common sense can be represented by frames and metaphors that can be expressed in digital forms. Mastering the complexity of a life-critical system is often a matter of adjusting these frames and metaphors to the perceived reality, as well as sharing and combining them among experts. We already discussed the difficult question of “maturity”. Reaching mature perception, understanding and mastering of the complexity of a system requires “nurturing”, exactly like good parents nurture their children (Lakoff 2004). Nurturing has two aspects: empathy and responsibility. For that matter, M&S tools that enable a deep appreciation of another’s situation and point of view will be good. Well-orchestrated participatory storyboarding, for example, is a good tool for the design of life-critical systems.

Dynamics and Control

Dynamic systems are systems that evolve in time. We are often mainly interested in their stability. Such systems have been extensively studied by control theorists and by people who had to automate machines in order to improve safety, efficiency and comfort. The more we automate dynamic systems, the more we must make sure the resulting automated system is reliable, robust and dependable. We currently certify commercial aircraft systems with a probability of failure lower than 10⁻⁶ per hour. Human reliability remains a main issue that we can take from two different perspectives: the negative one, which says that people are the problem and we need to anticipate any human error because people will fail, no matter what, with a probability of failure greater than 10⁻² per hour; and the positive perspective, which says that people can be the solution since they not only correct most of their errors, but are unique problem solvers and anticipators when they are very well trained and experienced. I call this second perspective “human involvement and engagement.” Of course, a mix of both perspectives prevails.

The concept of cognitive stability (Boy 2007) supports the concept of procedural interface (Boy 2002) that takes into account four main high-level requirements (i.e., simplicity of use as opposed to user-perceived complexity, observability/controllability, redundancy and cognitive support).

The interface of a system is characterized by a set of n observable states or outputs {O1, O2, …, On}, and a set of m controllable states or inputs {I1, I2, …, Im}. The interface is redundant if there are p outputs (p < n) and q inputs (q < m) that are necessary and sufficient to use the system. The remaining (n − p) outputs and (m − q) inputs are redundant interface states when they are associated with independent subsystems of the overall system. These redundant states need to be chosen in order to assist the user in normal, abnormal and emergency situations. In aircraft cockpits, for example, several instruments are duplicated, one for the captain and another for the copilot. In addition, some observable states displayed on digital instruments are also available on redundant traditional instruments. Controlling a system state-by-state with the appropriate redundant information is quite different from delegating this control activity to an automaton. New kinds of redundancy emerge from the use of highly automated systems. Traditional system observability and controllability usually deal with the “what” of system states. The supervision of highly automated or software-rich systems requires redundant information on the “why”, “how”, “with what” and “when” in order to increase insight, confidence, and reliability: Why is the system doing what it does? How can a given system state be obtained through an action using control devices? With what other display or device should the current input/output be associated?
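
The redundancy notion just defined reduces to a set difference: among the n observable outputs and m controllable inputs, p outputs and q inputs are necessary and sufficient, and the remaining (n − p) and (m − q) states are redundant. The sketch below illustrates that bookkeeping; the instrument names are hypothetical examples, not a certified cockpit configuration.

```python
def redundant_states(all_states: set, necessary: set) -> set:
    """Return the redundant interface states (observable or controllable):
    those that are not in the necessary-and-sufficient subset."""
    return all_states - necessary

outputs = {"PFD altitude", "PFD speed", "standby altimeter", "standby airspeed"}  # n = 4
necessary_outputs = {"PFD altitude", "PFD speed"}                                  # p = 2
print(redundant_states(outputs, necessary_outputs))   # the (n - p) = 2 redundant outputs
```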

Since life-critical systems are in general nonlinear systems, they will have several possible attractors (Ruelle 1989). We then talk about multi-stability. Sometimes these attractors are unknown in advance and strongly depend on initial conditions and, more generally, on the history of the LCS. The brain, as a complex dynamic system whose structure does not change, can support many different attractors at the same time (e.g., concepts). Today, with the over-computerization of life-critical systems, cognitive engineers and human-centered designers should look for such attractors emerging from interactions in the human-machine systems they are designing and developing.

Design Evolution Versus Design Revolution

Is there a difference between designing from a blank slate and designing by modifying, and therefore improving, an existing system? The former qualifies as design revolution, and the latter as design evolution. Improving the efficiency of a car engine is the effect of design evolution. Moving from horses to steam machines as a means of transportation, as well as moving from physical libraries and bookstores to the Internet, were design revolutions.

We need to be careful with the evolutionary course of modifications of a technology. For example, the incremental introduction of software in aircraft cockpits reached a point of revolution in the sense that increasing the number of artificial software-based agents led to a job change (i.e., moving from manual flight to systems management). It is always crucial to better understand the cognitive functions that emerge from interactions with incrementally accumulated interactive software-based systems.

Therefore, in any case, designers need to look for cognitive functions that emerge from the various interactions, whether for manual control or for system management. For that matter, modeling and simulation are required, human-in-the-loop simulation in particular.

Complexity and User Experience

Perceived complexity is about practice maturity. Product maturity analysis is also a matter of user experience with the technology. Is this technology appropriate for its required use? How and why do users accommodate to and appropriate the technology, or fail to do so? Answers to these questions contribute to a better understanding of perceived complexity, and to further developing appropriate empirical criteria. Perceived complexity is a matter of the relation between users and technology. Interaction with an artifact is perceived as complex when the user cannot do, or has difficulty doing, what he or she decides to do with it. Note that users of a new artifact still have to deal with its reliability and availability, which are not only technological, but are also related to tasks, users, organizations and situations. This is why novelty complexity analysis requires a solid structuring into appropriate categories.

It would be a mistake to consider that a user who interacts with a complex system does not build some expertise. He or she cannot interact efficiently with such an artifact as a naïve user. For new complex tasks and artifacts, there is an adaptation period, full stop! Why? The answer is usually very simple. Complex artifacts such as airplanes, or mobile phones providing personal digital assistants, cameras and other services, are prostheses. The use of such prostheses requires two major kinds of adaptation: mastering capacities that we did not have before using them (i.e., flying, or interacting with anyone anywhere anytime); and measuring the possible outcomes of their use, mainly in terms of social issues. All these elements are very intertwined, dealing with responsibility, control and risk/life management. Therefore, the criteria that will be given cannot be used for analysis by just anyone. They must be used by a team of human-centered designers who understand human adaptation.

Novelty Complexity in Aircraft Cockpits

The current literature does not address the issue of novelty that is naturally included in any design of interactive systems. We are constantly dealing with transient evolutions, and sometimes revolutions, without properly addressing the issue of coping with the user-perceived complexity of new artifacts in interactive systems.

In this section, I use the results of a study that was carried out in France under the oversight of DGAC (Boy 2008). The Group Elicitation Method (Boy 1996a, b) was used to identify the various ontological entities related to novelty complexity in aircraft cockpits. These ontological entities were further formulated into a concept map, or CMap (Cañas et al. 2001, 2005). The central concept of “novelty complexity” is connected to five sub-CMaps (Fig. 7.5). Relationships between the five first-level concepts characterize the edges of an AUTOS pyramid.

Fig. 7.5

Perceived complexity in the center of the AUTOS pyramid (CM-0)

User experience (CM-1) concepts include training (expertise), trust, risk of confusion, lack of knowledge (ease of forgetting what to do), workload, adhesion and culture (Fig. 7.6). It induces several cognitive functions such as learning, situation awareness (which involves understanding, short-term memory and anticipation), decision-making and action (which involves anticipation and cross-checking). To summarize, a U-complexity analysis deals with the user’s knowledge, skills and expertise with respect to the new artifact and its integration.

Fig. 7.6

User experience complexity (CM-1)

Artifact complexity is split into internal complexity and interface complexity. Internal complexity (CM-2) is related to the degree of explanation required for a user to understand what is going on when necessary (Fig. 7.7). Concepts related to artifact complexity are: flexibility (both system flexibility and flexibility of use); system maturity (before becoming mature, a system is an accumulation of functions—the “another function syndrome”—and maturity is directly linked to function articulation and integration); automation (linked to the level of operational assistance, authority delegation and automation culture); and operational documentation. Technical documentation complexity is very interesting to test because it is directly linked to the explanation of artifact complexity. The harder an artifact is to use, the more technical documentation is required; this documentation therefore has to provide appropriate explanations at the right time and in the right format.

Fig. 7.7
figure 7

Artifact complexity (CM-2)

Interface complexity (CM-3) is characterized by content management, information density and ergonomics rules (Fig. 7.8). Content management is, in particular, linked to information relevance, alarm management, and display content management. Information density is linked to decluttering, information modality, diversity, and information-limited attractors (i.e., objects on the instrument or display that are poorly informative for execution of a task but nevertheless attract the user's attention). The "PC screen do-it-all syndrome" is a good indicator of information density (the elicited improvement factors were screen size and zooming). Ergonomics rules focused on a clear and understandable language; error tolerance, redundancy and information saturation were proposed as typical indicators. Redundancy is always a good rule, whether it repeats information for cross-checking, confirmation or comfort, or explains how, where and when an action can be performed. Ergonomics rules formalize user friendliness (i.e., consistency, customization, human reliability, affordances, feedback, visibility and appropriateness of the cognitive functions involved). Human reliability involves human error tolerance (hence the need for recovery means) and human error resistance (hence the existence of risks to resist). To summarize, an A-complexity analysis deals with the level of necessary interface simplicity, explanation, redundancy and situation awareness that a new artifact is required to offer to users.

Fig. 7.8
figure 8

Interface complexity (CM-3)

Organization complexity (CM-4) is linked to social cognition, agent-network complexity, and more generally multi-agent management issues (Fig. 7.9).

Fig. 7.9
figure 9

Situation and organization complexity (CM-4)

There are four principles for multi-agent management:

  • agent activity (i.e., what the other agent is doing now and for how long);

  • agent activity history (i.e., what the other agent has done);

  • agent activity rationale (i.e., why the other agent is doing what it does);

  • agent activity intention (i.e., what the other agent is going to do next and when).

Multi-agent management needs to be understood through a role (and job) analysis. To summarize, an O-complexity analysis deals with the required level of coupling between the various purposeful agents handling the new artifact. Situation complexity is usually caused by interruptions and, more generally, disturbances. It involves safety and high-workload situations. It is commonly analyzed by decomposing contexts into sub-contexts. Within each sub-context, the situation is characterized by uncertainty, unpredictability and various kinds of abnormalities. To summarize, an S-complexity analysis deals with the predictability of the various situations in which the new artifact will be used.
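To make the four multi-agent management principles above concrete, here is a minimal, purely illustrative sketch (all names are hypothetical and are not part of the GEM results) of the record one agent might maintain about another:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AgentObservation:
    """What one agent needs to know about another agent (hypothetical sketch).

    Mirrors the four multi-agent management principles:
    activity, activity history, activity rationale and activity intention.
    """
    activity: str                                        # what the other agent is doing now
    expected_duration_s: float                           # ... and for how long
    history: List[str] = field(default_factory=list)     # what the other agent has done
    rationale: Optional[str] = None                      # why it is doing what it does
    intention: Optional[str] = None                      # what it is going to do next
    intention_time_s: Optional[float] = None             # ... and when

# Example: a pilot's view of the autopilot as a cooperating agent
autopilot = AgentObservation(
    activity="capturing flight level 350",
    expected_duration_s=90.0,
    history=["climb initiated", "speed mode engaged"],
    rationale="cleared to FL350 by ATC",
    intention="level off and hold altitude",
    intention_time_s=90.0,
)

Such a record is only one way of operationalizing the principles; the appropriate content should come from the role and job analysis mentioned above.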

Task complexity (CM-5) involves procedural adequacy, appropriate multi-agent cooperation (e.g., air-ground coupling in the aerospace domain) and rapid prototyping (i.e., task complexity cannot be properly understood if the resulting activity of agents involved in it is not observable). Task complexity is linked to number of sub-tasks, task difficulty, induced risk, consistency (lexical, syntactic, semantic and pragmatic) and the temporal dimension (perception-action frequency and time pressure in particular). Task complexity is due to operations maturity, delegation and mode management. Mode management is related to role analysis . To summarize, a T-complexity analysis deals with task difficulty according to a spectrum from best practice to well-identified categories of tasks (Fig. 7.10).

Novelty Complexity, Creativity and Adaptation

Besides providing user requirements, users can be involved in the design process, especially in its early stages, if mockups or prototypes are available. We must not forget that designers' and engineers' main asset is creativity; they are the ones who propose solutions. In addition, the human-centered design team needs to take the above dimensions into account to figure out the complexity of these solutions. Since maturity is at stake here, I claim that when high-level requirements are right from the beginning, subsequent developments, when carefully carried out, are unlikely to require deep revisions when the artifact needs to be delivered. For that reason, user-perceived complexity needs to be tested from the very beginning of the design, when the first ideas of the artifact become drawable or writable, and all along the design and development process.

Fig. 7.10
figure 10

Task complexity (CM-5)

In addition to being familiar with the domain in which a new artifact will be tested, professionals who analyze novelty complexity are required to have a clear awareness and knowledge of the relationships between user-perceived complexity and cognitive stability. Perceived complexity is more related to the "gulf of evaluation", and cognitive stability to the "gulf of execution", in Norman's terminology (Norman 1986). Even if adaptation is an asset of human beings, their lives are better when technology is adapted to them. Therefore, novelty complexity analysts need to better understand the co-adaptation of people and technology with the aim of increasing cognitive stability. Cognitive stability is defined using the physical metaphor of passive and active stability, which respectively involve static and dynamic complexity. These concepts were taken into account to support human-centered design and have led to the following major principles: simplicity, observability and controllability, redundancy and cognitive support (Boy 2002).

Let us take a biological approach to understanding the complexity of interactive systems by finding out the salient parts and their interrelations. As already seen in Chap. 5, complexity is intimately related to separability. When a doctor administers a medication to a patient, he or she has to know the side effects of this medication (i.e., acting on a part may have an effect on other parts). When a part (e.g., the respiratory system) is failing, medication is usually provided to treat the lung disease, but this medication may have an impact on other parts of the body (i.e., the whole system). Of course, we will always attempt to separate what is separable in order to simplify! But there is an end to this separability process. There are "atomic" parts that are not at all separable. These atomic parts live by themselves as a whole, possibly requiring interaction with other atomic parts. The problem is then to figure out how complex they are by themselves and what kind of complexity their interrelations generate. Designers and users of a system may not see the same parts and interrelations, simply because they do not have the same tasks to perform, or the same goals to achieve, with respect to the system. They do not decompose the system in the same way because they do not have to understand the logic of the system in the same way. Separable parts and their interrelations can be seen as a conceptual model. The closer the designer's conceptual model is to the user's conceptual model, the better. Therefore, people in charge of analyzing novelty complexity need to be aware of the relevant parts and of the overall maturity evolution in terms of the AUTOS pyramid.

AUTOS-Complexity Criteria

Key criteria have been derived from the 62 elicited conceptsFootnote 5 on novelty complexity presented in the above CMap. They were categorized to fit the five AUTOS complexity sets of criteria that follow:

  • A-complexity: interface simplicity, required explanation, redundancy and situation awareness.

  • U-complexity: user’s knowledge, skills and expertise.

  • T-complexity: task difficulty according to a spectrum from best practice to well-identified categories of tasks.

  • O-complexity: required level of coupling between the various purposeful agents to handle the new artifact.

  • S-complexity: predictability of the various purposeful situations.

These criteria may depend on each other. For example, analysis of the required explanation (A-complexity criterion) to handle a new artifact is a matter of the maturity of the underlying technology. If it is mature, then complexity can be hidden from the user; otherwise it must be shown with the right level of required explanation. Consequently, a user needs to understand the provided explanation, and therefore to have appropriate expertise (U-complexity criterion) and rely on current best practice (T-complexity criterion). Sometimes, the right coupling among actors dealing with the artifact (O-complexity criteria related to cooperation and coordination of activities, for example) in predictable situations (S-complexity criterion) simplifies its usage. This example demonstrates the need for designers to master the various categories of novelty complexity criteria and their possible interrelations.

Using novelty complexity criteria is a matter of either expert judgment or decomposition into indicators that enable designers to reach measurable variables. The former methods are usually called subjective, the latter are said to be objective. At some point, subjectivity always enters into the picture! Objective methods are either based on qualitative measures or require strong interpretation of quantitative results in the end. In order to facilitate the job of human-centered design teams, 63 indicators {Ij} were developed from the elicited concepts and related to the novelty complexity criteria using a CMap. To summarize, a criterion Ci is a combination of several indicators {Ij}. It is advisable to run one or several brainstorming or GEM sessions to determine appropriate combinations with domain experts. Such combinations can be modified along the analysis process as more knowledge is acquired on the AUTOS-complexity of the new artifact being analyzed. The analysis is based on data acquisition methods varying from various kinds of recording (e.g., parameters, verbal protocols and video) to observation, interviews and debriefings.
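As a hedged illustration of how a criterion Ci might be combined from its indicators {Ij}, assuming a simple weighted average (the actual combination must be elicited with domain experts, for instance in GEM sessions), here is a minimal Python sketch with hypothetical indicator names:

from typing import Dict

def criterion(indicators: Dict[str, float], weights: Dict[str, float]) -> float:
    """Combine indicator values I_j (e.g., rated on a 0..1 scale) into one criterion C_i.

    A weighted average is only one possible combination; the appropriate
    aggregation should be agreed upon with domain experts and may change
    as AUTOS-complexity knowledge about the artifact accumulates.
    """
    total_weight = sum(weights.values())
    return sum(weights[name] * indicators[name] for name in weights) / total_weight

# Hypothetical A-complexity criterion "required explanation"
a_required_explanation = criterion(
    indicators={"doc_lookups_per_task": 0.6, "unexplained_mode_changes": 0.8},
    weights={"doc_lookups_per_task": 1.0, "unexplained_mode_changes": 2.0},
)

The indicator values themselves would come from the data acquisition methods listed above (recordings, observation, interviews and debriefings).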

Modeling and Simulation Methods and Tools

Computer-aided design (CAD) is a software-based technique that enables designers and engineers to draft mockups and prototypes. It was initially geometry-based, but over the years it has become a full design-support capability for almost any system, whether cars, aircraft, power plants, houses, kitchens, furniture or clothes. In the early days, people drafted systems on paper using pens; today CAD supports the same kind of job using software. CAD now enables design and development of both the structure and the function of a system, bringing it to "life" through simulation. Even more important, CAD led to the development of integrated design and development processes covering the whole life cycle of a product (i.e., from first idea to design, manufacturing, delivery, marketing, operations, maintenance and dismantlement). CAD is also very important for documentation purposes, again during the whole life cycle of a product. Consequently, in this section we will talk about CAD not only as a drafting technique and tool, but also, and mainly, as a life cycle support system.

Drafting Objects

What is a design object? A design object is defined by three main attributes: a shape (or a structure), a behavior (or a function) and possible connections with other objects. There can be more attributes such as materials, processes, regulations related to its use and so on.
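Such a design object can be sketched as a simple data structure. The following Python snippet is only an illustration of the three attributes (shape, behavior, connections) plus optional extra attributes; it is not the API of any particular CAD system, and all names and values are hypothetical:

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class DesignObject:
    """A design object: a shape, a behavior, and connections with other objects."""
    name: str
    shape: Dict[str, float]                         # e.g., geometric parameters
    behavior: Callable[[float], Dict[str, float]]   # function of time -> local state
    connections: List["DesignObject"] = field(default_factory=list)
    attributes: Dict[str, str] = field(default_factory=dict)  # material, process, regulation, ...

# Example: a wing panel connected to a rib
rib = DesignObject("rib-12", {"length_m": 1.2}, behavior=lambda t: {"load_N": 0.0})
panel = DesignObject(
    "panel-3",
    shape={"span_m": 2.0, "chord_m": 0.8},
    behavior=lambda t: {"deflection_mm": 0.1 * t},
    connections=[rib],
    attributes={"material": "Al-2024"},
)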

The first thing we do to explain a complex idea is to draw a picture! The more realistic and accurate the picture is, the better we will be understood by our audience. We keep presenting statements, drawings and pictures on PowerPoint slides to make sure that our audience gets what we want to communicate. However, these presentations are not only for communication purposes; by making them we tend to rationalize what we think. The presentations we make are a kind of modeling and simulation of our purpose. CAD goes a step further by providing even more accurate features in space (2D and 3D), time (dynamics), and many other dimensions such as constituency, reliability and cost of the various elements.

Today, CAD systems provide more than drawing capabilities. They provide possibilities for computer animation, online documentation, and traceability. For example, CATIA, developed by Dassault Systemes, is certainly a reference for CAD. Dassault developed CATIA internally in 1977 to support the development of the Mirage fighter airplane. Today, CATIA is used by many big manufacturers and suppliers all over the world.Footnote 6

Integrating Objects into Systems

Design objects can be progressively integrated into engineered systems (e.g., designed tiles and rivets are progressively integrated into the wing of a spacecraft model). In addition, the local dynamics of a design object should be integrated into the design of the resulting embedding system. The "sum" of the dynamics of the various objects provides emerging dynamics to the embedding system. At this point, the emerging behavior can be either automatically generated by simulation, or directly programmed from an external source. Of course, we would like to be able to trust the modeling and simulation capability and discover emergent behaviors from the CAD system. This is not always the case, but it is a challenge that today's technology might enable us to overcome in many cases. This is why understanding non-linear systems and complexity science is crucial in design.
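The following minimal Python sketch illustrates, under strong simplifying assumptions, how summing the local dynamics of components yields a system-level trajectory that is not programmed explicitly. The components and numbers are hypothetical and serve only to show the composition idea:

from typing import Callable, Dict, List

LocalDynamics = Callable[[float], Dict[str, float]]  # time -> local contribution

def simulate_system(components: List[LocalDynamics],
                    duration_s: float, dt: float = 0.1) -> List[Dict[str, float]]:
    """Step each component's local dynamics and sum their contributions.

    The resulting trajectory is the embedding system's behavior, which
    emerges from composition rather than being programmed directly.
    """
    trajectory: List[Dict[str, float]] = []
    steps = round(duration_s / dt) + 1
    for i in range(steps):
        t = i * dt
        state: Dict[str, float] = {"t": t}
        for component in components:
            for key, value in component(t).items():
                state[key] = state.get(key, 0.0) + value
        trajectory.append(state)
    return trajectory

# Two hypothetical components contributing to the same lift variable
wing = lambda t: {"lift_N": 1000.0 + 50.0 * t}
tail = lambda t: {"lift_N": -100.0}
history = simulate_system([wing, tail], duration_s=2.0)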

Systems are often interfaced with other systems, but most importantly people have to interact with them. This is where human-system integration (HSI) enters into play, and the earlier the better! The first step, of course, involves end users in the design of high-level requirements using methods and techniques such as UML (Unified Modeling Language), but also human-in-the-loop simulations (HITLS), in which end users are able to generate emerging behaviors (we will present and discuss HITLS later in this chapter). The most important thing to remember at this point is that modeling and simulation techniques are important to rationalize HSI and enable the discovery of possible emerging behaviors to be further tested in real-world experiments, such as flight tests.

Integrating Systems into Systems

In the same way as objects are integrated into systems, systems are integrated into bigger systems, and so on. This is why the concept of systems of systems is so important to understand (Maier 1998; Carlock et al. 1999; Krygiel 1999; Sage and Cuppan 2001). In the military domain, the systems-of-systems approach has already been promoted mainly for enhancing interoperability and synergism.

Networks develop very fast on our planet to the point that the concept of systems of systems has become a tangible reality. The Global Earth Observation System of Systems (GEOSS) "enables us to envision a world where more people will be fed, more resources will be protected, more diseases will be mitigated or even prevented, and more lives will be saved from environmental disasters."Footnote 7 GEOSS is planned to be able to integrate data coming from many thousands of individual Earth observation technologies around the globe, portraying various integrated ecological systems. GEOSS "architecture contains about 3,000 elements that are involved in earth science research: observation sources, sensors, environmental parameters, data products, mission products, observations, science models, predictions, and decision-support tools. The science models use observations from space-based instruments to generate predictions about various aspects of the environment. These predictions are used by decision-makers around the world to help minimize property damage and loss of human life due to adverse conditions such as severe weather storms. The architecture is developed using both traditional and nontraditional systems engineering tools and techniques" (Martin 2008).

In the first place, a system of systems (SoS) may not appear to be fully structured and functional. SoS development is evolutionary in the sense that functions and purposes are incrementally added, removed and modified with experience in the use of the system. Emergent behaviors will incrementally appear; some functions and purposes will be created, others will become obsolete, and others will merge or split. This is an organizational learning process that takes place during the life cycle of the SoS; the SoS is intrinsically complex and adaptive. Each system in the SoS is independent and useful in its own right. Some systems may be redundant with others, and coordinated with them, to ensure the global stability of the SoS.

Integration of systems into a final system (e.g., aircraft systems into the aircraft itself) requires significant planning, preparation, time and resources. It is usually best to use a single facility for the integration, with competent people and organized, tested processes. Leadership is key; the on-site integration leader must be empowered by the operational community and supported by an SoS framework with sufficient resources and authority. A traceability process should be put in place and used effectively by competent personnel. Integration does not go without issues, last-minute problems and failures; contingency plans and schedules must be available. This is another reason to plan ahead during human-in-the-loop simulations of the integration process itself.

For all these issues and reasons, the Orchestra Model (developed in Chap. 2) is entirely purposeful here. A common frame of reference (music theory) has to be set up for all actors to understand each other. Task assignments (scores) must be coordinated at the highest level (by the composer). The integration leader must understand the overall SoS (by analogy with music, the symphony) and coordinate the actual integration with authority. Competent performers not only know their own disciplines perfectly; they also have a trans-disciplinary skill set and knowledge.

Discrete Event Simulation

A discrete-event simulation (DES) enables construction of a conceptual framework that describes the system (modeling), execution of experiments using a computer implementation of the model (simulation), and drawing of conclusions from the output to assist subsequent decision-making (analysis). It is basically a chronological sequence of events that mark system state changes. The sequence of events is linear unless there is an abnormal situation that forces branching into another sequence of events. Generally, events are sequenced by a clock, which enables stepping from one event to the next. A DES can use several lists of events, which can be chained appropriately if needed (e.g., simulating nominal or off-nominal conditions).
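A minimal sketch of such a clock-driven event list, written in Python around a priority queue of timestamped events and an illustrative (hypothetical) aircraft turnaround scenario, might look as follows:

import heapq
from typing import Callable, List, Tuple

class DiscreteEventSimulator:
    """Minimal discrete-event simulator: a clock and a time-ordered event list."""

    def __init__(self) -> None:
        self.clock = 0.0
        self._events: List[Tuple[float, int, str, Callable[[], None]]] = []
        self._counter = 0  # tie-breaker for events scheduled at the same time

    def schedule(self, time: float, name: str, action: Callable[[], None]) -> None:
        heapq.heappush(self._events, (time, self._counter, name, action))
        self._counter += 1

    def run(self, until: float) -> None:
        while self._events and self._events[0][0] <= until:
            self.clock, _, name, action = heapq.heappop(self._events)
            print(f"t={self.clock:6.2f}  {name}")
            action()  # state change; the action may schedule further events

# Example: a (hypothetical) aircraft turnaround as a chain of events
sim = DiscreteEventSimulator()
sim.schedule(0.0, "aircraft arrives at gate",
             lambda: sim.schedule(25.0, "boarding starts", lambda: None))
sim.schedule(40.0, "pushback", lambda: None)
sim.run(until=60.0)

Branching into an off-nominal sequence simply amounts to an action scheduling events from a different list, as mentioned above.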

DES is attractive because it enables designers and modelers to compress or expand time (locally or globally), control sources of variation, avoid measurement errors, stop and review, restore a system state, replicate runs, and control the level of detail.

When you start a DES model, a few questions should come to mind: is the system deterministic or stochastic? static or dynamic? continuous or discrete? If the system is stochastic, some state variables are random. If the system is dynamic, time evolution is important. In a Monte Carlo simulation, for example, the model is stochastic but time evolution is not important.
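As a hedged illustration of the stochastic-but-static case, here is a small Monte Carlo estimate in Python in which state variables are random but time plays no role; the reliability figures and the scenario are entirely arbitrary:

import random

def monte_carlo_mission_success(n_runs: int = 100_000, seed: int = 42) -> float:
    """Estimate a mission success probability from random component behavior.

    Stochastic (state variables are random) but static (no time evolution):
    each run draws independent outcomes and we average over all runs.
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_runs):
        sensor_ok = rng.random() < 0.98      # hypothetical reliabilities
        link_ok = rng.random() < 0.95
        operator_ok = rng.random() < 0.99
        successes += sensor_ok and link_ok and operator_ok
    return successes / n_runs

print(f"estimated P(success) ~ {monte_carlo_mission_success():.3f}")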

Another example is the use of a discrete-event simulation model to identify and understand the impact of different failures on the overall production capabilities of a chemical plant (Sharda and Bury 2008). What mattered in this case was to understand which key equipment components contribute most to production loss and to analyze the impact of a policy change on production losses. DES was applied to enhance decision-making (i.e., a policy change in terms of new equipment installation or stock-level increases for the failure-prone components).

A DES model was also applied to study the effects of alternative team configurations and system designs on situational awareness in multi-unmanned-vehicle control (Nehme et al. 2008). Such an approach enables quantifying operator situational awareness by using data to more accurately predict metrics such as mission performance and operator utilization. In this case, DES allows us to avoid expensive and time-consuming user studies.

DES is sometimes referred to as time-line analysis (TLA), in which sequences of events are placed on a time line. Time-line analysis methods were developed and used to solve various problems such as aircraft cockpit certification (Boy and Tessier 1985). In human factors, TLA was initially conceived to better understand human operators' performance and workload. It is best used to identify changes that impact process performance. TLA is mostly used in accident analysis, but it is also a very good modeling and simulation method for design.

Conclusion

Modeling and simulation is a crucial approach for the development and integration of human-centered design, and it goes far beyond its initial drafting intention. M&S can support the whole life cycle of a life-critical system, from facilitating creativity and design thinking to rationalization, information sharing, and finally evaluation and validation. It needs to be understood and managed at the highest hierarchical level of the organization. M&S is particularly useful for analyzing and designing interaction among the various human and machine agents of an LCS, and for better understanding its complexity, dynamics and control. M&S enables mastery of system novelty. M&S should not only be supported by computer-aided design, but also used for system integration, with discrete event simulation and human-in-the-loop simulation as integral parts of M&S.