
1 Introduction

Ambient Intelligence (AmI) refers to a new paradigm of interaction where humans and other “smart players” interact with, and are supported by, their smart surroundings [1]. Smart players are entities acting in smart surroundings; examples include humans, animals, smart factory machines, autonomous robots, and other intelligent autonomous systems. Examples of intelligent autonomous systems are home robot vacuums, security and surveillance systems, smart home energy management systems, and enhanced digital media equipment. They are designed to work on their own to support a domain-specific task, in most cases without depending on infrastructure support from their operating environment. Smart surroundings are the everyday living or operating environments of these autonomous systems: physical spaces that have been instrumented to provide natural interaction capabilities and useful behaviors such as rule-based automation and statistically learned adaptations.

The concept of Ambient Intelligence has been widely adopted. It has proved effective in making life easier [2], supporting healthier living [3], and reducing in-house energy consumption [4, 5]. Much valuable research has reported successful deployments of AmI in various application domains such as independent living, energy-aware production, and smart health.

To make AmI happen, everyday objects must first be transformed into networked information appliances [6]. This is done by augmenting everyday objects and devices with sensing, communication, and networking technology to support a specific task. Next, the available information appliances and smart players together form “ad hoc ensembles”. A generic architecture supporting ensemble creation is presented in [8]. This last step orchestrates the entities available in a smart surrounding to offer a coherent behavior [8] and collective intelligence [9] (cf. Fig. 2). In doing so, a composition of systems is created. The coherently acting devices that implement a higher-level collective intelligent behavior are the compositional parts of Ambient Intelligence. We use the concept of meta systems [10] to refer to such compositions. In other words, Ambient Intelligence can be considered a meta system with intelligence that governs multiple information systems (the information appliances) to support various autonomous systems (the smart players) residing and acting within the same physical space. Figure 1 shows the relationships among the different components of an Ambient Intelligence environment.

Fig. 1. AmI is a meta system composed of the physical space, networked information appliances, sensors, and infrastructure that are seamlessly integrated into the physical space with the autonomous intelligent systems – the smart players – residing in and interacting with it [8].

Fig. 2. Left: N autonomous intelligent systems designed to support specific task domains; they are merely collocated, have no awareness of one another, and in some cases have only limited interactions. Right: an instrumented ambient that becomes a habitat for a multitude of autonomous intelligent systems and dynamically orchestrates them into a meta system with collective intelligence [8].

The important question that arises here is: how would users interact with such a meta system that implements a higher-level collective intelligent behavior [11]? Note the conceptual difference to the situation where people interact with N stand-alone systems, each exposing limited intelligent assistance for a very specific task domain. For this latter case, many useful user interfaces have been presented [12, 13]. But can we use traditional UIs to interact with the meta systems explained above?

We use the term Meta User Interface (MetaUI) to refer to user interfaces that support performing tasks at the meta system level. While a significant amount of important research covers interaction issues at the device level, to the best of our knowledge no research has studied the interaction between a human (smart player) and a meta system. In this paper, we present the user-centric analysis and design of a MetaUI that supports performing operations on a meta system. Our major contribution is the analysis, design, prototypical implementation, and evaluation of a 3D-based meta user interface for ambient assisted living scenarios.

2 Related Work

The open literature includes several approaches for interacting with intelligent systems. Ardoti et al. [15] survey a large number of these approaches with an extensive analysis of their limitations, advantages, and usage. For example, natural speech and gesture based interfaces are commonly used to interact with smart objects, such as in [14]. Much research has been done in the area of context recognition, which is a key element for implementing context-aware interaction. The work in [7] discusses group activity recognition, and an approach for measuring location is presented in [20]. Anwar Hossain et al. [19] present an adaptive interaction framework based on the quality of context information to address incorrect automations and to deal with uncertainty. However, the focus of related work is on directly interfacing single parts of an environment, such as a smart TV or intelligent kitchen devices. In contrast, we aim at interfacing the meta system as a whole, rather than focusing on controlling its compositional parts. By doing so, we address the common problems of interaction with smart environments, such as over-automation, the missing ability to override default behavior, and the lack of predictability and observability, which are attributes of meta system interaction rather than of device level interaction.

Many researchers have studied the topic of interaction with intelligent systems and discussed interaction issues. In this work, we analyze these studies to elaborate requirements, which are then used to design an appropriate MetaUI. The lack of control and over-automation have been reported as major weaknesses of fully automated interaction [16], because people do not accept a fully automated environment and in fact always want to be in control. As Sheridan states in his study [16], over-automation negatively affects the acceptance of automated systems. According to [16], there are 10 degrees “to express the level of automation in an adaptive system”. Since Ambient Intelligence is a concept related to automation, a Meta User Interface must offer the ability to change the level of automation in order to ensure user acceptance (cf. Table 1, Requirement #1). Another basic requirement for any kind of user interface is to provide perceivable affordances. Users must be able to figure out the possibilities of interaction as soon as they face anything they want to work with [6]. This holds regardless of whether the interaction is automated or explicit interfaces have disappeared. Therefore, user interfaces for AmI need to be explorable in a way that lets users understand how they can operate their smart surroundings (cf. Table 1, Requirement #2). Another issue is the relatively low reliability of automatic system behaviors, which leads to distrust [17].

Table 1. Requirements elaborated to design Meta User Interfaces

Whether a specific automatic system behavior is reasonable or not, users need to be aware of its existence. They want to be informed when important things go on in their spaces [18] (cf. Table 1, Requirement #3). This is because, if actions initiated by the system are not visible to users, or when users fail to explain what exactly triggered certain automatic behaviors, they might be confused [6]. Further, a lack of visibility and understandability can cause negative mental responses such as anger [6]. In addition, it can lead to incomplete or incorrect mental models, which negatively affect interaction performance and cause misunderstandings. Therefore, when interacting with Ambient Intelligence, users need some means of support for visibility and understandability, so that they can perceive and reason about the automatic behaviors of their surroundings (cf. Table 1, Requirement #4). Users also want to predict how the Ambient Intelligence will react upon certain user activities or in case certain events happen. Empirical studies provide scientific evidence that a lack of predictability leads to distrust [17]. Thus, another requirement for the proposed MetaUI is to support users in predicting the automatic behaviors of Ambient Intelligence (cf. Table 1, Requirement #5). In recent research [19] we discussed that a mixed-initiative approach is key to increasing user acceptance and trust in AmI environments (cf. Table 1, Requirement #6).

Considering this evidence, we propose an interaction approach to overcome the mentioned weaknesses. Next, we explain the architecture of our system.

3 Design of a Meta User Interface

To design the MetaUI, we conducted an empirical user study to understand the tasks a MetaUI needs to support. The study and its results have been presented by Khojasteh and Shirehjini (2014) in [21]. In this section, we describe the architectural components of the proposed system for interfacing meta systems that expose a collective intelligent behavior.

Notice that we explicitly distinguish between tasks performed at the level of single devices, which we refer to as device level tasks, and tasks that are internal to a meta system. An example of a device level task is an elderly person turning on his smart TV to play a social game with his grandson. Operations such as turning devices on and off, or changing the behavior of a smart entity (e.g., an assistive home robot), are therefore not the subject matter of meta interaction, because the scope of the interaction does not go beyond affecting a single autonomous intelligent system. In contrast, tasks such as adjusting the level of automation for an entire house influence attributes of the meta system and can thus be considered operations performed at the meta system level. The MetaUI is composed of a 3D representation of the environment, a behavior manager to create, alter, and delete behaviors, and an action manager to supervise active, previously active, or soon-to-be-activated behaviors (cf. Fig. 3); a structural sketch follows the figure.

Fig. 3. The proposed meta user interface provides an image of the Ambient Intelligence meta system. It is designed to support visibility, predictability, overriding of default behavior, conflict handling, and perceived control in the presence of system-initiated automated behaviors (implicit interaction).
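To make this composition concrete, the following is a minimal structural sketch in Python; the class and method names (MetaUI, EnvironmentView3D, BehaviorManager, ActionManager) are illustrative assumptions chosen for exposition and do not reflect the actual implementation.

```python
class EnvironmentView3D:
    """3D representation of the instrumented environment and its current state."""
    def render(self) -> None:
        ...  # draw rooms, appliances, and the before/after states of behaviors


class BehaviorManager:
    """Creates, alters, deletes, downloads, or learns behaviors (Sect. 3.1)."""
    def create_behavior(self, name: str) -> None:
        ...


class ActionManager:
    """Supervises past, ongoing, and upcoming behaviors (Sect. 3.2)."""
    def show_list(self, which: str) -> None:
        ...  # which is one of "past", "now", "future"


class MetaUI:
    """The meta user interface composes the three components shown in Fig. 3."""
    def __init__(self) -> None:
        self.view = EnvironmentView3D()
        self.behavior_manager = BehaviorManager()
        self.action_manager = ActionManager()
```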

3.1 Behavior Manager

The behavior manager provides the functionality necessary to create new behaviors. In addition, it allows the meta system to download or learn additional behaviors. A behavior in our system is a set of actions that the meta system executes to satisfy a set of post conditions. In order to describe a new behavior, the user declares a set of preconditions and post conditions from the list of all rules in the environment. A rule refers to an environmental or temporal event, or to events related to user actions. For example, “if a person enters the room” is a rule and can be the precondition of specific behaviors that the user requires the system to perform when someone enters the room. Using the pre- and post conditions, representations of the environment state before and after completion of the behavior can be automatically visualized (cf. Fig. 3). The user can access this component by tapping on the automation button shown in Fig. 3.

Through the behavior manager, users can edit downloaded behaviors or create new rules from scratch. Furthermore, it assists users with overriding and changing existing behaviors. The behavior manager maintains three behavior lists, which represent all the automatic responses that the meta system can currently offer. This component, together with the one described next, satisfies Requirement #2, since it allows users to explore the system's behaviors.
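As an illustration of how behaviors, rules, and pre/post conditions could be represented, consider the following minimal sketch; all class names, fields, and example values are hypothetical and chosen only for exposition, not taken from the implemented system.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Rule:
    """An environmental, temporal, or user-action event, e.g. 'a person enters the room'."""
    description: str


@dataclass
class Condition:
    """A statement about the environment state, e.g. 'the ceiling light is on'."""
    entity: str      # e.g. "ceiling_light"
    attribute: str   # e.g. "power"
    value: str       # e.g. "on"


@dataclass
class Behavior:
    """A set of actions the meta system executes to satisfy its post conditions."""
    name: str
    trigger: Rule                                        # the rule that activates the behavior
    preconditions: List[Condition] = field(default_factory=list)
    postconditions: List[Condition] = field(default_factory=list)
    actions: List[str] = field(default_factory=list)     # device level operations to execute


# Hypothetical example mirroring the "Medicine Reminder" behavior of Fig. 3
medicine_reminder = Behavior(
    name="Medicine Reminder",
    trigger=Rule("time event: 08:00"),
    preconditions=[Condition("resident", "location", "living_room")],
    postconditions=[Condition("tv", "screen", "medicine_notification")],
    actions=["show_notification(tv)", "play_chime(speaker)"],
)
```

The pre- and post condition lists are exactly what the MetaUI would need to render the “before” and “after” states of a behavior in the 3D view.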

3.2 Action Manager

In order to make meta system level actions visible and predictable (Requirements #4, #5), the MetaUI implements an action manager component containing the three behavior lists. For each behavior represented in the lists, a declaration part can be displayed, which shows the preconditions and post conditions for that behavior along with the rule that activates it. As shown in Fig. 3, the “Medicine Reminder” behavior is activated when the “time event” happens and alters the environment from the state depicted in the “before” section to the state represented in the “after” section.

The first list, shown in the lower left part of Fig. 3, represents the behaviors that took place in the past, which either completed successfully or were terminated by the system due to conflicts or dissatisfaction, or by the user directly. The second list contains the behaviors that are currently taking place and altering the environment. The user should be able to infer the reason for each alteration in the environment and match it to the currently executing behaviors using their declaration parts; the 3D visualization of the before and after states helps towards this goal. The third list represents the behaviors that will probably take place in the future. We use the term “probably” because whether a behavior takes place depends on the conditions of the environment; at any specific time, each behavior has a probability, a number in the range of 0 to 100. This probability is used to augment the corresponding icon of each behavior, as shown in the upper left part of Fig. 3.

Furthermore, options are provided for each behavior: users can cancel or postpone an ongoing behavior, prevent the system from activating a behavior, and inspect the reasons for terminated past actions in case of a conflict.

Each list appears on the screen when the user taps on the buttons labeled past, now, and future; tapping the same button again makes the corresponding list disappear. A sketch of this organization is given below.
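The following minimal sketch shows how the action manager's three lists, behavior probabilities, and per-behavior options could be organized; the names (BehaviorEntry, ActionManager, icon_opacity, etc.) are assumptions for illustration only, not the actual implementation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class BehaviorEntry:
    """One row of the action manager; fields are hypothetical."""
    behavior_name: str            # e.g. "Medicine Reminder"
    status: str                   # "past", "now", or "future"
    probability: int = 100        # 0..100, meaningful for future behaviors
    termination_reason: str = ""  # e.g. "conflict", set for terminated past behaviors


class ActionManager:
    """Maintains the past, now, and future behavior lists shown in the MetaUI."""

    def __init__(self, entries: List[BehaviorEntry]) -> None:
        self.entries = entries

    def list_for(self, status: str) -> List[BehaviorEntry]:
        """Return the list displayed when the corresponding button (past/now/future) is tapped."""
        return [e for e in self.entries if e.status == status]

    def icon_opacity(self, entry: BehaviorEntry) -> float:
        """Map the 0..100 probability onto an icon augmentation, here its opacity."""
        return entry.probability / 100.0

    def cancel(self, entry: BehaviorEntry) -> None:
        """Cancel an ongoing behavior; it moves to the past list with a termination reason."""
        entry.status = "past"
        entry.termination_reason = "cancelled by user"

    def prevent(self, entry: BehaviorEntry) -> None:
        """Prevent a future behavior from being activated."""
        self.entries.remove(entry)
```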

4 Evaluation

As a means of usability evaluation, a cognitive walkthrough was selected to evaluate our design. Using a cognitive walkthrough, early prototypes can be evaluated, the ease of learning can be estimated, and the reasons for possible errors can be discovered. The evaluation was performed by a group consisting of the authors of this paper and two usability experts from the Ambient Intelligence Laboratory of Sharif University of Technology. As the first step of the process, we identified the users of our Meta User Interface and the main tasks to evaluate, and for each task we defined the correct action sequence. Afterwards, the experts stepped through each task to evaluate whether it could be achieved. For each task, the experts provided success or failure stories as to why the expected users would either choose or fail to choose the action assumed in the action sequence.

The process and the results are depicted in Table 2. After analyzing the results, we found that the first step, which was to find the list icon and tap on it, had failed; that is, users either cannot recognize the availability and existence of the behavior lists or cannot interpret the icons corresponding to each list. This can be fixed in one of two ways:

  1. Redesign the icons.

  2. Make the lists visible at startup.

The first approach might seem easier, but to prevent future usability problems such as the ones we captured, we changed the prototype and applied the second option (Table 2).

Table 2. The cognitive walkthrough shows that the first steps were problematic

Thus, a second prototype (cf. Fig. 3) was built in order to overcome the above-mentioned problems. The differences that distinguish this prototype from the previous one are as follows.

  • The Action Manager is visible by default; the user can hide it later if desired.

  • The Action Manager contains a single-column list.

  • The single-column Action Manager is decomposed horizontally into three sections corresponding to past, present, and future actions.

As you might notice, the Action Manager has a single column in the latter prototype; since it is visible by default, having only one column means it covers less screen space. Moreover, because the list is decomposed horizontally, it can represent the actions while keeping a single column and a smaller width, which allows the Action Manager to occupy a smaller area of the screen.

We conducted a second walkthrough with the same user assumptions and the same tasks. The new prototype leads to changes in the action sequence for each task; therefore, we altered the sequences accordingly.

5 Conclusion

In this paper, we have presented a Meta User Interface designed to satisfy six main requirements that are considered essential according to our literature study. The design of the user interface and the evaluation of the prototype were explained. Several aspects of this system are planned as future work. First, a task migration feature will be included in the system. Another future work is a Wizard of Oz evaluation comparing the system with these features against a system without them. Automatically generating visualizations for behaviors will also be considered in future work. In addition, we intend to include features that allow behaviors to be downloaded and installed on the system, so that the user can install new behaviors on her system just as she can install apps on her smartphone.