
1 Introduction

User interfaces (UIs) are a central aspect of any ICS and are critical for user acceptance. According to the technology life cycle presented by Norman [1], there is a transition point after which a technology satisfies basic needs and customers become much more interested in efficiency, pleasure, and convenience. Therefore, the UIs associated with a technology should be adjusted over time to meet users' expectations. Adjustment over time is also important when designing UIs because users' specificities are not static: they change over time.

Even with the advances of user-centered approaches, design (e.g. universal design [2]) and usability (e.g. universal usability [3, 7]) issues persist [4]. According to Meiselwitz et al. [5], the challenges to achieving universal usability are associated with gaps in users' knowledge and with both technological and user diversity. Several approaches have been developed to take diversity into account when developing ICSs. Some advocate universal solutions, while others favor specific designs targeted at particular user profiles as a way of embracing users' diversity. Both approaches have advantages and disadvantages. On the one hand, designing for all users poses several challenges. One of the biggest challenges pointed out by Huh et al. [3] is finding the right balance between supporting all users and bringing enough profit to the designers. This is related to the fact that extreme users tend to be easily ignored due to their small number or simply because their existence is not always known. On the other hand, designing for specific users or situations might deal better with extreme users, but usually at the expense of universality.

This paper outlines a generic approach that allows the application of specific design guidelines to any individual ICS or set of ICSs at runtime. The approach is based on the analysis of the graphical user interface (GUI) using both computer vision-based (Sikuli-based [6]) and affective computing-based techniques. This makes the approach generic and removes the need to access the ICSs' source code. It enables developers and designers to improve the use of existing ICSs by empowering them to redefine existing GUIs and, consequently, to support design for a wider diversity of users. GUI adaptation without source code access poses several challenges, since information access and interaction must be performed using computer vision-based techniques. In addition, the original GUI must be hidden or augmented to accommodate the introduced modifications.

The article is structured as follows. Section 2 describes background concepts and presents related research. Section 3 presents the proposed approach, which is illustrated with an example in Sect. 4. Finally, discussion and conclusions are presented in Sects. 5 and 6.

2 Related Work

Developing ICSs that address the diversity of users poses several challenges. The term "universal design" describes the concept of designing all products to be ideally usable by everyone. Universal usability [7] was introduced with the goal of facilitating the use of interfaces. From those concepts, several principles, guidelines, heuristics, and standards were developed [8,9,10]. However, those approaches are sometimes not sufficiently complete to meet the usability needs of specific users [4]. They might even be contradictory [11].

Those challenges led to the proposal of a new set of solutions to support diversity, mostly based on computer vision. The ISI (Interactive Systems Integration) tool [12] enables the integration of several GUIs of different ICSs into one new GUI adapted to the user's characteristics. However, features such as collecting the state of the original GUIs are still primitive. The work of Dixon et al. enables the identification of some GUI widgets [13, 14]. From this identification it is possible to build GUI pixel-based enhancements [13]. One example presented by the authors is the automatic translation of a GUI using Prefab, a tool based on computer vision algorithms. The approach provides pixel-based enhancements (to a set of GUI widgets) but not the redefinition of the GUI. We argue that supporting attentive real-time GUI redefinition is of major importance to foster design for diversity. Other works [15, 16], such as SUPPLE [17], enable automatic generation of GUIs adapted to a person's specificities, but they do not enable the redefinition of existing GUIs.

Several solutions from the computer vision field provide a basis for GUI widget identification (e.g. OpenCV, CVIPtools). For example, the YOLO neural network [18] is a real-time object detection solution that can detect over 9000 object categories in an image or video. Unfortunately, it has not been applied to GUI widgets. We believe that this approach can be successfully applied to support GUI redefinition, but we are unaware of any work with this purpose or of any annotated database of GUI widgets, which would be essential for it.
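As an illustration of how such techniques might be used, a single widget can already be located in a screenshot with simple template matching. The sketch below is a minimal example using OpenCV's Python bindings; the file names screen.png and button.png are hypothetical placeholders for a captured screenshot and a reference image of the widget, and the confidence threshold is arbitrary.

    import cv2

    # Load a screenshot of the GUI and a reference image of the widget (hypothetical files).
    screen = cv2.imread("screen.png", cv2.IMREAD_GRAYSCALE)
    widget = cv2.imread("button.png", cv2.IMREAD_GRAYSCALE)

    # Normalized cross-correlation produces a similarity map over the screenshot.
    result = cv2.matchTemplate(screen, widget, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)

    # Accept the best match only above a confidence threshold (0.8 chosen arbitrarily).
    if max_val > 0.8:
        h, w = widget.shape
        center = (max_loc[0] + w // 2, max_loc[1] + h // 2)
        print("Widget found at", center)
    else:
        print("Widget not found")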

Emotion has gained importance in Human-Computer Interaction. Affective Computing [19], in particular, addresses several challenges concerning computers and emotions (e.g. the ability to recognize and express emotions). The works of Tao et al. [20] and Poria et al. [21] provide adequate reviews. Works in the area of affective interfaces consider emotion in the design (e.g. [22]). For example, the work of Mori et al. [23] reports results that aim to improve the understanding of which design techniques are more important to stimulate an emotion in the user. Alexander et al. [24] outlined the plan for an interface that adapts, as humans do, to the non-verbal behavior of users. The idea has similarities with our approach; however, our focus is on GUI redefinition based not only on the user's emotions but also on other user features (e.g. disease, experience).

3 GUI Redefinition and Design Guidelines

Prior work by Gaganpreet et al. [25] developed an emotional state estimator that provides input for runtime GUI redefinition in the context of life-critical robot teleoperation. This work complements Prefab's approach because users' emotions can be taken into account. However, the GUI redefinition is done manually, case by case, and without any guidelines.

Combining those works enables a generic "GUI redefinition for all" approach fed by the user's emotional state. This is beneficial because it enables designers and developers to keep the existing design and source code while still being able to redefine the GUI for diversity at runtime. Those advances might be further improved if the user's specificities are considered. In addition, the development of a Design Guidelines Provider (see Fig. 1) that suggests automatic GUI redefinitions based on both the user's emotional states and specificities also represents an improvement.

Fig. 1. GUI redefinition approach.

Figure 1 presents the architecture of our GUI redefinition approach. It is composed of three main components that enable the proposed redefinition (a minimal sketch of how they might compose at runtime is given after the list):

  1. Emotional State Estimator, which identifies the user's emotional states from physical monitoring (e.g. via Affectiva and iMotions);
  2. User's Specificities Identifier, which identifies the user's profile and personality (Myers-Briggs Type Indicator) from the answers provided to a questionnaire and from the results of a test task;
  3. Design Guidelines Provider, which suggests design guidelines for the GUI redefinition based on the identified emotional states and user's specificities.
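As referred to above, a minimal sketch of how these components might compose at runtime is given below. It is only an illustration: all function names are hypothetical placeholders, not the actual implementation.

    # Hypothetical composition of the three components at runtime (all names are placeholders).

    def estimate_emotional_state(sensor_data):
        """Emotional State Estimator: maps physiological/facial measurements to a label."""
        ...

    def identify_specificities(questionnaire_answers, test_task_results):
        """User's Specificities Identifier: derives the user's profile and personality traits."""
        ...

    def provide_guidelines(specificities, emotional_state):
        """Design Guidelines Provider: returns redefinition rules for the detected context."""
        ...

    def redefinition_loop(sensor_stream, questionnaire_answers, test_task_results, apply_redefinition):
        specificities = identify_specificities(questionnaire_answers, test_task_results)
        last_state = None
        for sensor_data in sensor_stream:                  # continuous physical monitoring
            state = estimate_emotional_state(sensor_data)
            if state != last_state:                        # redefine the GUI only on state changes
                apply_redefinition(provide_guidelines(specificities, state))
                last_state = state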

The output of the Design Guidelines Provider supports an automatic GUI redefinition at runtime. The old GUI is hidden, running in a virtual machine, and only the new GUI is presented to the user. We follow the approach of Silva et al. [12] to make the GUI redefinition transparent to the user. Silva et al. developed ISI, a tool that enables the creation of a new UI abstraction layer integrating different ICSs without accessing their source code. The proposed integration aims to improve end-user interaction. The tool uses enriched ConcurTaskTree models and selected scenarios to generate Sikuli scripts. Each script is then associated with a widget of the new GUI. Interacting with a widget of the new GUI triggers the execution of the associated Sikuli script, which performs the task on the existing ICSs.
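For illustration, the Sikuli script associated with, for example, a "Save" widget of the new GUI could look like the minimal sketch below. It relies only on standard SikuliX functions (wait, click, exists); the image file names and the confirmation dialog are hypothetical.

    # SikuliX script (Jython) triggered when the user activates the new "Save" widget.
    # Image files are hypothetical reference screenshots of widgets of the original GUI.

    wait("original_app_window.png", 10)     # wait until the hidden original GUI is available
    click("file_menu.png")                  # open the File menu by visual matching
    click("save_entry.png")                 # select the Save entry
    if exists("confirm_dialog.png", 5):     # handle an optional confirmation dialog
        click("ok_button.png")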

This work enables an innovative approach that addresses diversity more effectively. To this end, an example of an initial set of guidelines (a mapping between the user's specificities/emotional state and redefinition rules) for the Design Guidelines Provider is presented below.

Design Guidelines for Some Detected User’s Specificities

  1. Novice: step-by-step guidance with a tutorial;
  2. Expert: automation;
  3. Parkinson's: selection via gaze, increased widget size;
  4. Blind: translation rules;
  5. Deaf: augmented captions.

Design Guidelines for Some Detected User’s Emotional States

  1. Frustrated: more effective interaction;
  2. Confused: clarification, more feedback;
  3. Stressed: use additional communication channels (e.g. scent, relaxing music) to appease the user;
  4. Overloaded: split information into different communication channels (e.g. visual, audio, haptic).

This is only an example of an initial set of guidelines. Machine learning algorithms will be developed to identify better guidelines based on the combination of the detected inputs (user's specificities and emotional states) and the users' reactions. The resulting guidelines will be used as a basis for better automatic GUI redefinition.
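Before any machine learning is introduced, the Design Guidelines Provider can be sketched as a plain rule table that merges the suggestions associated with each detected input, concretizing the provide_guidelines stub from the architecture sketch above. The mapping below simply mirrors the guideline lists and is illustrative only.

    # Initial rule table mirroring the guideline lists above (illustrative only).
    SPECIFICITY_RULES = {
        "novice": ["step-by-step guidance with a tutorial"],
        "expert": ["automation"],
        "parkinson": ["selection via gaze", "increased widget size"],
        "blind": ["translation rules"],
        "deaf": ["augmented captions"],
    }

    EMOTION_RULES = {
        "frustrated": ["more effective interaction"],
        "confused": ["clarification", "more feedback"],
        "stressed": ["additional communication channels (scent, relaxing music)"],
        "overloaded": ["split information across visual, audio and haptic channels"],
    }

    def provide_guidelines(specificities, emotional_state):
        """Merge the guidelines triggered by the detected specificities and emotional state."""
        guidelines = []
        for s in specificities:
            guidelines += SPECIFICITY_RULES.get(s, [])
        guidelines += EMOTION_RULES.get(emotional_state, [])
        return guidelines

    # Example: a frustrated user whose motor difficulties are related to Parkinson's disease.
    print(provide_guidelines(["parkinson"], "frustrated"))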

We aim to enrich GUIs with emotional intelligence: in the same way that a person usually adapts to their interlocutor during an interaction, our approach enables a GUI to automatically adapt at runtime to the user who is interacting with it.

4 Example

The purpose of this section is to illustrate the value of the approach with a concrete example in which the support for user diversity becomes clearer.

Consider an application running on a tablet while its user is travelling by train. Due to the vibrations of the train, the user becomes frustrated because he cannot always hit the desired widget at the first attempt. In addition, the analysis of the interaction reveals that the user has motor difficulties. Therefore, the input provided to our Design Guidelines Provider (frustrated emotional state and motor difficulties) leads to design guideline suggestions such as:

  • increase the size of icons;

  • associate missed clicks with the closest widget;

  • enable gaze interaction.

Those guidelines are then automatically applied at runtime for the GUI redefinition. The GUI is updated whenever changes in the user's emotional state are detected.
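As an example of how one of these suggestions could be operationalized, the "associate missed clicks with the closest widget" guideline might be realized roughly as sketched below, assuming the widget bounding boxes have already been recovered by the computer vision layer; all names and the distance tolerance are hypothetical.

    import math

    def snap_click(click, widgets, max_distance=60):
        """Redirect a click that hit no widget to the closest widget center (within a tolerance)."""
        x, y = click
        best, best_dist = None, float("inf")
        for name, (wx, wy, w, h) in widgets.items():
            cx, cy = wx + w / 2, wy + h / 2
            dist = math.hypot(cx - x, cy - y)
            if dist < best_dist:
                best, best_dist = name, dist
        return best if best_dist <= max_distance else None

    # Example: bounding boxes (x, y, width, height) recovered from the screenshot.
    widgets = {"save": (100, 40, 80, 30), "open": (200, 40, 80, 30)}
    print(snap_click((150, 90), widgets))   # -> "save"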

5 Discussion

Most contributions tend to develop models, applications or technologies to improve the design and development of new and better ICSs. This work aims to facilitate the improvement of existing ones without accessing their source code. One can argue that (old) existing ICSs tend to disappear, being replaced by new versions. The fact is that many systems and old versions are still in use today. Some of them will eventually be updated in the future, but the remaining ones will not, for two reasons: (i) the absence of updated versions; (ii) the user not updating them.

Several ICSs do not enable personalization, nor were they designed to consider diversity. Furthermore, as integration is also considered, this approach enables the application of specific or universal solutions to the set of GUIs to be integrated. Ultimately, this can lead to a new way of designing and developing ICSs in which they are used, as a whole, in task- and user-centered design approaches. For instance, several pieces of existing software with different purposes can be merged into a new GUI whose goal is not to expose the functionalities of each system but rather to use them to empower the user to accomplish tasks more easily.

The design of GUIs to be run on top of existing ICSs raises implementation challenges, but we believe that the advantages are strong:

  1. GUIs enhanced with emotional intelligence, adjusting themselves to the user who is interacting with them;
  2. Design for diversity can be automatically applied at runtime to new and existing GUIs.

Challenges of using computer vision-based techniques, such as disambiguation when identifying widgets, can be reduced with anchors (the UiPath software follows this method). Theme variations (e.g. color, text font, size) might introduce difficulties when running the Sikuli scripts. These can be reduced by introducing transparencies in the images provided to the scripts.
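A hedged sketch of the anchor idea in SikuliX terms: when several visually identical widgets exist (e.g. multiple "OK" buttons), the search can be restricted to a region defined relative to a unique anchor image. Only standard SikuliX Region operations (find, right, click) are used; the image names are hypothetical.

    # SikuliX (Jython): disambiguate identical "OK" buttons using a nearby unique label as anchor.
    anchor = find("print_settings_label.png")   # unique element close to the intended button
    region = anchor.right(300)                   # search only in the 300-px-wide area to its right
    region.click("ok_button.png")                # click the "OK" button inside that region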

It is important to note that, although some parts are already implemented, the presented approach is ongoing work. The solution and guidelines must be evaluated with user studies. For instance, GUI runtime modifications in one direction can be fast when user confusion is detected, but they should be slow when the user's emotional state is returning to its "normal" state. The identification of an adequate delay to be used in this case is one example of the importance of the planned user studies.
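To make this point concrete, such an asymmetric delay could be expressed roughly as below; the delay values are arbitrary placeholders that the user studies are meant to tune, and the function name is hypothetical.

    import time

    def should_redefine(new_state, last_state, last_change_time,
                        fast_delay=0.5, slow_delay=30.0):
        """Asymmetric delays: react quickly to problematic states, slowly when returning to normal."""
        if new_state == last_state:
            return False
        elapsed = time.time() - last_change_time
        delay = slow_delay if new_state == "normal" else fast_delay
        return elapsed >= delay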

6 Conclusions

This paper presents an approach that aims to enable developers and designers to change any existing GUI at runtime based on both the user's specificities and emotional state. This will ultimately enable design for diversity to be broadly applied. In addition, an example of a set of design guidelines based on this innovative approach is presented.