
1 Introduction

Nursing programs are designed to teach students the knowledge, skills, and attitudes needed to provide nursing care to patients of various ages, genders, cultures, and religious backgrounds [1]. Students gain these skills through patient interaction, and during a program of study a student is often presented with numerous opportunities to interact with patients in a variety of environments, such as hospitals, clinics, and community settings. Nursing programs provide as much patient interaction as possible; however, there is a need for students to practice their skills outside of direct patient interaction [2]. A mobile virtual patient is a virtual human, acting as a virtual patient, that is implemented on a mobile device such as a tablet or smartphone. We designed and developed a mobile virtual patient prototype for nurse training (Fig. 1) and implemented it on two platforms: a web-based and a mobile virtual patient. We also implemented two interaction modalities, Texting I/O and Speech I/O (Fig. 2), to investigate the effects of interaction style on mobile virtual patient training. The purpose of this research was to design a prototype mobile virtual patient and investigate the effects of the mobility and screen size of the mobile virtual patient. The prototype provides mobility and ease of access so that it can be used anywhere, 24/7, without the need for specialized equipment. This paper presents the implementation details of our mobile virtual patient as well as a user study evaluating the effects of using different mobile platforms and multi-modal input. The user study investigated the effects of mobility and screen size among three devices (a tablet, a desktop, and a smartphone), assigned between subjects, and the effects of I/O style (text and speech) within subjects.
This experimental study had a 3 × 2 mixed design with 3 device types as between-subject conditions and 2 I/O Styles as within-subject conditions. The results of this work will provide a solution for the use of mobile virtual patients as well as provide information as to how platforms and input/output modalities can affect usage in practice.

Fig. 1.
figure 1

A model of our Mobile Virtual Patient for Nurse Training prototype.

Fig. 2.
figure 2

Our Mobile Virtual Patient for Nurse Training interacting with a user through Texting I/O (left) and natural language processing or Speech I/O (right).

2 Related Work

2.1 Simulation-Based Nursing Training

The need for additional practice can be met through simulation techniques that represent patient interaction. "Simulation is a technique -not a technology- to replace or amplify real experiences with guided experiences that evoke or replicate substantial aspects of the real world in a fully interactive manner" [2]. Numerous simulation models have been used by nursing students. In paper-based cases, students read scenarios, often linear, to aid in learning the interaction process [3]. With standardized patients, an actor learns a patient scenario and acts like a real patient in order to simulate patient-nurse interaction; sometimes the actor is replaced by a student [4, 5]. Mannequins allow students to practice techniques that may be difficult to train with other simulation models [2, 5, 6]. Virtual patients are interactive computer simulations that present students with a nursing scenario [6,7,8,9,10,11,12,13,14].

2.2 Virtual Patients

Virtual patients are computer-based simulations of virtual humans modeled as patients using a nursing scenario, allowing dynamic patient interaction and designed to supplement clinical training. Virtual humans are 3D, and in some cases 2D, computer-based visual representations of humans [6,7,8,9,10,11,12,13,14]. Virtual humans can be autonomous agents, which are controlled by a computer, or avatars, which are controlled by a real human [6,7,8,9,10,11,12,13,14]. Virtual patients have the advantages of not requiring an actor, being modifiable with new scenarios, and providing standardization so that all students interact with the exact same scenario. However, most current implementations are large and require an area dedicated to the virtual patient [7, 8, 15, 16]. As technology advances, virtual humans are becoming widely used for marketing, education, training, and research. A number of studies focus on virtual patients [7,8,9,10,11,12,13,14], and there have even been studies on mobile learning [17,18,19]; however, few studies have focused on the use of virtual patients on mobile platforms.

2.3 Mobile Learning Platforms and Studies

A study conducted by Taylor et al. investigated developing a mobile learning solution for health and social care practice [17]. The program was scaled up over five years to introduce mobile learning into health and social care. Their research demonstrated the potential for these platforms to be used more widely across the higher education sector to bridge the gap between the classroom and work-based learning. Another mobile learning study, by Lea et al., investigated enhancing health and social care placement learning through mobile technology [18]. They conducted a three-year study of their mobile learning project and concluded that success in mobile learning needs to be based on a clear set of principles to ensure effective pedagogy for both staff and students.

3 Mobile Virtual Patient Prototype Design

In this section, we provide a description of our prototype. We based the interaction on an existing nurse-patient interaction scenario, which called for a 52-year-old male with iron deficiency anemia and no defining characteristics. The scenario starts with the practitioner asking how the patient is, and the patient replying that he is tired all the time. The practitioner asks questions about when the symptoms began, their frequency, medication history, headaches, light sensitivity, fevers, level of dehydration, weight changes, viral symptoms, breathing, swelling, lightheadedness, and pain or discomfort in the stomach. The patient replies that the stomach is a source of pain, the practitioner asks for more detail about it, and so on. While the responses of our virtual patient are designed to answer a wide range of questions within the scope of this scenario, it is designed to be more dynamic, so that the student practitioner can ask questions in any order, a variety of questions, and in a variety of ways. The design of this virtual character is described in Sect. 3.3. We wanted the prototype virtual patient to serve as an interactive conversational agent [20] with animated gestures that responded using speech or text output, as further detailed in Sect. 3.4.

3.1 Pedagogical Frameworks Used in Design of the System

Nursing students need to learn the knowledge, skills, and attitudes needed to provide nursing care to patients of various ages, genders, cultures, and religious backgrounds. Virtual patients are simulations designed to be used as a training tool; thus, it is important to look at the learning process that nurses follow. We looked at two major frameworks: the Miller triangle and the RTI triangle. The Miller triangle illustrates George Miller's framework for clinical assessment [21], used to evaluate, diagnose, and treat patients. The pyramid's base, or tier one, starts with 'knows', meaning the student has the knowledge required to carry out professional functions correctly. Tier two, 'knows how', refers to a student's competence to perform the functions that they know. Tier three, 'shows how', represents the student's performance when interacting with a patient. Tier four, the final stage, is 'does', which refers to the actions of a student when actually working with patients. Virtual patients can be used to let students practice the 'knows how' tier, and modern virtual patient designs that log the interaction can let students demonstrate what they have learned, thus completing the 'shows how' tier of the Miller triangle.

Another way to view the nursing learning process is through the RTI training triangle [14]. The RTI learning triangle is a learning framework designed to allow nursing students to acquire and practice skills safely in a virtual environment. The base tier of this training triangle is the pedagogical stage, where students familiarize themselves with nursing practices and interactions; it is normally completed in a classroom environment. In tier two, students work in a virtual environment where they acquire and practice the skills that they will need for their profession; this tier could be completed using a virtual patient system. The last tier is on the job, where students finish practicing and validating the skills they have learned.

3.2 Platform for Mobile-Based Virtual Patient Prototype

The model and animations of the virtual patient were created using Reallusion's iClone 4, version 4.3.1928.1 [22]. A model from iClone was used but modified to have no defining characteristics (Fig. 1). The virtual patient's voice was generated using Microsoft's SAPI [23] text-to-speech generator. We used the 'Mike' voice because it sounded closest to a middle-aged man. To control the animation, we used Adobe's Flex Builder, version 3.5 [24]. Flex Builder also provided the framework for receiving I/O interaction from the user. Flex Builder was linked to a MySQL [25] database which contained all of the virtual patient's questions and responses. The database was set up using the question-resolution algorithm provided by Clemson University [REF]. This prototype virtual patient was designed to run in a web browser so that it could be accessed via the internet. This web-based virtual patient provided the foundation and idea for creating a mobile virtual patient (Fig. 2), which was implemented with a Texting I/O interaction style (left) and a natural language processing, or Speech I/O, interaction style (right). The mobile virtual patient prototype posed a few challenges that we had to overcome. Because most virtual character environments had been created for the desktop, at the time this was developed there were no platforms or controller methods to enable the event-driven input/output and interruption of a virtual human interaction flow on the web or on a mobile device. As such, we used video files in a novel way to simulate the interactive responses of the virtual character and to enable interaction with a web-based virtual patient. To create a realistic virtual patient within the limited processing power of a mobile device, we pre-rendered the virtual patient's animation responses and saved each as a video file. Each video file was played for the appropriate response using Android's media player. Another issue was the limited screen real estate of a mobile device; so that the detail of the mobile virtual patient could be seen, only the upper half of the virtual patient was displayed (Fig. 2).
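The pre-rendered video approach above amounts to a lookup from a matched response to its clip, with an idle loop as the fallback. The sketch below illustrates the idea only; the class, method, and file names are our own assumptions, not the actual implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the pre-rendered video approach: each scripted
// response is rendered once offline and looked up at run time, so the
// mobile device only plays video instead of animating a 3D model live.
// All names here (ResponseVideoLibrary, idle_loop.mp4, ...) are hypothetical.
public class ResponseVideoLibrary {
    private final Map<String, String> videoByResponseId = new HashMap<>();

    // Associate a scripted response with its pre-rendered clip.
    public void register(String responseId, String videoFile) {
        videoByResponseId.put(responseId, videoFile);
    }

    // Return the clip for a response, falling back to the idle loop
    // when no pre-rendered clip exists for that response.
    public String clipFor(String responseId) {
        return videoByResponseId.getOrDefault(responseId, "idle_loop.mp4");
    }
}
```

On the device, the returned filename would then be handed to the platform's media player (Android's MediaPlayer in our prototype) for playback.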

3.3 System Design of Virtual Patient

The system design consisted of the six major functions shown in Fig. 3. We implemented the input listener to listen for input provided by a user. When a user asked a question, the input listener would update the state control, letting the system know that a question had been asked. The text version of the input listener used Android's OnClickListener API: when a user clicked the send button, the input listener would fetch the input string and update the state control (Fig. 3). The speech version of the input listener used Android's RecognitionListener API; the listener ran in a loop waiting for user input, and its current state was displayed using color-coded boxes that the listener updated (Fig. 3). We set up the state control to keep track of the current state of the system and of the input and output. After a question had been asked by a user, the state control would send that user's question to the question matching algorithm. The question matching function was set up using the question-resolution algorithm provided by Clemson University. The code provided by Clemson was written in C++ and linked to a MySQL database; the C++ code was converted to Java so that it would run on an Android device. The MySQL database had to be converted to work as a SQLite database, which was done by taking the main SQL commands and inserting them into a Java wrapper that could execute SQL code. The first part of the question-resolution algorithm generated a series of synonyms from the nursing scenario questions. The synonyms created were divided into word pairs, or bigrams. The generated files created a MySQL database that we used to create a SQLite database on the Android device. The created database is used to compare the questions asked by the user with the bigrams in the database.
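The bigram comparison described above can be sketched in plain Java as follows. This is a minimal sketch of bigram-overlap matching under our own naming and scoring assumptions; it is not Clemson's actual question-resolution algorithm, which also uses synonym generation and a SQLite store.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of bigram-based question matching: each known
// response keeps the bigrams of its question phrasings, and an asked
// question is matched to the response with the largest bigram overlap.
public class BigramMatcher {
    private final Map<String, Set<String>> responseBigrams = new HashMap<>();

    // Split a question into adjacent word pairs (bigrams).
    static Set<String> bigrams(String question) {
        String[] words = question.toLowerCase().replaceAll("[^a-z ]", "").split("\\s+");
        Set<String> pairs = new HashSet<>();
        for (int i = 0; i + 1 < words.length; i++) {
            pairs.add(words[i] + " " + words[i + 1]);
        }
        return pairs;
    }

    // Register a response together with question phrasings it answers.
    public void addResponse(String response, String... knownQuestions) {
        Set<String> set = responseBigrams.computeIfAbsent(response, k -> new HashSet<>());
        for (String q : knownQuestions) set.addAll(bigrams(q));
    }

    // Return the response whose bigrams overlap most with the asked question,
    // or null when nothing overlaps (an "unknown response" in the study's terms).
    public String match(String asked) {
        Set<String> askedPairs = bigrams(asked);
        String best = null;
        int bestScore = 0;
        for (Map.Entry<String, Set<String>> e : responseBigrams.entrySet()) {
            int score = 0;
            for (String p : askedPairs) if (e.getValue().contains(p)) score++;
            if (score > bestScore) { bestScore = score; best = e.getKey(); }
        }
        return best;
    }
}
```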
After the question matching algorithm matched the 'asked' question with the correct response, the system updated the state control with the response found. The virtual patient animation view was updated to display the current state of the mobile virtual patient's animation for the appropriate response. The current state of the mobile virtual patient's animation was controlled by the control thread, which maintained the virtual patient in an idle state until the state control was updated with a found response. When a response was found, the control thread would update the animation so that the mobile virtual patient would give the correct response. When the mobile virtual patient finished responding, the control thread would default back to the idle loop until another response was returned.
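The idle/respond behavior of the control thread described above is essentially a two-state machine. The following sketch uses our own class and method names purely as an illustration of that control flow, not as the prototype's actual code.

```java
// Hypothetical sketch of the animation state control: the control loop
// keeps the patient in an idle animation until the state control posts
// a found response, plays the response once, then returns to idle.
public class AnimationControl {
    public enum State { IDLE, RESPONDING }

    private State state = State.IDLE;
    private String currentResponse = null;

    // Called by the state control when the matcher finds a response.
    public synchronized void postResponse(String response) {
        currentResponse = response;
        state = State.RESPONDING;
    }

    // Called by the control thread each tick to decide what to play.
    public synchronized String nextClip() {
        if (state == State.RESPONDING) {
            String clip = "response_" + currentResponse.hashCode() + ".mp4";
            // After the response clip plays, fall back to the idle loop.
            state = State.IDLE;
            currentResponse = null;
            return clip;
        }
        return "idle_loop.mp4";
    }

    public synchronized State getState() { return state; }
}
```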

Fig. 3.
figure 3

Functionality of system states for our Mobile Virtual Patient prototype.

3.4 I/O Styles

The user could provide input to the mobile virtual patient using either text or speech (Fig. 2). Each device's built-in microphone was used for speech input; the speech was transcribed into text and then filtered through the question-resolution algorithm. For text-based input, a QWERTY keyboard was used. We chose these input types to study whether similar training effects occur when speaking and when texting; texting with a virtual patient may be more private and less socially awkward when training in a public location. The prototype needed to be accessible so that it could be used from home, school, or anywhere with an internet connection. The text interface provided the user with input and output (I/O) boxes as well as a send button (Fig. 2, left). The input box allowed the user to type or text a question and then press the send button. When the send button was pressed, the input listener received the input. After the input was processed and a response was found, the input question and output response were printed to the output box for the user to read. The mobile virtual patient's animation was also updated so that he appeared to be saying the response displayed in the output box.
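A key property of this design is that both I/O styles converge on the same question-resolution path: speech is first transcribed to text, and both modalities then normalize the string before matching. The sketch below illustrates that convergence; the class and method names, and the normalization rule, are our assumptions for illustration.

```java
// Hypothetical sketch of the shared input pipeline: typed text and
// speech transcripts are normalized the same way and resolved by the
// same question-resolution back end.
public class InputPipeline {
    // Stand-in for the question-resolution algorithm.
    public interface Resolver { String resolve(String question); }

    private final Resolver resolver;

    public InputPipeline(Resolver resolver) { this.resolver = resolver; }

    // Texting I/O: the typed string is used directly.
    public String fromText(String typed) {
        return resolver.resolve(normalize(typed));
    }

    // Speech I/O: the recognizer's transcript enters the same path.
    public String fromSpeechTranscript(String transcript) {
        return resolver.resolve(normalize(transcript));
    }

    // Lowercase and strip punctuation so both modalities match alike.
    static String normalize(String s) {
        return s.toLowerCase().replaceAll("[^a-z0-9 ]", "").trim();
    }
}
```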

4 Experimental Design

4.1 Experimental Design and Procedure

We conducted an experiment to gather empirical data on the effects of mobility and screen size when interacting with a mobile virtual patient. Our intent was to understand how device type and I/O style affect usability and user experience, to inform the design of mobile virtual patient training tools. Our experimental study was approved by the University of Wyoming IRB. The experimental study was a 3 × 2 mixed design, with three device types as between-subjects conditions and two I/O styles (Sect. 3.4) as within-subjects conditions. The conditions were counter-balanced to reduce order effects, and each participant was randomly assigned to one order:

  • Tablet-1st: Speech interaction, 2nd: Text interaction

  • Tablet-1st: Text interaction, 2nd: Speech interaction

  • Smartphone-1st: Speech interaction, 2nd: Text interaction

  • Smartphone-1st: Text interaction, 2nd: Speech interaction

  • Desktop-1st: Speech interaction, 2nd: Text interaction

  • Desktop-1st: Text interaction, 2nd: Speech interaction
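The six conditions above can be represented as a simple assignment table. The sketch below is illustrative only: the study assigned participants randomly, whereas we show a round-robin scheme (our simplification) that keeps the cell counts balanced; all names are our own.

```java
// Hypothetical sketch of the six device x I/O-order conditions.
// The study randomized assignment; round-robin is shown here only to
// illustrate how the cells stay balanced (5 participants per cell for n = 30).
public class ConditionAssigner {
    static final String[][] ORDERS = {
        {"Tablet",     "Speech", "Text"},
        {"Tablet",     "Text",   "Speech"},
        {"Smartphone", "Speech", "Text"},
        {"Smartphone", "Text",   "Speech"},
        {"Desktop",    "Speech", "Text"},
        {"Desktop",    "Text",   "Speech"},
    };

    // Returns {device, firstIOStyle, secondIOStyle} for a participant.
    static String[] assign(int participantIndex) {
        return ORDERS[participantIndex % ORDERS.length];
    }
}
```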

Participants completed a consent form and a pre-questionnaire. Each participant completed two interactions with the virtual patient in the order of conditions that was assigned. Participants completed a post-questionnaire after each interaction. Once all conditions were completed, participants answered a final debriefing interview.

4.2 Apparatus

The experiment utilized three device types: a tablet, a smartphone, and a desktop. All of the devices ran Google's Android operating system. The tablet was a Toshiba Thrive, which has a 10.1″, 1280 × 800 resolution, 16:10 aspect ratio screen, a Tegra 2 dual-core processor, 1 GB of DDR RAM, and 16 GB of internal storage, and ran the Android 3.2 (Honeycomb) operating system. The smartphone was a Motorola DROID, which has a 3.7″, 854 × 480 resolution, 16:9 aspect ratio screen, an ARM Cortex-A8 processor, 256 MB of RAM, and 512 MB of internal storage, and ran the Android 2.3 (Gingerbread) operating system. The desktop setup used the same Toshiba Thrive as a base, but it was connected to a mouse, a keyboard, and a 23″, 1920 × 1200 resolution, 16:10 aspect ratio monitor, so that participants were provided with a full desktop.

4.3 Measures

Demographic information, such as age, gender, ethnicity, major, and occupational status, was collected by questionnaire. The questionnaire also used a seven-point scale (1 = never used before, 7 = a great deal) to collect information about participants' experience with virtual humans, virtual patients, and 2D or 3D applications. Examples of these questions included, but were not limited to: 'To what extent have you worked in a health care setting with real patients?' and 'To what extent have you been exposed to Virtual Patients?'. The questions also asked how familiar users were with tablets, smartphones, and computers, such as 'To what extent do you use a computer in your daily activities?'. Performance measures were automatically logged to the device during each trial. These measures included response time, time between questions asked, and the questions asked by the user as well as the responses provided by the virtual patient system. These measures were collected and stored to device memory to identify trends in participants' interactions with the mobile virtual patient.
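The per-trial performance log described above can be sketched as a timestamped record of each question/response exchange, from which response time and time between questions are recoverable. The class and field names below are our own assumptions, not the prototype's actual logging code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the per-trial performance log: each exchange is
// timestamped so that response time and the gap between successive
// questions can be computed after the trial.
public class TrialLog {
    public static final class Entry {
        final long askedAtMs, answeredAtMs;
        final String question, response;
        Entry(long askedAtMs, long answeredAtMs, String question, String response) {
            this.askedAtMs = askedAtMs; this.answeredAtMs = answeredAtMs;
            this.question = question; this.response = response;
        }
        // Time from the question being asked to the response being shown.
        long responseTimeMs() { return answeredAtMs - askedAtMs; }
    }

    private final List<Entry> entries = new ArrayList<>();

    public void record(long askedAtMs, long answeredAtMs, String question, String response) {
        entries.add(new Entry(askedAtMs, answeredAtMs, question, response));
    }

    // Mean time between successive questions, one of the study's measures.
    public double meanGapBetweenQuestionsMs() {
        double total = 0;
        for (int i = 1; i < entries.size(); i++) {
            total += entries.get(i).askedAtMs - entries.get(i - 1).askedAtMs;
        }
        return entries.size() > 1 ? total / (entries.size() - 1) : 0;
    }

    public List<Entry> entries() { return entries; }
}
```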

A post-experiment questionnaire used a 7-point scale (1 = Not at all, 7 = A great deal) to measure ease of use, screen size satisfaction, realism, perceived benefit to nursing, perception of learnability, enjoyment of use, and preferred input style. Examples of these questions are 'Would you use this application as a learning tool?' and 'How much did you feel like you gained real patient-interaction experience from using this system?'. These questions provided the input needed to determine the effects that mobility and screen size have on the mobile virtual patient's ease of use, screen size satisfaction, realism, perceived benefit to nursing, perception of learnability, enjoyment of use, and preferred input style. The final stage of the experiment was the debriefing interview, which consisted of a series of questions such as 'Which would you prefer: mobile phone, mobile tablet, laptop, PC, large screen, or another device? Why?'. These debriefing questions allowed participants to provide feedback that may have been missed by the post-experiment questionnaires; their purpose was to gain insight into participants' responses.

5 Results

This study investigated the effects of mobility and screen size using a mobile virtual patient. The qualitative data were analyzed by first summing the measures and then calculating the mean and standard deviation. The quantitative data were analyzed using a repeated-measures analysis of variance (ANOVA) for each measure.

5.1 Participants

A total of 30 students, teachers, and professionals (26 females, 4 males) participated in the study: 2 nursing instructors, 5 nursing students, 18 professionals, and 5 additional students from other disciplines. All participants were over 18 years of age (M = 38.2, SD = 13.22), had 20/20 or corrected-to-20/20 vision, and used English as their first language. Volunteers were recruited from Wyoming hospitals, the University of Wyoming Fay W. Whitney School of Nursing, and by word of mouth.

5.2 Performance Results

Questions Asked by Participants.

The performance measures showed that participants asked more questions using speech (M = 11.95, SD = 8.69) than with text (M = 11.10, SD = 5.91). When comparing devices, participants asked the most questions while using the tablet (M = 13.15, SD = 9.73), followed by the desktop (M = 10.67, SD = 6.96) and the smartphone (M = 10.62, SD = 4.87). Participants also asked longer questions using speech (M = 4.95, SD = 1.33) than with text (M = 4.58, SD = 0.99). When comparing devices, participants asked the longest questions using the desktop (M = 5.15, SD = 1.19), followed by the tablet (M = 4.67, SD = 0.82) and the smartphone (M = 4.32, SD = 1.20).

Error Rate of VH-Questions Having Unknown Responses.

The performance measures showed that participants' error rate (questions having unknown responses) was highest while using speech (M = 2.39, SD = 1.57) compared to text interaction (M = 1.75, SD = 1.59). When comparing devices, participants had the highest error rate using the smartphone (M = 2.25, SD = 1.36), compared to the desktop (M = 1.97, SD = 1.27) and the tablet (M = 1.90, SD = 1.29). Comparing devices by interaction style, smartphone-speech (M = 2.50, SD = 1.51) had the highest error rate, followed by desktop-speech (M = 2.33, SD = 1.07), tablet-speech (M = 2.30, SD = 1.89), smartphone-text (M = 2.00, SD = 1.15), desktop-text (M = 1.60, SD = 0.92), and tablet-text (M = 1.50, SD = 0.92).

5.3 Usability and User Experience Results

Ease of Use.

A repeated measures ANOVA showed a significant main effect of device type on ease of use, F(2, 28.64) = 5.45, p = 0.01, η2 = 0.29, Power = 0.80, and a significant main effect of I/O style, F(1, 27) = 15.10, p = 0.001, η2 = 0.36, Power = 0.96 (Fig. 4), but no significant I/O style by device type interaction, F < 1. The device main effect showed that participants had the greatest ease of use with the desktop (M = 6.22, SD = 0.42), followed by the tablet (M = 5.80, SD = 0.94) and the smartphone (M = 5.30, SD = 0.70). The I/O style main effect showed a higher ranking of ease of use for text (M = 5.77, SD = 0.79) than speech (M = 5.37, SD = 0.95).

Fig. 4.
figure 4

Ease of Use mean ratings of Devices by Interaction Style.

Screen Size Satisfaction.

A repeated measures ANOVA showed a significant main effect of device on screen size satisfaction, F(2, 32.85) = 39.32, p < 0.001, η2 = 0.71, Power = 1.00 (Fig. 5), and a significant I/O style by device type interaction, F(2, 20.05) = 5.00, p = 0.01, η2 = 0.768, Power = 0.77, but no significant main effect of I/O style, F < 1. The main effect showed the highest satisfaction with the desktop (M = 6.40, SD = 0.75), followed by the tablet (M = 6.25, SD = 1.02) and the smartphone (M = 3.90, SD = 1.25). For the I/O style by device type interaction, the highest rating was tablet-text (M = 6.70, SD = 0.48) and the lowest was smartphone-text (M = 3.50, SD = 1.08).

Fig. 5.
figure 5

Satisfaction mean ratings of Devices by Interaction Style.

Learnability.

A repeated measures ANOVA showed a significant main effect of device on participants' feeling that they could learn nursing interaction skills using the virtual patient, F(2, 56.63) = 14.11, p = 0.004, η2 = 0.33, Power = 0.88, but no significant main effect of I/O style, F(1, 1.34) = 3.719, p = 0.06, η2 = 0.12, Power = 0.46, nor an I/O style by device interaction, F(1, 1.34) = 1.38, p = 0.27, η2 = 0.09, Power = 0.27. The main effect showed the highest rankings on the tablet (M = 5.45, SD = 0.99), followed by the desktop (M = 5.32, SD = 0.87) and the smartphone (M = 3.93, SD = 1.15).

Enjoyable to Use.

A repeated measures ANOVA showed a significant main effect of device on participants' enjoyment while using the virtual patient, F(2, 51.48) = 14.87, p = 0.002, η2 = 0.37, Power = 0.93, but there was no significant main effect of I/O style, F < 1, nor an I/O style by device interaction, F < 1. The main effect showed a higher ranking of enjoyment for the tablet (M = 5.72, SD = 1.03), followed by the desktop (M = 5.52, SD = 0.74) and the smartphone (M = 4.13, SD = 1.21).

Preference of I/O Style.

A repeated measures ANOVA showed a significant main effect of I/O style, F(1, 29.01) = 7.17, p = 0.01, η2 = 0.73, Power = 0.73 (Fig. 6), but there was no significant main effect of device, F(2, 67.51) = 7.54, p = 0.06, η2 = 0.18, Power = 0.54, nor an I/O style by device interaction (Power = 0.42). The I/O style main effect showed a higher ranking of preference for text (M = 5.33, SD = 1.53) than speech (M = 4.62, SD = 1.30). During the debriefing interview, 14 participants responded that they preferred text, compared to 10 that preferred speech and 6 that had no preference. Several participants commented "my preference would depend on my location", while many participants that preferred speech stated that "texting is slow" or "talking is easier". Some participants that preferred text commented "typing is easier than speaking to a computer" and "prefer texting if in a public location". During the debriefing interview, 20 participants responded that they would like to use the tablet, compared to 13 wanting to use the desktop and 7 wanting to use the smartphone (note: participants were allowed to pick more than one device). Some participants' responses when asked about their preferred device were "tablet, for visual assessment" and "tablet, perfect middle ground for size and portability".

Fig. 6.
figure 6

Preference mean ratings of I/O Type by Device.

Virtual Patient as a Training Tool.

For all devices, when participants were asked "How effective do you believe this system will be for training or practice?", 30 of 30 participants responded positively. They responded with comments such as: "yes, since we currently watch boring movies for practice and training", "it would be a great tool, since we currently watch boring movies for practicing and training", "good for training", "would be really effective", "very beneficial", "would work well for specialty areas", and "really effective because it is hard to find people to practice with".

Benefits for Nursing Students.

A repeated measures ANOVA showed a significant main effect of device on participants' perception of the virtual patient being beneficial to nursing, F(2, 40.86) = 4.98, p = 0.01, η2 = 0.27, Power = 0.77 (Fig. 7), but there was no significant main effect of I/O style, F(1, 1.41) = 1.99, p = 0.17, η2 = 0.07, Power = 0.28, nor an I/O style by device interaction, F(2, 1.41) = 1.07, p = 0.37, η2 = 0.07, Power = 0.21. The main effect showed the highest rankings on the desktop (M = 6.23, SD = 0.73), followed by the tablet (M = 6.20, SD = 0.95) and the smartphone (M = 5.15, SD = 0.89).

Fig. 7.
figure 7

Benefit for Nursing Students mean ratings of Devices by Interaction Style.

5.4 Co-presence

A repeated measures ANOVA showed a significant main effect of device on mobile virtual patient co-presence, F(2, 48.38) = 3.45, p = 0.046, η2 = 0.20, Power = 0.60 (Fig. 8), and a significant main effect of I/O style, F(1, 9.475) = 5.747, p = 0.024, η2 = 0.18, Power = 0.64, but there was no significant I/O style by device interaction, F < 1. The device main effect showed that participants rated co-presence highest on the tablet (M = 5.08, SD = 1.14), followed by the desktop (M = 4.13, SD = 0.69) and the smartphone (M = 4.10, SD = 1.18). The I/O style main effect showed a higher rating of co-presence for text (M = 4.61, SD = 1.04) than speech (M = 4.25, SD = 1.16).

Fig. 8.
figure 8

Co-presence mean ratings of Devices by Interaction Style.

6 Discussion

6.1 Performance Results

The performance measures showed that participants asked more questions using speech than with text. When comparing devices, participants asked the most questions while using the tablet, followed by the desktop and the smartphone. While this data is interesting, it is hard to analyze because a few participants asked for additional time when using the virtual patient; the increased number of questions asked with the tablet and smartphone could be due to this additional time. It could also be related to engagement, with participants enjoying the mobile virtual patient and wanting to ask as many questions as possible. In either case, the virtual patient is beneficial for increased training practice. The performance measures also showed that participants asked longer questions using speech than with text. This may be because it was easier to ask longer questions with speech, or because texting on a mobile device typically uses shorthand diction. When comparing devices, participants asked the longest questions using the desktop, followed by the tablet and the smartphone. Participants may have asked longer questions on the desktop because they were comfortable using that device.

The performance measures showed that participants' error rate, or questions having unknown responses, was highest while using speech input. This could be due in part to errors in speech recognition, though it should be noted that users asked more in-depth questions while using speech, at times causing the virtual patient not to have a valid response. When comparing devices, participants had the highest error rate using the smartphone, compared to the desktop and the tablet. This could be because the smartphone was a smaller device with a keypad that users may not have been accustomed to. It was also noted that many users would move the smartphone closer to them while speaking, which may have introduced background noise.

6.2 Usability and User Experience

Usability: Ease of Use, Satisfaction, Learnability.

On all devices and input styles, participants responded positively for ease of use; however, the desktop was reported as having the highest overall ease of use, followed by the tablet and the smartphone. The average age of participants in this study was 38.2, which may have contributed to all 30 participants owning a computer. Only 19 of the participants owned a mobile device, and only 40% of participants who were given the smartphone owned a smartphone. Ease of use could also have been lower on the smartphone because it was a smaller device, with a smaller screen and a keypad that users may not have been accustomed to using; a few smartphone participants stated "I don't normally text" and "texting is slow". Participants responded positively about their satisfaction with the screen size on the tablet and desktop; however, on the smartphone the responses were neutral. That the smartphone ranked lowest is to be expected, as it had the smallest screen size at 3.7″; however, it ranked noticeably lower than the tablet. Some insight was provided by participants' responses in the qualitative data. When asked which device they preferred, participants responded with statements like "tablet, it is mobile and still large enough to see detail", "the smartphone might be too small", and "tablet or larger for visual assessment". These statements suggest that the smartphone did not show as much visual detail of the virtual patient. It was also noted that, on the tablet and desktop, multiple participants commented on visual characteristics of the virtual patient; this was never reported on the smartphone.

Participants using the tablet and desktop reported positively on their perception of learnability while using the virtual patient; however, users on the smartphone had a neutral response. The neutral response by smartphone participants may be due to its smaller screen size, since attention to detail is an important aspect of nurse training [1] and it can be difficult to see the mobile virtual patient’s details on the small screen provided by the smartphone.

User Experience: Enjoyment and Preferences.

Participants showed they enjoyed working on the tablet the most. This follows the trend of the tablet ranking highly in the previous areas of ease of use, screen size, co-presence and learnability. Participants ranked the tablet below the desktop for ease of use; however, the simplest products are not always the most enjoyable. Participants ranked the smartphone lowest for enjoyment, likely because it also ranked lowest in ease of use, screen size, presence and learnability.

The I/O style preferred by participants across all devices was text. This could be due to several factors, such as someone being present in the room with the participant while they were working with the virtual patient. Another factor could have been the speech recognition and synthesis, or the text-to-speech voice used. As with all speech recognition software, there are sometimes errors in determining what the user is saying, and participants seemed to notice this, providing comments like “more unexpected responses with the speech version”. Other users were simply more comfortable texting, with comments like “easier to type than speaking to a computer”.

Participants’ responses during the debriefing interview showed that more participants preferred text than speech. There were many useful comments, such as: “my preference would depend on my location” and “prefer texting if in a public location”. Comments like these show that both I/O styles may be viable depending on where the mobile virtual patient is going to be used. It should also be noted that a proctor was in the room while the participants were interacting with the mobile virtual patient; this may have caused more people to prefer text, since they were not using the virtual patient privately. Participants’ responses during the debriefing also showed that the tablet was the most preferred device. This may be because participants ranked the tablet highly in the previous areas of ease of use, screen size, co-presence and learnability. Some participants’ responses when asked about their preferred device were: “tablet, for visual assessment”, “tablet, perfect middle ground for size and portability”, and “I feel that a tablet is the ideal device for a mobile virtual patient as it is large enough to see detail, yet small enough to be used as a mobile device”.

Benefit for Nurse Training.

When participants were asked whether the mobile virtual patient would be beneficial to nursing, they responded positively on all devices and I/O styles. The smartphone being rated lowest may be attributed to its smaller screen size not providing as much detail of the mobile virtual patient. Noticing visual symptoms when diagnosing a patient is an important skill for nurses to learn [1]; however, the smartphone has a hard time supporting visual assessment due to its small screen. Many positive qualitative responses were collected from participants. One teacher stated that it would provide “opportunities to verify practice”, that “it would be useful in that students need to learn to form questions” and that it would be beneficial to “use in online classes”. Another teacher stated that “it would provide a standardized scenario for students” and “it would allow us to have more off-campus practice for students”. This feedback shows that a mobile virtual patient would be a beneficial simulation tool for nursing students to interact with.

6.3 Co-presence

Co-presence was reported to be highest by participants using the tablet, which came as a surprise, as it was hypothesized that the desktop display would provide higher co-presence with its larger screen size. The desktop had a significantly larger screen at 23″, compared to the smartphone’s 3.7″. No responses provide direct insight into why co-presence was reported higher on the tablet than on the desktop, which had the larger screen. One theory is that the tablet is held closer to the user than a desktop monitor, which could make the effective screen size seem larger on the tablet, creating an immersive effect. Another theory is that many people interact over video conferencing applications, such as Skype or FaceTime, and may relate that style of interaction to human-to-human interaction. There were also responses like “tablet, for visual assessment”, which show that participants liked the way the virtual patient looked on the tablet. The desktop and smartphone were rated similarly by participants for co-presence. It could be that holding the device increases presence; however, another study would be needed to investigate this.

7 Contributions and Conclusion

The purpose of this research was to design a mobile device-based and a web-based virtual patient with dynamic discourse interaction for nurse training, to determine whether mobile device platforms are sufficient to incorporate the dynamic interaction of a conversational agent. We used this prototype to conduct a user experiment investigating the effects that mobility and screen size have on a mobile virtual patient. We measured performance and participants’ ratings of ease of use, screen size satisfaction, co-presence, learnability, enjoyment, and benefit to nursing students. The contributions of this evaluation can be summarized as follows:

  • Mobile devices are sufficient to incorporate dynamic interaction of a virtual conversational agent or virtual human.

  • Mobile virtual patients are beneficial for providing nurse training to nursing students.

  • Co-presence is higher when interacting with a mobile virtual patient using a tablet.

  • Speech I/O encourages users to ask more detailed questions and engages them more in nurse training when interacting with a mobile virtual patient.

  • Text I/O is the preferred input style for interacting with mobile virtual patients.

  • Ease of use was highest for interaction with a mobile virtual patient on a tablet.

  • The tablet received the highest ratings for interacting in a realistic manner.

The main conclusion of this study is that, on all devices, a mobile virtual patient would be beneficial to nursing students and thus could be used as a learning tool. All devices were acceptable platforms for interacting with a mobile virtual patient for nurse training. The tablet provided participants with the best experience, combining a sufficiently large screen with the highest ratings for co-presence and enjoyment. Therefore, when training with a mobile virtual patient, a tablet with Speech I/O should provide the best training outcomes, while Texting I/O is the preferred method when other people are around, when on the go, or when a user feels self-conscious about their training performance, on either a smartphone or tablet. The results from this research can be used by future researchers to continue investigating mobile virtual patients and their usage.

8 Future Work

There are several directions of research that could follow from this work. This study determined that text input is the favored I/O style among users; however, it did not compare different ways of implementing speech input. An extension of this work would be a study evaluating several implementations of speech input for a mobile virtual patient. Another follow-up study would investigate how I/O style interacts with proxemics and the density of people around users while training. Another area of work concerns the learning outcomes associated with different virtual patient scenarios and their learnability. A further aspect would be to investigate long-term training effects and frequency of use when users have ongoing access to a mobile virtual patient for training.