Abstract
This work explored how immersive technologies like virtual reality can be exploited for improved motor learning. While virtual reality is becoming a practical replacement for training that is otherwise expensive, dangerous, or inconvenient to deliver, virtual simulations can also enhance the learning process. Based on the concept of ‘attention computing’, we developed and tested a novel ‘gaze-adaptive’ training method within a virtual putting environment augmented with eye and motion tracking. To our knowledge, this work is the first application of attention computing and adaptive virtual reality to sports skill training. Novice golfers were randomly assigned to either standard putting practice in virtual reality (control) or gaze-adaptive training conditions. For gaze-adaptive training, the golf ball was sensitive to the participant’s gaze and illuminated when fixated upon, to prompt longer and more stable pre-shot fixations. We recorded the effect of these training conditions on task performance, gaze control, and putting kinematics. Gaze-adaptive training was successful in generating more expert-like gaze control and putting kinematics, although this did not transfer to improved performance outcomes within the abbreviated training paradigm. These findings suggest that gaze-adaptive environments can enhance visuomotor learning and may be a promising method for augmenting virtual training environments.
1 Introduction
Immersive technologies, like virtual reality (VR), augmented reality (AR), and mixed reality (XR), are becoming popular media for delivering training in industries like defence [24, 25], surgery [26], rehabilitation [39], and sport [5, 49, 50]. VR is attractive for training providers because it delivers high levels of immersion [6, 44], can increase motivation to train [44, 57], and enables individuals to actively practice for complex and dangerous situations [46]. There is a growing literature that has sought to test whether VR training is an effective replacement for physical practice [35]. Findings have been mixed, with some studies providing evidence for its effectiveness [49, 50, 64, 70], while others have raised concerns [20,21,22, 33, 36, 45, 48]. This literature is challenging to synthesise because the fidelity of the VR environment and the suitability of the task in each instance will determine whether training is effective [20,21,22].
Instead of using immersive technologies simply as a replacement for existing training, there is also an opportunity to open up new pedagogical approaches. Indeed, VR allows the user to experience environments which are not physically possible in the ‘real world’, and can allow learners to get automated feedback on their performance by watching themselves from a third person perspective [24, 25] or even to adopt the perspectives of other people [63, 83]. One particularly promising avenue for improved pedagogy is adaptive VR. Adaptive VR refers to a method of changing the stimulus, the task, or the environment in response to ‘personalised’ inputs from the user [73, 88]. Previous adaptive methodologies have focused on task performance, such as increasing task difficulty when the user reaches a certain level of proficiency [88]. This approach is a common feature of computer games and promotes engagement and enjoyment in a task [34, 71]. Providing a constantly adapted level of challenge also appears to accelerate learning, as articulated in the Challenge Point Hypothesis [19]. For instance, Gray (2017) [89] tested the effectiveness of a VR baseball batting simulator with an adaptive training condition in which ball speeds were increased following successful hits and decreased following misses (‘strikes’). Compared to standard virtual practice or real-world batting practice, the adaptive condition generated greater pre-to-post improvements in hitting performance and even transferred to superior batting statistics in league play.
A related method of making XR environments sensitive to the user is via affective computing or attention computing, an emerging field of artificial intelligence aimed at detecting human cognitions and emotions from facial expressions, eye movements, body language, gestures, and speech patterns [15, 67, 68]. Here, psychophysiological indicators of cognitive and affective states can be used to determine individualised training adaptations in VR. For example, a recent pilot study by Ben Abdessalem and Frasson [4] successfully adapted the challenge of a computer game based on EEG indices of the user’s cognitive load. Notably, though, there is little published work in this area, particularly in the context of XR training applications. In the present work, we report the development and testing of a novel methodology for combining attention computing and adaptive-VR for sports training, in which simple in-game cues are generated in response to individual eye tracking data.
Previous studies have illustrated how eye tracking in XR can be used to cue better performance. Wolf et al. [86] demonstrated that a closed-loop support system in an AR memory task could infer users’ intentions from their hand and eye movements. The system could then alert the user to potential execution errors before they happened. Similarly, Liu et al. [40] found that cueing users to correct courses of action during a sequential task (e.g., putting books back on a library shelf) could be improved by dynamically updating the set of displayed cues in the VR headset based on hand and eye proximity. These studies illustrate how feedback loops can be used to enhance performance during VR and AR tasks, but they are notably restricted to simple choice tasks, rather than complex visuomotor skills.
Researchers have previously suggested that integrating eye tracking within VR could be highly beneficial for training sporting skills due to the important role the visual system plays in goal-directed action [58]. Directing vision towards relevant information at the right time is crucial for executing dynamic motor skills [7, 13, 27, 38, 47], and a large body of literature has demonstrated that teaching expert-like gaze patterns through eye movement training can accelerate visuomotor learning and performance [18, 32, 51, 55, 80]. Eye movement training can be delivered in several ways. Firstly, in the form of gaze cueing, in which visual prompts are given to direct the trainee’s attention towards relevant areas of the visual scene [30]. Secondly, through showing the trainee the typical gaze patterns of domain experts, which is known as either eye movement modelling [31, 72] or feed-forward eye movement training [61, 80, 85]. Finally, the trainee can be shown their own eye movements as a form of feedback, which is known as feed-back eye movement training [24, 25, 81].
The present study performed a pilot assessment of a gaze-adaptive virtual environment which delivered feed-back eye movement training during golf putting. Golf putting is a visuomotor skill that has received considerable attention in the skill acquisition literature [12, 65] and has been used extensively for a particular form of eye movement training, known as ‘quiet-eye training’, which aims to develop longer fixations on the target (ball) before shot execution [29, 52, 53, 81]. Participants were asked to practice putting in a simulated golfing environment in which the ball was illuminated in response to a sustained fixation, as a cue to better visuomotor control. It was predicted that a short period of practice in the gaze-adaptive environment would elicit more expert-like eye movement profiles and putting kinematics, as well as possible improvements in putting accuracy.
2 Methods
2.1 Design
We used a mixed experimental design, in which participants were randomly assigned to one of two independent training groups and completed three VR putting conditions (pre-training, post-training, and high-pressure). Pre- and post-training assessments were conducted to determine whether the training intervention impacted performance. A high pressure putting test (see Procedures below) was also used, as previous literature has indicated that the benefits of eye movement training may be most apparent under conditions of heightened stress or anxiety [54]. Participants additionally completed a real-world putting task to assess whether any improvements transferred to physical performance conditions. The primary outcome measures were putting performance (radial error of the ball from the hole), gaze control (quiet eye duration) and motor control (clubhead kinematics).
2.2 Participants
Twenty-four novice golfers (11 female, 13 male; Mage = 21.0 ± 1.2; range = 19-24), were recruited using convenience sampling from a university undergraduate population. Qualification as a novice was based on having no official golf handicap or prior formal golf putting experience (as in [52, 53]). We chose to sample from a novice population to provide more sensitivity to the gaze cueing intervention and avoid any ceiling effects in performance, as a measurable change in performance from a short intervention was unlikely in more experienced players. Participants were provided with details of the study and gave written informed consent on the day of the testing visit. Ethical approval was obtained from the departmental Ethics Committee prior to data collection. The study, and collection of data, took place between December 2022 and March 2023. Authors had access to participants’ identifying information during the data collection, but this was destroyed when the study was completed.
2.3 Task and materials
2.3.1 VR golf putting
The VR golf putting simulation was a bespoke testing environment developed using the gaming engine Unity 2019.2.12 (Unity Technologies, CA) and C#. The same task has been reported in previous papers [20,21,22] and the construct validity of a previous version has been tested and supported (see [21] for more details). In the task, participants putted from 10ft (3.05m) to a target the same size and shape (diameter 10.80cm) as a standard hole. Because achieving a realistic ball drop into the hole proved challenging, participants were directed to aim for the closest proximity to the target, and told that the ball would not actually fall into the hole. To aid immersion in the task, the sound of a club striking a ball was provided concurrent to the visual contact of the club head with the ball and there was ambient environmental noise.
The putting simulation was displayed using the HTC-Vive Pro Eye headset (HTC, Taiwan), a 6-degrees-of-freedom, consumer-grade VR-system with 110° field of view. This headset has previously been validated for movement task research [56]. The Vive headset has built-in Tobii eye tracking, which employs binocular dark pupil tracking, sampling at 120Hz across the full 110° field of view of the HMD, to an accuracy of 0.5°. Gaze was calibrated in VR over 5 points prior to each block of putts. The SRanipal SDK was used to access eye position data. A ‘raycast’ function was then used to determine when foveal eye gaze was located on the ball. When this occurred, the white golf ball changed colour (to light red) to indicate that it was being fixated on. The VR putter was animated by attaching a Vive sensor to the head of a real golf club (position sampled at 90Hz, accuracy 1.5cm). VR graphics were generated on an HP EliteDesk PC running Windows 10, with an Intel i7 processor and Titan V graphics card (NVIDIA Corp., Santa Clara, CA). The accuracy of both the Vive eye tracking system [62] and the head and controller position tracking [56] have been previously validated for research purposes.
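In Unity, the raycast check described above is handled by the engine itself; the underlying geometry can be sketched as a simple ray-sphere intersection test. The following Python sketch is illustrative only (the function name and the example gaze values are ours, not from the study):

```python
import numpy as np

def gaze_on_ball(origin, direction, ball_centre, ball_radius):
    """Return True if a gaze ray from `origin` along `direction` intersects
    a sphere of `ball_radius` centred at `ball_centre` (all in metres)."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)                      # normalise gaze direction
    oc = np.asarray(ball_centre, dtype=float) - np.asarray(origin, dtype=float)
    t = np.dot(oc, d)                              # projection of centre onto ray
    if t < 0:                                      # ball is behind the viewer
        return False
    closest = np.linalg.norm(oc - t * d)           # perpendicular distance to ray
    return bool(closest <= ball_radius)

# Hypothetical example: eye at head height gazing down at a ball 3 m away
print(gaze_on_ball([0, 1.6, 0], [0, -0.4, 1], [0, 0.4, 3], 0.0214))  # → True
```

When the test returns True for the foveal gaze vector, the environment recolours the ball, which is the trigger used for the gaze-adaptive cue described in the Training protocol.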
2.3.2 Real world golf putting
Real-world putts were taken on an indoor artificial putting green from a distance of 10ft (3.05m) to a regulation sized hole (10.80cm). All participants used a Longridge milled face putter and a standard size (4.27cm diameter) white golf ball. Participants were able to gain visual feedback on all trials, as the landing position of the ball was clearly visible (Fig. 1).
2.4 Measures
2.4.1 Quiet eye period
The quiet eye, first reported in golf putting [74], has been identified as an important visuomotor control variable in interception and aiming tasks [38]. Longer fixations directed to target locations have been shown to consistently discriminate successful from unsuccessful aiming movements, and elite performers from their novice counterparts [38, 47, 75]. Quiet eye duration was, therefore, selected as a measure of optimal gaze control during the putting task.
Following previous studies, the quiet eye period was operationalised as the final fixation initiated prior to the critical movement (clubhead backswing) that was directed toward the ball [52, 53, 77]. Quiet eye fixations had to begin prior to this critical movement, but could continue throughout and even after the movement, provided they remained on the ball (e.g., as in [9]). An automated method of quiet eye analysis was used, which has previously been reported in [20,21,22]. This method was implemented in MATLAB R2018a (MathWorks, MA), and all the analysis code is freely available from an online repository (https://osf.io/jdums/). Gaze data was recorded in the form of three dimensional position coordinates (x,y,z) specifying where the gaze vector intersected with a virtual object (e.g., ball, club, floor). Positional coordinates were initially denoised using a second-order lowpass (30Hz) Butterworth filter (following [16]). Fixation events were then detected using a spatial dispersion algorithm from the EYEMMV toolbox for MATLAB [37], based on a minimum duration of 100ms and maximum spatial dispersion of 3° of visual angle [76]. The swing of the putter was automatically recognised based on changes in x-plane velocity of the Vive tracker. The final fixation on the ball beginning prior to this event was selected as the quiet eye fixation. If the final fixation before swing onset was not located on the ball, quiet eye was scored as zero.
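The detection and scoring logic above can be sketched in Python (the study's implementation was in MATLAB using the EYEMMV toolbox; this simplified sketch omits the Butterworth filtering step, and the sample data and function names are ours):

```python
def detect_fixations(t_ms, x_deg, y_deg, targets,
                     max_dispersion_deg=3.0, min_duration_ms=100):
    """Dispersion-based fixation detection (I-DT style, cf. EYEMMV):
    grow a window of samples while its spatial dispersion stays within
    the threshold; keep windows lasting at least min_duration_ms."""
    fixations, i, n = [], 0, len(t_ms)
    while i < n:
        j = i
        while j + 1 < n:
            xs, ys = x_deg[i:j + 2], y_deg[i:j + 2]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion_deg:
                break
            j += 1
        if t_ms[j] - t_ms[i] >= min_duration_ms:
            window = targets[i:j + 1]
            label = max(set(window), key=window.count)  # modal gaze target
            fixations.append((t_ms[i], t_ms[j], label))
            i = j + 1
        else:
            i += 1
    return fixations

def quiet_eye_duration(fixations, swing_onset_ms):
    """Quiet eye = the final fixation beginning before swing onset,
    scored as its duration (ms) if on the ball and zero otherwise."""
    pre_swing = [f for f in fixations if f[0] < swing_onset_ms]
    if not pre_swing:
        return 0
    start, end, target = pre_swing[-1]
    return end - start if target == "ball" else 0
```

For example, 400 ms of stable gaze on the ball sampled at 100 Hz yields a single fixation, and its duration is returned as the quiet eye period when it begins before swing onset.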
There has been some debate in the literature about the inclusion of zero values in quiet eye analyses. Some researchers have argued that instances of ‘zero quiet eye’ should be excluded from analyses because they represent an absence of the behaviour of interest rather than a duration of no time. By contrast, other researchers consider that zeroes capture important information that attentional control was poor, and therefore have been included in some studies [82, 84]. To account for these issues, we analysed quiet eye in two ways. Firstly, we examined effects on the duration (in milliseconds) as a continuous variable, with instances of zero quiet eye excluded from the analysis. Secondly, we treated quiet eye as a binary variable, present or absent (1, 0), to capture any important variation related to the frequency with which quiet eye was not used.
2.4.2 Putting performance
Putting performance was assessed based on radial error of the ball from the hole, as is common in recent studies on quiet eye and targeting tasks [28, 60, 82]. The two-dimensional Euclidean distance (in cm) between the centre of the ball and the centre of the hole was measured automatically in the virtual environment, to provide a continuous measure of accuracy. Putts landing on top of the hole were assigned an error of zero. For the real-world condition, the equivalent distance was manually measured with a tape measure following each attempt. If the ball landed in the hole, a score of zero was recorded. On real-world trials where the ball hit the boundary of the artificial putting green (90 cm behind the hole), the largest possible error was recorded (90 cm) (as in [52, 53]).
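The scoring rules above amount to a small helper (a sketch; the function names are ours, and in the study the virtual measurement was computed automatically by the environment):

```python
import math

def radial_error_cm(ball_xy_cm, hole_xy_cm):
    """Two-dimensional Euclidean distance between ball and hole centres (cm)."""
    dx = ball_xy_cm[0] - hole_xy_cm[0]
    dy = ball_xy_cm[1] - hole_xy_cm[1]
    return math.hypot(dx, dy)

def scored_error_cm(ball_xy_cm, hole_xy_cm, holed=False, hit_boundary=False):
    """Apply the scoring rules: 0 cm for holed putts (or putts on top of the
    hole), and the maximum 90 cm when the ball reaches the green's boundary."""
    if holed:
        return 0.0
    if hit_boundary:
        return 90.0
    return radial_error_cm(ball_xy_cm, hole_xy_cm)

print(scored_error_cm((3, 4), (0, 0)))  # → 5.0
```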
2.4.3 Putting kinematics
As in previous studies [20,21,22, 42, 43, 52, 53], two kinematic variables were calculated to index the quality of the ball contact and putter swing: clubhead velocity at contact and mean clubhead accelerations during the downswing. Reduced clubhead accelerations during the downswing phase and velocity at contact have previously been linked to greater putting expertise [65] and as a result have been used as an indirect measure of visuomotor control in putting studies [10, 52, 53]. Movement of the putter head in x, y, z planes (corresponding to the plane of the swing, the plane perpendicular to the swing, and up and down respectively) was recorded by the virtual environment and then de-noised using a five-point moving-average lowpass filter (10 Hz; [17]). Then, the velocity of the putter head (first derivative of position with respect to time) at the moment of contact with the ball was calculated to index quality of impact, and mean accelerations during the swing (second derivative of position with respect to time) were calculated for the whole downswing of the club. Analyses in the supplementary online files (https://osf.io/jdums/?view_only=c2ea057aef0e4624b4285d847c73f06b) indicated that both variables predicted performance outcomes in our dataset.
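The two derivative-based measures can be sketched as follows (a simplified illustration: the moving-average filtering step is omitted, numerical differentiation via `numpy.gradient` stands in for the study's MATLAB implementation, and the function name and indices are ours):

```python
import numpy as np

def putting_kinematics(pos_x_m, t_s, downswing_start, contact_idx):
    """Velocity at contact and mean absolute acceleration over the downswing,
    from putter-head position along the swing plane (x).

    pos_x_m: positions (m); t_s: timestamps (s);
    downswing_start, contact_idx: sample indices bounding the downswing."""
    vel = np.gradient(pos_x_m, t_s)    # first derivative: velocity (m/s)
    acc = np.gradient(vel, t_s)        # second derivative: acceleration (m/s^2)
    velocity_at_contact = vel[contact_idx]
    mean_downswing_acc = np.mean(np.abs(acc[downswing_start:contact_idx + 1]))
    return velocity_at_contact, mean_downswing_acc
```

As a sanity check, a putter head moving at a constant 1 m/s produces a contact velocity of 1 m/s and a mean downswing acceleration of zero.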
2.4.4 State anxiety
The Immediate Anxiety Measurement Scale (IAMS; [69]) was used to measure cognitive and somatic anxiety during the post-test and high-pressure conditions as a manipulation check of the pressure induction. The IAMS is comprised of three items measuring the intensity and direction of cognitive anxiety, somatic anxiety, and self-confidence experienced by the participant. Summed cognitive and somatic anxiety intensity scores are reported here as an overall indicator of anxious states. Participants ranked each construct on a seven-point Likert scale from 1 (not at all) to 7 (extremely) on intensity, and from -3 (very debilitative) to +3 (very facilitative) for direction. The IAMS provides valid and reliable cognitive and somatic anxiety scores which have been validated against the multi-item anxiety questionnaire (CSAI-2) [69], and was selected over long anxiety measures such as the Competitive State Anxiety Inventory-2 [11] due to its simplicity and brevity.
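The summed intensity score used here is straightforward to compute; a minimal sketch (the function name is ours, and only the two anxiety intensity items from the IAMS are used):

```python
def iams_anxiety_score(cognitive_intensity, somatic_intensity):
    """Summed cognitive + somatic anxiety intensity (each rated 1-7 on the
    IAMS), giving an overall state-anxiety indicator ranging from 2 to 14."""
    for score in (cognitive_intensity, somatic_intensity):
        if not 1 <= score <= 7:
            raise ValueError("IAMS intensity ratings range from 1 to 7")
    return cognitive_intensity + somatic_intensity

print(iams_anxiety_score(4, 3))  # → 7
```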
2.5 Procedure
Participants attended the VR lab for a single visit lasting ~30 minutes. The details of the experiment were explained to participants who then gave written informed consent. After checking that participants had not experienced VR sickness before, they were fitted with the VR headset. Initially, participants completed 2 familiarisation putts before performing 10 baseline putts (pre-training), during which time putting performance and gaze data were continuously recorded. Participants were then randomly assigned to one of two separate training groups to receive their respective training interventions (gaze-adaptive training and control training; see Training protocol). After training, all participants completed another 10 putts in VR (without any gaze cues) to investigate the differences in performance between the two training groups (post-training). Both training groups then proceeded to complete a further 10 real-world putts, where performance data was collected manually. Finally, all participants performed 10 putts in a heightened pressure situation (see Pressure manipulation). Participants completed the IAMS questionnaire immediately prior to the post-training and high-pressure conditions. After completion of all conditions, participants were thanked for participating and debriefed. See Fig. 2 for an outline of the experimental protocol in a schematic format.
2.5.1 Training protocol
Participants in each training group (gaze-adaptive and control) completed 30 practice putts. During the gaze-adaptive training, the colour of the golf ball was manipulated to cue a longer pre-shot fixation. When the participant located their gaze exactly upon the golf ball, it was illuminated with a light red colour. The participant was instructed to try to keep the ball illuminated red for 2-3 seconds prior to initiating the putting stroke. According to Vine et al. [81], this is the optimal quiet eye duration for improved putting performance. The control group had the opportunity to continue to practise putting within the VR environment, but with no gaze-related cues.
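The per-frame cue logic can be sketched as a small state machine (in the study this ran inside Unity via SRanipal and a raycast; the class, names, and 90 Hz frame rate below are illustrative):

```python
BALL_WHITE, BALL_RED = "white", "light_red"
TARGET_HOLD_S = 2.0          # lower bound of the instructed 2-3 s window

class GazeAdaptiveBall:
    """Gaze-contingent cue: illuminate the ball while it is fixated and
    track how long the fixation has been held continuously."""

    def __init__(self):
        self.colour = BALL_WHITE
        self.hold_s = 0.0     # continuous time gaze has stayed on the ball

    def update(self, gaze_on_ball, dt_s):
        """Call once per frame; returns True once the hold target is met."""
        if gaze_on_ball:
            self.colour = BALL_RED            # illuminate while fixated
            self.hold_s += dt_s
        else:
            self.colour = BALL_WHITE          # cue and timer reset on gaze loss
            self.hold_s = 0.0
        return self.hold_s >= TARGET_HOLD_S
```

Resetting the timer whenever gaze leaves the ball is what makes the cue reward a sustained, stable pre-shot fixation rather than accumulated glances.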
2.5.2 Pressure manipulation
We used techniques from previous research (e.g., [79]) to create performance incentives and social-evaluative threat to generate a high pressure putting condition. Participants were informed that they were taking part in a competition where the individual with the best performance during the anxiety condition would receive a £50 cash reward. Participants were told their scores were going to be placed upon a leader board and compared to the other participants taking part. Finally, all participants were informed that their previous 10 putts in the real-world environment would put them in the bottom 30% when compared to those who had already completed the competition. This was designed to further raise pressure on them to perform well and increase their anxiety about being at the bottom of the leader board.
2.6 Data analysis
Data analysis was performed in RStudio v1.0.143 [59]. Linear mixed effects models (LMMs) were used to examine the effect of group (gaze-adaptive versus control) and condition (pre-training, post-training, high-pressure) on the primary outcome variables using the lme4 package for R [3]. Model fit checks and model comparisons were performed using the ‘performance’ package [41] and were used to determine the best random effects structure for each analysis. We report R2 and standardised beta effect sizes for the LMMs and follow Acock's [1] rule of thumb for interpreting std. beta that < .2 is weak, .2–.5 is moderate, and > .5 is strong. All analysis scripts and raw data are available on the Open Science Framework at: https://osf.io/jdums/.
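As a sketch, the group-by-condition model structure could be specified in Python with statsmodels' MixedLM (a stand-in for the lme4 models actually used; the data below are synthetic and all variable names are illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the study data: 24 participants x 3 conditions x 10 putts
rng = np.random.default_rng(1)
rows = []
for pid in range(24):
    group = "adaptive" if pid < 12 else "control"
    intercept = rng.normal(0, 0.1)          # participant random effect
    for cond in ("pre", "post", "pressure"):
        for _ in range(10):
            rows.append(dict(participant=pid, group=group, condition=cond,
                             radial_error=1.0 + intercept + rng.normal(0, 0.3)))
df = pd.DataFrame(rows)

# Condition x group fixed effects with a random intercept per participant,
# mirroring an lme4 formula: radial_error ~ condition * group + (1 | participant)
model = smf.mixedlm("radial_error ~ condition * group", df,
                    groups=df["participant"]).fit()
print(model.fe_params.round(2))
```

More complex random effects structures (e.g., random slopes) were compared via the R 'performance' package in the actual analysis; the sketch above shows only the simplest random-intercept form.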
3 Results
3.1 Performance
3.1.1 Performance within virtual reality
To examine the effect of the training on overall performance accuracy, we fitted a linear mixed model to radial error with condition, group, and their interaction as fixed effects, and participant as a random effect. The model's total explanatory power was weak (conditional R2 = .04) and the part related to the fixed effects alone was small (marginal R2 = .01). Within this model, the effect of Condition [Post] (β = -0.01, p = .87; Std. beta = -0.02) and Condition [Anxiety] were both non-significant (β = -0.04, p = .62; Std. beta = -0.07). The effect of Group (β = 0.12, p = .15; Std. beta = 0.21), the interaction of Group on Condition [Post] (β = -0.12, p = .27; Std. beta = -0.20) and Group on Condition [Anxiety] (β = -0.10, p = .33; Std. beta = -0.17) were all statistically non-significant. These results (see Fig. 3, top) indicate that there was little effect of the gaze-adaptive training on golf putting performance.
3.1.2 Real-world performance
To examine whether the gaze-adaptive intervention impacted real golf putting, we fitted a linear mixed model to predict real-world radial error, with training group as a fixed effect and participant as a random effect. The model's total explanatory power was substantial (conditional R2 = .31) but the part related to the fixed effects was very small (marginal R2 = .002). The effect of Group was statistically non-significant and small (β = 3.31, p = .730; Std. beta = 0.09), suggesting that those receiving the gaze-adaptive training did not perform better during real-world putting assessment.
3.2 Gaze control
3.2.1 Final fixation duration
To examine the effect of the gaze-adaptive training on gaze-control, we fitted a linear mixed model predicting quiet eye duration with condition and group as fixed factors (and participant as random effect). The model had a conditional R2 of .24 and marginal R2 of .04. Within this model, the effects of Condition [Post] (β = -90.78, p = .41; Std. beta = -0.11) and Condition [Pressure] (β = -121.13, p = .28; Std. beta = -0.15) were statistically non-significant. The effect of Group was also non-significant (β = -246.27, p = .19; Std. beta = -0.30). There were, however, significant interaction effects of Group by Condition [Post] (β = 632.88, p < .001; Std. beta = 0.76) and Group by Condition [Pressure] (β = 338.87, p = .03; Std. beta = 0.41).
Follow-up t-tests with a Bonferroni-Holm correction indicated that a training-induced increase in quiet eye duration for the gaze-adaptive group, but not the control group, accounted for the interaction effects. No differences in quiet eye duration between baseline and post, baseline and pressure, or post and pressure were detected for the control group (ps = .84). There was, however, a significant increase from baseline to post in the gaze-adaptive group (p = .001), followed by a significant reduction during the pressure condition (p = .01). There was no difference between baseline and the pressure condition (p = .15). After alpha corrections, there were no differences between the two groups at any of the three timepoints (ps > .15) (Fig. 4).
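The Bonferroni-Holm procedure used for these follow-up comparisons can be implemented in a few lines (a sketch; the p-values in the example are hypothetical):

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm's step-down correction: test p-values in ascending order, comparing
    the i-th smallest (0-indexed rank) against alpha / (m - rank); stop
    rejecting at the first failure."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, idx in enumerate(order):
        if p_values[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break                    # all larger p-values also fail
    return reject

# Three pairwise comparisons for one group (hypothetical p-values)
print(holm_bonferroni([0.001, 0.01, 0.15]))  # → [True, True, False]
```

The step-down scheme is uniformly more powerful than a plain Bonferroni correction while still controlling the family-wise error rate, which is why it is preferred for small families of pairwise tests like these.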
3.2.2 Final fixation present/absent
We fitted a logistic mixed model to predict the presence of a quiet eye fixation (i.e., of any duration) during the VR portion of the experiment (i.e., during the baseline, post, and pressure conditions). The model (with participant as a random effect) had large total explanatory power (conditional R2 = .28) but a small marginal R2 (.02). Within this model, all fixed effects were non-significant, indicating no influence of the training or the pressure manipulation on the presence of a quiet eye fixation. There was no effect of Condition [Post] (β = 0.28, p = .46; Std. beta = 0.28) or Condition [Pressure] (β = -0.38, p = .27; Std. beta = -0.38). There was no effect of Group (β = -0.76, p = .17; Std. beta = -0.76), and no interactions between Group and Condition [Post] (β = 0.09, p = .85; Std. beta = 0.09) or Group and Condition [Pressure] (β = 0.74, p = .11; Std. beta = 0.74), suggesting that while fixation durations were affected by the intervention, the presence/absence of a final fixation was not.
3.3 Putting kinematics
3.3.1 Clubhead velocity at contact
To examine the effect of the gaze-adaptive training on putting kinematics, we fitted a linear mixed model predicting clubhead velocity at contact, with condition and group as fixed effects and participant as a random effect. The model had a conditional R2 of .38 and a large marginal R2 of .14. Within this model, the effect of Condition [Post] (β = -0.34, p < .001; Std. beta = -0.40) and Condition [Pressure] (β = -0.55, p < .001; Std. beta = -0.64) were both statistically significant. The effect of training group was not statistically significant (β = 0.12, p = .55; Std. beta = 0.14), but significant interaction effects for Group by Condition [Post] (β = -0.26, p = .04; Std. beta = -0.30) and Group by Condition [Pressure] (β = -0.37, p = .004; Std. beta = -0.43) were observed.
Despite descriptively larger reductions in velocity in the gaze-adaptive group, follow-up t-tests with a Bonferroni-Holm correction indicated non-significant differences between groups at baseline (p = .95), post (p = .95) and pressure (p = .63). Significant reductions from baseline to post, post to pressure, and baseline to pressure were displayed by both groups (ps < .03). Taken together, these results suggest a general reduction in clubhead velocity that was somewhat larger in the gaze-adaptive group, as indicated by the large interaction effects.
3.3.2 Mean clubhead acceleration
Next we fitted a linear mixed model to mean club vertical accelerations with participant as random effect. The model had a conditional R2 of .47 and marginal R2 of .09. Within this model, the effect of Condition [Post] was non-significant (β = -0.71, p = .14; Std. beta = -0.15), but there was a significant effect for Condition [Pressure] (β = -1.04, p = .03; Std. beta = -0.22). The effect of group was non-significant (β = 1.17, p = .36; Std. beta = 0.25) but there were strong and significant interaction effects of Group by Condition [Post] (β = -2.98, p < .001; Std. beta = -0.63) and Group by Condition [Pressure] (β = -2.95, p < .001; Std. beta = -0.63) (see Fig. 5, right).
Follow-up t-tests with a Bonferroni-Holm correction indicated that for the gaze-adaptive group there was a clear reduction in clubhead accelerations from baseline to post (p < .001) and baseline to pressure (p < .001), but no difference between post and pressure (p = 1.00). There were no significant changes between conditions for the control group (ps > .12) and no between group differences for each of the three time-points (ps > .51). These results suggest benefits of the gaze-adaptive training for the development of more expert-like putting kinematics.
4 Discussion
This study explored the potential of using gaze-adaptive cues in VR to accelerate visuomotor skill acquisition. We developed a novel golf putting training simulation which cued learners towards ‘expert-like’ patterns of visuomotor control based on in-situ gaze data. In summary, the gaze-adaptive training generated improvements in both gaze control (quiet eye duration) and motor control (putting kinematics) over and above that of the VR control group, suggesting that it may accelerate visuomotor skill learning. We did not, however, observe accompanying performance improvements from pre to post training or during a transfer test, which may require more extended training periods. Hence, we can make preliminary inferences that this method potentially offers advantages for enhancing visuomotor skills, but additional research is necessary to validate and refine this approach. This is one of the first papers to operationalise affective computing and adaptive VR principles in the context of sport skill training. The findings have implications for using VR and other XR technologies for training sport skills, but also provide a proof of concept for the application of affective and attention computing methodologies to training and education in other settings such as health, industry, and aviation.
Results indicated clear changes in eye movements in the gaze-adaptive (but not the control) group, which were indicative of more expert-like visuomotor control. Specifically, the gaze-adaptive group showed an increase in quiet eye duration of around 500ms from baseline to post-training. This is a smaller increase than some more explicit quiet eye training regimes in previous work [52, 53], but still a meaningful change that is comparable with some training studies [81]. Longer quiet eye durations are linked to more goal-directed control of visual attention typical of task experts [38, 78, 79] and have been reliably associated with better performance outcomes [38], so the observed changes in gaze control could lead to improved performance in the long run.
In addition to improved control of visual attention, we also observed a move towards more expert-like putting kinematics in the adaptive VR group. There was a general improvement (reduction) in velocity at contact for both groups, but with no clear benefit favouring the gaze-adaptive group (although changes were descriptively larger; see Fig. 5, left). For clubhead accelerations during the downswing, however, there were significant reductions from baseline to post for the gaze-adaptive group, which were then maintained during the pressure condition. By contrast, there was no change in clubhead accelerations for the control group. These changes are typical of the development of putting expertise [14, 65] and mirror motor skill developments that have been previously observed following eye movement training [8, 52, 53]. It has been suggested that maintaining better gaze control enables the sensorimotor system to self-organise effectively and therefore accelerates visuomotor control towards a more expert-like state [66]. While more expert-like putting kinematics do not necessarily lead to better performance outcomes, the observed changes in movement from a simple intervention over just 30 putting trials suggest significant potential for accelerating visuomotor skill learning using this methodology.
While previous eye movement training research has been successful in generating performance improvements, as well as better maintenance of performance under pressure [32, 54, 79], we did not observe any effects on golf putting accuracy as measured by radial error. Indeed, there was no significant change in performance for either group, although there did appear to be a positive trend for the gaze-adaptive group (see Fig. 3, top right). Given that the number of practice repetitions was quite low compared to previous studies (e.g., [52, 53] used 360 training trials over three days), it is likely that the training period was insufficient for generating detectable changes in putting accuracy, a skill which takes considerable time to develop. The gaze-adaptive cueing method may therefore require a more extensive training period, taking place over multiple sessions.
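For clarity, radial error is simply the straight-line distance from the ball's resting position to the hole centre, averaged over a block of putts. A trivial sketch (the 2-D metre coordinate convention is an assumption):

```python
import math

def radial_error(ball_xy, hole_xy):
    """Distance (m) from the ball's resting position to the hole centre."""
    return math.hypot(ball_xy[0] - hole_xy[0], ball_xy[1] - hole_xy[1])

def mean_radial_error(resting_positions, hole_xy):
    """Mean radial error over a block of putts."""
    errors = [radial_error(p, hole_xy) for p in resting_positions]
    return sum(errors) / len(errors)
```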
Moreover, there was no observable transfer of training from the gaze-adaptive VR training to the real-world putting scenario. The absence of clear learning effects in the VR environment makes the lack of transfer unsurprising. It is well-established that motor skill transfer is most pronounced when the contexts are highly similar [2], which raises an intriguing question regarding the degree of similarity between real and virtual renditions of motor skills. For instance, Yarossi et al. [87] have proposed that the brain perceives VR as a distinct "context," leading to the establishment and maintenance of separate sensorimotor representations. Consequently, achieving transfer between real and virtual versions of a task may be a challenging endeavour and future applications of VR training methods will need to explore how well transfer can be achieved.
The simple demonstration of affective and attention computing principles provided here illustrates a number of practical applications of this approach. XR technologies provide an ideal platform for affective computing, as biometric sensors embedded in headsets can be used to drive adaptive changes to virtual environments. This approach can then be used to enhance training and education through personalised learning environments, tailored user feedback, and intelligent tutoring systems. Detecting affective states from eye movements or other biosignals can also enable stress and cognitive load monitoring, emotionally intelligent avatars, and more immersive and emotionally engaging learning experiences. In sport, for instance, lightweight and unobtrusive AR technologies could optimise training programmes by providing real-time feedback to performers about their attentional focus or distractibility, based on eye movement indicators. In the context of golfing skills, for example, intelligent monitoring systems could be designed to improve an individual's pre-performance routine, enhance the precise 'coupling' of their hand and eye movements, or support their reading of a course (e.g., by cueing eye-gaze towards significant topographical features that had not been attended to). Drawing on previous work in aviation [24, 25, 61], expert gaze patterns could then be used as a feedforward learning cue, while automated data algorithms could index complex (and often undetected) underlying states (e.g., changes in anxiety, cognitive load, or task expertise). Hence, the first steps demonstrated here illustrate the far-reaching potential of these technology-enhanced training principles.
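To make the gaze-contingent mechanic concrete (the ball illuminating while fixated, as described in the abstract), the following Python sketch shows the kind of per-frame update an XR engine might run. The angular window, dwell threshold, and unit-vector convention are illustrative assumptions, not the study's actual parameters:

```python
import math

class GazeAdaptiveCue:
    """Per-frame gaze-contingent highlight: the target 'illuminates'
    while the gaze ray stays within an angular window around it.

    Thresholds are hypothetical, chosen only for illustration.
    """

    def __init__(self, angle_thresh_deg=3.0, dwell_thresh_s=0.05):
        self.angle_thresh = math.radians(angle_thresh_deg)
        self.dwell_thresh = dwell_thresh_s
        self.dwell = 0.0   # accumulated on-target time (s)
        self.lit = False   # current highlight state

    def update(self, gaze_dir, target_dir, dt):
        """gaze_dir/target_dir: unit 3-vectors from the eye; dt: frame time (s)."""
        cos_angle = sum(g * t for g, t in zip(gaze_dir, target_dir))
        angle = math.acos(max(-1.0, min(1.0, cos_angle)))
        on_target = angle < self.angle_thresh
        # Require a short dwell before lighting up, to filter out
        # saccades that merely cross the target
        self.dwell = self.dwell + dt if on_target else 0.0
        self.lit = self.dwell >= self.dwell_thresh
        return self.lit
```

At 90 Hz with these example thresholds, the highlight switches on after roughly five consecutive on-target frames and extinguishes as soon as gaze leaves the target, giving the performer immediate feedback about fixation stability.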
A clear limitation of this initial study is the short duration of the gaze cueing intervention. We chose an abbreviated training duration for this initial work to examine whether performance benefits would be likely to occur in more extended training protocols, before any comprehensive long-term interventions are implemented. Although thirty trials proved adequate to induce beneficial changes in quiet eye duration and putting kinematics, this was probably inadequate to yield enhanced performance outcomes. It is also unclear whether the observed benefits to gaze control would persist over time. Consequently, further studies are needed to confirm whether the changes in gaze and motor control would lead to performance effects over a more protracted training period (e.g., a multiple-session study) with follow-up retention tests, and whether they can be easily applied to other skills. While the present results are promising, it also remains to be established whether this type of personalised and automated gaze-adaptive approach is as effective as more formal eye movement training regimes, so future studies may look to directly compare these methods.
5 Conclusions
VR is a highly promising training tool in many industries; not only can it replace existing training that is more expensive or dangerous, but it can open up new pedagogical opportunities. Drawing on recent affective computing and adaptive VR innovations in the literature, we explored one of these opportunities in the form of a simple gaze-adaptive golf putting environment. This work was the first application of attention computing and adaptive VR to sports skills. The results were promising, as gaze-adaptive cues generated benefits for both gaze (quiet eye) and motor control (club kinematics) after only 30 training putts. Gaze-adaptive methods like the one in this study could be easily implemented into existing XR training applications as a way to improve movement profiles or training outcomes. These applications could be relevant for a multitude of related contexts, such as for visually guided skills in surgery, weapons handling, driving, and piloting aircraft or drones.
Data availability and code
All relevant data and code are available online at: https://osf.io/jdums/
Notes
A manipulation check confirmed that the pressure manipulation was successful in increasing participants' self-reported anxiety (see supplementary files for details: https://osf.io/jdums/?view_only=c2ea057aef0e4624b4285d847c73f06b).
The model intercept (i.e., reference category) corresponded to Condition = Baseline and Group = Control. This is the case for all analyses so is not reported each time.
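For readers unfamiliar with treatment (dummy) coding, the note above means each fixed-effect coefficient is estimated as a difference from the Baseline/Control cell. The helper below is a hypothetical Python illustration of how such a design row is built (the level names follow the text, but the function itself merely stands in for what lme4's default contrasts do automatically):

```python
def design_row(condition, group,
               cond_levels=("Baseline", "Post", "Pressure"),
               group_levels=("Control", "Adaptive")):
    """Treatment-coded fixed-effects row with the first level of each
    factor as the reference category.

    Column order: [intercept, Post, Pressure, Adaptive]. The intercept
    column is always 1, so its coefficient estimates the mean of the
    Baseline/Control reference cell; every other coefficient is a
    difference from that cell.
    """
    row = [1]  # intercept
    row += [1 if condition == lvl else 0 for lvl in cond_levels[1:]]
    row += [1 if group == lvl else 0 for lvl in group_levels[1:]]
    return row
```

An observation from the reference cell maps to `[1, 0, 0, 0]`, so its model prediction is the intercept alone, which is why the intercept corresponds to Condition = Baseline and Group = Control.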
References
Acock AC (2014) A Gentle Introduction to Stata, 4th edn. Stata Press, Texas
Barnett SM, Ceci SJ (2002) When and where do we apply what we learn?: A taxonomy for far transfer. Psychological Bulletin 128(4):612–637. https://doi.org/10.1037/0033-2909.128.4.612
Bates D, Mächler M, Bolker B, Walker S (2014) Fitting linear mixed-effects models using lme4. arXiv:1406.5823 [Stat]. http://arxiv.org/abs/1406.5823
Ben Abdessalem H, Frasson C (2017) Real-time Brain Assessment for Adaptive Virtual Reality Game: A Neurofeedback Approach. In: Frasson C, Kostopoulos G (eds) Brain Function Assessment in Learning. Springer International Publishing, pp 133–143
Bird JM (2019) The use of virtual reality head-mounted displays within applied sport psychology. J Sport Psychol Action 1–14. https://doi.org/10.1080/21520704.2018.1563573
Bowman DA, McMahan RP (2007) Virtual reality: How much immersion is enough? Computer 40(7):36–43. https://doi.org/10.1109/MC.2007.257
Brams S, Ziv G, Levin O, Spitz J, Wagemans J, Williams AM, Helsen WF (2019) The relationship between gaze behavior, expertise, and performance: A systematic review. Psychol Bullet 145(10):980–1027. https://doi.org/10.1037/bul0000207
Causer J, Holmes PS, Williams AM (2011) Quiet Eye Training in a Visuomotor Control Task. Med Sci Sports Exerc 43(6):1042–1049. https://doi.org/10.1249/MSS.0b013e3182035de6
Causer J, Hayes SJ, Hooper JM, Bennett SJ (2017) Quiet eye facilitates sensorimotor preprograming and online control of precision aiming in golf putting. Cogn Process 18(1):47–54. https://doi.org/10.1007/s10339-016-0783-4
Cooke A, Kavussanu M, McIntyre D, Ring C (2012) Psychological, muscular and kinematic factors mediate performance under pressure. Psychophysiology 47:1109–1118. https://doi.org/10.1111/j.1469-8986.2010.01021.x
Cox RH, Martens MP, Russell WD (2003) Measuring Anxiety in Athletics: The Revised Competitive State Anxiety Inventory–2. J Sport Exerc Psychol 25(4):519–533. https://doi.org/10.1123/jsep.25.4.519
Craig CM, Delay D, Grealy MA, Lee DN (2000) Guiding the swing in golf putting. Nature 405(6784):6784. https://doi.org/10.1038/35012690
de Brouwer AJ, Flanagan JR, Spering M (2021) Functional Use of Eye Movements for an Acting System. Trends Cogn Sci. https://doi.org/10.1016/j.tics.2020.12.006
Delay D, Nougier V, Orliaguet J-P, Coello Y (1997) Movement Control in Golf Putting. Human Mov Sci 16(5). https://doi.org/10.1016/S0167-9457(97)00008-0
Filippini C, Di Crosta A, Palumbo R, Perpetuini D, Cardone D, Ceccato I, Di Domenico A, Merla A (2022) Automated Affective Computing Based on Bio-Signals Analysis and Deep Learning Approach. Sensors 22(5):5. https://doi.org/10.3390/s22051789
Fooken J, Spering M (2019) Decoding go/no-go decisions from eye movements. J Vis 19(2):5–5. https://doi.org/10.1167/19.2.5
Franks IM, Sanderson DJ, Van Donkelaar P (1990) A comparison of directly recorded and derived acceleration data in movement control research. Human Mov Sci 9(6):573–582. https://doi.org/10.1016/0167-9457(90)90017-8
Grant ER, Spivey MJ (2003) Eye Movements and Problem Solving: Guiding Attention Guides Thought. Psychol Sci 14(5):462–466. https://doi.org/10.1111/1467-9280.02454
Guadagnoli MA, Lee TD (2004) Challenge Point: A framework for conceptualizing the effects of various practice conditions in motor learning. J Motor Behav 36(2):212–224. https://doi.org/10.3200/JMBR.36.2.212-224
Harris DJ, Bird JM, Smart AP, Wilson MR, Vine SJ (2020) A framework for the testing and validation of simulated environments in experimentation and training. Front Psychol 11:605. https://doi.org/10.3389/fpsyg.2020.00605
Harris DJ, Buckingham G, Wilson MR, Brookes J, Mushtaq F, Mon-Williams M, Vine SJ (2020) The effect of a virtual reality environment on gaze behaviour and motor skill learning. Psychol Sport Exerc 101721. https://doi.org/10.1016/j.psychsport.2020.101721
Harris D, Wilson M, Vine S (2020) A critical analysis of the functional parameters of the quiet eye using immersive virtual reality. J Exp Psychol Human Percept Perform 47(2):308–321. https://doi.org/10.1037/xhp0000800
Harris DJ, Buckingham G, Wilson MR, Brookes J, Mushtaq F, Mon-Williams M, Vine SJ (2021) Exploring sensorimotor performance and user experience within a virtual reality golf putting simulator. Virtual Reality 25(3):647–654. https://doi.org/10.1007/s10055-020-00480-4
Harris DJ, Arthur T, Kearse J, Olonilua M, Hassan EK, De Burgh TC, Wilson MR, Vine SJ (2023) Exploring the role of virtual reality in military decision training. Front Virtual Reality 4. https://doi.org/10.3389/frvir.2023.1165030
Harris DJ, Wilson MR, Jones MI, De Burgh TC, Mundy D, Arthur T, Olonilua M, Vine SJ (2023) An investigation of feed-forward and feedback eye movement training in immersive virtual reality. J Eye Mov Res 15(3):7
Hashimoto DA, Petrusa E, Phitayakorn R, Valle C, Casey B, Gee D (2018) A proficiency-based virtual reality endoscopy curriculum improves performance on the fundamentals of endoscopic surgery examination. Surg Endosc 32(3):1397–1404. https://doi.org/10.1007/s00464-017-5821-5
Hayhoe MM (2017) Vision and Action. Ann Rev Vis Sci 3(1):389–413. https://doi.org/10.1146/annurev-vision-102016-061437
Horn RR, Marchetto JD (2020) Approximate Target Pre-Cueing Reduces Programming Quiet Eye and Movement Preparation Time: Evidence for Parameter Pre-Programming? Res Q Exerc Sport 0(0):1–9. https://doi.org/10.1080/02701367.2020.1782813
Jacobson N, Berleman-Paul Q, Mangalam M, Kelty-Stephen DG, Ralston C (2021) Multifractality in postural sway supports quiet eye training in aiming tasks: A study of golf putting. Human Mov Sci 76:102752. https://doi.org/10.1016/j.humov.2020.102752
Janelle CM, Champenoy JD, Coombes SA, Mousseau MB (2003) Mechanisms of attentional cueing during observational learning to facilitate motor skill acquisition. J Sports Sci 21(10):825–838. https://doi.org/10.1080/0264041031000140310
Jarodzka H, Balslev T, Holmqvist K, Nyström M, Scheiter K, Gerjets P, Eika B (2012) Conveying clinical reasoning based on visual observation via eye-movement modelling examples. Instr Sci 40(5):813–827. https://doi.org/10.1007/s11251-012-9218-5
Jarodzka H, van Gog T, Dorr M, Scheiter K, Gerjets P (2013) Learning to see: Guiding students’ attention via a Model’s eye movements fosters learning. Learn Instruct 25:62–70. https://doi.org/10.1016/j.learninstruc.2012.11.004
Jensen L, Konradsen F (2018) A review of the use of virtual reality head-mounted displays in education and training. Educ Inf Technol 23(4):1515–1529. https://doi.org/10.1007/s10639-017-9676-0
Jin S-AA (2012) “Toward Integrative Models of Flow”: Effects of Performance, Skill, Challenge, Playfulness, and Presence on Flow in Video Games. J Broadcast Electron Media 56(2):169–186. https://doi.org/10.1080/08838151.2012.678516
Kaplan AD, Cruit J, Endsley M, Beers SM, Sawyer BD, Hancock PA (2020) The effects of virtual reality, augmented reality, and mixed reality as training enhancement methods: a meta-analysis. Human Factors 0018720820904229. https://doi.org/10.1177/0018720820904229
Kozak JJ, Hancock PA, Arthur EJ, Chrysler ST (1993) Transfer of training from virtual reality. Ergonomics 36(7):777–784. https://doi.org/10.1080/00140139308967941
Krassanakis V, Filippakopoulou V, Nakos B (2014) EyeMMV toolbox: An eye movement post-analysis tool based on a two-step spatial dispersion threshold for fixation identification. J Eye Mov Res 7(1). https://doi.org/10.16910/jemr.7.1.1
Lebeau J-C, Liu S, Sáenz-Moncaleano C, Sanduvete-Chaves S, Chacón-Moscoso S, Becker BJ, Tenenbaum G (2016) Quiet Eye and performance in sport: A meta-analysis. J Sport Exerc Psychol 38(5):441–457. https://doi.org/10.1123/jsep.2015-0123
Levac DE, Huber ME, Sternad D (2019) Learning and transfer of complex motor skills in virtual reality: A perspective review. J NeuroEng Rehabil 16(1):121. https://doi.org/10.1186/s12984-019-0587-8
Liu J-S, Wang P, Tversky B, Feiner S (2022) Adaptive Visual Cues for Guiding a Bimanual Unordered Task in Virtual Reality. IEEE Int Symp Mixed Augmented Reality (ISMAR) 2022:431–440. https://doi.org/10.1109/ISMAR55827.2022.00059
Lüdecke D, Ben-Shachar M, Patil I, Waggoner P, Makowski D (2021) performance: An R Package for Assessment, Comparison and Testing of Statistical Models. J Open Source Softw 6:3139. https://doi.org/10.21105/joss.03139
Mackenzie SJ, Evans DB (2010) Validity and reliability of a new method for measuring putting stroke kinematics using the TOMI® system. J Sports Sci 28(8):891–899. https://doi.org/10.1080/02640411003792711
Mackenzie SJ, Foley SM, Adamczyk AP (2011) Visually focusing on the far versus the near target during the putting stroke. J Sports Sci 29(12):1243–1251. https://doi.org/10.1080/02640414.2011.591418
Makransky G, Mayer RE (2022) Benefits of taking a virtual field trip in immersive virtual reality: evidence for the immersion principle in multimedia learning. Educ Psychol Rev 34(3):1771–1798. https://doi.org/10.1007/s10648-022-09675-4
Makransky G, Terkildsen TS, Mayer RE (2019) Adding immersive virtual reality to a science lab simulation causes more presence but less learning. Learn Instruct 60:225–236. https://doi.org/10.1016/j.learninstruc.2017.12.007
Mangalam M, Yarossi M, Furmanek MP, Krakauer JW, Tunik E (2023) Investigating and acquiring motor expertise using virtual reality. J Neurophysiol 129(6):1482–1491. https://doi.org/10.1152/jn.00088.2023
Mann DTY, Williams AM, Ward P, Janelle CM (2007) Perceptual-cognitive expertise in sport: a meta-analysis. J Sport Exerc Psychol 29(4):457–478. https://doi.org/10.1123/jsep.29.4.457
McAnally K, Wallwork K, Wallis G (2022) The efficiency of visually guided movement in real and virtual space. Virtual Reality. https://doi.org/10.1007/s10055-022-00724-5
Michalski SC, Szpak A, Loetscher T (2019) Using virtual environments to improve real-world motor skills in sports: a systematic review. Front Psychol 10. https://doi.org/10.3389/fpsyg.2019.02159
Michalski SC, Szpak A, Saredakis D, Ross TJ, Billinghurst M, Loetscher T (2019) Getting your game on: Using virtual reality to improve real table tennis skills. PLoS ONE 14(9):e0222351
Miles CAL, Vine SJ, Wood G, Vickers JN, Wilson MR (2014) Quiet eye training improves throw and catch performance in children. Psychol Sport Exerc 15(5):511–515. https://doi.org/10.1016/j.psychsport.2014.04.009
Moore LJ, Vine SJ, Cooke A, Ring C, Wilson MR (2012) Quiet eye training expedites motor learning and aids performance under heightened anxiety: The roles of response programming and external attention. Psychophysiology 49(7):1005–1015. https://doi.org/10.1111/j.1469-8986.2012.01379.x
Moore LJ, Vine SJ, Wilson MR, Freeman P (2012) The effect of challenge and threat states on performance: An examination of potential mechanisms. Psychophysiology 49(10):1417–1425. https://doi.org/10.1111/j.1469-8986.2012.01449.x
Moore LJ, Vine SJ, Freeman P, Wilson MR (2013) Quiet eye training promotes challenge appraisals and aids performance under elevated anxiety. Int J Sport Exerc Psychol 11(2):169–183. https://doi.org/10.1080/1612197X.2013.773688
Nalanagula D, Greenstein JS, Gramopadhye AK (2006) Evaluation of the effect of feedforward training displays of search strategy on visual search performance. Int J Ind Ergon 36(4):289–300. https://doi.org/10.1016/j.ergon.2005.11.008
Niehorster DC, Li L, Lappe M (2017) The accuracy and precision of position and orientation tracking in the htc vive virtual reality system for scientific research. I-Perception 8(3):2041669517708205. https://doi.org/10.1177/2041669517708205
Parong J, Mayer RE (2018) Learning science in immersive virtual reality. J Educ Psychol 110(6):785–797. https://doi.org/10.1037/edu0000241
Pastel S, Marlok J, Bandow N, Witte K (2022) Application of eye-tracking systems integrated into immersive virtual reality and possible transfer to the sports sector—a systematic review. Multimed Tools Appl. https://doi.org/10.1007/s11042-022-13474-y
R Core Team (2017) R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna URL https://www.R-project.org/
Razeghi R, Arsham S, Movahedi A, Sammaknejad N (2020) The effect of visual illusion on performance and quiet eye in autistic children. Early Child Dev Care 0(0):1–9. https://doi.org/10.1080/03004430.2020.1802260
Sadasivan S, Greenstein JS, Gramopadhye AK, Duchowski AT (2005) Use of eye movements as feedforward training for a synthetic aircraft inspection task. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp 141–149). https://doi.org/10.1145/1054972.1054993
Schuetz I, Fiehler K (2022) Eye tracking in virtual reality: Vive pro eye spatial accuracy, precision, and calibration reliability. J Eye Mov Res 15(3):3. https://doi.org/10.16910/jemr.15.3.3
Seinfeld S, Arroyo-Palacios J, Iruretagoyena G, Hortensius R, Zapata LE, Borland D, de Gelder B, Slater M, Sanchez-Vives MV (2018) Offenders become the victim in virtual reality: Impact of changing perspective in domestic violence. Sci Rep 8(1):1. https://doi.org/10.1038/s41598-018-19987-7
Seymour NE, Gallagher AG, Roman SA, O’Brien MK, Bansal VK, Andersen DK, Satava RM (2002) Virtual Reality Training Improves Operating Room Performance. Ann Surg 236(4):458–464
Sim M, Kim J-U (2010) Differences between experts and novices in kinematics and accuracy of golf putting. Human Mov Sci 29(6):932–946. https://doi.org/10.1016/j.humov.2010.07.014
Słowiński P, Baldemir H, Wood G, Alizadehkhaiyat O, Coyles G, Vine S, Williams G, Tsaneva-Atanasova K, Wilson M (2019) Gaze training supports self-organization of movement coordination in children with developmental coordination disorder. Sci Rep 9(1):1. https://doi.org/10.1038/s41598-018-38204-z
Srivatsa A (2023) Attention computing—What is it and what it means? https://www.tobii.com/blog/attention-computing-what-is-it-and-what-could-it-mean-for-you
Tao J, Tan T (2005) Affective computing: a review. In: Tao J, Tan T, Picard RW (eds) Affective computing and intelligent interaction. Springer, pp 981–995. https://doi.org/10.1007/11573548_125
Thomas O, Hanton S, Jones G (2002) An alternative approach to short-form self-report assessment of competitive anxiety: a research note. Int J Sport Psychol 33(3):325–336
Torkington J, Smith SGT, Rees BI, Darzi A (2001) Skill transfer from virtual reality to a real laparoscopic task. Surg Endosc 15(10):1076–1079. https://doi.org/10.1007/s004640000233
Tremblay J, Bouchard B, Bouzouane A (2010) Adaptive game mechanics for learning purposes—making serious games playable and fun. 2: 470
Tunga Y, Cagiltay K (2023) Looking through the model’s eye: a systematic review of eye movement modeling example studies. Educ Inf Technol 28:9607–9633. https://doi.org/10.1007/s10639-022-11569-5
Vaughan N, Gabrys B, Dubey VN (2016) An overview of self-adaptive technologies within virtual reality training. Comput Sci Rev 22:65–87. https://doi.org/10.1016/j.cosrev.2016.09.001
Vickers JN (1992) Gaze control in putting. Perception 21(1):117–132. https://doi.org/10.1068/p210117
Vickers JN (1996) Visual control when aiming at a far target. J Exp Psychol: Human Percept Perform 22(2):342–354. https://doi.org/10.1037/0096-1523.22.2.342
Vickers JN (2007) Perception, cognition, and decision training: The quiet eye in action. Human Kinetics
Vickers JN (2009) Advances in coupling perception and action: the quiet eye as a bidirectional link between gaze, attention, and action. In: Raab M, Johnson JG, Heekeren HR (eds) Progress in Brain Research, vol 174. Elsevier, pp 279–288. https://doi.org/10.1016/S0079-6123(09)01322-3
Vickers JN (2011) Mind over muscle: the role of gaze control, spatial cognition, and the quiet eye in motor expertise. Cogn Process 12(3):219–222. https://doi.org/10.1007/s10339-011-0411-2
Vine SJ, Wilson MR (2011) The influence of quiet eye training and pressure on attention and visuo-motor control. Acta Psychol 136(3):340–346. https://doi.org/10.1016/j.actpsy.2010.12.008
Vine SJ, Moore LJ, Wilson MR (2011) Quiet eye training facilitates competitive putting performance in elite golfers. Front Psychol 2. https://doi.org/10.3389/fpsyg.2011.00008
Vine SJ, Moore LJ, Wilson MR (2014) Quiet eye training: The acquisition, refinement and resilient performance of targeting skills. Eur J Sport Sci 14(1):S235–S242. https://doi.org/10.1080/17461391.2012.683815
Walters-Symons R, Wilson M, Klostermann A, Vine S (2018) Examining the response programming function of the Quiet Eye: Do tougher shots need a quieter eye? Cogn Process 19(1):47–52. https://doi.org/10.1007/s10339-017-0841-6
Wijewickrema S, Ma X, Piromchai P, Briggs R, Bailey J, Kennedy G, O'Leary S (2018) Providing automated real-time technical feedback for virtual reality based surgical training: is the simpler the better? In: Penstein Rosé C, Martínez-Maldonado R, Hoppe HU, Luckin R, Mavrikis M, Porayska-Pomsta K, McLaren B, du Boulay B (eds) Artificial Intelligence in Education. Springer International Publishing, pp 584–598. https://doi.org/10.1007/978-3-319-93843-1_43
Williams AM, Singer RN, Frehlich SG (2002) Quiet eye duration, expertise, and task complexity in near and far aiming tasks. J Motor Behav 34(2):197–207. https://doi.org/10.1080/00222890209601941
The use of gaze training to expedite motor skill acquisition. In: Handbook of Sport Neuroscience and Psychophysiology. Routledge
Wolf J, Lohmeyer Q, Holz C, Meboldt M (2021) Gaze comes in handy: predicting and preventing erroneous hand actions in ar-supported manual tasks. IEEE Int Symp Mixed Augmented Reality (ISMAR) 2021:166–175. https://doi.org/10.1109/ISMAR52148.2021.00031
Yarossi M, Mangalam M, Naufel S, Tunik E (2021) Virtual Reality as a Context for Adaptation. Front Virtual Reality 2:139. https://doi.org/10.3389/frvir.2021.733076
Zahabi M, Abdul Razak AM (2020) Adaptive virtual reality-based training: A systematic literature review and framework. Virtual Reality 24(4):725–752. https://doi.org/10.1007/s10055-020-00434-w
Gray R (2017) Transfer of Training from Virtual to Real Baseball Batting. Front Psychol 8:2183. https://doi.org/10.3389/fpsyg.2017.02183
Funding
This work was supported by a Leverhulme Early Career Fellowship awarded to DH. The funders played no role in the design or execution of the research.
Author information
Authors and Affiliations
Contributions
DH: Data curation; Formal Analysis; Visualization; Writing – original draft. SV: Conceptualization; Supervision; Writing – review and editing. MW: Conceptualization; Supervision; Writing – review and editing. TA: Conceptualization; Methodology; Data curation; Formal Analysis; Writing – original draft.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no financial or competing interests related to this work.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Harris, D., Donaldson, R., Bray, M. et al. Attention computing for enhanced visuomotor skill performance: Testing the effectiveness of gaze-adaptive cues in virtual reality golf putting. Multimed Tools Appl 83, 60861–60879 (2024). https://doi.org/10.1007/s11042-023-17973-4