Encouraging clinicians to consider feedback from outcome measures as part of their routine practice is an approach that is more than likely here to stay. It is an evidence-based practice for improving patient care and, as the literature suggests, is associated with better patient outcomes (Carlier et al. 2012; Knaup et al. 2009). Yet there is also evidence of real challenges in implementation wherever this approach has been attempted (Bickman et al. 2015; Gleacher et al. 2015).

As defined in the introduction to this special issue, two shorthand terms have been used to refer to this process: routine outcome monitoring (ROM) and measurement feedback systems. The former emphasizes the importance of collecting data that inform an understanding of outcomes, whether from client feedback or other sources, and is widely used in the United Kingdom (UK). The latter emphasizes the use of systems to give feedback from those accessing services to inform practice and is widely used in the United States (US). In practice, those promoting these approaches share a common commitment to collecting information and feeding it back to clinicians, service managers, service users, and others in as close to real time as possible to inform decision making.

These systems inform practice-based evidence, as they can be used to reflect on the provision of therapy and, in turn, to support quality improvement. Consideration of patient-reported outcome measures (PROMs) and patient-reported experience measures (PREMs) in particular may promote positive patient experience and patient engagement in care by fostering patient-clinician communication. The overarching drivers for clinicians and services to use feedback from outcome measures are both bottom-up, to support decision making in therapy, and top-down, to demonstrate service value to commissioners.

The aim of this special section is to add to the literature on implementing and sustaining the appropriate use of PROMs and PREMs. We sought to synthesize international expertise from the field in the UK, the US, and the Netherlands on approaches to overcoming implementation challenges, without losing sight of clinician and service user perspectives.

Turning to the first article in this special section, de Jong draws on feedback theory to summarize the literature on using PROMs, and then reflects on her own experience in adult services in the Netherlands. Although the use of PROMs data may be a promising intervention to enhance therapeutic outcomes, de Jong notes a number of characteristics that may moderate its effectiveness, including features of the feedback itself, the recipient, and the organization. Building on this, Edbrooke-Childs, Gondek, Deighton, Fonagy, and Wolpert explore whether patient demographic and case characteristics are associated with the likelihood of using sessional monitoring. Sessional monitoring was more likely for common presentations, such as mood and anxiety problems, but less likely for more complex cases, such as those involving youths under state care or in need of social service input. Next, Gondek and colleagues report on a systematic review of studies of feedback from outcome measures. In over half of the 32 included studies, participants receiving feedback showed significantly greater treatment effectiveness on at least one outcome variable. Feedback was particularly effective for not-on-track patients or when it was provided to both clinicians and patients.

Mellor-Clark, Cross, Macdonald, and Skjulsvik then provide an honest and invaluable reflection on their wealth of experience of implementing the use of PROMs in adult services in the UK. Summarizing the implementation science literature and the experiences of counterparts in the US, they discuss their implementation model, which focusses on meta-ROM: monitoring clinicians’ monitoring and planning for cases in which ROM use veers off-track. Complementing Mellor-Clark et al., Douglas, Button, and Casey reflect on their experience from mental health services in the US. They explain the three key user-centred, theory-based principles that underpin the adoption and sustainability of using and feeding back PROMs in their contextualized feedback system: integrating outcomes monitoring and feedback with clinical values and workflow, promoting the ‘golden thread’ of data-informed decision making throughout a healthcare organization, and harnessing innovation for sustainability.

In the next article, Fleming, Jones, Bradley, and Wolpert focus on their challenging experience of sustaining use of PROMs by outlining the Child Outcomes Research Consortium (CORC) in child mental health services in the UK and elsewhere. This article introduces ongoing research using the CORC dataset, where they are able to “close the loop, turning practice-based evidence to evidence-based practice” (Fleming et al. 2016, p. 4).

Edbrooke-Childs, Wolpert, and Deighton then present findings from pre-post observational data from clinicians who attended training to use routine outcome measures in child mental health. They argue that such training may promote the acceptability and clinical utility of PROMs, and their data show that clinicians post-training had more positive attitudes and higher levels of self-efficacy related to using measures and providing feedback.

The last article in this special section comes from the other side of the fence, bringing in clinician and service user voices. Wolpert, Curtis-Tyler, and Edbrooke-Childs report on a qualitative study of the complementary and divergent views of clinicians and service users in services for two chronic conditions: child mental health and child diabetes.

One theme that emerges from this collection of papers is that the use of PROMs and PREMs is only as good as the systems and processes that support it. In particular, Douglas et al. highlight the importance of access to data, appropriate feedback systems, and embedding the review of feedback into supervision to allow for appropriate consideration of outcome data. De Jong; Edbrooke-Childs, Wolpert, and Deighton; and Wolpert et al. particularly emphasize the importance of appropriate training in the delivery, interpretation, and feedback of outcome measures to ensure that they can act as a vehicle for improving outcomes, fostering collaborative practice, and promoting a greater sense of patient agency. Fleming et al. and Mellor-Clark et al. focus on the importance of ongoing monitoring and support to overcome the challenges of using PROMs and PREMs and to ensure that meaningful data can be collected. Edbrooke-Childs, Gondek et al. suggest that sessional monitoring may be more likely for cases presenting with common problems, such as mood or anxiety problems, but less likely for more complex cases, such as when youths are under state care or in need of social service input. Finally, Gondek et al. present evidence on when feedback from outcome measures may and may not be effective. We hope that future research will continue to explore these factors that may moderate the effect of PROMs use on patient outcomes.

By reflecting on a range of experiences of implementing PROMs and PREMs in child and adult mental health services, we hope that this special section adds to other learning from the field (Boswell et al. 2015), promoting research on the best ways to implement and use PROMs, and providing implementers, organizations, clinicians, and service users with some practice-based evidence for implementing evidence-based practice.