Gynecological Surgery 15:21

Didactic lectures versus simulation training: a randomised pilot evaluation of its impact on surgical skill

  • Prasanna Raj Supramaniam (corresponding author)
  • Monica Mittal
  • Rebecca Davies
  • Lee Nai Lim
  • Kirana Arambage
Open Access
Original Article

Abstract

Background

The Bristol Inquiry and national surveys have highlighted medicolegal concerns, a reduction in the training time available to trainees, and a move away from trainees performing procedures for the first time on patients. The Royal Colleges have taken an active role in advocating the use of simulation training before doctors undertake operative procedures on real patients. This study compares didactic lecture-based teaching with simulation training using a quantitative assessment tool.

Method

Randomised pilot study including 20 trainees within their first and second year of Obstetrics and Gynaecology training. The participants were randomised to one of two groups. Group A were taken through the 10 steps to perform a diagnostic laparoscopy with a lecture, followed by an assessment using a laparoscopic pelvic box trainer. Group B were given the same didactic lecture, followed by simulation training in a dry lab, prior to undergoing the same assessment as group A.

Findings

The study demonstrates a statistically significant improvement in the overall OSATS score for trainees undertaking a hands-on simulation training session prior to completing the diagnostic laparoscopy assessment (p = 0.023).

Conclusions

This study demonstrates that exposure to simulation training is superior to didactic lecture-based teaching for the acquisition of surgical skills.

Keywords

Diagnostic laparoscopy · Simulation training · Lecture-based teaching · Teaching modalities and principles · Objective structured assessment tools

Abbreviations

OSATS

Objective structured assessment of technical skills

OSCE

Objective structured clinical examinations

Background

The surgical training curriculum has attempted to mirror the significant advancements made in the field of laparoscopic surgery; however, the acquisition of competencies remains decades behind. Furthermore, surgical specialities have seen a gradual decline in the overall recruitment and retention of surgical trainees, with a career in a surgical field no longer appealing to many of today’s junior doctors. Whilst the calibre of surgeons being trained remains of the highest order, the pool from which these selections are made is slowly diminishing [1].

Adoption of the European Working Time Directive has reduced overall hands-on training opportunities in specialities dominated by practical skills, such as surgery. Previous surveys have highlighted the reduction in training time available to trainees [2]. This, combined with the higher skill set and competency requirements that accompany advancements in technology, and medicolegal concerns such as those highlighted by the Bristol Inquiry, shows that it is no longer acceptable for trainees to perform procedures for the first time on patients [3]. This has prompted the Royal Colleges to take an active role in advocating the use of simulation training before doctors undertake operative procedures on real patients. With advancements in laparoscopic technology, trainees face the additional responsibility of becoming competent in manoeuvring the equipment to achieve the desired surgical outcome.

Simulation training sessions can emulate a safe surgical environment for trainees to develop and hone specific skills. It also provides the ideal environment to become acquainted with new surgical equipment. The training surgeon can test and learn new skills without the risk of causing direct or indirect harm to the patient.

The adult learning curve is well studied in multiple other fields. It was first described in the aeronautical industry, where the time required to produce each unit of aircraft decreased at a uniform rate each time cumulative production doubled [4]. These insights have helped to shape the crux of simulation training: significant improvement is seen in camera navigation, manual dexterity and bimanual coordination following repeated exposure to dry lab training scenarios.
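Wright’s aeronautical observation can be written as a power law, T(n) = T(1)·n^(log₂ r), where r is the learning rate (0.8 for an “80% curve”, meaning each doubling of cumulative output cuts unit time to 80%). The sketch below uses illustrative numbers, not data from the study:

```python
import math

def wright_time(first_unit_time, n, learning_rate=0.8):
    """Time to produce the n-th unit under Wright's power-law learning curve.

    Each doubling of cumulative output multiplies the unit time by
    `learning_rate` (0.8 = an '80% curve'). All values are illustrative.
    """
    exponent = math.log(learning_rate, 2)  # negative for rates below 1
    return first_unit_time * n ** exponent

# Unit 2 takes 80% of unit 1's time; unit 4 takes 80% of unit 2's, and so on.
times = [wright_time(100.0, n) for n in (1, 2, 4, 8)]
```

The same shape, steep early gains that flatten with repetition, is what repeated dry lab exposure aims to exploit.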

Simulation training is an example of Ericsson’s theory of deliberate, repetitive practice. Although not originally conceived in a medical education context, trainers have long used this educational principle to develop surgical skills in new trainees. Furthermore, simulation training allows trainees to use Kolb’s model of experiential learning [5]: they can reflect on the successes and failures of surgical procedures, and this experience forms a base of active experimentation that they can then build upon.

Multiple strategies have been employed to teach surgical skills, some of which have consisted of didactic lectures. In this study, we compare the effects of a didactic lecture followed by hands-on simulation training with a didactic classroom lecture session only, through the evaluation of a diagnostic laparoscopy using a validated objective structured assessment of technical skills (OSATS) (Additional file 1), performed by an Obstetrics and Gynaecology junior trainee, and marked by a senior supervisor (Consultant/Senior specialty trainee).

Methods

This was a pilot randomised controlled study, conducted through Health Education England (Thames Valley Deanery), including 20 junior trainees within their first and second year of Obstetrics and Gynaecology training. The trainees were advised of the objectives and structure of the study at the beginning of their training session and were given an opportunity to ask questions. They were advised that declining to participate would not affect their learning opportunity for the day or negatively impact them, but would exclude them from the randomisation process. A consent form was then completed by each participant (trainee) in accordance with Good Clinical Practice and the consenting framework recommended by the Clinical Trials and Research Governance department of Oxford University Hospitals NHS Foundation Trust, Oxford.

All junior specialty trainees were then given the same didactic classroom lecture on how to perform a diagnostic laparoscopy in a stepwise manner. The lecture followed a methodical 10-step approach, reflected the key points of the assessment in performing a safe diagnostic laparoscopy, and taught troubleshooting techniques for each step. The 10 steps comprised general preparation of the equipment and anaesthesia; preparation of the patient (positioning, cleaning and draping, bladder catheterisation, bimanual examination); primary incision for Veress needle and trocar placement; insertion of the Veress needle and checks to ensure correct placement; gas insufflation and pneumoperitoneum; insertion of the primary trocar; familiarity and handling of the laparoscope with primary survey of the pelvis; insertion of the secondary ports; and closure techniques (Additional file 1).

Randomisation and assessment

The junior specialty trainees were randomised to one of two groups using a computer-based randomisation program. Group A consisted of 10 trainees who were taken through the 10 steps to perform a diagnostic laparoscopy safely in a lecture-based setting, followed by an assessment of their performance of the 10-step diagnostic laparoscopy using a laparoscopic pelvic box trainer and the validated simulation OSATS (Additional file 1). The assessment was carried out 15 min after the lecture-based teaching. The set-up of the box trainers was kept identical for each participant to reflect a mock theatre environment. The diagnostic laparoscopy was carried out in a pelvic box trainer which also contained a gynaecological model of the uterus, fallopian tubes and ovaries, and anatomical drawings of the major landmarks (Fig. 1: Laparoscopic view of pelvic anatomy model).
Fig. 1

Laparoscopic view of homemade pelvic box trainer with anatomical landmarks to improve training experience
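The computer-based 1:1 randomisation described above can be sketched as a simple shuffle-and-split. The study does not specify its actual program, so the function, identifiers and seed below are illustrative assumptions only:

```python
import random

def randomise_equal_groups(participants, seed=None):
    """Shuffle the participants and split them 1:1 into two groups.

    A minimal sketch of computer-based randomisation; `seed` is used
    only to make the illustration reproducible.
    """
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (group A, group B)

# 20 hypothetical trainee identifiers, matching the study's sample size
trainees = [f"trainee_{i:02d}" for i in range(1, 21)]
group_a, group_b = randomise_equal_groups(trainees, seed=42)
```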

Group B consisted of 10 trainees who were given the same didactic lecture alongside group A on how to perform a diagnostic laparoscopy safely, followed immediately by 60 min of simulation training in a dry lab on a laparoscopic pelvic box trainer, delivered by an experienced tutor at a 1:2 tutor-to-trainee ratio, prior to undergoing the same assessment as group A. The formal assessment was undertaken 60 min after completion of the simulation training. The lecture, simulation training and assessment were all undertaken on the same training day.

The OSATS was validated independently by two examiners who were consultants with more than 10 years of experience in advanced laparoscopic surgery. The simulation OSATS tool was marked by a senior supervisor (Consultant/Senior specialty trainee) who had completed a systematic assessment session to streamline the overall assessment. The assessors were not blinded to the randomisation process.

Inclusion criteria

Only trainees in their first and second year of obstetrics and gynaecology training were recruited into the study. Those that gave written informed consent were included.

Exclusion criteria

Any trainee beyond their second year of competency-based training, or who did not give written informed consent, was excluded.

Outcome measures

The outcomes were (quantitative evaluation):

  1. Total score for the OSATS: the maximum score that could be awarded for this parameter was 40, constructed from general and patient preparation (7 marks); incision and insertion of the Veress needle (9 marks); gas insufflation and pneumoperitoneum (6 marks); primary trocar placement (3 marks); laparoscopy and camera (6 marks); secondary ports (6 marks); and closure technique (3 marks)
  2. Familiarity score: overall familiarity with the instruments and laparoscopic stack (maximum score 10)
  3. Sequential score: ability to perform the procedure in a sequential manner (maximum score 10)
  4. Demonstration score: ability to demonstrate the procedure (maximum score 10). Most objective structured clinical examinations (OSCEs) depend on the candidate’s ability to showcase their knowledge and understanding of how to perform procedural skills. Whilst OSCE assessments have in the past rewarded candidates when certain procedural steps were mentioned but not physically undertaken, we assessed the specific ability to demonstrate the skill: no mark was awarded to a candidate who mentioned a procedural step without performing the task
  5. Global score: overall global impression by the assessor of the trainee performing a diagnostic laparoscopy (maximum score 10)
  6. Total score: the sum of scores 1–5 (maximum score 80).
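The marking scheme above can be captured in a small data structure. The sketch below (with component names of our own choosing, not taken from the assessment tool) simply checks that the component maxima sum to the stated totals:

```python
# Maximum marks per component, as listed in the outcome measures
MAX_SCORES = {
    "osats": 40,          # outcome 1
    "familiarity": 10,    # outcome 2
    "sequential": 10,     # outcome 3
    "demonstration": 10,  # outcome 4
    "global": 10,         # outcome 5
}

# Marks available in the seven OSATS sections (outcome 1 breakdown)
OSATS_SECTIONS = [7, 9, 6, 3, 6, 6, 3]

def total_score(scores):
    """Outcome 6: the sum of the five component scores (maximum 80)."""
    return sum(scores[component] for component in MAX_SCORES)
```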

Ethical approval

Approval for the study was sought from the Clinical Trials and Research Governance department of Oxford University Hospitals NHS Foundation Trust, Oxford. As the study was considered part of an improvement in training target, ethical approval was not deemed to be required.

Statistical analysis

Statistical analysis was performed using SPSS software. The data were analysed using the Mann–Whitney U test, as the scores were not assumed to follow a normal distribution. A power calculation using the mean test scores from each group indicated a minimum of nine participants in each group for 80% power at a 0.05 significance level to detect a 10% difference between the groups.
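The Mann–Whitney U statistic computed by SPSS counts, over all cross-group pairs of scores, how often one group’s score beats the other’s. A minimal pure-Python sketch of the statistic only (the p-value, obtained from the exact U distribution or a normal approximation, is omitted; the scores shown are illustrative, as the study’s per-trainee data are not published):

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic for two independent samples.

    U is the number of (x, y) pairs with x > y, counting ties as 0.5.
    U for one group plus U for the other always equals len(xs) * len(ys).
    """
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Illustrative OSATS-style scores only, not the study's raw data
u_a = mann_whitney_u([24, 26, 27, 28], [28, 29, 30, 31])
```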

Results

A total of 20 trainees were recruited into the study and the results were compared between groups A and B. There was no difference in the overall distribution of trainees and their level of training. The number of previous diagnostic laparoscopies undertaken was comparable, with a mean of 2.60 (standard deviation [SD] 1.265) for group A and 2.40 (SD 1.506) for group B. Trainees were also asked to disclose previous exposure to formal laparoscopy courses; no difference was demonstrated between the two groups, with the majority answering that they had not attended any such training in the past.

Table 1 illustrates the results for all groups.
Table 1

Summary of all results for both groups (with and without simulation training)

 

| Score (maximum)     | Group A: median (IQR; range) | Group B: median (IQR; range) | p value |
|---------------------|------------------------------|------------------------------|---------|
| OSATS (40)          | 26.50 (24–28; 13–29)         | 29.00 (28–30; 22–33)         | 0.023   |
| Familiarity (10)    | 7.50 (6–8; 0–8)              | 7.00 (6–9; 6–9)              | 0.436   |
| Sequential (10)     | 7.00 (6–8; 3–10)             | 8.50 (7–9; 6–9)              | 0.247   |
| Demonstration (10)  | 7.00 (6–8; 2–8)              | 8.50 (8–9; 7–9)              | 0.004   |
| Global (10)         | 7.00 (6–8; 0–8)              | 7.50 (7–9; 6–9)              | 0.052   |
| Total (80)          | 47.00 (44–49; 16–54)         | 53.00 (49–55; 43–60)         | 0.007   |
| OSATS + global (50) | 33.00 (32–35; 13–36)         | 37.00 (36–38; 29–42)         | 0.007   |

Simulation total OSATS score (Fig. 2: Simulation OSAT score with vs without simulation training)

The maximum score achievable for the OSATS assessment was 40. Group A demonstrated a median score of 26.50 (interquartile range 24 to 28), whilst group B had a median score of 29.00 (interquartile range 28 to 30). A statistically significant improvement was seen in group B, which undertook simulation training prior to the evaluation, compared with the group that received the didactic lecture only (p = 0.023).
Fig. 2

Simulation OSAT score with (group B) vs without (group A) simulation training. The scores are presented as medians with their 25th and 75th inter-quartile ranges

Simulation familiarity score (Fig. 3: Simulation familiarity score with vs without simulation training)

Each trainee in both groups was also assessed on their familiarity with the laparoscopic equipment. To ensure the reproducibility of this assessment, the box trainer, camera, laparoscope and instruments were set up to replicate a gynaecology theatre trolley. A maximum score of 10 was achievable in this part of the assessment. Group A had a median score of 7.50 (interquartile range 6 to 8) and group B had a median score of 7.00 (interquartile range 6 to 9). Whilst there appears to be some improvement in the overall range and minimum score for group B, this was not statistically significant (p = 0.436).
Fig. 3

Simulation familiarity score with (group B) vs without (group A) simulation training. The scores are presented as medians with their 25th and 75th inter-quartile ranges

Simulation sequential score

This assessed the trainees’ ability to undertake a diagnostic laparoscopy in a sequential manner. Group A had a median score of 7.00 (interquartile range 6 to 8) whilst group B had a median score of 8.50 (interquartile range 7 to 9). A small improvement was demonstrated in the overall median score for group B; however, this did not reach statistical significance (p = 0.247).

Simulation demonstration score (Fig. 4: Simulation demonstration score with vs without simulation training)

Trainees were simultaneously assessed on their ability to demonstrate the procedural steps. Candidates from group A scored a median of 7.00 (interquartile range 6 to 8) and group B a median of 8.50 (interquartile range 8 to 9). An overall improvement in the trainees’ ability to demonstrate the tasks was established, with the minimum score increasing from 2 to 7 and the median increasing by 1.50 marks. This was statistically significant (p = 0.004).
Fig. 4

Simulation demonstration score with (group B) vs without (group A) simulation training. The scores are presented as medians with their 25th and 75th inter-quartile ranges

Simulation global score (Fig. 5: Simulation global score with vs without simulation training)

This assessed the assessor’s overall global impression of how each participant performed in their laparoscopic OSATS evaluation. The maximum score achievable was 10. An improved global impression of those who underwent simulation training in addition to the didactic lecture was demonstrated: group A had a median score of 7.00 (interquartile range 6 to 8); group B had a median score of 7.50 (interquartile range 7 to 9). However, no statistically significant difference was noted (p = 0.052).
Fig. 5

Simulation global score with (group B) vs without (group A) simulation training. The scores are presented as medians with their 25th and 75th inter-quartile ranges

Total score (Fig. 6: Total score with vs without simulation training)

The total score was calculated as the cumulative sum of the OSATS, Global, Familiarity, Sequential and Demonstration scores for each individual trainee. The maximum achievable score was 80. A comparison between the two groups showed that group A had a median score of 47 (interquartile range 44 to 49), whilst group B had a median score of 53 (interquartile range 49 to 55). A substantial change in the minimum score between the groups was demonstrated, with a 27-mark improvement in the minimum score and a 6-mark increase in the overall median score in favour of group B (p = 0.007).
Fig. 6

Total score with (group B) vs without (group A) simulation training. The scores are presented as medians with their 25th and 75th inter-quartile ranges

Total score excluding global score

To reduce the risk of perception bias by the assessor, the data were further evaluated as the cumulative sum of the OSATS, Familiarity, Sequential and Demonstration scores, excluding the Global assessment. The global score was excluded to limit bias from the assessors’ subjective impression that a trainee who had undergone simulation training would perform the skill better. The remaining evaluation parameters were objective and therefore carried a reduced risk of perception bias. Group A had a median score of 47 from a total of 70 (interquartile range 44 to 49) versus group B, who had a median score of 53 (interquartile range 49 to 55). A 6-mark increase in favour of group B was demonstrated for both the minimum and median scores. This was statistically significant (p = 0.007).

Discussion

Advancements in medical simulation equipment have enabled participants to experience ‘Kolb’s learning environment’ in a simulated scenario as they would in real-time [5]. Simulation training is derived from and driven by active participation in conjunction with interaction and cooperation with others in the learning environment [6]. Through this process, the participant can conceptualise the protocol and implement their learned theories to promote their learning and commit the process to memory.

One of the limitations of this tool is cognitive load and the length of the session, as these can affect the amount of information subsequently committed to long-term memory [7]. To ensure that the trainees gained maximum benefit from the session, we divided the programme appropriately across a variety of teaching tools. The main objective of the teaching session was to ensure that the candidates were confident in performing a diagnostic laparoscopy. Exposure to hands-on teaching with low-fidelity trainers promotes the theory of experiential learning, which allows participants to enhance their decision-making in an emergency scenario. With repetitive practice through simulation, followed by reflective observation, the learner will be able to reach decisions effectively and efficiently [8]. The teacher, meanwhile, benefits from understanding the pitfalls in the scenario and can improve the teaching session using feedback from the active participation of motivated students.

To ensure that the OSATS assessment used was of an appropriate standard, we used the Angoff method [9]. The simulation OSATS tool was developed and validated in partnership with senior clinicians, and the marks were set to reflect what a candidate should achieve if they performed all the tasks. However, marks for a borderline candidate were not developed, as this tool was not used to differentiate between a pass and a fail but as a formative assessment.
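Had a pass mark been required, an Angoff-style cut score would average, per item, the judges’ estimated probabilities that a borderline candidate completes the item, then weight those averages by the marks available. The sketch below uses hypothetical judge estimates (the study deliberately did not set such a mark):

```python
def angoff_cut_score(judge_estimates, item_marks):
    """Angoff-style cut score.

    `judge_estimates[i]` holds each judge's estimated probability that a
    borderline candidate completes item i; the cut score is the
    mark-weighted sum of the per-item averages.
    """
    per_item = [sum(ps) / len(ps) for ps in judge_estimates]
    return sum(p * marks for p, marks in zip(per_item, item_marks))

# Two hypothetical judges rating three items worth 7, 9 and 6 marks
cut = angoff_cut_score([[0.8, 0.6], [0.5, 0.7], [0.9, 0.9]], [7, 9, 6])
```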

As this is a surgical skills assessment, it is vital that a candidate deemed competent to perform the procedure is clinically safe and aware of all the safety precautions, and of how to carry them out, prior to embarking on live surgery [10]. Each component of the assessment was evaluated by a senior clinician who is aware of the competencies of a junior trainee. Their experience and knowledge can in turn be translated into the probability with which they feel a borderline candidate should pass the assessment. As the standard-setting method is not relative in nature, the quality of the borderline candidate is in theory reproducible [11]. This means that, for a surgical assessment, every candidate who completes the examination successfully would have the minimum skill required to carry out the procedure safely in a clinical environment on a real patient, albeit with senior supervision.

The results demonstrate a statistically significant improvement in the overall OSATS score for trainees undertaking the hands-on simulation training session prior to completing the diagnostic laparoscopy assessment, highlighting the positive effects of simulation training [12, 13, 14]. Whilst there is further benefit to be gained from the learning curve and repetitive practice, this study has shown that exposure to simulation training is superior to didactic lecture-based teaching alone for surgical skill acquisition.

It is important to note that the study compared two learning methodologies and, in view of this, there was no baseline evaluation of the subjects’ knowledge. One can argue that the similar grade and level of training of the subjects would be representative of a similar baseline score.

One of the key advantages of the Angoff method is that the assessors performing the standard-setting exercise must be familiar with the procedure itself [9]. Ideally, they should also be able to perform the procedure themselves, as this aids in determining the characteristics of the borderline candidate. Given that the assessors are not detached from the skill the examination aims to test, one would expect their evaluation of the borderline candidate to be a true reflection of junior trainees in clinical practice.

Whilst one would ideally feel that the senior clinician is best positioned to evaluate and score candidates and to describe their characteristics, one should be aware of the potential for bias in this situation, especially in the absence of blinding to the randomisation process. Particularly in an operative skill assessment, it has previously been shown that clinicians who are sub-specialised in a field do not make ideal examiners for their field of expertise. This is secondary to their expectation of the borderline candidate, which is altered by their everyday practice and exposure; as such, they would be biased in their assessment, to the disadvantage of the generic candidate.

The subgroup analysis allows this teaching session to be evaluated more critically. The benefit of excluding the global evaluation score when comparing the total score between groups highlights another perspective on this learning environment. One could argue that the assessors might be biased, perceiving the candidate who has undergone simulation training to perform better in the assessment. Whilst this may be the case for the global impression, which is a subjective evaluation, the rest of the assessment is an objective quantitative assessment that required the candidates to demonstrate competency. The statistically significant increase in the overall score for group B candidates further supports the hypothesis that our study aimed to test.

One of the limitations of this study is that it does not assess trainees’ retention of surgical skill knowledge. However, there is sufficient evidence to suggest that retention is achieved by repetitive practice, which can be carried out with simulation training [15].

Based on Piaget’s theory, allowing small-group interaction during hands-on simulation training builds on an internal mental process that aids learning [16]. As described by Hutchinson in 2003 [17], this provides a safe learning environment that seeks to have a positive effect on learning for everyone. It also allows students to explore the gaps in their knowledge without feeling embarrassed or incompetent. A further limitation of this study is that repetitive exposure to the same lecture may potentially yield an increment in knowledge comparable to that seen through repetitive hands-on simulation practice.

Conclusion

The results demonstrate a statistically significant improvement in the overall OSATS score for trainees undertaking the hands-on simulation training session prior to completing the diagnostic laparoscopy assessment, highlighting the positive effects of simulation training. Whilst there is further benefit to be gained from the learning curve and repetitive practice, this study has shown that exposure to simulation training is superior to didactic lecture-based teaching alone for the acquisition of surgical skills.

Notes

Acknowledgments

The authors thank Stryker, and in particular Matthew Conduit, for the kind loan of the laparoscopic equipment for the training day.

Funding

Not applicable.

Availability of data and materials

Please contact the corresponding author for data requests.

Authors’ contributions

PRS, MM, RD and KA designed the study and collected the data; PRS and MM analysed the data; PRS, MM and RD generated the manuscript with LNL and KA overseeing the development process. All authors read and approved the final manuscript.

Ethics approval and consent to participate

Approval for the study was sought from the Clinical Trials and Research Governance department of Oxford University Hospitals NHS Foundation Trust, Oxford. As the study was considered part of an improvement in training target, ethical approval was not deemed to be required. A consent form was completed by each participant (trainee) in accordance with Good Clinical Practice and the consenting framework, recommended by the Clinical Trials and Research Governance department of Oxford University Hospitals NHS Foundation Trust, Oxford.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary material

Additional file 1: Validated OSATS (DOCX)

References

  1. Green R, Steven R, Haddow K (2017) Declining applications to surgical specialist training. The Royal College of Surgeons Bulletin 99(4):142–144. https://doi.org/10.1308/rcsbull.2017.141
  2. Bhatti NI, Ahmed A, Choi SS (2015) Identifying quality indicators of surgical training: a national survey. Laryngoscope 125:2685–2689. https://doi.org/10.1002/lary.25262
  3. The Report of the Public Inquiry into children’s heart surgery at the Bristol Royal Infirmary 1984–1995. Learning from Bristol. https://webarchive.nationalarchives.gov.uk/20090811143822/http:/www.bristol-inquiry.org.uk/final_report/the_report.pdf
  4. Wright TP (1936) Factors affecting the cost of airplanes. J Aeronaut Sci 3:122–128
  5. Kolb DA (1984) Experiential learning: experience as the source of learning and development. Prentice Hall, Englewood Cliffs
  6. Vygotsky LS (1978) Mind in society: the development of higher psychological processes. Harvard University Press, Cambridge
  7. Sweller J (1994) Cognitive load theory, learning difficulty, and instructional design. Learn Instr 4:293–312
  8. Schön D (1983) The reflective practitioner: how professionals think in action. Temple Smith, London
  9. Peterson CH, Schulz EM, Engelhard G Jr (2011) Reliability and validity of bookmark-based methods for standard setting: comparisons to Angoff-based methods in the National Assessment of Educational Progress. Educ Meas Issues Pract 30:3–14. https://doi.org/10.1111/j.1745-3992.2011.00200
  10. Kohn LT, Corrigan JM, Donaldson MS (1999) To err is human: building a safer health system. National Academy Press, Washington, DC
  11. Hambleton RK (1995) Setting standards on performance assessments: promising new methods and technical issues. Paper presented at the meeting of the American Psychological Association, New York
  12. Aggarwal R, Tully A, Grantcharov T, Larsen CR, Miskry T, Farthing A et al (2006) Virtual reality simulation training can improve technical skills during laparoscopic salpingectomy for ectopic pregnancy. BJOG 113:1382–1387
  13. Hyltander A, Liljegren E, Rhodin PH, Lonroth H (2002) The transfer of basic skills learned in a laparoscopic simulator to the operating room. Surg Endosc 16:1324–1328
  14. Ahlberg G, Enochsson L, Gallagher AG, Hedman L, Hogman C, McClusky DA III et al (2007) Proficiency-based virtual reality training significantly reduces the error rate for residents during their first 10 laparoscopic cholecystectomies. Am J Surg 193:797–804
  15. Ericsson KA, Krampe RT, Tesch-Römer C (1993) The role of deliberate practice in the acquisition of expert performance. Psychol Rev 100:363–406
  16. Gallagher JM, Reid DK (1981) The learning theory of Piaget and Inhelder. Brooks/Cole, Monterey
  17. Hutchinson L (2003) Educational environment. BMJ 326:810

Copyright information

© The Author(s). 2018

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Women’s and Children’s, Oxford University Hospitals NHS Foundation Trust, John Radcliffe Hospital, Oxford, UK
  2. Women’s and Children’s, Imperial College Healthcare NHS Trust, St Mary’s Hospital, Paddington, London, UK
