This page provides further information on the ‘Learning mechanisms’ theme of the design framework, including the background and supporting material and the development process.
Background and supporting material
One of our aims was to identify examples of evaluative work being undertaken within and between embedded research initiatives in order to shed light on whether these initiatives work as intended. Our data revealed that few initiatives sought to evaluate their effectiveness in achieving their intended outcomes. This gap was particularly prominent in the published literature (with one notable exception – Wye et al., 2019). Our interviews shed more light on the evaluative and learning mechanisms used within initiatives; these fell into three broad approaches: performance monitoring, formal evaluation, and informal learning and reflection.
Initiatives led by those within a health service setting tended to make use of performance monitoring mechanisms including key performance indicators and annual performance reviews (of the embedded researchers). These mechanisms seemed to be used because they fitted into wider organisational governance arrangements that often focused on maintaining funding levels and controlling resource allocation.
Initiatives that were funded and/or controlled by organisations with a strong research focus (e.g. national research funders, academic-practice partnerships) tended to make use of formal evaluations. These were usually focused on producing an in-depth understanding of how and why the initiative was (or was not) working and made use of formal evaluation methodologies. Both summative and formative approaches were used, and these often resulted in academic publications.
Some initiatives made use of more informal mechanisms for learning and reflection. These were usually understood to be developmental and formative in nature. Informal mechanisms included group or individual supervision, team meetings, workshops and learning sets. However, the main participants in such activities tended to be the embedded researchers themselves, with relatively little involvement from those leading or managing the initiative.
Wye, L., Cramer, H., Beckett, K., Farr, M., le May, A., Carey, J., Robinson, R., Anthwal, R., Rooney, J. and Baxter, H. (2019) ‘Collective knowledge brokering: the model and impact of an embedded team’, Evidence & Policy.
Development and adaptation process
This theme has undergone relatively little adaptation as a result of the workshop. The first sub-theme was originally named ‘key performance indicators’ and was re-named to better reflect a broader range of monitoring mechanisms. Monitoring and governance mechanisms were also discussed during the workshop as an area of importance when designing an initiative.
Other insights from the workshop included the importance of considering when learning will take place during the initiative and whose learning will be supported, and of acknowledging that learning will likely be emergent and experiential. These insights have been incorporated into the description of the theme.