Given the challenges of measuring outcomes beyond knowledge attainment in medical education training courses, as evaluators we have explored new opportunities for using simulation in outcome evaluations. In this interactive workshop, we will introduce commonly used frameworks such as Moore's model of continuing medical education outcomes and discuss the challenges of measuring higher-level outcomes (e.g., competencies, performance). Drawing on Moore's model and examples from our practice, we will illustrate how simulation can serve as a formal program evaluation method to capture outcomes and drive program and curriculum improvement. Participants will become familiar with data sources that can be leveraged through simulation and, through interactive knowledge exchange activities, will have the opportunity to consider how this methodology could apply to their own work measuring outcomes related to competencies and performance.
You will learn:
- about common medical education outcome frameworks that can guide program evaluations
- how simulation can be used to measure education outcomes beyond knowledge
- how different data sources can be used through simulation methodologies to measure educational outcomes
Erica McDiarmid is the Manager of the Centre for Addiction and Mental Health (CAMH) in Toronto and will be co-facilitating this workshop with Megha Bhavsar, Latika Nirula, and Alyssa Kelly. All four facilitators work on the Education Evaluation team at CAMH and have begun using simulation as a methodology for evaluating medical education programs. They have consulted for other groups within their organization on using simulation for evaluation purposes, and the approach is now being more broadly implemented. Their poster on this evaluation methodology won best poster at the 2016 CES conference, and a manuscript is under review.
You can contact Erica McDiarmid via email at firstname.lastname@example.org
Participants should have a basic understanding of current program evaluation methods and data sources, as well as basic knowledge of education evaluation. Some awareness of medical education will be beneficial but is not crucial for participating in the discussions.
Sunday, April 30 from 1:00 to 4:00 pm
Link to CE competencies for evaluators
- Understands the knowledge base of evaluation (theories, models, types, methods, and tools)
- Develops evaluation designs
- Defines evaluation methods
- Identifies data sources
- Develops reliable and valid measures/tools