Evaluation Rubrics Methodology


Summary

This workshop is the antidote to evaluations that get lost in indicators, metrics, observations, and stories. If evaluation feels more like a measurement or opinion-gathering exercise, or if you wonder whether there’s something clearer and more valuable you could be delivering for decision makers, then this workshop is for you. It covers (a) how to make your evaluation questions genuinely evaluative; and (b) hands-on experience creating awesome evaluative rubrics as a way of directly answering those questions, especially when there are a lot of ‘intangibles’ to capture.

Rubrics are an essential tool in evaluation-specific methodology; they help us say not just what the results were but how good, valuable, or important they were – and to show systematically and transparently how we got to those conclusions. Participants will learn why, when, and how to use rubrics methodology in different contexts and cultures, as well as some of the lesser-known tips.
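To make the idea concrete, here is a toy sketch of what an evaluative rubric can look like in code. This is purely illustrative and not Jane Davidson’s actual methodology; the level names, descriptors, and cut-points are all invented for the example.

```python
# Illustrative only: a toy evaluative rubric. Levels, descriptors, and
# thresholds are invented assumptions, not part of the workshop content.

# A rubric pairs each performance level with a descriptor of what
# evidence at that level looks like.
RUBRIC = [
    ("Excellent", "Strong, sustained outcomes for nearly all participants."),
    ("Good", "Solid outcomes for most participants; minor gaps."),
    ("Adequate", "Mixed outcomes; minimum expectations are met."),
    ("Poor", "Little or no evidence of meaningful outcomes."),
]

def rate(share_meeting_outcomes: float) -> str:
    """Map one piece of quantitative evidence (the share of participants
    meeting outcomes, 0.0-1.0) onto a rubric level."""
    thresholds = [0.85, 0.65, 0.40, 0.0]  # invented cut-points
    for (level, _descriptor), cutoff in zip(RUBRIC, thresholds):
        if share_meeting_outcomes >= cutoff:
            return level
    return RUBRIC[-1][0]

print(rate(0.90))  # Excellent
print(rate(0.50))  # Adequate
```

In practice, a real rubric synthesizes qualitative and quantitative evidence against rich descriptors rather than a single numeric cut-point; this sketch only shows the transparent, systematic mapping from evidence to an evaluative conclusion that the paragraph above describes.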

You will learn:

  • how to write a set of big-picture, overarching questions that are genuinely evaluative, to guide any evaluation
  • hot tips for designing evaluative rubrics, including the best ways to involve stakeholders in the process
  • why, when, and how to use evaluative rubrics to get direct, evaluative answers to these questions, especially when there are ‘intangibles’ to capture
  • dataviz options for results captured using rubrics

Facilitator

Dr. E. Jane Davidson is an internationally recognized evaluation specialist and thought leader, best known for developing evaluation rubrics as a methodology for drawing conclusions about quality and value. She has also made significant contributions in the areas of causal inference for qualitative and mixed methods, and in synthesis methodologies for evaluation.

Jane is a former Associate Director of the internationally recognized Evaluation Center at Western Michigan University, where she launched and directed the world’s first fully interdisciplinary Ph.D. in Evaluation. She was the 2005 recipient of the American Evaluation Association’s Marcia Guttentag Award.

Jane is sought after as a speaker for her signature approach of methodologically robust but refreshingly practical evaluation with breathtaking clarity. She has presented keynotes and invited workshops in the US, Canada, the UK, Denmark, Singapore, Brazil, South Africa, Australia, and New Zealand.

Jane is the author of Evaluation Methodology Basics: The nuts and bolts of sound evaluation (2005, Sage), which has sold widely in North America and internationally as a graduate text and practitioner guidebook. She has also published two minibooks, Actionable Evaluation Basics: Getting succinct answers to the most important questions (2012) and, more recently, Making the Important Measurable, Not the Measurable Important (with coauthor Joanne McEachen).

After 11 years back home in New Zealand contributing to the evaluation profession there, Jane recently moved back to the States and is now based in Seattle, where she loves being within a short drive of beautiful Vancouver!

Jane’s practice, Real Evaluation, offers insightful, cut-to-the-chase keynotes, seminars, workshops, coaching, tools, frameworks, and super-practical evaluation support to clients around the world. Find out more and sign up for Jane’s low-traffic, high-value newsletter at http://RealEvaluation.com!

Language

English

Level

Multilevel – for new and experienced evaluators looking to add rubrics methodology to their repertoire.

Prerequisites

Some evaluation experience

Schedule

Sunday, April 30 from 9:00 am to 12:00 pm

Link to CE competencies for evaluators

  • Understands the knowledge base of evaluation (theories, models, types, methods and tools)
  • Develops reliable and valid measures/tools
  • Analyzes and interprets data
  • Identifies the interests of all stakeholders
  • Attends to issues of evaluation use
  • Applies evaluation competencies to organization and program measurement challenges