BU-CTSI Tracking and Evaluation
Evaluation Plan Overview
For more information, please contact Dr. Deborah M. Fournier, Director of Evaluation for the Clinical and Translational Science Institute, at email@example.com.
The BU-CTSI Tracking and Evaluation Plan uses mixed methods in the systematic collection, analysis, and reporting of evaluation findings that:

1. inform decision-making about needed improvements to the institute during its implementation (formative evaluation);
2. demonstrate accountability for what the institute has achieved and for stewardship of funds (summative evaluation); and
3. monitor the continued adequacy of the evaluation plan and compliance with national program evaluation standards on utility, feasibility, propriety, and accuracy (meta-evaluation; see http://www.jcsee.org).
Both formative and summative evaluation reflect the perspective that the CTSA is a large-scale research enterprise that is diffuse and complicated (many moving parts across multiple sites, partners, and stakeholders) and complex (many interacting parts that respond and adapt to one another and, in doing so, continually generate newly changed practices that affect the CTSA system as a whole: a moving target). To address this complex, adaptive nature, the evaluation approach weaves evaluation efforts into the development of the research enterprise so that evaluation can support the progressive unfolding of the CTSA as it builds capacity across key components and brings them to scale.

This utilization-focused approach, which intertwines evaluation with development, carries two key implications. First, the evaluator's role is both to assess and to support the development of strategies and activities, using rigorous, high-quality data that serve as rapid feedback. The use of tracking and evaluation data is not a report or a quarterly event but a process of frequent interaction with the PI, directors, coordinating staff, and stakeholders (i.e., process use rather than mere delivery of results). The aim is to encourage lively debate, clarify the use of existing strategies, question assumptions, grapple with unexpected contingencies, push the parameters of decision-making, and respond to emerging opportunities so as to optimize impact.

Second, such a role requires an evaluation structure strategically positioned to support continuous use of evaluation findings: weekly, biweekly, and monthly group meetings that strengthen capacity for both formative evaluation (e.g., adding a new drug development course to the KL2 curriculum, an improvement focus) and developmental evaluation (e.g., merging the KL2 curriculum with other educational programming to create a new curriculum model, a systems-change focus).
The aims of the BU-CTSI Tracking and Evaluation Plan are to:
- Design and implement effective internal evaluation practices that support a continuous process of improvement, enhancing the institute's capacity for change and sustainability
- Use a participatory approach to evaluation, recognizing that meaningful use of evaluation findings occurs when diverse stakeholder groups work together to decide how to assess outcomes, conduct data collection and analysis, and take action on their findings
- Implement evaluation activities that can be adjusted or altered in light of the emerging opportunities, new stakeholders, and new issues that will inevitably arise during the institute's implementation, so that the evaluation stays responsive to needs and interests from year to year
- Ensure the dissemination and use of evaluation findings to drive improvements, demonstrate achievements and contributions, and advance lessons learned that may be generalizable to others
- Advance the national consortium through leadership, sharing of ideas and resources, and active participation in national committees and working groups
- Design and implement a process of tracking and program evaluation that complies with the Program Evaluation Standards on utility, feasibility, propriety, and accuracy (see http://www.jcsee.org)
Boston University CTSI Logic Models
Boston University CTSI Dashboards
Boston University CTSI Evaluation Research Studies
Boston University CTSI Annual Evaluation Progress Report
National CTSA Evaluation Key Function Working Groups
- Shared Resources
- Social Network Analysis
- Bibliometrics Interest Group
- CTSA National Evaluation Liaison Group
Deborah Fournier, PhD, is Assistant Provost for Institutional Research and Evaluation at Boston University Medical Campus and Director of Evaluation for the Boston University Clinical and Translational Science Institute (BU-CTSI). She has more than 20 years of experience in applied social science research and in educational and social program evaluation. She collaborates with BU-CTSI directors and researchers to evaluate processes and outcomes related to advancing team science and improving the interconnectivity of researcher networks.

At the national level, she serves on the CTSA Evaluation Key Function Committee, participates in the national CTSA evaluation workgroup on Bibliometrics and Social Network Analysis, and contributes to the CTSA Education and Career Development and Community Engagement Key Function Committees. She also serves on the External Advisory Boards of individual CTSA institutions. At the local level, she sustains professional working relationships with other Massachusetts-based CTSA evaluators.

Her applied scholarship at Boston University includes collaborating with administrators and faculty to build and sustain capacity for institutional assessment and accreditation compliance, comprehensive program evaluation plans for educational programs and research institutes, outcomes assessment processes and dashboards to guide operational and strategic decision-making, and faculty annual evaluation and development systems, as well as numerous faculty workshops on evaluation methods, institutional research, and competency-based assessment.

Her research interests center on evaluation methods and program theory-driven approaches used in complex adaptive systems. Her primary contributions to the field of evaluation include exploring evaluative reasoning, warrantability, and the inferences drawn from types of evidence in applied field studies. She has edited the volumes Reasoning in Evaluation: Inferential Links and Leaps (New Directions for Evaluation, no. 68) and Progress and Future Directions in Evaluation: Perspectives on Theory, Practice, and Methods (New Directions for Evaluation, no. 76). She is actively involved with the American Evaluation Association, having served on the Editorial Advisory Board of the American Journal of Evaluation, established and chaired the Theories of Evaluation Topical Interest Group, and served as national conference program chair.
For more information, please contact Dr. Deborah M. Fournier at firstname.lastname@example.org.