Reflection on Evaluator Competencies and Growth
As I wrap up this course on evaluation, I have been thinking a lot about how far I have come in understanding what it means to be an evaluator. Reviewing the AEA Evaluator Competencies after completing the course, I initially considered rating myself a 4 out of 5, somewhere between "understanding" and "expert," but I would need more real evaluation experience to justify that score. While I improved in several competency areas, I ultimately rated myself at a 3.5, a "neutral" level of expertise, because I still need more real-world practice. I would not call myself an expert, but I have grown from feeling unsure and overwhelmed to having a solid foundation I can build on. This growth has come from the hands-on approach provided in LDT 506. Working through an entire evaluation process, with the support of my assigned team, helped me grow my analytical skills.
At first, the terms and frameworks felt very different from what I was used to in my previous profession as an educator. Over time, however, I made the necessary connections and came to understand more deeply how important evaluation is in the world of learning design. I also realized that the school improvement committees, technology programs, and grade-level initiatives I worked on as a teacher involved collecting data, analyzing trends, and using that information to present recommendations for school initiatives, much as an evaluator does, just outside the formal parameters of an official evaluation proposal and report. Nevertheless, the approaches I used are all core elements of evaluation work. Once I made that connection, I began to feel more confident and realized I was more prepared for this journey than I initially thought.
Strengths and Areas for Growth
One area where I am strongest is Domain 1: Professional Practice. Teaching has always required me to act ethically, reflect on my decisions, and commit to ongoing learning. I have always valued professional growth, whether attending PD sessions, seeking feedback, or trying new approaches in my classroom. These habits align well with what evaluators are expected to do. I also care deeply about equity and cultural responsiveness, which I have tried to embed in my teaching, and it was affirming to see how central those values are to evaluation (AEA, 2018). It helped me see that my commitment to inclusive, culturally responsive education is something I can carry into this work.
Domain 2: Methodology was where I felt most unsure at first. Once I gained hands-on experience creating the evaluation proposal, however, I realized I had underestimated how much experience I already had in some of these areas, such as defining evaluation goals and working with stakeholders. I have served on school committees and curriculum teams where we reviewed data, piloted programs, and shared feedback with leadership. Still, I have a lot to learn about designing complete evaluation plans, choosing the right data collection tools, and analyzing results systematically, because each evaluation request may require a different approach and send me back to the methods covered in LDT 506. Stevahn, King, Ghere, and Minnema (2005) state that strong evaluators develop technical skills while balancing stakeholder needs and contextual factors. As a learning designer, I may be asked to participate in an internal evaluation, and to prepare for that, I want to become more fluent in deciding which qualitative and quantitative methods best suit a given evaluation proposal.
Domain 3: Context is another area I needed to strengthen. Engaging with the Russ-Eft and Preskill readings and the Module 3 discussion assignment helped me apply these competencies and understand their value in helping evaluators build relationships with stakeholders. This course taught me how much politics, power dynamics, and organizational culture can shape an evaluation. I learned how important it is to be aware of the environment you are working in, especially when trying to engage stakeholders or navigate competing interests (AEA, 2018). Evaluation is not just about collecting data; it is also about building relationships and being an effective communicator and listener while keeping the bigger picture in mind.
What Surprised Me Most
One thing that surprised me was how much emphasis the field of evaluation places on cultural competence and social justice. As a teacher, I have always prioritized these things, but I did not expect them to be so deeply embedded in evaluation practice. I assumed evaluation was more technical or data-focused, but now I see it is also about fairness, inclusion, and ensuring all stakeholders' voices are heard (AEA, 2018; Stevahn et al., 2005). Stevahn and colleagues (2005) highlight this in their taxonomy, noting that evaluators need to "demonstrate cross-cultural competence and attend to issues of equity and inclusion" (p. 56). That stuck with me. It helped me see that evaluations are not just about measuring effectiveness but also about creating change and advocating for equity.
Where I Am Headed Next
Looking ahead, I want to deepen my understanding of different evaluation methodologies. I can work toward this goal by exploring a wider range of data collection tools and analyzing more real-world examples of evaluations. I also want to get more comfortable using mixed methods, such as combining surveys, interviews, and observations and aligning them with stakeholder needs.
I also want to read more about logic models, program theory, and how to interpret results in meaningful and accessible ways (AEA, 2018). I've started watching webinars through AEA and other organizations to strengthen my learning in this class and have collected great resources if any future evaluation opportunities come my way.
For the context domain, exploring case studies in which evaluators have had to navigate complex environments would enhance my understanding of the evaluation process. I want to see what it looks like to build trust with stakeholders, especially when there is tension or there are differing priorities. If evaluation becomes part of my professional future, my goal is to be technically skilled, thoughtful, and relational in how I approach the work.
Throughout this course, I have been fortunate to have guidance from my AI mentor, Amanda Nguyen. She encouraged me to dive deeper into evaluation frameworks like Kirkpatrick's Four Levels, Phillips' ROI Model, and Learning Design Metrics. She also suggested I explore tools like the Quality Matters Rubric (Quality Matters, 2018) and Mayer's Multimedia Principles (Mayer, 2014) to improve the instructional design side of my work. These resources will help me bridge evaluation and learning design, which makes this course more relevant to my professional career goals. I want to be someone who can assess and improve learning programs through an equity-focused lens, whether I end up in higher education or a corporate learning role.
Final Thoughts
Looking back, I am proud of my growth during this course. I started unsure of how I fit into the world of evaluation. However, now I have found a meaningful connection between a career in learning design and the role of an evaluator. I know there is still much to learn, but I am excited about the journey. With continued practice, reflection, and opportunities to apply what I have learned, I can grow into a thoughtful, ethical, and skilled professional in the field and bring those strengths into every learning experience I help design.
References
American Evaluation Association. (2018). Evaluator competencies. https://www.eval.org/Competencies
Mayer, R. E. (2014). The Cambridge handbook of multimedia learning (2nd ed.). Cambridge University Press.
Quality Matters. (2018). Higher education rubric, sixth edition. https://www.qualitymatters.org/qa-resources/rubric-standards/higher-ed-rubric
Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43–59. https://doi.org/10.1177/1098214004273180