
A Jamboree of Monitoring, Evaluation, and Learning Practitioners

Photo: https://pixabay.com/

What could possibly bring together a director of operations; senior finance, administration, and operations staff; policy and campaign activists; and monitoring and evaluation officers for a two-day meeting? A shared curiosity and responsibility for monitoring, evaluation, and learning (MEL), of course. In late November, the Fiscal Governance Program (FGP) at Open Society Foundations held an inaugural jamboree to connect staff with MEL responsibilities across their grantee organizations. And a MEL jamboree it was! Here are some of my takeaways from this invigorating convening.

MEL takes a village

The range of formal titles that participants carried – from Country Director to Head of Finance and Administration to Lead Policy Advisor to M&E Officers and Managers – was truly inspiring. There are certainly challenges with the mantra, “MEL is everyone’s responsibility,” as it can often translate to “MEL is not anyone’s job” and result in inaction. And there are risks in stretching responsibilities too broadly across different technical skill sets (finance, operations, and program management each require different skills, just as monitoring, evaluation, and learning each draw on different areas of expertise). But I saw this diversity of perspectives as an asset: for those individuals’ organizations; for the discussions we had among Jamboree participants; and for the FGP team as they move to deepen their own MEL practices. I also saw it as an opportunity for FGP to exercise their #Bird’sEyeView, exploring with and across their grantees the different MEL practices of organizations of varying sizes, pursuing goals of different scope and scale. What might the linkages be between MEL organizational models and grantee organizational health and effectiveness?

Indicators – quantitative vs qualitative

The group engaged in a vigorous debate around the definition of an indicator, and around whether there is a difference between quantitative and qualitative indicators and, if so, what it is. What about the shifting political landscapes in which our work takes place? What are we leaving out by “boiling down” our work to mere metrics that are often imperfect proxies for the phenomena we are trying to change? Spoiler alert – I won’t resolve these differences here. But I can offer my perspective that indicators are intended to measure implementation or results. Rather than bringing clarity to the qualitative vs quantitative question, this discussion inspired me to return to TAI’s emerging monitoring system and indicators. Specifically, I aim to review and pare down the indicators or metrics we’ll use to monitor our own progress and results in pursuing our strategy. Much work and detail go into developing each indicator, collecting its data, and putting it to use. And as one fellow participant noted during this discussion, too much emphasis on terminology takes us further away from our efforts to better understand and learn from our work with our partners.

Mutual peer learning

Two components of the agenda were structured as peer-to-peer “clinics,” where participants could bring a concept or a draft product and seek guidance from other participants to refine their thinking and perhaps address challenges they faced. Participants were invited to join facilitator-suggested groupings (though mutiny was encouraged) around each of the examples brought forth. I personally found these clinics to be very energizing – first, as a window into the work of different grantees; second, as a challenge to use my own skills in a context beyond my day-to-day work; and finally, as an effective process for leveraging the perspectives and experience in the room to think through a clear need or problem. I wish we had had more time to share our discussions across the small groups. And I wonder what that type of peer learning might look like over time, perhaps through virtual interactions. I hope to draw on the “clinic” experience in future peer learning work in which I am engaged.