Over the last ten years, one of the things I have had a love-hate relationship with is monitoring frameworks. I love them for their instrumental role in international development programming, but I hate their inability to be standardized enough without becoming too rigid. Of late I have had the opportunity to revisit this relationship in the context of institutional monitoring needs and frameworks for accountability in my current role. This revisit has tilted the balance in favor of more love than hate. It is the bad implementation of monitoring and evaluation tools, rather than the tools themselves or their capacity to be adapted, that made me hate them. If correctly applied, monitoring frameworks can actually:

  • Improve an institution’s accountability to its primary stakeholders and actually work to proactively improve its programming, rather than leave it untouched.
  • Highlight the need for ‘skills’ to develop a good monitoring framework, rather than rushing through the process to meet a donor requirement or an institutional process step.
  • Lastly, and perhaps most importantly, show that monitoring and evaluation, a key element of international development programs, can be successfully applied to almost all areas of public administration, domestic and international.

I think most of us owe a lot to Practical Concepts Inc., which developed the Logical Framework Approach (LFA) in 1969 for USAID. Since then it has been widely adopted and adapted by organizations big and small around the world for monitoring programmes and for project planning and design. Most, if not all, donors require organizations to submit a log frame as part of their planning process, which most often comprises outcomes, outputs, activities, budgets and other inputs, providing a strategic performance management framework.
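For anyone who has not had to fill one in, here is a minimal sketch of what a typical log frame hierarchy can look like, expressed as a simple data structure. The programme, indicators, targets and budget figures below are invented purely for illustration and do not come from any real project or donor template.

```python
# Purely illustrative sketch of a log frame's typical hierarchy:
# goal -> outcome -> outputs -> activities, each with indicators,
# means of verification and assumptions. All values are invented.
logframe = {
    "goal": "Improved food security in District X",
    "outcome": {
        "statement": "Smallholder farmers adopt improved seed varieties",
        "indicator": "% of targeted farmers planting improved varieties",
        "baseline": "10% (year 1)",
        "target": "60% (year 3)",
        "means_of_verification": "Annual household survey",
        "assumptions": "Seed supply chains remain functional",
    },
    "outputs": [
        {
            "statement": "2,000 farmers trained in improved practices",
            "indicator": "Number of farmers completing training",
            "means_of_verification": "Training attendance records",
        }
    ],
    "activities": [
        {"activity": "Run farmer field schools", "budget_usd": 150_000},
        {"activity": "Distribute starter seed kits", "budget_usd": 80_000},
    ],
}

# Donors typically ask for this as a matrix, one row per level, with the
# indicators, means of verification and assumptions as columns.
```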

There are enough critics of the LFA approach, both in the donor community and in the development practitioner community, and each suggests an alternative. Read Robert Chambers; Rick Davies’ list of alternatives; the participatory learning process approach in David Korten’s foundational publication Community Organization and Rural Development: A Learning Process Approach; Dennis Rondinelli on the evolution of development management theory and practice in aid as a context for evaluation; and Norman Uphoff’s Learning from Gal Oya, where he calls for synthesis and a holistic approach:

An appreciation of chaos theory moves us away from deterministic models without abandoning the presumption that there are causes and effects to be analysed and understood. It encourages us to think of systems as being more open than closed and as evolving rather than fixed. (Uphoff 1996:294)

Despite the criticism and the alternative models suggested, the LFA has remained the predominant methodology for monitoring international development programs. Try writing up “local context” in under 5,000 words without losing any of that context and you will know why the other models fail to get used by donors!

This brings me back to the reason I started mulling over this in my previous post, the politics of results. What are the reasons we monitor and evaluate the implementation of programs? What results are we trying to find? Are there some results that we want to find as opposed to others which we don’t? For whom are these results being measured? Is the organization monitoring and evaluating grass-roots implementation processes in order to inform decisions at the headquarters-based management level? Are the results for donors and funding agencies? Or is monitoring and evaluation a way for donors to keep track of their investments?

Then perhaps the question I should be asking is not about the politics of results but rather about the underlying purpose of monitoring and evaluation and the tools used to do it!

I am exploring a hunch, and will look for arguments both for and against it to see whether I can prove or disprove it. The hunch is this: there is a politics of results, not just of evidence but of results. This becomes evident when we try to establish a goal: whose goals? Who decides what results can be produced? The importance of establishing and pursuing specific goals is clear to everyone; however, which results we seek and measure is a political decision. Whose politics, you ask? The politics of those who seek to measure it, the politics of those who fund it, the politics of those who are the subjects of the intervention.

An insightful paper by Charles Kenny and Andy Sumner highlights that increased aid from developed nations to developing countries was not directly linked to performance and results, and that it is much more difficult to know whether it had the desired impact overall. The paper also makes a very strong case that any new MDG-like agenda needs “targets that are set realistically and directly link aid flows to social policy change and to results”. So this question of the politics of results is, in my opinion, extremely important to discuss.

Whose results get tracked, and how, is highly political. If one were to just look at the most recent progress chart prepared by UNDP (the agency charged with reporting on progress toward achieving the MDGs), what gets measured under Goal 8 is internet users (!!). There is no numerical target for financial aid or any other aspect of developed/rich countries’ assistance being tracked under that goal, in contrast to the highly specific poverty-related targets set for developing countries. This is not to say that poverty reduction is not the primary goal for every country (rich or poor); however, what gets tracked, and hence reported, is clearly political.

If we use the rights-based approach to development, then shouldn’t the same apply to determining and measuring results? And if so, can this approach in some way overcome the politics of results? Recently, I was listening to Diane Elson and Radhika Balakrishnan talk about how economic policy today is geared towards achieving economic growth, underwritten by assumptions about the virtues of the market. Efficiency rather than ethics has been the focus of concern. Yet the means adopted to achieve economic growth have been responsible for undermining goals in the domain of human rights. It is time to assess economic policy using the ethical lens of the human rights standards that all governments have agreed upon. Their work shows how we can rethink macroeconomic strategies from a human rights perspective, with a focus on economic and social rights, which is also the topic of their recent book Economic Policy and Human Rights: Holding Governments to Account.

So if we use the rights-based approach to determining economic policy, shouldn’t the rights-based approach also be applied to what we measure and how we measure it? Do we not consider differences of context when we choose the indicators and results to measure?