Over the last ten years I have had a love-hate relationship with monitoring frameworks. I love them for their instrumental role in international development programming, but I hate their inability to be standardized without becoming too rigid. Of late I have had the opportunity to revisit this relationship in the context of institutional monitoring needs and accountability frameworks in my current role. The revisit has tilted the balance in favor of more love than hate: it is the bad implementation of monitoring and evaluation tools, rather than any limit on their adaptability, that made me dislike them. If correctly applied, monitoring frameworks can actually:
- Improve an institution's accountability to its primary stakeholders and proactively strengthen its programming, rather than merely document it.
- Highlight the need for real skill in developing a good monitoring framework, rather than rushing through the process to meet a donor requirement or an institutional process step.
- Lastly, and perhaps most importantly, demonstrate that monitoring and evaluation, a key element of international development programs, can be successfully applied to almost all areas of public administration, domestic and international.
I think most of us owe a lot to Practical Concepts Inc., which developed the Logical Framework Approach (LFA) for USAID in 1969. Since then it has been widely adopted and adapted by organizations big and small around the world for programme monitoring and for project planning and design. Most, if not all, donors require organizations to submit a log frame as part of their planning process, which most often comprises outcomes, outputs, activities, budgets and other inputs, providing a strategic performance management framework.
There are enough critics of the LFA approach in both the donor community and the development practitioner community, each suggesting an alternative. Read Robert Chambers; Rick Davies's list of alternatives; the participatory learning process approach in David Korten's foundational publication Community Organization and Rural Development: A Learning Process Approach; Dennis Rondinelli on the evolution of development management theory and practice in aid as a context for evaluation; and Norman Uphoff's Learning from Gal Oya, where he calls for synthesis and a holistic approach:
> An appreciation of chaos theory moves us away from deterministic models without abandoning the presumption that there are causes and effects to be analysed and understood. It encourages us to think of systems as being more open than closed and as evolving rather than fixed. (Uphoff 1996:294)
Despite the criticism and the alternative models on offer, LFA has remained the predominant methodology for monitoring international development programs. Try writing up "local context" in under 5,000 words without losing any of that context and you will see why the other models fail to gain traction with donors!
This brings me back to the reason I started mulling over this in my previous post, Politics of Results. Why do we monitor and evaluate the implementation of programs? What results are we trying to find? Are there some results we want to find, as opposed to others we don't? For whom are these results being measured? Is the organization monitoring and evaluating grass-roots implementation processes in order to inform decisions at the headquarters-based management level? Are the results for donors and funding agencies? Or is monitoring and evaluation a way for donors to keep track of their investments?
Then perhaps the question I should be asking is not about the politics of results, but rather how to unpack the underlying purpose of monitoring and evaluation and the tools used to pursue it!