The 58th Commission on the Status of Women is currently underway in New York, USA. It started on the 10th of March and will run until the 21st of March. This annual meeting brings hundreds of women, men, government officials, ministers, national delegations, girls' groups, boys' groups, NGOs, and other stakeholders to the United Nations to discuss this year's priority theme, "Challenges and achievements in the implementation of the Millennium Development Goals for women and girls."
Inevitably, a whole host of new initiatives and campaigns are launched or announced at this annual meeting. This year I have started my own 'campaign': to gather information about all the campaigns both launched and mentioned during #CSW58 and compile them in one place. This is in all likelihood an incomplete list, but my hope is that people will leave comments with the ones missing from this compilation to help me along!
So let's get started, shall we?
Campaigns @ CSW58
Tweets about the various campaigns launched or mentioned at CSW58.
Over the last ten years, one of the things I have had a love-hate relationship with is monitoring frameworks. I love them for their instrumental role in international development programming, but I hate their inability to be standardized without becoming too rigid. Of late I have had the opportunity to revisit this relationship in the context of institutional monitoring needs and frameworks for accountability in my current role. This revisit has tilted the balance in favor of more love than hate: it is the poor implementation of monitoring and evaluation tools, rather than their capacity to be adapted, that made me dislike them. If correctly applied, monitoring frameworks can actually:
Improve institutional accountability to primary stakeholders and proactively strengthen programming, rather than merely document it.
Reward the 'skills' needed to develop a good monitoring framework, rather than a rushed process designed only to meet a donor requirement or tick an institutional process step.
Lastly, and perhaps most importantly, extend beyond international development programs: monitoring and evaluation can be successfully applied to almost all areas of public administration, domestic and international.
I think most of us owe a lot to Practical Concepts Inc., who developed the Logical Framework Approach (LFA) in 1969 for USAID. Since then it has been widely adopted and adapted by organizations big and small around the world, both for programme monitoring and for project planning and design. Most, if not all, donors require organizations to submit a log frame as part of their planning process, which most often comprises outcomes, outputs, activities, budgets and other inputs, providing a strategic performance management framework.
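To make the structure concrete, here is a minimal sketch of a log frame as a data structure. This is purely illustrative: the class names, fields, and the example entry are my own assumptions, not a donor-mandated schema, but they capture the typical results hierarchy (outcomes, outputs, activities) with indicators and means of verification attached to each level.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal log frame row; field names are illustrative,
# not taken from any particular donor template.
@dataclass
class LogFrameEntry:
    level: str                     # "outcome", "output", or "activity"
    description: str
    indicators: list = field(default_factory=list)            # how progress is measured
    means_of_verification: list = field(default_factory=list) # where the evidence lives
    assumptions: list = field(default_factory=list)           # external conditions beyond control

@dataclass
class LogFrame:
    goal: str
    entries: list = field(default_factory=list)

    def by_level(self, level: str) -> list:
        """Return every entry at a given results level."""
        return [e for e in self.entries if e.level == level]

# A toy example: one output row from an imagined education programme.
lf = LogFrame(goal="Improved educational outcomes for girls")
lf.entries.append(LogFrameEntry(
    level="output",
    description="Teachers trained in gender-responsive pedagogy",
    indicators=["number of teachers trained"],
    means_of_verification=["training attendance records"],
    assumptions=["trained teachers remain in post"],
))
print(len(lf.by_level("output")))  # → 1
```

The point of the sketch is the shape, not the code: every row pairs a claimed result with its evidence trail, which is exactly what makes the format both useful and, when filled in mechanically, rigid.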
An appreciation of chaos theory moves us away from deterministic models without abandoning the presumption that there are causes and effects to be analysed and understood. It encourages us to think of systems as being more open than closed and as evolving rather than fixed. (Uphoff 1996:294)
Despite the criticism and the alternative models proposed, the LFA has remained the predominant methodology for monitoring international development programs. Try writing up "local context" in under 5,000 words without losing any of that context and you will know why the other models fail to be taken up by donors!
This brings me back to the reason I started mulling over this in my previous post, the politics of results. Why do we monitor and evaluate the implementation of programs? What results are we trying to find? Are there some results that we want to find, as opposed to others we don't? For whom are these results being measured? Is the organization monitoring and evaluating grass-roots implementation in order to inform decisions at the headquarters management level? Are the results for donors and funding agencies? Or is monitoring and evaluation a way for donors to keep track of their investments?
Then perhaps the question I should be asking is not about the politics of results, but rather about unpacking the underlying purpose of monitoring and evaluation and the tools used to do so!