[Image: 2012 Ted Goff]

I think we need to say upfront that all data is *much* greater than what people call big data. As someone who works with data in the development space, I believe in the use of all data.

That’s why it’s imperative to understand that “all data > (is greater than) big data”.

Increasingly, conversations about the possibility of big data playing an emerging role in solving development challenges have me, like a few others, unsettled. Just because it has the term “big” in front of data does not mean it is “all” data. Many definitions of big data exist; see here for 40 such definitions. The common theme among them, for me, is that this data is ‘analyzable’ by machines to derive meaning from it.

This is problematic, for a few reasons:

1. Firstly, there are still significant amounts of data that are not digital. Sure, data is increasingly being digitized, but that does not mean all data is digitized or in an analyzable format today. Therefore, whatever constitutes the universe of ‘big data’ is a subset of ‘all data’.

2. Secondly, the digital data being created every second does not represent “all” and definitely not “us”. So any analysis that feeds into public policy applications will definitely not be reflective of “us”. This is captured well by Nick Couldry in *A necessary disenchantment: myth, agency and injustice in a digital world*:

A new myth about the collectivities we form when we use platforms such as Facebook. An emerging myth of natural collectivity that is particularly seductive, because here traditional media institutions seem to drop out altogether from the picture: the story is focused entirely on what ‘we’ do naturally, when we have the chance to keep in touch with each other, as of course we want to do.

http://onlinelibrary.wiley.com/doi/10.1111/1467-954X.12158/abstract

3. Thirdly, it’s a myth that big data is generating entirely new and better forms of knowledge that will help solve development issues. This is most problematic in the field of public policy. As Nick Couldry puts it:

“analysts are giving up on specific hypotheses and instead focussing on generating, through countless parallel calculations, ‘a really good proxy’ for whatever is associated with a phenomenon, and then relying on that as the predictor.”

The implication of development policy-making based on ‘a really good proxy’ sends nervous shivers down my spine.

4. Lastly, to me, there is a power differential at play in what the ‘data’ in big data represents. Whose data it is (digital haves vs. digital have-nots), who analyzes it (digital-savvy haves vs. digital-savvy have-nots) and how it is analyzed are all subject to the biases and power relations that exist in the real world ‘we’ inhabit.

These reasons are important to remember as we invest time, energy and money in making arguments about ‘big data’ in development discourse. Some challenges may indeed be met with timely, relevant data analysis, but development challenges are not always in want of faster analysis; they are the result of long-standing socio-economic-political power struggles that will not be solved by analysis alone, no matter how fast and timely that analysis is.

Over the last ten years, one of the things I have had a love-hate relationship with is monitoring frameworks. I love them for their instrumental role in international development programming, but I hate their inability to be standardized without becoming too rigid. Of late I have had the opportunity to revisit this relationship in the context of institutional monitoring needs and frameworks for accountability in my current role. This revisit has tilted the balance in favor of more love than hate. It is the bad implementation of monitoring and evaluation tools that made me hate them, rather than their ability to be adapted. If correctly applied, monitoring frameworks can actually:

  • Improve an institution’s accountability to its primary stakeholders and actually work to proactively improve its programming.
  • Demand the ‘skills’ needed to develop a good monitoring framework, rather than a rush through the process to meet a donor requirement or an institutional process step.
  • Lastly and perhaps most importantly, demonstrate that monitoring and evaluation, a key element of international development programs, can be successfully applied to almost all areas of public administration, domestic and international.

I think most of us owe a lot to Practical Concepts Inc., which developed the Logical Framework Approach (LFA) in 1969 for USAID. Since then it has been widely adopted and adapted by organizations big and small around the world for monitoring programmes and for project planning and design. Most, if not all, donors require organizations to submit a logframe as part of their planning process, which most often comprises outcomes, outputs, activities, budgets and other inputs, providing a strategic performance management framework.
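
For readers who have never had to fill one in, here is a minimal, purely illustrative sketch (in Python) of the hierarchy a typical logframe captures; the class and field names are my own shorthand for the outcomes-outputs-activities-inputs chain described above, not an official USAID or donor schema:

```python
# Illustrative sketch only: my own shorthand for the typical logframe
# hierarchy (outcomes -> outputs -> activities, plus inputs/budget),
# not an official USAID or donor-mandated template.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Activity:
    description: str
    budget: float                                         # inputs/costs attached to the activity
    indicators: List[str] = field(default_factory=list)   # how progress is measured


@dataclass
class Output:
    description: str
    activities: List[Activity] = field(default_factory=list)


@dataclass
class Outcome:
    description: str
    outputs: List[Output] = field(default_factory=list)


@dataclass
class LogFrame:
    goal: str
    outcomes: List[Outcome] = field(default_factory=list)

    def total_budget(self) -> float:
        """Roll activity budgets up to the programme level."""
        return sum(
            activity.budget
            for outcome in self.outcomes
            for output in outcome.outputs
            for activity in output.activities
        )
```

The skeleton itself is the easy part; the questions this post keeps circling back to are about who decides what goes into each of those fields, and for whom the results are rolled up.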

There are enough critics of the LFA in both the donor community and the development practitioner community, each suggesting an alternative. Read Robert Chambers; Rick Davies’ list of alternatives; the participatory learning process approach in David Korten’s foundational publication *Community Organization and Rural Development: A Learning Process Approach*; Dennis Rondinelli on the evolution of development management theory and practice in aid as a context for evaluation; and Norman Uphoff’s *Learning from Gal Oya*, where he calls for synthesis and a holistic approach:

An appreciation of chaos theory moves us away from deterministic models without abandoning the presumption that there are causes and effects to be analysed and understood. It encourages us to think of systems as being more open than closed and as evolving rather than fixed. (Uphoff 1996:294)

Despite the criticism and the suggested alternative models, the LFA has remained the predominant methodology for monitoring international development programs. Try writing up “local context” in under 5,000 words without losing any of that context and you will know why the other models fail to be taken up by donors!

This brings me back to the reason I started mulling over this in my previous post on the politics of results. What are the reasons we monitor and evaluate the implementation of programs? What results are we trying to find? Are there some results that we want to find as opposed to others that we don’t? For whom are these results being measured? Is the organization monitoring and evaluating grass-roots implementation processes in order to inform decisions at the headquarters-based management level? Are the results for donors and funding agencies? Or is monitoring and evaluation a way for donors to keep track of their investments?

Then perhaps the question I should be asking is not about the politics of results, but rather how to unpack the underlying purpose of monitoring and evaluation and the tools used to do so!