Main characteristics and potential of PAs

 

 




Main aspects and potential of the PA

Participatory assessment is particularly good at promoting learning and the discovery of new dimensions in a particular area or with regard to a specific intervention.
Compared to other assessment methodologies, one feature of participatory assessment adds considerable value.


Target groups are not only asked for their opinions; target group representatives actively engage in the design of data collection, in generating insights and in interpreting results. More concretely, the participatory assessment methodology aspires to deliver the ‘view from within’ and to minimise external expert or management bias by collecting data directly from exchanges among target groups and their peers.

Bias Management

Biases are omnipresent.

Our perception of context and our interpretation of facts are subjective. All of us have a conceptual and ideological framework through which we view the world around us. We explicitly or implicitly frame each situation in a certain way, which always leads to some kind of bias. Biases are omnipresent in our lives, but we are often not conscious of them or may experience blind spots. It is possible, but not always easy, to be aware of and transparent about our biases.
All assessment processes feature various forms of inherent bias, framed by the project set-up, the context, perceptions of local conflicts and the local political economy, the topic of the programme, the purpose of the assessment, the values, incentives and interests motivating the key actors in the assessment process, and the perceptions of target groups and ‘peers’.




The participatory assessment methodology aspires to make biases more transparent and to deal with them in a smart way, thereby leading to better results.
Participatory assessments do not exclude bias per se. Different types of bias play a role in the exchange between target groups and peers – and must be dealt with. Management and expert biases also persist in other phases of the assessment, for example in the framing of the process and the analysis of the collected data.  ​


Learn more: biases, influence on perceptions and different forms of bias

What is a bias?


A bias is a tendency or inclination in our thinking processes in favour of or against an idea, a belief, a person or a group, closing our mind to alternative interpretations.
There are typical patterns of thinking processes and social dynamics that influence the formation of bias. Bias occurs unconsciously and automatically, although most people believe they are making a reflective and conscious judgement.
This explains why we so often make wrong decisions, how we gloss over reality and why even experts often fail to deal with novel, unexpected dangers.




People also constantly overestimate their own knowledge and ability to reason. Experts are in fact especially prone to this – and particularly unwilling to admit this failure. 
Biases are omnipresent and natural. On the other hand, we gain a shared, enriched view if we reflect on them consciously.



Why do we have biases? The neurological dimension

External stimuli such as light or sound signals are recorded and transmitted to the brain, where nerve cells communicate through simple electrical signals.
The nerve cells in the brain evaluate this information and then construct an image of the world with the help of previously stored patterns.
This means we remember the way things have always looked, and then reconstruct that norm.



In creating images from the input we receive, unconscious, automatic mechanisms come into play.
These mechanisms are influenced by biological factors (e.g. emotions, memories stimulating associative networks in the brain) and psychological factors (e.g. framing through language or specific words, the social environment).


What are the consequences of biases?

We refuse to take up new information because doing so saves our brain's energy. Instead of trying to adapt and create new concepts to help us navigate, we stick to those that already exist. This mechanism helps us avoid cognitive dissonance.



Therefore, our perception diverges from the existing data. As a consequence, our decision-making is based on a restricted data/information base, and our thought processes rest on distorted assumptions and are riddled with logical errors.
What types of bias are particularly important in a PA?

We can distinguish between two categories of bias: cognitive bias, which is due to pitfalls in our perceptions of information and in our logical thought process, and social bias, which is influenced by our interactions with others. 



A selection of cognitive biases


  • Confirmation bias relies on automatic (fast) thinking, based on successful heuristics from the past which might not fit the current situation. Information and interpretation supporting our own pre-existing opinions are filtered, other information is ignored or underestimated. Often, contradicting information is not even sought, contradicting hypotheses are not formulated.
  • Availability bias describes the tendency to use the information that is vivid in the memory (usually linked to emotions and unusual situations), or what seems to be 'common sense'.
  • Logical fallacies, such as confusing root cause and impact, or the tendency of our brain to invent a chain of reasoning (even if there is no causality) may influence the ways in which we perceive reality.
  • Culturally learned underlying assumptions and mental models subconsciously frame and guide our interpretation of a message or situation.
  • A gulf between declared intention and immediate action is observed in many decision-making processes. Even if all of the information is accessible, people may not behave 'rationally'. Sometimes they choose what seems better at a specific moment, responding to short-term incentives and ignoring long-term, overarching goals. The benefits of the long-term decision are far in the future; the costs are a certain effort in the present. The further away the benefit seems to be, the higher the tendency to opt for short-term gain.
  • Since we pay more attention to spectacular phenomena, things we categorise as small are perceived as less dangerous than big things. A mosquito is in reality more dangerous than a shark.
  • Comparing available options – where one is definitely better than the other – may blind us to the fact that neither is satisfactory.
  • Complexity – in the form of a high number of available options – diminishes our capacity to adequately weigh all of the options. Experts in a particular field are not immune to this.
  • The default fallback option is chosen because following traditional practice takes less energy than actively breaking with it, especially when the default is given as a recommendation.
  • Under omission bias, we tend to avoid actions if the future is very uncertain, and to underestimate future consequences. We prefer to do nothing even though this may turn out to be a high-risk strategy with unpleasant consequences.
  • The anchoring bias describes a tendency in decision-making to over-rely on a certain reference or an incomplete picture.
  • The 'sunk cost bias' leads us to continue an unproductive course of action to try to recover losses rather than abandon it at an early stage. There is a tendency to continue a project once an initial investment is made; stopping is interpreted as recognising a mistake, a waste of resources and effort.​


A selection of social biases​


  • Reciprocity leads to a sense of obligation to give something back to someone from whom one has received something. Criticising donors' approaches might be perceived as an unfriendly act towards someone to whom you should be grateful.
  • If we like or feel sympathetic towards someone, we may tend to agree with them or give in. We are more likely to disagree with someone we don't like.
  • We place more confidence in the decisions of people with authority because we believe they have better knowledge, more experience – or more power.
  • Discrimination on the basis of social identity and the stereotyped perceptions we have of people may lead to bias against certain arguments or information.
  • Wanting to stick to an earlier decision can make us interpret data and argue in a particular way because we want to be perceived by others as behaving in a consistent way.
  • Errors in learning from the past happen because we create a narrative around past events to fill our need to safeguard our own image or reputation, embellishing our past. This might lead to lessons learned that are based on false assumptions.
  • Fundamental attribution errors describe the tendency to assume that a specific behaviour is caused by certain character traits rather than by the circumstances, while putting our own behaviour down to the circumstances.
  • The desire for unified voices to maintain group cohesion suppresses the motivation to acknowledge arguments that do not fit in with group thinking.
  • Group polarisation describes the tendency to make more extreme decisions within a social group than alone.
  • The spiral of silence is the phenomenon that people are more likely to withhold their opinions if they feel they are in the minority, are afraid of repression or feel they may be excluded or isolated.
  • Pluralistic ignorance occurs when individuals privately believe that they alone hold a certain opinion and stay silent to conform to peer pressure.

Which biases can be linked to specific phases in the PA?

PA aspires to minimise an expert's bias. What is meant by that exactly? Looking at the biases listed above, it seems likely that the confirmation bias will play a prominent role. Several cognitive biases may also come into play, in addition to bias from framing, the availability bias, the complexity bias and the anchoring bias. All of the social biases may occur depending on the specific situation.

 

The facilitator and his/her team are supposed to substitute for the traditional expert. If aware of potential expert biases, they are expected to take these into account and make adjustments to ensure that they do not impact the different steps in the process. For example, when setting up the interview phase, framing the purpose, selecting and training peers, and during the data collection and analysis steps, the aforementioned cognitive biases are in the foreground. As soon as the facilitator is interacting with the peers, many of the social biases listed may play a role for the facilitation teams as well as for the peers.

 

The peer-exchange itself is necessarily framed by the 'mandate' that is formulated for the peers regarding the intended purpose of the PA. It would be a misunderstanding of the methodology if, for the sake of avoiding expert bias, the process were left open without a clear frame for the exchange between peers, and without a clear mandate or assessment purpose. If no framework is given, the stakeholders will unconsciously frame the assessment in their own way through their interpretation of the context, the authority of the facilitator or other stakeholders, or sympathy and reciprocity – reproducing multiple biases.



The peer-exchange and its creation and collection of data may be influenced by confirmation bias, which can be reinforced by sympathy and reciprocity bias. Closed (yes/no) questions, in particular, are a strong indicator that the spectrum of answers is being narrowed by confirmation bias.

 

In the data collection and analysis steps, the cognitive biases of the facilitator and his/her team play a strong role, depending on how the process of drawing conclusions for management decisions is set up, and social biases also interfere significantly. There should be no taboo around the fact that there is a dependency relationship between the contractor and the donor.

 

Donors, organisers and implementers should pay special attention to the interpretation of the collected data and information, and to their decision-making process. For example, confirmation bias narrows the possible sphere of learning if not dealt with consciously.

 

The list of biases might serve as a checklist for the different phases, to raise awareness, define specific measures in the process and deal with the biases' impact on the results.



Further references:

Daniel Kahneman, Thinking, Fast and Slow, 2011

World Bank Group, World Development Report, Mind, Society and Behaviour, 2015


Roles & Actors

Compared to an expert evaluation or an internal review, the participatory assessment attributes specific roles to various actors:

  • The project funder (in most cases the organiser of the PA) is responsible for defining the scope and the purpose and for the overall design of the process, in close cooperation with the facilitator. The funder navigates the complexity of the process, with a view to tapping into the potentials and managing the risks. The funder is responsible for responding to the conclusions at the level of management, and manages the overall organisational learning processes.
  • The individuals and groups targeted are not only valued as resources; they are expected to actively contribute insight about the intervention, from a more genuine perspective that is less framed by the evaluating experts or project managers.
  • 'Peers' are expected to frame the data collection (through exchange and interviews) and generate insight together with the targeted individuals and groups. Peers should be perceived by the individuals targeted by the intervention as part of their group. The assumption is that target groups are more open in expressing their views in an exchange with peers than in an exchange with experts or project stakeholders.


  • The facilitator (independent from project management) takes responsibility for organising the data collection, facilitating the exchange, and analysing, interpreting and documenting the results (as far as possible together with the 'peers' and target groups).
  • The project's implementer also plays a specific role. The implementing structure has established links and relations with target groups and holds the data and information needed for contacting the targeted groups and peers. The implementer is an important stakeholder who can provide highly valuable insights and must contribute to the validation of the findings. On the other hand, the implementer's performance is itself an object of the assessment, so the implementer should not be involved in the primary research activities.

Learn more​

Taking into account the very active role expected of targeted individuals and groups in the participatory assessment, this methodology might not always be the most appropriate.

For example, students who have finished a training module under the programme at stake and then moved on to other professional activities might not be interested in or motivated to invest considerable time in assessing the training programme in detail. In other cases, where target groups are more constant (e.g., farmers involved in a long-term programme), they will be much more interested in learning from the assessment for their own future, and motivated to contribute. In any case, the target groups and the peers need to understand their active role, be available and ready to contribute in a meaningful way.

Good communication and a certain level of trust among the facilitator, the target groups and the peer groups are key for making the PA empower the interlocutors and increase downward accountability of the intervention at stake.





Similarly to other methods, there is no blueprint or 'one size fits all'. Participatory assessments require particularly careful design and management of the evaluation process. This must respond to the identified purpose, taking into account the context, clarifying expectations and balancing the various dynamics that might evolve throughout the process. On the one hand, the PA's purpose and the intervention's objective and set-up must frame the scope and design of the assessment to make the synthesised results useful for the intervention's management.

On the other hand, the active role of target groups and peers is expected and welcomed as it may bring new ideas and dynamics – although there may be a risk of going beyond the original purpose, reducing the assessment's relevance for the intervention, or even of doing harm.  

For example, conducting a PA in fragile and/or conflict-affected situations requires conflict sensitivity to avoid doing harm to targeted individuals and groups who might fear for their lives and livelihoods if they express critical views. They might not trust the assessment process enough to express their real views.​

Characteristics of intervention and target groups

As with other methodologies, PA process designs depend on the characteristics of the intervention at stake – and the target groups involved.

For example, a participatory assessment of an agricultural programme directly supporting farmer groups or value chains will have to be designed differently to an assessment of a programme focusing on institutional support of the agricultural ministry.


Investment

A participatory assessment requires a certain investment, which can be managed through careful scoping and process design to keep an attractive cost-benefit ratio. Adequate financial and human resources should be available and well planned at every stage of the process.



The quality of the PA results depends mainly on the engagement of the donor and the programme management throughout the process, and on the capacities and availability of the actors involved.

Potential and risks

To sum up, the participatory assessment methodology has great potential. If done well, the PA can:

  • provide key insights on how to improve the inclusiveness of an intervention by creating a more differentiated understanding of the target groups and their different perspectives on results, needs and challenges;
  • make blind spots visible and reveal new dimensions by listening to real stories, which contributes a different kind of data and may change our understanding of the context;
  • contribute to making future interventions more effective, sustainable and responsive to the needs of target groups;



  • challenge the theories of change, the logic of the intervention and the priorities set by the project/programme, subjecting them to a reality check;
  • contribute to making management processes more participatory and inclusive, improving relations between project management and target groups, establishing trust and empowering and motivating target groups to become engaged;
  • identify unexpected/unintended effects of an intervention.

The PA process can also lead to negative consequences:

  • At the level of programme management: If the purpose, scope and process are not clarified carefully, the cost-benefit ratio might be inadequate, meaning a lot of work is done for meagre results that are not useful for programme management. In addition, if the process is not guided and implemented with a thorough understanding of its complexity, the conclusions may be misleading and do harm to the project/programme and its beneficiaries.



  • At the level of participants: If the conflict risks and power dimensions are not identified and managed properly, participants who express critical views may be exposed to negative consequences in their communities. Another risk is that the process may raise expectations that the project/programme cannot respond to.
  • At the institutional level: If the process is not well designed and coordinated with the partners, a PA may do harm to the reputation of SDC and its implementing partners.

Resources


Summaries of country and regional PA projects

Tanzania 2017

BA Report on Opportunities for Youth Employment (OYE Project).

Hach Consultancy, SDC, Helvetas / November 2017

Download Report

 

Bangladesh 2017

BA Report on 'A Systematic Understanding of Farmers' Engagement in Market System Interventions: An Exploration of Katalyst's Work in the Maize Sector in Bangladesh'.

IDS, Praxis India / June 2017

Katalyst Beneficiary Assessment Final


 

Haiti 2016

Rapport d'évaluation des impacts du projet EPA-V, projet eau potable et assainissement. Helvetas / June 2016

Download Rapport


Evaluation par les Acteurs du service de l'eau, concept pour l'évaluation des impacts du projet EPA-V. Helvetas / 20 March 2016

Download Methodology EPA-V

 

Honduras 2016

BA Report on VET (Vocational Education and Training) / Evaluación Participativa - Project PROJOVEN. 

SWISSCONTACT, January 2016

Download Informe Final

 

Lesotho / Swaziland 2015: 

See a description of how a BA was conceived and implemented in Lesotho, Swaziland, from the point of view of an SDC Regional Programme Manager

Download Report

 

Nicaragua 2014

Final Report on Beneficiary Assessment of PAGRICC Programme (Environment, Disaster Risk Management and Climate Change).

Final Beneficiary Assessment (Evaluación participativa por protagonistas) del Programa Ambiental de Gestión de Riesgos de Desastres y Cambio Climático. 

MARENA / October 2015

Download Report 

 

Bangladesh 2014: 

Beneficiary Assessment of Improving Water, Sanitation and Hygiene Status in the Kurigram and Barguna Districts of Bangladesh. 

Terre des hommes Bangladesh / Mosabber Hossain and Humyra Habiba / March 2014

Download Report

 

Ethiopia 2013: 

Beneficiary Assessment of Community's Perceptions on Rehabilitated and Improved Water Resources in Miyo Wereda, Borana Zone of the Oromia Region. 

HEKS / Dr. Mirgissa Kaba and Liben Gollo / December 2013

Download Report

 

Nepal 2013:

Beneficiary Assessment of the Water Resources Management Programme (WARM-P)

Validation Workshop. Ramesh Chandra Bohara, Harihar Sapkota, Riff Fullan, Martin Fischler / September 2013 

Download Report

 

Nepal / Ethiopia 2013: 

Reflections on BAs of WASH Projects in Nepal and Ethiopia. 

Riff Fullan / September 2013

Download Presentation

 

Panama 2013 

Final Summary of the BA Workshop. Agencia Suiza para el Desarrollo y la Cooperación. 

COSUDE (SDC) / October 2013 

Download Final Summary


Central America, Bolivia and Kenya 2012: 

Beneficiary Assessment – A Participative Approach for Impact Assessment

SDC Learning Event / Martin Fischler / January 2013

Download Report

 

Nepal 2012: 

Public Audits in Nepal. 

SDC Learning Event / Katrin Rosenberg / November 2012

Download Report 

 

East Africa 2010

Impact Assessment of Pull-Push Technology developed and promoted by ICIPE and partners in eastern Africa. 

ICIPE, Intercooperation / March 2010

Download Report 

 

El Salvador 2008: 

Informe Resultados Evaluación Participativa por Municipios (EPM), AGUASAN. 

COSUDE (SDC) / August 2008

Download Report

 

Bolivia 2006: 

Valoración Participativa de Impacto (V.P.I.) - Programa ATICA. 

ATICA / September 2006

Download Report

 

Bolivia 2004

Valoración Participativa de Impacto (VPI) - Programa Nacional de Semillas (PNS). 

CINER / December 2004

Download Report

 

Bolivia 2004

Valoración Participativa de Impacto de proyectos productivos con manejo de recursos naturales. 

ATICA / July 2004

Download Report 

 

Nicaragua 2003

Evaluación Participativa por Productores (EPP).

PASOLAC / February-March 2003

Download Report

 

El Salvador 2003

Evaluación Participativa por Productor EPP Fase 2000-2003. 

PASOLAC / 2003

Download Report

 

Honduras 2003

Informe de Resultados - Evaluación Participativa por Productores y Productoras.

PASOLAC / March 2003

Download Report





Reflections on Beneficiary Assessment​

Beneficiary Assessment - An Approach Described
Lawrence F. Salmen, 2002
Download Report

How to Note - BA
SDC, 2013
English Download
Spanish Download


Learning About Beneficiary Assessment - SAHA
Testing the Beneficiary Assessment methodology in the context of external project evaluation
Marlene Buchy & A. Kelbert, n.d.



Overview of learning events​​

BA in PCM at Strategic and Operational Level SDC (Quality Assurance Tools & Evaluation)
Laurent Ruedin, Ursula Laeubli, Anne Bichsel, SDC

Public Audits, Nepal
Katrin Rosenberg, Helvetas Swiss Intercooperation 

Citizen Engagement through Visual Participatory Processes, Bosnia and Herzegovina
Stephanie Guha, SDC



Beneficiary Assessment: the Case of SAHA, Madagascar
Alexandra Kelbert 

Beneficiary Perspectives in the Water Consortium
Agnès Montangero, Swiss Water & Sanitation NGO Consortium 

Whose Learning and Accountability? Working with citizen researchers
Cathy Shutt, Plan International






To get started with your participatory assessment, contact us:
Stephanie Guha, stephanie.guha(at)eda.admin.ch   
Ursula König, uk(at)ximpulse.ch
