This section offers an overview of Monitoring and Evaluation (M&E) to help researchers adopt data collection models that incorporate both quantitative and qualitative methods.
–Rebecca Kunin and Dikshant Uprety
Monitoring:
Monitoring is the process of continuous data collection that informs relevant project stakeholders of a project’s progress or deviations from its goal. Monitoring takes the form of routine assessments and answers the basic question, ‘Are we on track to achieve our goal?’ Project stakeholders (project planners, beneficiaries, project staff, the researcher) themselves conduct periodic monitoring.
Evaluation:
Evaluation is the systematic assessment, by an insider or a culturally knowledgeable outsider, of how a specific event or program has affected individuals, groups, and communities. It takes into account the perspectives of various actors, combining qualitative and quantitative as well as practical and abstract analysis. It looks to the distant future while also analyzing immediate results, keeping in mind the beginning, intermediary, and final stages of a project.
Limitations:
Practical concerns prevent any model from being rigidly structured, inflexible, and all-encompassing. The model described here represents an ideal, incorporating all of the points of evaluation discussed above. Evaluations must strive for this ideal, but practical concerns may prevent certain areas from being fully explored. For instance, depending on the context, it may be impossible to achieve plurality by interviewing individuals with different perspectives on a project. Time and funding constraints may prevent one from conducting continuous long-term evaluations, especially if the project is far from home. Finally, applied ethnomusicologists often work on a number of projects simultaneously, or in quick succession; striving to achieve an ideal model would significantly reduce the number of projects that an ethnomusicologist could take on.
Ethnomusicological models must consider these, amongst other practical concerns. While the importance of close relationships with the community and individuals within it may seem like a basic concept for ethnographers, many government-funded projects and NGOs hire external evaluators. These evaluators use mostly quantitative data-extraction techniques to determine whether a project is a success and should continue to receive funding. While this is done in order to maintain objectivity in evaluations, information can be lost, manipulated, or misunderstood when evaluators lack a personal connection to and cultural knowledge of the community. This concern is even more acute in ethnomusicology, which typically broaches intangible concepts. Indices of poverty, mortality, or health are easier to determine using quantitative data analysis than intangible concepts such as heritage or identity. Many existing methods of evaluation are incongruent with anthropological and ethnomusicological practice. Ethnomusicologists must find a middle ground that is both practical and effective.
–Rebecca Kunin and Dikshant Uprety
Governmental and non-governmental institutions rely on monitoring and evaluation (M&E) to track their progress and determine whether the goals they have set can be met. Identifying problems early on allows project or policy stakeholders to suggest corrections. A well-built M&E system also ensures that project activities and objectives are attained within their given timeframe, which in turn may have financial repercussions. M&E is also a good way to ensure optimal use of scarce project resources (which may include human, social, physical, and financial capital). Sometimes regular monitoring is conducted against a baseline standard, which involves data collected at the beginning of the project. Evaluations conducted after the completion of a project or policy initiative can also benefit from baseline data.
Finally, M&E systems are dynamic systems which can change with time and place. For example, an ethnomusicologist monitoring whether or not local children are accessing audio-visual materials within an archive in rural Nepal may decide to visit the children’s houses to interview them and their parents to understand how the archive materials can be made more accessible or attractive for the children. The same method – visiting each child’s house and interviewing parents and children – might not be possible, either financially or physically, in a large city such as New York.
M&E can be used to:
- Identify problems and make corrections on projects
- Contribute to general knowledge about successful and unsuccessful project strategies
- Demonstrate the results of a project to funders, clients, researchers, etc.
- Avoid financial overruns and optimize the use of scarce resources
- Achieve project objectives in a timely manner
- Assess the performance of the project against a standard
- Improve the M&E process
–Rebecca Kunin and Dikshant Uprety
In the most basic terms, indicators are variables used to measure change. Most programs or projects in the public sector have integrated monitoring and evaluation components, and indicators are the building blocks of such components. They are highly useful for measuring the efficiency and effectiveness of ongoing or completed activities within a program or project. Let’s take an example to make this point clear. Say that an ethnomusicologist obtained US$20,000 to buy musical instruments for a rural library in Nepal so that more students from the village would frequent the library and spend time there. Now say that the ethnomusicologist received the money from a large international non-governmental organization called EDUCATE, and would receive half the funding in the first year and half the next year, depending on the level of success of the project. When signing the contract with the ethnomusicologist, EDUCATE demanded that the ethnomusicologist provide a brief quarterly monitoring report and an evaluation report at the end of each year. The latter will guide EDUCATE’s decision on whether to continue funding the project for the next year. Specifically, EDUCATE demands the following information:
- How many students currently visit the library (per day/per week/per month)?
- How many hours do students currently stay in the library (per visit/per month)?
- What is the average student age/gender?
Questions 1, 2, and 3 are what can be termed baseline information, i.e. information to which new information can be compared and contrasted. Moreover, EDUCATE demands the following information after the project starts: (a) Has the number of students who visit the library increased? (b) Has the number of hours that students use the library increased? (c) Is the library useful for students of different age groups and genders? Questions a, b, and c can be answered only when the ethnomusicologist has the data for Questions 1, 2, and 3. Therefore, “student visits per day,” “student visits per week,” and “average time spent in the library” are indicators. Similarly, the ethnomusicologist can construct other indicators. Based on the information collected (daily/weekly/monthly), the ethnomusicologist can write the quarterly monitoring reports and an evaluation report at the end of the year.
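The core mechanics of this comparison can be sketched in a few lines of code. The indicator names and all figures below are hypothetical placeholders, not data from the EDUCATE example; the sketch simply shows how follow-up values are set against baseline values to answer questions like (a) and (b):

```python
# Sketch: comparing follow-up indicator data against a baseline.
# All indicator names and numbers are hypothetical, for illustration only.

baseline = {
    "student_visits_per_week": 40,   # from Question 1 data
    "avg_hours_per_visit": 1.5,      # from Question 2 data
}

follow_up = {
    "student_visits_per_week": 55,
    "avg_hours_per_visit": 2.0,
}

def percent_change(before, after):
    """Percent change of an indicator relative to its baseline value."""
    return (after - before) / before * 100

for indicator, before in baseline.items():
    after = follow_up[indicator]
    print(f"{indicator}: {before} -> {after} ({percent_change(before, after):+.1f}%)")
```

A quarterly monitoring report could then simply restate these percent changes alongside a short narrative interpretation.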
It can be argued that the use of indicators is meaningless for ethnomusicologists, since our disciplinary background pushes us to produce long textual narratives instead of numerical data files. However, it is very important for ethnomusicologists to understand these concepts, especially those who want to work for non-profit, non-governmental, and governmental organizations. Moreover, indicators are basic data; that is, they still need interpretation. For example, continuous data gathering on student visits by gender under the age of 18 might show that more females are visiting the library than males. How would our ethnomusicologist interpret this data? The obvious choice is a turn toward ethnographic participant observation and interviews. The ethnomusicologist might find that most males under the age of 18 are working as child laborers in a nearby brick kiln and are unable to visit the library during usual work hours. Thus, classic ethnomusicological methods can be employed alongside indicators to add value to them.
–Rebecca Kunin and Dikshant Uprety
Plurality:
A single project could be both a success and a failure depending on differing viewpoints. It is exactly for this reason that ethnomusicological evaluation must take into account the results of a project from the perspectives of different actors within a community. Admittedly, ethical dilemmas get in the way of this holistic type of evaluation. Power dynamics between respondents, ethnographers, funders, institutions, and governments must be taken into account. It is important to remember that the ethnomusicologist, as a cultural broker, must consider both external pressure from the government, funders, and whoever else might have a stake in the success of a project, and internal pressure from the community, group, or individuals that the project is intended to benefit. The cultural broker must walk the line between the practical concerns of obtaining funding for a project and the abstract concerns of doing a good job and benefiting the community. Evaluative models must therefore be flexible and provide space to problematize singularity.
Qualitative and Quantitative Methods:
Evaluations must incorporate both quantitative and qualitative research. Music and healing, or medical ethnomusicology, is perhaps a good place to start because success is somewhat more tangible, as research in these subfields often seeks to produce concrete results. Because it can be both quantitatively and qualitatively analyzed, medical ethnomusicology is a possible route to theorizing evaluation.
Practical and Abstract Evaluations:
While medical ethnomusicology is more tangible, it must still take into account that results are both practical and abstract. While ethnomusicologists might be able to determine the effects of distributing condoms and educating healers on disease prevention, for instance, how can they possibly begin to evaluate the effects of positive living? It is for this reason that ethnomusicologists must balance qualitative and quantitative methods.
Short-term and Long-term Evaluations:
According to existing models, evaluations must not only be both formative and summative, but must also take into account the process of change. While any evaluation should include formative, process, and outcome assessments, practical problems may prevent individuals from evaluating into the long term. Depending on the project, evaluating long-term results may mean that ethnomusicologists must never stop evaluating their projects, even after they have been completed. While funding and time constraints make this impossible, ethnomusicologists must find a middle ground. If they cannot evaluate far into the future, they must explain why.
–Rebecca Kunin and Dikshant Uprety
The steps to construct an M&E system vary between organizations. Here we have tried to include the most basic steps for ethnomusicologists to think about when they are planning any project or program where M&E is an integral component. Remember that monitoring usually involves frequent data collection that informs project personnel and funding agencies on whether the project is on track or whether project activities need to be readjusted to obtain expected outcomes or results. In the same vein, evaluation pertains to the long-term assessment of the project or program (usually after a year, or sometimes after project completion) to investigate whether the whole project was successful. As such, the ethnomusicologist, together with their stakeholders, can decide when to monitor periodically and when to conduct evaluations, unless this is specified by the funding agency itself.
–Rebecca Kunin and Dikshant Uprety
In this section, we outline the basic M&E methods and tools. M&E tools are used to collect indicator data (see the M&E Indicator section above). Usually, quantitative methods and tools are used for monitoring purposes. However, as we discussed in the indicator section above, qualitative methods are typically employed for interpreting numeric data as well as for understanding aspects of the project which cannot be quantified, for example, the quality of the books in the library or parents’ perceptions of whether they approve of their children visiting the library. Information on such issues can only be obtained using qualitative methods and tools. The list below outlines the types of data collection methods and tools.
- Qualitative methods and tools: These include methods that ethnomusicologists are familiar with, such as participant observation and interviews. Focus Group Discussions (FGDs) are a newer approach in ethnomusicology, although they have been used within the international development sector for some time. FGDs can be very useful for cross-checking information or for collecting data through group dialogues, which may not be possible during participant observation or one-on-one interviews. Typically, FGDs are conducted with 5-10 people in a secluded room (for privacy reasons). The conversation should be recorded (with the approval of the FGD participants) for the ethnomusicologist to refer to in the future. FGDs are also a cheaper and less time-intensive alternative to survey questionnaires.
- Quantitative methods and tools: As we have said multiple times in this M&E section, quantitative methods and tools are very important for checking the progress of a project. For informed decision making, many projects have built-in systems for checking results or outcomes periodically and reporting to funding agencies or project personnel. Quantitative survey questionnaires are thus an invaluable resource. An ethnomusicologist who knows how to design a quantitative questionnaire will be able to check their project’s progress with greater ease than one who does not have this skill. Alongside primary data from the survey, ethnomusicologists can also support their findings with secondary data sources.
- Mixed methods and tools: As the name suggests, mixed methods employ both quantitative and qualitative methods and tools. There are no universal hard-and-fast rules governing what mixed methods should look like. However, the data and M&E expectations of the funding agency, along with the data needs of project personnel, usually guide the mixture of qualitative and quantitative tools within mixed methods.
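To make the quantitative side of this concrete, the sketch below tabulates survey responses into the kinds of indicator summaries a quarterly report could draw on. The respondent records and field names are invented for illustration, not taken from any real questionnaire:

```python
# Sketch: summarizing hypothetical survey responses for a quarterly
# monitoring report. All respondent records are invented placeholders.
from collections import Counter

responses = [
    {"age": 12, "gender": "F", "visits_per_week": 3},
    {"age": 15, "gender": "M", "visits_per_week": 1},
    {"age": 10, "gender": "F", "visits_per_week": 4},
    {"age": 14, "gender": "F", "visits_per_week": 2},
]

def summarize(records):
    """Aggregate raw questionnaire responses into simple indicator values."""
    n = len(records)
    return {
        "respondents": n,
        "avg_age": sum(r["age"] for r in records) / n,
        "avg_visits_per_week": sum(r["visits_per_week"] for r in records) / n,
        "gender_counts": dict(Counter(r["gender"] for r in records)),
    }

print(summarize(responses))
```

In a mixed-methods design, summaries like these would then be paired with interview or FGD material that interprets what the numbers mean in context.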
–Rebecca Kunin and Dikshant Uprety
American Anthropological Association (AAA) Guidelines for Evaluation of Ethnographic Visual Media
National Endowment for the Arts (NEA) Teacher Evaluation and Accountability Toolkit (2011)
National Endowment for the Arts (NEA) Resources on Program Evaluation and Performance Measurement
National Endowment for the Humanities (NEH) Landmarks of American History Workshops Summer Seminars and Institutes: Participant Evaluations
National Endowment for the Humanities (NEH) Division of Public Programs Impact Evaluation
Web Center for Social Research Methods
American Evaluation Association
W.K. Kellogg Foundation Evaluation Handbook
Centers for Disease Control and Prevention: Program Performance and Evaluation Office
Better Evaluation
National Assembly of State Arts Agencies: Performance Measurement Models
Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources
United States Department of Agriculture Forest Service: Evaluation and Monitoring
United Nations Development Program: Independent Evaluation Office
My Environmental Education Evaluation Resource Assistant (Meera)
Grantcraft: Participatory Action Research: Involving “All the Players in Evaluation and Change”
A Handbook for Participatory Action Research, Planning and Evaluation
UN Women: Monitoring and Evaluation Frameworks
Handbook on Planning, Monitoring and Evaluating for Development Results
(University of Oxford) A Step by Step Guide to Monitoring and Evaluation