It is hard to dispute that evidence is important for public policymaking: evidence shows what works and what does not. Parkhurst (2017) notes that proponents of greater evidence use point to examples in which stricter or more extensive reliance on evidence helps avoid unnecessary harm and attain social policy goals. At the same time, however, these proponents worry that evidence can be cherry-picked, obfuscated, or manipulated for political purposes. This is a crucial issue to address, and, for evidence advocates, one way to tackle these concerns is evidence-based policymaking, or EBP (Parkhurst, 2017). EBP expects policy decisions to rest on careful and scrupulous use of scientific evidence.
Calls for evidence-based policies have become so widespread in the last few decades that they have grown into a movement of their own. Parkhurst (2017) notes that calls for EBP have multiplied not only within the state apparatus but also within scientific institutions and the media. The quest for better evidence-based policymaking is closely associated with widespread demands for more effective service delivery and enhanced accountability in democratic societies. Head (2016) reports that this attention to improved policy and program design for greater effectiveness has long been mostly a matter of domestic policy; recently, however, concerns about improving the design and delivery of foreign aid programs have also been raised. According to Head (2016), evidence-based decision-making processes, founded on the explicit use of reliable evidence and acceptable consultative processes, are considered to contribute to measured policies and just governance. Alongside efficiency and effectiveness, these objectives serve broader goals: enhancing the perceived validity of political decision-making and society's trust in the people behind the decisions.
One important means of generating evidence is evaluation. Before turning to the role evaluation plays in the implementation of the National Development Plan, it is worth noting how the paradigm has shifted in recent years. Almost twenty years ago, Hopson (2003) described the development of multiculturally and culturally sensitive evaluation approaches and strategies as an absolute necessity. This includes developing multicultural competence, which implies recognizing the "epistemological ethnocentrism" that favors the perspectives and values of the white middle class (Hopson, 2003, p. 2). The task of evaluators, then, is to understand how awareness of cultural differences contributes to a variety of understandings of evaluation. Today, when the world is more diverse and inclusive than ever, these ideas remain relevant and have largely been realized.
In the African context, evaluation does not seem to be as widespread, especially evaluations driven by governments rather than donors. However, Goldman et al. (2018) state that, since 2007, national evaluation systems have been implemented in South Africa, Uganda, and Benin, which they consider a substantial policy experiment. Notably, the National Planning Commission (2021), in discussing the National Development Plan 2030, emphasizes the importance of evaluation. In achieving the NDP's development priorities, evaluation is essential as a generator of evidence capable of shaping the plan's policies and practices. According to the National Planning Commission (2021), since the National Evaluation Policy Framework (NEPF) was adopted in 2011, evaluation has continually been advanced as a successful decision-making tool. It has been deemed effective at all levels and in all divisions of government, across a number of different contexts, and for all citizens.
For instance, in the area of education, training, and innovation, the NDP has a specific vision of the results to be achieved by a certain time and how they are to be evaluated. According to the National Planning Commission (2021), by 2030, South Africans' access to the highest-quality education and training is to produce significant improvements in learning outcomes. South African students' performance in international standardized tests is to be comparable to that of students from areas with similar levels of development and access to learning resources. In addition, education, training, and innovation are to be tailored to different needs and to produce highly skilled professionals: graduates of South African educational institutions are to possess the knowledge and skills the society and economy require. Moreover, the education system is to be more inclusive and help all citizens realize their full potential, especially those from disadvantaged and minority groups.
The NEPF maintains the foundation for a minimum evaluation system at the government level to guide and promote the quality, effectiveness, and relevance of evaluation processes. In this way, the NDP ensures a results-oriented approach: credible evidence obtained from evaluations is to be used in planning, monitoring, budgeting, and organizational reviews for performance improvement. Moreover, the NDP ensures that evaluation-based evidence is supported by a set of guidelines governing the steps of undertaking evaluations under the National Evaluation System. All of this indicates that evaluation plays a key role in facilitating the NDP's outcomes and impacts.
It is worth noting that evaluation is sometimes confused with monitoring, and it is important to know the difference between them. As per Planning, Monitoring & Evaluation (2019), monitoring asks how effectively planned interventions are being executed, while evaluation asks whether those interventions are the right responses to a specific challenge. As per BetterEvaluation (2013), evaluation assesses whether these responses are economical, efficient, and effective, and seeks ways to improve them at later stages. Evaluation is inherently judgmental, but its judgments are based on established evaluation objectives and criteria.
Finally, evaluation has its own specific purposes that guide practice and help evaluators in their appraisals. According to Planning, Monitoring & Evaluation (2019), there are four such purposes. The first is improving performance, or evaluation for learning, which is aimed at providing feedback to initiative managers. Potential questions to ask are: Is this the right initiative to achieve the goal? Is this a logical combination of inputs, outputs, and outcomes? Are there more effective and efficient ways to achieve the stated goals? The second purpose is evaluation for improving accountability, which is aimed at establishing where public spending is going, whether it is making a difference, and whether it provides value for money.
The third purpose is evaluation for knowledge generation, or evaluation for research: knowledge of what works and what does not must be expanded for a particular public policy or state program. This knowledge allows the government to establish a factual basis for the future development of policies and programs. The fourth purpose is decision-making: policymakers need to be able to assess an initiative's worth or merit. They need to be able to say whether the initiative meets its goals and objectives and what kind of impact it has on the intended beneficiaries' lives. Further questions to ask are: Does the initiative affect different population segments differently? Are there undesired consequences? Should the initiative be expanded, re-engineered, or closed? Together, the four sets of questions tied to these purposes address all the substantial aspects of evaluation.
References
BetterEvaluation. (2013). Manage evaluation. Web.
Goldman, I., Byamugisha, A., Gounou, A., Smith, L. R., Ntakumba, S., Lubanga, T., Sossou, D., & Rot-Munstermann, K. (2018). The emergence of government evaluation systems in Africa: The case of Benin, Uganda and South Africa. African Evaluation Journal, 6(1), 1-11.
Head, B. W. (2016). Toward more “evidence‐informed” policy making? Public Administration Review, 76(3), 472-484. Web.
Hopson, R. (2003). Overview of multicultural and culturally competent program evaluation: Issues, challenges, and opportunities. Oakland, CA: Social Policy Research Associates.
National Planning Commission. (2021). National Development Plan 2030: Our Future – Make it Work.
Parkhurst, J. (2017). The Politics of Evidence: From Evidence-based Policy to the Good Governance of Evidence. Taylor & Francis.
Planning, Monitoring & Evaluation. (2019). National Evaluation Policy Framework.