Depending on the type of assessment one needs to conduct, evidence evaluation can be based on different characteristics and variables. For the purposes of this discussion, relevance-based assessment and representativeness-based assessment were chosen. Since these characteristics are essential to any scientifically valuable research, both types are frequently used and often conducted in parallel. However, there are important differences in the features these assessments evaluate and the tools that can be applied.
Relevance-based assessment is conducted to determine whether the evidence provided is thematically and informationally relevant to the research question. Relevant evidence makes a fact clearer than it would have been had the reader not familiarized themselves with that evidence (Maseleno et al., 2017). Cross-referencing with other peer-reviewed articles is often used to evaluate this characteristic, as it establishes a set of links between the notable works in the field.
Representativeness refers to the extent to which the research covers its target demographic as a whole rather than only a part of it. It is a crucial factor in field research, particularly where human participants or potentially sensitive topics are involved. To assess evidence on the grounds of representativeness, a researcher could closely examine the methods utilized in the project. In particular, one should pay attention to the sampling procedures and whether simple random sampling was implemented in the process (Gorbanev et al., 2018). In general, this form of sampling eliminates the majority of potential bias, allowing for greater correspondence between reality and the research results. It is undoubtedly one of the key indicators of a paper's overall academic quality. Representativeness is also generally somewhat easier to assess, as most researchers keep track of the research tools they have used.
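As a minimal sketch of the sampling procedure mentioned above, the snippet below draws a simple random sample in Python. The population, sample size, and identifiers are hypothetical and are not taken from either cited study; they merely illustrate why every member of the sampling frame receives an equal chance of selection.

```python
import random

# Hypothetical sampling frame: identifiers for every member of the target
# demographic (assumed values for illustration only).
population = [f"participant_{i}" for i in range(1, 1001)]

# Desired sample size (also an assumed value).
sample_size = 100

# random.sample() draws without replacement, so each member of the population
# has the same probability of being selected -- the defining property of
# simple random sampling that limits selection bias.
random.seed(42)  # fixed seed so the draw can be reproduced
simple_random_sample = random.sample(population, sample_size)

print(simple_random_sample[:5])
```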
References
Gorbanev, I., Agudelo-Londoño, S., González, R. A., Cortes, A., Pomares, A., Delgadillo, V., Yepes, F. J., & Muñoz, Ó. (2018). A systematic review of serious games in medical education: Quality of evidence and pedagogical strategy. Medical Education Online, 23(1), 1438718. Web.
Maseleno, A., Huda, M., Siregar, M., Ahmad, R., Hehsan, A., Haron, Z., Ripin, M. N., Ihwani, S. S., & Jasmi, K. A. (2017). Combining the previous measure of evidence to educational entrance examination. Journal of Artificial Intelligence, 10(3), 85-90. Web.