This paper presents a review of the article that assesses the levels of reading comprehension in low-achieving adolescents from the perspective of subskills identification and the specificity of tasks. The report features the main points of the study, as well as the authors’ assumptions, views, and arguments. The paper presents a critical evaluation of the article, analyzing its claims, organization, perspectives, evidence, and observations. The report concludes that the study and its findings can be considered reliable.
The article selected for the review is the work by van Steensel, Oostdam, and van Gelderen (2012), which addresses the evaluation of reading comprehension skills in low-achieving adolescents. The authors’ main points are that the SALT-reading test can assess various reading subskills, that the reliability of the test is affected by the specificity of tasks, and that there is an optimal number of tasks that can enhance the test’s reliability within a fixed testing time. The evaluated subskills include retrieving, interpreting, and reflecting on learned information. The underlying assumptions of van Steensel et al. (2012) are that reading subskills can be readily identified in low-achieving adolescents and that task specificity can affect error variance. The potential biases of the article stem from its reliance on potentially unreliable studies in the field that claimed to identify reading comprehension subskills using the same evaluation approaches, as well as from the limited study sample.
To test their hypotheses, the authors use the SALT-reading as the primary evaluation measure. They administer the test, consisting of 21 retrieving, 18 reflecting, and 26 interpreting items, to students showing poor academic performance (van Steensel et al., 2012). The authors’ purpose is to examine whether the subskills can be identified within the selected group of students using confirmatory factor analyses. In addition, van Steensel et al. (2012) aim to assess the impact of task specificity utilizing a two-step G theory analysis. The primary argument the authors offer in support of their main points is that various studies in the field, including those that do not support the study’s thesis, suggest that the test can distinguish subskills empirically. The intended audience appears to be researchers in the field as well as developers of reading tests. This conclusion follows from the specificity of the research question, the large number of references to similar works, and the authors’ extensive recommendations for their colleagues and for future studies.
The findings of the study show that the scores are not multidimensional, which means that separate subskills cannot be identified. As a result, the authors conclude that reading comprehension is a single ability for both high-achieving and low-achieving students (van Steensel et al., 2012). In addition, the study shows that task specificity does not contribute substantially to error variance. Increasing the number of items per task is found to be the most effective way of maximizing the reliability of the test. The major conclusion of the study is that reading comprehension subskills cannot be identified using the SALT-reading; however, the one-dimensionality of the scores can be seen as proof of the test’s validity (van Steensel et al., 2012). The authors note that future researchers should either utilize qualitative studies to demonstrate that reading tests can identify particular reading problems or avoid making such claims.
It is possible to say that the authors’ preliminary arguments about the distinguishability of reading comprehension subskills and the link between the test’s reliability and the specificity of tasks are logical. Van Steensel et al. (2012) support their claims with evidence from other works in the field. However, these assumptions may be inaccurate because they rest on unreliable data, which the authors acknowledge in the discussion section. The facts presented in the findings can be considered reliable, as the authors offer statistical data using a second-order factor model, a G theory-based study, and a D study. The study reveals that various assessment tools yield the same results, which supports the reliability of the facts. The article is well organized, clear, and easy to read; all information is divided into sections, and the necessary references and tables are provided. The authors define significant terms clearly and explain the goals of each evaluation tool in detail. Such an approach allows the audience to understand the findings of the study better and to assess its reliability.
It is possible to say that the study features sufficient evidence on various perspectives on the issue, but the background for the authors’ assumptions is unclear, as their hypotheses are not grounded in particular studies. However, there is enough evidence supporting the conclusions of the research. Van Steensel et al. (2012) present the results of an extensive investigation of the research question. The final claims of the paper differ from the authors’ preliminary arguments and main points: the initial claims supported the study’s hypothesis, while the findings did not. The article can be considered appropriate for the intended audience because, as mentioned above, it draws on extensive studies in the field, presents a multifaceted perspective on the topic, and offers valuable recommendations for future research.
The study presents opposing views, as it includes information about available works on the issue that offer different perspectives. For instance, the authors present various perspectives on the role of task specificity and support each of them with evidence. It is possible to say that the study does not refute any of the existing views; however, its findings reveal that some of the works in the field may be unreliable. The article helps to understand the subject because it states all the facts clearly, takes a comprehensive approach to analyzing the issue, and discusses the causes of differences between the findings of various studies. In addition, the authors present sufficient evidence for their choice of evaluation methods.
The main observation the article features is that other researchers have reported different findings while using the same evaluation methods and analytical approaches. A possible explanation is that the ability to outline separate subskills is determined not only by tests but also by the characteristics of test takers (van Steensel et al., 2012). Another observation the authors make is that it is difficult to define the boundary between a sufficiently challenging and an overly complicated test, as the complexity level potentially affects the findings of a study. Finally, van Steensel et al. (2012) note that the item format of the test utilized for the evaluation could have contributed to the differences in the findings of existing works in the field. The study concludes that the SALT-reading is valid if reading comprehension is considered a single construct.
The reviewed article presents the analysis of the SALT test’s ability to identify reading comprehension subskills in low-achieving individuals. The study’s preliminary assumptions are based on the research in the field and are different from its findings. The authors present sufficient evidence supporting the results of their evaluation and suggest that the differences between findings can be determined by the characteristics of participants, utilized items, and the complexity of tests.
Van Steensel, R., Oostdam, R., & van Gelderen, A. (2012). Assessing reading comprehension in adolescent low achievers: Subskills identification and task specificity. Language Testing, 30(1), 3–21.