Constructed- and Selected-Response Items
When designing a test, one may choose between constructed- and selected-response items. Each tool has its place among psychological assessment methods, and each has unique advantages as well as inherent flaws. For example, selected-response items allow for faster analysis of the results, save answering time, and can cover a broad range of content (Reynolds & Livingston, 2012), although they restrict respondents to a fixed set of options. Furthermore, the format makes it possible for participants to guess the right answer rather than demonstrate their knowledge. Constructed-response items, in turn, yield richer data and reduce the threat of misinterpretation, yet they require a significant amount of time both to answer and to score. If I were to create a test, I would consider its goals before choosing a response format, since selected-response items save time, whereas constructed-response items provide a better understanding of the participants’ opinions.
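The guessing problem can be made concrete with the classical correction-for-guessing formula, which rescales a raw score under the assumption that incorrect answers reflect random guessing among the available options. The short sketch below is illustrative only; it is not drawn from the sources cited here, and the function name and figures are hypothetical.

def corrected_score(num_right, num_wrong, num_options):
    # Classical correction for guessing: R - W / (k - 1).
    # Assumes incorrect answers reflect random guessing among k options;
    # omitted items are neither rewarded nor penalized.
    return num_right - num_wrong / (num_options - 1)

# Hypothetical example: 30 correct and 10 incorrect answers on a
# four-option multiple-choice test.
print(corrected_score(30, 10, 4))  # 26.67, the guessing-adjusted score

In practice, many test developers report raw scores and treat guessing as measurement error, but the formula shows why selected-response results can overstate what participants actually know.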
Item Analysis
Item analysis refers to the process of deconstructing the elements of a test in order to explore their role, their contribution to the goals of the testing process, and the strengths and limitations of the items in question (Reynolds & Livingston, 2012). The importance of item analysis as a stage of developing a psychological test cannot be overstated, since it reveals the limitations of test response items. For example, item analysis may lead to the conclusion that some of the answer options represent the same idea and, therefore, need to be excluded from the list. Furthermore, item analysis points to opportunities for improving a test, e.g., by reshaping the questions or the suggested answers so that the responses retrieved from the participants become more accurate and detailed. The appropriateness of the procedures chosen for a given test is also identified in the process, thereby allowing for a significant improvement of the outcome (Lane, Raymond, & Haladyna, 2015).
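To illustrate what item analysis involves in practice, the sketch below computes two statistics commonly used in classical item analysis: item difficulty (the proportion of examinees who answer an item correctly) and item discrimination (the correlation between an item score and the total test score). The response matrix is hypothetical, and the code is a minimal illustration of these two indices rather than a complete item-analysis procedure.

# Hypothetical response matrix: rows are examinees, columns are items,
# 1 = correct answer, 0 = incorrect answer.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

totals = [sum(row) for row in responses]

def mean(values):
    return sum(values) / len(values)

def pearson_r(xs, ys):
    # Pearson correlation; returns 0.0 when an item has no variance.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sd_x = sum((x - mx) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y) if sd_x and sd_y else 0.0

for item in range(len(responses[0])):
    scores = [row[item] for row in responses]
    difficulty = mean(scores)  # proportion correct (the p value)
    # A stricter variant would exclude the item from the total score.
    discrimination = pearson_r(scores, totals)
    print(f"Item {item + 1}: p = {difficulty:.2f}, r = {discrimination:.2f}")

An item with very low discrimination, or with a difficulty near 0 or 1, would be flagged for revision or removal, which is precisely the kind of decision described above.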
Testing Language: Grade 1
Bloom’s Taxonomy
The framework of Bloom’s taxonomy provides the foundation for developing higher-order thinking in students. By definition, Bloom’s cognitive taxonomy requires that essential learning priorities be made explicit. By offering six categories for classifying the key objectives and processes involved in learners’ acquisition of relevant knowledge and skills, Bloom’s taxonomy lays the groundwork for metacognition, thus developing an intrinsic understanding of one’s own learning process. As a result, students become able to participate in activities linked to higher-order thinking.
The framework can therefore be used to engage test participants at different levels of thinking. For example, comprehension, one of the levels of Bloom’s taxonomy, can be assessed by providing learners with a text that they have to analyze and interpret. Similarly, respondents can engage in synthesis, another essential element of the taxonomy, by reconstructing the suggested text and exploring its meaning.
For example, the following question can be used to prompt learners to write an essay: “Is metacognition necessary for children’s development?” Viewed from the perspective of Bloom’s taxonomy, the question can be rewritten to target the levels known as basic knowledge, analysis, and synthesis: “What role does the use of metacognition play in the development of children’s analytical thinking abilities?” The revised question helps gauge the learners’ general understanding of the subject matter, as well as prompting them to deconstruct it into its constituents (i.e., to evaluate the effects of specific aspects of metacognition on analytical thinking). Furthermore, opportunities for synthesis are created once the elements are rearranged and combined. It could be argued, though, that the question compels learners primarily to carry out an analysis of the subject matter (Feller & Yengin, 2014). A relevant article (Vo, Li, Kornell, Pouget, & Cantlon, 2014) can be located at the following link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4217213/.
Item Development
Developing a test is a complex process that involves careful consideration of its author’s goals, the purpose the test is supposed to serve, the audience it will address, and the needs it must meet. Therefore, a thorough overview of the key factors that affect the design of a test has to be carried out. Although the significance and range of these factors hinge on a number of circumstances, including the objectives of a test, its intended audience, and the environment in which it will be conducted, three primary issues must be taken into consideration regardless of the target environment: the background of the participants, their specific needs, and the resources available to meet those needs.
The reasons for using these factors to direct the choice of test questions are straightforward. It is essential to be aware of the culture-specific needs of the target population in order to appeal to respondents on a personal level and, therefore, help them provide the necessary responses. Furthermore, a proper understanding of the respondents’ background informs the objectives of a test and the questions that will be incorporated into it; consequently, the chances of receiving the required information rise significantly. Similarly, the setting in which the testing occurs shapes the test as well. For instance, in an environment where the target population has little time, short questions with multiple-choice answers are a reasonable choice. An analysis of the target population, its needs, the environment in which the testing will take place, and the other factors that shape the choice of test questions, format, and other parameters should therefore be viewed as imperative; as a result, the opportunities for obtaining the required results increase. A relevant article (Dellinges & Curtis, 2017) can be found at the following address: https://escholarship.org/content/qt0pg7d0n1/qt0pg7d0n1.pdf.
References
Dellinges, M. A., & Curtis, D. A. (2017). Will a short training session improve multiple-choice item-writing quality by dental school faculty? A pilot study. Journal of Dental Education, 81(8), 948-955.
Feller, S., & Yengin, I. (2014). Educating in dialog: Constructing meaning and building knowledge with dialogic technology. Philadelphia, PA: John Benjamins Publishing Company.
Lane, S., Raymond, M. R., & Haladyna, T. M. (2015). Handbook of test development. New York, NY: Routledge.
Reynolds, C. R., & Livingston, R. B. (2012). Mastering modern psychological testing: Theory and methods. Upper Saddle River, NJ: Pearson.
Vo, V. A., Li, R., Kornell, N., Pouget, A., & Cantlon, J. F. (2014). Young children bet on their numerical skills: Metacognition in the numerical domain. Psychological Science, 25(9), 1712-1721.