What is a concept inventory? What are they for and what are they not for?
The transformative work of David Hestenes and colleagues led to the development of the Force Concept Inventory (FCI), a testing instrument designed to measure student comprehension of the Newtonian concept of force.
Its use led to the disconcerting realization that students capable of scoring well on standard exams did not really understand the basic concepts involved.
Our own research indicates that part of the problem is the fragility of students' understanding, often the result of their never being placed in situations where they must explore and hone their understanding in new and challenging contexts.
Several studies have assessed the effectiveness of freshman physics courses by pre-course and post-course testing of students. These tests, which emphasize conceptual understanding and qualitative reasoning, indicate that students learn almost nothing, regardless of the skill and enthusiasm of the instructor.
Concept inventories have two complementary purposes. The first is to identify basic concepts (such as Newton's laws) that instructors believe are well understood by students but in fact are not.
Here the point is to grab instructors' attention, so that they think more critically about their own summative assessments (i.e., tests) and begin to rethink what they "cover" and how they assess their effectiveness. The role of the FCI in this type of instructional transformation has been described in detail by a number of prominent physics instructors and has led to a number of alternative teaching strategies, such as Eric Mazur's peer instruction and Bob Beichner's SCALE-UP project. In these classes lecture time is dramatically reduced, and students spend more time in various forms of interactive engagement with the material.
The immediate goal of the BCI is to catalyze a similar transformation in biology education.
The second goal of concept inventories and related instruments is to identify student misconceptions, either innate or introduced in the course of instruction, in order to determine what approaches are most effective at leading students to conceptual mastery.
This includes understanding how ideas interact, and how the order in which ideas are introduced affects their incorporation, that is, their constructive and destructive interference.
Concept inventories by Kathy Garvin-Doxas
The assessment of learning and the evaluation of performance are issues plagued by scale. This is especially true at the post-secondary level, because we are faced with large class sizes and increasing pressure to demonstrate that we can not only bring in research dollars but also teach students effectively in this type of learning environment. In an ideal world, we would be able to spend a considerable amount of time speaking with each of our students so that we could accurately gauge their level of conceptual understanding. Conceptual understanding is particularly important with regard to the fundamental principles of a given discipline. With it, students can synthesize what they are learning to solve new problems and (eventually) move the discipline in different and unique directions. Without it, students are limited to defining, labeling, categorizing, and the like.
When we are fortunate and have considerable help in the form of graduate students, we might be able to give an essay exam to a course of 400 students as a means of finding out how deep their conceptual understanding goes, but by the time we are able to collect their essays and evaluate them, it is often too late to address the areas where they have the most difficulty. Hence concept inventories. Concept inventories are instruments that look like standard multiple-choice examinations, but are much more. There are many types of misconceptions, such as preconceived notions, non-scientific beliefs, conceptual misunderstandings, vernacular misconceptions, and factual misconceptions (e.g., Bio 2010). That is a great deal of territory to cover and, thus far, widely used concept inventories tend to be limited to identifying only some types of misconceptions, as well as only some portion of the material covered in any given course. In the case of the Biology Concept Inventory we are developing, we have elected to focus on conceptual misconceptions, preconceived notions, and vernacular misconceptions.
The instruments that function as a concept inventory employ a multiple-choice format in which each distracter reflects a commonly held misconception that has been rigorously documented by research. In this way, each potential response reveals something about a student's thinking and difficulties, in ways that allow teachers to address them directly in their teaching. These instruments are often controversial within a given community as well, because they rely on natural language rather than on the usual vocabulary and jargon employed in the discipline. By the time students arrive in our classrooms, they have already internalized the rhetoric of science. For example, when faced with broad, open-ended questions asking them to use their own words to explain something like diffusion, students consistently fell back on what they had learned was expected of them when responding to essay questions in science: they defined, they labeled, they employed rote memorization. On top of that, they were not used to essay questions that were open and had no single "right" response, so they often failed entirely to address the question asked. It is because of the predominant use of the rhetoric of science among our STEM students that we have to force them outside the box by using the wording they use when they talk to a non-expert about biology (or whatever subject you are interested in). It is no small thing to discover a means of wording both questions and potential responses in ways that force students to reveal what they really understand, and more importantly, what they think they understand but do not (see, e.g., A Private Universe). This is precisely why concept inventories are so important, but also so time-consuming and difficult to build.
You have to get inside students' heads in ways that allow them to really explain their understanding in their own words; then you have to find a way to translate the patterns among the wide variety of ways that students express the same thinking so that you can develop pilot questions; and finally, you have to validate each question, as well as each distracter and correct response, to ensure that when students read a question they "see" the same question you intended, and that when they select a response it accurately reflects their level of conceptual understanding.
Why go to all of this trouble? There are many reasons, but there are two primary uses for concept inventories: 1) they provide a standardized measure that has been developed, validated, and tested for reliability using solid research methods grounded in the epistemological theories that underlie what we know about learning; and 2) the instrument is administered pre- and post-instruction, and the standardization enables professors to determine the degree to which their particular teaching approach helped their students overcome earlier misconceptions (e.g., Mazur). These characteristics also allow for comparison of teaching approaches on a given concept or concept cluster across courses, institutions, etc. (e.g., Hake's meta-study on interactive engagement using the FCI as the standard).
So, concept inventories are difficult to build because they require a large body of detailed quantitative and qualitative research. Where common misconceptions have already been documented in a given discipline, building questions is much easier, but in our case (and that of most STEM disciplines), there is very little quality research that reliably documents common areas of misconception, let alone what those misconceptions indicate. As a result, we followed the methodology and development process used for the Force Concept Inventory (FCI; Halloun & Hestenes), with the addition of initial essay questions, as in the development of the Astronomy Diagnostic Test (ADT; Zeilik et al.).
next – building the BCI
bioliteracy.net © all rights reserved
last update 29 October 2006