Sunday, October 14, 2007

And so it is that I have come full circle...

When I started grad school again two years ago, my main research interest was learning disabilities and the use of dynamic assessment to identify children with learning disabilities. Somewhere along the way, my interests shifted to theories of measurement. But, last semester I had a graduate assistantship in the area of disabilities and once again my interest was renewed, though now couched in terms of measurement: I was more interested in conducting a measurement equivalence study comparing items given under accommodations to children with disabilities against the same items given without accommodations to their nondisabled peers.

But, now that I've begun this research into the history of intelligence measurement, I've discovered historical views about educating children with learning disabilities that have brought about a conceptual change within me. I had never realized that these historical views were the driving force behind the development of factor analysis, my pet interest. Theorists were so wedded to their beliefs about an innate, inherited, unitary intelligence that they devised statistical procedures to uncover the underlying unitary construct of intelligence in their sets of tests. Many went further by deducing that the children on the low end of the intelligence rank could not benefit from education because of the unchangeable nature of intelligence.

However, Thurstone developed the technique of rotating factors, which caused the unitary factor of intelligence to disappear, replaced by multiple, independent "primary mental abilities." Thus, children could no longer be ranked based on their average score on a number of intelligence tests. Instead, each child could be considered unique, with a varying set of strengths and weaknesses. And, I presume, educable.
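The mathematical point behind Thurstone's move can be sketched in a few lines of numpy. This is my own toy illustration, not anything from the historical sources: the loading matrix below is made up, and the rotation angle is arbitrary. The idea is that any orthogonal rotation of a factor-loading matrix reproduces the test correlations equally well, so a dominant first column (a "g"-like general factor) is not a unique mathematical fact about the data; rotate, and the same fit is carried by several balanced factors.

```python
import numpy as np

# Hypothetical loadings: 4 tests on 2 factors. The first column looks like
# a dominant general factor ("g"); the second carries smaller contrasts.
L = np.array([[0.8,  0.3],
              [0.7,  0.4],
              [0.6, -0.5],
              [0.7, -0.4]])

# Any orthogonal rotation R leaves the implied correlations L @ L.T unchanged.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
L_rot = L @ R

print(np.allclose(L @ L.T, L_rot @ L_rot.T))  # True: identical fit to the data
print(L_rot.round(2))  # but the single dominant column has dispersed
```

Both loading matrices explain the test intercorrelations identically; only the interpretation ("one general ability" vs. "several primary abilities") changes. Which rotation one prefers is a substantive choice, not something the correlations themselves dictate.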

And so it is that I have come full circle...for in order to truly pursue my initial interests in learning disabilities, I needed to undergo a conceptual change regarding my beliefs about the nature of intelligence. The measurement equivalence study I intended to conduct for my dissertation has very scary implications. If I find that students receiving an accommodation experience differential difficulty for certain items, I may conclude that the items are tapping a secondary dimension of cognitive processes in which the students with disabilities are lacking in ability compared to their nondisabled peers. But why? I must learn from the mistakes of my predecessors. First, intelligence (or math ability or reading ability) is not a unitary construct that can be quantified using a linear scale. Rather, it is a multidimensional construct, in which a linear quantification could obscure the child's true location on the latent trait in reference to other children (think vectors here, not lines). Second, nothing about the mathematical abstraction of co-relations among test scores implies an innate, unchangeable mental ability (think reification of constructs here).
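The kind of "differential difficulty" check described above can be sketched with simulated data. Everything here is hypothetical: the sample, the 0.8-logit extra difficulty for the accommodated group, and the choice of a Mantel-Haenszel statistic (one common approach to flagging differential item functioning; the post does not name a specific method). The key design point is that groups are compared within strata matched on ability, so a flagged item signals something beyond an overall ability difference.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)      # 0 = nondisabled peers, 1 = accommodated group
ability = rng.normal(0.0, 1.0, n)  # the primary latent trait

# Simulated item responses: this item carries an extra 0.8-logit difficulty
# for the focal group -- a stand-in for a "secondary dimension" of processing.
logit = ability - 0.5 - 0.8 * group
p = 1.0 / (1.0 + np.exp(-logit))
correct = rng.random(n) < p

# Mantel-Haenszel: compare odds of a correct answer across groups within
# strata matched on ability (in practice, matched on total test score).
strata = np.digitize(ability, np.quantile(ability, [0.2, 0.4, 0.6, 0.8]))
num = den = 0.0
for s in np.unique(strata):
    m = strata == s
    a = np.sum(correct & (group == 0) & m)    # reference group, correct
    b = np.sum(~correct & (group == 0) & m)   # reference group, incorrect
    c = np.sum(correct & (group == 1) & m)    # focal group, correct
    d = np.sum(~correct & (group == 1) & m)   # focal group, incorrect
    t = m.sum()
    num += a * d / t
    den += b * c / t
mh_odds = num / den
print(round(mh_odds, 2))  # well above 1: item is relatively harder for the focal group
```

An odds ratio near 1 would suggest the item functions equivalently; a ratio well above 1 flags it as differentially difficult for the focal group even after matching on ability. Crucially, as the post argues, the statistic says nothing about *why*, and nothing about whether the gap is fixed or teachable.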

Thus, I should be careful that my interpretations of the measurement equivalence study do not lead one to conclude that children with learning disabilities are unable to ever solve certain types of problems due to intractable intellectual deficiencies. A more appropriate interpretation would merely suggest that if the secondary cognitive processes can be identified, then instruction for students with disabilities should target those processes in order to reduce future performance differentials between disabled and nondisabled students.

Ironically, my interest in dynamic assessment foreshadowed the conceptual change. Historical views have centered on a single test score that places limits on students' future learning. Dynamic assessment instead involves pretesting, teaching, and evaluating the outcome of the teaching. If no learning occurs, the child is considered to have a learning disability, but in contrast to historical views, the child is still expected to learn if given additional assistance.

And so it is that this weekend, starting with an affront to my own abilities, has led to a conceptual change about intelligence that impacts the way I view my own dissertation research. However, I hope that the larger lesson I learn is not to allow my own expectations as a researcher to cloud my interpretations of findings, or to influence my data collection or analysis techniques. I shall walk away from this weekend as a more conservative researcher, who refuses to draw unwarranted inferences or reify constructs.


Saturday, October 13, 2007

In Response to the Previous Post: Teaching Feedback

So I have finally begun the arduous journey of conducting an historical research project for my History & Systems of Psychology class (yes, I'm aware the semester is halfway over already). I changed my topic and now I'm writing about the history of measurement, beginning with the first form of mental measurement, intelligence testing. But, I've started by first reviewing relevant chapters of the book, The Mismeasure of Man, by Stephen Jay Gould, whose revised edition responds to the infamous book, The Bell Curve, by Herrnstein and Murray. I have read neither, but the gist of it is the debate over the nature of intelligence: whether it is innate and unmalleable, or whether it is unmeasurable due to the complex, multidimensional, and malleable nature of the construct (in which case we cannot presume to derive a linear quantification of it). The impact of the former stance on social policy is what drives the contention with The Bell Curve, I believe (since I haven't read the book, I prefer not to elaborate).

But, just by reading the first few pages about Binet, the person who developed what later became the Stanford-Binet Intelligence Test, I've decided that Binet is my hero. I admire him immensely. He cautioned against using the scale to rank and label all children. Rather, he saw the test's sole purpose as identifying children who needed remediation. His intent was to help the children, and he believed that the test should not be considered a measure of intelligence, which is nothing but a reification that leads to false notions of a linear and quantifiable construct. But, those who brought the test to America (Goddard and Terman) fell prey to the fallacy and used the test to categorize and label children in order to impose limits on them. Such an approach to intelligence and its measurement has pervaded the American psyche ever since.

And now I think that growing up within this culture is partly what led to my response to the teaching feedback in the previous post. During childhood, I had plenty of experiences in which I fell at the bottom of the bell curve. For example, in 7th grade my teacher lined us all up at the front of the room and had us do arithmetic in our heads. He would increase the load until we couldn't produce the answer, at which point we had to sit down. I was always amongst the first to sit down. Such experiences reinforced the notion that we could be ranked by intelligence or ability. Furthermore, the teacher never made any attempt to help improve the computation skills of those who sat down first, which further reinforced the notion that our rank was unchangeable: our own efforts at improvement would not change an innate math ability.

Thus, when affronted by such negative feedback, my first response was an acceptance of my lowly rank on teaching ability. My second response was a belief that I could not change my lowly rank by attempting to improve myself. I have set limits on myself based on a fallacy that is culturally pervasive! The next step is to see whether or not I can effect change in my own attitudes and beliefs towards the nature of intelligence.

Editor's Note: Eh! I'm probably exaggerating things a bit. It does appear that I like to make mountains out of molehills.
