In Measurement Matters: Assessing Personal Qualities Other than Cognitive Ability for Educational Purposes, Angela Duckworth and David Yeager urge caution on the part of policy makers and practitioners eager to apply, for purposes of educational assessment and evaluation, measures of students’ personal qualities (or non-cognitive skills) that were originally developed for research.
“We share this more expansive view of student competence and well-being, but we also believe that enthusiasm for these factors should be tempered with appreciation for the many limitations of currently available measures,” the authors advise. Most current measures of non-cognitive qualities such as “growth mindsets” and “grit” were designed for research purposes. Few directly address practical purposes, such as informing the instructional and classroom routines that promote student growth and development.
In the article, Duckworth and Yeager explain that validity is not an inherent feature of a measure itself but rather a characteristic of a measure with respect to a particular end use. They contend that existing state-of-the-art measurement tools used in non-cognitive research—questionnaires and performance tasks—should not be relied upon to diagnose individual students for tracking or remediation or to assess educators and schools for purposes of accountability. The evidence assembled to date suggests that using these existing non-cognitive measures for other purposes in education could yield results that are inaccurate or misleading. Moreover, it could compromise the measures for the very purpose for which they were originally developed.
Validity is not an inherent feature of a measure.
In addition to these cautions, the authors urge practitioners, policymakers, and funders to make investments in R&D and training that could produce measures and measurement practices that would empower those seeking to cultivate these important qualities in students.
The Carnegie Foundation aims to build the capacity in educational organizations to develop and use practical measurement for continuous improvement. Practical measures are useful for assessing whether changes are leading to improvements, targeting attention to specific students at risk of failure, and setting priorities for improvement work. Practical measurement must be informative so as to guide decisions and actions in improvement efforts.
For example, the Student Agency Improvement Community (SAIC) at the Carnegie Foundation is designing and testing changes to educational practices that bolster students’ learning mindsets and strategies. Changes designed in SAIC are informed by the research base and are tested according to the methods of improvement science to ensure that any changes considered for widespread use are, in fact, improvements. We are creating a “practical” measure to support the improvement of classroom practices that build student agency—the mindsets and strategies to persist in the face of serious learning challenges. This survey is termed “practical” because it is easy for practitioners to embed it within their daily work routines, while minimizing the data collection burden placed on students and teachers. The survey will be short—only a few minutes to complete—while other design features of the survey and the plan of analysis will allow for regular use and timely reporting of data to educators. In improvement science, “practical” does not mean “of lesser rigor or quality.” The survey will employ items that are demonstrated to be powerfully predictive of important educational outcomes. Its development has been guided by theory and linked to specific work processes and change ideas being introduced in our improvement community.
In improvement science, “practical” does not mean “of lesser rigor or quality.”
In addition to surveys, data collected from the daily routines of teaching and learning can be another source of practical measures that serve as indicators of students’ personal qualities. These include measures such as the number of completed assignments, the proportion of students who fail a test and then take it again, and the number of students who are engaged during class. These may have serious flaws for purposes like cross-school program evaluations, but for educators leading improvement efforts within their local contexts, these embedded measures, when collected over time as part of a system of measures, can provide valuable insights. A noteworthy benefit of these types of measures is that they impose little or no additional time burden on students or faculty, since producing them requires little beyond what teachers and students already do as routine parts of the classroom experience.
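Tallying embedded indicators like these involves only simple counts and proportions. As a purely illustrative sketch (the record fields and the data below are invented for this example, not drawn from SAIC or any real classroom system):

```python
# Hypothetical sketch: summarizing "practical measures" from routine
# classroom records. All field names and data are illustrative
# assumptions, not part of any system described in the article.

def practical_measures(records):
    """Summarize embedded indicators from per-student records.

    Each record is a dict with illustrative keys:
      'assignments_completed' (int)
      'failed_test' (bool)
      'retook_test' (bool)
    """
    # Total completed assignments across the class.
    total_completed = sum(r["assignments_completed"] for r in records)

    # Proportion of students who failed a test and then retook it.
    failed = [r for r in records if r["failed_test"]]
    retake_rate = (
        sum(1 for r in failed if r["retook_test"]) / len(failed)
        if failed
        else None  # undefined when no one failed
    )

    return {
        "assignments_completed": total_completed,
        "retake_rate_among_failers": retake_rate,
    }


# Made-up classroom data for illustration only.
roster = [
    {"assignments_completed": 9, "failed_test": True, "retook_test": True},
    {"assignments_completed": 7, "failed_test": True, "retook_test": False},
    {"assignments_completed": 10, "failed_test": False, "retook_test": False},
]
summary = practical_measures(roster)
```

Tracked over time, even counts this simple can signal whether a change in practice is shifting how students respond to setbacks.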
It is clear that students’ personal qualities have an impact on their motivation, engagement, and therefore, educational outcomes. With effective practices and routines, these attributes can be shifted. But before we rush to use existing measures of student personal qualities for a myriad of aims, we must consider the intended purpose of each measure and whether its use is valid for that purpose. When the intent of measurement is to inform the improvement of practice, practical measures can enable educators to quickly see how students are responding to changes in practice. Measures like these support ongoing improvement efforts within classrooms and across networks.
The Mindset Scholars Network summarizes Duckworth and Yeager’s piece in this research brief.