Assessment as a Tool for Learning
by Jill Hearne
Throughout the United States, principals and district administrators engaged in meaningful school reform are working with their communities to share assessment information to guide decision-making about curriculum and instruction. The result is a shift from using assessment as a negative force in schools to using it as a positive force that builds a climate of reflection about what is going on in classrooms.
When I was a principal, we had a social skills program in which staff would give "coupons" to students seen "doing things right" (i.e., being helpful, good citizens). An ideal school would treat assessment in the same way. Students, staff, and principals should be rewarded for using assessment as a tool for learning rather than simply being rewarded for right answers.
The Changing Scope of Assessment
The shift in consciousness from assessment data as organizational hammer to its use as a tool in strategic planning is slow but critical if we in schools are to truly develop learning organizations. Recently a group of highly educated, mainly Ph.D. parents assembled to critique a new standards-based report card. Teachers had spent months laying out developmental descriptions of reading, math, and language skills with carefully worded and ordered phrases such as: "recalls some story details," "recalls major story events," "recalls relevant passage details," "summarizes passages concisely," "makes inferences and draws conclusions." Each description defined a level of skill students could be expected to attain in a particular age band, such as ages 5-7, 7-9, and so on.
After studying this new report card form at some length, one of the parents raised his hand and said, "Oh! So this is what you do in school?" This innocent and honest question revealed for me the essential error those of us in schools have made for all these years. Our error has been the assumption that what we did as instructors was clearly evident and known to all participants: students, parents, and teachers.
But in fact we have not been clear. We have not made it clear to students what is to be learned; we have not made it clear to parents how well students are to perform; and we have not agreed as educational communities on what learning or knowledge is of most worth. Lacking consensus on knowledge, skills, and understandings, perhaps it has been a functional solution to remain vague about data on student learning (assessment information).
Just as students are no longer being educated to perform only rote tasks, so too must teachers be supported as they acquire adult learning skills as creators and users of assessment information, not passive deliverers of curriculum prepackaged by a distant textbook publishing company. The movement toward teachers as makers and users of assessment data reflects the shift from teacher as assembly-line worker to teacher as lifetime learner (Bullard and Taylor, 1994, p. 206).
Principals, teachers, students, and the community can come together around sound principles of assessment to create learning experiences that matter. Data on student outcomes, individual and collective, take center stage as all members of the school community discuss three critical questions regarding quality. Staff and parents ask themselves these questions about their own work, and they can teach students to ask the same questions about theirs:
- What am I doing?
- How well am I doing it? (in relation to established criteria)
- What do I need to do to improve? (Hearne, 1992)
In Student-Involved Classroom Assessment, Richard Stiggins (2001) engages in a particularly useful discussion of the match between assessment methods and assessment targets. He discusses the four main types of assessment methods: selected response (multiple choice, true/false, matching, and fill-in), essay, performance assessment, and personal communication.
For assessing knowledge mastery, selected response methods are parsimonious. They allow a quick, accurate, and inexpensive means of finding out what is known about a subject or area. Essay responses can also demonstrate knowledge, and they allow for indications of reasoning proficiency as well.
Performance assessments are too expensive and time consuming to be used at the fact-recall level of knowledge mastery, but they allow for observation of skills during performance and assess proficiency in carrying out the steps of developing a product. Personal communication has strength at every level, from knowledge through skills, product creation, and disposition toward learning, but is not equally efficient at each (Stiggins, 2001).
Sound assessment results only when there is a clear purpose for assessment, clear and appropriate targets, proper methods, an appropriate sample of the targets, and elimination of bias and distortion in measurement. Stiggins proposes that these five principles guide sound assessment practices.
- Is the purpose of the assessment clear?
- Is the target achievement clear and appropriate?
- What methods do the target and purpose suggest are appropriate?
- How can we sample performances appropriately, given target, purpose and method?
- What can go wrong, given target, purpose and method, and how can we prevent bias and distortion? (Stiggins, 2001, p. 15)
At the school level, understanding the match between method and student outcomes is critical. Also critical is an awareness of audience. Who needs to know what information and in what time frame? The needs of school board members are very different from the needs of parents or students.
As you examine the assessment menu in your school, remember to include parents and students in discussions of quality. Provide opportunities for each to truly understand what is being measured, what evidence is considered proficient or "good enough," and, most importantly, to see the link between assessment and its instructional implications.
Unless assessment results are used to make issues of quality part of everyday conversation in schools, they will not change instruction. This is where the assessment revolution is actually taking place—in the use of assessment data to drive decision-making. The difference is that "data" takes on a richer meaning when that "data" is actual student work instead of numbers representing a normative version of student work.
Certainly, normative data has a place, and there are clear advantages to using it for program planning as well as for building and district evaluation. Consistency over time, the ability to look at trend data, and comparability between school systems at the regional, state, or international level are a few of the benefits.
Using Multiple Measures
Utilizing multiple measures of student learning that include actual student work builds a community of learners. No one test or assessment can give a clear picture of student achievement, which is why several states (Washington, Maryland, Maine) and districts (Seattle, Washington; Charlotte-Mecklenburg, North Carolina) have incorporated multiple measures, including classroom-based evidence, as part of their total accountability systems.
Student work, however, becomes data only when it is scored using commonly understood criteria and reflected upon for the purpose of improving instruction. Not only is scoring student work an important way for members of a school community to communicate and internalize common standards, it is also a powerful staff development tool for improving instruction.
A useful organizational structure for using student work as data is suggested here: a seven-step process schools can use to assess student learning.
- Decide what skill cluster to assess and select a broad assessment that captures more than one attribute of the domain.
- Construct or use existing scoring guides or rubrics for the task.
- Share the task and scoring criteria with staff.
- Administer the task to students in a similar time frame.
- Spend time discussing the scoring criteria and agreeing on anchor papers. (Anchor papers are a few papers from each score point that represent the quality expressed in the criteria.)
- Rate the students' papers. It is often useful to have papers scored by a teacher who is not the students' own instructor for the subject.
- Compare ratings, discuss and formulate implications for instructional delivery.
Data can then be reported in terms of the percentage of students meeting the criteria at the various score points.
In each school community there is an emphasis on multiple forms of data to answer questions of process, quality, and effectiveness. There is a continual search for evidence that is student-centered and captures the richness of each school experience. This search for authenticity makes each person a learner. There is a shift from what Le Mahieu (1996) terms "accounting" for school achievement to authentic accountability, which redefines the lines of responsibility from the blame game to interactive, reciprocal responsibility.
Learning from Sound Assessment
When assessment results are used as a barometer to measure the strength of learning and as a compass to show the direction of future action, all participants become learners. As the social and political context of schooling requires greater accountability, decision makers in schools must become more able to use information in all forms in the best interest of students.
The new view of leadership in learning organizations centers on subtler and more important tasks. In a learning organization leaders are designers, stewards, and teachers. They are responsible for building organizations where people continually expand their capabilities to understand complexity, clarify vision, and improve mental models; that is, they are responsible for learning (Senge, 1990). Principals as learners, teachers as learners, community members as learners are all part of this emerging paradigm of schools as dynamic rather than static organizations.
Principals as learners: Principals model learning and are themselves learners as they seek better ways to structure school time, allocate resources, and motivate staff. Principals are the key to creating and managing a culture of reflective teaching that expects and teaches to the concept of "what good work looks like around here." Principals can:
- Utilize multiple measures to create a building-based assessment system that links classrooms and students over time.
- Support teachers in their growth in assessment literacy through staff development.
- Provide parent education opportunities to help parents understand assessment.
- Work with local media to interpret various indices of school improvement in addition to normative measures.
- Support development of a building-wide portfolio system that showcases student work and moves from grade to grade.
- Make the goals and objectives of school clear, and give focused feedback to teachers on how their classroom efforts support these goals.
Teachers as learners: Teachers find themselves transforming their teaching as ongoing assessment reveals how students approach tasks, what helps them learn most effectively, and what strategies support their learning. The more teachers understand about what students know and how they think, the more capacity they have to reform their pedagogy, and the more opportunities they create for student success (Darling-Hammond and Ancess, 1996). Teachers can:
- Help students see what good work looks like by providing models of work that meets requirements, exceeds requirements, and does not meet requirements.
- Provide students with frequent feedback on specific ways to improve.
- Teach students self-reflective skills, which include the ability to see how their work meets the standard and what they need to change to improve (Hearne, 1992).
- Work with parents on how to monitor work at home in a positive manner.
- Be assessment literate in all they do (Stiggins, 2001), and share this literacy with parents.
- Design lessons with a clear view of the student outcomes expected (Wiggins and McTighe, 1998).
- Use grading practices that communicate about student achievement (Airasian, 1994).
Students as learners: Students can:
- Learn to value their own work.
- Use rubrics to assess their work.
- Reflect on how their work is like/different from the standard and state what they need to do to improve.
- Collect work over time and discuss it with an adult.
- Learn the relationship between effort and outcomes.
The community as learners: At the individual school level, one of the first questions you must ask yourselves as a school community is: "What are we assessing for? Are we measuring that which is most worthwhile to our school community?"
In "The Socrates Syndrome—Questions That Should Never be Asked" Campbell (1995) suggests that true education is "a lifetime of seamless experience, connecting individual episodes into an ever expanding web of meaning, insight, and understanding." But he acknowledges that asking the kinds of questions that make this true education possible is threatening. People in schools are more willing to invest in magic bullets from publishers than in the time to wrangle over questions such as:
- What is so important that everybody must know?
- Why does any test have a time limit?
- What is the purpose of education?
Community members can:
- Read a variety of books on educational reform expressing different points of view.
- Attend several school board meetings.
- Visit their neighborhood school.
- Learn about their state and district accountability system.
- Become familiar with the types of assessments used in their community.
In a standards based system, clear learning expectations make it easier to use assessment data as an accountability tool. Everyone can become a learner as the answers to the three critical questions of quality are collaboratively explored. What are we doing? How well are we doing it? What do we need to do to improve?
Thus, as Shakespeare might have said, "Assessment doth make learners of us all."
References and Bibliography
Airasian, Peter (1994) Classroom Assessment. New York: McGraw-Hill.
Bullard, P. and Taylor B.O. (1994) Keepers of the Dream. Chicago, IL: Excelsior.
Brookover, W. B. "Can We Make Schools More Effective for Minority Students?" The Journal of Negro Education 54(3): 257-268.
Calfee, R. (1991) "What Schools Can do to Improve Literacy Instruction" in Teaching Advanced Skills to At-Risk Students. San Francisco: Jossey-Bass.
Campbell, D. (1995) "The Socrates Syndrome—Questions that Should Never Be Asked." Phi Delta Kappan p. 467-469.
Cohen, S.A. (1987) "Instructional Alignment: Searching for a Magic Bullet." Educational Researcher 16 (November) 16-20.
Darling-Hammond, L. and J. Ancess. (1996) "Democracy and Access to Education" in Democracy Education and the Schools, Roger Soder (Ed.) San Francisco: Jossey Bass.
English, F. (1992) Deciding What to Teach and Test. Developing Aligning and Auditing the Curriculum. Newberry Park, CA: Corwin.
Glasser, W. (1990) The Quality School. New York: Harper & Row.
Hearne, J. (1992) "Portfolio Assessment: Tracking Implementation and Use in One Elementary School." In J. Bamberg (Ed.), Assessment: How Do We Know What They Know?
Le Mahieu, P. (1996) "From Authentic Assessment to Authentic Accountability." In Standards-Based Reform: A Road Map for Change. Education Commission of the States, Colorado.
O'Neil, J. (1993) "On the New Standards Project: A conversation with Lauren Resnick and Warren Simmons." Educational Leadership (50:5) February p. 27-21.
School Improvement: Focusing on Desired Learner Outcomes. 1992 National Study of School Evaluation, Falls Church, Virginia.
Senge, P. (1990) The Fifth Discipline. New York: Doubleday.
Stevens, F. (1993) "Opportunity to Learn: Issues of Equity for Poor and Minority Students." National Center for Education Statistics, Washington, D.C.
Stiggins, R. (2001) Student Involved Classroom Assessment. New Jersey: Prentice Hall.
Walker, M. (1996) "What Research Really Says." Principal 75:3 (March): 41-43.
Wiggins, G. and J. McTighe (1998) Understanding by Design. Alexandria, VA: Association for Supervision and Curriculum Development.
©2004 Jill Hearne, Ph.D.
About the Author: Jill Hearne, Ph.D. has worked in the area of school reform and assessment as a teacher, principal, and central office administrator. She is currently consulting nationally in the area of standards-based reform and is an adjunct professor at several universities. Dr. Hearne consults and presents for a variety of groups and organizations, including the Office of the Superintendent of Public Instruction for Washington State (OSPI) and various school districts around the country. She has served in many capacities in education: as Coordinator of Assessment and Director of Elementary Education for Seattle Public Schools, as Principal in the Seattle and Federal Way School Districts, as researcher at the University of Washington, as Equity Specialist at OSPI, and as Adjunct Professor for Western Washington University, the University of Washington, and the University of Alaska.
Dr. Hearne is currently active in many professional organizations and has published in the areas of equity and school reform. Her current involvement includes serving as a judge for the U.S. Department of Education's Blue Ribbon Awards, as well as active participation in the Washington chapter of Association of Supervision and Curriculum Development, the Washington Educational Research Association and the American Educational Research Association.
She can be reached via e-mail at email@example.com or firstname.lastname@example.org.
This article appears on the New Horizons for Learning Web site. Reprinted here by permission of the author.