To coherently assess their institutions



To coherently assess their institutions, leaders on college campuses need to understand and manage complex approaches, vocabularies, scales of operations, disciplinary needs, accreditation pressures, and accountability reports. They can easily get lost or bogged down in these topics, and the ever-increasing external demands for accountability and accreditation tend to push leaders toward simple input and output measures of student learning and institutional effectiveness. But focusing on these external audiences neglects more authentic documentation and assessment of student learning: what students learn, how they learn, what they can do with their learning, and how this recursively informs instructional design and the creation of academic support services.

Purposes of Assessment and Documentation

Often the attempt to classify what we are doing in order to demonstrate institutional effectiveness is undertaken at the expense of adopting a deeper, more authentic approach to documenting student learning. Adopting such narrowly conceived approaches is understandable: they are easier, and they suit environments in which resources are diminishing or scarce and faculty and staff are overburdened with competing demands on their time. Student learning assessment is not easy; it requires significant time, thought, and investment of resources if it is to be done correctly. It also requires a hook: careful consideration of how best to tap into and enhance the intrinsic motivation of faculty, staff, and students for pursuing this work. An institution that ignores these challenges and takes the externally driven route will ultimately neglect to develop intentional assessment strategies aimed at understanding how learning emerges. In taking the path of least resistance, it will have created less rigorous research designs and less useful instructional and program designs--inadvertently undermining its ability to document and present valid evidence of learning.

Purposes of This Chapter

This chapter will argue that assessment should affect student learning as well as measure it. Further, the chapter will elaborate on the notion that documenting the outcomes of tasks that facilitate and inform learning processes, as well as using coursework and continuous assessment processes within particular learning contexts (academic and non-academic programs), opens greater possibilities for assessment for learning than does course/module examination (Murphy, 2006). In A Culture of Evidence: Postsecondary Assessment and Learning Outcomes, the ETS team of Carol Dwyer, Catherine Millett, and David Payne describe this outward focus of assessment and accountability:

[A]s we outline what a new era in higher education accountability might look like, we will strive to keep in mind two points: the need for clarity and simplicity in the system; and the need for a common language that can be used consistently within the higher education community as well as with stakeholders outside this community. (2006, p. 3)

This chapter will argue the opposite: that most faculty perceive their own disciplines and the learning tasks associated with them as anything but simple or easy to measure in simple terms, and that most institutions of higher learning differ from each other in important ways and thus require a more nuanced language to describe the learning that occurs on their campuses, even if that language must be abstracted for a larger audience. Assessment that takes seriously the roles of faculty and students will in turn increase the odds that faculty and students take seriously their own roles in learning and, by extension, in the assessment enterprise. Such an approach closes the student learning loop by giving faculty and students complex but vital information about the learning task, allowing both groups to make adjustments for continued learning, and engaging them more fully. Simplicity and shared terminology may be laudable goals for comparing institutions for purposes of accountability, but they rarely go beyond assessing learning to enhance it.

The chapter will explain how activities aimed at increasing cognitive and metacognitive processing increase the likelihood of attaining specified competencies related to the disciplines, to general education, and to desired soft skills relevant to the workplace. This chapter will not leave out the student, but instead will examine processes and programs that have recursive effects: the student is not merely examined by the assessment process but learns from it. Sound student learning documentation requires capturing information about both the outcomes of learning and the process by which learning is pursued. Additionally, assessment processes that both measure and contribute to student learning must have a strong link to faculty development, augmented by other campus offices (e.g., institutional research, assessment, planning), particularly as they pertain to increasing faculty members' understanding of their role and building their capacity to undertake it. As a natural extension of these arguments, the chapter suggests that campus leaders orient institutional support more intentionally toward the departments where assessment takes place, investing in instructional and program design support centers and requisite technology rather than only in traditional accountability functions. In sum, we recommend moving beyond traditional assessment of learning to include assessment for learning as well.

Beyond Assessment of Learning

It is understandable that higher education administrators would desire a single measure, or a limited number of measures, that would present a clear and simple picture of the learning process at their institution. Indeed, many faculty share that desire, particularly if they feel that assessment is something imposed from the top rather than an integral part of classroom and disciplinary practice. The popularity of competency testing attests to the strength of that desire.

A single test--the MAPP (Measure of Academic Proficiency and Progress), the CAAP (Collegiate Assessment of Academic Proficiency), or the CLA (Collegiate Learning Assessment)--might be the key tool for administrators to communicate student learning to stakeholders, particularly at the state and national levels. Some faculty might welcome the simple solution of competency testing because they have seen assessment as of little value to their work. Yet the great majority of faculty at a variety of institutions want assessment that is more closely focused on what they do in the classroom or the curriculum, that examines the pathways of student learning more carefully, and that has some clear value to them in determining how to increase student learning in their courses and in their disciplinary fields (Schilling, 1998). Moreover, when student learning, or the lack of it, is demonstrated through a large competency test or other type of institution-wide or multi-institutional assessment, one important constituency, the student, is almost always left out of the subsequent conversation.

For example, students who take the competency tests rarely if ever learn how they performed on the test and what they might do to improve. In fact, motivating students to take competency tests and other institution-wide assessment efforts seriously when they have so little to gain themselves is a significant problem for such assessments. This is not to diminish the role of these tests as efficient and reasonable metrics for external audiences (e.g., policy makers, accreditation agencies, the general public) concerning broadly defined competencies at the institutional level. But it is important to recognize that these measures have significant limitations for understanding how learning emerges in the classroom, and that they cannot tell us much about which specific mechanisms affected the learners or enhanced a faculty member's or student's approach to the learning process.

Assessment for Learning

Assessment that affects student learning as well as measuring it is undertaken to improve teaching and learning, and it is thus often seen as limited in scope and situated within the confines of the classroom. However, the principles of assessment for learning (almost always formative), as contrasted with assessment of learning (usually summative), can form the basis of a culture of assessment on a campus and become a guiding model for larger-scale assessment. Figure 1 shows how formative assessment (assessment for learning) is part of a process that moves toward accountability. Summative assessment, like competency testing, is often seen as establishing useful comparisons between institutions, as in the Voluntary System of Accountability[1]. But if summative assessment is put into the context of formative assessment, in a "both/and" as opposed to an "either/or" structure, then each can enhance the other, providing multiple measures of learning (Stefanakis, 2002). Assessment for learning can, in many cases, be aggregated or scaled up to reveal summative patterns, as Figure 1 illustrates by demonstrating the continuum from learning to accountability.


There are many ways in which performance-based or standardized summative assessments can support learner-centered assessments that improve as well as measure learning. Educational innovations like portfolios, learning communities, internships, and capstone experiences share a basis of cognitive engagement and metacognitive analysis that encourages evaluation of and by the learner. They also give the institution more general insight into the learning process, which can then be expressed in more specifically crafted accountability measures. Understanding concepts from the learning sciences, like cognitive engagement, is key both to building assessment that is learner centered and to creating learning environments that not only enhance learning but also support faculty in developing assessment that contributes to institutionally specific accountability structures.

Cognitive Engagement

How do we get students to exert the considerable mental energy required to persist in a complex learning environment where deeper levels of learning occur through synthesizing, integrating, and attaining strategic knowledge? Leaders in higher education commonly wonder aloud whether students will invest the requisite time and energy to achieve the desired levels of learning just because the institution invests the financial resources to create an authentic learning environment or provide a "take away" such as a showcase portfolio or improved resumé. This concern is at least one reason senior leaders often hesitate to invest in these endeavors beyond the "pilot phase" while aspiring to full implementation.

A burgeoning literature in educational research broadly, and higher education in particular, highlights how specific assessment approaches can have a significant impact on student learning as well as a significant role in motivating stakeholders to participate (Black et al., 2003; Black & Wiliam, 1998; Sutton, 1995; Torrance & Pryor, 1998, as cited in Murphy, 2006). Murphy (2006, p. 43) noted the significant body of literature growing to support this approach, especially in elementary and secondary school classroom research, but asserted,

In higher education we are still at an earlier stage of understanding how effectively this approach to assessment can be developed within different higher education institutions (although Boud, 1995, Knight, 1995, and Hinett, 1997 have all made useful contributions to thinking about how such a development could progress).

To help frame our approach to documenting student learning in postsecondary contexts, we turn to the learning sciences literature, where much scholarly work has been undertaken to address issues related to assessment for learning. Specifically, we will focus on cognitive engagement, a concept emanating from research on motivation that is frequently cited as a critical component of successfully carrying out and documenting authentic student learning. Essentially, the view is that motivation leads to achievement in learning environments by increasing the quality of cognitive engagement (Blumenfeld et al., 2006). Content understanding, skill development, and higher-order thinking are all influenced by the degree to which students are committed to their learning process, and they consequently help us assess for student learning.

Learning environments seeking to facilitate cognitive engagement among participants attempt to encapsulate more fully the relationship between student effort (buy-in) and learning. The cognitive, metacognitive, and volitional strategies employed are intended to increase the likelihood that "learners will think deeply about the content and construct an understanding that entails integration and application of the key ideas for the discipline" (Blumenfeld et al., 2006, p. 475). These strategies are promoted to deepen engagement and learning in ways that influence student motivation, enhance intrinsic values related to learning, foster situational interest, and ultimately increase participation in the learning enterprise, thereby increasing measurable learning and enhancing learning outcomes.

Notions or levels of cognitive engagement are understood to emanate from superficial and deep approaches to learning. Superficial cognitive engagement involves tasks that rely on rehearsal and other memory-focused approaches to learning, often those measured through competency testing. Deeper engagement facilitates students' reliance on and refinement of metacognitive strategies, including intentional reflective mechanisms that help students establish goals, plan, monitor, and evaluate progress as they iteratively adjust their approach to a learning task (Blumenfeld et al., 2006). Intentional reflection is an important element in assessment for learning, particularly in learning environments like learning communities (reflection on interdisciplinarity), portfolios (reflection on performance), and senior seminars (reflection on disciplinary understanding). The volitional strategies students employ include self-regulating attention, affective awareness, and effort to overcome various distractions that may be internal (e.g., lack of self-efficacy, insecurity, insufficient self-confidence) or external (e.g., financial constraints, social pressure).

Principles of Cognitive Engagement

Specific features characteristic of learning environments may contribute to motivation and cognitive engagement if considered in the design of instructional approaches and assessment strategies. Specifically, Blumenfeld et al. (2006) put forward key features of learning environments that promote the construction of strategies for assessment and documentation of student learning. The features or characteristics of such an environment are not mutually exclusive but are facets of its expression. They include authenticity, inquiry, collaboration, and technology.

Authenticity

The concept of authenticity has come into the higher education assessment field through Torrance's (1995) work on formative assessment in teaching and learning. Authenticity in assessment refers to matching the assessment approach to the educational goals of the particular learning context, with a focus on just what is pragmatic or feasible to satisfy the requirement to assess (Murphy, 2006). Authentic assessment is usually achieved by providing reasons for understanding and opportunities to problem-solve that are drawn from physical or social examples in the real world (everyday life experiences), as well as discipline- or content-related examples that provide opportunities for application within the design of the instructional format, not as an add-on (Newmann, Marks, & Gamoran, 1996, as cited in Blumenfeld et al., 2006). Moreover, the authentic assessment process is typically viewed as integrative in that the assessments are chosen because they can be built into the structure of a course or instructional format and, as such, are likely to improve the odds that the learning objectives will be met (Murphy, 2006).

By extension, authenticity within a learning context becomes important for motivating students because it gives them multiple opportunities to work with concepts and create artifacts that enable content and skill acquisition emanating from a relevant question. Students become motivated, and subsequently engaged cognitively, via the connection of their values to an outcome or set of outcomes with real-world significance. This significance is "situated in questions, problems, or anchoring events that encompass important subject matter concepts so students learn ideas, processes, and skills" (Blumenfeld et al., 2006, p. 479). Clearly, internships, undergraduate research, field research, peer teaching, and other authentic approaches to learning provide a rich field for meaningful assessment, because students are not only invested in the learning process but also motivated to measure their own learning as part of their commitment to the real-world nature of authentic learning activities.

Inquiry

Approaches based on inquiry provide opportunities for autonomous exploration and application, as well as cultivation of or challenge to intrinsic values held by the individual. Blumenfeld et al. (2006) highlight that notions of autonomy and value can be enhanced via the type of artifacts being pursued as a function of cognitive and metacognitive tasks (i.e., self-regulating, reflecting, synthesizing, planning, executing decisions, evaluating information and data, etc.), as well as the roles the student pursues while undertaking the inquiry (e.g., scientist, philosopher, mathematician, historian). Sharing findings with instructors and others inside and outside the classroom, an aspect of most inquiry-based approaches, also increases a sense of autonomy and value. Ultimately, these higher-level approaches to learning will tend to increase students' sense of the value of the work being undertaken and enhance their commitment to the learning enterprise, thereby increasing the odds that measurable learning is taking place at a deeper level.

It is important to note that inquiry-based methods must be constructed intentionally and staggered along a developmental continuum from simple to very complex. Moreover, the expectations related to task and performance should adequately match the developmental level for the student. This can be accomplished if care is taken to articulate the requisite skills and desired outcomes related to inquiry and their anticipated expressions for each stage of learning expected in the learning context. This articulation of skills and outcomes simultaneously builds platforms for both learning and assessment. For example, first-year students may not be able to adequately frame questions or utilize inquiry-based methods at a level that would allow them to probe and understand the underlying reasons for poverty either in general, in the United States, or in a given city with the same depth as a junior might be able to explore the same socio-political issue. Framing an inquiry-based learning task by taking into account issues of complexity as well as notions of autonomy and value may, quite naturally, also establish the framework for assessing learning. And understanding the relationship between framing the task and evaluating the task is of utmost importance in developing meaningful assessment practices. After all, what students learn depends on how they are taught, not just on what they are taught.

Collaboration

Approaches based on collaboration provide opportunities for students to engage with peers, a further motivation to become cognitively engaged. "Collaborative learning involves individuals as group members, but also involves phenomena like the negotiation and sharing of meanings--including the construction and maintenance of shared conceptions of tasks--that are accomplished interactively in group processes" (Stahl et al., 2006). Collaborative approaches are especially useful for assessment because collaboration lends itself to computer-based learning approaches by which an artifact or the process of discovery can be mapped and feedback can be injected into the learning process for students: e.g., portfolios, distance learning and distributed computing environments, telementoring, writing and literacy support efforts, and simulation (Blumenfeld et al., 2006; Stahl et al., 2006). Learning communities, too, have become an approach to education that builds on notions of collaboration, as noted in Lave and Wenger's work (1991) on situated learning and communities of practice (as cited in Collins, 2006). In a learning community the goal is to advance the collective's knowledge base, which in turn supports individual knowledge growth and reinforces motivation for the undertaking (Scardamalia & Bereiter, 1994, as cited in Collins, 2006). The role of collaboration in learning and assessment increases personal notions of responsibility and functions as a "hook" that comes about by being associated with others, building an "intersubjective attitude" or "joint commitment to building understanding," and making unique contributions to work (Palincsar, 1998, as cited in Blumenfeld et al., 2006, p. 483).

Technology

Approaches based on technology are being adopted in institutions across the higher education spectrum. Technologically supported teaching and learning systems enable these institutions to increase implementation efficiency, orient student assessment toward more learner-centered approaches, increase reflective practice, provide motivational incentives for students to participate, and address lifelong learning needs (Bates, 2003; Chen et al., 2001; Cotterill et al., 2006; Kimball, 1998; Klenowski et al., 2006; Laurillard, 1993; Lopez-Fernandez & Rodriguez, 2009; Preston, 2005; Ross et al., 2006; Scardamalia & Bereiter, 1996; Schank, 1997). Ross et al. (2006) note the importance of feedback, particularly its role in formative assessment, and contend that technological systems typically enable robust approaches to feedback that motivate students to participate in assessment activities. Lessons learned from many of these authors highlight advantages of technologically assisted approaches: (1) supporting self-diagnosis, reflection, and tutoring support that is synchronous and asynchronous; (2) addressing the procedural difficulties of storage and access related to artifacts; (3) providing less cumbersome feedback tools that allow for iterative feedback processes among stakeholders (students, faculty, and staff); (4) enabling the use of prompts to assist in the scaffolding required for engaging at various developmental levels; (5) allowing students more control over their own learning pace; (6) assisting a facilitator/instructor in contextualizing assignments and assessing progress toward learning goals relative to students' interests, values, and preferred learning approaches; and (7) enabling assessment techniques that can account for cognition in problem-solving processes. In summary, specific assessment approaches can have a significant impact on student learning as well as a significant role in motivating stakeholders to participate.
Some key considerations should be kept in view when developing an institutional assessment-for-learning approach (see Table 1 for details).


Principles Applied: Reflective Portfolios

In considering the issue of increased attention to cognitive engagement in assessing learning outcomes, leaders must seek assessment designs that employ intentional reflective mechanisms in order to ensure that assessment can both facilitate and measure learning. As Table 1 notes, assessment that enhances learning includes the desired outcomes of increasing specific content knowledge, building transferable strategic knowledge, promoting motivation, and strengthening self-efficacy. Often assessment that fosters learning will involve the learner in recursive activities facilitated through reflection on practice. The rationale behind this approach is that it enhances content learning and transferable skills while increasing a sense of control (self-efficacy) and strengthening motivation. "The essential way people get better at doing things is by thinking about what they are going to do beforehand, by trying to do what they have planned, and by reflecting back on how well what they did came out" (Collins, 2006, p. 58). Project-based learning, problem-based learning, inquiry-based learning, and collaborative and constructivist learning approaches--indeed, most approaches to learning that are based on cognitive engagement--incorporate some aspect of reflection. In fact, assessment itself might even be defined as "meta-reflection" at a course, program, or institutional level.

This section will concentrate on one of the most accessible and successful applications of the principles of cognitive engagement (authenticity, inquiry, collaboration, and technology), particularly the element of reflection, to assessment activities: the portfolio. Portfolios provide an efficient way for students to share their findings, and they reinforce important elements of inquiry. Portfolios are containers for products, for artifacts, for writing, and for visual production, among many possible elements, thus incorporating authenticity, a significant element of cognitive engagement (Annis & Jones, 1995; Banta, 2003; Cambridge, 2001; Fink, 2003; Gordon, 1994; Jafari & Kaufman, 2006; Perry, 1997; Zubizarreta, 2009).

Benefits and Capabilities of Portfolios

Even the simplest portfolio frameworks require that the students who create them engage in intentional selection and arrangement, reinforcing aspects of autonomy. Portfolios require not only an ordering process but a simultaneous evaluative process. As Zubizarreta points out about the construction of portfolios: "The process of such reflection tied to evidence promotes a sophisticated, mature learning experience that closes the assessment loop from assertion to demonstration to analysis to evaluation to goals" (p. 42). Portfolios may be shared--created or owned by more than one student--allowing students to collaborate in creative ways. Portfolios provide flexibility of scale, with the capability of delineating work in a single class, a particular major, or an entire curriculum. Portfolios can demonstrate the link between general education and the student's major. They can demonstrate the acquisition of skills and their growth over time. Portfolios can be flexible, and they can be revised based on feedback to demonstrate mastery of concepts. Since an increasing amount of portfolio assessment occurs in a digital environment, electronic portfolios (popularly abbreviated as e-portfolios) can incorporate the building of technology skills as well as allowing for assessment of the learner's technological capabilities, while enabling collaboration (Banks, 2004; Cambridge, 2001; Jafari & Kaufman, 2006; Stefani, Mason, & Pegler, 2007). In all, portfolios represent the crossroads of assessment and cognitive engagement, employing many aspects of the latter in service to the former.

In his 1998 article "Teacher Portfolios: A Theoretical Activity," Lee Shulman pointed out some of the advantages of using portfolios. His points support the above principles of cognitive engagement, as follows:

  1. Complexity and autonomy: "portfolios permit the tracking and documentation of longer episodes of teaching and learning" (Shulman, 1998, p. 35).
  2. Technology and feedback: "portfolios encourage the reconnection between process and product" (p. 36).
  3. Collaboration: "portfolios institutionalize norms of collaboration, reflection, and discussion" (p. 36).
  4. Authenticity: "a portfolio introduces structure to . . . experience" (p. 36).
  5. Autonomy: "and really most important, the portfolio shifts the agency from an observer back to the [student]" (p. 36).

Electronic Portfolios

In the comments quoted above, Shulman was primarily describing the use of portfolios in teacher education, where they have indeed become the norm. While the National Council for Accreditation of Teacher Education (NCATE) has required e-portfolios in teacher education for some time, more and more institutions are turning to e-portfolios to track a variety of learning processes in a variety of disciplines, including assessment of learning outcomes. A recent survey of AAC&U members, conducted by Hart Research Associates, sums up the increase in e-portfolio use: 57% of the AAC&U member institutions that responded were using electronic portfolios, 29% were exploring the feasibility of using them, and 14% did not use them and had no current plans to develop them (Rhodes, 2009). Of those using electronic portfolios, 42% reported using them for assessment as well as other purposes (2009, p. 11). Additionally, Clark and Eynon note that "the ePortfolio Consortium lists 894 institutional members, nearly 60% of them American colleges and universities . . . across all higher education sectors . . . [evidence that] the use of e-portfolios has tripled since 2003" (2009, p. 18).

Obviously, e-portfolios are gaining in popularity; Clark and Eynon summarized why. These authors described the ease with which e-portfolios may be used for assessment, but gave other reasons that emphasize the ways in which portfolios use principles of cognitive engagement--particularly authenticity, collaboration and technology--to impact the learner and the learning process, not merely to provide evidence for accountability. First, they cited the switch from teacher-centered to learner-centered pedagogies: "Defining students as authors who study their own learning transforms the traditional power structure, asking faculty to work alongside students as co-learners" (Clark & Eynon, 2009, p. 18). They also noted the growth in digital communication technologies and the ease with which millennial students employ these Web 2.0 technologies: "In an age of multimedia self-authoring, student interest in creating rich digital self-portraits has grown exponentially. . . [A] digital portfolio for student learning speaks the language of today's student body" (p. 18). Finally they cited the "increasing fluidity in employment and education" (p. 18). With increasing numbers of students transferring, both from two- to four-year institutions and among four-year institutions, as well as taking courses at multiple institutions, an e-portfolio may become "an educational passport" which students could also take into the employment arena, demonstrating links between their education and their professional aspirations and experiences (p. 19). 
In an ideal world, lifelong learning might lead to lifelong e-portfolio development, enriching learners' self-understanding and self-efficacy while also providing ongoing evidence, often hard for institutions to come by, of how student learning has affected professional growth: "The vision of an e-portfolio as a lifelong learning tool that is updated throughout life has considerable institutional implications" (Stefani, Mason, & Pegler, 2007, p. 12).

When students are asked to describe their experience with e-portfolios and process the value of the enterprise, many heartily echo the experts:

I feel that the process has enhanced my understanding for the overall higher education experience . . . . I have always felt confused and irritated by the lack of connection between my general education requirements and my core department requirements. I think that the e-portfolio is a great way to link the two types of classes. . . . I am a very visual person and the template of the e-portfolio was easy to follow and it truly helped to achieve the goal of linking my personal work to my personal goal. I also believe that this process was very empowering for me. It is easy to get discouraged with work that you complete during classes because you complete a paper, receive a grade, and then that paper is simply stored in a folder on your computer. This process helped me to look back on the work that I had completed in prior classes and place more value on the work that I had created. I was able to value the work because each assignment that I complete I have taken one step closer to completing a personal or professional goal of my own. (Miller & Morgaine, 2009, p. 8)


E-portfolios, along with more traditional assessment material like timed essays, artifacts, and performances, constitute only the "content" of assessment. The "form" of assessment in these cases is generally provided by rubrics. In some ways, rubrics work much like "scaffolding": they describe both the characteristics of a performance and the levels at which it can be achieved or evaluated. They increase cognitive engagement by matching expectations for a task or performance with a description of the demonstrated developmental level. Rubrics tell both the student and the assessor what performance should look like at each stage or level.

The creation and application of rubrics is often the task of the single professor, who may well use them not only to judge student work but also to guide that work, demonstrating expectation as well as evaluating performance. Rubrics may also be the joint effort of faculty participating in a specific discipline or teaching a specific skill, like writing. Rubrics can provide a structure for assessing general education learning outcomes or institutional goals. The larger the group creating the rubric, ranging from the single professor to the institutional level, the more the rubric reflects consensus about expectations for student learning, but also the more diffuse and general the rubric becomes. Rubrics can be agreed upon by disciplinary bodies and by accrediting agencies as well as by educational organizations seeking to define "fundamental, commonly held expectations for student learning, regardless of type of institution, disciplinary background, part of country, or public or private college status" (Rhodes, 2009, p. 5).
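
As a purely illustrative sketch, a rubric can be thought of as a small data structure mapping each criterion to its level descriptors, so that the same object both guides student work and structures evaluation. The criteria, levels, and descriptors below are invented for illustration and are not drawn from any published rubric:

```python
# Hypothetical two-criterion writing rubric; every name and descriptor
# here is invented for illustration only.
RUBRIC = {
    "thesis": {1: "absent", 2: "implied", 3: "stated", 4: "stated and arguable"},
    "evidence": {1: "none", 2: "anecdotal", 3: "relevant", 4: "relevant and analyzed"},
}

def describe(scores):
    """Translate numeric level scores into the descriptors both the
    student and the assessor see, matching expectation to performance."""
    return {criterion: RUBRIC[criterion][level]
            for criterion, level in scores.items()}

print(describe({"thesis": 3, "evidence": 4}))
# → {'thesis': 'stated', 'evidence': 'relevant and analyzed'}
```

Because the descriptors are explicit, one structure serves the professor demonstrating expectations before the work is done and the assessor judging performance afterward.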

Portfolio-based Assessment Projects

To illustrate the nature and potentials of portfolio assessment, this section will examine in detail two portfolio-based assessment efforts, one national in scope (Association of American Colleges and Universities' [AAC&U] VALUE initiative) and the other institutionally-based (University of South Florida's Cognitive Level and Quality of Writing Assessment [CLAQWA] program).


The Association of American Colleges and Universities (AAC&U) is working with institutions that have a history of successful use of student e-portfolios to develop "meta-rubrics," or shared expectations of student learning, that institutions can apply across 14 of the AAC&U's designated essential learning outcomes. This project, the Valid Assessment of Learning in Undergraduate Education (VALUE), is developing meta-rubrics in the areas detailed in Table 2.


In developing the VALUE project, the AAC&U is directly challenging the arena of competency testing (MAPP, CLA, CAAP) by creating a scalable assessment process that does not depend on sampling small numbers of students outside their required courses, does not depend on the good will of unmotivated students, and does not neglect the learning feedback loop to students and faculty. Instead, the VALUE project is based on locally generated products of student achievement across a wide variety of types, including graphical, oral, digital, and video, since it draws on e-portfolio collections of student work. The project has employed rubric development teams in each of these areas and tested the resulting rubrics on a range of individual campuses. Researchers are currently creating national panels to apply, review, and test the usefulness of the rubrics. The three national panels will consist of faculty who are familiar with rubrics and e-portfolios but were not involved in developing them, faculty who are unfamiliar with rubrics and e-portfolios, and a panel of employers, policy makers, parents, and community leaders (Rhodes, 2009).

The VALUE project argues that faculty, academic professionals, and public stakeholders can develop and apply national standards of student learning, and that those standards should both arise from and be applied to locally produced authentic student learning products. These products would be easily housed in an e-portfolio system, but could be compiled in other formats, since campuses that did not gather student work electronically also examined selections of student products and participated in developing the rubrics. The project garnered much attention from the assessment community during its rollout by AAC&U in late 2009 and 2010. However, it is important to remember that achieving a scale sufficient for accountability efforts and for useful comparison of institutions, reflecting consensus about expectations for student learning, can lead to diffuse and general rubrics. Such rubrics can be difficult to link back to the individual classroom, having little impact on the individual student and potentially creating problems for scorers that result in low inter-rater reliability. Participants in the VALUE project are aware that the promise of scale brings with it the problems of scale, and have suggested attempting to make the project useful in more specific learning environments:

The VALUE rubrics, as written, must then be translated by individual campuses into the language, context, and mission of their institution. Programs and majors will have to translate the rubrics into the conceptual and academic constructs of their particular area or discipline. Individual faculty will have to translate the rubrics into the meaning of their assignments and course materials in order for the rubrics to be used effectively to assess their student assignments. (Rhodes, 2009, p. 7)
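
The inter-rater reliability concern noted above can be made concrete. One common index for two raters scoring the same artifacts is Cohen's kappa, which discounts the agreement expected by chance; when a rubric is too diffuse, observed agreement falls toward that chance level and kappa drops. The sketch below is illustrative only, with invented scores rather than data from the VALUE project:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters independently assign the same level.
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two hypothetical raters scoring eight portfolio artifacts on a 1-4 rubric.
a = [4, 3, 3, 2, 4, 1, 2, 3]
b = [4, 3, 2, 2, 4, 2, 2, 3]
print(round(cohens_kappa(a, b), 2))  # → 0.65
```

A kappa near 1 indicates that raters share an interpretation of the rubric's levels; values much below roughly 0.6 to 0.7 are the quantitative signature of the "diffuse and general" rubric problem described above.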

University of South Florida CLAQWA Project

While it is a promising beginning, the VALUE project will prove useful in producing assessment that affects student learning, as well as measuring it, only if individual faculty and students are willing to make and value these modifications. It is further enlightening to examine programs that have managed to scale up at least to the institutional level while still having learning impact in the classroom. One such program is the Cognitive Level and Quality of Writing Assessment (CLAQWA), developed by Teresa Flateby and her associates at the University of South Florida.

For many years the University of South Florida had used timed-writing assessments scored with the writing portion of the CLAST (College Level Academic Skills Test), which measured reading, writing, and mathematics skills. The essays produced by the timed-writing assessment were holistically scored by external evaluators. Assessment leaders on campus were discouraged both by the weaknesses revealed about students' writing skills and the inability of the assessment method to identify forms of remediation. As Flateby wrote,

Although determining the achievement level of our students is important, assessment's major contribution to learning is providing the information needed to enhance student learning outcomes. In addition to having little formative data, our assessment process was further flawed by its lack of inclusion of our faculty. (2009, p. 216).

Around 1999 USF began to assess writing with a campus-developed instrument that had originally been created to measure learning in the interdisciplinary portion of the University of South Florida's General Education Learning Community. The instrument, the Cognitive Level and Quality of Writing Assessment (CLAQWA), incorporated both skills and cognitive level evaluation, based on the work of Benjamin Bloom. Flateby explained that it "encourages faculty users to consciously consider the cognitive level expected for an assignment, enables self and peer review, and facilitates a multidisciplinary approach to writing assignments" (Flateby, 2009, p. 214). The CLAQWA uses 17 writing elements organized into the following categories: assignment parameters, structural integrity, reasoning and focus, language, and grammar and mechanics.
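
To illustrate how element-level scores might roll up into the category structure just described, here is a minimal sketch. The element names and the 1-5 scale are invented placeholders, not the instrument's actual 17 elements:

```python
from statistics import mean

# Invented elements grouped under three CLAQWA-style categories named
# above; the real instrument defines 17 specific elements.
CATEGORIES = {
    "structural integrity": ["organization", "paragraph unity"],
    "reasoning and focus": ["cognitive level", "support"],
    "grammar and mechanics": ["sentence construction", "punctuation"],
}

def category_report(element_scores):
    """Average element scores within each category for formative feedback."""
    return {category: round(mean(element_scores[e] for e in elements), 2)
            for category, elements in CATEGORIES.items()}

scores = {"organization": 4, "paragraph unity": 3, "cognitive level": 5,
          "support": 4, "sentence construction": 3, "punctuation": 4}
print(category_report(scores))
# → {'structural integrity': 3.5, 'reasoning and focus': 4.5, 'grammar and mechanics': 3.5}
```

The point of this kind of roll-up is formative: category summaries tell a student or instructor where revision effort belongs, rather than reducing the work to a single holistic score.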

The University of South Florida has trained undergraduate and graduate assistants as scorers who work with faculty scorers to develop consistency in scoring. Most significantly, the CLAQWA helps faculty not only assess assignments but create them; it has also been widely used in peer review activities in which students read and make suggestions for revision of other students' work. Faculty "found that their students' writing and thinking skills improved . . . with the new CLAQWA adaptation [for peer review]" (Flateby, 2009, p. 65). Indeed, the remarkable thing about the CLAQWA is the way it has significantly impacted instruction while also providing assessment data. In fact, Flateby claimed that "many faculty members [who use the CLAQWA rubric for peer review] report improvements in their own writing" (2009, p. 221).

For the Association of American Colleges and Universities' VALUE program to have a similar impact, it must pay considerable attention, as the University of South Florida did, to the local and the disciplinary uses of its rubrics. Faculty will value and support assessment projects that they perceive to have a real and demonstrable relationship to student learning. Students will value and support assessment that allows them to reflect on their practice and gives them feedback about their performance. Thus learning outcomes assessment does not necessarily thrive in an environment in which the highest priorities are clarity and simplicity in the system and a common language that can be used consistently within the higher education community.

Rather than invest in bureaucratic structures of assessment and large scale competency testing that tend to oversimplify and homogenize the task, institutions might see more gains in assessment by investing in faculty development and increasing faculty understanding and pedagogical use of cognitive engagement practices. Both constituencies, faculty and students, might profit from a greater understanding of how a focus on authenticity, inquiry, collaboration and technology can increase learning. In Proclaiming and Sustaining Excellence: Assessment as a Faculty Role, Karen Maitland Schilling and Karl L. Schilling (1998) have listed conditions by which faculty, and by extension students, will identify assessment as worthy of meaningful engagement:

  • Assessment must be grounded in significant questions that faculty [and students] find interesting.
  • Assessment must rely on evidence and forms of judgment that disciplinary specialists find credible [and students in that discipline find applicable].
  • Assessment must be identified as a stimulus to reflective practice.
  • Assessment must accommodate the nature of faculty [and student] life in the academy. (p. 85)

When conditions like these are met, both faculty and students will value the process of assessment and gain from it.

Organizational Support

As noted in the opening section, we argue that assessment processes that both measure student learning and contribute to student learning have strong links to faculty development, augmented by other campus offices (e.g., institutional research, assessment, planning, etc.). Of particular interest is the potential for increasing faculty members' understanding of their role in the "learning sciences," which encompass design-based instruction, research, and assessment. Our argument places the locus of assessment within the actual learning context. To this end, we find it critical that higher education organizations "re-think" their structures for facilitating engagement in processes that improve student learning while measuring learning outcomes.

Faculty, program developers, and research staff are keys to constructing a meaningful approach to the design and enhancement of learning environments consistent with these assessment goals. This participation is likely to occur at sufficient levels only when an institution provides the organizational support to advance instructional and program research and design work pertinent to the particular learning context of a content or disciplinary area. Specifically, evidentiary approaches aimed at fulfilling external accountability requirements are necessary but not sufficient for sustained and meaningful assessment work. Rather, institutional-level support must be given to assessment work carried out in departments which generate evidence-based claims about learning which address performance and accountability requirements. Such evidence should simultaneously address the contemporary theoretical issues of a particular field and the professional requirements found in a given institution (e.g., tenure and promotion). In essence, design-based assessment frameworks at their core should spur the creation of "theoretically-inspired innovation" that translates into enhanced practices aimed at addressing outcomes within a particular learning environment (Barab, 2008, p. 155). This engages faculty and research staff where they live and recognizes the unique and varied contexts of learning in a higher education organization.

An institution that supports such a focus will afford faculty and research staff opportunities to carefully acquire and utilize the appropriate instructional designs and tools for increasing strategic and content knowledge among students. Further, those engaged in assessment must be able to disentangle the particular conditions under which an interaction or occurrence happens within a particular learning context, recognize the complexity of these iteratively changing environments, and collect evidence pertaining to these variations as it may inform future assessment designs and curriculum enhancements. This work then becomes scalable for the institution's accountability and effectiveness needs and will have a higher likelihood of being efficiently implemented and saving time, energy, and resources. Thus it may impact curricular practices in similar contexts, enabling faculty and research staff to address general knowledge development in their fields and fulfilling the external and internal accountability and performance requirements of the current policy environment.

In many respects, the organizational and fiscal support that helps participants understand the complexity within a particular learning context, fully appreciate the nuances of discipline-based curricula, and acquire tools for sound methodological and instructional approaches might come from a variety of campus offices (e.g., assessment and accountability offices, institutional research offices, planning offices, school/college or department assessment functions, teaching and/or instructional learning centers, instructional technology offices, measurement centers). Many of these offices have important pieces of their operations carved out to support key activities that serve learning design and assessment. But the aggregate effect of this multi-layered and distributed approach is imbalance in staffing, budgets, and technical resources, as well as inconsistent alignment of the work of these offices and their consumers with institutional mission and goals.

Despite these obstacles, many faculty and staff manage to become deeply engaged in particular assessment initiatives and validate the findings they generate from this work. However, given organizational and funding realities, the support given to faculty and researchers engaged in design of instruction and assessment is rarely adequate to allow real, systemic connections to discipline- or content-specific requirements that have a direct influence on understanding learning in a given instructional context. As such, the actual professional or disciplinary requirements that call on these individuals to advance theory and knowledge in the field ultimately get neglected or diminished. The faculty member or researcher has little incentive to participate, and the institution loses out on a richness of work that would likely have more significant impact if support were better organized and more broadly provided.

In conclusion, campus leaders are challenged to ensure that organizational structures either leverage current funding and support or receive additional funding and support for cultivating work within a specified learning context. Otherwise, the credibility of the enterprise, and of the campus-wide staff involved centrally in assessment and accountability endeavors, will be compromised. This diminishes buy-in across and within programs and units and breeds cynicism, a sense among participants that they have heard it all before. Additionally, and possibly more important, the nuanced, integrative, and adaptive requirements necessary for the construction and assessment of specific learning environments might be better supported, enhanced, and managed through synergies developed within an integrated approach that intentionally and consistently solicits department-specific expertise. To this end, there is not sufficient evidence in the literature or in practice that most campuses adequately support faculty or research staff within departments or programs with the funding and flexibility required to iteratively design, implement, and assess the effects of a particular learning context.

We contend that a campus that addresses the requirements of assessment for learning in a nuanced and thoughtful manner will simultaneously recognize the accountability and performance demands placed upon it and efficiently provide the teaching, research, and methodological support necessary for faculty and researchers to fully engage in authentic assessment. This approach will yield "theoretically-inspired innovation" in a particular field and inform implementation, allowing the institution to say something contextually meaningful about student learning while increasing the odds that the learning will be significant.


The VSA, developed by university leaders, is sponsored by two higher education associations: the Association of Public and Land-grant Universities (APLU) and the American Association of State Colleges and Universities (AASCU). Initial funding was provided by the Lumina Foundation. For more information, please visit the following url:


  • Annis, L., & Jones, C. (1995). Student portfolios: Their objectives, development, and use. In P. Seldin & Associates, Improving college teaching. Bolton, MA: Anker.
  • Banks, B. (2004). E-portfolios: Their uses and benefits. Retrieved December 7, 2009, from .
  • Banta, T. W., & Associates (2002). Building a scholarship of assessment. San Francisco, CA: Jossey-Bass.
  • Banta, T. W., Black, K. E., & Jones, E. A. (2009). Designing effective assessment: Principles and profiles of good practice. San Francisco, CA: Jossey-Bass.
  • Banta, T. W. (2003). Portfolio assessment: Uses, cases, scoring, and impact. San Francisco, CA: Jossey-Bass.
  • Barab, S. (2006). Design-based research. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 153-169). New York, NY: Cambridge University Press.
  • Bates, A. W. (2003). Technology, e-learning and distance education (2nd ed.). London, UK: RoutledgeFalmer.
  • Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5, 7-71.
  • Blumenfeld, P. C., Kempler, T. M., & Krajcik, J. S. (2006). Cognitive engagement in learning environments. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 475-488). New York, NY: Cambridge University Press.
  • Boud, D. (1995). Enhancing learning through self-assessment. London, UK: Kogan Page.
  • Bransford, J. D., Brown, A. L., & Cocking, R. C. (2000). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.
  • Bresciani, M. J. (2007). Assessing student learning in general education: Good practice case studies. Bolton, MA: Anker Publishing Company.
  • Bresciani, M. J. (2006). Outcomes-based academic and co-curricular program review: A compilation of institutional good practices. Sterling, VA: Stylus Publishing.
  • Brown, S., & Glasner, A. (1999). Assessment matters in higher education: Choosing and using diverse approaches. Philadelphia, PA: SRHE and Open University Press Imprint.
  • Bryan, C., & Clegg, K. (2006). Innovative assessment in higher education. New York, NY: Routledge Taylor & Francis Group.
  • Burnett, M. N., & Williams, J. M. (2009). Institutional uses of rubrics and e-portfolios: Spelman College and Rose-Hulman Institute. Peer Review, 11(1), 24-27.
  • Cambridge, B.L. (Ed.). (2001). Electronic Portfolios: Emerging practices in student, faculty, and institutional learning. Washington, DC: American Association for Higher Education.
  • Carver, S. M. (2006). Assessing for deep understanding. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 205-221). New York, NY: Cambridge University Press.
  • Chen, G., Liu, C., Ou, K., & Lin, M. (2001). Web learning portfolios: A tool for supporting performance awareness. Innovations in Education and Teaching International, 38(1), 19-30.
  • Clark, E. J., & Eynon, B. (2009). E-portfolios at 2.0 - Surveying the field. Peer Review, 11(1), 18-23.
  • Collins, A. (2007). Cognitive apprenticeship. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 47-60). New York, NY: Cambridge University Press.
  • Collis, B., & Moonen, J. (2001). Flexible learning in a digital world. London, UK: Kogan Page.
  • Cotterill, S., Bradley, P., & Hammond, G. (2006). Supporting assessment in complex educational environments. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education (pp. 191-199). New York, NY: Routledge Taylor & Francis Group.
  • Dwyer, C. A., Millett, C. M., & Payne, D. G. (2006). A culture of evidence: Postsecondary assessment and learning outcomes. Princeton, NJ: ETS.
  • Hart Research Associates. (2009). Learning and assessment: Trends in undergraduate education. A survey among members of the Association of American Colleges and Universities. Retrieved November 15, 2009, from
  • Ferguson, M. (2005). Advancing liberal education: Assessment practices on campus. Washington, DC: The Association of American Colleges and Universities.
  • Fink, L. D. (2003). Creating significant learning experiences in college classrooms: An integrated approach to designing college courses. San Francisco, CA: Jossey-Bass.
  • Hinett, K. (1997). Towards meaningful learning: A theory for improved assessment in higher education. Unpublished Ph.D. thesis, University of Central Lancashire.
  • Jafari, A., & Kaufman, C. (2006). Handbook of research on ePortfolios. Hershey, PA: Idea Group Inc.
  • Kimball, L. (1998). Managing distance learning - New challenges for faculty. In S. H. R. Hazemi & S. Wilbur (Eds.), The digital university: Reinventing the academy (pp. 25-38). London, UK: Springer-Verlag London.
  • Klenowski, V., Askew, S., & Carnell, E. (2006). Portfolios for learning, assessment and professional development in higher education. Assessment & Evaluation in Higher Education, 31(3), 267-286.
  • Knight, P. (1995). Assessment for learning in higher education. London, UK: Kogan Page.
  • Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. New York, NY: Cambridge University Press.
  • Liberal education outcomes: A preliminary report on student achievement in college. (2005). Washington, DC: Association of American Colleges and Universities.
  • Laurillard, D. (1993). Rethinking university teaching: A framework for the effective use of educational technology. London, UK: Routledge.
  • Lopez-Fernandez, O., & Rodriguez-Illera, J. L. (2009). Investigating university students' adaptation to a digital learner course portfolio. Computers & Education, 52, 608-616.
  • Maki, P. L. (2004). Assessing for learning: Building a sustainable commitment across the institution. Sterling, VA: Stylus Publishing.
  • Maki, P. L. (2009). Moving beyond a national habit in the call for accountability. Peer Review, 11(1),13-17.
  • Miller, R., & Morgaine, W. (2009). The benefits of e-portfolios for students and faculty in their own words. Peer Review, 11(1), 8-12.
  • Murphy, R. (2006). Evaluating new priorities for assessment in higher education. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education (pp. 37-47). New York, NY: Routledge Taylor & Francis Group.
  • Newmann, F. M., Marks, H. M., & Gamoran, A. (1996). Authentic pedagogy and student performance. American Journal of Education, 104, 280-312.
  • Palincsar, A. S. (1998). Social constructivist perspectives on teaching and learning. Annual Review of Psychology, 49, 345-375.
  • Perry, M. (1997). Producing purposeful portfolios. In K. B. Yancey & I. Weiser (Eds.), Situating portfolios: Four perspectives (pp. 182-189). Logan, UT: Utah State University Press.
  • Preston, D. S. (2005). Virtual learning and higher education. Amsterdam, The Netherlands: Rodopi.
  • Rhodes, T. (2009). The VALUE project overview. Peer Review, 11(1), 4-7.
  • Ross, S., Jordan, S., & Butcher, P. (2006). Online instantaneous and targeted feedback for remote learners. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education (pp. 123-131). New York, NY: Routledge Taylor & Francis Group.
  • Sawyer, R. K. (2006). The Cambridge handbook of the learning sciences. New York, NY: Cambridge University Press.
  • Scardamalia, M., & Bereiter, C. (1994). Computer support for knowledge-building communities. Journal of the Learning Sciences, 3(3), 265-283.
  • Schank, R.C. (1997). Virtual learning: A revolutionary approach to building a highly skilled workforce. New York, NY: McGraw-Hill.
  • Schilling, K. M., & Schilling, K. J. (1998). Proclaiming and sustaining excellence: Assessment as a faculty role. (ASHE-ERIC Higher Education Report Vol. 26, No. 3). Washington, DC: The George Washington University Graduate School of Education and Human Development.
  • Stahl, G., Koschmann, T., & Suthers, D. D. (2006). Computer-supported collaborative learning. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 409-425). New York, NY: Cambridge University Press.
  • Stefanakis, E. (2002). Multiple intelligences and portfolios. Portsmouth, NH: Heinemann.
  • Sutton, R. (1995). Assessment for learning. Manchester, UK: R.S. Publications.
  • Torrance, H. (1995). Evaluating authentic assessment. Buckingham, UK: Open University Press.
  • Torrance, H., & Pryor, J. (1998). Investigating formative assessment: Teaching, learning, and assessment in the classroom. Buckingham, UK: Open University Press.
  • Yancey, K. B. (2009). Electronic portfolios a decade into the twenty-first century: What we know, what we need to know. Peer Review, 11(1), 28-32.
  • Zubizarreta, J. (2009). The learning portfolio: Reflective practice for improving student learning. Bolton, MA: Anker.

Foundations for Cognitive Engagement

  • What is the theoretical and/or disciplinary rationale behind desired proficiencies and are they clearly articulated at the program or course levels?
  • Are the intervention/instructional rationale(s) clearly articulated and linked to desired learning outcomes for programs, services, and/or courses?
  • Are the outcomes related to facilitating cognitive engagement precisely clarified (e.g., metacognition, learning approaches, attitudes, motivation)?
  • How do the curricular and co-curricular intersect to support the developmental path of students?
  • What are the modes for facilitating learning, and how do they support the overall instructional and assessment design philosophy as well as documentation needs at the institution and program levels (e.g., reflection, facilitative techniques, technology, collaboration, inquiry-based techniques)?

Informal Assessment Design Decisions

  • Is the full set of learning goals covered by the proposed set of assessments?
  • To what degree of specificity are the unique characteristics of the population factored into collection, instruction, and analysis?
  • Do baselines and post-measures map onto desired outcomes for the following: specific content-knowledge; transferable strategic knowledge (content-neutral); motivation, self-efficacy, attitudinal, and cognitive and metacognitive skills?
  • Are the participants adequately trained to reliably score and generalize findings derived from learning activities for assessment purposes at a level beyond an individual student (e.g., capstone projects, reflection and writing prompts)?
  • At the program and campus-wide levels, who are the individuals/offices charged with the actual implementation and documentation strategies, ongoing review of design fidelity, and incorporation of fidelity review into relevant formative and summative analyses?
  • Who is responsible for ensuring technology is available and adapted, and how is it resourced to meet the initiative's facilitative and documentation requirements?
  • Note: Adapted from "Assessing for Deep Understanding," by S. Carver, in The Cambridge Handbook of the Learning Sciences, p. 207. Copyright 2006 by Cambridge University Press.

A. Intellectual and Practical Skills

  1. Inquiry and analysis
  2. Critical thinking
  3. Creative thinking
  4. Written communication
  5. Oral communication
  6. Quantitative literacy
  7. Information literacy
  8. Teamwork
  9. Problem solving

B. Personal and Social Responsibility

  1. Civic knowledge and engagement --- local and global
  2. Intercultural knowledge and competence
  3. Ethical reasoning