Tenets of Learning Improvement

from conversations and deliberations among the Learning Improvement Community*

Please read the current, 2021 version of the tenets.

Below is the previous version from April 2020.


The purpose of this document is to promote a shared understanding of learning improvement in academic programs in higher education. Our efforts are, in part, a response to a wash-rinse-repeat formulation of learning outcomes assessment that often does not focus on demonstrating learning improvement, leaves many feeling exhausted, and may not attend to all groups of students.

Our tenets are divided into two sections: (a) goals and scope and (b) mechanisms for engaging in student learning improvement.

Goals and scope

1. Our purpose is to promote the improvement of student learning in higher education. We do not focus on other student success areas, such as retention and graduation rates.

2. Definition of learning improvement: Intentional changes in an academic program’s learning environment that produce better student learning achievement. For example, students demonstrated better writing skills (evidenced by higher rubric scores) thanks to a new writing curriculum, effectively delivered. The learning environment includes elements such as teaching methods, curriculum, student habits/behaviors/development, learning support, and resources.


3. Faculty/instructors are integral to learning improvement. Faculty/instructors are essential to identifying a learning area to be improved, implementing or coordinating the implementation of better learning environments, assessing learning, and making sense of the assessment results.

4. Our current focus is on academic program-level learning improvement efforts. In other words, a learning improvement effort for a program (e.g., History B.A., general education, a course sequence, a multi-section course) aims to improve learning for all students in that program. Changes to the learning environment, therefore, should affect all students in the program; for example, adding or increasing the scaffolding of learning across the program curriculum. We recognize the importance of individual faculty investigating learning improvement in their own sections, e.g., through scholarship of teaching and learning activities. Our scope is larger: intentional, coordinated learning improvement across courses and beyond individual sections. Eventually, we would like to expand to institution-level learning improvement and to learning improvement in student affairs/co-curricular programs; however, concentrating on the academic program level in the short term will sharpen our strategies.

5. Within learning, we emphasize intellectual/professional skills and dispositions important in the field of study and/or in society. The rapid growth and availability of new knowledge requires a shift in priorities to higher-order thinking, the application of knowledge, and shaping of one’s value system. A few examples of intellectual/professional skills and dispositions include writing, critical thinking, working effectively in diverse teams, openness to multiple perspectives, sense of social responsibility, and learning how to learn, as opposed to memorization and recall. The program faculty are in the best position to identify appropriate intellectual/professional skills and dispositions (see also #3).


Improving student learning is complex and challenging. Programs themselves vary in complexity—from a two-course sequence to a flexible interdisciplinary B.A. program—and developing students’ intellectual/professional skills and dispositions requires a high level of coordination. The next section highlights considerations for engaging with such improvement projects.

Mechanisms for engaging in student learning improvement

The traditional, typical assessment cycle involves (1) defining what students need to learn and offering courses/experiences, (2) gathering learning evidence, (3) evaluating and interpreting evidence, and (4) using results for improvement. Then, the cycle begins again. The learning improvement community calls for a substantial expansion of the last step (using results for improvement) by including the following:

  • Collaboration among faculty.

  • Student voices.

  • Attention to assessment systems and learning environments that are inclusive and affirm all students.

  • Identification and articulation of the learning area to improve.

  • Gauging what teaching/learning is currently happening in relation to the identified learning area (e.g., articulating a program theory).

  • After an initial evaluation of student learning, implementing changes in the learning environment hypothesized to positively affect the targeted learning. Ideally, the changes are supported by learning theory and the learning sciences literature and/or research conducted at one’s local institution.

  • Evaluation of learning before and after implementing changes in the learning environment. For example, evaluating learning between two distinct graduating cohorts where the most recent cohort experienced a significant change in their learning environment.

  • Drawing conclusions about learning improvement by interpreting information such as student performance results before and after the learning environment change and descriptions of the changes in the learning environment.

These tenets suggest mechanisms for gathering evidence of improvement.


6. Scaling learning improvement requires strategy, collaboration, coordination, and, importantly, a sustained supportive environment/culture. Reflective faculty members adjust the learning environment in their own sections to improve student learning. Improving learning across courses or throughout a program requires scaling these efforts, which may seem straightforward but is in fact complex. Fulcher and Prendergast (in press) provided the following perspective on learning improvement efforts that involve multiple faculty and courses:

“When we consider program-level…improvement efforts, eliciting and coordinating commitment from faculty is complex. For example, imagine if an improvement initiative required coordinated interventions across two courses, and that three faculty taught each course (six faculty total). That would mean that each set of three faculty must coordinate with each other (horizontal alignment) and coordinate with the faculty teaching courses before or after their own (vertical alignment). See Figure 1 below. Such coordination has proven extremely challenging. For the effort to be the most effective, all six faculty must be committed to working together.”


Figure 1. Horizontal and vertical alignment.


7. Support in the form of expertise, time, space, and money is critical. The following groups support learning improvement in complementary ways.

  • Administrators such as department heads, deans, and provosts provide time, space, and money.

  • Faculty are the champions, instigators, and implementers (noted in tenet #3 above).

  • Educational developers** support faculty in defining the learning targeted for improvement, developing high-quality student learning experiences, and designing/refining the curriculum.

  • Assessment professionals support faculty in defining the learning targeted for improvement, selecting or creating a way to evaluate student learning, collecting learning evidence (e.g., a written project, an oral presentation), interpreting findings, and writing reports.

  • Students who represent different groups are essential to understanding whether the assessment system and learning environments are working for different student populations. 

Ideally, the faculty and the expert(s) in educational development and assessment work together as a team throughout the life of the learning improvement project.

8. To claim improvement, we must measure the impact of change. Unlike the traditional, typical assessment cycle, which does not specify an examination of the impact of the improvement plan, learning improvement methodologies call for comparing students who experienced a changed learning environment with those who did not. A straightforward, cross-sectional example is to compare a prior cohort of students that did not experience the changed learning environment with a later cohort that did. Higher scores associated with the change are consistent with a learning improvement claim. To strengthen the claim, additional information is needed: for example, verification by the involved faculty that the learning environment actually changed (i.e., implementation fidelity); a review of student characteristics to ensure the compared cohorts are similar; and faculty reflections on the extent to which the intervention was successful, why, and for which groups of students.

*Members of the Learning Improvement Community, 2019-2020:

Andrea Pope, James Madison University; Andrea Willis, Syracuse University; Cara Meixner, James Madison University; Charlie Blaich, Wabash College; Chris Coleman, University of Alabama; Cindy Crimmins, York College of Pennsylvania; Diane Boyd, Furman University; Erick Montenegro, National Institute for Learning Outcomes Assessment; Gianina Baker, National Institute for Learning Outcomes Assessment; Jason Lyons, Christopher Newport University; Jillian Kinzie, Indiana University Center for Postsecondary Research; Jodi Fisler, State Council of Higher Education for Virginia; Kathleen Gorski, Waubonsee Community College; Kathy Wise, Wabash College; Katie Boyd, Auburn University; Keston Fulcher, James Madison University; Kelsey Kirland, Old Dominion University; Lee Rakes, Virginia Military Institute; Linda Townsend, Longwood University; Mays Imad, Pima Community College; Megan Good, Auburn University; Michael Reder, Connecticut College; Monica Stitt-Bergh, University of Hawai‘i at Mānoa; Natasha Jankowski, National Institute for Learning Outcomes Assessment; Scott Benson, Washington State University; Pamela Tracy, Longwood University; Pat Hutchings, National Institute for Learning Outcomes Assessment; Yao Hill, University of Hawai‘i at Mānoa

**A unique vantage point of educational developers and assessment directors is that by working with different constituencies across campus, they develop a deeper overall view of the institution’s teaching and learning.
