New Directions in Assessment Support
On the whole, I find the new directions to be a positive movement, one that will offer fresh insight into particular program strengths and weaknesses along with more meaningful data describing the true impact of liberal learning on student outcomes. It is a more productive direction, one that offers hope and a greater sense of purpose, particularly for those of us in the disciplines of the Arts and Sciences. The following descriptions illustrate the work of two important support organizations for institutions committed to the process of strategic planning and assessment.
The Association of American Colleges and Universities (AAC&U) has long supported institutional accountability, and as pressure from accrediting agencies, government, and society for colleges and universities to provide "proof" of learning outcomes increased substantially during the 1980s and 90s, the AAC&U provided a framework of means and support for that initiative. During these decades, institutions were asked to decide what they wanted graduates to learn through their educational experience and to demonstrate that those desired outcomes were achieved. In 2004, the AAC&U's board of directors elevated teaching standards when it decided that America's colleges and universities should hold themselves accountable for assessing students' higher levels of achievement, seeking information not only about "generic skills and introductory levels of learning" but also about more fundamental skills, abilities, and perceptions.
The AAC&U's Liberal Education and America's Promise (LEAP) initiative was designed to work with campuses by providing assessment procedures that would help achieve this end. In a report released in 2007, the AAC&U recommended that institutions tie assessments to the curriculum and to faculty priorities for student learning (2007, 40), acknowledging the fundamental importance of an informed and involved teaching faculty. Hundreds of American institutions, including Jacksonville State University, currently administer the National Survey of Student Engagement (NSSE), an instrument that annually gathers information about student participation in practices and activities that are provided for their learning development. The survey measures how students spend their time and what learning benefits they derive from their college experience.
In 2004, the organization that administers the NSSE issued the Faculty Survey of Student Engagement (FSSE). This instrument, as the name suggests, measures faculty perception of student behaviors, particularly those behaviors linked with high levels of learning. The instrument also allows faculty to respond to questions about their typical time allotment for various teaching activities in addition to their perception of student engagement. It is an interesting approach, one that offers considerable information about the intersection of faculty and student perceptions of the learning process; equally important, this survey acknowledges the power of faculty in the learning process. Ultimately, of course, it is the programs and their effect on learning outcomes that we assess, but this new survey recognizes the capacity of individual faculty to affect, channel, and direct positive student learning outcomes. The greatest student learning takes place with the support, direction, and encouragement of faculty. It must be recognized, of course, that this is a synergistic process, and we educators must aim to make the process work as well as possible. Good assessment practices are not only "outcomes based" but also recognize that much happens before the desired outcomes can be achieved. The College of Arts and Sciences is pleased to announce that, beginning this spring semester, the College will provide a large sample of teaching faculty the opportunity to participate in the Faculty Survey of Student Engagement. We have sought information about students' perceptions of their learning process; now it is the faculty's turn to add their understanding of the learning process between them and their students. We hope to expand this initiative to include all faculty in the College.
The American Association of Higher Education has also been an invaluable resource for institutional assessment practices, and the organization provides support and guidelines for faculty and administrators. The AAHE's claim to distinction is that it is the oldest U.S. organization dedicated to accreditation and assessment. The Association has now added the term "Accreditation" to its title, underscoring its work in setting and supporting standards for education. The AAHEA has also taken a philosophical direction in assessment that should be reassuring for teaching faculty. Dr. Jose Luis Gomez, the current president of the organization, writes in his website welcome letter to new members that "AAHEA supports certain core academic values such as Institutional Autonomy, the valuing of the intellectual and academic authority of the faculty, education programs, site-based education, and a community of learning, and collegiality and shared governance..." These are encouraging words indeed for those dedicated to learning in the Arts and Sciences. With this understanding in place, assessing our programs is no longer a threat; rather, it is a means of demonstrating that our commitment to core academic values and standards is essential to excellence in education.
Academic Program Planning
IMPORTANT TERMS AND CONCEPTS
Mission: The mission statement should include the purpose of the program with a focus on educational values, major areas of knowledge covered in the curriculum, and careers or future studies for which graduates are prepared.
Goals: The two or three most important goals of the program, commonly agreed upon by the faculty. These may be drawn from areas such as teaching effectiveness, service to the community, and research productivity.
Objectives: A subset of goals that provide a more concrete statement of the manner in which goals will be achieved.
Program goals and objectives usually include at least one goal related to learning, but they also include program metrics over and above student learning.
Learning outcomes refer to the demonstrable (1) knowledge, (2) skills, and (3) attitudes students acquire as a result of their experiences in the program. Learning outcomes are the result of the program experience, while goals are a desired end. For example, a program might have as a goal, "All students in the Humanities will develop critical thinking skills." A Student Learning Outcome demonstrating that this end has been achieved might be stated, "Students will be able to use data to make complex decisions based on logical evaluation of competing ideas, theories, or concepts," or, perhaps, "Students will be able to evaluate quantitative data and identify incorrect conclusions." SLOs must be observable and measurable.
Program Student Learning Outcomes and Course Student Learning Outcomes:
PSLOs and CSLOs
Points to remember:
- Program Student Learning Outcomes drive assessment, and it is the PSLOs that SACS will review.
- Each PSLO must be demonstrated, i.e., supported through assessment measurements.
- Faculty should select outcomes that the department members truly care about and would like to understand better.
- Keep learning outcomes simply stated. Compound descriptors become difficult to assess.
- It is important to recall that we are describing the student and his/her abilities upon completion of the program. We are not describing the experience of the program. Do not begin a PSLO with, "The student will experience..."
- We assess incremental mastery of SLOs by measuring every level of the program, from entry-level through upper-level courses. Program mastery is demonstrated at the upper level, and a capstone course is ideal for illustrating the student's acquisition of PSLOs.
- Bloom's Taxonomy and the verbs associated with the various levels of learning are very helpful in demonstrating that programs have assessed all levels of learning.
CSLOs are course-specific learning outcomes. Each course should have student learning outcomes and methods of assessment specific to that course. The outcomes should be clearly stated on the syllabus. Faculty should recall that course assessment and student grades are not equivalent.
Direct Assessment Methods:
Direct assessment methods gather data by requiring students to display their knowledge, skills, or attitudes; the results are observable and may be documented. These methods are preferred for documenting learning outcomes. Examples: oral presentations, research projects, embedded questions, capstone course projects.
Indirect Assessment Methods:
These methods record what students think they have learned. Indirect measurements include reflective essays, exit interviews, and attitudinal surveys (e.g., the NSSE).
Assessment data should be used to improve the experiences that enhance student learning, and this use can affect every level of program planning. This process is referred to as "closing the loop." Assessment data can illustrate, for example, a need for a new faculty line or more physical space, or it may indicate a need for a new course to fill a demonstrated gap in student outcomes. Benchmark tests such as the Major Field Test (MFT) are useful in revealing these gaps. Of course, faculty may construct their own disciplinary outcomes tests, but such tests lack the benchmarking capacity of the standard instruments. The forms of program and course assessment are many and varied, but, ultimately, assessment is a tool to improve the quality of the learning experience so that we may achieve the outcomes that we desire.