As part of its commitment to transparency and public accountability, the Council for the Accreditation of Educator Preparation (CAEP), the nation’s new accreditor of educator preparation, is seeking public comment on the recommendations developed by the CAEP Commission on Standards and Performance Reporting.
We are interested in hearing your feedback on each of the standards and recommendations, as well as your thoughts about the examples of evidence for the standards. The standards are meant to represent the core elements of quality educator preparation. We invite you to rate how well we have captured those elements as well as comment on each standard:
Standard 1: Content and Pedagogical Knowledge
Standard 2: Clinical Partnerships and Practice
Standard 3: Candidate Quality, Recruitment, and Selectivity
Standard 4: Program Impact
Standard 5: Provider Quality, Continuous Improvement, and Capacity
Following this public comment period, which runs through March 29, 2013, the Commission will reconvene in June to consider the feedback received and to develop final recommendations for the CAEP Board of Directors to consider in summer 2013.
As you work through this survey, please keep the following tips in mind:
- Answer as many or as few questions as you choose.
- Answer the questions in any order.
- Click the “Save All” button on any page to save your answers.
- You do not have to register to start the survey, but registration is required before saving or submitting.
If you have any questions about the survey, please contact us at firstname.lastname@example.org or call (202) 223-0077.
Standard 1: Content and Pedagogical Knowledge
The provider ensures that candidates develop a deep understanding of the critical concepts and principles of their discipline and, by completion, are able to use discipline-specific practices flexibly to advance the learning of all students toward attainment of college and career-readiness expectations.
Content knowledge and pedagogical knowledge
Candidates demonstrate an understanding of the critical concepts and principles in their discipline, including college and career-readiness expectations, and of the pedagogical content knowledge necessary to engage students’ learning of concepts and principles in the discipline.
Candidates create and implement learning experiences that motivate P-12 students, establish a positive learning environment, and support P-12 students’ understanding of the central concepts and principles in the content discipline. Candidates support learners’ development of deep understanding within and across content areas, building skills to access and apply what students have learned.
Candidates design, adapt, and select a variety of valid and reliable assessments (e.g., formative and summative measures or indicators of growth and proficiency) and employ the analytical skills necessary to inform ongoing planning and instruction, as well as to understand students’ progress and growth and to help students understand their own.
Candidates engage students in reasoning and collaborative problem solving related to authentic local, state, national, and global issues, incorporating new technologies and instructional tools appropriate to such tasks.
Candidates use research and evidence to continually evaluate and improve their practice, particularly the effects of their choices and actions on others, and they adapt their teaching to meet the needs of each learner.
The Learner and Learning
Candidates design and implement appropriate and challenging learning experiences, based on an understanding of how children learn and develop. They ensure inclusive learning environments that encourage and help all P-12 students reach their full potential across a range of learner goals.
Candidates work with P-12 students and families to create classroom cultures that support individual and collaborative learning and encourage positive social interaction, engagement in learning, and independence.
Candidates build strong relationships with students, families, colleagues, other professionals, and community members, so that all are communicating effectively and collaborating for student growth, development, and well-being.
Candidates reflect on their personal biases and access resources that deepen their own understanding of cultural, ethnic, gender, sexual orientation, language, and learning differences to build stronger relationships and to adapt practice to meet the needs of each learner.
In this report, the term “candidate” refers to individuals preparing for professional education positions. “Completer” is used as a term to embrace candidates exiting from degree programs, and also candidates exiting from other higher education programs or preparation programs conducted by alternative providers that may or may not offer a certificate or degree.
In Standard 1, the subject of each component is “candidates.” The specific knowledge and skills described will develop over the course of the preparation program and may be assessed at any point: some near admission, others at key transitions such as entry to clinical experiences, and still others near candidate exit as preparation is completed.
This standard asserts the importance of a strong content background and a foundation of pedagogical knowledge for all candidates. Teaching is complex and preparation must provide opportunities for candidates to acquire knowledge and skills that can move all P-12 students significantly forward—in their academic achievements, in articulating the purpose of education in their lives, and in building independent competence for life-long learning. Such a background includes experiences that develop deep understanding of major concepts and principles within the candidate’s field, including college and career-ready expectations.[i] Moving forward, college and career ready standards can be expected to include additional disciplines, underscoring the need to help students master a range of learner goals conveyed within and across disciplines. Component 1.6 refers to “a range of learner goals,” and these would explicitly include interdisciplinary emphases as a complement to the disciplinary focus in component 1.1. Examples, among others, would be civic literacy, health literacy, and global awareness.
Content knowledge describes the depth of understanding of critical concepts, theories, skills, processes, principles, and structures that connect and organize ideas within a field.[ii] Research indicates that students learn more when their teachers have a strong foundation of content knowledge.[iii] “Teachers need to understand subject matter deeply and flexibly, so that they can help students create useful cognitive maps, relate ideas to one another, and address misconceptions. They need to see how ideas connect across fields and to everyday life, and how ideas develop a foundation for pedagogical content knowledge[iv] that enables them to make ideas accessible to others.”[v] These essential links between instruction and content are especially clear in Linda Darling-Hammond’s description of what the Common Core State Standards mean by “deeper learning”:[vi]
- An understanding of the meaning and relevance of ideas to concrete problems
- An ability to apply core concepts and modes of inquiry to complex real-world tasks
- A capacity to transfer knowledge and skills to new situations, to build on and use them
- Abilities to communicate ideas and to collaborate in problem solving
- An ongoing ability to learn to learn
Pedagogical content knowledge in teaching includes “core activities of teaching, such as figuring out what students know; choosing and managing representations of ideas; appraising, selecting, and modifying textbooks; . . . deciding among alternative courses of action, and analyz(ing) the subject matter knowledge and insight entailed in these activities.”[vii] It is crucial to “good teaching and student understanding.”[viii]
The development of pedagogical content knowledge involves a shift in a teacher’s understanding from comprehension of subject matter for themselves, to advancing their students’ learning through presentation of subject matter in a variety of ways that are appropriate to different situations—reorganizing and partitioning it, and developing activities, metaphors, exercises, examples and demonstrations—so that it can be grasped by students.[ix]
Understanding pedagogical content knowledge is complemented by knowledge of learners—where teaching begins. Teachers must understand that learning and developmental patterns vary among individuals, that learners bring unique individual differences to the learning process, and that learners need supportive and safe learning environments to thrive. Teachers’ professional knowledge includes how cognitive, linguistic, social, emotional, and physical development occurs.[x] Neuroscience is influencing education, and future educators should be well versed in findings from brain research, including how to facilitate learning for students with varying capacities, strengths, and approaches to learning.
The Commission’s development of this draft standard and its components has been influenced especially by the InTASC Model Core Teaching Standards, the Common Core State Standards Initiative[xi], and the National Board for Professional Teaching Standards’ Five Core Propositions.
Examples of Evidence
On content and pedagogical knowledge
a. State licensure exams
- A specific, common cut score should be recommended across states, along with a pass rate of 80% within two administrations.
- CAEP should work with states to develop and employ new or revised licensure tests that account for college and career readiness standards, and establish a common passing score for all states. (Note: Recent reports from CCSSO, Our Responsibility, Our Promise: Transforming Educator Preparation and Entry into the Profession, and from AFT, Raising the Bar: Aligning and Elevating Teacher Preparation and the Education Profession, address preparation and entry requirements, indicating growing support for vastly improved licensure assessments.)
b. Grade point average (GPA) and/or grades in relevant coursework
- This could be an overall GPA, GPA in the major, or GPA in supporting/integral content coursework related to the area of teaching (e.g. science coursework for early childhood educators).
On instructional practice and the learner and learning
d. Student performance on valid, reliable assessments aligned with instruction during clinical practice experiences.
h. Provider criteria that qualify candidates for completion, with program performance indicating that all completers have opportunities to reflect on their personal biases, access appropriate resources to deepen their understanding, can use this information and related experiences to build stronger relationships with P-12 learners, and can adapt their practices to meet the needs of each learner.
(NOTE: The provider would also monitor data on:
(1) Quality of candidates available in response to Standard 3 on Candidate quality, recruitment and selectivity, and
(2) P-12 student learning, observations and surveys that are available in response to Standard 4, Program Impact)
Standard 2: Clinical Partnerships and Practice
The provider ensures that effective partnerships and high quality clinical practice are central to preparation so that candidates develop the knowledge, skills and dispositions necessary to demonstrate positive impact on all P-12 students’ learning.
Partnerships for Clinical Preparation
Partners co-construct mutually beneficial P-12 school and community arrangements for clinical preparation, including technology-based collaborations, and share responsibility for continuous improvement of candidate preparation. Partnerships for clinical preparation can follow a range of forms, participants, and functions. They establish mutually agreeable expectations for candidate entry, preparation and exit; ensure that theory and practice are linked; maintain coherence across clinical and academic components of preparation; and share accountability for candidate outcomes.
Partners co-select, prepare, evaluate, support and retain high quality clinical educators who demonstrate a positive impact on candidates’ development and P-12 student learning. In collaboration with their partners, providers use multiple indicators and appropriate technology-based applications to establish, maintain and refine criteria for selection, professional development, performance evaluation, continuous improvement and retention of clinical educators in all clinical placement settings.
The provider works with partners to design clinical experiences of sufficient depth, breadth, diversity, coherence and duration to ensure that candidates demonstrate their developing effectiveness and positive impact on all students’ learning. Clinical experiences, including technology-based applications, are structured to demonstrate candidates’ development of the knowledge, skills, and dispositions that are associated with a positive impact on P-12 student learning.
In this report, the term “all students” is defined as children or youth attending P-12 schools, including students with disabilities or exceptionalities, students who are gifted, and students who represent diversity based on ethnicity, race, socioeconomic status, gender, language, religion, sexual identification, and geographic origin.
Education is a practice profession and preparation for careers in education must create nurturing opportunities for aspiring candidates to practice the application of their developing knowledge and skills. These opportunities take place particularly in real-life situations, but may be augmented by settings and situations enhanced by technology, such as simulations, video and online activities. The 2010 NCATE Panel report, Transforming Teacher Education Through Clinical Practice,[i] identified important dimensions of clinical practice and the Commission has drawn from the Panel’s recommendations to structure the three components of this standard.
Educator preparation providers (EPPs) seeking accreditation should have strong collaborative partnerships with school district and individual school partners as well as other community stakeholders. The term “partnerships” for clinical practice signifies a collaboration among various entities in which all participating members pursue mutually agreed upon goals for preparation of education professionals. Characteristics of effective partnerships include: mutual trust and respect; sufficient time to develop and strengthen relationships at all levels; shared responsibility and accountability among partners; and periodic formative evaluation of partnership activities.[ii] Linda Darling-Hammond and J. Baratz-Snowden[iii] call for strong relationships between universities and schools to share standards of good teaching that are consistent across courses and clinical work. The 2010 NCATE Panel proposed partnerships that are “strategic” in meeting partners’ needs by defining common work, shared responsibility, authority and accountability.
Clinical educators are individuals from diverse settings who assess, support, and develop a candidate’s knowledge, skills and dispositions during clinical experience. The literature indicates that the quality of the clinical educators, both school-based and provider-based, can ensure the learning of educator candidates and P-12 students.[iv] Transforming Teacher Education Through Clinical Practice described high quality clinical experiences as ones in which both providers and their partners require candidate supervision and mentoring by certified clinical educators—drawn from discipline-specific, pedagogical, and P-12 professionals—who are trained to work with and provide feedback to candidates. Clinical educators should be accountable for the performance of the candidates they supervise, as well as that of the students they teach.[v]
High-quality clinical experiences take place in a variety of settings, including schools, community-based centers, and homeless shelters, as well as through simulations, video analyses, and other virtual opportunities (for example, online chats with students). Teacher candidates observe, critique, assist, tutor, instruct, and conduct research. They may be student teachers or interns.[vi] The experiences integrate applications of theory from pedagogical courses or modules in P-12 or community settings. They offer multiple opportunities for candidates to relate and reflect upon clinical and academic components of preparation.
The members of the 2010 Panel on clinical preparation and partnerships consulted both research resources and professional consensus reports in shaping their conclusions and recommendations, including proposed design principles for clinical experiences.[vii] Among these are: (1) a student learning focus, (2) clinical practice that is integrated throughout every facet of preparation in a dynamic way, (3) continuous monitoring and judging of candidate progress on the basis of data, (4) a curriculum and experiences that permit candidates to integrate content and a broad range of effective teaching practices and to become innovators and problem solvers, and (5) an “interactive professional community” with opportunities for collaboration and peer feedback. Howey[viii] also suggests several principles, including tightly woven education theory and classroom practice as well as placement of teacher candidates in cohorts. An ETS report[ix] proposed clinical preparation experiences that offer opportunities for “Actual hands-on ability and skill to use . . . types of knowledge to engage students successfully in learning and mastery.” Linda Darling-Hammond and J. Baratz-Snowden[x] proposed an extended clinical experience of at least 30 weeks that is carefully mentored and interwoven with coursework.
Examples of Evidence
a. Memoranda of understanding or data-sharing agreements with diverse P-12 and/or community partners
b. Evidence of tracking and sharing data such as hiring patterns of the school district/school or job placement rates contextualized by partners’ needs
c. Evidence of actions that indicate combined resource allocation and joint decision-making, such as:
- program and course adjustments to meet partners’ human capital and instructional needs
- stated characteristics and roles for on-site delivery of programmatic courses
On clinical faculty
d. Plans, activities, and results related to selection of diverse clinical educators and their support and retention, such as training and support protocols, including implementation data, with and for clinical educators in EPP programs
On clinical experiences
e. Performance data such as evidence of how candidates develop high leverage instructional practices/strategies throughout their programs in diverse clinical settings, with continuous opportunities for formative feedback and coaching from high quality and diverse clinical educators
f. Evidence that candidates integrate technology into their planning and teaching and use it to differentiate instruction
g. Evidence of candidates’ graduated responsibility for all aspects of classroom teaching and increasing ability to impact all students’ learning
h. Evidence of candidates’ reflection upon instructional practices, observations, and their own practice with increasing breadth, depth, and intention with an eye toward improving teaching and student learning (e.g. video analysis of teaching, reflection logs)
i. Studies of the effectiveness of diverse field experiences on candidates’ instructional practices.
j. Other evidence, including reliable and valid measures or innovative models of high quality partnerships, clinical educators, or clinical experiences
Standard 3: Candidate Quality, Recruitment, and Selectivity
The provider demonstrates that the quality of candidates is a continuing and purposeful part of its responsibility, from recruitment and admission, through the progression of courses and clinical experiences, to decisions that completers are prepared to teach effectively and are recommended for certification.
Plan for Recruitment
The provider presents plans and goals for strategic recruitment and outreach to attract high quality candidates from a broad range of backgrounds and diverse populations in order to accomplish its mission.
Recruitment of Diverse Teacher Candidates
The provider documents goals, efforts, and results showing that the admitted pool of candidates reflects the diversity of America’s P-12 students (including students with disabilities, exceptionalities, and diversity based on ethnicity, race, socioeconomic status, gender, language, religion, sexual identification, and geographic origin).
Recruitment to Meet Employment Needs
The provider demonstrates efforts to know and address local, regional, state, or national needs for hard-to-staff schools and shortage fields, including STEM, English language learning, and students with disabilities.
Admission standards indicate that candidates have high academic achievement and ability
The provider sets admissions requirements, including CAEP minimum criteria or the state’s minimum criteria, whichever are higher, and gathers data to monitor applicants and the selected pool of candidates. The provider ensures that the average GPA of its accepted cohort of candidates meets or exceeds the CAEP minimum of 3.0, and that the cohort’s average performance falls in the top third of those taking a nationally normed admissions assessment such as the ACT, SAT, or GRE. The provider demonstrates that the standard for high academic achievement and ability is met through multiple evaluations and sources of evidence. If a program has a model that empirically predicts effective teaching, as measured in reliable and valid ways, the cohort group floor must be above the mean of the predicted measure.
Additional selectivity factors
Provider preparation programs establish and monitor attributes beyond academic ability that candidates must demonstrate at admission and during the program. The provider selects criteria, describes the measures used and the evidence of reliability and validity of those measures, and reports data that show how the academic and non-academic factors deemed important in the selection process, and for development during preparation, predict candidate performance in the program and effective teaching.
Selectivity during preparation
The provider creates criteria for program progression and monitors candidates’ advancement from admissions through completion. All candidates demonstrate the ability to teach to college and career ready standards. Providers present multiple forms of evidence to indicate candidates’ developing content knowledge, pedagogical content knowledge, and pedagogical skills, including the effective use of technology.
Selection at completion
Before the provider recommends any completing candidate for licensure or certification, it documents that the candidate has reached a high standard for content knowledge in the fields where certification is sought, and can teach effectively with positive impact on P-12 student learning.
Before the provider recommends any completing candidate for licensure or certification, it documents that the candidate understands the expectations of the profession including codes of ethics, professional standards of practice, and relevant laws and policies.
Educator Preparation Providers have a critical responsibility to ensure the quality of their candidates. This responsibility continues from purposeful recruitment that helps fulfill the provider’s mission, to admissions selectivity that builds an able and diverse pool of candidates, through monitoring of candidate progress and providing necessary support, and to demonstrating that candidates are proficient at completion and that they are selected for employment opportunities that are available in areas served by the provider. The integration of recruitment and selectivity as EPP responsibilities to ensure quality is emphasized in a recent National Research Council report:[ii]
The quality of new teachers entering the field depends not only on the quality of the preparation they receive, but also on the capacity of preparation programs to attract and select academically able people who have the potential to be effective teachers. Attracting able, high-quality candidates to teaching is a critical goal.
The majority of American educators are White, middle class, and female.[iii] A 2006 study reported that 75% of teachers are female and 84% are White.[iv] The makeup of the nation’s teacher workforce has not kept pace with the changing demographics of the student population. At the national level, students of color make up more than 40% of the public school population, while teachers of color are only 17% of the teaching force.[v] The mismatch has consequences: Goldhaber and Hansen[vi] found that student achievement is positively impacted by a racial/ethnicity match between teachers and students.
While recruitment of talented minority candidates is a time- and labor-intensive process,[vii] “teachers of color and culturally competent teachers must be actively recruited and supported.”[viii] Recruitment can both increase the quality of selected candidates and offset potentially deleterious effects on diversity from more selective criteria—either at admissions or throughout a program.[ix] “Successful programs recruit minority teachers with a high likelihood of being effective in the classroom” and “concentrate on finding candidates with a core set of competencies that will translate to success in the classroom.” [x] There is evidence that providers of alternative pathways to teaching have been more successful in attracting non-White candidates. Feistritzer reports alternative provider cohorts that are 30% non-White, compared with 13% in traditional programs.[xi]
The 2010 NCATE Panel on Clinical Partnerships advocated attention to employment needs as a way to secure greater alignment between the teacher job market and areas of teacher preparation.[xii] The U.S. Department of Education regularly releases lists of teacher shortages by both content area specialization and state.[xiii] Some states also publish supply and demand trends, forecasts, and other information on market needs. These lists could assist EPPs in shaping their preparation program offerings and in setting recruitment goals.
There is a broad public consensus that providers should attract and select able candidates who will become effective teachers. The 2011 Gallup Phi Delta Kappan education poll[xiv] reported that 76% of the U.S. adult public agreed that “high-achieving” high school students should be recruited to become teachers. Another example is found in a recent AFT report on teacher preparation.[xv] AFT seeks to “attract academically capable students with authentic commitment to work with children” and would set GPA requirements at 3.0, SAT scores at 1100, and ACT scores at 24.
Researchers conclude that academic quality, especially in verbal ability and math knowledge, impacts teacher effectiveness.[xvi] A study for McKinsey and Company[xvii] found that high performing countries had a rigorous selection process similar to that of medical schools. Whitehurst[xviii] suggests that education providers should be much more selective in terms of their candidates’ cognitive abilities. When looking at the cost of teacher selection, Levin[xix] found “that recruiting and retaining teachers with higher verbal scores is five to ten times as effective per dollar of teacher expenditure in raising achievement scores of students as the strategy of obtaining teachers with more experience.” Rockoff and others[xx] concluded that “teachers’ cognitive and non-cognitive skills…have a moderately large and statistically significant relationship with student and teacher outcomes, particularly with student test scores.”
Measures of teachers’ non-cognitive skills echo this finding.[xxi] There is strong support from the professional community that qualities outside of academic ability are associated with teacher effectiveness. These include grit, the ability to work with parents, the ability to motivate, communication skills, focus, purpose, and leadership, among others. Duckworth et al.[xxii] found “that the achievement of difficult goals entails not only talent but also the sustained and focused application of talent over time.” A Teach for America study[xxiii] concluded that a teacher’s academic achievement, leadership experience, and perseverance are associated with student gains in math, while leadership experience and commitment to the TFA mission were associated with gains in English. Danielson asserts that “teacher learning becomes more active through experimentation and inquiry, as well as through writing, dialogue, and questioning.”[xxiv] In addition, teacher evaluations involve “observations of classroom teaching, which can engage teachers in those activities known to promote learning, namely, self-assessment, reflection on practice, and professional conversation.” These “other” attributes and abilities lend themselves to provider innovation. Some providers might emphasize certain attributes because of the employment field or market for which they are preparing teachers.
Several lines of research and practice, including Deborah Ball’s work in mathematics education, the MET study[xxv] on components of teaching, and skills-based approaches such as Lemov’s Teach Like a Champion, assert that there are critical pedagogical strategies that develop over time. Henry,[xxvi] Noell and Burns,[xxvii] and Whitehurst[xxviii] all found that, in general, teachers became more effective as they gained experience. Both research, as synthesized by the National Research Council,[xxix] and professional consensus, as represented by the Council of Chief State School Officers’ InTASC standards,[xxx] indicate that the development of effective teaching is a process.
There are various sets of criteria and standards for effective teaching and teacher education; many include performance tasks[xxxi] and artifacts created by the teacher candidate.[xxxii] These standards, like the ones the CAEP Commission has drafted, have a central focus on P-12 learning. Student learning should be a criterion for selecting candidates for advancement throughout preparation. The evidence indicators that appear below can be used to monitor and guide candidates’ growth during a program. The Commission’s draft standard 4 in this report is built around the ultimate impact that program completers have when they are actually employed in the classroom or other educator positions.
Many professional efforts to define standards for teaching (e.g., InTASC, CCSSO, NCTQ, and the rubrics for observational measures covered in the Gates Foundation’s Measures of Effective Teaching study) recommend that candidates know and practice ethics and standards of professional practice as described in these national standards (such as those in InTASC standard 9 and 9(o)). The Commission recommends that CAEP strongly encourage additional research to define professional practices of P-12 educators, and how these practices, beliefs, and attitudes relate to student learning. (See also CAEP standard 1.9 on equity responsibilities.)
However, many measures of both academic and non-academic factors associated with high quality teaching and learning need to be studied for reliability, validity and fairness. CAEP should encourage development and research related to these measures. It would be shortsighted to specify particular metrics narrowly because of the now fast-evolving interest in, insistence on, and development of new and much stronger preparation assessments, observational measures, student surveys, and descriptive metrics. Instead, CAEP should ask that providers make a case that the data used in decision-making are valid, reliable and fair. States and localities are developing their own systems of monitoring and both providers and CAEP should obtain the data from these systems, where available, to use as valuable external indicators for continuous improvement.
Examples of Evidence
a. Strategic recruitment plans to achieve the EPP mission, taking account of employment opportunities for its completers, the need to serve increasingly diverse populations, and needs in STEM, ELL, special education, and other shortage areas
- Plans define outreach efforts to locate and target high quality applicants from a broad range of backgrounds and diverse populations
- Plans contain specific numerical goals and base data
- Progress is monitored and analyzed annually
- Judgments are made about the adequacy of progress toward recruitment goals
- Data are used to make changes in recruitment efforts
- Movement of resources toward the identified areas and away from low need areas is monitored
- Evidence of marketing and recruitment to high schools and colleges that are racially and culturally diverse and that reflect opportunities and needs in shortage areas
- Evidence of collaboration with other providers, states, and school districts could be an indicator of outreach and provide an awareness of employment needs and opportunities
On admissions, in addition to the CAEP floor described in Standard 3.4:
b. Providers set other admissions requirements such as:
- High school course taking indicating rigorous courses (e.g., Advanced Placement, higher level math and languages)
- Academic awards achieved
On non-academic factors at admissions or during the preparation experiences:
c. Programs demonstrate how they assess non-academic qualities of candidates and how these qualities relate to teacher performance. Examples might include student self-assessments, letters of recommendation, interviews, essays, leadership records, surveys, Gallup measures, StrengthsFinder 2.0, the Myers-Briggs Type Indicator, and personality tests.
d. Other examples illustrate candidate commitment and dispositions, such as (1) teaching, volunteerism, coaching, civic organizations, commitment to urban issues; (2) content-related, goal-oriented, data-driven contributions/value-add to a current employer or organization; (3) mindsets/dispositions/characteristics such as coachability, empathy, teacher presence or "withitness,"[xxxiv] cultural competency, collaboration, beliefs that all children can learn; or (4) professionalism, perseverance, ethical practice, strategic thinking, abilities to build trusting, supportive relationships with students and families
e. The edTPA,[xxxv] Renaissance, and Teacher Work Samples assessments. Sample measures that often appear in these forms of assessment are:
- Differentiated instruction based on group and subgroup results on teacher-created or standardized assessments (ELL, special education, gifted, high-needs students, etc.)
- Evidence of differentiated instruction in response to student test data
- Evidence of teacher reflection on practice.
f. Analysis of video-recorded lessons, with review and evaluation using rubrics, rater rules, and agreement levels
g. Observation measures with trained review procedures; faculty peer observations with rubrics
h. Appropriate performance measures, including those required by a state
i. Content knowledge assessments, standardized test data, and general education and content course grades throughout the program, with at least a 3.0 average and 3.5 in practica courses
j. Assessments of specialized abilities when appropriate, such as math content tests or ability to teach reading (as applicable to reading and other content teachers)
k. Data provided by states on student achievement, teacher observations, student and employer surveys (NOTE: see also the Commission’s recommendations for Standard 4)
l. Evidence of candidate ability to design and use a variety of formative assessments with P-12 students
m. Provider criteria that qualify candidates for completion, with program performance documenting that all completers have reached a high standard for content knowledge
n. Provider criteria that qualify candidates for completion, with program performance documenting that all completers can teach effectively with positive impact on P-12 student learning
o. Provider criteria that qualify candidates for completion, with program performance information indicating that all completers understand expectations set out in codes of ethics, professional standards of practice, and relevant laws and policy
Standard 4: Program Impact
The provider demonstrates the impact of its completers on P-12 student learning, classroom instruction and schools, and the satisfaction of its completers with the relevance and effectiveness of their preparation.
Impact on P-12 student learning
The provider documents, using value-added measures where available, other state-supported P-12 impact measures, and any other measures constructed by the provider, that program completers contribute to an expected level of P-12 student growth.
Indicators of teaching effectiveness
The provider demonstrates, through structured and validated observation instruments and student surveys, that completers effectively apply the professional knowledge, skills, and dispositions that the preparation experiences were designed to achieve.
Satisfaction of employers
The provider demonstrates, using measures that result in valid and reliable data, including employment milestones such as promotion and retention, that employers are satisfied with the completers' preparation for their assigned responsibilities in working with P-12 students.
Satisfaction of completers
The provider demonstrates, using measures that result in valid and reliable data, that program completers perceive their preparation was relevant to the responsibilities they confront on the job and that the preparation was effective.
CAEP Commission standards 1 through 3 address the preparation experiences of candidates, their developing knowledge and skills, and their abilities at the point of program completion. Candidate progress and faculty conclusions about the readiness of completers at exit are direct outcomes of the provider’s efforts.
By contrast, Standard 4 addresses the results of preparation programs at the point where they matter—the classroom teaching and other educator responsibilities in schools. Knowing results, learning from that knowledge, and turning the information back to assess the preparation experiences are the expected responsibilities of every provider. The Baldrige education award criteria place 45% (450 of 1,000) of their rating points on results. Student results and operational effectiveness are a significant component of those points. For a preparation provider, the student results have a dual meaning: first, candidate mastery of the knowledge and skills necessary for effective teaching, and second, teaching that has positive effects on P-12 student learning.
The paramount goal of providers is to prepare candidates who will have a positive impact on P-12 students. Impact can be measured in many ways, and one being adopted by several states and districts is known as "value-added modeling." A large Gates-supported research effort, the Measures of Effective Teaching (MET) project, provides useful guidance about the circumstances under which this model can most validly be used. These new findings are consistent with those noted in Preparing Teachers: Building Evidence for Sound Policy (NRC, 2010):[i]
Value-added models may provide valuable information about effective teacher preparation, but not definitive conclusions, and are best considered together with other evidence from a variety of perspectives.
The MET study also provides empirical evidence not previously available about structured teacher observations that employ videotapes and specific evaluation protocols, and it found that “student perception surveys provide a reliable indicator of the learning environment and give voice to the intended beneficiaries of instruction.”[ii] Beyond these sources of evidence, some providers will develop close collaborative relationships with districts in which their completers are employed and construct case studies that examine completers’ impacts on student learning. (NOTE: In addition, the Commission is still considering advice about appropriate conditions for use of evidence, as explained in the penultimate paragraph before Standard 1 on p. 13 of this report.)
Satisfaction measures such as employer surveys can provide useful feedback about completer performance. The Commission recommends that CAEP encourage more consistent use of employer surveys and collaborate with states and other stakeholders to create more descriptive and more reliable instruments. In addition, the actual employment trajectories of completers—their retention, their promotion, their changing responsibilities—are useful indicators of employer satisfaction. Completer surveys are another source of program impact information. These can describe completer perceptions of the relevance and utility of aspects of their preparation as they view them in their day-to-day responsibilities.
An exemplary provider will be able to demonstrate superior impact on P-12 students and also the links between program characteristics and P-12 impact. The rationale for this exemplary distinction is that exemplary providers contribute to current P-12 achievement through the work of their own completers and to future P-12 achievement by serving as a model for other providers. (See CAEP Levels of Accreditation in the recommendations, below.)
Examples of Evidence
P-12 student learning
a. Value-added measures of P-12 student learning that can be linked with teacher data
b. State supported measures that address P-12 student learning that can be linked with teacher data
c. Case studies of completers that demonstrate the impacts of preparation on P-12 student learning and can be linked with teacher data
d. Employer surveys and/or focus groups
e. Completer retention
f. Completer promotion and employment trajectory
Observations and surveys
g. edTPA for in-service teachers (when an in-service version becomes available, or if/when other assessments that provide valid and reliable information about in-service teaching are available)
h. Observations by credentialed evaluators of in-service teachers (e.g., Classroom Assessment Scoring System (CLASS) developed by Bob Pianta and Bridget Hamre; Framework for Teaching, developed by Charlotte Danielson)
i. P-12 student surveys
j. Completer surveys and/or focus groups
Standard 5: Provider Quality, Continuous Improvement, and Capacity
The provider maintains a quality assurance system composed of valid data from multiple measures, including evidence of candidates' and completers' positive impact on P-12 student learning and development. The provider supports continuous improvement that is sustained and evidence-based, and that evaluates the effectiveness of its completers. The provider uses the results of inquiry and data collection to establish priorities, enhance program elements and capacity, and test innovations to improve completers' impact on P-12 student learning.
Quality and strategic evaluation
The provider’s quality assurance system demonstrates capacity to address all CAEP standards and investigates the relationship between program elements and candidate outcomes to improve graduates’ impact on P-12 student learning.
The provider’s quality assurance system relies on relevant, verifiable, representative, cumulative, and actionable measures, and produces empirical evidence that interpretations of data are valid and consistent. The system generates outcomes data that are summarized, externally benchmarked, analyzed, shared widely, and acted upon in decision-making related to programs, resource allocation, and future direction.
The provider's quality assurance system comprises multiple measures that can monitor candidate progress, completer achievements, and the provider's operational effectiveness. These include measures of program outcomes for:
- Completer or graduation rates,
- Ability of completers to meet licensing (certification) and any additional state accreditation requirements,
- Ability of completers to be hired in education positions for which they are prepared, and
- Student loan default rates.
The provider regularly and systematically assesses performance against its goals and relevant standards, tracks results over time, tests innovations and the effects of selection criteria on subsequent progress and completion, and uses results to improve program elements and processes. Available evidence on academic achievement of completers’ P-12 students is reported, analyzed, and used to improve programs and candidate performance. Leadership at all levels is committed to evidence-based continuous improvement.
The provider assures that appropriate stakeholders, including alumni, employers, practitioners, school and community partners, and others defined by the provider are involved in program evaluation, improvement, and identification of models of excellence.
The provider assures continuing quality of curricula; educators (faculty); facilities, equipment, and supplies; fiscal and administrative capacity; student support services; recruiting and admissions practices; academic calendars, catalogs, publications, grading policies, and advertising; measures of program length and objectives; and student complaints.
Effective organizations rely on evidence-based quality assurance systems characterized by clearly articulated and effective processes for defining and assuring quality outcomes and for using data in a process of continuous improvement. A robust quality assurance system ensures continuous improvement by relying on a variety of measures, establishing performance benchmarks for its measures (with reference to external standards where possible), seeking the views of all relevant stakeholders, sharing evidence widely with both internal and external audiences, and using results to improve policies and practices in consultation with partners and stakeholders.[ii]
Ultimately the quality of an educator preparation program is measured by the abilities of its completers to have a positive impact on P-12 student learning and development.[iii] Program quality and improvement are determined, in part, by characteristics of candidates that the provider recruits to the field; the knowledge, skills, and professional dispositions that candidates bring to the program and acquire during the program; the relationships between the provider and the schools where its candidates receive clinical training; and subsequent evidence of completers’ impact on P-12 student learning[iv] in schools where they ultimately teach. To be accredited a preparation program must meet standards on each of these dimensions and demonstrate success in its own continuous improvement efforts.
Effective quality assurance systems rely on multiple measures and include a clearly articulated and effective process for defining and assuring quality outcomes. Reasons for the selection of each measure and the establishment of performance benchmarks for individual and program performance, including external points of comparison, are made clear. Providers show evidence of the credibility and dependability of the data that inform their quality control systems, as well as evidence of ongoing investigation into the quality of evidence and the validity of their interpretations of that evidence. Providers must present empirical evidence of each measure’s psychometric and statistical soundness (reliability and validity).[v]
Continuous improvement systems enable programs to quickly develop and test prospective improvements, deploy what is learned throughout the organization, and add to the profession’s knowledge base and repertoire of practice.[vi] CAEP should encourage providers to develop new models for evaluating and scaling up effective solutions to problems in educator preparation. Research and development in the accreditation framework can deepen the knowledge of existing best practices and provide models of emerging innovations to transform educator preparation.[vii]
A provider must have the capacity to support the desired program and candidate outcomes.[viii] Core program elements include curriculum, faculty/educators, administrative and financial support, and candidate services that support candidates’ ability to positively impact P-12 student learning. The adequacy and effectiveness of these elements in relation to candidate outcomes must be investigated as part of the quality assurance system.
Examples of Evidence
Quality assurance system
a. The quality assurance system demonstrates capabilities to compile, store, access, manage, and analyze data from diverse sources, including:
- multiple indicators from standards 1, 2, and 3 of candidates' developing knowledge and skills, from recruitment and admissions through the preparation experience, and measures that inform provider decisions at candidate completion, including assessments of candidate performance such as licensure tests and evaluations of student teaching/internship;
- feedback from standard 4 on completers, including employer satisfaction surveys, completer retention and employment milestones, state data on the academic achievement of completers' P-12 students, program completers' own evaluation of their level of preparedness, and other sources that provide useful information on professional performance; and
- documentation of program outcomes from standard 5 such as the proportions of a candidate cohort who complete, who are licensed or certified, who are placed in education positions for which they have prepared, and the student loan default rate.
Use of Quality assessment and descriptive measures
b. Practices for investigating the quality of data sources and efforts to strengthen and improve the overall quality assurance system
c. Processes for testing the reliability and validity of measures and instruments used to determine candidates’ progress through the preparation program, at completion of the program, and during the first years of practice. The evidence should meet accepted research standards for validity and reliability of comparable measures and should, among other things, rule out alternative explanations or rival interpretations of reported results.
- Validity can be supported through evidence of:
- Expert validation of the items in an assessment or rating form (content validation)
- Agreement among findings of logically-related measures (convergent validity)
- A measure’s ability to predict performance on another measure (predictive validity)
- Expert validation of performance or of artifacts (expert judgment)
- Agreement among coders or reviewers of narrative evidence.
- Reliability in its various forms can be supported through evidence of:
- Agreement among multiple raters of the same event or artifact (or the same candidate at different points in time)
- Stability or consistency of ratings over time
- Evidence of internal consistency of measures
d. Documentation that data are shared with both internal and external audiences and the use of data for program improvement.
Continuous Improvement Process
e. Documentation of innovations that have been tested and improvements that have been made
f. Examples of leadership commitment to continuous improvement such as planning and implementing change
g. Documentation of stakeholder involvement in the provider’s assessment of the effectiveness of programs and completers
h. Curriculum that reflects current needs in P-12 schools as well as national standards, P-12 state standards, and/or college- and career-ready standards
i. Quality of faculty members and/or other staff, including the range of relevant experiences such as academic qualifications; P-12 teaching experience and involvement in P-12 schools and districts; and course evaluations by candidates, teaching awards, or P-12 educator feedback to indicate their effectiveness as teachers
j. Facilities that support teaching and learning
k. Fiscal and administrative resources that support programs and P-12 school partnerships; that develop expertise in new assessments (e.g., edTPA, teacher work samples); that support professional development for content area scholarship and expertise in new technologies, pedagogies, and curriculum (e.g. Common Core State Standards); and that support collaborative inquiry to make decisions regarding priorities and their implementation
l. Candidate support services such as academic advising services, and counseling center services
m. Provider’s recruiting and admissions policies and practices, academic calendars, catalogs, publications, grading, and advertising
n. Information that describes the length and objectives of programs
o. Policies for handling candidate complaints, with examples of complaints and their disposition
p. Review of any state actions on the institution or program, or any concerns that have come to the state’s attention
Recommendations on Annual Reporting and CAEP Monitoring
The Commission recommends that CAEP gather the following data and monitor them annually from all providers:
Measures of program impact:
Impact on P-12 learning (data provided for standard 4.1 that include value-added measures in states where they are available, as well as other state-supported P-12 impact measures and/or provider measures)
Indicators of teaching effectiveness, including structured observations for evaluation and student surveys on teacher interactions (data provided for standard 4.3)
Results of employer surveys, including retention (annually and across five- and ten-year periods) and employment milestones (data provided for standard 4.2, on a 2-year floating average)
Results of completer surveys (data provided for standard 4.4, on a 2-year floating average)
Measures of program outcomes:
Graduation rates (data provided for standard 5.3 on program outcomes)
Ability of completers to meet licensing (certification) and any additional state requirements (e.g., through acceptable pass rates on state licensure exams; data provided for standard 5.3 on program outcomes)
Ability of completers to be hired in education positions for which they have prepared (by certification area; data provided for standard 5.3 on program outcomes)
Student loan default rates (data provided for standard 5.3 on program outcomes)
The Commission recommends that CAEP identify significant amounts of change in any of these indicators that would prompt investigation to initiate (1) adverse action that could include revocation of accreditation status or (2) recognition of eligibility for a higher level of accreditation. In addition, the Commission recommends that CAEP include these data as a recurring feature in the CAEP annual report.
The Commission proposes four levels of accreditation decisions:
denial of accreditation—for providers that fall below threshold in two or more standards
probationary accreditation—awarded to providers that meet or surpass the threshold in four standards, but fall below in one of the standards
full accreditation—awarded to providers that meet all five standards at the CAEP established thresholds
exemplary or “gold” accreditation—awarded to a small number of providers that meet the threshold classification set for all five standards and surpass the threshold for a combination of standards
The Commission also recommends that CAEP accreditation be based on a judgment that the provider’s accreditation evidence meets a designated “threshold” for each of the five standards recommended by the Commission. To achieve full accreditation, all components for standard 4 on Program Impact and components 5.4 and 5.5 on continuous improvement must reach an “operating” threshold for evidence.
© 2013 CAEP - Council for the Accreditation of Educator Preparation