Instructional Design In Accountability Culture
In January 2026, a US Congressional inquiry raised a timely question for educators, policymakers, and institutions of higher education: "Are students prepared for college-level mathematics?" Letters sent to top-tier universities requested detailed information about incoming first-year students and their performance on math placement tests, along with other tests used in the admissions process (Schwartz, 2026). The inquiry followed a University of California, San Diego report highlighting the need for remedial math coursework among its freshman class, signaling a potential disconnect between K-12 education outcomes, high-stakes accountability testing, and postsecondary expectations of the proficiency obtained through K-12 education.
This inquiry highlights a broader issue within education in the United States: the relationship among accountability, high-stakes assessment, and student learning. High-stakes testing has become the primary measure of student proficiency, school effectiveness, and quality of instruction. Written into the US educational landscape through legislation such as the No Child Left Behind Act (NCLB, 2002) and the Every Student Succeeds Act (ESSA, 2015), standardized testing has become the cornerstone of educational accountability. With this drive toward quantifiable, data-driven accountability, classroom instructors and Instructional Designers are working in an environment where high-stakes assessments can dictate the perception of student success.
This article explores the key question for Instructional Designers: "What happens to Instructional Design when evaluation becomes the primary objective?" By examining Instructional Design theories and accountability research, this article asserts that high-stakes testing environments can affect Instructional Design frameworks in ways that narrow the curriculum, constrain pedagogical innovation, and reimagine the role of Instructional Designers. However, these effects can be mitigated through balanced evaluation approaches and learner-centered design models.
In this article...
- Accountability In Education
- Instructional Design And Evaluation
- Response To Accountability Within Instructional Design
- Facing The Challenge
Accountability In Education
Accountability-driven reforms have reshaped education in the United States and around the world over recent decades. Government policies like NCLB and ESSA institutionalized standardized testing as a tool for monitoring students' progress toward explicit learning objectives and for evaluating schools' performance in meeting those objectives. No Child Left Behind explicitly prohibited "federal-controlled" curriculum while requiring states to establish both standards and assessments, thereby linking curriculum development to high-stakes testing outcomes (NCLB, 2002).
Thus emerged an "accountability culture," as described by Fuller and Stevenson (2019), in which decision-making is guided by performance metrics. In this era of accountability, high-stakes tests, in the form of standardized, end-of-course, or end-of-year assessments, serve as both evaluation tools and drivers of curriculum design, including implementation, pacing, and resource allocation. Although high-stakes testing can provide valuable information, Acosta et al. (2020) found that these tests have shortcomings, such as narrowed instruction, increased time allotted to test preparation, and a disproportionate effect on English learners. Whether done by educators or by Instructional Designers, devoting instructional time to testing strategies and tested content comes at the expense of other educational goals, such as conceptual learning (Dintersmith, 2015). Accountability pressure has, in turn, heightened the focus on test performance in the classroom. As such, Instructional Designers face the challenge of creating learning experiences that balance the accountability of standardized testing with student achievement through innovative pedagogical practices.
Instructional Design And Evaluation
Instructional Designers use a variety of frameworks to develop curriculum and create learning experiences. Evaluation is part of each of these frameworks; however, when designers allow evaluation to dominate the structure, assessment outcomes can overshadow learning. One of the most widely used frameworks is ADDIE: the Analyze, Design, Develop, Implement, and Evaluate model (Branch, 2009). Within ADDIE, evaluation is an iterative process that, in theory, allows for learner-centered modification to ensure learning objectives are met. However, when high-stakes testing becomes the primary benchmark, the design can reduce to an inventory of tested standards, which are then addressed with lessons that prepare students for the test. Although this can reinforce tested skills, it limits instructional flexibility and narrows teaching primarily to the standards themselves.
Backward design, or Understanding by Design, authored by Wiggins and McTighe (2005), asserts that Instructional Design begins with the desired learning outcomes. This model aligns well with state-standards-based education, where learning objectives are clearly defined and measured by standardized tests. Within this context, learning is measured through the results of high-stakes tests, such as end-of-grade or yearly testing, and the alignment allows for a direct connection between test-aligned curriculum and performance. However, as advocates of constructivist approaches assert, this alignment can marginalize inquiry-based, project-based, and other methods whose outcomes cannot be measured explicitly within standardized test objectives.
Another Instructional Design model applies the framework created through the work of Robert Gagne, which comprises nine events of instruction, such as gaining attention, presenting content, eliciting performance, and providing feedback (Gagne et al., 2005). The role of evaluation can be seen in several of these events. In the second event, learners are informed of the learning objectives or expectations, which may appear in a curriculum or classroom as an "I can" statement. The framework also builds in opportunities for feedback to ensure learning goals are being met before the assessment. In an accountability-centered system, Gagne's model can align with performance-based objectives measured through end-of-course (EOC), grade-promotion, graduation, or college admissions tests. However, with the focus on observable performance, deeper conceptual learning may once again fail to transfer; using these high-stakes tests as the measure of outcomes may reinforce surface-level learning.
The Kirkpatrick evaluation model can also apply in educational contexts. This model has four levels: reaction, learning, behavior, and results (Alsalamah and Callinan, 2021). Applied within a high-stakes, accountability-driven educational environment, the levels may be assessed by different stakeholders. The fourth level, "results," can equate to standardized test scores, which are then taken to demonstrate the behavioral change expected of teachers or classrooms. Used this way, the model overlooks other indicators of educational quality, including the classroom environment, differentiated learning, and long-term knowledge acquisition. Instructional Designers could instead apply the model holistically, gathering feedback at the second level, "learning," and the third level, "behavior," from the classroom teachers who are expected to implement the curriculum.
Response To Accountability Within Instructional Design
In responding to high-stakes testing, Instructional Designers and educators may narrow the curriculum to the areas that are tested, namely math and reading. Within these tested subjects, instruction can narrow further to test-taking strategies and isolated skills, reducing time for interdisciplinary learning, social studies, and the arts. This narrowing undermines the development of crucial competencies that high-stakes tests do not measure, such as critical thinking, problem-solving, and creativity. When learning experiences that encourage creativity and higher-order thinking are replaced with preparation for tested areas, researchers argue, real-world and authentic learning is sacrificed (Dintersmith, 2015).
In the accountability environment, the role of the Instructional Designer shifts away from designing engaging, learner-centered experiences. Designers may become responsible for aligning curriculum and instruction with the tested standards, thereby becoming compliance specialists who draft pacing guides and verify that standardized objectives are met. The primary task then becomes documenting that objectives have been met. Accountability obligations constrain creative and transformative pedagogical innovation, reducing opportunities to apply constructivist, problem-based, or experiential learning approaches that align more closely with real-world, authentic learning.
In addition to constraining Instructional Design, accountability-driven learning raises concerns regarding equity. Acosta et al. (2020) reported that high-stakes testing can disproportionately impact learners from marginalized backgrounds as well as English learners, which can widen achievement gaps for schools that serve disadvantaged populations. When Instructional Design focuses on meeting the evaluative demands of high-stakes testing through narrowed instruction, diverse learners may be underserved.
Facing The Challenge
The Congressional inquiry into math readiness highlights the question of long-term outcomes for instruction led by evaluative measures such as high-stakes testing (Schwartz, 2026). Given high remediation rates, college freshmen may have demonstrated proficiency in testable skills without developing the deep understanding required for college-level performance. This disconnect between the accountability-driven focus and the need for conceptual understanding highlights the limitations of standardized testing as an evaluative tool for determining educational success.
Instructional Designers, however, can face these challenges with strategies that balance accountability requirements with design that keeps the focus on the learner. One such approach is Merrill's First Principles of Instruction, whose application can support both accountability and deeper learning. Merrill's First Principles emphasize problem-centered learning, activation of prior knowledge, demonstration, application, and integration (Merrill, 2024). By embedding into Instructional Design real-world, authentic problems, activities that require collaboration and critical thinking, and the practice of reflection, designers can create a learner-centered curriculum whose meaningful learning experiences extend beyond the high-stakes test.
Instructional Designers can also become catalysts for expanding evaluation beyond the standardized or high-stakes test. Learner-centered, problem-based Instructional Design can meet the objectives set by accountability-driven systems while providing robust, authentic learning environments. By applying frameworks such as Kirkpatrick's multilevel evaluation model, with evaluative insights from practitioners (i.e., classroom teachers) and other stakeholders, Instructional Designers can assess and adapt for student engagement, behavioral change, and long-term, deep learning.
Instructional Designers can balance accountability-driven restraints with authentic learning by:
- Incorporating inquiry-based, project-based, and integrative learning aligned with the objectives and standards that will appear on high-stakes tests.
- Creating evaluative measures such as formative assessments to empower practitioners to adapt instruction, rather than basing decisions on summative test results.
- Advocating among stakeholders for broader levels of accountability.
- Collaborating with policy makers, educators, and other stakeholders in ensuring that evaluative metrics reflect learning that can be applied beyond the test.
The K-12 educational environment changed fundamentally with the introduction of accountability-driven high-stakes testing, yet the long-term effect of this shift remains in question. When evaluation becomes the primary objective, Instructional Design frameworks can be repurposed to prioritize test preparation, narrowing the curriculum and reducing pedagogical innovation. Once considered architects of learning, Instructional Designers in an accountability-driven environment can find their role reduced to that of compliance specialists. The implications extend beyond design itself, as the long-term effects are beginning to emerge, as seen in the recent Congressional inquiry.
However, evaluation is an essential step and an important part of instruction. As such, Instructional Designers can integrate learner-centered frameworks that foster conceptual learning while balancing the role of evaluation in accountability-driven education. By advocating for evaluation as a tool for improvement rather than merely a measure of accountability, Instructional Designers can become a catalyst for educational innovation.
References:
- Acosta, S., T. Garza, H. Hsu, P. Goodson, Y. Padrón, H. H. Goltz, and A. Johnston. 2020. "The accountability culture: A systematic review of high-stakes testing and English learners in the United States during No Child Left Behind." Educational Psychology Review, 32 (2): 327-352. https://doi.org/10.1007/s10648-019-09511-2
- Alsalamah, A., and C. Callinan. 2021. "Adaptation of Kirkpatrick's four-level model of training criteria to evaluate training programmes for head teachers." Education Sciences, 11 (3): 116. https://doi.org/10.3390/educsci11030116
- Branch, R. M. 2009. Instructional design: The ADDIE approach. Springer.
- Department of Education. n.d. No Child Left Behind. U.S. Department of Education.
- Dintersmith, T. July 2015. "Why schools should teach for the real world". TED: Ideas change everything.
- Department of Education. 2015. Every Student Succeeds Act (ESSA). U.S. Department of Education.
- Fuller, K., and H. Stevenson. 2019. "Global education reform: Understanding the movement." Educational Review, 71 (1): 1-4. https://doi.org/10.1080/00131911.2019.1532718
- Gagne, R. M., W. W. Wager, K. C. Golas, J. M. Keller, and J. D. Russell. 2005. Principles of instructional design, 5th edition. Cengage Learning.
- Graduation Requirements for Florida's Statewide Assessments. December 2025. Florida Department of Education.
- Kurt, S. April 29, 2021. "Kirkpatrick model: Four Levels of Learning Evaluation." Educational Technology.
- McEwen, N. 1995. "Educational accountability in Alberta." Canadian Journal of Education / Revue canadienne de l'éducation, 20 (1): 27. https://doi.org/10.2307/1495050
- Merrill, M. D. 2024. First Principles of Instruction: An instructional design theory. Routledge.
- Schwartz, S. January 23, 2026. "Are Students Prepared For College-Level Math? A Senator Wants To Know." Education Week.
- Wiggins, G. P., and J. McTighe. 2005. Understanding by design. ASCD.