Faculty and staff in higher education face uncertainty about appropriate AI use in academic settings, particularly whether student AI use supports or hinders learning. This uncertainty reveals a fundamental question: how can institutions integrate AI to enhance learning while preserving educational integrity?
Current experiences illustrate this tension between AI’s benefits and its potential drawbacks. While some instructors have adopted AI in their courses, others avoid it, finding that AI use may hinder students from achieving learning outcomes. For example, one engineering instructor found that students’ use of AI in one of his courses produced higher-quality work than he had seen before; nevertheless, students did not learn some of the things they had learned in earlier offerings, and some saw their pseudocode-writing skills decline because of AI use. Another instructor, reflecting on teaching coding in business courses, asserted, “I teach some coding classes. So, for me, I think that’s still such a fundamental concept. … I don’t want my students to overly use [generative AI] to circumvent their learning. I want them to use it to support their learning. But I think that’s the thing that we’re all struggling to figure out how to do” (Baytas & Ruediger, 2025).
To address this challenge, educators can use Bloom’s taxonomy as a framework to determine what types of cognitive work can be offloaded to AI without compromising learning. With this framework, educators can design outcomes and instruction that guide when and how AI can appropriately support student learning.
Oregon State University’s revision of Bloom’s Taxonomy aims to clarify AI’s role in learning by delineating how generative AI can supplement learning at each level of the taxonomy. However, further distinction is needed regarding which cognitive skills and knowledge types AI can and cannot handle. This clarity supports the effective integration of AI into pedagogy, helping educators assess whether students are prepared to benefit from AI-assisted, higher-order tasks.
Effective AI integration requires students to first master foundational skills. This hierarchical approach aligns with Bloom’s taxonomy: students cannot meaningfully engage AI for higher-order critical evaluation without first independently mastering the lower-level analytical and synthesis skills that inform such judgments.
Revised Bloom’s Taxonomy
Anderson and Krathwohl’s (2001) revision of Bloom’s Taxonomy added a knowledge dimension that includes factual, procedural, conceptual, and metacognitive knowledge. Educators use this framework to design instruction that aligns with specific learning goals—an approach that could also guide thoughtful, pedagogically sound uses of AI in the classroom. Table 1 shows what AI can and cannot do based on Anderson and Krathwohl’s revised Bloom’s Taxonomy.
Table 1. Revised Bloom’s Taxonomy: AI capabilities at each cognitive process level and knowledge type
Table caption: Column one lists the levels of cognitive processes; the other columns display the types of knowledge, and the human and AI capabilities associated with each cognitive level and knowledge type.
Key Takeaways
AI systems extract and apply information by identifying statistical patterns in their training data, rather than relying on conceptual criteria (Vinayakh, 2025; Mols, 2023). While this approach proves useful in many contexts, genuine understanding requires grasping abstract ideas and general principles. As Wiggins and McTighe (2005) explained, “understanding is a mental construct, an abstraction made by the human mind to make sense of many distinct pieces of knowledge” (p. 37). Although AI can store and retrieve factual and procedural knowledge effectively, it cannot comprehend the meaning behind facts.
Wiggins and McTighe (2005) illustrated this distinction with an analogy: “The words on the page are the ‘facts’ of a story. We can look up each word in the dictionary and say we know it. But the meaning of the story remains open for discussion and argument. The ‘facts’ of any story are the agreed-upon details; the understanding of the story is what we mean by the phrase reading between the lines” (p. 38).
The lack of understanding becomes evident when we examine how AI applies information—it may use correct procedures based on training patterns, but without grasping the underlying principles that make those procedures appropriate. Wiggins and McTighe (2005) emphasized that “doing something correctly, therefore, is not, by itself, evidence of understanding. It might have been an accident or done by rote. To understand is to have done it in the right way, often reflected in being able to explain why a particular skill, approach, or body of knowledge is or is not appropriate in a particular situation” (p. 39). While AI can perform inferences based on observed patterns in data, it struggles with novel situations that require nuanced understanding.
This limitation stems from fundamental differences in how humans and AI process information. True understanding involves judgment, reflection, and the creation of meaning. It involves drawing on experience, values, intentions, subject matter knowledge, self-knowledge, and empathy (Wiggins & McTighe, 2005; Gärdenfors, 2024). These are human capacities that AI cannot possess. Human intelligence and understanding are rooted in physical experiences and interactions with the world. This “embodied cognition” allows humans to develop concepts that resonate with lived experience, which is absent in AI (Gärdenfors, 2024).
Risks of lacking conceptual understanding in AI
Current AI systems exhibit the following limitations related to conceptual understanding (Dietterich, 2024):
- They can produce incorrect and self-contradictory answers.
- They can produce dangerous and socially unacceptable answers, such as pornography, racist rants, sexist content, and instructions for committing crimes.
- They lack attribution: there is no easy way to determine which source documents their answers are based on.
- They have poor non-linguistic knowledge.
- Dialogues between people and AI can go “off the rails.”
- Their capabilities for planning and reasoning are poor.
The Role of Students in Meaning-Making
Finally, it is essential to stress that only students can construct the meaning of knowledge. Although AI can be used to summarize or organize information, it cannot build personal understanding. Students must engage with content, reflect on its significance, and develop their own interpretations. They need to build their own theories about what knowledge means for them.
References
Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Longman.
Baytas, C., & Ruediger, D. (2025, May 1). Making AI Generative for Higher Education: Adoption and Challenges Among Instructors and Researchers. Ithaka S+R. https://doi.org/10.18665/sr.322677
Dietterich, T. G. (2024, April 17). What’s wrong with large language models, and what we should be building instead [Video]. YouTube. https://www.youtube.com/watch?v=e8vg1vin78U
Gärdenfors, P. (2024, October 14). AI lacks common sense—why programs cannot think. Lund University. https://www.lunduniversity.lu.se/article/ai-lacks-common-sense-why-programs-cannot-think
Iowa State University, CELT. (n.d.). Bloom’s Taxonomy. https://celt.iastate.edu/prepare-and-teach/design-your-course/blooms-taxonomy/
Mols, B. (2023, April 27). Artificial Intelligence still can’t form concepts. Communications of the ACM. https://cacm.acm.org/news/artificial-intelligence-still-cant-form-concepts/
Oregon State University. (n.d.). Advancing meaningful learning in the age of AI. https://ecampus.oregonstate.edu/faculty/artificial-intelligence-tools/blooms-taxonomy-revisited/
Teacher Institute. (2023, November 14). Key Theories in Constructivism: From Dewey to Novak. https://teachers.institute/learning-teaching/constructivism-theories-dewey-novak/
University of Delaware Library. (n.d.). AI Literacy: Algorithms, Authenticity, and Ethical Considerations in AI Tools. https://guides.lib.udel.edu/AI/evaluation
Vinayakh. (2025, February 4). How reasoning models are transforming logical AI thinking. Microsoft Developer Community Blog. https://techcommunity.microsoft.com/blog/azuredevcommunityblog/how-reasoning-models-are-transforming-logical-ai-thinking/4373194
Wiggins, G. P., & McTighe, J. (2005). Understanding by design (2nd ed.). Pearson.
Ana Useche is an instructional designer at the Center for Academic Technologies at Santa Fe College. She holds a Ph.D. in Educational Psychology. Over the past eight years, she has collaborated with faculty across disciplines to create engaging, learner-centered experiences—from project- and problem-based learning in STEM to case- and inquiry-based approaches in the humanities.