Context and Emerging Frameworks
Our September Snap Survey results indicated strong sentiment around ethics and Generative AI (GenAI). Building on those findings, the October Snap Survey explored this area in greater depth. Recent literature reflects a similar trend: the conversation in higher education has shifted from whether GenAI should be integrated to how to do so ethically and responsibly.
Scholars have proposed several frameworks for navigating GenAI ethics. Cherner et al. (2025) introduced a GenAI Decision Tree highlighting ethical decision points; Kangwa et al. (2025) advocate for a balanced approach grounded in institutional guidelines and self-regulation; and de Fine Licht (2025) emphasizes AI literacy and modeling ethical use in guided environments. Additionally, the MIT AI Risk Repository offers a taxonomy of AI risks—particularly at the intersection of intent of use and human agency, where many of today’s ethical quandaries emerge.
Survey Participation
The October 2025 Snap Survey drew our largest number of responses to date (n = 25), underscoring the importance of this topic to our community.
Institutional Policies and Readiness
Recent studies (McDonald et al., 2025; Luo, 2024) show that most institutional GenAI policies remain reactive, focusing on potential misuse rather than intentional integration. In our survey, 42% of respondents reported that their institution’s policies were still in development, 33% said policies were already in place, and 25% were either unsure or indicated none existed. Follow-up studies on the nature and content of those policies would show how institutions are balancing academic integrity and innovation around GenAI.
Familiarity with Ethical Frameworks
Frameworks such as the GenAI-TPACK (Lan et al., 2025) and ETHICAL (Eacersall et al., 2025) models can guide faculty and staff toward more ethical GenAI integration, yet both are still gaining traction. Encouragingly, two-thirds of respondents (66%) were somewhat or very familiar with ethical frameworks or guidelines for GenAI use, indicating that these or similar frameworks are being referenced as policies and strategies are drafted. Smaller groups reported being extremely familiar (13%) or not so familiar (13%), while 8% were not at all familiar. This may suggest a developing professional literacy: respondents are aware of the issues but not yet experts.
Top Ethical Concerns Identified
Respondents highlighted several key ethical considerations:
- Academic integrity and plagiarism (68%)
- Transparency and disclosure of AI use (68%)
- Privacy and data protection (68%)
- Bias and fairness (56%)
- Accountability for outputs (68%)
- Equity and access (40%)
- Energy and environmental impact (20%)
- Accuracy (4%)
The ethical priorities identified by respondents align closely with those highlighted in recent studies (e.g., Luo, 2024; McDonald et al., 2025). Academic integrity, transparency, and data privacy remain top of mind, echoing early institutional responses to GenAI integration. However, issues such as energy consumption, environmental sustainability, and model accuracy, though increasingly visible in global AI ethics debates, appear to receive less attention within higher education contexts, indicating opportunities for further education and awareness. We plan to explore the energy and environmental impact of GenAI in more depth in a future article.
Confidence and Ethical Practice
An encouraging 80% of respondents usually or always factor ethics into their GenAI use, signaling a culture of awareness within professional practice. The 16% who do so only “sometimes” represent an opportunity for reinforcement through policy, training, or modeling.
A majority (56%) felt somewhat confident in their ability to identify and address GenAI-related ethical issues, while smaller groups reported being very confident (24%), not so confident (12%), or extremely confident (8%). This suggests comfort identifying obvious ethical issues but less confidence navigating gray areas, pointing to a training opportunity built around applied ethics scenarios.
These results echo what we often hear across OLC communities of practice: educators are eager to use GenAI responsibly but are navigating complex and evolving expectations. The high rate of ethical reflection paired with moderate confidence levels points to a need for more professional learning spaces where practitioners can share use cases, co-develop ethical decision frameworks, and translate abstract principles into classroom and research contexts.
Looking Ahead
Together, these results suggest a maturing awareness of GenAI ethics across our community—but also reveal a continued need for institutional guidance, shared frameworks, and professional learning that moves from reactive compliance to proactive ethical practice.
References
Cherner, T., Foulger, T.S. & Donnelly, M. (2025). Introducing a Generative AI Decision Tree for Higher Education: A Synthesis of Ethical Considerations from Published Frameworks & Guidelines. TechTrends, 69, 84–99. https://doi.org/10.1007/s11528-024-01023-3
Eacersall, D., Pretorius, L., Smirnov, I., Spray, E., Illingworth, S., Chugh, R., Strydom, S., Stratton-Maher, D., Simmons, J., Jennings, I., Roux, R., Kamrowski, R., Downie, A., Thong, C. L., & Howell, K. A. (2025). Navigating ethical challenges in generative AI-enhanced research: The ETHICAL framework for responsible generative AI use. Journal of Applied Learning and Teaching, 8(2). https://doi.org/10.37074/jalt.2025.8.2.9
Kangwa, D., Msafiri, M.M. & Fute, A. (2025). Balancing innovation and ethics: promote academic integrity through support and effective use of GenAI tools in higher education. AI Ethics, 5, 3497–3530. https://doi.org/10.1007/s43681-025-00689-6
de Fine Licht, K. (2025). Rethinking the ethics of GenAI in higher education: A critique of moral arguments and policy implications. Journal of Applied Philosophy, 42, 1317–1337. https://doi.org/10.1111/japp.70026
Lan, G., Feng, X., Du, S., Song, F., & Xiao, Q. (2025). Integrating ethical knowledge in generative AI education: constructing the GenAI-TPACK framework for university teachers’ professional development. Education and Information Technologies, 30(11), 15621–15644. https://doi.org/10.1007/s10639-025-13427-6
Luo, J. (2024). A critical review of GenAI policies in higher education assessment: A call to reconsider the “originality” of students’ work. Assessment & Evaluation in Higher Education, 49(5), 651–664. https://doi.org/10.1080/02602938.2024.2309963
McDonald, N., Johri, A., Ali, A., & Collier, A. H. (2025). Generative artificial intelligence in higher education: Evidence from an analysis of institutional policies and guidelines. Computers in Human Behavior: Artificial Humans, 3, 100121. https://doi.org/10.1016/j.chbah.2025.100121
As senior researcher at OLC, Carrie designs, conducts, and manages the portfolio of research projects that align with the mission, vision, and goals of the Online Learning Consortium. She brings over 15 years of experience as an online educator and instructional designer with a passion for research. She has peer-reviewed publications covering a variety of topics, such as open educational resources, online course best practices, and game-based learning. In addition to a strong background in higher education teaching and instructional design, Carrie has extensive experience in customer service and small business management. She holds a PhD in Educational Technology from Arizona State University, an MS in French from Minnesota State University, and a BA in French from Arizona State University.