AI use among faculty is on the rise, with 84% of higher education professionals using AI either professionally or personally. National conversations often portray faculty usage in stark terms: media reports of pervasive classroom deployment contrast with national surveys that find low or sporadic usage, focused mainly on administrative or productivity tasks. However, sizable proportions of faculty remain undecided about AI in teaching and learning.

Online faculty perspectives are often a sidenote in these conversations, if mentioned at all. Our previous research has found that perceptions of new technology are impacted by teaching modality, with our 2024 report revealing that 42% of faculty teaching online believe that higher education is headed in the right direction, compared to 20% of faculty teaching face-to-face.

To deepen our understanding of online faculty attitudes and utilization of AI, we recently convened over 1,200 WGU faculty to discuss this emerging technology. What we found echoes our research with students — that faculty are increasingly optimistic about AI’s capabilities within the classroom, but equally concerned about the haphazard integration of AI and the risk it poses to students, faculty, and the educational system. 

In what follows, we examine the patterns emerging from these conversations, providing insights to inform organizational strategy and AI development. What we heard was deep expertise, a clear-eyed view of education’s most persistent problems, and an openness to intentionally use AI to shape a better future for teaching and learning.

Background

We conducted interactive workshops at the annual summits for WGU's School of Business (n = 1,000) and Leavitt School of Health (n = 293). Workshop participants were primarily faculty but also included administrators, enrollment staff, course design specialists, and educational technologists. Participants were invited to share their concerns about AI in education and what opportunities with AI, if any, excited them.

During these sessions, we demonstrated Lazuli, an AI-powered learning design and development tool built collaboratively by WGU Labs and the Learning Design Alliance (LDA). While we showcased Lazuli, we prompted participants to share attitudes and ideas that extended beyond specific tools and technologies. Participants captured their perspectives on sticky notes or through an online survey, which informed the findings presented in this report.

Key Findings: What Faculty Hope For

Faculty see clear opportunities for AI to make learning more responsive and human-centered, provided it is deployed with intentionality and purpose.

  • Smarter workflows: AI that reduces busywork like communication, grading, and documentation, giving faculty more time to support students directly.
  • Improved learning experiences: Tools that help create personalized, feedback-rich, relevant learning, supported by guidance from human experience. 
  • Enhanced equity in learning: AI that improves access and inclusivity for learners of all backgrounds, including multilingual support and culturally responsive content.
  • Support for student wellbeing: On-demand assistance that guides students when resources aren’t available, including academic and mental health support.
  • Stronger systems: Seamless tech that integrates into existing platforms, rather than requiring faculty to manage yet another tool.

Faculty also emphasized the need for transparent, ethical, and trustworthy systems that protect both students and educators. They’re not opposed to innovation; they just want it to reflect core educational values.

Key Findings: What Faculty Fear

Faculty commitment to important educational outcomes also led them to identify concerns related to AI in education.

  • Loss of human connection: Authentic relationships with students replaced by emotionless, mechanical, and impersonal systems.
  • Learning integrity: Students using AI to cheat their way to a degree, and the potential downstream impact of AI use on learning, skill development, and work ethic.
  • Data ethics and accuracy: Data privacy and security, and the potential use of inaccurate, biased information produced by AI systems within education.
  • Technology readiness: Maintaining consistent access to evolving AI technology, supporting infrastructure, and the skills to use it effectively.
  • Institutional readiness: A lack of policy and transparency regarding institutional AI adoption, implementation, and investment.

These concerns reflect the rapidly increasing use of AI by students, faculty, and administrators, with faculty rightly identifying the need for more strategic deployment of this new technology. As in our research with students, they also reflect a growing understanding of the technology’s capabilities and potential pitfalls.

An Inflection Point for Higher Education

Faculty are not obstacles to innovation. They are essential partners in shaping the future of teaching and learning.

What we learned from these conversations with over 1,200 faculty members is that they are ready to experiment, but not at the expense of students. They aim to collaborate with institutions and edtech developers to create and implement tools that foster well-being and human connection, rather than ones that threaten or diminish them. They want AI that reflects diverse learners and enhances real-world readiness, not just efficient systems for homogenous learners. And above all, faculty want AI to uphold and sustain the values that make education transformative.

At WGU Labs, this is the work we’re doing by developing and testing new AI-enabled tools and listening deeply to educators along the way. Faculty voices are guiding us toward better questions, better design, and better outcomes. Because the future of AI in education shouldn’t be something done to faculty. It should be something built with them.