Beyond the Hype: What Campus Leaders Are Really Saying About AI Strategy and Trust
As artificial intelligence reshapes the landscape of higher education, one thing has become crystal clear: institutions can no longer afford mixed messages, half-measures, or reactive approaches. At our recent webinar, “AI on Campus: Building Fair, Transparent, and Forward-Thinking Approaches,” featuring Dr. Robert MacAuslan (VP of AI at Southern New Hampshire University) and Gates Bryant (Senior Partner at Tyton Partners), campus leaders shared unvarnished insights about where institutions really stand—and what it will take to move forward with integrity.
The conversation revealed a sector grappling with fundamental questions: How do you build trust when faculty feel left out of decision-making? What does transparency actually look like when AI policies vary wildly across departments? And how can institutions support innovation without compromising the very values that define academic work?
The answers weren’t simple, but they were clear. Here’s what emerged from both the expert discussion and candid participant feedback.
The Trust Gap Is Real—and Growing
Perhaps the most striking theme from our session was the erosion of trust across campus communities. In a live word-cloud poll describing how their campuses are feeling right now, participants painted a climate marked by “mixed emotions,” “confusion,” and feeling “rudderless” when leadership provides vague guidance without clear direction.
The source of this tension? Inconsistent messaging and double standards.
“Mixed messages—such as permitting faculty to use AI while restricting students—are creating confusion, resentment, and double standards.”
Dr. Robert MacAuslan emphasized that transparency isn’t just about announcing which tools are approved. It’s about setting clear expectations, modeling responsible practices, and ensuring that policies reflect institutional values rather than reactive fear.
But transparency requires more than good intentions. As our participants noted, many faculty feel “left out of decision-making” while students are increasingly aware of the inconsistencies in how AI is allowed or discouraged across different contexts.
Most Institutions Are Still Figuring It Out
Despite the urgency of these challenges, institutional readiness remains uneven. Our discussion revealed that most institutions are still in the early stages of formalizing AI guidance, with only a few having developed centralized governance structures, comprehensive training pathways, and vetted tool lists.
“There’s a strong demand for professional development that goes beyond technical demonstrations and addresses how to use AI intentionally and ethically.”
Gates Bryant provided a brief overview of Tyton’s longitudinal study, Time for Class 2025, noting that “most institutions are still in the early stages of developing an institution wide AI Policy, with only 28% of administrators reporting their institution has rolled one out… and even the institutions that have rolled out policy, there is still a fair amount of deference to faculty needing to make the decision around how generative AI can be used in the classroom.”
The gap between need and capacity is particularly stark for smaller and regional institutions, which often lack the resources to implement AI effectively or ethically. This uneven landscape raises serious equity concerns, especially when students and faculty are excluded from decision-making processes that directly affect their academic work.
Assessment Is Being Reimagined—By Necessity
One of the most tangible shifts happening across campuses involves rethinking assessment practices. Faculty are moving away from traditional papers and exams, which rely heavily on objective questions and short answers, toward formative and authentic assessments that account for AI’s capabilities while preserving academic rigor and integrity.
Emerging strategies include:
- Oral presentations and interviews
- Process documentation and learning reflections
- Portfolios and project-based learning
- Group projects and assessments
- Assignments requiring students to critique AI-generated content
- Transparent disclosure and explanation of AI use
“The current moment demands a ‘new grammar of assessment’—one that recognizes the blurred line between human and AI work, encourages transparency instead of surveillance, and measures not just outcomes, but process, thinking, and growth.”
This shift represents more than tactical adjustment. It reflects a broader conversation about the purpose and values behind assessment, with many faculty leaning into more authentic, student-centered learning experiences.
AI Literacy Needs to Be Institution-Wide
Both our panelists and participants agreed: AI literacy can’t be treated as an optional skill or left to individual initiative. Like cybersecurity training, it needs to be mandatory, ongoing, and evolving.
AI literacy is essential not just for faculty but for students, staff, and administrators alike. Effective AI literacy programs should include:
- Baseline fluency (understanding what AI can and cannot do)
- Ethical and responsible use guidelines
- Application within specific disciplinary contexts
- Critical evaluation of AI-generated outputs
“Baseline fluency is not enough—institutions need to invest in ongoing, layered development to help faculty progress from awareness to application.”
The challenge is particularly acute for adjunct faculty and those without access to institutional subscriptions or training resources. Collaborative and peer-support models emerged as critical components of sustainable professional development.
Culture Outshines Technology
Perhaps the most important insight from our discussion was this: campus culture—more than technological infrastructure—will determine the long-term success of AI adoption.
Participants voiced serious concerns about data privacy, student surveillance, and the ethical implications of using tools trained on questionable content. But beyond technical safeguards, they emphasized the need for institutional approaches that prioritize collaborative policymaking and inclusive communication.
“Building trust is a central challenge. Faculty and students alike need to feel heard and supported.”
The institutions that will thrive are those that treat AI integration as a cultural transformation, not just a technological upgrade. This means creating cross-functional task forces, aligning AI policy with teaching and learning priorities, and balancing innovation with integrity.
The Path Forward: Systemic Thinking Over Tool-by-Tool Reactions
Rather than responding reactively to each new AI development, forward-thinking institutions are building systematic approaches to AI governance and integration.
The most successful strategies involve:
- Creating cross-functional AI committees with diverse stakeholder representation
- Aligning AI policies with broader teaching and learning priorities
- Investing in scalable faculty development models
- Developing clear guidelines that evolve with technological change
- Prioritizing transparency and ethical use over restriction and surveillance
What This Means for Campus Leaders
The conversations from our webinar point to a sector at an inflection point. The institutions that emerge stronger will be those that:
- Move beyond reactive policies to proactive, values-driven frameworks that can adapt as technology evolves.
- Prioritize trust-building through inclusive decision-making, transparent communication, and consistent application of policies across all campus stakeholders.
- Invest in comprehensive literacy that treats AI fluency as essential infrastructure, not optional enhancement.
- Embrace assessment innovation that preserves learning outcomes while acknowledging new realities of human-AI collaboration.
- Think systemically about AI integration as cultural transformation, requiring sustained attention to equity, ethics, and institutional capacity.
The urgency is real. As one participant noted, there’s growing concern that institutional focus has “leaned too far into the ‘AI hype’ without enough critical reflection on pedagogy and learning outcomes.” The path forward requires both embracing AI’s potential and maintaining the critical perspective that defines academic excellence.
The question isn’t whether AI will reshape higher education—it already has. The question is whether institutions will lead that transformation or be led by it. If you missed the live event, you can watch the full recording on the Alchemy YouTube channel.