AI on Campus: Building Fair, Transparent, and Forward-Thinking Approaches

 

As generative AI transforms the academic landscape, institutions are facing urgent questions: How should AI be used in the classroom? Who sets the rules? And how can we ensure transparency and equity in the process?

In Alchemy’s recent webinar, “AI on Campus: Building Fair, Transparent, and Forward-Thinking Approaches,” panelists Dr. Robert MacAuslan (VP, Artificial Intelligence at Southern New Hampshire University) and Gates Bryant (Senior Partner, Tyton Partners) joined Carrie O’Donnell and Brett Christie, Ph.D., to explore how higher education leaders can move from uncertainty to action—with strategies grounded in clarity, transparency, and trust.

Campus Climate Check: “Uncertain,” “Conflicted,” and “Hopeful”

To kick off the session, participants shared one word that describes how their campus community feels about AI. The result? A word cloud that spoke volumes:

Most common responses: uncertain, mixed, conflicted, unprepared, excited, hopeful, overwhelmed

This range of emotions reflects a common challenge: campuses are adopting AI tools faster than they’re creating clear, consistent policies. As one panelist put it, “It feels like we’re writing everything in pencil, not pen.”

AI Trends: What the Research Says

Gates Bryant shared new findings from Tyton Partners’ Time for Class 2025 study, which surveyed over 3,300 students, instructors, and administrators. Three big takeaways emerged:

– AI is shifting from classroom tool to institutional asset, prompting a broader, enterprise-level approach.
– Policy is lagging behind adoption. Many campuses are leaving AI decisions to individual departments or instructors, without coordinated support.
– AI literacy is becoming essential. There’s growing agreement that students must learn to use AI responsibly for future careers.

 

“We’ve got to think of generative AI as part of the curriculum, not just a risk to manage.” – Gates Bryant

 

From Uncertainty to Action: What Institutions Can Do

Throughout the discussion, panelists emphasized that effective AI policy starts with culture, not just compliance. Here are a few standout recommendations:

– Lead with empathy and listening: Create space for concerns—about bias, ethics, labor, and equity—to be heard and validated.
– Support contextual training: AI in a sociology course looks different from AI in accounting. Tailor support to disciplines, not just roles.
– Be transparent about tools: Clearly communicate which tools are approved or banned, and why.
– Rethink assessment: Traditional testing models are vulnerable in a generative AI world. Authentic, project-based assessment is key.
– Include adjuncts and part-time faculty: Many campuses overlook this group when rolling out new tools and policies.

 

“We can’t just tell people to ‘play with it.’ We need to prepare them to make informed, ethical decisions about AI use.” – Dr. Robert MacAuslan

 

Final Takeaways

– Redesign assessments: Objective testing is no match for generative tools. Embrace authentic, real-world evaluations.
– Invest in capacity-building: Equip faculty with context-specific tools, support, and guidance—not mandates.
– Transparency builds trust: From tool vetting to policy formation, bring stakeholders into the process.

What’s Next

As institutions continue to navigate the evolving role of AI in higher education, thoughtful leadership and inclusive design will be key. If you missed the live discussion, you can watch the full recording on our YouTube channel to gain deeper insights and actionable strategies.

Watch Now