Recap

AI for Academic Research: Balancing Innovation & Integrity


As academic research budgets face increasing cuts, institutions are under pressure to sustain critical research efforts while maintaining excellence. How can institutional leaders and their faculty strategically implement AI to help optimize resources and enhance research efficiency while upholding academic integrity in this shifting landscape? That’s the question tackled in this dynamic session hosted by Brett Christie, Ph.D., Vice President for Educational Innovation at Alchemy, and Tracy Mendolia, Ph.D., Associate Director at the Center for Teaching and Learning Excellence at Embry-Riddle Aeronautical University. Together, they explored how AI is reshaping academic research—and how institutions can guide its responsible use.


Key Takeaways


AI is changing how researchers work—and think

From literature reviews to data analysis, AI is playing an increasingly active role across the research lifecycle. As Dr. Mendolia noted, this shift isn’t just about efficiency—it’s about rethinking the research process itself. Faculty and students are learning to frame better questions, critically evaluate AI-generated outputs, and use these tools to push inquiry further. But the speed and power of these tools also raise new ethical and methodological considerations.

Integrity isn’t a checklist—it’s a conversation

Maintaining academic integrity in an AI-driven world goes beyond citing sources. It requires ongoing dialogue around authorship, transparency, and the ethical use of generative tools, so that faculty can navigate these gray areas with confidence. Dr. Mendolia emphasized that institutions should foster a shared language and culture around responsible AI use—especially as norms continue to evolve.

Support matters—especially for those just getting started

For many faculty, especially early-career researchers, navigating AI in research is new territory. Faculty development programs, library services, and centers for teaching and learning all have a role to play in guiding the responsible integration of AI. Building confidence and fluency with AI tools starts with accessible, nonjudgmental support. Cross-campus collaboration is key.

Start small, stay curious

Both speakers encouraged a mindset of exploration over perfection. Whether it’s experimenting with AI for literature reviews or modeling ethical tool use in class, small steps can lead to meaningful shifts. Transparency with students and colleagues—about both successes and missteps—helps build a culture of trust and shared learning.


Conclusion

As AI continues to evolve, so too will the ways faculty engage with it in their research. Sessions like this one remind us that the goal isn’t to have all the answers, but to ask better questions—together. What can AI make easier? Where could it introduce bias or error? Creating space to explore these questions collaboratively leads to more thoughtful, intentional adoption—without sacrificing academic rigor. 

For more in-depth insights and strategies, watch the full webinar on our YouTube channel. Stay tuned for more insights, resources, and strategies to support thoughtful, inclusive innovation across teaching, learning, and research.

Watch Now