Navigating the Frontier: What Educators Are Saying About AI in Academic Research

Brett Christie, Ph.D.
VP, Educational Innovations & Inclusivity

On March 26, 2025, Alchemy hosted a dynamic session titled AI for Academic Research: Balancing Innovation & Integrity. The webinar was co-facilitated by Tracy Mendolia, Ph.D., Associate Director of the Center for Teaching and Learning Excellence at Embry-Riddle Aeronautical University, and me.

This session built upon insights from our 2024 webinar and aimed to explore how AI is transforming academic research. Attendees were presented with research tools and trends while also grappling with ethical concerns and implementation barriers. Together, we considered how AI can streamline literature reviews, enhance data analysis, support new discoveries, and reshape workflows. We also shared a curated set of resources to help educators and researchers stay current in this rapidly evolving space.

As part of the session, we invited the 431 live participants to respond to three key questions using a collaborative Padlet. Their insights reflected the opportunities, hesitations, and aspirations shaping AI adoption in academic research today.

Institutional Challenges: “We don’t know what AI is, what we’re allowed to do, or how to share a framework.”

When asked about the biggest challenges their institutions face in supporting AI in research, participants overwhelmingly pointed to a lack of clarity. Many described the absence of institutional guidance, policies, or infrastructure that would support responsible and productive AI use. This institutional uncertainty isn’t just theoretical—it creates real hesitation among researchers who might otherwise be eager to experiment.

Concerns about ethics and integrity were deeply intertwined with this lack of structure. Without policies to govern attribution, transparency, and reproducibility, faculty and graduate students are left to navigate a murky terrain on their own. A few respondents shared that institutional fear and inertia were also playing a role—the default stance seemed to be caution, if not outright avoidance.

Others highlighted gaps in faculty training and access to tools. Even where interest exists, researchers often face logistical barriers, from limited funding to insufficient IT support. The most-liked Padlet entry in this section captured it well: “We don’t know what AI is, what we’re allowed to do, or how to share a framework.”

Personal Uncertainties: “Understanding what’s appropriate and ethical in AI research use.”

On an individual level, participants shared a mix of curiosity and caution. Many researchers are interested in using AI to enhance their work but feel uncertain about how to begin, what tools to trust, or how to use them ethically. Several mentioned that while they had heard of tools like ChatGPT or Elicit, they weren’t sure how to evaluate AI-generated content or cite it responsibly in academic work.

There were also significant knowledge gaps in terms of prompting, tool selection, and integration into specific disciplinary methods. One response that received multiple likes summed up a key concern: “Understanding what’s appropriate and ethical in AI research use.” This speaks not just to the novelty of the tools, but to the shifting expectations around authorship, originality, and integrity in the research process.

Researchers from qualitative fields in particular shared concerns that AI may not align with their epistemological frameworks or methodological needs. Others worried about accuracy, hallucination, and misinformation embedded in AI outputs—especially when conducting literature reviews or synthesizing complex concepts.

Moving Forward: “Learn about AI ethically for research.”

After exploring examples and discussing practical applications during the webinar, we asked participants to share what step they might take next to integrate AI into their research or to support others in doing so. Their answers revealed a strong desire to move from hesitation to exploration.

Many participants expressed interest in learning more through ethical and discipline-specific training. There was enthusiasm for developing AI literacy communities, organizing workshops, or even co-learning with students. One highly endorsed response captured the sentiment of the group: “Learn about AI ethically for research.”

Several others said they planned to start small—experimenting with tools to assist with literature reviews, coding qualitative data, or simply improving workflow efficiency. Rather than jumping in blindly, these researchers are seeking intentional, well-informed ways to make AI work in service of research, not in substitution for it.

Looking Ahead

This Padlet discussion provided a valuable snapshot of how educators and researchers are approaching the promise and complexity of AI in academic research. While institutional barriers and personal uncertainties remain, it’s clear that there is both interest and momentum toward responsible integration. What comes next will depend on our collective ability to provide clear guidance, foster ethical awareness, and create spaces for experimentation and collaboration.

If you missed the session, you can access the recording and resources.
