Curious but Cautious: Educators Weigh In on AI in Research

As part of our continued effort to support responsible and innovative AI integration in higher education, Alchemy hosted the webinar AI for Academic Research: Balancing Innovation & Integrity on March 26, 2025. Co-facilitated by Tracy Mendolia, Ph.D. (Associate Director of the Center for Teaching and Learning Excellence at Embry-Riddle Aeronautical University), and me, the session explored how AI is transforming the research process. We focused on how AI can support literature reviews, data analysis, and idea generation, while also addressing ethical concerns and institutional readiness.
Before the session even began, we asked registrants to respond to two key questions. With 834 participants weighing in, the responses offered a compelling snapshot of where the academic community currently stands on AI in research. Here’s what we learned in preparation for the session.
Comfort with Selecting and Using AI Tools
When asked, “How comfortable are you with selecting and using AI tools for research tasks (e.g., literature reviews, data analysis, or visualization)?” registrants gave revealing answers. Only a small portion (approximately 9%) felt highly confident in their abilities. A large proportion (approximately 47%) placed themselves at the midpoint of the scale, indicating they were moderately comfortable, while a significant group (approximately 46%) rated themselves on the lower end of the spectrum.
This distribution suggests that while awareness of AI tools is growing, many educators and researchers are still in an exploratory phase. They may be aware of prominent tools like ChatGPT, Scite, or Elicit but are unsure how to apply them meaningfully or ethically in their scholarly work. The responses point to a field that is actively navigating a learning curve: cautiously interested, but not yet equipped with the fluency or confidence needed for widespread implementation.
For institutions and faculty developers, this presents a powerful opportunity. Targeted professional development, hands-on training, and discipline-specific examples could go a long way toward building both skill and trust in AI-supported research workflows.
Primary Challenges and Concerns
The second question asked, “What is your biggest challenge or concern about using AI in research?” This prompt was multiple choice, with participants selecting from five predefined concerns. The most selected was the ethical use of AI (36% of respondents), including how to cite AI-generated content, maintain transparency, and ensure originality, which highlights a strong desire for clear guidance on academic integrity. This was followed closely by worries about the accuracy and trustworthiness of AI outputs: many respondents (approximately 26%) feared that hallucinations or factual errors could compromise scholarly work. Another prominent concern was the lack of institutional policies and support, which many noted made it difficult to navigate, or even begin, AI adoption. These concerns point to a community ready to explore, but one calling for structure, clarity, and support before doing so.
The Path Forward
Together, these pre-webinar insights reveal a dual narrative: educators are intrigued by the possibilities of AI in research, but many lack the tools, training, and guidance to use it with confidence and integrity. If we want to move toward responsible and innovative adoption, we must first meet faculty and researchers where they are—with resources, policy support, and real examples they can trust. View the webinar recording to better understand the most effective and ethical applications of AI for research.