Common Misconceptions about Generative Artificial Intelligence (GenAI)

As artificial intelligence tools become more integrated into teaching, learning, and academic workflows, they bring opportunities and confusion. This document outlines some common misconceptions about AI in higher education, from concerns about student learning and academic integrity to misunderstandings about AI’s capabilities and risks. These short responses are designed to clarify what AI can and cannot do, and to support more informed, constructive conversations about its use.

Staff and faculty in the Center for Teaching and Learning and the eCampus Center can support you. If you’d like to consult with someone after reviewing this document, please reach out: 

For further insights and research-based perspectives, see:

Misconception: Generative AI trains on the information I enter.

The following tools are safe for use with sensitive information:

For more information on the accepted tools at Boise State, please refer to the Boise State AI Tools website.

The AI tools accepted for use by Boise State University for teaching and learning do not train the underlying Gen AI model on what you enter into the chat. The tools listed above also won’t send what you type back to the developers. These tools may retain your chats for your convenience and to maintain a larger running context (the ‘context window’) for your interactions with the chatbot.

Some tools accepted for use at Boise State may use the information in the context window for purposes other than training the core model: they may use your input during a session to maintain context or improve the user experience, but the data stays within your session and is not stored or used for future model training. This makes them appropriate for use with copyrighted or sensitive materials.
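To make this distinction concrete, here is a minimal sketch (in Python) of how a chat tool can keep a per-session context window without ever training on your input: the history lives in the session, is re-sent with each request, and is never written to a training pipeline. The ChatSession class and call_model function are illustrative placeholders, not any vendor’s actual client.

```python
def call_model(messages) -> str:
    # Stand-in for a real inference endpoint. Nothing here writes the
    # conversation to a training pipeline; the messages are only read.
    return f"(model reply to: {messages[-1]['content'][:40]}...)"

class ChatSession:
    def __init__(self, system_prompt: str, max_turns: int = 20):
        self.messages = [{"role": "system", "content": system_prompt}]
        self.max_turns = max_turns  # cap on retained turns (the context window)

    def ask(self, user_text: str) -> str:
        self.messages.append({"role": "user", "content": user_text})
        # Trim old turns so the context window stays bounded; keep the system prompt.
        self.messages = self.messages[:1] + self.messages[1:][-self.max_turns:]
        reply = call_model(self.messages)  # inference only -- nothing is trained here
        self.messages.append({"role": "assistant", "content": reply})
        return reply

session = ChatSession("You are a helpful course assistant.")
print(session.ask("Summarize FERPA in one sentence."))
```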

If you use a tool outside the Boise State infrastructure, make sure you understand your privacy rights and opt out of data collection where the tool allows it.

According to research by Professors Hitesman and Macklin, Boise State students are not receiving guidance about which AI tools they can use or how to protect their privacy when using them. Instructors should consider providing this guidance to their students. 

Misconception: Students know how to use AI.

Students bring a range of emotions to artificial intelligence. In a study of online students at Oregon State University, “students were skeptical about information provided by generative AI tools” and expressed many concerns about its use. They also “agreed that generative AI tools would impact their careers.” 

At Boise State University, many students are using AI, but their professors generally address it only in surface-level ways, if at all. In a recent study by Brian Stone, 71% of students reported that none of their professors had integrated or required AI use; students with more professors integrating AI used the technology more across all contexts. Even when students do use AI, they may not use it skillfully. In a study by Tiffany Hitesman and Ti Macklin at Boise State University, only 6 of 93 students were aware of and used Gemini for Education or Boise State AI, the university’s official teaching and learning tools. Instructors can foster AI literacy by modeling ethical and responsible use and by integrating AI into coursework as a tool for learning, not a shortcut to answers. 

Instructors should consider developing AI literacy among learners: fostering a foundational understanding of AI tools and emphasizing data evaluation, ethical use, and responsible application of AI as a companion resource in coursework. For example, in a research study, Boise State instructors Karen Krier, Karen Nicholas, and Yu-hui Ching found that students used AI to clarify assignment structure and formatting, generate and organize ideas, and interpret rubrics, rather than relying on AI to complete tasks without meaningful engagement. This finding shows that students can use AI as a practical guide for navigating assignments without diminishing independent, critical engagement with course content, though they need guidance on how best to do so. 

Research at Boise State University, led by Steven Hyde, validates this supportive approach to AI integration, demonstrating that students can effectively develop both subject-matter expertise and AI literacy simultaneously. In a business strategy course, students engaged in scaffolded AI activities that built both content mastery and AI fluency. Students applied AI across diverse tasks as a thinking partner throughout the course. The results showed a strong positive correlation (r = 0.84) between students' mastery of course concepts and their AI literacy development, with 93% of students reporting positive outcomes. Student evaluations highlighted that they learned to use AI "as a tool to aid my learning" with "critical thinking," finding the integration "very eye-opening and extremely valuable" for professional preparation. Using Generative AI can enhance both digital literacy and subject-matter understanding.

Hyde, S. J., Busby, A., & Bonner, R. L. (2024). Tools or fools: Are we educating managers or creating tool-dependent robots? Journal of Management Education, 48(4), 708–734.

We encourage instructors to consider using these tools to help educate students about AI:

Misconception: You can detect text generated by Gen AI.

Detecting AI-generated text involves identifying linguistic patterns and features that suggest Gen AI writing, such as unusual word choice, overused phrases, and a lack of depth or analysis. While AI detection tools can seem helpful, they are not always accurate and may produce false positives. Research has shown that detection tools may be biased against non-native English writers, and some students may have access to or knowledge of effective methods to mask AI usage.

There may be cases where instructors want to identify whether a student used AI in ways that were not allowed. Boise State University’s AI Teaching and Learning Committee drafted a key position statement on the topic: Position Statement on AI Detection, which includes a conversation guide. 

For those who want to dig deeper, here are more technical details on AI detection. 

There are currently two main methods used to detect AI-generated text (a toy sketch of the watermarking approach follows the list):

  1. Training a machine learning model to identify AI-generated content.

  2. Embedding imperceptible "white noise" (or digital watermarks) in AI-generated content by the LLM provider.
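To illustrate the second method, here is a toy watermark detector in Python, loosely in the spirit of “green list” watermarking schemes from the research literature; it is not the actual scheme any provider uses. The idea: the generator secretly favors tokens whose hash (seeded by the previous token) lands in a “green” set, and the detector, knowing the hash rule, checks whether green tokens appear far more often than the roughly 50% expected in unwatermarked text.

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    # Hash the (previous token, token) pair; by construction about half of
    # all pairs land in the "green" set purely by chance.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    tokens = text.lower().split()
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, t) for p, t in pairs) / len(pairs)

# Unwatermarked text should score near 0.5; output from a model that was
# nudged to prefer green tokens would score well above it.
sample = "students can use ai tools to brainstorm and revise their drafts"
print(f"green fraction: {green_fraction(sample):.2f}")
```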

At this stage, AI detection methods still have a relatively high rate of false positives. As the position statement linked above explains, “As an example, currently Turnitin’s AI detector has a 98% confidence rate that text may be AI, with the resulting percentage likely it is AI having a margin of error of 15 (Turnitin, 2023). If Turnitin’s AI detector detects 23% AI-generated content in a writing sample, it could contain anywhere from 8% - 38% AI-generated content.” 

One underlying assumption of these detection tools is that AI-generated text is unusually uniform and polished. As a result, highly structured and heavily proofread documents, such as legitimate reports or policy papers, are more likely to be flagged as AI-generated. This also means that if a student writes a high-quality report on their own, it may be mistakenly labeled as AI-generated.

Misconception: AI is accelerating environmental destruction, and there's nothing we can do about it.

While it’s true that large AI models consume significant energy and contribute to carbon emissions, not all AI use is equally harmful. In the Boise State AI tool, users can choose smaller, more efficient models for many tasks, dramatically reducing environmental impact. Thoughtful model selection empowers educators and students to use AI responsibly, balancing innovation with sustainability, rather than contributing unnecessarily to environmental harm. When selecting an AI tool to use, consider the resources that will be used and run the queries with purpose. 
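As a rough illustration of what “thoughtful model selection” can look like, here is a small Python sketch that routes lightweight tasks to a smaller, more efficient model. The model names and routing rule are hypothetical placeholders, not the Boise State AI tool’s actual interface; the point is the routing logic.

```python
# Hypothetical model names; substitute whatever your tool actually offers.
SMALL_MODEL = "efficient-small"   # summaries, rewording, simple Q&A
LARGE_MODEL = "frontier-large"    # multi-step reasoning, long synthesis

def pick_model(task: str) -> str:
    # Route light tasks to the smaller (lower-energy) model by default.
    light_tasks = {"summarize", "reword", "define", "translate"}
    return SMALL_MODEL if task in light_tasks else LARGE_MODEL

for task in ["summarize", "analyze-dataset"]:
    print(task, "->", pick_model(task))
```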

Misconception: All the references Gen AI creates are fake.

TL;DR: Not all of the references Gen AI creates are fake. Some of them are, so users should check their references prior to using them. 

More information: Today's large language models (LLMs) can be integrated with external tools, most commonly search engines and vector databases, to reduce the likelihood of hallucinations. This approach is called retrieval-augmented generation (RAG), and it can significantly decrease the chances of AI hallucinating references.
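Here is a minimal RAG sketch in Python. A real system would use a vector database and learned embeddings; in this self-contained toy, a word-overlap score stands in for vector similarity, and the document IDs are made up.

```python
# Toy corpus with made-up IDs standing in for a real vector database.
DOCS = {
    "doc-1": "FERPA protects the privacy of student education records.",
    "doc-2": "Retrieval-augmented generation grounds answers in sources.",
    "doc-3": "Large language models can hallucinate citations.",
}

def score(query: str, doc: str) -> float:
    # Word overlap as a stand-in for embedding similarity.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q)

def retrieve(query: str, k: int = 2):
    return sorted(DOCS.items(), key=lambda kv: score(query, kv[1]), reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(f"[{sid}] {text}" for sid, text in retrieve(query))
    # Instructing the model to answer only from retrieved passages, with
    # citations, is what reduces hallucinated references.
    return f"Answer using only these sources, citing their IDs:\n{context}\n\nQ: {query}"

print(build_prompt("Why do models hallucinate citations?"))
```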

Some references from Gen AI are fake, but many tools will now cite real resources and provide sources. Gemini usually does this in its responses, providing a list of links. Still, users need to look up each item, confirm the citation matches the article cited, read the article, judge its relevance, and ensure the content is accurate. Generative AI may give you a citation, but it may not be the correct one. 
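One practical way to spot-check a citation is to look it up in Crossref’s public index, as in the Python sketch below. A match does not prove the source is relevant or correctly summarized (you still have to read it), but a miss is a strong hint the reference may be fabricated.

```python
import json
import urllib.parse
import urllib.request

def crossref_lookup(citation: str):
    # Query Crossref's public bibliographic search for the best match.
    query = urllib.parse.quote(citation)
    url = f"https://api.crossref.org/works?query.bibliographic={query}&rows=1"
    with urllib.request.urlopen(url, timeout=10) as resp:
        items = json.load(resp)["message"]["items"]
    if not items:
        return None  # nothing resembling this citation exists in the index
    item = items[0]
    return {"doi": item.get("DOI"), "title": item.get("title", [""])[0]}

print(crossref_lookup("Tools or fools Journal of Management Education 2024"))
```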

The Deep Research model in Google Gemini generates a detailed research plan and searches available content on the open web for sources, then synthesizes the content for you and provides a report. 

There are limits, however, to how comprehensive this approach can be; “deep research” may not go very deep at all. While we cannot be sure of every source used, the main sources are those available on the open web or in specific training sets (which may or may not include massive pirated collections of books such as LibGen; whether journals from Sci-Hub are included is unknown). Most AI models have a large blind spot: they neglect the vast majority of research, which sits behind paywalls (and historically, higher-quality journals and research tend to be behind paywalls). Gen AI, in other words, provides a selective, filtered view and will steer people toward some types of sources over others.

As with all Gen AI text, carefully review and check all sources to ensure they are authentic, accurate, and align with your research topic.

Misconception: If students use AI, they won’t learn anything.

When used thoughtfully, AI can help students actively engage with the material, deepen their understanding, and build critical thinking skills. The concern is that AI might just do the work for them, leading to passive reception instead of active learning.

However, just as calculators didn't stop us from learning math, and the internet didn't stop us from learning to research, thoughtfully integrated AI can actually boost learning. AI can help students engage with material in new ways. For instance, it can present different perspectives on a topic, prompting students to compare and contrast ideas, a key critical thinking skill.

Of course, the concern about students simply copying and pasting AI-generated essays is valid. The key difference lies in the intention. Is the AI being used to do the work, or as a study tool to support learning? Imagine a student using AI not to write an essay for them, but as a thought partner to brainstorm ideas, get feedback on their drafts, or even explore different ways to structure their arguments. This kind of iterative process, guided by thoughtful prompts and critical evaluation of the AI's output, can deepen understanding and strengthen writing skills.

Ultimately, the effectiveness of AI in education hinges on how we guide its use. Just like any tool, it has the potential to be misused. But when implemented thoughtfully, AI can become a powerful ally in fostering active engagement, deeper understanding, and crucial critical thinking abilities.

That is why instructors may want to develop AI agents with learning theories embedded and integrate them into courses. Instead of simply providing answers that students can copy and paste, these AI agents are designed to offer support and scaffolding. This highlights that learning to use AI effectively to support teaching and learning is more important than just accessing the answers.
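As a sketch of what “learning theory embedded” can look like in practice, the scaffolding can live in the agent’s system prompt, so it coaches rather than answers. The prompt wording and message format below are illustrative, not a Boise State configuration.

```python
# The learning theory (Socratic scaffolding) is encoded in the system prompt.
TUTOR_PROMPT = """You are a Socratic writing tutor.
Never write the student's answer for them. Instead:
1. Ask what the student thinks the assignment is asking.
2. Respond to drafts with questions and one concrete suggestion.
3. Point to a relevant course concept rather than restating the answer.
"""

def tutor_session(student_message: str) -> list[dict]:
    # Assemble the messages a scaffolding agent would send to the model.
    return [
        {"role": "system", "content": TUTOR_PROMPT},
        {"role": "user", "content": student_message},
    ]

for m in tutor_session("Can you just write my thesis statement?"):
    print(m["role"], ":", m["content"][:60])
```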

Misconception: “AI” means tools like ChatGPT and image generators.

Generative AI tools get a lot of attention, but they’re only one category of AI. Many AI systems don’t generate content at all; instead, they classify data, recommend actions, or automate tasks. For example, Netflix uses AI to suggest a new show based on your watch history, adaptive learning platforms use it to adjust lesson difficulty in real time, and early alert systems analyze student data to flag those who might need support.
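For contrast, here is a non-generative AI example in a few lines of Python: a simple overlap-based recommender. It never generates text; it only ranks existing items, the same family of technique behind “you might also like” features. The data is made up.

```python
# Made-up viewing data: who watched what kinds of shows.
WATCH_HISTORY = {"alice": {"drama", "scifi"}, "ben": {"scifi", "docs"}}
CATALOG = {"Dark Matter": {"scifi"}, "Planet Earth": {"docs"}, "The Crown": {"drama"}}

def recommend(user: str) -> str:
    # Rank catalog items by how many tags they share with the user's history.
    likes = WATCH_HISTORY[user]
    scored = {title: len(tags & likes) for title, tags in CATALOG.items()}
    return max(scored, key=scored.get)

print(recommend("ben"))  # prints the title with the most tag overlap
```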

Misconception: AI will replace teachers.

AI can support teaching, but it can’t replace the relationships, judgment, and adaptability that educators bring to learning. In an online classroom, for example, AI might help automate quiz grading or suggest resources based on student progress, but it’s the teacher who guides discussion, provides meaningful feedback, and creates a sense of community.

Misconception: I can make my course AI-proof.

Students can use Gen AI with varying levels of sophistication, which may mask their use of Gen AI tools. In addition, AI detection tools are inaccurate. The technology that might help fix some of these problems is the same technology that makes AI detection unreliable to begin with. 

Instructors will want to clearly articulate how students can use Gen AI throughout the course, indicating which uses are encouraged and which are out of bounds. If instructors wish to minimize the use of Gen AI on assignments, they should be explicit about what is not allowed. Because AI detection is unreliable, instructors who wish to redesign their online course to minimize suspected AI use should request a consultation from the eCampus Center.

Instructors will want to model the allowable and non-allowable uses of AI. The AI Teaching and Learning website offers an example of how to explain acceptable Gen AI use to students in a syllabus: Sample AI use statement for syllabi – some AI use

 

