About AI mini-series: Plagiarism and Copyright

As the world of generative AI in education continues to rapidly unfold, it is becoming increasingly clear that we must teach our students about AI before we encourage them to use it in our classes.

To make it easier for teachers, we are creating a series of “About AI” mini-lessons. These mini-lessons can be used in your classroom in a Play & Pause format: you press the play button, we teach, and you and your class pause and/or rewind as you follow along.

Alternatively, you might find it useful to watch the video for the mini-lesson idea and then replicate it in your own class, perhaps during that perfect teachable moment. Either way, we hope that you will take the time to teach your students (and yourself!) about AI.

The fifth topic in our series focuses on that ever-present question: is using AI “cheating”? What constitutes plagiarism with AI? This mini-lesson will consider:

  • When is it OK to use AI tools in school?
    • When teachers give you permission 
      • (if you’re not sure, ask!)
    • When you properly cite the AI
    • If the assessment is NOT assessing your writing
  • How to cite AI in MLA and APA formats
    • MLA: “Write prompt here” prompt. Tool Name, Version, Company Name, Access date, Website.
      • Example: “Describe the symbolism of the green light in the book The Great Gatsby by F. Scott Fitzgerald” prompt. ChatGPT, 13 Feb. version, OpenAI, 8 Mar. 2023, chat.openai.com/chat.
    • APA: Company Name. (Access Year). AI Tool Name (Version date) [Large language model]. Website address
      • Note: the slide in the video contains an error in its APA example; use the APA format above instead.
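For classes with a coding bent, the field order in these citation formats can be made concrete with a small helper. This is purely illustrative (the function and parameter names are our own, not part of any official style-guide tool):

```python
# Illustrative helpers (our own invention, not an official style tool)
# that assemble AI-citation strings in the MLA and APA shapes shown above.

def mla_ai_citation(prompt, tool, version, company, access_date, url):
    """MLA shape: "Prompt" prompt. Tool, Version, Company, Access date, URL."""
    return (f'\u201c{prompt}\u201d prompt. {tool}, {version}, '
            f'{company}, {access_date}, {url}.')

def apa_ai_citation(company, access_year, tool, version_date, url):
    """APA shape: Company. (Year). Tool (Version date) [Large language model]. URL"""
    return (f'{company}. ({access_year}). {tool} ({version_date}) '
            f'[Large language model]. {url}')

# Rebuilds the Gatsby example from this lesson:
print(mla_ai_citation(
    "Describe the symbolism of the green light in the book "
    "The Great Gatsby by F. Scott Fitzgerald",
    "ChatGPT", "13 Feb. version", "OpenAI", "8 Mar. 2023",
    "chat.openai.com/chat"))
```

Students could paste in their own prompt and tool details to generate a correctly ordered citation, then compare it against the template by hand.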

This video also suggests two helpful templates to use with students as you assign work in the AI era and discuss what constitutes cheating.

Discuss the “Cheating Spectrum” created by Matt Miller. You can also read an article about the AI Stoplight model here.

Check out the rest of the series from this launch page.

About AI mini-series: Safety & Privacy


The fourth topic in our series focuses on the need to consider safety and privacy issues when using generative AI. This mini-lesson:

  • discusses why we need to consider safety & privacy
  • considers basic differences between consumer AI sites and educational AI sites (with information from Dr. Torrey Trust)
  • lists types of information we need to avoid feeding into our AI prompts (this list comes directly from Eric Curts’ post “AI Tools & Student Data Privacy”)
  • provides ideas for anonymizing information that we use in prompts
  • suggests a “press pause” practice activity for teachers or students to identify and discuss information that should NOT be included in an AI prompt

Sample TMI (too much information) prompt for teachers:

Create a reading comprehension exercise for Sarah Johnson, a 9-year-old with dyslexia who lives with her single mother, Jane, who works nights as a nurse at Mercy Hospital on Elm Street. Sarah receives free lunch at school and struggles with attention deficit hyperactivity disorder (ADHD).

Sample TMI prompt for students:

Write a personal essay about overcoming a challenge. Focus on how I dealt with being bullied throughout middle school by Jessica Rodriguez and her group.  It got so bad that I switched schools in 8th grade and started seeing a therapist to cope with the depression and social anxiety it caused. And then to top it off, my parents got divorced.

Hopefully, these simple sample prompts lead to some good discussions about the types of information that we should avoid using in our AI prompting. One general guideline covers it all: always aim to share the minimum necessary information.
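If any of your students code, the anonymizing idea can even be demonstrated programmatically. Here is a deliberately tiny sketch; the patterns are invented to match the sample prompts above, and real personal-data detection is far harder than a few substitutions:

```python
import re

# A minimal sketch of "anonymizing" a prompt before sending it to an AI tool.
# The patterns below are made up for illustration; genuine personal-data
# detection requires much more than a handful of substitutions.
REDACTIONS = [
    (re.compile(r"Sarah Johnson"), "a student"),
    (re.compile(r"Mercy Hospital on Elm Street"), "a local hospital"),
    (re.compile(r"Jessica Rodriguez and her group"), "a group of classmates"),
]

def anonymize(prompt: str) -> str:
    """Swap known identifying details for generic placeholders."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

tmi = ("Create a reading comprehension exercise for Sarah Johnson, "
       "whose mother works nights at Mercy Hospital on Elm Street.")
print(anonymize(tmi))
# The name and hospital are replaced; the instructional request survives.
```

The point of the exercise is the same as the discussion activity: the useful part of the prompt (the request) stays intact after the identifying details are stripped out.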


About AI mini-series: Misinformation


The third topic in our series focuses on how AI produces misinformation or “hallucinates”. This mini-lesson:

  • discusses the term AI “hallucination” and typical causes
  • includes several examples of AI inaccuracies
  • reminds users of the 80/20 rule: you can let AI help with 80% of your workload, but you must be prepared to do the remaining 20% to ensure accuracy and appropriateness
  • identifies types of AI-generated information that especially need fact-checking
  • suggests perplexity.ai as a safe, log-in-free AI tool that also provides citations
  • provides an activity that teaches users to test AI-generated information with “lateral reading”

Here is a lateral-reading prompt/activity that works well (the idea comes from Holly Clark): pose the prompt “Which 5 countries have the highest life expectancy?” to several different AI tools (ChatGPT, Google Gemini, Microsoft Copilot, Claude, etc.), then use citations (if provided) and class discussion to decide on the ultimate “Top 5” list.

Students certainly need to see lots of examples of how generative AI “gets it wrong”. In fact, you should keep your own “hallucination library”! But most importantly, we need to provide students with the procedures and opportunities to practice identifying when AI is steering us wrong.
