About AI mini-series: Safety & Privacy

As the world of generative AI in education continues to rapidly unfold, it is becoming increasingly clear that we must teach our students about AI before we encourage them to use it in our classes.

To make it easier for teachers, we are creating a series of “About AI” mini-lessons. These mini-lessons can be used in your classroom in a Play & Pause format… you press the play button, we teach, and you and your class pause and/or rewind as you follow along.

Alternatively, you might find it useful to watch the video for the mini-lesson idea and then replicate it in your own class, perhaps during that perfect teachable moment. Either way, we hope that you will take the time to teach your students (and yourself!) about AI.

The fourth topic in our series focuses on the need to consider safety and privacy issues when using generative AI. This mini-lesson:

  • discusses why we need to consider safety & privacy
  • considers basic differences between consumer AI sites and educational AI sites (with information from Dr. Torrey Trust)
  • lists types of information we need to avoid feeding into our AI prompts (this list comes directly from Eric Curts’ post “AI Tools & Student Data Privacy”)
  • provides ideas for anonymizing information that we use in prompts
  • suggests a “press pause” practice activity for teachers or students to identify and discuss information that should NOT be included in an AI prompt

Sample TMI (“too much information”) prompt for teachers:

Create a reading comprehension exercise for Sarah Johnson, a 9-year-old with dyslexia who lives with her single mother, Jane, who works nights as a nurse at Mercy Hospital on Elm Street. Sarah receives free lunch at school and struggles with attention deficit hyperactivity disorder (ADHD).

Sample TMI prompt for students:

Write a personal essay about overcoming a challenge. Focus on how I dealt with being bullied throughout middle school by Jessica Rodriguez and her group. It got so bad that I switched schools in 8th grade and started seeing a therapist to cope with the depression and social anxiety it caused. And then to top it off, my parents got divorced.

Hopefully, these simple sample prompts lead to some good discussions about the types of information that we should avoid using in our AI prompting. In general, follow this guideline when prompting: always aim to share the minimum necessary information.
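To make that concrete, here is one possible anonymized rewrite of the teacher prompt above (just an illustration of the “minimum necessary information” idea; adjust it to your own tool’s privacy terms): “Create a reading comprehension exercise, at an appropriate reading level, for a 9-year-old student with dyslexia and attention challenges.” The student’s name, family situation, address, and free-lunch status are all removed because the AI doesn’t need them to do the task.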

Check out the rest of the series from this launch page.

About AI mini-series: Misinformation

As the world of generative AI in education continues to rapidly unfold, it is becoming increasingly clear that we must teach our students about AI before we encourage them to use it in our classes.

To make it easier for teachers, we are creating a series of “About AI” mini-lessons. These mini-lessons can be used in your classroom in a Play & Pause format… you press the play button, we teach, and you and your class pause and/or rewind as you follow along.

Alternatively, you might find it useful to watch the video for the mini-lesson idea and then replicate it in your own class, perhaps during that perfect teachable moment. Either way, we hope that you will take the time to teach your students (and yourself!) about AI.

The third topic in our series focuses on how AI produces misinformation or “hallucinates”. This mini-lesson:

  • discusses the term AI “hallucination” and typical causes
  • includes several examples of AI inaccuracies
  • reminds users of the 80/20 rule: you can let AI help with 80% of your workload, but you must be prepared to do the other 20% to ensure accuracy and appropriateness
  • identifies types of AI-generated information that especially need fact-checking
  • suggests perplexity.ai as a safe, log-in-free AI tool that also provides citations
  • provides an activity that teaches users to test AI-generated information with “lateral reading”

Here is a lateral-reading prompt/activity that works well (this idea is from Holly Clark): “Which 5 countries have the highest life expectancy?” Use this prompt in several different AI tools (ChatGPT, Google Gemini, Microsoft Copilot, Claude, etc.) and use the citations (if provided) and class discussion to decide upon the ultimate “Top 5” list.

Students certainly need to see lots of examples of how generative AI “gets it wrong”. In fact, you should keep your own “hallucination library”! But most importantly, we need to provide students with the procedures and opportunities to practice identifying when AI is steering us wrong.

Check out the rest of the series from this launch page.

About AI mini-series: Unimaginative

As the world of generative AI in education continues to rapidly unfold, it is becoming increasingly clear that we must teach our students about AI before we encourage them to use it in our classes.

To make that easier for teachers, we are creating a series of “About AI” mini-lessons. These mini-lessons can be used in your classroom in a Play & Pause format… you press the play button, we teach, and you and your class pause and/or rewind as you follow along.

Alternatively, you might find it useful to watch the video for the mini-lesson idea and then replicate it in your own class, perhaps during that perfect teachable moment. Either way, we hope that you will take the time to teach your students (and yourself!) about AI.

The second topic is a fun lesson idea to show students how unimaginative and predictable AI really is. This mini-lesson:

  • has students brainstorm 3 lists of terms: dog names, pig names and famous New York City landmarks (or Paris, London, Los Angeles, etc.).
  • has teachers use the same simple story prompt (see below) in several different AI generators
  • encourages classes to compare the outputs from the prompts, but also to compare the outputs to the original brainstorm list – there will indeed be similarities

Here is a simple story prompt that works well (you can change the output grade level to suit your class): “Tell me a story of a dog, a horse and a pig having a grand adventure in New York City. Write it at the fifth-grade reading level.”

Showing students the similarities and predictability of AI helps them to understand why their work may be easily identifiable as AI generated.

Check out the rest of the series from this launch page.

Fun with AI: A-Z

Of course, this slide deck will be constantly out of date, but you still might find some new ideas.

Slide deck updated March 2024.

Get the slide deck here: bit.ly/FunWithAIA-Z

About AI: 10 ways to be a Human AI Detector

Many school districts and institutions are suggesting that their teachers and faculty NOT use AI detectors. You can google many supporting documents. Here is one that gives a good overview. Basically, AI detectors are unreliable, and we risk falsely accusing students. Additionally, certain groups of students, like English language learners, are more likely to be “detected”. So what’s the alternative?

Check out this video (and the post below) for 10 ways that you can use your own skills to be a Human AI Detector.

1. Prompt… Prompt… Prompt – try out your own assignments as prompts in AI.

  • You must use generative AI regularly (and in your content area) to more readily identify its style
  • You might also consider adding an “identifier” sentence in white font between the paragraphs or steps of your assignment. If a student copies your assignment into an AI tool and carelessly pastes the output back to you, it will leave these “fingerprints”. Add a sentence something like this:
    • Why hotdogs are the most nutritious food.
    • How are rainbows formed?

2. The student response is too long

  • Significantly longer submission than required
  • Uncharacteristic length for student

3. The writing style is different from the student’s handwritten or previously submitted work:

  • the “voice” is different or missing
  • the student can suddenly write correct sentences
  • as students get better at AI, they will ask AI to add errors to their writing, or to write at a lower grade level, to avoid suspicion

4. The student is off topic or has used suspicious examples

  • The general topic is close, but doesn’t address the specific assignment instructions
  • Uses examples that you didn’t discuss in class
  • Uses examples that the student is unlikely to have encountered
  • Uses examples that YOU have never heard of

5. The writing adheres to the topic too perfectly

  • Each part of a complex multi-part assignment is addressed much more specifically than is typical for the assignment
  • The response fits the instructions “to a tee”

6. The writing is very generic.

  • Missing topic-specific examples
  • Missing course-specific vocabulary that you would normally see
  • Uses synonyms or equivalent terms you have never used in class
    • e.g. ✅ free-market economy, ✅ capitalism; ❌ self-regulating market

7. As you use AI more and more, you will notice that it typically has a “recognizable” voice

  • “stilted cheeriness”
  • sounds phony: read a few AI-generated examples and you will catch on quickly

8. The format looks like a typical AI output – use of headings, lists and bullets

a) Lots of headings (e.g. “Body Paragraph 1”)

b) The lists are exceptionally consistently formatted and well organized

c) Lots of point form with many colons (typical AI outputs are full of them). Additionally, if I were to copy an output from ChatGPT into a document, it would mark headers with **asterisks**. If you wonder why students have suddenly started using so many asterisks in their work, it’s because they’ve been chatting with AI.

d) Content-suggestion brackets (e.g. [insert your own example here]) haven’t been removed or replaced

9. Blatant inaccuracies

  • If quotes are provided, check them – AI will confidently make up quotations
  • It is still important to teach students internet search skills (see the Search Skills Blitz playlist)
  • Swapped character traits or facts

10. Try these technology assistants:

a) Version history in Google Docs or in Microsoft Word

b) Draftback extension. See the overview, find it in the Chrome Web Store, or watch the end of the video above.

c) Revision History extension. See the overview, find it in the Chrome Web Store, or watch the end of the video above.

d) The AI tool Brisk also has an “Inspect Writing” tool that will do some of the same things. Be sure you are looking more at the student’s workflow than at guesses about how likely the work was written by AI.

So there’s been AI use that goes against assignment instructions? Now what?

  • Use suspected cases of AI as conversation starters rather than grounds for accusations
  • Treat them as a teachable moment to explain the problems with submitting AI-generated work (see the “About AI” series link below)
  • Normalize responsible student use of AI
    • we need to TEACH & MODEL what this is
    • Think of AI as a First Draft
    • “Prompting and Pasting is Pathetic”
    • 80/20 rule – let AI do 80% of the hard work, but the human/teacher/student still needs to put in the other 20% to make the work effective and correct

Check out the rest of the growing “About AI” Play & Pause / co-taught lesson mini-series here.

I’m sure I’m missing lots of great “Human AI Detector” strategies – please add them in the comments and I’ll add them to this post!

Navigating the Generative AI Frontier in the Classroom

I recently had a great opportunity to present an AI session called “Navigating the Generative AI Frontier in the Classroom” for ISTE x TakingITGlobal. You can catch the webinar recording here on YouTube.
Extra bonus for 🇨🇦 Canadian educators! Free ISTE Membership & ISTE Books by completing the survey (look for the video) at info.iste.org/en/takingitglobal.

Access the slide deck at bit.ly/kannAIstudents. Access the resource Padlet here.

Watch it here!

About AI mini-series: Bias

As the world of generative AI in education continues to rapidly unfold, it is becoming increasingly clear that we must teach our students about AI before we encourage them to use it in our classes.

To make that easier for teachers, we are creating a series of “About AI” mini-lessons. These mini-lessons can be used in your classroom in a Play & Pause format… you press the play button, we teach, and you and your class pause and/or rewind as you follow along.

Alternatively, you might find it useful to watch the video for the mini-lesson idea and then replicate it in your own class, perhaps during that perfect teachable moment. Either way, we hope that you will take the time to teach your students (and yourself!) about AI.

The first topic teaches students about the bias that arises from how AI is trained. This mini-lesson:

  • introduces a brief history of AI and gives you some talking points for how AI has changed over the decades
  • introduces the vocabulary terms “input” and “output”
  • offers a “Google image search” activity that helps students to visualize bias in AI training data

Check out the rest of the series from this launch page.

About AI: a mini-series

Use this short link to access this launch page: bit.ly/kannAIabout

As the world of generative AI in education continues to rapidly unfold, it is becoming more and more clear that we must teach our students about AI before we encourage them to use it in our classes.

To make it easier for teachers, we are creating a series of “About AI” mini-lessons. These mini-lessons can be used in your classroom in a Play & Pause format… you press the play button, we teach, and you and your class pause and/or rewind as you follow along.

Alternatively, you might find it useful to watch the video for the mini-lesson idea and then just replicate it in your own class, fitting it in when the time is right. Either way, we hope that you will take the time to teach your students (and yourself!) about AI.

Watch this quick series overview below and check out (or subscribe to) the YouTube playlist here.

Topics will include:

  • Bias
  • Unimaginative (and predictable)
  • Misinformation (“hallucinations”)
  • Safety & Privacy

BONUS Episode: 10 Ways to be a Human AI Detector

More thoughts ABOUT AI: check out this webinar with Cammie + ISTE + TakingITGlobal. Canadian educators can complete a survey at this link (January 25th recording) for a free ISTE Membership (value of $95 USD).

My 2023 in Review

In a blog like mine with a fairly low readership, one of its most useful purposes is that of a historical map: a trail of, and witness to, the learning and creating that I’ve done. Creating this list helps me express my gratitude and appreciation for all the people who helped me reach my goals and learn and grow.

2023 was a good year. Here’s a recap (before January 2024 passes!) … well, this post apparently didn’t publish in January, so we’ll go with “better late than never.”

My one word was Engage. Considering that I spent most of the year thinking that my word was Amplify (from 2022), I might just recycle the 2023 word and try again.

This Blog: “What I Learned Today…”

  • # of posts: 42
  • # of words: 18 900 (not my highest)
  • Views: 18 101; Visitors: almost 10 000

YouTube

  • Published videos: 68
  • YouTube Live videos: 11
  • Views: 30 681 (only 11% from Canada)

Podcasts (Prairie Rose Possibilities, The Podcast)

Podcast guest: 3 times

Professional Development Offered:

  • sessions at ISTE23 in Philadelphia: 6
  • ISTE Digital Escape Rooms created with Greta Sandler, Amanda Nyguen and Maggie Pickett: 2 (topics of UDL and Digital Citizenship)
  • AI sessions delivered virtually: 12; in person: 6; consultations with AI company founders: 3
  • Technology training in Africa (Uganda and Zambia): 7 days
  • Sessions presented or hosted for Logics Academy: 29
  • Sessions presented at the regional level: 4
  • PD video series created:
    • After School AI mini-series (6 episodes)
    • Google Site mini-tutorials (15 videos)
    • Search Blitz (10 episodes)
    • Clean-Up Blitz (4 episodes with 4 more to come)
    • Fun with Google A-Z (completed January 1, 2024)
    • Outlook email tips (4 episodes)
    • PRPS Word Work series (13 videos)
  • Giant Map of Canada: 7 schools, 30 classes, 865 km

Professional Development Learning highlights

  • Attending ASCD for the first time (Denver in March)
  • Learning and presenting at ISTE23 in Philadelphia (June)
    • Receiving the ISTE Silver Volunteer Award
  • Being selected to attend the Google Champions Symposium in Dublin, Ireland (November)
  • Watching dozens and dozens of hours of webinars about generative AI
  • Becoming one of 2 Canadian Ambassadors for StickTogether

Other interesting things:

  • Hosted BreakoutEDU physical kit games 37 times (and got to hang out with CEO Adam Bellow at ISTE23 in Philadelphia)
  • Virtual Reality Sessions hosted: 12 (Oct- Dec)
  • Canva documents created or collaborated on: 127
  • My first full year on Facebook and the start of dabbling in LinkedIn and Threads
  • In the garden, it was a good year for tomatoes, peppers, cilantro, carrots and beans; it was a dismal year for Brussels sprouts, beets, and romaine lettuce.

That’s my professional year of creating and delivering content in a nutshell. Of course, not everything is quantifiable, but the value of a space like this is as a reflection tool. There will be days when I need to peek back here to remember that I am creating content and impacting at least a few folks.

On AI grading of student writing

I was an English language arts and social studies teacher for 20 years. The worst part of my job by far was evaluating essays. Even if we resist machine grading by AI for now, it will eventually be here to stay. And even if the AI grading isn’t perfect, we can work on our prompts to make it better and better. As a human grader/evaluator, despite my best intentions, my essay evaluations were likely uneven. If I had 40 essays to mark, while they were hopefully fairly consistent, I’m sure there were still inconsistencies, and I’m certain that I gave better feedback to some students than others.

Perhaps the most important thing to consider is that it takes a human a long time to grade 40 or 60 essays, especially if you aim to provide helpful feedback. If AI can provide reasonable, consistent, and immediate feedback to our students as writers, I don’t think it is fair to ask them to wait two weeks while a human marks a whole stack of essays to hand back all at once.

Perhaps the human marks the final essay, or the final draft of the essay. In between, however, I think we need to consider teaching our students how to ethically use generative AI to access immediate and specific feedback about parts of their writing.

I know there are lots of other points to consider, but as a lifelong English teacher, I believe AI is a path to a better way of assessing.

Thoughts?