NOTE: The following has been edited for length and clarity.
Larry: Hi, I’m ConnectSafely CEO Larry Magid, and this is Are We Doing Tech Right?, the twice-monthly podcast from ConnectSafely. We always talk with luminaries, leading experts, and brilliant people, and this time we’re fortunate to have such a person right here on our own team: our very own Kerry Gallagher, who is ConnectSafely’s Director of Education but also serves as a full-time Assistant Principal for Learning.
I hope I get this right, Kerry, at St. George Prep School, and a teacher. You’re in the classroom, you’ve been a veteran classroom teacher, so who better to talk to about this emerging AI technology, which everyone is talking about but I’m not sure anybody fully understands. Along with asking ChatGPT, why don’t we just ask Kerry Gallagher: how is AI being used in the classroom, and what are your thoughts on it?
Kerry: Thanks for asking. I think one of the great things about the way that I serve my school is that, as the Assistant Principal for Teaching and Learning, my job is to really dive deep into what’s the magic that needs to happen in the classroom in order for kids to engage with the content we need them to learn and build the skills we need them to develop over time, so that they’re ready for the workplace and the careers of the future. And one tool that has been greeted with both great excitement and great concern is generative AI.
So we’ve had AI for a while, right? We’ve had smart speakers and we have our phones and the various AI services that are provided there, but generative AI presents kind of a different conundrum because teachers work hard at crafting assignments and prompts for their students to respond to. Whether they’re in the form of math problems or, you know, an essay prompt or something like that, AI can respond to those and answer them. And that’s the kind of thinking we’re trying to get our students to do. And so it’s led teachers to start asking some questions. What does this mean about the future of school? What kind of learning is valuable now? Will that same kind of learning be valuable in the future if generative AI is able to do some of the things that we’ve been trying to teach our little humans to do?
Larry: You know, I’m old enough to remember when calculators didn’t exist and when they came in and I remember when the four-function calculator came on the scene. There were some educators who thought any use of a calculator was cheating. But now we’re at the point where I think people recognize that calculators can help take away some of the drudgery of arithmetic and lead students to discover and experience mathematics at a higher level. Do you see any connection between that notion and where we are today?
Kerry: Yeah, I mean, I’ve heard that in many other metaphors in terms of, like, advancements in the past and how they were initially greeted with concern. I think there are instances, though, where using a calculator still is considered cheating, right?
Absolutely. And the ability to use a calculator has allowed our math students to advance to more complicated math than we did when we were in school, because they can move through some of the steps faster with the help of the calculator. So I think it’s a both-and, right? I think we’re probably going to move in a similar direction with generative AI.
Although it’s hard to know, right? The potential for what generative AI could do in the future is not something we really know, as compared to what it can do now. We do know the quality of its output is always improving. Some of the initial answers to questions that I posed to AI platforms were mediocre at best. The information may have been mostly right, although some of the details may have been wrong, and the quality of the writing, in terms of the prose or the rhetoric that we would ask of students in an academic setting, was, again, mediocre. It sort of met the basic guidelines that we would want in a paragraph.
But we really want our students to understand that there’s an art to written communication, where you add your own voice and your own value as a human. I have to say, though, it’s gotten better, and it’s improved in quality. So that’s where we are right now. We feel both excited by the opportunity and also maybe intimidated by the responsibility of figuring out where we’re going to draw the line: when can generative AI help our students move through learning some skills and content a little faster than we did, because they have this tool, and when will using generative AI actually be considered cheating, because it circumvents the learning process of a really essential human skill, a skill that needs to be developed through practice?
Larry: When we talked about this a few months ago, you said that you actually had a student using one of these products, and you could tell because you knew your students and you knew their writing style. Does that scale, and will it still be true two years from now? It was great that you could do it then, but will you be able to do it going forward?
Kerry: I will. You know why? First, I just started teaching a new class of students. As you shared with our audience, I work as a school administrator.
But I also get to teach one class, so I get to apply some of these theories and practices with students myself and test them out a little bit. One of the things I did in the first two weeks of school this year: one of their first assignments was a writing sample, just two paragraphs. I gave them feedback and asked them to make revisions, and now I have their best effort with feedback from their teacher, their own writing, in their own voice, and without the aid of any generative AI, because they did it in class.
They had to do it with me observing them. They were using devices, but I was supervising them with some of the monitoring software that we have, so I know it’s truly their own work. Now, when I send them home to do some kind of written work, if the style, the voice, or the quality of the work isn’t a match for what they’ve submitted to me in that live writing sample, I can make the comparison and start asking the student questions, like, “Wow, this looks really different from what I’ve seen before. I’d love for you to walk me through your writing process this time because it produced a significantly different quality of work. Let’s see if we can figure out what your secret is.” That’s how I ask the question, so I’m not accusing them of anything; I’m demonstrating curiosity. And my experience with students who did choose to use generative AI is, number one, they own up to it really fast, because they realize that I know them better than they think I do.
And that’s really the key here, right? Generative AI is a machine. We are humans. Human relationships are the foundation of a high quality education. The student, if he knows I know him, he is going to own up to it quickly because he will know that I’ve kind of seen through the ruse, right?
Larry: I should note for listeners that you teach at a boys school, which is why you’re using the male pronoun. Your students are all boys.
Kerry: Yeah, yeah. One more point I wanted to make is about the students who use generative AI to complete assignments. Maybe I’m too optimistic, but I like to assume they do it because they’re overwhelmed and stressed, not because they’re lazy. They’re doing it to try to get one of their many, many tasks done so that they can move on to the next thing. Maybe the next thing is literally sleep, because they’re exhausted.
Maybe it’s getting to a practice for a coach. Maybe it’s a boatload of homework, and they have to check all the boxes, and it’s getting late at night. So I think there’s an opportunity there for another conversation about the child’s wellness and whether there are other supports they need, in addition to being held accountable to academic integrity practices.
Larry: Of course, as you know, Kerry, in addition to running ConnectSafely, I’m also a professional journalist who has written lots of books and thousands of articles. If I wrote something for you on a typewriter, or on a computer that simply had word processing and no advanced tools, there’s a pretty good chance there would be a spelling error or two and maybe even a couple of grammatical errors. But anything I would write and submit to you today is going to have few if any spelling errors and few if any grammatical errors, because I’ve got Microsoft Word or something like it, because I’ve got these tools.
Is that cheating? How does that compare to what you’re talking about with generative AI?
Kerry: Oh, I think it depends on the parameters of the assignment, right? If part of my assignment expectations is that I want them to focus on the skill of spelling and grammar, then using spelling and grammar autocorrecting tools is cheating.
But if that is not a skill I’m specifically grading them on for that assignment, then it’s not cheating. It depends on the intentions behind the assignment. Not every teacher is grading every skill in every assignment. We focus on one or two skills per assignment and let the other mistakes go, because otherwise nothing would ever be good enough, and we would deflate our students’ willingness or desire to learn if we picked on any and every error they made.
Larry: Thinking back on my experiences in elementary school, they certainly did that back then. No, but it’s true. It’s just like the same thing with a calculator. If your purpose is to teach the ability to do, I don’t know, multiplication or addition, the calculator would be cheating. If your purpose is to do statistical analysis, then the calculator would be a tool that any smart person would want to use for that kind of work.
So let’s talk about the ways in which you envision it either currently or in the future being legitimately used as a way to enhance your students’ learning and how they’re showing you that they’re doing a competent job expressing their knowledge.
Kerry: I want to go back to my previous answer: it all depends, right? If the goal of a particular assignment is for students to really hone their revision skills on their own work, they shouldn’t be using an AI platform to revise their writing. They should be doing that themselves, because that’s the skill the teacher wants them to practice.
But if the skill the teacher wants them to practice is putting together a nice, cohesive five-paragraph essay, focusing on thesis statements with supporting evidence but not necessarily tweaking each individual paragraph to grammatical perfection, then it’s more about the content and how the content is organized.
You could take a well-organized draft and hone the grammatical elements of it using AI, because the teacher wants to focus on the content, maybe not the grammar. So it really does depend, and what I think is important for teachers to do in this new context is to be explicit and proactive about telling students that.
Part of my expectation is that any revisions you are doing on these drafts are based on my human feedback, and that you’re taking that feedback and making the revisions yourself. You’re not relying on any kind of revision software, something like Grammarly, or AI to do those revisions.
I will tell you, as a teacher I have said to them: listen, I want you to draft a really solid paragraph about the advancements that happened in the area of art during the Renaissance. Write the paragraph. If you submit a paragraph to me that has missed capitalizations, punctuation errors, and misspellings, well, you have software right on your device that can make those revisions for you. I am grading your ability to express your understanding of the content. There’s no reason you should lose points on careless grammar errors when you have this tool at your fingertips. But it’s on me as the teacher to very proactively tell them when it’s okay to use it, and under what circumstances, for each assignment, until they develop their own inner calibration of what’s ethical and what isn’t. We can’t assume that kids already have ethics. We have to teach them actively by modeling it.
Larry: So let’s imagine for the moment you were teaching woodshop instead of what you teach, and you happened to be one of these people trying to replicate, I don’t know, trying to rebuild some ancient palace.
And actually this happened in the UK, I think, when one of the big palaces burned down. They used the very tools, as much as possible, that were used to create those palaces. So your purpose was to teach kids how to use a handsaw, how to use a hammer, and how to use very primitive, old fashioned tools.
You would have one style of teaching. But imagine your real goal was to simply make them really good carpenters in today’s modern world. You would teach them how to use an electric saw and whatever other technologies are available. I don’t know much about carpentry, but whatever other modern technologies are available to make it easier to create better products, right? So that brings back the question. How do we integrate AI into a learning environment so that it can be used for what is beneficial and not used for what some people call cheating?
Kerry: I think we’re still working on that, right? That’s the big question. I can give you some examples of how it’s been integrated at our school, and I know there are many more examples out there. If there are teachers listening to this, they’re going to say, “Oh, I wish I had talked to her about my example before she recorded this podcast so that she could share it.” But our AP European History teacher found a generative AI platform that was developed last spring specifically to help students respond to mock AP exam prompts, submit their answers to the AI tool, and get feedback.
And the students and the teacher were really enthusiastic about this tool, because the AI platform could give more students more feedback than a human could; she just didn’t have the time to read that many drafts of AP responses. And what we know about success on the AP test is that repetition in practice is what puts students in a position where they’re more likely to be successful and earn that five, which is the max score you can get on an AP test.
So they were able to get more practice, because the AI platform could give them more repetition than the teacher alone could in the hours she had. That’s one example, and a really simple one. And our students went into that AP test feeling more confident.
I also know I have teachers who are coming into class one day, and they know they have a lesson on whatever topic it is. They’ve been using that lesson for the last three or four years. They know it needs something new, but they’re kind of out of ideas; the creative juices aren’t flowing that day.
Maybe they only had decaf coffee at home. So they plug in, “I’d love a lesson on this,” and what ChatGPT responds with is usually an okay, decent lesson plan. But a skilled teacher can take a little bit of that and turn it into something amazing, just because ChatGPT’s response plants a seed or gives that one little inspirational idea that the teacher can turn into something great.
And that’s where I think the great uses are right now. It’s not that the generative AI is doing the work; it’s inspiring the work.
Larry: It’s funny you say that because I remember saying this many, many years ago in the early days, I’m showing my age here, but in the early days of personal computers, and I made the comment that a word processor doesn’t turn a mediocre writer into a great author any more than a food processor turns a mediocre cook into a great chef.
But obviously, great writers use word processors, though maybe some still use typewriters, and great chefs typically use food processors. And it occurs to me, and this is just an aside that I’ve thought about mostly in professional terms, but maybe it applies to education as well: if you’re a mediocre writer or a mediocre creator, whatever job you do, you might have reason to feel threatened by generative AI.
But if you’re really good at your craft, not only will you not be threatened by it, but the technology might make you even better at your craft. Those are my own personal thoughts on this; I’m curious about your response, both as an educator and as somebody who works with students.
Kerry: Yeah, I mean, I’m not sure that I could agree with that statement more.
One of the things that we have access to now, within the last 10 years, are some pretty robust data dashboards that track student understanding over time. If students interact with an online platform that gives them practice answering certain questions, analyzing passages, solving equations, or what have you, we are able to see instantly, on the teacher side of the dashboard, which types of questions students are struggling with: whether it’s the way the question is asked, the format of the question, or questions on a specific topic.
Let’s say it’s a geometry class. Is it questions about area that befuddle them, or is it volume that’s the problem? So it’s not necessarily the score on the whole test; the dashboard can break the data down into bits and pieces, auto-sort it for us, and let us apply our own filters. Could teachers track data like this before?
Yes, but do you remember those grade books that were basically spreadsheets with a spiral binding? That took a lot of math to track that kind of data. Now we can do it instantly by clicking a button and watching bar graphs change colors and move, and then we’re able to respond faster to what our students’ learning needs are.
I think AI is going to do something like that for us in the future. I think it’s going to be a game changer. There are concerns with tracking data like that, though, right? It means feeding information created by students into a technology cloud, and that means the company that operates that cloud has a responsibility to protect information that comes from minors. So from a policy perspective, we need to be careful about the companies we trust with that kind of information. And I think that’s what’s coming up with generative AI as well. If we’re having children input information into these platforms, is that feeding that generative AI brain with the child’s information?
And do we want our children to be a part of the development of that technology? I think that’s another ethical side of this. The teaching and learning part, the academic integrity part, when it’s ethical to use it in schools and when it’s not, I actually think that’s almost the easier conversation. The most robust conversation is the one about who has access to what’s been inputted, what ages are appropriate, and how it’s used.
I mean, that’s where all the lawsuits are coming from too, right? There was a group of authors that just filed a lawsuit saying that generative AI platforms are producing literary works in the style of those authors, basically mimicking their work.
The author is saying, this is something I’ve taken decades to really cement and narrow into an artistic expression of who I am, and it’s just being mimicked by the AI. I think that’s kind of where the debate is.
Larry: Not to mention that ChatGPT doesn’t typically cite its sources, which to me is very frustrating, and I think ethically very challenging, because all of us in education, in journalism, and in any other field should be citing our sources when we say anything that doesn’t come out of our own heads.
Kerry: Yeah, we have an obligation to do lateral reading and corroboration. We have an obligation to look beyond the one initial source where we tapped in the question, whether it’s Google or ChatGPT, and then go out and find other sources to confirm it.
Larry: What about the impact on misinformation? I’m sure there’s nothing new about misinformation, but there are people who deliberately spread it, and there are now tools that enable people to spread misinformation more effectively. And as you know, it’s not only about things that are happening contemporaneously; there’s even misinformation about history, about things that happened in the past.
Kerry: Well, in terms of misinformation, the other thing that’s important to note is bias, right? There’s bias in everything. ChatGPT is being fed information that was produced by humans, who each inherently have their own implicit bias. When you mention misinformation in history, there are things I used to teach that I would not teach as fact now, because more information has revealed itself to help me better understand the history.
And people say, “Oh, history doesn’t change, so you can use the same textbooks.” Actually, it does, right? It does, because more information is revealed. More perspectives on what happened are revealed. More context helps us understand it. So I think when we’re looking at information that is given to us or generated by AI, we need to be asking really thoughtful questions.
What were the sources this could have come from? What ideas or perspectives are maybe not represented in this answer? Where can I find those perspectives to add to it? And I think in particular, there just isn’t the breadth of information out there on certain times in history from groups that are underrepresented in history books.
And we need to make sure that we take the responsibility to make an extra effort to make sure those voices are heard.
Larry: And generative AI could help with that, because it can scour many more information sources than your typical person could in a library. I want to ask you a question, and we need to wrap up soon, but I want to think about the broader implications of what you as an educator do.
When I think back on my education, I didn’t take a single course in grade school, middle school, college, graduate school that was specifically about what I actually do for a living today. You know, I never took a single course specifically training me to do what I do. Yet, I can think of many classes that I took which helped prepare me for what I do.
I remember once giving a talk at a university and I said, how many, this was back in the early 2000s, how many people here took a class on how to be a webmaster? And of course, nobody raised their hand. How many people here are thinking you might want to do that kind of work? And several people raised their hands.
I’m just curious, when you think about not just ChatGPT but all of the technologies that are flooding the world today, how are you able to think about what your students are going to be doing four, five, 10, 15, 20 years from now, and what it is you are really doing to prepare them? I don’t think the specifics of how you use this particular app are necessarily going to be an important skill 20 years from now, but there may be some aspect of it that is.
Kerry: Yeah. What I love about being an educator and a mom of teens is that they’re at this really important time when they’re not only developing a sense of identity but also figuring out how to leave an imprint on the world that matters. So the most important thing we can do in schools is make sure we are teaching them the skills they will need in order to be a good human. And if you are a good human who has a really solid calibration about which choice will do the most good in the world, then you are more likely to make great choices in any future career that you have.
Of course, we also want you to be successful and make money and all of those things. But if you feel fulfilled and you feel like you’re making a difference, you’re more likely to find something that’s a good fit for you and kind of take joy in doing it every day. And if they’re going to be the developers of those future technologies that are even more powerful than generative AI, we want them to have that good human at the core, making those developments so that they can anticipate some of the ethical problems or questions or the biases that might be involved because we’re always going to have these questions, like what, what is next?
How are we going to face it? How are we going to interpret it? So to me, those are the skills. Okay, yeah, you submitted a paragraph about the advancements in art during the Renaissance, and yeah, it’s correct, but did you do it the ethical way? And why does that matter? I think that’s really what’s at the core of being an educator.
And that’s why human relationships matter more than almost anything else. Like you said, if you’re a great teacher who has built great relationships, really cool technology is going to make you even better, right? But if you’re a mediocre teacher and you’re not good at relationships, really great technology doesn’t mean that kids are going to learn a ton. It’s that human element that’s the most important.
Larry: And what’s amazing about this conversation is that we could have had it 50 or a hundred years ago; the technologies would have been different, but the concepts would have been the same.
And it makes me realize that the title of this podcast, Are We Doing Tech Right?, probably has a lot more to do with human elements, decency, character, and ethics than it does with technology.
Kerry: Yeah. I look forward to more conversations where we get to talk about how we apply decency and ethics to all the different pieces of technology. When we start asking this question, I think that means we’re probably doing it right. If you’re not asking the question and you’re not struggling with it, then is ethics really a part of your every interaction?
Larry: And you’re absolutely right. One of the things I remember is that even Mother Teresa questioned her faith, and I’ve often thought about that. It has nothing to do with religion; it has to do with the fact that asking the questions, rethinking, and always questioning even your own direction contributes to making sure you’re doing it right.
Kerry, as always, it’s been a great conversation. I really appreciate you taking the time. And I just want to say to you as witnessed by everybody listening, it’s such a great honor and pleasure to have you as part of our team.
Kerry: Oh, well, I feel the same way. I feel like I’ve learned so much from you over time. I mean, you picked a topic that I could talk about for hours. So thank you so much for inviting me.