We did a unit on the ethics of AI at school this past academic year.
First, I asked students if anybody had used it already and, if so, how. A few students said they used it to help study by summarizing texts ("What were Wilson's Fourteen Points?") or to get help solving mathematics and physics problems.
I asked them to come up with some ideas about what was right and what wasn't. We had a good discussion, I presented a slideshow, and they made some attractive posters we put on the walls.
In deliberate irony, and as an experiment, the lesson plan itself was generated by AI: I used it to help organize a slideshow on using AI ethically. Honoring our consensus, I cited it as a tool.
One of my students said that his parents, both teachers, use AI in preparing for their classes. I have written at least one unit lesson plan with AI myself: I told it what elements and concepts to include, and I was satisfied with the results, though I didn't continue the practice.
Recently a college student filed a complaint against her school, demanding a refund of her tuition for a course in which, she alleges, the professor created his lessons with AI. She wants a humane relationship with a teacher, one based on intelligence, knowledge, and classroom interaction.
I called a boy up to my desk one day and quietly asked him if AI had written his assignment. The student, a smart and easygoing kid, said it had "helped." I said don't do it again. He didn't, and we shook hands on his last day in my class.
I've since seen another assignment where I know some kids submitted work without attributing it to AI. I think they copied one of my online assignment texts and asked AI to summarize it. On that particular assignment, ChatGPT misread the text just a bit, and I picked up the same words in several responses.
One big concern in education is the authenticity of what people know. If you want to learn what folks know about a subject, just sit down and talk to them for a while. The International Baccalaureate curriculum includes an oral component, for which I and other IB teachers sat in one-on-one oral examinations with all our students.
But that's very expensive. AI is incredibly cheap, and it can save time on activities that are beyond rote -- planning and communicating and documenting.
As for students, some can use it as a tool to push ahead, to learn and develop knowledge and skills in great depth. Others go through the motions and get a superficial and probably temporary understanding of what's on the test. So it has always been, and so it shall be until the AI guardian angel avatars intervene.
You know that you can usually trust AI, but not always. When it fails, we call it "hallucinating." Recently the Chicago Sun-Times and the Philadelphia Inquirer published a plausible but totally fictitious reading guide recommending nonexistent books by real authors, complete with blurbs. It was written by AI, doubtless in an incredibly short time, and for even less pay than an East Texas columnist.
If they were human we would call it lying, or, at best, fibbing; but they're not, so we call it hallucinating. Humans, too, can hallucinate plausible nonsense. Some call it a gift for fiction. Some call it political rhetoric.
AI elicits fear from some, and enthusiasm from others.
It's natural to worry that humane characteristics like authenticity, autonomy, sympathy, and responsibility could be transformed in some dark ways. We'll return to that.
On the other hand, Tyler Cowen, an economist at George Mason University, has written:
"Lately I have been using the o3 model from OpenAI to give my PhD students comments on their papers and dissertations. I am sufficiently modest to notice that it gives keener, smarter, and more thorough suggestions than I do. One student submitted a dissertation on the economics of pyramid-, tomb-, and monument-building in ancient Egypt, a topic about which I know virtually zero. The o3 model had plenty of suggestions. How about: 'Table 6.5's interaction term '% north × no-export' is significant in model 3 but not 4. Explain why adding period FE erodes significance; maybe too few clusters? Provide wild-bootstrap p-values." Of course I would have noticed that point as well.
"Maybe they are not all on-target -- how would I know?! -- but the student, who has studied the topic extensively, can judge that for himself. In any case the feedback was almost certainly much better than anything I might come up with.
"Suddenly we are realizing that the skills we trained our faculty for are also, to some degree, obsolete."
Cowen has some specific recommendations to help college teachers adjust to the new reality. And, like me with my lesson plan, he used AI to generate a perfectly reasonable list of steps: mandatory boot camps for faculty; reverse mentoring, in which graduate students coach professors to proficiency; and putting advanced AI on dissertation committees as a replacement for the "outside reader."
Cowen believes that AI will do wonderful things for us, but that it will change what it means to be human. Among the adaptations he foresees are a renewed attention to the physical world and a move toward social interaction and providing services for one another. He expects sunshine after some cloudy turbulence.
I don't have a "position" on AI. In subsequent columns we'll look at techno-optimism and its promises before turning to the shadow side.