“Am I being artificial?”
In my defense, I’ve been busy travelling the last week and fell behind on my writing.
I just returned from the South Carolina Chamber of Commerce Annual Summit during which there was a very interesting panel discussion on the use of Artificial Intelligence. The panel included Dr. Nathan McNeese, an assistant professor of Human Centered Computing at Clemson.
Dr. McNeese observed that the writers of AI algorithms have focused on performance and overlooked that AI doesn’t understand us as humans. He stressed that AI must do a better job of accounting for human needs and wants.
Well, I’m a human, and I needed a column for the SCICU Newsletter. With a sense of irony, I asked ChatGPT to write 750 words on the ethical use of Artificial Intelligence.
I must say that in 533 words (not the 750 I requested) ChatGPT gave me a variety of perspectives on the ethical implications of AI. The writing was pedestrian but clear, with only one typo. The paragraph and sentence structure were repetitive enough that I think I could have picked the piece out as artificially generated.
I should add there were no footnotes, so it’s impossible for me to divine what was generated and what might have been plagiarized. I’ve read that ChatGPT is so eager to please that it may gloss over academic or journalistic standards to provide what it’s been asked for.
Of course, I didn’t ask for footnotes, so I revised my query to request them. The new essay covered the same issues but added eight footnotes. One of them: Diakopoulos, N. (2016). “Accountable Algorithmic Decision-Making: A Medium for Design.” Data Society Research Institute.
I googled that reference, and indeed there was an article by a Nicholas Diakopoulos in February of 2016 on the topic, but its title was “Accountability in Algorithmic Decision Making,” and it was published in Communications of the ACM (Association for Computing Machinery).
If I were grading the paper, I’d give credit for the footnote but deduct points for the sloppy citation.
To test how thorough ChatGPT had been, I did my own Google search: “ethical use of artificial intelligence.” It pulled up several good sources, including an article in the Harvard Gazette and a 44-page set of recommendations by UNESCO. I must give ChatGPT credit for succinctly covering the same ground as these sources.
ChatGPT did present me with angles I had not considered. This isn’t surprising, as my graduate work was on 18th-century Virginia, at which time “software” referred to a satin brocade waistcoat.
I had thought about the capacity of AI to improve our lives. The S.C. Chamber panelists all agreed that AI would make dramatic changes in our world in the next few years, noting the possibility of more highly personalized health care. But I also thought about decisions regarding level of care that may be the product of an algorithm rather than human review.
ChatGPT added that care must be taken that the programmers’ own biases not be built into the AI, and it stressed the importance of creating global standards for ethical practices.
For all its power, AI is like any other technology: how people build it and use it will define its capacity to improve our lives or threaten them.
Interestingly, ChatGPT didn’t note plagiarism of AI-generated content as an ethical concern. That’s at the top of the list for professors.
Well, I ended up relying on ChatGPT to get me started, and I certainly used some issues it identified as ethical challenges. Nevertheless, the words were my own.
Did I cheat? I’m not sure.
I’ll ask Alexa.