Business leaders and educators worried over the growing intrusion of artificial intelligence into the written word got a bit of relief this week with the release of Turnitin’s new AI-detection capability.
Now bundled as part of its anti-plagiarism software, the AI-detection capability alerts Turnitin customers when it detects writing that was probably created by AI and not a human author.
Targeted primarily at the education market, it is the first tool designed to help educators keep track of potential AI use and have meaningful discussions with students about that practice.
However, the AI-detection tool may quickly find a much wider market. Many organizations and individuals have expressed alarm at the quality of AI-generated content from tools such as the chatbot ChatGPT and the image generator DALL-E.
The concern is over how “good” the content can seem. Even Turnitin executives acknowledge that the finished product ChatGPT can “build” on a desired topic can be very impressive at first glance.
The biggest fears over this use of AI concern what impact such content creation could have on specific jobs and on basic learning skills, according to Turnitin’s Principal Machine Learning Scientist, David Adamson. Industries that live by the written word, such as media and marketing, may be at exceptionally high risk.
Does AI represent the dumbing-down of the written word?
American schools and businesses could soon find that a significant portion of their writing is done by chatbots, with few being the wiser. For many tasks, that may be no problem, Adamson notes. But it certainly doesn’t encourage good learning habits or thinking skills.
Adamson explains that the Turnitin AI-detection software tool has been in development for two years. It was put on the fast track last year as the company followed improvements being made to the ChatGPT chatbot by software firm OpenAI. Turnitin is known for its anti-plagiarism tool that tracks the use of sentences, paragraphs, or chapters lifted from copyrighted sources found on the Internet.
In the plagiarism-detecting business for over 20 years, Turnitin’s software-as-a-service platform is used in over 16,000 schools and universities, potentially reviewing the work of over 40 million students. While AI-generated content would not violate copyright, since it grabs individual words rather than whole passages, Adamson worries about its impact on how individuals learn and communicate, and whether artificial intelligence will become a crutch.
It’d be fair to say that AI can make us lazy in how we “write” and express ideas. Instead of constructing a document on a given topic from their own ideas and research, an individual can simply tell ChatGPT to do the task. ChatGPT will oblige very quickly and with little regard for where it found the information used to build the document.
According to Turnitin’s Vice President of Artificial Intelligence, Eric Wang, Internet-based AI tools such as ChatGPT conceivably have all human knowledge at their fingertips. When an AI application is asked to write on a specific topic, it draws on everything written on the subject and identifies the most commonly used words to describe it.
As ChatGPT “builds” its document, it grabs the most common, or “average,” word at each step, word by word. The finished product, Wang stresses, will be the absolute most common example of what a document on the topic would look like.
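Wang’s “most common word, word by word” description can be illustrated with a toy sketch. This is not how ChatGPT actually works internally (it predicts tokens with a neural network, not a frequency table); the tiny corpus, bigram counts, and `generate` function below are purely hypothetical stand-ins to show what greedy, most-common-next-word generation looks like:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "everything written on the subject".
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog sat on the mat ."
).split()

# Count how often each word follows each other word (a bigram table).
next_counts = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    next_counts[cur][nxt] += 1

def generate(start, length):
    """Greedily pick the single most common next word at every step."""
    words = [start]
    for _ in range(length - 1):
        candidates = next_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the", 5))
```

Because the sketch always takes the single most frequent continuation, it can only ever produce the one “most average” sentence for a given starting word, which is the flatness Wang is describing.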
Another noted shortcoming of AI-generated content is that it doesn’t discriminate among the information it finds and uses: from hard facts to “fake news,” it’s all the same to AI. The result is that AI can introduce bias into an otherwise thoughtful commentary.
AI to AI-Detection: Catch me if you can
The goal of the Turnitin AI-detection capability is not to “catch” or “punish” those detected using AI to write – whether it be for class assignments, homework, or business reports. Instead, Adamson says it is intended to be a tool that spurs conversation.
It should also be noted that the Turnitin tool doesn’t catch every single case of possible AI-generated content.
“We’re showing a percentage of the sentences that we predict, or believe, to have been AI-written, as well as highlights of where those sentences are in the document. We focus primarily on paragraphs written in English prose and do not scan poetry, screenplays, or tables. And we’re leaving out bulleted items,” says Adamson.
Overall estimates state that the AI-detection tool reveals suspect text in 85% of cases, which means it misses AI-generated content roughly 15% of the time.
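The kind of report Adamson describes, a percentage of sentences predicted to be AI-written plus highlights of where they sit, can be sketched as follows. The function name, input format, and numbers here are hypothetical illustrations, not Turnitin’s actual API:

```python
def ai_report(sentence_flags):
    """Summarize per-sentence AI predictions into the percentage
    figure and highlight positions described in the article.

    sentence_flags: list of booleans, one per sentence, True if the
    detector predicts that sentence was AI-written.
    """
    total = len(sentence_flags)
    flagged = sum(sentence_flags)
    return {
        "ai_sentence_pct": round(100 * flagged / total, 1) if total else 0.0,
        "flagged_indices": [i for i, f in enumerate(sentence_flags) if f],
    }

# Hypothetical document of 8 sentences, 3 of which were flagged.
report = ai_report([False, True, True, False, False, True, False, False])
print(report)
```

A reviewer would then use `flagged_indices` to locate the highlighted passages and the percentage as the headline figure, which mirrors how the article says educators move from the report to a conversation.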
When potentially AI-generated content is detected, an educator or business manager receives a notice showing where to review the affected text and can then engage with the student or individual about the discovery. It is essential to connect discovery to action, Adamson stresses, since that is the whole point of the process.
Adamson says he expects teachers or managers first to review the content flagged by the AI-detection tool with the individual and have a conversation around the following pointers:
- Ask whether any AI tool was used and, if so, why.
- Explain how to use AI appropriately and when it shouldn’t be used.
- Most notably, direct the conversation to how relying on AI to find and generate content keeps the student or employee from learning to research a topic and put ideas into well-constructed written form without depending on AI tools.
In most cases, for instance, an educator may ask students to demonstrate their understanding of a topic. The next step is to let the student express the document’s ideas aloud or in another format of their choice. Ultimately, the conversation should show that the student or employee truly understands the topic and is not just regurgitating something found elsewhere.
The idea isn’t to punish students for using AI but to make them aware of the consequences of relying on AI for every task and how it will affect their skill levels and capabilities.