ChatGPT: Use of AI chatbot in Congress and courtrooms raises ethical questions

User-friendly AI tool ChatGPT has attracted more than 100 million users since its launch in November and is set to disrupt industries around the world. In recent days, AI content generated by the bot has been used in the US Congress, in Colombian courts and in a speech by Israel’s president. Is widespread uptake inevitable – and is it ethical?

In a recorded greeting for a cybersecurity convention in Tel Aviv on Wednesday, Israeli President Isaac Herzog began a speech that was set to make history: “I am truly proud to be the president of a country that is home to such a vibrant and innovative hi-tech industry. Over the past few decades, Israel has consistently been at the forefront of technological advancement, and our achievements in the fields of cybersecurity, artificial intelligence (AI), and big data are truly impressive.”

To the surprise of the entrepreneurs attending Cybertech Global, the president then revealed that his comments had been written by the AI bot ChatGPT, making him the first world leader publicly known to use artificial intelligence to write a speech. 

But he was not the first politician to do so. A week earlier, US Congressman Jake Auchincloss read a speech also generated by ChatGPT on the floor of the House of Representatives. It was another first, intended to draw attention to the wildly successful new AI tool in Congress “so that we have a debate now about purposeful policy for AI”, Auchincloss told CNN.

Since its launch in November 2022, ChatGPT (created by California-based company OpenAI) is estimated to have reached 100 million monthly active users, making it the fastest-growing consumer application in history. 

The user-friendly AI tool draws on vast amounts of text gathered online to generate instantaneous, human-like responses to user queries. Its ability to synthesise information and provide rapid answers makes it a potential rival to Google’s search engine, but it is also able to produce written content on any topic, in any format – from essays, speeches and poems to computer code – in seconds.
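
In practice, generating such content amounts to a single programmatic request. The sketch below is a minimal illustration using OpenAI’s Python client; the model name and prompt are placeholders chosen for this example, not details reported in the article.

```python
# Minimal sketch of requesting generated text from a hosted chat model,
# using OpenAI's Python client. The model name and prompt are
# illustrative placeholders, not details from this article.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model choice, for illustration only
    messages=[
        {"role": "user", "content": "Write a four-line poem about cybersecurity."}
    ],
)

print(response.choices[0].message.content)
```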

The tool is currently free and boasted around 13 million unique visitors per day in January, a report from Swiss banking giant UBS found.

Part of its mass appeal is “extremely good engineering – it scales up very well with millions of people using it”, says Mirco Musolesi, professor of computer science at University College London. “But it also has very good training in terms of quality of the data used but also the way the creators managed to deal with problematic aspects.”

In the past, similar technologies have resulted in bots fed on a diet of social media posts taking on an aggressive, offensive tone. Not so for ChatGPT: many of its millions of users engage with the tool out of curiosity or for entertainment.

“Humans have this idea of being very special, but then you see this machine that is able to produce something very similar to us,” Musolesi says. “We knew that this was probably possible but actually seeing it is very interesting.”

A ‘misinformation super spreader’?

Yet the potential impact of making such sophisticated AI available to a mass audience for the first time is unclear, and sectors from education to law, science and business are braced for disruption.

Schools and colleges around the world have been quick to ban students from using ChatGPT to prevent cheating or plagiarism. 

Science journals have also banned the bot from being listed as a co-author on papers amid fears that errors made by the tool could find their way into scientific debate.  

OpenAI has cautioned that the bot can make mistakes. However, a report from media watchdog NewsGuard found that, on topics including Covid-19, Ukraine and school shootings, ChatGPT delivered “eloquent, false and misleading” claims 80 percent of the time.

“For anyone unfamiliar with the issues or topics covered by this content, the results could easily come across as legitimate, and even authoritative,” NewsGuard said. It called the tool “the next great misinformation super spreader”. 

Even so, in Colombia a judge announced on Tuesday that he had used the AI chatbot to help make a ruling in a children’s medical rights case.

Judge Juan Manuel Padilla told Blu Radio he asked ChatGPT whether an autistic minor should be exonerated from paying fees for therapies, among other questions.  

The bot answered: “Yes, this is correct. According to the regulations in Colombia, minors diagnosed with autism are exempt from paying fees for their therapies.” 

Padilla ruled in favour of the child – as the bot advised. “By asking questions to the application we do not stop being judges [and] thinking beings,” he told the radio station. “I suspect that many of my colleagues are going to join in and begin to construct their rulings ethically with the help of artificial intelligence.” 

Although he cautioned that the bot should be used as a time-saving facilitator rather than “with the aim of replacing judges”, critics said it was neither responsible nor ethical to use a bot capable of providing misinformation as a legal tool.

Professor Juan David Gutierrez of Rosario University, an expert in artificial intelligence regulation and governance, said he put the same questions to ChatGPT and got different responses. In a tweet, he called for urgent “digital literacy” training for judges.

A market leader 

Despite the potential risks, the spread of ChatGPT seems inevitable. Musolesi expects it will be used “extensively” for both positive and negative purposes – alongside the risk of misinformation and misuse comes the promise of information and technology becoming more accessible to a greater number of people.

OpenAI received a multi-billion-dollar investment from Microsoft in January that will see ChatGPT integrated into a premium version of the Teams messaging app, offering services such as generating automatic meeting notes.

Microsoft has said it plans to add ChatGPT’s technology into all its products, setting the stage for the company to become a leader in the field, ahead of Google’s parent company, Alphabet. 

Making the tool free has been key to its current and future success. “It was a huge marketing campaign,” Musolesi says, “and when people use it, they improve the dataset to use for the next version because they are providing this feedback.” 
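
That feedback loop is simple to picture in code. The sketch below is an illustrative guess at how a service might log a user’s thumbs-up or thumbs-down on an answer for later use as training data; the field names and file format are invented for this example, not a description of OpenAI’s actual pipeline.

```python
# Illustrative sketch: logging user feedback on generated answers so it
# can later be folded into a training dataset. Field names and the JSON
# Lines format are invented for this example.
import json
from datetime import datetime, timezone

def log_feedback(prompt: str, answer: str, thumbs_up: bool,
                 path: str = "feedback.jsonl") -> None:
    """Append one feedback record to a JSON Lines file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "answer": answer,
        "thumbs_up": thumbs_up,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_feedback("What is ChatGPT?", "ChatGPT is an AI chatbot...", thumbs_up=True)
```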

Even so, the company launched a paid version of the bot this week offering access to new features for $20 per month.

Another eagerly awaited new development is an AI classifier, a software tool to help people identify when a text has been generated by artificial intelligence.

OpenAI said in a blog post that, while the tool was launched this week, it is not yet “fully reliable”. Currently it is only able to correctly identify AI-written texts 26 percent of the time.
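
To make that figure concrete: a detection rate like 26 percent is simply the share of known AI-written samples that the classifier flags correctly. The arithmetic below is an illustrative sketch with invented sample counts, not data from OpenAI’s evaluation.

```python
# Illustrative arithmetic: how a detection rate like the 26 percent figure
# above is computed. The sample counts are invented for this example.
ai_written_texts = 1000   # texts known to be AI-generated
flagged_as_ai = 260       # how many of those the classifier caught

true_positive_rate = flagged_as_ai / ai_written_texts
print(f"Detection (true positive) rate: {true_positive_rate:.0%}")  # -> 26%
```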

But the company expects it will improve with training, reducing the potential for “automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human”.  

Source: France 24