AI joins the fake news party

Unpopular opinion: Just because a wealthy tech guy thinks a product is a good idea doesn't mean it is. And it also shouldn't mean that we abandon rational thought in the rush to play with a shiny new thing.

Here's a real-life example of why caution and a healthy dose of scepticism are critical when using generative AI to create content.

This week, a group of academics offered an unreserved apology to the Big Four consultancy firms after admitting they used AI to create a report that made false allegations of serious wrongdoing in a submission to a parliamentary inquiry.

Critically, the report was submitted under parliamentary privilege, which means it's protected from defamation action under Australian law.

<takes a deep breath>

Yes, this happened. The authors of the report used Google Bard, failed to fact-check its output and submitted it to our lawmakers. This can't be the first time it's happened, but it is a high-profile mistake, so I feel for the authors to an extent.

As I've said to my parents for many years: just because it's on the internet doesn't mean it's true. Let's remember that the so-called 'godfather of AI' quit Google earlier this year in part over concerns about the misinformation AI could create.

Globally, industry and governments are scrambling to create a united approach to regulating AI, primarily with a focus on issues of national security and protecting vulnerable people.

Personally, I believe we all need to challenge the 'tech bro' culture that prioritises profits and product over humanity and useful solutions. The gross harm and negligence perpetuated by Meta on its platforms is just one of many examples of unintended damage, and I can see generative AI being even more dangerous than Web 2.0 technology.

Until regulation catches up, we need to exercise caution and professionalism when we use AI tools. I don't know the circumstances that led to these academics using AI, and then failing to check the facts in the content it produced. But it's a timely reminder for all of us to apply due diligence to anything that AI creates.

This should include fact-checking every claim the AI-produced content presents as true, and checking with more than one credible source. I'd also like to see clear disclosure whenever generative AI is used to write or contribute to a report or document.

“I now realise that AI can generate authoritative-sounding output that can be incorrect, incomplete or biased,” the lead author of the report that was tabled in parliament said.

Well, quite.
