'Guardrails' introduced to make AI safer


Industry Minister Ed Husic has announced 'guardrails' to make artificial intelligence safer. Source: AAP / LUKAS COCH/AAPIMAGE


Artificial intelligence is considered one of the most complex policy challenges for governments around the world. The federal government has announced a set of voluntary guidelines for businesses using AI, as it looks towards legislating a mandatory code.



TRANSCRIPT

Twenty-five years ago, if you'd told someone they could talk on their home phone at the same time as using the internet, they would have thought you were dreaming.

And if you'd told them that via the internet, you could have a pretty decent essay about the rise in popularity of polyester through the 20th century written for free, in seconds, they would have called you crazy.

But now, artificial intelligence is everywhere.

It's just shy of two years since the large language model AI system ChatGPT went live.

And it's clear the way we use technology has been changed forever.

"We heard loud and clear the message from the broader Australian public that while AI has huge benefits, the public wants to be protected if things go off the rails."

That's Science and Industry Minister Ed Husic.

He says it's been hard to regulate.

"This is probably one of the most complex policy challenges facing the government world over, and the Australian Government is determined that we put in place the measures that provide for the safe and responsible use of artificial intelligence in this country."

But the government now has a plan.

Step one is a voluntary code to help guide businesses using AI.

While around 80 per cent of businesses using the technology believe they're doing the right thing, fewer than a third are actually following best practice, according to the government's responsible AI index.

It's available now, and Minister Husic says it will help businesses meet the standards they believe they're already achieving.

"What the Australian Government wants to do is create that bridge between best intention and best practice. The voluntary code gives practical ways for businesses to achieve what they want to achieve, and that is the safe and responsible use of AI."

Nicole Gillespie is Professor of Management and Chair of Organisational Trust at the University of Queensland, and has been studying attitudes toward AI in Australia over a number of years.

She says the guardrails respond to public demand for regulation.

"They really do a good job of responding to the public demand for regulations. So, our 2023 survey showed that 70 per cent of Australians believe AI regulations are required, and that they have a really clear preference for AI to be regulated by government and existing regulators or by a dedicated, independent AI regulator."

There are 10 areas, or guardrails as the government's calling them, in the voluntary guide.

The first section is around processes, ensuring businesses have risk management systems in place and staff are educated in the use of AI, that data is protected, and that records are kept for compliance checks.

Chief Executive of the Australian Chamber of Commerce and Industry Andrew McKellar says education is critical.

"Many businesses are starting to grapple with this, but we have to boost AI literacy in business. We have to understand that the risks and the benefits that come with these new technologies, and we're obviously, we've got to start that education process."

It also sets out requirements for accountability: companies are responsible for their own use of AI, they must test the systems they're using, there must be human oversight, and decisions made by AI must be open to challenge.

Those creating AI models need to be transparent about how they work, and businesses creating or using AI need to work with groups that could be impacted by AI systems.

Co-founder of the Human Technology Institute at the University of Technology Sydney, Professor Ed Santow, says that will help mitigate the biases in AI.

"You have this phenomenon of algorithmic bias, in other words, where the AI system goes off the rails and starts to treat people less favourably on the basis of their gender, their skin colour, disability, other things that they can't control."

That means that when AI is used to make decisions in settings like recruitment or approving applications, it can have discrimination built in.

Professor Gillespie says knowledge can help people challenge those decisions.

"AI is increasingly being used in a very invisible behind the scenes way to make important decisions about people. So, if it's not transparent that AI is being used, it can often be difficult for people to ascertain if there perhaps has been some bias or if there a decision's been made against them that's not correct."

The next step is a mandatory code for high-risk use of AI.

What counts as high risk is still being determined, but the government is looking at areas like education, law enforcement, employment, biometrics, justice systems, health and safety, and access to services.

Professor Santow says there are situations where AI isn't appropriate.

"For high stakes decisions when people's human rights are engaged, it's very important not to have a machine make that decision. You need to have good human oversight, in other words, people who are responsible for the decision, but also can unpick the decision if something has gone wrong."

Minister Husic says the mandatory code would also apply to those supplying AI models or products.

"They're going to require organisations developing and deploying artificial intelligence models to properly assess those risks, and they've also got to plan how they'll manage those risks, test their AI systems to make sure that they're safe, share all those processes transparently, make it clear when AI is being used, especially when it looks human like. Make sure that a person can take over from AI at any moment, and that people are held accountable for safety issues that might arise."

Professor Gillespie says that regulation is key to building trust.

"Australians are amongst the most wary and the most cynical about the use of AI. We do need these interventions. We need stronger regulation to provide one of those bases for providing a foundation for us to trust in these technologies."
