Tuesday, May 10, 2022

Toby Walsh not in conversation with Andrew Leigh on the Morality of AI

[Photo: Toby Walsh and Andrew Leigh at ANU Meet the Author]
Greetings from the Australian National University (ANU), where Professor Toby Walsh is supposed to be in conversation with Andrew Leigh MP about Toby's book "Machines Behaving Badly: The Morality of AI". Andrew is our local MP, and a former ANU lecturer, but he is stuck on his way back from Jervis Bay, where he has been working on improving telecommunications (ironic, given the topic of the talk, and that this is Australia's high tech capital). Toby started with an anecdote about a Google AI system which umms in its synthetic voice to sound more human. Andrew then turned up and apologized: his Tesla had run out of charge due to a non-functioning supercharger (also ironic).

Toby gave a broad definition of AI as something a computer does which would be considered intelligent if done by a person. That seems reasonable to me. In practice, most AI works by "training" software with lots of examples.
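To make the "training with examples" point concrete, here is a minimal sketch of my own (not from the talk), using the scikit-learn library with made-up data. The program is never told the rule; it is only shown example inputs and the answers expected, and works out the rest itself.

```python
# A toy illustration of "training" software with examples (hypothetical data).
from sklearn.tree import DecisionTreeClassifier

examples = [[0, 0], [0, 1], [1, 0], [1, 1]]  # training inputs
labels = [0, 1, 1, 1]                        # expected answers (logical OR)

model = DecisionTreeClassifier()
model.fit(examples, labels)                  # "training" on the examples

print(model.predict([[1, 0]]))               # the trained model answers: [1]
```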

Toby seems, as he says, to be a glass-half-empty person. One negative outcome of cheap mass air travel, he claimed, was the spread of Hong Kong flu. He went on to mention that most of the small number of people developing AI are white males "on the spectrum", so the rest of the world gets left out.

However, not all the faults of AI can be blamed on the AI, or the people who built it. Another example given by Toby was software making sentencing recommendations in the USA, which turned out to be biased against non-white defendants. However, the software was just doing what humans had previously done: it highlighted an existing bias.

In the Australian case of Robodebt, there was systematic persecution of a group of disadvantaged people by the Australian Government. However, this can't really be blamed on AI. The software used for the project was not very sophisticated, so the discriminatory behavior did not emerge from AI: it was designed into the project. It was clear from the outset that disadvantaged groups who did not vote for the parties making up the government were to be targeted. In a way it might have been better had AI been used, as the illegal nature of the project could then have been clear from the outset.

Toby went on to mention that intelligence is not just one thing, and that consciousness is an illusion: we are a collection of intelligences. Andrew raised an interesting question as to whether robots should have rights. I was not convinced by Toby's response that robots are not self aware and so do not suffer. However, the same argument was once used to allow the mistreatment of animals, which today would get you arrested.

The conversation then turned to free will and consciousness. Toby argued that AI research might provide insights.

Toby ended by proposing "Turing's Red Flag": the idea is to have a warning sign to say you are interacting with a machine, not a human. I don't find this convincing, as many people working for organisations are so constrained in what they can say and do that their behavior is just as predetermined as a machine's.

ps: One of Professor Walsh's previous books is "It's Alive!: Artificial Intelligence from the Logic Piano to Killer Robots". The economist William Stanley Jevons, best known for the Jevons paradox, had a Logic Piano built in the 1860s, after living in Sydney.
