Monday, September 4, 2017

Genevieve Bell Heads ANU 3A Institute of AI

Greetings from the National Film and Sound Archive in Canberra, where Professor Genevieve Bell is speaking on "Managing the Machines: building a new applied science for the 21st century". The talk is also being live streamed on Facebook and YouTube.


Dr. Bell will be presenting the ABC 2017 Boyer Lectures and has been appointed the inaugural McKenzie Chair at the Australian National University College of Engineering and Computer Science. She will also head the new "Three A" institute (although I am not sure what that is).

I first came across Dr. Bell at the Realising Our Broadband Future forum in Sydney in 2009, where it was refreshing to hear ideas about broadband for people to use. She talked at the Innovative Ideas Forum 2010 at the National Library of Australia, where she noted English was no longer the dominant language of the Internet. The next week I bumped into Dr. Bell at the State Library of South Australia, where she had been the state's Thinker in Residence on South Australia's Digital Futures.

You have to listen carefully to a speech from Dr. Bell, not least because she talks fast and with enthusiasm (reminding me of Pia Waugh).

Dr. Bell started with a history lesson on the beginnings of engineering as a profession at the time of the French Revolution in Paris, then in Constantinople and the UK. She characterized engineers as managing systems and being certified and licensed by the state. She then jumped to the USA and the creation of the Wharton School of the University of Pennsylvania. I lost the thread at this point, but we ended up at the creation of Artificial Intelligence as a discipline.

The story then jumped to Australia, with the building of SILLIAC, an Australian 1950s computer (built after CSIRAC, Australia's first computer, around 1949). But the story was really about how to combine theory and practice, plus what exactly it is that we want to research and do?


Dr. Bell had three questions, on Autonomy, Agency and Assurance. She suggested we need to think about in what sense machines are "autonomous". I am not sure this is such a new issue. We have had machines which can act on their own for decades, and we also have legal structures such as companies being treated as persons. A non-trivial case is in the law of war, where autonomous weapons have existed for more than a hundred years.

The next question was how much Agency machines should have. This seems to be the same question as the first, being how much autonomy there should be. A current example of this is the Commonwealth Bank, accused of 54,000 cases of money laundering. The bank is not a natural person, and it was automated teller machines which processed the cash, so who is responsible?

The third question is Assurance: how can we be sure these machines are safe? This is also not a new question. Engineers, and more recently software engineers, have had to consider how safe a system should be and how to work out if it is. This is made more difficult by AI, but it is something I routinely ask of students when I am teaching professional ethics.

At this point I finally worked out what the 3A Institute was to be: the Autonomy, Agency and Assurance Institute.
