Tuesday, March 12, 2024

Catastrophic Risks in Artificial Intelligence

Greetings from the Australian National University Colloquium on Artificial Intelligence and Catastrophic Risk. Normally I attend the weekly AI, ML and Friends Seminars in the ANU School of Computing, but today I am in the social sciences building, with philosophers. The colloquium is hosted by the ANU Machine Intelligence and Normative Theory Lab. In the first presentation, Professor Seth Lazar pointed out that transformer-based generative AI is less brittle: it is much harder to get it to produce weird results. He also claimed Google was barely able to match ChatGPT's performance. Professor Lazar argued that catastrophic and current risks of AI could be addressed together, including through regulation.

On a positive note, Professor Lazar argued AI could be used to improve human welfare more efficiently than inefficient manual systems. However, I suggest this presents a rosy view of human nature. Resources are inequitably distributed today not because of inefficiency, but because those who have the resources have made a conscious decision to deprive others of them. With an efficient AI system, they could implement this deliberate inequity much more effectively.

Some feasible catastrophic risks Professor Lazar mentioned were the discovery of new chemical and biological weapons, cyber attacks, and attacks on safety-critical systems. A current worry he mentioned is the targeting of conventional weapons using complex computer systems, as is being done in Gaza now.

At question time I asked Professor Lazar what advice he would give the federal government, which has announced a trial of Microsoft Copilot in 50 government agencies. He suggested a Chief AI Officer in an AI Agency to oversee this. He also suggested funding an AI Safety Institute. He hoped that Copilot would just be used for wording letters.

Professor Lazar used computer-generated images to illustrate his talk. These were based on the poem 'The Second Coming' by William Butler Yeats. This theosophical work has echoes where I am sitting today: the location of the ANU was decided by two theosophists, Walter and Marion Mahony Griffin.

Professor Lazar will be followed by Professor Cameron Domenico, Rutgers University–Newark, and Professor David Thorstad, Vanderbilt University.

ps: If all this talk of catastrophic risks of AI sounds excessively alarmist, consider that Australia is going to build six optionally crewed ships. Each armed with 32 missiles, these ships will be able to sail thousands of kilometres with no one on board. Given the possibility of an enemy jamming the link to a ship, it will be tempting to build in an autonomous mode.
