On a positive note, Professor Lazar argued that AI could improve human welfare more efficiently than manual systems do. However, I suggest this presents a rosy view of human nature. Resources are not inequitably distributed today because of inefficiency, but because those who hold the resources have made a conscious decision to deprive others of them. With an efficient AI system, they could implement this deliberate inequity far more effectively.
Some feasible catastrophic risks Professor Lazar mentioned were the discovery of new chemical and biological weapons, cyber attacks, and attacks on safety-critical systems. A current worry he raised is the targeting of conventional weapons by complex computer systems, as is happening in Gaza now.
At question time I asked Professor Lazar what advice he would give the federal government, which has announced a trial of Microsoft Copilot across 50 government agencies. He suggested appointing a Chief AI Officer within an AI agency to oversee the trial, and funding an AI Safety Institute. He hoped that Copilot would be used only for wording letters.
Professor Lazar illustrated his talk with computer-generated images based on the poem 'The Second Coming' by William Butler Yeats. This theosophical work has echoes where I am sitting today: the location of the ANU was chosen by two theosophists, Walter and Marion Mahony Griffin.
Professor Cameron Domenico, Rutgers University–Newark
PS: If all these catastrophic risks of AI sound excessively alarmist, consider that Australia is going to build six optionally crewed ships. Each armed with 32 missiles, these ships will be able to sail thousands of kilometres with no one on board. Given the possibility of an enemy jamming the link to a ship, it will be tempting to build in an autonomous mode.