Professor Ashley Deeks
At a practical level it is not that difficult to test whether an AI weapon is at least as reliable as a human operator. Such testing could also improve procedures, by forcing the decision-making process to be made explicit. There will be pressure to use advanced automated systems, just as there is for current simple ones, such as mines.
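As a minimal sketch of what such a reliability test might look like, the error rate of an automated system could be compared against a human-operator baseline with a standard two-proportion z-test. All the numbers below are hypothetical, purely for illustration:

```python
# Sketch: comparing an automated system's target-identification error rate
# against a human-operator baseline, using a two-proportion z-test.
# The trial counts and error counts are hypothetical.
from math import sqrt

def two_proportion_z(errors_a, trials_a, errors_b, trials_b):
    """z statistic for H0: the two error rates are equal."""
    p_a = errors_a / trials_a
    p_b = errors_b / trials_b
    # Pooled error rate under the null hypothesis of equal rates.
    p_pool = (errors_a + errors_b) / (trials_a + trials_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / trials_a + 1 / trials_b))
    return (p_a - p_b) / se

# Hypothetical trial: the automated system errs 25 times in 1000
# engagements, human operators 40 times in 1000.
z = two_proportion_z(25, 1000, 40, 1000)
print(round(z, 2))  # a clearly negative z favours the automated system
```

A large negative z here would support the claim that the automated system is at least as reliable as the human baseline; a real acceptance test would of course need far more care about how "engagement" and "error" are defined.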
Professor Deeks presents a US-centric view of the issues. However, the US is not the only leader in the development of AI weapons. Any country with a university computing school has the capability to make advanced AI weapons. Recently I assessed a university student project for a small autonomous vehicle. This was for civilian purposes, but one version was tracked, and would only need a weapon added to become a robot tank.
The problem, I suggest, could be far harder than Professor Deeks suggests. The magic sauce of an AI weapon is in the software, and the physical weapon can be upgraded over the air with new capabilities. Some of this has already been seen with missiles, where air-launched missiles have been adapted for surface launch, and surface-launched missiles for air launch. An example is the US Navy's SM-6 ship-launched missile, adapted for air launch against surface, air and space targets. Deciding if something is an anti-satellite weapon or not becomes a matter of software.
Professor Deeks mentioned her paper "The judicial demand for explainable artificial intelligence" (2019), which argued for lawyers to become AI savvy. Some law firms are already thinking about the technology, such as Herbert Smith Freehills.