Dr Lushenko did not find much research on trust in AI, apart from studies of ground personnel directing crewed versus uncrewed airstrikes. It occurs to me that the issues in trusting AI would be much the same as those in trusting allied forces.
Dr Lushenko conducted a survey of US military personnel and found they were most comfortable with non-lethal AI. They would trust lethal AI more if it provided protection for their own troops.
I suggested to Dr Lushenko that it might be interesting to compare the views of military personnel with those of civilians in non-military agencies authorized to use lethal force.
Afterwards, it occurred to me that a Turing test could be used to see if military personnel can tell whether they are interacting with humans or AI. In many cases personnel now interact through a text and data interface, with no voice. It would be possible to run a test in a simulator or on a range, where the human might be communicating with either a human or a machine. This would be relatively simple to set up, as simulators already make use of synthetic entities, although these usually have very limited intelligence.
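Purely as a sketch of how such a trial might be wired up: the hypothetical Python below (the function names and canned replies are my own invention, not any simulator's actual interface) randomly routes a participant's text messages to either a human confederate or a scripted responder, then asks the participant to guess which they were talking to.

```python
import random

def bot_reply(message: str) -> str:
    # Stand-in for the simulator's synthetic element; a real trial
    # would use the simulator's own AI rather than canned replies.
    canned = {
        "sitrep": "No change. Two contacts holding position grid 4517.",
        "status": "All systems nominal. Fuel at 62 percent.",
    }
    for keyword, reply in canned.items():
        if keyword in message.lower():
            return reply
    return "Say again, over."

def human_reply(message: str) -> str:
    # In a live trial this would relay to a human confederate over the
    # same text channel; here we just prompt at the console.
    return input(f"[confederate] reply to '{message}': ")

def run_trial(num_exchanges: int = 5) -> None:
    # Hidden interlocutor is assigned at random, as in a classic Turing test.
    responder = random.choice([bot_reply, human_reply])
    for _ in range(num_exchanges):
        message = input("[participant] > ")
        print(responder(message))
    guess = input("Was that a human or a machine? ")
    actual = "machine" if responder is bot_reply else "human"
    print(f"You said {guess!r}; it was a {actual}.")

if __name__ == "__main__":
    run_trial()
```

A real study would swap the console prompts for the simulator's messaging channel and log each exchange and guess for later analysis.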
* The ANU Mills Room reminds me of the war room in Dr Strangelove, which is disturbingly appropriate for the topic.