Thursday, June 6, 2024

Will the Military Trust AI on the Battlefield?

Greetings from the council room* of the Australian National University in Canberra, for "Battlefield Trust for Human-Machine Teaming: Evidence from the US Military" by Lieutenant Colonel Dr Paul Lushenko. The question he is investigating is whether military personnel will trust AI. One obvious and reassuring finding is that personnel trust lethal AI less than non-lethal applications. He pointed out that personnel can influence what gets implemented by being part of testing and commissioning equipment.

Dr Lushenko did not find much prior research on military trust in AI, apart from studies of ground personnel directing crewed versus uncrewed airstrikes. It occurs to me that the issues in trusting AI would be much the same as those in trusting allied forces.

Dr Lushenko conducted a survey of US military personnel and found they were most comfortable with non-lethal AI. They would trust lethal AI more if it provided protection for their own troops.

I suggested to Dr Lushenko it might be interesting to compare the views of military personnel with those of civilians in non-military agencies authorized to use lethal force.

Afterwards it occurred to me that a Turing test could be used to see if military personnel can tell whether they are interacting with humans or AI. In many cases personnel now interact using a text and data interface, with no voice. It would be possible to run such a test in a simulator or on a range, where the human might be communicating with either a human or a machine. This would be relatively simple to set up, as simulators already use synthetic elements, although these usually have very limited intelligence. A rough sketch of how such a blinded trial might be run is below.
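As a thought experiment only, here is a minimal sketch of such a blinded, text-only trial in Python. Everything in it is hypothetical: the canned replies stand in for a simulator's limited synthetic element, and the human operator is represented by a second console prompt.

```python
import random

# Hypothetical sketch of a blinded text-channel trial: the participant
# exchanges a fixed number of messages with a responder that is either a
# human operator (typing at another console) or a simple scripted agent.
# The assignment is random and hidden until the participant guesses.

CANNED_REPLIES = [  # placeholder machine responses; a real trial would use a better agent
    "Acknowledged. Holding position at waypoint two.",
    "Negative, no contacts observed in that sector.",
    "Request confirmation of grid reference before proceeding.",
]

def machine_responder(message: str) -> str:
    """Very limited 'synthetic element': returns a canned reply."""
    return random.choice(CANNED_REPLIES)

def human_responder(message: str) -> str:
    """Stand-in for a human operator at another terminal."""
    return input("[operator] reply to participant: ")

def run_trial(turns: int = 5) -> bool:
    """Run one blinded exchange; return True if the participant guessed correctly."""
    is_machine = random.random() < 0.5
    responder = machine_responder if is_machine else human_responder

    for _ in range(turns):
        msg = input("[participant] message: ")
        print("[responder]", responder(msg))

    guess = input("[participant] was that a human or a machine? ").strip().lower()
    guessed_machine = guess.startswith("m")
    return guessed_machine == is_machine

if __name__ == "__main__":
    correct = sum(run_trial() for _ in range(3))
    print(f"Participant identified the responder correctly in {correct} of 3 trials.")
```

In a real simulator the scripted agent would be replaced by the system under evaluation, and the rate at which personnel can distinguish it from a human teammate would be one rough measure of how "human-like" its behaviour is.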

* The ANU Mills Room reminds me of the war room in Dr Strangelove, which is disturbingly appropriate for the topic.



