Assistant Professor Luke Stark, from Western University (Canada)
More interestingly, Professor Stark compared past centuries' attempts to correlate facial shape with behavioral characteristics to current AI work, which he argued is similarly misapplied. He suggested there are open questions about how inference should be applied. Professor Stark also suggested that AI could learn a lot from medicine, in particular from the discussion around the applicability of evidence-based treatment. On the surface it seems obvious that medical treatment should be based on evidence from trials, but if the people conducting the trials are not like the patients, then the results may not be applicable.
Professor Stark's analysis seemed a little idealistic, in that it assumes users of AI (and of earlier technologies) were driven by a quest for truth and equity. However, researchers repeatedly produce AI systems that discriminate against particular groups. Rather than seeing this as an unfortunate side effect of the technology, I suggest it be acknowledged as one of the main uses of AI, and that measures to minimize it be put in place.