Jag Duggal of Quantcast highlights a near-term issue with AI for leaders, brand managers, and storytellers who use machine learning and artificial intelligence to augment their communications efforts: the AI carries the biases of its programmers. That’s not to say programmers are out there plotting to topple a brand with culturally insensitive AI (though the problem has repeatedly plagued early bot tests); rather, people focused on solving a coding issue do not necessarily review their reasoning for cultural impact.
We suggest teaming coders with artists, rhetoricians, and organizational historians to prevent ignorant bots from subverting the potential for improved human connections supported by AI.
Most veteran AI scientists see potential doomsday scenarios as quite a ways off but note that AI presents other, more pressing human-welfare challenges, such as keeping human bias and prejudice out of our algorithms. Raquel Urtasun, an AI researcher recently hired by Uber to head a high-profile advanced technology group, champions this issue. She points out that people train networks to mirror human thought processes, and that, often, we pass on not only our powers of perception but our misconceptions as well.
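To make that point concrete, here is a minimal toy sketch (our illustration, not from Urtasun's work, with entirely synthetic and hypothetical data): a "neutral" model trained on biased historical labels faithfully reproduces the bias, because the prejudice lives in the training data rather than in the code.

```python
from collections import Counter

# Synthetic "historical" hiring decisions: (qualification_score, group, hired).
# Group "B" was historically hired only at higher scores than group "A" --
# a human bias baked into the labels themselves.
history = (
    [(score, "A", score >= 5) for score in range(10)] +
    [(score, "B", score >= 8) for score in range(10)]
)

def train(records):
    """'Train' by memorizing the majority label per (score, group) bucket."""
    buckets = {}
    for score, group, hired in records:
        buckets.setdefault((score, group), Counter())[hired] += 1
    return {key: counts.most_common(1)[0][0] for key, counts in buckets.items()}

model = train(history)

# Identical qualifications, different groups -> different predictions.
# The model is not malicious; it simply mirrors its training data.
print(model[(6, "A")])  # True
print(model[(6, "B")])  # False
```

Nothing in the training code mentions group membership as a criterion, yet the resulting model discriminates, which is exactly why reviewing the data and its cultural context matters as much as reviewing the code.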