Saturday, August 26, 2017
THE HIDDEN RISK OF BLIND TRUST IN AI’S ‘BLACK BOX’
Artificial intelligence is gaining traction in enterprises, with many large organizations exploring algorithms to automate business processes or building bots to field customer inquiries. But while some CIOs see self-learning software as a boon for achieving greater efficiencies, others are leery of entrusting too much of their operations to AI because it remains difficult to ascertain how the algorithms arrive at their conclusions.

CIOs in regulated industries in particular, such as financial services, and in any sector exploring autonomous vehicles, are grappling with this so-called "black box problem." If a self-driving rig suddenly swerves off the road during testing, the engineers had better be able to figure out how and why. Similarly, financial services firms looking to use AI to vet clients for credit risk need to proceed with caution to avoid introducing bias into their qualification scoring. Because of these and similar risks, companies are increasingly seeking ways to vet, or even explain, the predictions rendered by their AI tools.

Most software developed today to automate business processes is codified with programmable logic: if it works as intended, it does what its programmers told it to do. But in this second wave of automation, software capable of teaching itself is king. Without a clear understanding of how this software detects patterns and observes outcomes, companies with risk and regulation on the line are left to wonder how far they can trust the machines.
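To make the idea of vetting or explaining a model's predictions concrete, here is a minimal, hypothetical sketch of one common tactic: fitting an interpretable surrogate (a shallow decision tree) to a black-box classifier's predictions so a reviewer can inspect the rules it has effectively learned. It is not drawn from any vendor or firm mentioned above, and the feature names and data are invented for illustration.

```python
# A hypothetical sketch of surrogate-model explanation for a "black box"
# credit-scoring classifier. All data and feature names are made up.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical applicant features: income, debt ratio, years of credit history.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The opaque model whose decisions we want to audit.
black_box = GradientBoostingClassifier().fit(X_train, y_train)

# Train a shallow decision tree to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# How faithfully does the surrogate track the black box on held-out data?
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Surrogate fidelity to black box: {fidelity:.2%}")

# The tree's rules are human-readable, so a reviewer can spot a feature
# that should not be driving credit decisions.
print(export_text(surrogate, feature_names=["income", "debt_ratio", "credit_years"]))
```

A surrogate is only as trustworthy as its fidelity score, so in practice reviewers would pair this kind of check with other techniques (feature-importance measures, per-prediction explanations) rather than rely on it alone.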