Let’s begin by looking at this short list of decisions that AI is making for us in the here and now.
Whether or not a tumor has become cancerous.
Whether or not an insurance claim should be processed or denied.
Whether or not a traveler is approved to go through airport security.
Whether or not a loan should be made.
Whether or not a missile launch is authorized.
Whether or not a self-driving vehicle brakes.
These are complex matters that are well suited to AI’s strength — its ability to process vastly more data than a human can, said Mike Abramsky, CEO of RedTeam Global. But the decisions AI makes also reflect the technology’s weakness, the so-called “Black Box” problem, Abramsky said. Because deep learning is not transparent, the system simply can’t explain how it arrived at its decision. No matter how much you respect AI’s advances, though, most of us would still like to know how AI came to the conclusions that it did, if only out of curiosity. So would proponents of a movement called explainable AI, and their reasons for wanting to know go far beyond mere curiosity.
“With AI-powered systems increasingly making decisions such as credit card approval for an application, a self-driving car applying the brakes after getting closer to an obstacle, and parole recommendation for incarcerated felons, it has become vital for humans to understand the decision-making mechanism of the underlying AI to ascertain that the AI makes accurate and fair decisions,” said Abhijit Thatte, VP of Artificial Intelligence at Aricent.
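To make the black-box idea concrete, here is a minimal sketch of one common explainable-AI technique: permutation importance, which probes an opaque model from the outside by shuffling one input at a time and measuring how often its decisions change. The `black_box` loan-approval function, its thresholds, and the feature names below are invented for illustration; none of this comes from the systems mentioned in the article.

```python
import random

# Invented stand-in for an opaque model: approves a loan when income is
# high and debt is low. Callers see only inputs and a yes/no output.
def black_box(income, debt, zip_digit):
    return income > 50_000 and debt < 20_000

def permutation_importance(model, rows, n_repeats=20, seed=0):
    """Score each feature by how often shuffling it flips the model's output.

    A feature the model ignores scores 0; features it relies on score higher.
    """
    rng = random.Random(seed)
    baseline = [model(*row) for row in rows]
    importances = []
    for col in range(len(rows[0])):
        flips = 0
        for _ in range(n_repeats):
            shuffled = [row[col] for row in rows]
            rng.shuffle(shuffled)
            for row, value, base in zip(rows, shuffled, baseline):
                permuted = list(row)
                permuted[col] = value
                if model(*permuted) != base:
                    flips += 1
        importances.append(flips / (n_repeats * len(rows)))
    return importances

applicants = [
    (80_000, 5_000, 3),
    (30_000, 25_000, 7),
    (60_000, 10_000, 1),
    (20_000, 30_000, 9),
]
scores = permutation_importance(black_box, applicants)
# income and debt get nonzero scores; zip_digit, which the model
# never reads, scores exactly 0.
```

This kind of probe explains *which* inputs drive a decision without opening the model itself, which is why variants of it are a staple of explainable-AI toolkits.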