As we move from traditional machine learning to deep learning models, artificial intelligence is becoming more of a black box. Gone are the days when we could think of AI models as simple decision trees, where the outcome is succinctly explained by the branches taken at each fork in the tree. Today’s AI algorithms often involve thousands or even millions of connections, making decision traceability nearly impossible.
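To make that contrast concrete, here is a minimal sketch (a hypothetical example, not taken from the article) of why a decision tree is traceable: every prediction comes with the exact sequence of branches that produced it. The feature names and thresholds below are illustrative only.

```python
# Hypothetical example: a tiny hand-built decision tree whose prediction
# can be explained by the sequence of branches taken -- exactly the
# traceability that deep models lack.

def trace_decision(tree, sample):
    """Walk the tree for one sample, recording each branch taken."""
    path = []
    node = tree
    while "feature" in node:  # internal (decision) node
        feature, threshold = node["feature"], node["threshold"]
        value = sample[feature]
        if value <= threshold:
            path.append(f"{feature} = {value} <= {threshold}")
            node = node["left"]
        else:
            path.append(f"{feature} = {value} > {threshold}")
            node = node["right"]
    return node["label"], path  # leaf node holds the decision

# Illustrative loan-screening tree (features and cutoffs are made up).
loan_tree = {
    "feature": "income", "threshold": 50000,
    "left":  {"label": "deny"},
    "right": {
        "feature": "debt_ratio", "threshold": 0.4,
        "left":  {"label": "approve"},
        "right": {"label": "review"},
    },
}

decision, path = trace_decision(loan_tree, {"income": 72000, "debt_ratio": 0.3})
print(decision)           # approve
print(" -> ".join(path))  # income = 72000 > 50000 -> debt_ratio = 0.3 <= 0.4
```

The returned `path` is the full explanation of the decision; no comparable artifact falls out of a deep network's millions of weighted connections.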
For federal agencies, these decisions have the potential to directly impact people’s lives, which is why the mystery of how an AI algorithm reaches its conclusions makes many government analysts uncomfortable. They need assurance that the insights their AI models provide are not only accurate but also explainable, so they can make informed decisions.
Read the full article on Federal Times