Whitepaper: How To Build Trust With Explainable AI

Verena Eikmeier

Management Summary

Artificial Intelligence in Companies

Artificial intelligence (AI) in companies is no longer a new trend. Nevertheless, the potential of AI is far from exhausted and still offers enormous opportunities. A PwC study concludes that AI could contribute more than 15 trillion dollars to the global economy by 2030. According to an analysis by the Boston Consulting Group (BCG), the Covid-19 crisis will make AI even more important for companies. AI systems use vast amounts of data to recognize underlying patterns, enabling computer systems to make complex data-based decisions, analyze images, understand human speech, predict sales, and more. These capabilities are especially valuable to companies in times of crisis, but they also help companies adapt to the post-crisis world. According to BCG, use cases that combine AI with human judgment and experience will be particularly successful in this regard.

Why Explainable AI?

However, for people and AI to work together successfully, sufficient trust in AI systems is crucial. To be trustworthy, their decisions must be comprehensible, explainable, and reliable (i.e., reproducible). The increasing complexity of advanced AI systems makes it harder to understand how they work in detail. For users, AI systems are usually so-called black boxes: the training and architecture of modern AI systems can be so technically complex that even experts can no longer explain the system's decisions on a semantically meaningful level.

This is where Explainable Artificial Intelligence (XAI) comes in. XAI is designed to make the decisions and activities of an AI system transparent and comprehensible to humans. It enables users and decision-makers to understand how AI systems work and how they arrive at their results. This transparency forms the basis for trust in and acceptance of artificial intelligence – the prerequisites for its successful deployment.

Today, a multitude of methods make it possible to explain even complex AI systems. Even though several challenges must be considered, the benefits of XAI for companies are immense. The whitepaper “How to Build Trust With Explainable AI” provides an overview of possible applications, advantages, methods, and challenges of employing XAI in companies and thus serves as a guide to this essential future topic.
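To make this concrete, the sketch below illustrates one widely used XAI technique, SHAP feature attributions, which assign each input feature a contribution to a model's prediction. The dataset, model, and library choices here are illustrative assumptions, not examples taken from the whitepaper itself.

```python
# A minimal sketch of one common XAI method: SHAP feature attributions.
# The toy data and model are illustrative placeholders; assumes
# scikit-learn and the open-source shap package are installed.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple "black-box" model on a toy dataset
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Each SHAP value quantifies how much a feature pushed one prediction
# away from the average model output, making the decision traceable
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Summarize which features drive the model's predictions overall
shap.summary_plot(shap_values, X.iloc[:100])
```

A plot like this lets users see at a glance which inputs a model relies on and in which direction they push its output – exactly the kind of transparency the whitepaper argues is the foundation of trust.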

About the Author

Verena Eikmeier

I am a data scientist at STATWORX and especially interested in natural language processing and in how data science applications can enrich human decision-making.

ABOUT US


STATWORX is a consulting company for data science, statistics, machine learning, and artificial intelligence located in Frankfurt, Zurich, and Vienna. Sign up for our NEWSLETTER and receive reads and treats from the world of data science and AI. If you have questions or suggestions, please write us an e-mail at blog(at)statworx.com.