What’s it about?
This white paper examines the core elements of AI governance and discusses five considerations that should be at the heart of any AI governance framework.
As artificial intelligence continues to permeate ever more areas of the workplace, the technology has moved from research into the heart of corporate strategy. Making AI product delivery safe, sustainable, and trustworthy requires internal policies and established processes that address ethical principles, legal requirements, and quality standards for AI. An AI strategy must therefore address the following elements of AI governance:
Data quality & data protection: The quality and validity of the data used in the modeling process must be guaranteed. Sensitive data must be protected from unauthorized access and used only for appropriate purposes.
External & internal risk potential: AI projects involve ethical, legal, and financial risks. Identifying and addressing these risks must be anchored in the development process itself.
Fairness & ethics: Historical biases in training data must be identified and mitigated to ensure fair predictions.
Responsibility & accountability: Responsibility for internal guidelines and governance mechanisms must be clearly and bindingly assigned to enable accountability. In addition, monitoring and control mechanisms are needed to ensure compliance with the guidelines that have been set.
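To make the fairness element above concrete: detecting bias in model outputs typically starts with a group fairness metric. The following is a minimal sketch, not part of the white paper, of one common metric, the demographic parity difference, i.e. the gap in positive-prediction rates between two groups. The function name, group labels, and toy data are all illustrative assumptions.

```python
# Illustrative fairness check: demographic parity difference.
# All names and data here are invented for this sketch.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between groups "A" and "B".

    predictions: list of 0/1 model outputs
    groups: list of group labels ("A" or "B"), aligned with predictions
    """
    rates = {}
    for g in ("A", "B"):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return abs(rates["A"] - rates["B"])

# Toy example: group A receives positive predictions far more often.
preds = [1, 1, 1, 0, 0, 1, 0, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, grps))  # 0.75 vs. 0.25 -> 0.5
```

A value near 0 indicates similar treatment of both groups; a large gap such as the 0.5 above would flag the model for review under the governance guidelines described here. In practice, which metric is appropriate (demographic parity, equalized odds, etc.) depends on the use case and applicable legal requirements.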