Responsible AI
As with any new technology, generative AI creates new challenges. Potential users must weigh the promise of the technology against its risks. Responsible AI is the practice of designing, developing, and using AI technology with the goal of maximizing benefits and minimizing risks. At AWS, we define responsible AI using a core set of dimensions that we assess and update over time as AI technology evolves:
- Fairness: Considering impacts on different groups of stakeholders.
- Explainability: Understanding and evaluating system outputs.
- Privacy and security: Appropriately obtaining, using, and protecting data and models.
- Safety: Preventing harmful system output and misuse.
- Controllability: Having mechanisms to monitor and steer AI system behavior.
- Veracity and robustness: Achieving correct system outputs, even with unexpected or adversarial inputs.
- Governance: Incorporating best practices into the AI supply chain, including providers and deployers.
- Transparency: Enabling stakeholders to make informed choices about their engagement with an AI system.
Some elements of the responsible AI framework, such as veracity and robustness, are weighted more heavily for generative AI systems than for traditional machine learning solutions. In either case, implementing responsible AI requires a systematic review of the system along each of the defined dimensions.
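One way to make such a systematic review concrete is a simple checklist that walks every dimension and records an explicit finding for each. The sketch below is purely illustrative: the dimension names come from the list above, but the sample review questions, the Finding structure, and the review function are hypothetical placeholders, not an AWS tool or API.

```python
from dataclasses import dataclass

# The eight responsible AI dimensions from the list above. The sample
# review questions attached to each are illustrative placeholders,
# not an official checklist.
DIMENSIONS = {
    "Fairness": "Have impacts on different groups of stakeholders been assessed?",
    "Explainability": "Can system outputs be understood and evaluated?",
    "Privacy and security": "Are data and models appropriately obtained, used, and protected?",
    "Safety": "Are harmful outputs and misuse prevented?",
    "Controllability": "Are there mechanisms to monitor and steer system behavior?",
    "Veracity and robustness": "Are outputs correct, even for unexpected or adversarial inputs?",
    "Governance": "Are best practices in place across providers and deployers?",
    "Transparency": "Can stakeholders make informed choices about engaging with the system?",
}

@dataclass
class Finding:
    """One reviewer judgment for a single dimension (hypothetical structure)."""
    dimension: str
    question: str
    satisfied: bool
    notes: str = ""

def review(answers: dict[str, tuple[bool, str]]) -> list[Finding]:
    """Walk every dimension and record a finding; unanswered dimensions fail."""
    findings = []
    for dimension, question in DIMENSIONS.items():
        satisfied, notes = answers.get(dimension, (False, "not yet reviewed"))
        findings.append(Finding(dimension, question, satisfied, notes))
    return findings

if __name__ == "__main__":
    # Example: a partial review of a hypothetical chat assistant.
    answers = {
        "Safety": (True, "content filters tested against a misuse test suite"),
        "Veracity and robustness": (False, "hallucination rate above target on eval set"),
    }
    for f in review(answers):
        status = "PASS" if f.satisfied else "OPEN"
        print(f"[{status}] {f.dimension}: {f.notes or f.question}")
```

The point of structuring the review this way is coverage: every dimension yields an explicit finding, so unexamined dimensions surface as open items rather than being silently skipped.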