
Model transparency

ML owners require a degree of model transparency to match the AI/ML method to the business objective. This can be a high-level understanding that helps them conceptualize the impact of the ML model in real-world environments, or a deep understanding of how the method functions internally. The required level of transparency depends on how much knowledge is needed to understand the internal mechanics of the ML algorithm. In the initial stages of AI/ML development, you should weigh the trade-off between interpretability and performance with regulatory requirements in mind.

Occasionally, regulators require evidence to justify how an ML model works. In these scenarios, the technical practitioner must demonstrate how the ML model functions on the relevant data and provide artifacts as evidence. These artifacts can be used to explain the model's positive or negative impacts on real-world processes.

If regulatory requirements are present, start by investigating approaches that use interpretable algorithms. Data scientists can use AI/ML frameworks such as scikit-learn with SageMaker to build an interpretable AI/ML model. With SageMaker notebooks, AI/ML practitioners can document each model-building step, from exploratory data analysis to model building to model deployment.
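For illustration, the following is a minimal sketch of what such an interpretable model might look like in a notebook, using scikit-learn's logistic regression on a public sample dataset. The dataset and preprocessing choices are illustrative assumptions, not prescribed by this whitepaper:

```python
# A minimal sketch of an interpretable model built with scikit-learn,
# as it might appear in a SageMaker notebook. The dataset is a public
# stand-in for real business data.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Load an example dataset in place of the real business dataset.
data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Standardize features, then fit a logistic regression whose coefficients
# can be inspected and documented directly in the notebook.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")

# Coefficients on standardized features indicate each feature's direction
# and relative strength of influence on the prediction.
coefs = pd.Series(model[-1].coef_[0], index=X.columns).sort_values()
print(coefs)
```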

Using SageMaker services to communicate model explainability

Additionally, the AI/ML practitioner can communicate specific model parameters (for example, the decision path of a decision tree model) and demonstrate the method in a SageMaker notebook. The notebook can be saved and pushed to an internal repository, which is archived and shared with regulatory teams or AI/ML owners, as depicted in the preceding figure.
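As a hedged example, the sketch below uses scikit-learn's export_text to render a decision tree's learned decision path as a plain-text artifact that can be saved with the notebook; the dataset and output file name are placeholders:

```python
# A minimal sketch of documenting decision tree parameters in a notebook,
# assuming a tree-based interpretable model was chosen. The dataset is a
# stand-in for real business data.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True, as_frame=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the learned decision path as plain text that can be
# reviewed by regulatory teams or AI/ML owners.
rules = export_text(tree, feature_names=list(X.columns))
print(rules)

# Persist the rules alongside the notebook so they can be committed to an
# internal repository and archived as an explainability artifact.
with open("decision_tree_rules.txt", "w") as f:
    f.write(rules)
```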

When an interpretable model's performance cannot meet business objectives but model transparency is still required, you need to either revisit the other pillars mentioned in this section or pursue a model-agnostic approach. To gain a high-level understanding of the AI/ML model, business owners should ask AI/ML practitioners to use model-agnostic approaches to answer common real-world questions such as:

  • Why did this email get flagged as spam?

  • How did this person’s loan application get rejected?

  • What data features are causing the model to recommend these product types?

These types of questions can be answered with model-agnostic approaches such as feature attribution, local interpretable model-agnostic explanations (LIME), and surrogate models.
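The following sketch illustrates one of these techniques, a global surrogate model: a shallow, interpretable decision tree is trained to mimic the predictions of an opaque model. The gradient boosting model and dataset here are stand-ins chosen only for illustration:

```python
# A minimal sketch of the global surrogate approach: fit a simple,
# interpretable model to the predictions of a black-box model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Black-box model whose behavior we want to approximate.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
black_box_preds = black_box.predict(X)

# Shallow decision tree trained to mimic the black-box predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box_preds)

# Fidelity: how closely the surrogate reproduces the black-box behavior.
fidelity = accuracy_score(black_box_preds, surrogate.predict(X))
print(f"Surrogate fidelity: {fidelity:.3f}")
print(export_text(surrogate, feature_names=list(X.columns)))
```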

On AWS, AI/ML practitioners can use Amazon SageMaker Clarify, which uses Shapley values to help answer how different variables influence model behavior. These techniques provide explainability that helps business leaders reach the level of model transparency needed to understand and meet their business goals.
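A simplified sketch of configuring a SageMaker Clarify explainability job with the SageMaker Python SDK is shown below. The IAM role, S3 paths, model name, and instance types are placeholder assumptions, and optional settings (such as an explicit SHAP baseline) are omitted; refer to the SageMaker Clarify documentation for the complete configuration:

```python
# A hedged sketch of a SageMaker Clarify explainability job using SHAP
# (Shapley values). Role, S3 paths, and model name are placeholders.
from sagemaker import Session, clarify

session = Session()
role = "arn:aws:iam::111122223333:role/example-sagemaker-role"  # placeholder

clarify_processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://example-bucket/train.csv",   # placeholder
    s3_output_path="s3://example-bucket/clarify-output",  # placeholder
    label="target",
    dataset_type="text/csv",
)

model_config = clarify.ModelConfig(
    model_name="example-model",  # placeholder for a deployed SageMaker model
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
)

# SHAP configuration: num_samples controls how Shapley values are estimated;
# the baseline is omitted here, in which case Clarify can compute one.
shap_config = clarify.SHAPConfig(
    num_samples=100,
    agg_method="mean_abs",
)

clarify_processor.run_explainability(
    data_config=data_config,
    model_config=model_config,
    explainability_config=shap_config,
)
```

The resulting explainability report attributes model behavior to individual features, which practitioners can then translate into answers to the business questions listed earlier.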

Although business leaders are not required to fully understand the ML algorithm, high-level knowledge of how the model behaves with the given data can help them conceptualize the model's implementation. This gives business leaders context and an intuitive understanding of any model shortcomings. To achieve this, business leaders should ask technical practitioners to explain the model in human terms related to the business objectives being addressed.