𝐄𝐱𝐩π₯πšπ’π§πšπ›π’π₯𝐒𝐭𝐲 𝐯𝐬. πˆπ§π­πžπ«π©π«πžπ­πšπ›π’π₯𝐒𝐭𝐲

Understanding the difference between Explainability & Interpretability

Padmini Soni

11/2/2024

Explainability & interpretability are crucial concepts in AI, especially as AI systems are increasingly used for decision-making. Although often used interchangeably, understanding their distinctions is vital for building trust and transparency in AI.

πˆπ§π­πžπ«π©π«πžπ­πšπ›π’π₯𝐒𝐭𝐲 is about understanding how an AI system works internally. It focuses on making the model’s internal mechanisms, such as its weights, features, & algorithms, understandable to humans. This deep understanding allows data scientists & engineers to improve model accuracy, identify potential biases, & ensure compliance with ethical principles and regulations.

𝐄𝐱𝐩𝐥𝐚𝐢𝐧𝐚𝐛𝐢𝐥𝐢𝐭𝐲 focuses on explaining why an AI system made a specific prediction or decision. It’s about conveying the model’s reasoning in a way that humans, especially end-users, can understand. Explainability doesn’t require understanding the model’s internal workings; instead, it relies on analyzing the relationship between input data & model output.
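One common family of explainability techniques probes a black-box model by perturbing inputs and watching how the output changes (the spirit behind methods like LIME). A minimal sketch, using an illustrative stand-in for the opaque model:

```python
# A minimal sketch of post-hoc explainability: we never look inside
# the model, only at how its output responds to input changes.

def black_box(features):
    # Stand-in for an opaque model we cannot inspect.
    return 2.0 * features["a"] - 1.0 * features["b"]

def explain(model, features, delta=1.0):
    """Estimate each feature's influence by nudging it and re-querying."""
    base = model(features)
    influence = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] += delta
        influence[name] = model(perturbed) - base
    return influence

print(explain(black_box, {"a": 3.0, "b": 1.0}))
# {'a': 2.0, 'b': -1.0}: 'a' pushes the output up, 'b' pushes it down.
```

Note that `explain` only calls the model; it never reads its weights. That is exactly the input-output relationship the paragraph above describes.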

Let me break this down with a simple math example:

Interpretability is like solving 243 × 56 by hand:

243 × 56
————

Step 1: Multiply by 6
3 × 6 = 18, write 8, carry 1
4 × 6 = 24, plus carried 1 = 25, write 5, carry 2
2 × 6 = 12, plus carried 2 = 14, write the full 14
Result: 1458

Step 2: Multiply by 50
3 × 5 = 15, write 5, carry 1
4 × 5 = 20, plus carried 1 = 21, write 1, carry 2
2 × 5 = 10, plus carried 2 = 12, write the full 12
Add a trailing 0 (because we multiplied by 50, not 5)
Result: 12150

Step 3: Add both results
1458
12150
β€”β€Šβ€”β€Šβ€”
13,608

You see EVERY single step. Complete transparency!
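The walkthrough above can be sketched as code that prints every partial product, so nothing about the computation is hidden:

```python
# Long multiplication with every intermediate step traced -- the
# "interpretable" version of 243 x 56.

def long_multiply(a, b):
    partials = []
    # Walk the digits of b from least to most significant.
    for place, digit in enumerate(str(b)[::-1]):
        partial = a * int(digit) * 10 ** place
        partials.append(partial)
        print(f"{a} x {digit} x 10^{place} = {partial}")
    total = sum(partials)
    print(f"sum of partials = {total}")
    return total

long_multiply(243, 56)
# 243 x 6 x 10^0 = 1458
# 243 x 5 x 10^1 = 12150
# sum of partials = 13608
```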

Explainability is like a smart calculator that can explain its answer:

Input: 243 × 56
Output: 13,608

Calculator explains:
β€œI followed multiplication rules and got 13,608.
The number is large because both inputs were large.
If you used a smaller number like 10, you’d get a smaller result.”

It gives an answer and general reasoning, but hides the actual computation.
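The smart calculator can be sketched the same way: it returns the answer plus a plain-language rationale, while the digit-by-digit computation stays hidden (the wording and the 10,000 threshold for "large" are illustrative choices):

```python
# The "smart calculator": answer plus a human-friendly rationale,
# with the actual computation kept internal.

def explaining_calculator(a, b):
    result = a * b  # the internal computation, never shown to the user
    size = "large" if result >= 10_000 else "modest"
    explanation = (
        f"I followed multiplication rules and got {result}. "
        f"The number is {size} because of the size of the inputs."
    )
    return result, explanation

result, why = explaining_calculator(243, 56)
print(result)  # 13608
print(why)
```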

The decision of whether to prioritize interpretability or explainability often depends on the specific application and its performance requirements.

Interpretability, which requires understanding a model’s internal workings, often comes at the cost of performance, because inherently interpretable models (such as linear models or small decision trees) are simpler than deep networks.

Explainability, which aims to explain the model’s reasoning without necessarily understanding its internal mechanisms, can be applied to complex models without sacrificing performance.

So how do we decide?

If transparency is a must-have, opt for an interpretable model, even if it means sacrificing some performance. This is crucial in heavily regulated industries that require a clear understanding of how model outputs are produced.

However, if performance is key and complex data demands a sophisticated model, prioritize explainability. For instance, with large datasets like images or text, neural networks might be necessary for accuracy, & explainability methods can help provide insights into their decisions.