Overview
The Explainability module provides token attribution analysis to understand which parts of the input influence model outputs. It uses the interpreto library for token-level attribution.

Use case
Explainability helps you understand:
- Which tokens in the input have the most influence on the output
- Whether the model is attending to the right parts of the context
- Potential biases in token-level attention patterns
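To make the idea of token attribution concrete, here is a minimal toy sketch of occlusion-based attribution: remove each token in turn and measure how much the model's score drops. This is illustrative only; the stand-in scoring function and its word weights are invented for the example and do not reflect the interpreto API or gaussia internals.

```python
def score(tokens):
    # Stand-in "model": sums per-token sentiment weights.
    # These weights are invented purely for demonstration.
    weights = {"great": 2.0, "terrible": -2.0, "movie": 0.1}
    return sum(weights.get(t, 0.0) for t in tokens)

def occlusion_attribution(tokens):
    # Attribution of each token = drop in score when that token is occluded.
    full = score(tokens)
    return {t: full - score(tokens[:i] + tokens[i + 1:])
            for i, t in enumerate(tokens)}

tokens = ["the", "movie", "was", "great"]
attributions = occlusion_attribution(tokens)
print(attributions)  # "great" receives the largest attribution
```

Real attribution methods (gradients, attention, perturbation) are more sophisticated, but the output has the same shape: a relevance score per input token.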
Usage
Requires the explainability extra: pip install "gaussia[explainability]"