An Intuitive Explanation of Sparse Autoencoders for LLM Mech Interpretability

Sparse Autoencoders (SAEs) have recently become popular for interpreting machine learning models (although SAEs as an architecture have been around since 1997). Machine learning models and LLMs are becoming more powerful and useful, but they remain black boxes: we don’t understand how they do the things they are capable of. It would clearly be useful if we could understand how they work.
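To make the idea concrete, here is a minimal sketch of what an SAE computes: an encoder maps a model activation to a wider, mostly-zero feature vector, and a decoder reconstructs the activation from those features, with an L1 penalty encouraging sparsity. The dimensions, parameter names, and random initialization below are illustrative assumptions, not any particular library's implementation; a real SAE would be trained on millions of activations.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_sae = 8, 32  # hypothetical sizes: activation dim, SAE dictionary size

# Randomly initialized SAE parameters (a trained SAE learns these from data).
W_enc = rng.normal(scale=0.1, size=(d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(scale=0.1, size=(d_sae, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x, l1_coeff=1e-3):
    """Encode one activation vector into sparse features and reconstruct it."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU -> non-negative, mostly-zero features
    x_hat = f @ W_dec + b_dec               # linear decoder reconstructs the activation
    # Training loss: reconstruction error plus an L1 sparsity penalty on features.
    loss = np.mean((x - x_hat) ** 2) + l1_coeff * np.abs(f).sum()
    return f, x_hat, loss

x = rng.normal(size=d_model)  # stand-in for one LLM activation vector
features, reconstruction, loss = sae_forward(x)
```

The key point is the shape change: the feature vector is wider than the activation (here 32 vs. 8), so with the sparsity penalty each active feature can, in principle, correspond to a single interpretable concept.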
