Akash Sharma

Paper Reading

• How Does Information Bottleneck Help Deep Learning?
• Language Models for Text Classification: Is In-Context Learning Enough?
• Large Language Models Struggle to Learn Long-Tail Knowledge

    Daily Notes

    • Jul 17, 2024

      A note on a note on mechanistic interpretability, variables, and importance of interpretable bases

    • Jul 16, 2024

      Notes on Toy Models of Superposition

    • Jun 29, 2024

      Masked Language Modeling

    • Jun 28, 2024

      Linear Decoding and Deep Nets with Neural Collapse

    • Jun 25, 2024

      Some definitions in Information Theory

