[Token Assorted: Mixing Latent and Text Tokens for Improved Language Model Reasoning](https://somwrks.notion.site/Token-Assorted-Mixing-Latent-and-Text-Tokens-for-Improved-Language-Model-Reasoning-319e4acf846e80c08d1ae79c7d6f354b)

[Do Latent Tokens Think? A Causal and Adversarial Analysis of Chain-of-Continuous-Thought](https://somwrks.notion.site/Do-Latent-Tokens-Think-A-Causal-and-Adversarial-Analysis-of-Chain-of-Continuous-Thought-319e4acf846e80ab97bddf7f3f259a47)

Training Large Language Models to Reason in a Continuous Latent Space

[Latent Thinking Optimization: Your Latent Reasoning Language Model Secretly Encodes Reward Signals in Its Latent Thoughts](https://somwrks.notion.site/LATENT-THINKING-OPTIMIZATION-YOUR-LATENT-REASONING-LANGUAGE-MODEL-SECRETLY-ENCODES-REWARD-SIGNALS-I-319e4acf846e803da89cdd74d4d77455)

[LaDiR: Latent Diffusion Enhances LLMs for Text Reasoning](https://somwrks.notion.site/LADIR-LATENT-DIFFUSION-ENHANCES-LLMS-FOR-TEXT-REASONING-319e4acf846e800d93ffe09cadc0e802)

Preventing Language Models from Hiding Their Reasoning

Reasoning by Superposition: A Theoretical Perspective on Chain of Continuous Thought

Unnatural Languages Are Not Bugs but Features for LLMs

Transcoders Find Interpretable LLM Feature Circuits

Soft Thinking: Unlocking the Reasoning Potential of LLMs in Continuous Concept Space

Latent Collaboration in Multi-Agent Systems

Skill Reuse as Compression in Agentic RL

Thinking Like Transformers

The Geometry of Reasoning: Flowing Logics in Representation Space

LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations

Sparse Autoencoders Find Highly Interpretable Features in Language Models