Awni Altabaa
 
 Kline Tower, Office 1117
219 Prospect St
New Haven, CT 06511
Hi! My name is Awni. Welcome to my homepage.
I am a PhD student in the Department of Statistics & Data Science at Yale University studying the foundations of machine learning. I’m grateful to be advised by Prof. John Lafferty.
My research interests lie broadly at the intersection of machine learning, statistics, and computer science. More specifically, my research aims to study questions of the following flavor:
- What are the architectural mechanisms and inductive biases necessary for efficient learning and strong generalization in different domains?
- What are the fundamental theoretical limits on what can be learned under different learning paradigms?
- To what degree can neural networks learn functions and algorithms that can generalize compositionally to out-of-distribution inputs?
My work tackles these questions through complementary empirical investigation and theoretical analysis. My current research focus is on algorithmic generalization and reasoning in machine learning models.
selected publications
- Unlocking Out-of-Distribution Generalization in Transformers via Recursive Latent Space Reasoning. Under review, 2025.
- Disentangling and Integrating Relational and Sensory Information in Transformer Architectures. International Conference on Machine Learning (ICML), 2025.
- Learning Hierarchical Relational Representations through Relational Convolutions. Transactions on Machine Learning Research (TMLR), 2024.
- The Relational Bottleneck as an Inductive Bias for Efficient Abstraction. Trends in Cognitive Science (TICS), 2024.
- Abstractors and Relational Cross-Attention: An Inductive Bias for Explicit Relational Reasoning in Transformers. International Conference on Learning Representations (ICLR), Apr 2024.