Gabriele Prato
I am a final-year PhD candidate at Mila, University of Montreal, and expect to graduate in the spring or summer of 2025. My research focuses on the fundamental aspects of Large Language Models (LLMs). In particular, I have explored how data segmentation influences critical capabilities such as parametric knowledge retrieval and latent multi-hop reasoning.

Beyond investigating these foundational dynamics, my work seeks to address the inherent limitations of LLMs. For instance, I aim to develop methods that allow these models to consolidate their knowledge during training, enhancing their utility and impact.

I am deeply committed to advancing the field of machine learning through open-ended research and academic exploration. My goal is to produce impactful, publishable work that contributes meaningfully to the scientific community. I am seeking research-focused roles in industry that align with these values and support my passion for driving innovation.
Publications
Do Large Language Models Know How Much They Know?
EMNLP 2024
EpiK-Eval: Evaluation for Language Models as Epistemic Models
EMNLP 2023
PatchBlender: A Motion Prior for Video Transformers
NeurIPS 2022 Workshop, Vision Transformers: Theory and Applications
Scaling Laws for the Few-Shot Adaptation of Pre-trained Image Classifiers
ICML 2021 Workshop, Uncertainty & Robustness in Deep Learning
Fully Quantized Transformer for Machine Translation
Findings of EMNLP 2020
Towards Lossless Encoding of Sentences
ACL 2019
Blog Posts
EpiK-Eval and the Role of Knowledge Consolidation in Language Models
December 22, 2023