Publications

Foundational research advancing the state of the art in AI.

Capital One’s Applied AI research

FB-RAG: Improving RAG with forward and backward lookup

A new training-free framework based on a simple yet powerful forward-looking strategy. (AACL)

Integrating sequential and relational modeling

A collection of public datasets and prediction tasks that incorporate personal and relational events. (LoG)

Tuning-free LLM can build a strong recommender

A novel framework that constructs an intent-centric knowledge graph where both users and items are explicitly linked. (LoG)

Leveraging parameter space symmetries

An alignment-first strategy for transferring advanced reasoning skills to a non-reasoning model. (NeurIPS)

Play by the type rules: inferring constraints for small LMs

An efficient solution to enforce the well-typedness of LLM functions. (EurIPS)

Continual pre-training of MoEs: how robust is your router?

A systematic study of Mixture of Experts (MoE) continual pre-training. (NeurIPS)

T1: a tool-oriented conversational dataset

A conversational dataset specifically designed to capture and manage inter-tool dependencies across diverse domains. (NeurIPS)

R3: robust rubric-agnostic reward models

A novel reward modeling framework that is rubric-agnostic and generalizable, and that provides reasoned score assignments. (NeurIPS)

SoTA with less: MCTS-guided sample selection

Visual reasoning models that achieve SoTA performance using an order of magnitude fewer training samples. (NeurIPS)