Capital One Fellows for AI 2025-2026
Capital One Fellowship award recipients for the 2025-2026 academic year
Capital One is proud to support PhD students in their academic and research pursuits through our valued academic partnerships. We offer fellowship awards to exceptional students who demonstrate academic excellence, innovative leadership and a commitment to cutting-edge research.
This year, our support extends across multiple partnerships, including our Academic Centers of Excellence at Columbia University, University of Southern California and University of Illinois Urbana-Champaign, as well as through our long-standing collaboration with the University of Virginia.
Congratulations to all the deserving fellowship recipients!
Columbia University, Center for Responsible AI and Financial Innovation (CAIRFI)
Tao Long
Tao is a Computer Science PhD student at Columbia University, advised by Professor Lydia Chilton. Working on human–AI interaction, Tao’s research explores how humans collaborate with generative AI systems and AI agents over time, focusing on making AI tools more usable, useful, trustworthy, reliable and seamlessly integrated into everyday productivity practices. Specifically, Tao builds human–AI and agentic systems that reduce cognitive and temporal effort for challenging or complex tasks, offload work to AI while preserving human ownership and authenticity, and fit naturally into the existing processes of writers, developers, designers, event organizers and many other communities. Before starting their PhD, Tao earned a BS summa cum laude from Cornell University.
University of Southern California (USC), Center for Responsible AI and Decision Making in Finance
Duygu Nur Yaldiz
Duygu is a fourth-year PhD student at USC, advised by Professor Sai Praneeth Karimireddy and Professor Salman Avestimehr. Before starting her PhD, she earned her BSc in Computer Science at Bilkent University. Her research interests include trustworthy LLMs, uncertainty estimation and continual learning.
Ke Xu
Ke is a third-year PhD student in the Industrial and Systems Engineering department at USC, advised by Professor John Carlsson. Ke is researching responsible AI in finance by applying topological data analysis to GraphRAG. In this project, she is currently focused on improving query accuracy and AI decision-making through a deeper understanding of data relationship structures. During her PhD, she has conducted research in transportation analytics with a focus on last-mile delivery.
Tejas Srinivasan
Tejas is a fifth-year PhD student in the GLAMOR Lab at USC, advised by Professor Jesse Thomason, working on user-centric approaches to building reliable LLM-based systems. His research lies at the intersection of human–AI collaboration, uncertainty quantification and training human-centered LLM agents. In addition to being a Capital One CREDIF Fellow, he was previously an Amazon ML Fellow.
Yutai Zhou
Yutai is a third-year PhD student at USC, advised by Professor Erdem Bıyık of Lira Lab. He is primarily interested in aligning generative models (e.g., robot policies, web agents) with human values, with applications predominantly in decision-making and robot learning settings. His work spans interactive imitation learning, preference-based reward modeling, offline reinforcement learning and Bayesian deep learning. Currently, he is working on evaluation methods for data attribution and on improving robustness by incorporating human feedback signals. Prior to joining USC, he spent a few years in Boston as a research engineer at MIT Lincoln Laboratory, working on human–AI coordination. He received his BS in Computer Science from the University of Florida in 2019, where he worked with Professor Alina Zare and interned at Sandia National Labs.
Yuan Xia
Yuan is a final-year PhD candidate in Computer Science at USC, advised by Professor Jyotirmoy V. Deshmukh. Yuan’s research integrates large language models with formal methods to make complex software systems safer and more reliable. Her work spans likely-invariant synthesis for distributed systems, improving the reasoning capabilities of LLMs and video understanding in cyber-physical systems. She has authored several peer-reviewed papers, presented her work at VMCAI and FMCAD, and interned at Nokia Bell Labs and Microsoft.
Yuxin Yang
Yuxin is a third-year PhD student in Computer Science at USC, advised by Professor Viktor Prasanna. Her research spans Graph Neural Networks (GNNs) and LLMs. She is currently working on building agentic systems for LLMs in text and graph domains. In collaboration with Capital One, she is developing an agentic RAG framework with adaptive test-time search control, exploring how multi-agent reasoning and dynamic computation allocation can improve retrieval and generation quality. Her future work aims to extend these ideas toward general agentic AI systems capable of adaptive decision-making.
Shahab Sepehri
Shahab is a third-year PhD student at USC, advised by Professor Mahdi Soltanolkotabi. His research focuses on improving the reasoning capabilities of generative models, including multimodal LLMs and diffusion models. He designs new benchmarks to systematically identify their failure modes and develops novel architectures and algorithms to overcome these limitations. He is also exploring applications of these reasoning approaches in time-series forecasting, particularly for financial prediction tasks.
University of Illinois Urbana-Champaign (UIUC), Center for Generative AI Safety, Knowledge Systems and Cyber Security (ASKS)
Cheng Qian
Cheng is a second-year PhD student at UIUC, advised by Professor Heng Ji. He received his BS degree from the Department of Computer Science at Tsinghua University, where he was advised by Professor Zhiyuan Liu. His current research focuses on tool-augmented reasoning with LLMs and AI agents. He has published papers at top-tier conferences such as ACL, EMNLP, COLM, COLING, NAACL and ICLR, including more than fifteen as first or co-first author, and his Google Scholar citations exceed 1,100. He serves as an Area Chair for ACL and EMNLP, and as a reviewer for AAAI, EMNLP, NeurIPS and COLM.
Hyeonjeong Ha
Hyeonjeong is a second-year PhD student at UIUC, advised by Professor Heng Ji. Her research enhances the visual perception of multimodal LLMs through fine-grained, structured understanding, aiming to bridge human-like perception and trustworthy reasoning for more reliable real-world applications. She has published as first author in top-tier conferences including ACL and NeurIPS, and actively contributes to the research community as a reviewer for leading venues.
Chenyuan Yang
Chenyuan is a fourth-year PhD student at UIUC, advised by Professor Lingming Zhang. His research bridges software systems and machine learning to enhance the reliability and quality of large-scale systems through testing, reasoning and verification techniques powered by ML and LLMs. His work has been published in top-tier conferences spanning systems (SOSP, ASPLOS), software engineering (ICSE, FSE), programming languages (OOPSLA) and machine learning (ICLR), leading to the detection of over 650 critical bugs in ML systems, C/C++ compilers and operating systems, including 40 CVEs. He earned his BS degree with honors from Nanjing University, advised by Professor Yanyan Jiang.
University of Virginia (UVA), School of Data Science
Mengxuan Hu
Mengxuan is a third-year PhD student in data science at UVA, advised by Dr. Sheng Li. Her prior research has primarily focused on improving the safety alignment of LLMs, particularly in high-stakes scenarios. Building on this foundation, she is excited to extend her work into the financial domain through her funded project. In particular, she is drawn to the integrative decoding (ID) framework, which provides a principled way to promote self-consistency and factuality in LLM outputs without requiring model retraining. She is especially interested in developing reinforcement learning-based decoding policies tailored to financial tabular reasoning tasks.
Ding Zhang
Ding is a first-year PhD student in data science at UVA, advised by Dr. Chirag Agarwal. He is interested in modeling graph-structured data using graph representation learning techniques, for example by applying GNN models. Additionally, as AI systems become increasingly integrated into critical decision-making processes, such as health care, finance and public policy, it is essential that these systems are not only accurate but also interpretable and reliable. Building models that can clearly explain their predictions is crucial for fostering user trust and ensuring ethical deployment, and this area is key to bridging the gap between technical innovation and real-world, responsible adoption of AI.


