CREDIF Fall '25: USC AI Fellows & Awards
Capital One is proud to announce eight AI fellows for the Center for AI and Responsible Decision-Making in Finance (CREDIF).
Each academic year, our Academic Centers of Excellence grant research awards to leading faculty. This year, we are proud to announce eight recipients from the Center for AI and Responsible Decision-Making in Finance (CREDIF) at the University of Southern California (USC). These awards support innovative research projects in various AI fields, including generative LLMs, reasoning and agents.
We are pleased to share the following research projects that are receiving awards for the current academic year:
Deployment of Uncertainty Estimation Methods for Hallucination Detection and Effective Test Time Computing
Salman Avestimehr is a Dean's Professor of Electrical and Computer Engineering and Professor of Computer Science at USC. He holds key leadership roles, serving as the inaugural director of the USC-Amazon Center for Secure and Trusted Machine Learning (Trusted AI) and the director of the Information Theory and Machine Learning (vITAL) research lab at USC. He is also a prominent figure in the industry as the CEO and co-founder of FedML, an open-source research library and benchmarking ecosystem for federated machine learning. He was also an Amazon Scholar in 2021.
About this project
This project examines how well large language models (LLMs) can recognize and express their own uncertainty, an ability that is crucial to developing reliable and transparent AI systems.
Our main finding is that models do encode rich uncertainty signals within their internal representations during pretraining, but standard instruction tuning or fine-tuning does not teach them how to use these signals in decision-making. As a result, while the models "know" when they are likely to be wrong internally, they often fail to behave or communicate accordingly. We perform rigorous analyses to reveal this gap between latent uncertainty and expressed confidence. The work further explores how explicit uncertainty-aware training can bridge this gap, allowing models to make more calibrated and reliable predictions.
Ultimately, this research aims to transform uncertainty from a hidden property of LLMs into an actively used skill, laying the groundwork for safer, self-aware and more accountable AI systems.
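To make the gap concrete, here is a minimal sketch (invented numbers, not the project's code or data) contrasting a model's internal confidence, read off its output distribution, with the confidence it verbalizes:

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def internal_confidence(logits):
    """Internal uncertainty signal: probability mass on the top candidate."""
    return max(softmax(logits))

# Hypothetical answers: (logits over candidate answers, verbalized confidence, correct?)
answers = [
    ([4.0, 0.5, 0.2], 0.95, True),   # internally sure, and says so
    ([1.1, 1.0, 0.9], 0.90, False),  # internally uncertain, yet claims high confidence
    ([3.0, 0.4, 0.1], 0.90, True),
]

for logits, stated, correct in answers:
    internal = internal_confidence(logits)
    print(f"internal={internal:.2f} stated={stated:.2f} "
          f"gap={stated - internal:+.2f} correct={correct}")
```

In the second row the latent signal is weak while the stated confidence stays high; uncertainty-aware training aims to close exactly this kind of gap.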
Applying Topological Data Analysis to GraphRAG for Responsible AI in Finance
John Gunnar Carlsson is the Kellner Family Associate Professor in the Daniel J. Epstein Department of Industrial and Systems Engineering at USC. His PhD is in computational mathematics from the Institute for Computational and Mathematical Engineering (ICME) at Stanford University, and his Bachelor of Arts is in mathematics and music from Harvard University. He is a specialist in applications of computational geometry to problems in optimization, such as the use of topological data analysis (TDA) to improve AI applications.
About this project
The purpose of this project is to enhance the reliability of AI systems in financial decision-making by integrating TDA with graph retrieval-augmented generation (GraphRAG). By applying TDA to graph embeddings from financial knowledge graphs, we create visual representations that reveal query ambiguities and knowledge gaps, which are primary sources of AI hallucinations. The topological structure of retrieved information provides computable indicators of response quality: concentrated clusters suggest reliable answers, while scattered patterns signal ambiguity or insufficient knowledge. This enables automated self-checking mechanisms where AI systems assess their own confidence based on the geometry of the underlying knowledge space. The resulting framework provides interpretable visual diagnostics and a mathematically principled foundation for responsible AI deployment in finance.
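As a rough illustration of the geometric intuition (a toy sketch, not the project's pipeline), 0-dimensional persistence can be approximated by the merge scales of a minimum spanning tree over retrieved embeddings: small merge scales mean one concentrated cluster, large ones mean scattered, ambiguous retrievals.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mst_edge_lengths(points):
    """Prim's algorithm; MST edge lengths are the scales at which
    clusters of points merge (0-dimensional persistence, roughly)."""
    best = {i: dist(points[0], points[i]) for i in range(1, len(points))}
    lengths = []
    while best:
        j = min(best, key=best.get)
        lengths.append(best.pop(j))
        for k in list(best):
            d = dist(points[j], points[k])
            if d < best[k]:
                best[k] = d
    return sorted(lengths)

def dispersion(points):
    """Mean merge scale: low for one tight cluster, high for scattered retrievals."""
    lengths = mst_edge_lengths(points)
    return sum(lengths) / len(lengths)

tight = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]      # concentrated retrievals
scattered = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]  # ambiguous retrievals
print(dispersion(tight), dispersion(scattered))
```

A system could threshold such a score to decide when to answer directly and when to flag a query as ambiguous or under-supported.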
MaSC: Multi-Agent Test-time Scaling for Retrieval Augmented Generation for Question Answering
Viktor K. Prasanna is the Charles Lee Powell Chair in Engineering and Professor of Electrical and Computer Engineering and Computer Science at USC. He leads the Center for Energy Informatics and the Data Science Lab, where his research spans high-performance computing, reconfigurable architectures, parallel and distributed systems and machine learning systems. Prof. Prasanna is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), the Association for Computing Machinery (ACM) and the American Association for the Advancement of Science (AAAS), and has received numerous awards for his contributions to scalable computing and data-driven intelligence systems. He is also actively involved in bridging academic research and industry applications, including large-scale AI systems and intelligent infrastructure.
About this project
Multi-Agent Test-time Scaling for Retrieval Augmented Generation for Question Answering (MaSC) addresses the challenge of complex multi-hop question-answering by introducing an adaptive multi-agent framework that scales computational effort at test-time based on query complexity. Unlike traditional RAG systems that apply fixed retrieval strategies, MaSC employs LLM-based controllers that orchestrate iterative cycles of retrieval, answer generation, evaluation and query rewriting, dynamically selecting between sparse, dense or hybrid retrieval methods. The key innovation lies in allowing parallel and sequential reasoning paths to be explored simultaneously and intelligently combined, while reinforcement learning optimizes both query complexity classification and rewriting policies. By learning to adaptively allocate retrieval resources, resolving simple queries directly while decomposing complex multi-hop questions into iterative sub-queries, the system achieves improved accuracy across diverse question-answering benchmarks while maintaining computational efficiency through intelligent test-time scaling.
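A heavily simplified sketch of such a controller (the classifier, retriever and rewriter below are hypothetical placeholders, not MaSC components) shows the routing idea: cheap handling for simple queries, decomposition and more retrieval effort for complex ones.

```python
def classify_complexity(query: str) -> str:
    # Toy stand-in for the learned classifier: surface cues for multi-hop questions.
    cues = ("who directed", " of the ", " and ", "compared to")
    return "complex" if any(c in query.lower() for c in cues) else "simple"

def retrieve(query: str, method: str) -> str:
    # Placeholder retriever; a real system would call sparse, dense or hybrid search.
    return f"<{method} passages for: {query}>"

def decompose(query: str) -> list[str]:
    # Placeholder query rewriter: split a multi-hop question into sub-queries.
    return [part.strip() for part in query.replace("?", "").split(" and ")]

def answer(query: str) -> dict:
    """Controller loop: allocate more retrieval effort to harder queries."""
    complexity = classify_complexity(query)
    if complexity == "simple":
        return {"complexity": complexity, "steps": [retrieve(query, "dense")]}
    steps = [retrieve(sub, "hybrid") for sub in decompose(query)]
    return {"complexity": complexity, "steps": steps}

print(answer("When was USC founded?"))
print(answer("Which school is older, and which has more students?"))
```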
Beyond improving benchmark performance, MaSC represents a broader research direction: building agentic AI systems that adapt their reasoning strategies, retrieval depth and collaboration patterns based on task requirements. This collaboration explores how such systems can support real-world applications where dynamic reasoning, explainability and efficient use of computation are essential. This work aims to advance the frontier of adaptive, controllable and resource-aware AI systems.
Generative AI for Financial Decision-Making in Noisy, Dynamic and Agentic Settings
Stephen Tu is an Assistant Professor in the Department of Electrical and Computer Engineering at USC. He also holds a joint (courtesy) appointment in the Department of Computer Science at USC. He works on problems in learning and control for dynamical systems, generative modeling and robotics.
Mahdi Soltanolkotabi is the director of the center on AI Foundations for the Sciences (AIF4S) at USC. He is also a professor in the Departments of Electrical and Computer Engineering, Computer Science and Industrial and Systems Engineering. Prior to joining USC, he completed his PhD in electrical engineering at Stanford in 2014. He was a postdoctoral researcher in the EECS department at UC Berkeley during the 2014-2015 academic year. His research focuses on developing the mathematical foundations of modern data science via characterizing the behavior and pitfalls of contemporary nonconvex learning and optimization algorithms with applications in AI, deep learning, large scale distributed training, federated learning, computational imaging and AI for scientific and medical applications.
About this project
This project aims to develop a new generative AI framework, “FinAgent,” designed to enhance financial decision-making in complex, fast-changing and uncertain environments. The research focuses on creating AI systems that can better handle noisy and dynamic data, adapt to market shifts and remain stable and trustworthy when making long-term decisions. By combining advanced data-cleaning methods, time-aware learning models and safeguards that keep AI behavior reliable over time, FinAgent seeks to improve applications such as fraud detection and portfolio management. Tools and datasets developed as part of this project will be open-sourced, fostering collaboration and innovation across academia and industry.
Human-in-the-Loop Multi-Agentic AI Systems for Task-Oriented Dialogue
Jesse Thomason is an Assistant Professor at USC where he leads the Grounding Language in Actions, Multimodal Observations and Robots (GLAMOR) Lab. His research enables agents and robots to better understand and respond to human language by considering the grounded context in which language occurs on three threads: 1) We jointly learn models with language, world perception and physical action to enable end-to-end agent behavior and improve continual learning; 2) We investigate ways to take advantage of the extra-textual visual world and embodied context in which language is uttered to improve reasoning in language-and-vision and language-guided robotics tasks; and 3) We work to improve speech and sign recognition by leveraging contextual and structural information, as well as to apply language technologies to accessibility and health applications.
About this project
We propose to empower multi-agentic AI systems with transparency and explainability by surfacing their reasoning processes in natural language. This includes: frictive dialogue with human users to clarify information, agent-agent introspection to overcome single-agent uncertainties and human-initiated interruption for human-in-the-loop control. Current agentic research largely considers individual agents acting autonomously, without human intervention, or, at most, in response to single instructions. Even in cases where users attempt to correct or guide model behavior, LLM-powered agents are frequently instruction-tuned to produce sycophantic, rather than pragmatic, responses. By introducing human-like frictive dialogue, as well as agent-agent introspection that builds on our work in model uncertainty estimation and multi-agent reasoning, our proposed human-in-the-loop, multi-agentic workflow could improve both quantitative task success and qualitative user experience. This workflow could be utilized in critical decision-making arenas like finance, where human intervention must be possible, easy and appropriately enabled.
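The control-flow idea behind frictive dialogue can be sketched in a few lines (a toy, invented interface, not the proposed system): a low-confidence action triggers a clarifying question to the human instead of autonomous execution.

```python
def act_with_friction(proposal, confidence, ask_user, threshold=0.8):
    """Insert friction: low-confidence proposals trigger a clarifying
    question to the human instead of running autonomously."""
    if confidence >= threshold:
        return ("execute", proposal)
    reply = ask_user(f"I'm unsure about '{proposal}'. Proceed?")
    return ("execute", proposal) if reply == "yes" else ("abort", proposal)

# Scripted 'human' for the example; a real system would surface a dialogue turn.
print(act_with_friction("rebalance portfolio", 0.95, lambda q: "yes"))
print(act_with_friction("liquidate position", 0.40, lambda q: "no"))
```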
Benchmarks for Symbolic and Logical Reasoning Abilities of LLMs
Jyotirmoy V. Deshmukh is an Associate Professor in the Thomas Lord Department of Computer Science and the Department of Electrical and Computer Engineering in the Viterbi School of Engineering at USC. Before joining USC, Jyo worked as a Principal Research Engineer in Toyota Motors North America R&D. He received his PhD from the University of Texas at Austin and was a post-doctoral fellow at the University of Pennsylvania. Jyo's research interests are in the broad area of analysis, design, synthesis, security and trustworthiness of cyber-physical and autonomous systems, including those that use machine learning and AI-based components.
About this project
In recent times, there have been several experience reports touting the ability of LLMs to reason about complex mathematical and logical tasks. In this work, we will examine the problem of creating systematic benchmarks for evaluating LLMs' logical reasoning abilities on the structured task of automatically obtaining proofs in a formal logical system. Based on prior work on evaluating the performance of LLMs on proofs for propositional logic using the single inference rule of modus ponens, we consider the problem of obtaining proofs for statements restricted to certain efficiently decidable theories. We will make use of Satisfiability Modulo Theory (SMT) solvers to test the validity of the reasoning steps of the LLM and thus establish the correctness of the proof as generated by the LLM. Using a procedure to systematically enumerate (invalid) formulas in various decidable theories, we will create a taxonomy of logical reasoning tasks to evaluate the performance of various popular LLMs.
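To illustrate the kind of step-by-step checking involved, here is a toy propositional checker for modus ponens proofs (a stand-in for the SMT-backed validation; the encoding is ours, not the project's): each proof line must be a premise or follow from two earlier lines.

```python
def check_proof(premises, proof, goal):
    """Verify that every proof line is either a premise or follows by
    modus ponens from two earlier lines; a trusted checker plays the
    role an SMT solver would for richer decidable theories."""
    derived = []
    for line in proof:
        ok = line in premises or any(
            earlier == ("->", other, line)
            for earlier in derived for other in derived
        )
        if not ok:
            return False  # unjustified step: reject the whole proof
        derived.append(line)
    return goal in derived

# p, p -> q, q -> r  entail  r
premises = ["p", ("->", "p", "q"), ("->", "q", "r")]
valid = ["p", ("->", "p", "q"), "q", ("->", "q", "r"), "r"]
invalid = ["p", "r"]  # 'r' asserted without justification
print(check_proof(premises, valid, "r"), check_proof(premises, invalid, "r"))
```

Running an LLM-generated proof through such a checker turns proof grading into a mechanical pass/fail judgment, which is what makes the benchmark systematic.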
Causality-Aware Imitation Learning with Applications to Financial Forecasting and Trading
Erdem Bıyık is an assistant professor in the Thomas Lord Department of Computer Science at USC, and, by courtesy, in the Ming Hsieh Department of Electrical and Computer Engineering. He leads the Learning and Interactive Robot Autonomy Lab (Lira Lab). Prior to joining USC, he was a postdoctoral researcher at UC Berkeley's Center for Human-Compatible Artificial Intelligence. He received his PhD and M.Sc. degrees in Electrical Engineering from Stanford University, working at the Stanford Artificial Intelligence Lab (SAIL), and his BSc degree in Electrical and Electronics Engineering from Bilkent University in Ankara, Türkiye. During his studies, he worked at the research departments of Google and Aselsan. Erdem was an HRI 2022 Pioneer and received an honorable mention award for his work at HRI 2020. His TMLR 2023 paper was an outstanding paper finalist, and his RLC 2025 paper received the outstanding paper award on empirical reinforcement learning research. His works were published at premier robotics and artificial intelligence journals and conferences, such as IJRR, CoRL, RSS and NeurIPS.
About this project
We develop robust imitation learning (IL) algorithms that address causal confusion, the tendency of AI agents to misinterpret spurious correlations as causal relationships, within financial domains. Building on our prior work in robotics, our project adapts these ideas to finance by designing data collection pipelines that capture which information expert traders actually use when making decisions. We follow three main thrusts: 1) labeling expert-accessed observations to prevent the model from learning from irrelevant data, 2) developing interfaces to track experts' focus through browsing history, post-decision feedback and counterfactual reasoning, and 3) implementing an attention-regularized IL algorithm that aligns model saliency with expert-identified causal factors. The expected outcomes include new datasets of annotated financial demonstrations, open-source tools and algorithms and publications advancing causality-aware learning for reliable decision-making.
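The third thrust can be sketched as a loss term (a toy formulation with invented numbers, not the project's algorithm): the usual behavior-cloning loss is augmented with a penalty on the gap between the model's attention and a mask over the features the expert actually consulted.

```python
def attention_penalty(model_attn, expert_mask):
    """L1 gap between the model's attention weights and a normalized
    mask over the features the expert actually consulted."""
    total = sum(expert_mask)
    target = [m / total for m in expert_mask]
    return sum(abs(a - t) for a, t in zip(model_attn, target))

def regularized_loss(bc_loss, model_attn, expert_mask, lam=0.5):
    # Behavior-cloning loss plus a saliency-alignment regularizer.
    return bc_loss + lam * attention_penalty(model_attn, expert_mask)

# Two hypothetical policies with equal imitation loss; the expert only used features 0 and 1.
expert_mask = [1, 1, 0, 0]
aligned = [0.5, 0.4, 0.05, 0.05]   # attends to the expert's causal features
confused = [0.1, 0.1, 0.4, 0.4]    # attends to spurious features
print(regularized_loss(0.2, aligned, expert_mask),
      regularized_loss(0.2, confused, expert_mask))
```

The regularizer breaks the tie in favor of the policy whose saliency matches the expert's, which is the mechanism that counters causal confusion.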
A Benchmark Framework for Accelerating Co-design of Mixture of Experts Algorithms and Systems
Seo Jin Park is an assistant professor of Computer Science at USC. Previously, he worked in the Google Systems Research Group. Seo Jin did his postdoc at MIT CSAIL with Mohammad Alizadeh. He received a PhD in Computer Science from Stanford University in 2019, advised by John Ousterhout. His research interests are broadly in efficient distributed systems.
About this project
This project aims to optimize Mixture of Experts (MoE) models by co-designing their architecture and serving system simultaneously. Currently, model design is often constrained by existing system limitations, preventing researchers from discovering potentially superior, cost-effective architectures. We propose a new framework to systematically explore this joint design space, generating performance predictors and tailored runtimes to identify MoE models that achieve maximum accuracy per dollar.
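As a toy sketch of the joint search (the predictors below are invented placeholders for the framework's learned performance predictors, and all constants are made up), the framework would score each candidate MoE configuration by predicted accuracy per unit serving cost and pick the best:

```python
import math

def predicted_accuracy(num_experts, top_k):
    # Toy scaling-law-style predictor keyed to active expert capacity.
    return 1.0 - 0.30 * (num_experts * top_k) ** -0.15

def predicted_cost(num_experts, top_k):
    # Toy serving-cost model: active experts dominate, plus routing overhead.
    return top_k * 1.0 + 0.05 * num_experts

def best_config(configs):
    """Joint architecture/system search for maximum accuracy per dollar."""
    return max(configs, key=lambda c: predicted_accuracy(*c) / predicted_cost(*c))

# Candidate (num_experts, top_k) design points.
configs = [(8, 2), (16, 2), (32, 4), (64, 2)]
print(best_config(configs))
```

Note that the accuracy-maximizing configuration is not the cost-efficiency-maximizing one, which is exactly the trade-off a co-design framework is meant to surface.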
Conclusion
As we continue to propel research further, pushing the boundaries of discovery and innovation, we extend our deepest gratitude to our academic partners. Their unwavering commitment to fostering new research ideas and actively engaging with our dedicated research scientists is invaluable. These collaborations are the bedrock of groundbreaking advancements, enabling us to tackle complex challenges and unlock new frontiers of knowledge across various disciplines.
For comprehensive information on our diverse and impactful partnerships, including details about current projects, collaborative initiatives and opportunities for future engagement, we invite you to explore our dedicated academia page. This resource provides a deeper insight into how our joint efforts are shaping the future of research and contributing to the global scientific community.


