Capital One and UIUC 2025-2026 AI awardees announced
Faculty awards and PhD fellowships at the Capital One Center for Generative AI Safety, Knowledge Systems and Cybersecurity (ASKS).
The Capital One Center for Generative AI Safety, Knowledge Systems and Cybersecurity (ASKS) at the University of Illinois Urbana-Champaign is off to a strong start, marked by the awarding of faculty research awards and PhD fellowships for the 2025-2026 academic year.
Earlier this month, we hosted a virtual kick-off event where ASKS leadership, including Center Directors Heng Ji and Gang Wang, Siebel School Director Nancy Amato and Capital One Chief Scientist and Head of Enterprise AI Prem Natarajan, joined awarded faculty, PhD fellows and Capital One collaborators to discuss their research plans and anticipated impact.
We're thrilled about the future of these collaborations and pleased to share more about the ASKS awarded research projects for the 2025-2026 academic year:
Academic year 2025-2026 awarded research projects
LM-Dynamic: Teach language models to be super-speed readers
Heng Ji, Professor of Computer Science
The research team is exploring LM-Dynamic, a zero-shot enhancement for off-the-shelf large language models (LLMs) that prioritizes the text most relevant to the current generation step and down-weights less salient passages, enabling models to understand context of effectively unbounded length.
Toward safe and trustworthy LLM agents: Addressing evaluation, privacy, multi-agent truthfulness
Yuxiong Wang, Assistant Professor, Computer Science
This research aims to build safe and trustworthy agentic language models by addressing key challenges in reward modeling, privacy preservation and multi-agent coordination. We propose outcomes-based evaluation methods for unverifiable tasks, system-level safeguards for privacy and training protocols that promote truthful, cooperative communication in multi-agent systems. Together, these efforts will lay the foundation for secure applications of LLM-based agents.
SentinelAgent: Knowledge-enabled secure agents for trustworthy agentic systems
Bo Li, Abbasi Associate Professor, Computer Science
This research introduces SentinelAgent, a knowledge-enabled security agent designed to proactively identify, assess and mitigate emerging risks in agentic AI systems. It offers automated attack generation, dynamic risk detection and policy-aligned guardrails to ensure the secure and trustworthy deployment of AI agents across real-world applications.
Toward more pragmatic LLM jailbreaking
Varun Chandrasekaran, Assistant Professor, Electrical and Computer Engineering
This research proposes a more pragmatic approach: generating low-perplexity jailbreak prompts that are linguistically natural yet capable of eliciting harmful outputs. These prompts more closely resemble real user input, bypass simple defenses and better expose alignment failures grounded in model behavior rather than token-level quirks. They also offer a stronger basis for adversarial training and highlight the risk of low-effort, high-impact jailbreaks.
CVE-Bench++: A difficult benchmark for AI agents and cybersecurity
Daniel Kang, Assistant Professor, Computer Science
With this research, the team will build CVE-Bench++, a new benchmark designed to evaluate whether LLM agents can autonomously exploit real-world software vulnerabilities. Building on prior work (CVE-Bench), it reproduces recent, high-complexity Common Vulnerabilities and Exposures (CVEs). By August 2026, it aims to become the first benchmark for measuring LLM agents' capabilities on challenging, real-world cyber-offense tasks.
LLM reasoning for code safety: Evaluating and enhancing reasoning faithfulness
Gang Wang, Assistant Professor, Computer Science
Huan Zhang, Assistant Professor, Electrical and Computer Engineering
This research will focus on evaluating, understanding and improving the reasoning faithfulness of code LLMs and multi-agent systems. Focusing first on code LLMs, the team will develop fine-grained evaluation metrics and benchmarks for reasoning faithfulness, investigate the root causes of unfaithful reasoning, and improve faithfulness to enable auditable code generation that reduces security vulnerabilities. Finally, the research aims to extend this faithfulness evaluation to multi-agent systems, where coding and reasoning play a critical role, to broaden its impact.
Academic year 2025-2026 awarded PhD fellows
Cheng Qian
Cheng is a second-year PhD student at UIUC, advised by Professor Heng Ji. He received his B.S. from the Department of Computer Science at Tsinghua University, where he was advised by Professor Zhiyuan Liu. His current research focuses on tool-augmented reasoning with LLMs and AI agents. He has published papers at top-tier academic conferences such as ACL, EMNLP, COLM, COLING, NAACL and ICLR, including more than fifteen first- or co-first-authored papers. His Google Scholar citations exceed 1,100. He serves as an Area Chair for ACL and EMNLP, and as a reviewer for AAAI, EMNLP, NeurIPS and COLM.
Hyeonjeong Ha
Hyeonjeong is a second-year PhD student at UIUC, advised by Professor Heng Ji. Her research enhances the visual perception of multimodal LLMs through fine-grained, structured understanding, aiming to bridge human-like perception and trustworthy reasoning for more reliable real-world applications. She has published as first author in top-tier conferences including ACL and NeurIPS, and actively contributes to the research community as a reviewer for leading venues.
Chenyuan Yang
Chenyuan is a fourth-year PhD student at UIUC, advised by Professor Lingming Zhang. His research bridges software systems and machine learning to enhance the reliability and quality of large-scale systems through testing, reasoning and verification techniques powered by ML and LLMs. His work has been published in top-tier conferences spanning systems (SOSP, ASPLOS), software engineering (ICSE, FSE), programming languages (OOPSLA) and machine learning (ICLR), leading to the detection of over 650 critical bugs in ML systems, C/C++ compilers and operating systems, including 40 CVEs. He earned his B.S. with honors from Nanjing University, advised by Professor Yanyan Jiang.