LLM agents, cyber reasoning systems, fuzzing, exploit generation, program analysis, red teaming, and scalable defenses.
Publications
OSS-CRS: Liberating AIxCC Cyber Reasoning Systems for Real-World Open-Source Security
SoK: DARPA's AI Cyber Challenge (AIxCC): Competition Design, Architectures, and Lessons Learned
ATLANTIS: AI-driven Threat Localization, Analysis, and Triage Intelligence System
Revisiting Binary Code Similarity Analysis using Interpretable Feature Engineering and Lessons Learned
Improving Large-Scale Vulnerability Analysis of IoT Devices with Heuristics and Binary Code Similarity
Talks
Inside the AI Cyber Challenge
AI Security 101: Red Team's Perspective on Navigating New Threats in AI Frontier
Red Team's Perspective on Navigating New Threats in AI Frontier
AI Security 101: Red Team's Perspective on Navigating New Threats in AI Frontier
Security for Generative AI, Generative AI for Security
AI Security Primer: Red Team Perspectives on Navigating New Threats and Safeguarding AI Frontier
AI Security Primer: Red Team Perspectives on Navigating New Threats and Safeguarding AI Frontier
Writing
The Accelerating Gap Between Attack and Defense in the Age of AI
The time from vulnerability disclosure to exploitation has shrunk from 2.3 years to 1.6 days, and AI is widening the gap between attack and defense.
Team Atlanta Wins DARPA AI Cyber Challenge (AIxCC)
Team Atlanta takes 1st place in DARPA's AI Cyber Challenge (AIxCC), demonstrating AI-driven vulnerability detection and patching at scale.
Thoughts on GenAI (4/4): AI Speculation
Why Jensen Huang emphasizes biology, how training data quantity affects AI behavior, jailbreaking LLMs with low-resource languages, and why understanding AI needs more than just coding.
Thoughts on GenAI (3/4): Safe, Secure AI
AI Red Teams at Microsoft, Google, and NVIDIA. Why AI security must consider safety too, lessons from DARPA AIxCC, and experiences reviewing AI services at Samsung.
Thoughts on GenAI (2/4): Fake Information, Scams, and Phishing
How generative AI is making fake information, scams, and phishing more convincing and harder to detect.
Thoughts on GenAI (1/4): AI Development and Proof of Digital Identity
If AI gains human-like senses and learns from the same data as you, how do we prove who's real? Thoughts on AI creativity, sensory development, and digital identity.