Student Qual/Thesis Committee Requests (Update as of 1/13/2026):
I am already serving on more than 10 student committees this spring; combined with upcoming travel commitments, my schedule is fully occupied.
Therefore, I am unable to take on any new student committees, whether internal at USC or external.
External Collaboration and Employment.
I am open to external opportunities for invited talks, research collaborations, and employment (on a part-time, advising, or visiting basis only).
Let us connect by email.
I frequently visit major cities, e.g., Seattle, NYC, Chicago, Boston, Atlanta, and the Bay Area, to meet people, give talks, and host social events.
Lab Openings. We warmly welcome new members to the FORTIS Lab!
Hiring Ph.D. Students, with stringent criteria (Fall 2026 recruiting is complete):
- For Fall 2026, due to funding constraints, my lab will rely on fellowship offers rather than RA offers (which would require more funding than we currently have).
Thus, this year we can only recommend candidates for fellowships; the fellowship committee makes the final decision.
- Please also check other labs with openings. Good luck!
Research Collaborators/Interns (Any Time, All Year Round):
- We welcome both undergraduate and graduate interns from USC and other institutions.
- We will provide GPUs/API keys for the project.
- We prefer candidates located in North America for time-zone compatibility.
- I do not host in-person summer interns -- I am enjoying the summer and working remotely :)
Application Process: To apply for either opportunity, complete the Application Form, email fortis@usc.edu after submitting the form, and review the FORTIS Lab website for more information before reaching out.
Note on External Advisory/Consultancy:
Dr. Zhao provides technical consultancy to selected projects on topics such as privacy-preserving AI and secure machine learning systems.
These collaborations are strictly technical in nature, with no involvement in financial operations, external fundraising, or investment-related activities.
Research Interests:
My research centers on building reliable, safe, and scalable AI systems, with a focus on understanding and mitigating
failure modes in modern foundation models and agentic systems.
I organize my work into two tightly connected tiers:
(1) advancing the scientific foundations of safety and robustness in AI, and
(2) translating these foundations into system-level evaluation frameworks and high-impact applications.
Tier 1: Foundations of Reliable & Safe AI
I study why and how modern AI systems fail under distribution shift, uncertainty, and strategic pressure,
and develop methods to make their behavior more predictable and reliable.
This tier integrates two complementary threads:
- LLM & Agent Safety: Analyzing and mitigating failure modes in large language models and agentic systems, including hallucinations, jailbreaks, privacy leakage, model extraction, and multi-agent instability.
- Robustness & Failure Detection: Developing algorithms and benchmarks for identifying abnormal or unreliable behavior, grounded in robustness, out-of-distribution generalization, and anomaly detection.
Keywords:
LLM Safety, Robustness, Agents, Hallucination Mitigation,
Jailbreak Detection, OOD Generalization, Failure Analysis
Tier 2: System-Level Evaluation & Scientific/Societal Impact
I adopt a system-oriented perspective to evaluate, stress-test, and deploy reliable AI in realistic settings,
and apply these methods to domains where failures carry high cost.
This tier emphasizes two directions:
- Evaluation & Benchmarking: Designing scalable evaluation frameworks, benchmarks, and workflows that probe model and agent behavior under realistic and adversarial conditions.
- AI for Science & Society: Applying reliable foundation models to climate and weather forecasting, healthcare and biomedicine, and political or social decision-making.
Keywords:
Evaluation, Benchmarking, System-Level Analysis,
AI for Science, Scientific Foundation Models,
Climate & Weather Modeling, AI for Healthcare
✈ News and Travel
[Dec 2025] Our entire group is at NeurIPS 2025 in San Diego! Please reach out to our Ph.D. students about collaboration opportunities and internships!
[Nov 2025] 🎉Our work on explainability–extractability tradeoffs in MLaaS wins the Second Prize CCC Award at the IEEE ICDM 2025 BlueSky Track!
[Nov 2025] Our paper on mitigating hallucinations in LLMs using causal reasoning has been accepted to AAAI 2026! See our Preprint.
[Nov 2025] 🎉LLM-augmented transformers (TyphoFormer) for typhoon forecasting wins the Best Short Paper Award at ACM SIGSPATIAL 2025; see our Preprint!
[Oct 2025] Two new papers accepted to IJCNLP-AACL 2025 Findings —
AD-AGENT: A Multi-agent Framework for End-to-end Anomaly Detection and
LLM-Empowered Patient-Provider Communication (a data-centric survey on clinical applications of LLMs). Congratulations to all!
[Oct 2025] 🎉Congratulations to our Ph.D. students Yuehan Qin and Haoyan Xu for successfully passing their qualifying exams!
Both achieved this just 1.5 years after transferring to our group.
We are so proud of their accomplishments and excited for their continued research journeys and graduation!
[Sep 2025] 🎉Congratulations to Shawn Li for being selected as an Amazon ML Fellow (2025–2026). The fellowship recognizes his strong research achievements as a Ph.D. student and will further accelerate his work in secure and trustworthy machine learning.
[Sep 2025] New collaborative NeurIPS 2025 paper “DyFlow” proposes a dynamic workflow framework for agentic reasoning with LLMs.
🏅 Awards and Grants
As Principal Investigator (August 2023 onwards)
- Second Prize CCC Award, IEEE ICDM BlueSky Track, 2025
- Best Short Paper, ACM SIGSPATIAL, 2025
- Capital One Research Awards, 2024
- Amazon Research Awards, 2024
- Best Paper, KDD Resource-Efficient Learning Workshop, 2024
- NSF Award 1 and NSF Award 2, 2024; NSF Award 3, 2025
- Google Cloud Research Innovators, 2024
- AAAI New Faculty Highlights, 2024
Prior to Principal Investigator Role (Before August 2023)