Student Committee Request (Update as of 8/29/2025):
I have already served on more than 10 student committees this semester, and combined with upcoming travel commitments, my schedule is fully occupied.
Therefore, I am unable to take on any new student committees, whether internal to USC or external.
Lab Openings. We are warmly welcoming new members to the FORTIS Lab!
Ph.D. Students (one Ph.D. student for Fall 2026, with prerequisites):
- Prospective Ph.D. students should be comparable to our current 1st-year Ph.D. students -- see the FORTIS Lab.
- We prioritize the current members of the lab.
Research Collaborators/Interns (Any Time, All Year Round):
- We welcome both undergraduate and graduate interns from USC and other institutions.
- We will provide GPUs/API keys for the project.
- Preferred candidates are located in North America for time zone compatibility.
- I do not hire in-person summer interns -- I am enjoying summer and working remotely :)
Application Process: To apply for either opportunity, complete the
Application Form, email
fortis@usc.edu after submitting the form, and review the
FORTIS Lab website for more information before reaching out.
Collaboration with Me.
I am open to external opportunities for invited talks, research collaborations, and employment (on a part-time, advising, or visiting basis only).
Feel free to start a conversation by email.
I frequently visit major cities, e.g., Seattle, NYC, Chicago, Boston, Atlanta, and the Bay Area, to meet people, give talks, and host social events.
Research Interests:
My research builds reliable, robust, and scalable AI that advances science and benefits society.
I focus on developing rigorous algorithmic foundations, advancing safety and interpretability in large models,
and creating open systems that connect research with real-world impact.
-
Reliable AI Foundations: Detecting the Unexpected.
I develop fundamental algorithms and benchmarks for detecting rare, unseen, or abnormal patterns across modalities.
This work unifies anomaly detection, out-of-distribution (OOD) detection, and automated model selection to ensure
that AI systems remain reliable and predictable under uncertainty.
Keywords: Anomaly Detection, OOD Detection, Model Selection, Robust Learning
-
Trust & Safety in Large Language Models and Agents.
I study how to make large models and agentic systems safe, interpretable, and aligned under real-world conditions.
My work investigates hallucination mitigation, privacy and security safeguards, jailbreak prevention,
and dynamic evaluation frameworks for trustworthy reasoning and decision-making.
Keywords: LLM Safety, Hallucination Mitigation, Privacy & Security, Trust Evaluation
-
Foundation Models for Science & Society.
I apply foundation models and generative AI to scientific and societal domains,
addressing challenges in climate forecasting, healthcare, and political or social decision-making.
These efforts combine domain knowledge with foundation model reasoning to accelerate discovery and policy insights.
Keywords: AI for Science, Generative AI, Decision Modeling, Computational Social Science
-
Scalable, Automated & Open AI Systems.
I create efficient and reproducible machine learning systems that enable large-scale, open, and automated deployment of AI.
My open-source work emphasizes distributed inference, workflow automation, and user-centric design, promoting transparent
and accessible AI research for academia and industry alike.
Keywords: ML Systems, Automated ML, Open-source AI, Distributed Computing
Biography.
✈ News and Travel
[Oct 2025] 🎉 Congratulations to our Ph.D. students
Yuehan Qin and
Haoyan Xu
for successfully passing their qualifying exams!
Both achieved this within 1.5 years of transferring to our group.
We are so proud of their accomplishments and excited for their continued research journeys and graduation!
[Sep 2025] Congratulations to Shawn Li for being selected as an Amazon ML Fellow (2025-2026)! The fellowship recognizes his strong research achievements as a Ph.D. student and will further accelerate his work in secure and trustworthy machine learning.
[Sep 2025] New collaborative NeurIPS 2025 paper “DyFlow” proposes a dynamic workflow framework for agentic reasoning with LLMs.
[Aug 2025] We have two new papers accepted to EMNLP Findings 2025: one on causal methods for hallucination mitigation (Treble Counterfactual VLMs) and another introducing a benchmark for NLP anomaly detection (NLP-ADBench). See our Treble Preprint and NLP-ADBench Preprint!
[Aug 2025] We have a new paper on improving typhoon track forecasting with LLM-augmented transformers (TyphoFormer) accepted to ACM SIGSPATIAL 2025; see our Preprint!
🏅 Awards and Grants
As Principal Investigator (August 2023 onwards)
- Capital One Research Awards, 2024
- Amazon Research Awards, 2024
- Best Paper, KDD Resource-Efficient Learning Workshop, 2024
- NSF Award 1, NSF Award 2, 2024, NSF Award 3, 2025
- Google Cloud Research Innovators, 2024
- AAAI New Faculty Highlights, 2024
Prior to Principal Investigator Role (Before August 2023)