Yue Zhao
Assistant Professor
Thomas Lord Department of Computer Science
School of Advanced Computing

University of Southern California

Los Angeles, CA, USA
Email:

Collaboration with Me. I am open to external opportunities for invited talks, research collaborations, and employment (only on a part-time/advising/visiting basis). Let's chat by email. I frequently visit major cities, e.g., Seattle, NYC, Chicago, Boston, Atlanta, and the Bay Area, to meet people, give talks, and host social events.

Research Interests: My research aims to build trustworthy, robust, and scalable AI that advances science and benefits society. I focus on rigorous algorithmic foundations, open-source system development, and high-impact applications in both human-centric and scientific domains.

  1. Robust & Trustworthy AI: Detecting the Unexpected.
    I design core algorithms to detect anomalies, out-of-distribution (OOD) data, and outliers across diverse modalities (including graph-structured data). These methods reinforce AI systems against rare or unseen scenarios, enhancing reliability, security, and interpretability.
    Keywords: Anomaly Detection, OOD Detection, Trustworthy AI, Graph Anomaly Detection
  2. AI for Science & Society: Foundation Models in Action.
    By pairing robust detection with large language models (LLMs) and generative AI (GenAI), I tackle interdisciplinary challenges—from scientific discovery to political forecasting and computational social science. This approach bridges algorithmic research with real-world decision-making and public policy.
    Keywords: AI for Science, Generative AI, LLMs, Political Forecasting, Computational Social Science
  3. Scalable, Automated & Open-source ML Systems.
    To ensure widespread adoption, I build reproducible and efficient tools—most notably PyOD (27M+ downloads) for anomaly detection, along with PyGOD, ADBench, and other libraries with 20K+ GitHub stars (top 800 worldwide). My work emphasizes automated model selection, distributed inference, and user-friendly designs, democratizing advanced ML across academia and industry.
    Keywords: ML Systems, Automated ML, Open-source AI, Distributed Computing

Biography.

Lab Openings. We are warmly welcoming new members to the FORTIS Lab!

Ph.D. Students (no more than 1 Ph.D. student for Fall 2026):
  • Due to the large number of interested candidates, prospective Ph.D. students should ideally have multiple published, relevant papers (not necessarily with me) in top-tier ML, Systems, CV, or NLP conferences/journals.
Research Interns (Any Time, All Year Round):
  • We welcome both undergraduate and graduate interns from USC and other institutions.
  • Preferred candidates are located in North America for time zone compatibility.
  • I do not take summer interns -- I also enjoy my summers for fun :)
Application Process: To apply for either opportunity, complete the Application Form, email me after submitting the form, and review the FORTIS Lab website for more information before reaching out.

✈ News and Travel

[Apr 2025] Our paper on label‑efficient graph open‑set learning (LEGO‑Learn) has been accepted to TMLR! Read the final version on OpenReview.

[Apr 2025] We have a new paper on mitigating hallucination in LLMs via logical reasoning and retrieval-based verification; see our Preprint!

[Apr 2025] We have a new paper on adversarial prompt optimization to manipulate LLM ranking systems (StealthRank); see our Preprint!

[Apr 2025] We have a new paper on jailbreak detection for MLLMs—JailDAM proposes adaptive memory updates for generalizing to unseen jailbreaks. See our Preprint!

[Apr 2025] DPU: Dynamic Prototype Updating for Multimodal Out-of-Distribution Detection is accepted to CVPR 2025 as a highlight paper; see our Preprint!

[Apr 2025] We have a new paper on few-shot graph out-of-distribution detection using LLMs (LLM-GOOD); see our Preprint!

[Mar 2025] We have a new paper on hierarchical cross-modal alignment for decoupled multimodal representation learning (DecAlign); see our Preprint!

[Mar 2025] We have a new paper exploring a causal approach to mitigating hallucinations in Vision-Language Models (VLMs); see our Preprint!

[Mar 2025] We have a new paper on secure and efficient on-device OOD detection without backpropagation (SecDOOD); see our Preprint!

[Mar 2025] I joined the newly established ACM Transactions on AI for Science (TAIS) as an Associate Editor!

[Mar 2025] We have a new paper, TRUSTEVAL: A Dynamic Evaluation Toolkit on Trustworthiness of Generative Foundation Models, accepted to NAACL 2025 Demo Track; see our Preprint soon!

[Feb 2025] We have a new paper, Edit Away and My Face Will Not Stay: Personal Biometric Defense against Malicious Generative Editing, accepted to CVPR 2025; see our Preprint!

[Feb 2025] We have a new paper on multimodal LLMs for time series anomaly detection (Can Multimodal LLMs Perform Time Series Anomaly Detection?); see our Preprint!

[Feb 2025] We have a new paper on model extraction attacks and defenses in distributed computing environments; see our Preprint!

[Feb 2025] We have a new paper on the trustworthiness of generative foundation models ("On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective"); see our Preprint and Project Website!

[Feb 2025] We have a new paper, “ClimateLLM,” proposing a frequency-aware foundation model for efficient and accurate global weather forecasting. See our Preprint!

[Feb 2025] We have a new survey paper on LLM-based Active Learning, covering selection, generation, and its impact on modern AI pipelines. See our Preprint!

🏅 Awards and Grants

As Principal Investigator (August 2023 onwards)
Prior to Principal Investigator Role (Before August 2023)