Yujia Huang

yjhuang [at] caltech (dot) edu


I am a machine learning researcher, currently working as a Quantitative Researcher at Citadel Securities. I received my Ph.D. from Caltech, where I was advised by Prof. Yisong Yue.

My research philosophy centers on Inference Dynamics—moving beyond static, one-pass predictions to treat inference as a dynamic, controllable process. During my Ph.D., I laid the theoretical groundwork for this vision by establishing robustness guarantees for dynamic systems (e.g., via Neural ODEs and recurrent feedback).

Currently, I apply this lens to Generative AI to prototype System 2 reasoning. My recent work pioneers training-free guidance methods for diffusion models, framing generation as a test-time optimization problem. By allocating inference-time compute to searching over candidate outputs and verifying them against complex rules, my research aims to build AI systems that are not just powerful but also reliable and steerable.

selected publications

  1. ICML
    Symbolic Music Generation with Non-differentiable Rule Guided Diffusion
    Yujia Huang, Adishree Ghatare, Yuanzhe Liu, Ziniu Hu, Qinsheng Zhang, Chandramouli S Sastry, Siddharth Gururani, Sageev Oore, and Yisong Yue
    In International Conference on Machine Learning, 2024
    Oral Presentation [Top 1.5%]
  2. ICML
    Diffusion Models for Adversarial Purification
    Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, and Anima Anandkumar
    In International Conference on Machine Learning, 2022
  3. NeurIPS
    Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds
    Yujia Huang, Huan Zhang, Yuanyuan Shi, J Zico Kolter, and Anima Anandkumar
    In Neural Information Processing Systems, 2021
  4. NeurIPS
    Neural Networks with Recurrent Generative Feedback
    Yujia Huang, James Gornet, Sihui Dai, Zhiding Yu, Tan Nguyen, Doris Y. Tsao, and Anima Anandkumar
    In Neural Information Processing Systems, 2020