Yonghua Zhang

Postdoc

Yonghua Zhang (张永华) is a Ph.D. student at the School of Computer Science and Technology, Beihang University. His research focuses on AI model compression and compiler optimization, especially for convolutional neural networks (CNNs) and large language models (LLMs) on GPUs. His current research interest is in quantizing LLMs and optimizing compiler backends for performance portability and productivity.

Interests
  • LLM quantization
Education
  • Ph.D. student in Computer Science, 2023–present

    Beihang University