Yu Lu Liu

刘雨璐

Johns Hopkins University

Hi!

My name is Yu Lu, and welcome to my website!

I’m a first-year PhD student at Johns Hopkins University, where I’m working with Prof. Ziang Xiao. Previously, I was a Master’s student at McGill University and Mila, supervised by Prof. Jackie Chi Kit Cheung. I also had the opportunity to work with collaborators (and mentors!) from Microsoft Research Montreal: Dr. Alexandra Olteanu and Dr. Su Lin Blodgett. Before my Master’s studies, I completed an Honours Bachelor’s degree in Statistics and Computer Science at McGill University, where I also served as Co-President (2021-2022) of the McGill Artificial Intelligence Society.

RESEARCH INTERESTS: Broadly speaking, I’m interested in evaluating how AI systems impact people and how people impact AI systems. My focus is mostly on (but not limited to) natural language processing. More specifically, I’m interested in:

  • How do people interact with these systems, and how are people impacted by the use of these systems and by development/research practices in AI/NLP?
  • How do people develop these systems, or conduct research in the field of AI/NLP?

The best way to reach me is via email (yliu624 [at] jh [dot] edu) or Bluesky.

About my name: “Yu Lu” is my given name and “Liu” is my family name. You can pronounce my given name by saying “you lunatic!” and then omitting the “natic” part. That’s about 70% correct, and I’ve gotten quite used to it! That being said, I’ll be very happy if you get to 100% by focusing on the last two characters of this Chinese idiom.

News

I’ll be attending CHI 2025 in Yokohama, Japan (April 26 - May 1), as the lead organizer of the HEAL (Human-centered Evaluation and Auditing of Language models) workshop!

Publications

(2024). ECBD: Evidence-Centered Benchmark Design for NLP. In ACL 2024 Main Proceedings.

(2023). Responsible AI Considerations in Text Summarization Research: A Review of Current Practices. In EMNLP 2023 Findings.

(2022). MaskEval: Weighted MLM-Based Evaluation for Text Summarization and Simplification. arXiv Preprint.