Research
My research interests are in steering Foundation Models (including Large Language Models and Vision-Language Models) effectively and efficiently. I am currently working on personalizing LLMs for individual users and on automatically searching for a better prompt starting from an initial one. Below are my selected publications.
ODIN: Disentangled Reward Mitigates Hacking in RLHF
Lichang Chen*, Chen Zhu*, Davit Soselia, Tianyi Zhou, Tom Goldstein, Heng Huang, Mohammad Shoeybi, Bryan Catanzaro
Preprint, 2024
AlpaGasus: Training a Better Alpaca with Fewer Data
Lichang Chen*, Shiyang Li*, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin
ICLR, 2024
Unbiased watermark for large language models
Zhengmian Hu, Lichang Chen, Xidong Wu, Yihan Wu, Hongyang Zhang, Heng Huang
ICLR, 2024 (Spotlight)
Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection
Jun Yan, Vikas Yadav, Shiyang Li, Lichang Chen, Zheng Tang, Hai Wang, Vijay Srinivasan, Xiang Ren, Hongxia Jin
NAACL, 2024
HallusionBench: An Image-Context Reasoning Benchmark Challenging for Multi-Modality Models
Fuxiao Liu, Tianrui Guan, Zongxia Li, Lichang Chen, et al.
CVPR, 2024
InstructZero: Efficient Instruction Optimization for Black-Box Large Language Models
Lichang Chen*, Jiuhai Chen*, Tom Goldstein, Heng Huang, Tianyi Zhou
Preprint, 2023
How Many Demonstrations Do You Need for In-context Learning?
Jiuhai Chen, Lichang Chen, Chen Zhu, Tianyi Zhou
EMNLP, 2023
PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer
Lichang Chen, Jiuhai Chen, Heng Huang, Minhao Cheng
EMNLP, 2023