# In-context Learning of Large Language Models for Controlled Dialogue Summarization
>Yuting Tang, Ratish Puduppully, Zhengyuan Liu, and Nancy Chen. 2023. [In-context Learning of Large Language Models for Controlled Dialogue Summarization: A Holistic Benchmark and Empirical Analysis](https://aclanthology.org/2023.newsum-1.6). In _Proceedings of the 4th New Frontiers in Summarization Workshop_, pages 56–67, Singapore. Association for Computational Linguistics.
<b class='rainbow-text'>Significance of study: </b>
+ #Large-Language-Model The study demonstrates that LLMs can generate **better summaries when given control signals** such as entity control.
+ #In-context-Learning The study shows that LLMs can achieve **few-shot dialogue summarization** through in-context learning (see the prompt sketch after this list).
+ #Controlled-Text-Generation The study points out the problem of **poor controllability of LLMs over numeric information** (e.g., a requested summary length) under ICL.
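The few-shot setup can be pictured as a prompt that concatenates a handful of (dialogue, control signal, summary) demonstrations before the query dialogue. Below is a minimal sketch under assumptions: the template wording, the demonstration dialogue, and the `build_prompt` helper are illustrative, not the paper's exact prompt.

```python
# Minimal sketch of few-shot in-context learning for controlled dialogue
# summarization. The prompt template, control-signal wording, and the
# demonstration below are illustrative assumptions, not the paper's setup.

# Hypothetical in-context demonstrations: (dialogue, entities, summary).
DEMONSTRATIONS = [
    {
        "dialogue": "Amanda: I baked cookies. Do you want some?\nJerry: Sure!",
        "entities": ["Amanda", "Jerry"],
        "summary": "Amanda baked cookies and offered some to Jerry.",
    },
]


def build_prompt(dialogue: str, entities: list[str], num_sentences: int) -> str:
    """Assemble a few-shot prompt with entity and length control signals."""
    parts = []
    for demo in DEMONSTRATIONS:
        parts.append(
            f"Dialogue:\n{demo['dialogue']}\n"
            f"Summarize the dialogue, focusing on {', '.join(demo['entities'])}.\n"
            f"Summary: {demo['summary']}\n"
        )
    # Query: same template, with the numeric length constraint appended.
    parts.append(
        f"Dialogue:\n{dialogue}\n"
        f"Summarize the dialogue in {num_sentences} sentences, "
        f"focusing on {', '.join(entities)}.\n"
        f"Summary:"
    )
    return "\n".join(parts)


if __name__ == "__main__":
    print(build_prompt("Tom: Meeting at 3pm?\nSue: Works for me.", ["Tom", "Sue"], 1))
```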
> [!example] Example Summaries
> ![[llm_1.png]]
> [!note] Benchmark under Entity Control
> ![[llm_2.png]]
> [!warning] Poor Controllability over Numeric Information
> ![[llm_3.png|350]]
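
One way to quantify the numeric-control failure shown above is to compare the requested sentence count against the count in the generated summary. This sketch assumes a naive regex sentence splitter and a hypothetical `length_deviation` metric; the paper's actual evaluation protocol may differ.

```python
# Minimal sketch of checking numeric controllability: does a generated summary
# contain the number of sentences requested in the prompt? The sentence
# splitter and the deviation metric are illustrative assumptions.
import re


def count_sentences(text: str) -> int:
    """Naive sentence count: split on ., !, or ? followed by whitespace."""
    return len([s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s])


def length_deviation(requested: int, summary: str) -> int:
    """Absolute gap between requested and generated sentence counts."""
    return abs(requested - count_sentences(summary))


if __name__ == "__main__":
    summary = "Amanda baked cookies. She offered some to Jerry. He accepted."
    print(length_deviation(2, summary))  # -> 1: overshot the request by one
```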