Exploring the Dialogue Comprehension Ability of Large Language Models
Published in arXiv, 2023
Recommended citation: She S, Huang S, Wang X, et al. Exploring the Dialogue Comprehension Ability of Large Language Models. arXiv preprint arXiv:2311.07194, 2023. https://arxiv.org/abs/2311.07194
This study introduces a dual-assessment approach for evaluating large language models (LLMs): dialogue summarization is used to assess factual consistency, and factual questions derived from the summaries gauge dialogue comprehension. The evaluation uncovers a notable error rate, and a multi-task fine-tuning strategy is proposed to improve performance.