
Evaluating and Enhancing the Robustness of Dialogue Systems: A Case Study on a Negotiation Agent

Minhao Cheng
Wei Wei
Cho-Jui Hsieh
North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT) (2019)

Abstract

Recent research has demonstrated that dialogue agents trained on large datasets can achieve striking performance when interacting with normal users. In real-world applications, however, it is important to ensure that the agent performs smoothly when interacting not only with normal users but also with malicious users who try to attack the system. In this paper, we develop algorithms to evaluate the robustness of a dialogue agent by carefully designing adversarial agents to attack it, in both black-box and white-box settings. Furthermore, we demonstrate that adversarial training using our attacks can significantly improve the robustness of a dialogue system. In a case study of the negotiation agent developed by Lewis et al. (2017), our attacks reduce the average advantage of the RL-based agent from 2.68 to -5.76 on random problems with a total value of 10, while raising the adversarial agent's average reward from 4.55 to 8.71. Moreover, with the adversarial training process, we are able to improve the robustness of this negotiation agent under strong attacks.
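
The abstract compresses a three-step pipeline: train a dialogue agent on normal interactions, attack it with an adversarial agent, then harden it with adversarial training on the discovered attacks. The toy Python sketch below illustrates that loop under heavy simplifying assumptions: the negotiation is reduced to a one-shot split of 10 points of value, the agent is a tabular bandit rather than the sequence-to-sequence RL model of Lewis et al. (2017), and the black-box attack is an exhaustive search rather than a learned adversarial agent. All identifiers here (TabularAgent, black_box_attack, normal_user, the dialogue acts, the outside option) are hypothetical illustrations, not the paper's code.

    import random

    random.seed(0)

    TOTAL = 10     # total value on the table, echoing the abstract's setup
    OUTSIDE = 2    # hypothetical outside option each side gets when no deal is reached
    ACTS = ["polite", "neutral", "aggressive"]  # toy opponent dialogue acts

    def reward(offer, demand):
        # The agent keeps its claim if the two claims are compatible; otherwise
        # the deal fails and it falls back to the outside option.
        return offer if offer + demand <= TOTAL else OUTSIDE

    class TabularAgent:
        # Stand-in for the RL dialogue agent: one bandit per opponent dialogue act.
        def __init__(self):
            self.q = {a: [0.0] * (TOTAL + 1) for a in ACTS}  # value estimates
            self.n = {a: [0] * (TOTAL + 1) for a in ACTS}    # visit counts

        def act(self, opponent_act, eps=0.1):
            if random.random() < eps:
                return random.randrange(TOTAL + 1)
            q = self.q[opponent_act]
            return max(range(TOTAL + 1), key=lambda a: q[a])

        def update(self, opponent_act, offer, r):
            self.n[opponent_act][offer] += 1
            step = 1.0 / self.n[opponent_act][offer]
            self.q[opponent_act][offer] += step * (r - self.q[opponent_act][offer])

    def normal_user():
        # Regular users are cooperative and moderately demanding.
        act = random.choice(["polite", "neutral"])
        return act, {"polite": 4, "neutral": 5}[act]

    def train(agent, opponent, episodes=20000):
        for _ in range(episodes):
            act, demand = opponent()
            offer = agent.act(act)
            agent.update(act, offer, reward(offer, demand))

    def evaluate(agent, opponent, episodes=2000):
        return sum(reward(agent.act(a, eps=0.0), d)
                   for a, d in (opponent() for _ in range(episodes))) / episodes

    def black_box_attack(victim):
        # Search the adversary's behavior space for whatever minimizes the
        # victim's greedy reward -- a crude stand-in for the paper's learned
        # adversarial agent.
        behaviors = [(a, d) for a in ACTS for d in range(TOTAL + 1)]
        return min(behaviors, key=lambda b: reward(victim.act(b[0], eps=0.0), b[1]))

    victim = TabularAgent()
    train(victim, normal_user)               # 1) ordinary training
    attack = black_box_attack(victim)        # 2) black-box attack
    print("vs. normal users:", evaluate(victim, normal_user))
    print("under attack    :", evaluate(victim, lambda: attack))

    # 3) Adversarial training: alternately re-attack and fine-tune on a mix of
    #    normal and adversarial dialogues.
    for _ in range(5):
        attack = black_box_attack(victim)
        train(victim, lambda: attack if random.random() < 0.5 else normal_user(),
              episodes=4000)

    attack = black_box_attack(victim)        # strongest remaining attack
    print("after adversarial training, under attack:",
          evaluate(victim, lambda: attack))

In this sketch the attack lures the untrained part of the policy into a worthless deal (reward 0, below the outside option of 2), and adversarial training restores the rational floor. Re-running the attack inside the defense loop matters: a model fine-tuned against a single frozen attack tends to be broken by the next attack, which is why the sketch alternates attacking and fine-tuning.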