Dialogue Act Recognition

Dialogue Acts are also referred to as moves or illocutionary acts. They mark important characteristics of utterances, indicate the role or intention of an utterance in a specific dialogue, and make the relationships between utterances more explicit. The research area of Dialogue Act Recognition splits into two parts: segmentation and labeling. Where others explored methods for joint recognition, we focused our work on a sequential approach in which the transcript stream is first segmented and then, in a second step, labeled; a small sketch of this pipeline is given below.
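
A minimal sketch of such a sequential pipeline (not the actual implementation): the transcript stream is first cut into segments, and each resulting segment is then assigned a dialogue act label. The segmenter and labeler are passed in as placeholder functions; toy versions of both stages are sketched further below.

```python
def recognize_dialogue_acts(transcript, segmenter, labeler):
    """Two-stage pipeline: segmentation followed by labeling.

    transcript -- the incoming word stream
    segmenter  -- callable that splits the stream into segments (step 1)
    labeler    -- callable that assigns a dialogue act to a segment (step 2)
    """
    segments = segmenter(transcript)                     # step 1: segment
    return [(seg, labeler(seg)) for seg in segments]     # step 2: label
```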

For the segmentation, we found that, when using supervised machine learning, temporal information (the pauses between words) plays the most significant role. Furthermore, lexical information, such as the word itself, and dynamic information about the surrounding segments help to improve the decision of whether the word in question belongs to the current segment or starts a new one.
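
The following is an illustrative sketch of this idea, not our actual system: a binary classifier decides, for each word, whether it starts a new segment or continues the current one, based on the pause before the word (temporal), a crude lexical cue, and the length of the segment built so far (dynamic context). The features, cue words, and toy training examples are placeholders chosen only to make the example self-contained.

```python
from sklearn.linear_model import LogisticRegression

def boundary_features(word, pause_before, current_segment_len):
    """Hypothetical feature vector: temporal, lexical, and dynamic cues."""
    return [
        pause_before,                                              # silence before the word (seconds)
        1.0 if word.lower() in {"okay", "so", "well", "right"} else 0.0,  # lexical cue word
        float(current_segment_len),                                # words in the current segment
    ]

# Toy training data: (word, pause before it, current segment length, boundary?)
train = [
    ("okay", 0.9, 6, 1), ("so", 0.7, 4, 1), ("the", 0.05, 2, 0),
    ("meeting", 0.02, 3, 0), ("right", 1.1, 8, 1), ("and", 0.1, 5, 0),
]
X = [boundary_features(w, p, n) for w, p, n, _ in train]
y = [label for _, _, _, label in train]
clf = LogisticRegression().fit(X, y)

def segment_stream(words_with_pauses):
    """Greedy left-to-right segmentation of a (word, pause) stream."""
    segments, current = [], []
    for word, pause in words_with_pauses:
        # Start a new segment when the classifier predicts a boundary.
        if current and clf.predict([boundary_features(word, pause, len(current))])[0] == 1:
            segments.append(current)
            current = []
        current.append(word)
    if current:
        segments.append(current)
    return segments

print(segment_stream([("okay", 1.2), ("let's", 0.1), ("start", 0.05),
                      ("so", 0.8), ("first", 0.1), ("item", 0.05)]))
```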

In AMI, a set of 15 Dialogue Act labels has been defined for use in the context of multiparty conversations. Based on the annotations with this set, we developed a system for the automatic labeling of segments using machine learning techniques. A particular challenge is introduced by the observation that the label of the current segment also depends on the labels of the previous and upcoming segments. As the latter information may not be available at the time of classification, we constructed our system in an any-time manner: the system assigns a "pre-labeling" to the segment and updates it once new information about the surrounding segments becomes available. This allows the system to achieve both a low response latency and a decent recognition rate.
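
A simplified sketch of the any-time idea follows: each segment receives an immediate "pre-label" from its own content, which is later revised once the labels of the neighbouring segments are known. The label subset and the two heuristic scoring functions are placeholders for illustration only, not the classifiers of the actual system.

```python
# A few labels from the AMI dialogue act scheme, used here for illustration.
LABELS = ["inform", "suggest", "assess", "elicit-inform", "backchannel"]

def pre_label(segment_text):
    """Low-latency first guess from the segment alone (placeholder heuristic)."""
    if segment_text.lower().startswith("i suggest"):
        return "suggest"
    if segment_text.endswith("?"):
        return "elicit-inform"
    if segment_text.lower() in {"yeah", "okay", "mm-hmm"}:
        return "backchannel"
    return "inform"

def relabel(segment_text, prev_label, next_label):
    """Revised decision once context labels are available (placeholder heuristic)."""
    guess = pre_label(segment_text)
    # Context-dependent correction: a short "okay" right after a suggestion
    # is more likely an assessment of that suggestion than a mere backchannel.
    if guess == "backchannel" and prev_label == "suggest":
        return "assess"
    return guess

stream = ["I suggest we use the remote", "okay", "what about the battery?"]
labels = [pre_label(s) for s in stream]              # immediate pre-labels
for i, seg in enumerate(stream):                     # update as neighbours arrive
    prev_l = labels[i - 1] if i > 0 else None
    next_l = labels[i + 1] if i + 1 < len(stream) else None
    labels[i] = relabel(seg, prev_l, next_l)
print(list(zip(stream, labels)))
```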

List of relevant Publications

Contact Persons: