There are some missing abstracts in the dataset. Is this a dataset collection issue or an issue with the released dataset?
Example where @cite_0 is missing the abstract field:
{
'aid': 'cs9903008',
'mid': '2949261815',
'abstract': "Recent technological advances have made it possible to build real-time, interactive spoken dialogue systems for a wide variety of applications. However, when users do not respect the limitations of such systems, performance typically degrades. Although users differ with respect to their knowledge of system limitations, and although different dialogue strategies make system limitations more apparent to users, most current systems do not try to improve performance by adapting dialogue behavior to individual users. This paper presents an empirical evaluation of TOOT, an adaptable spoken dialogue system for retrieving train schedules on the web. We conduct an experiment in which 20 users carry out 4 tasks with both adaptable and non-adaptable versions of TOOT, resulting in a corpus of 80 dialogues. The values for a wide range of evaluation measures are then extracted from this corpus. Our results show that adaptable TOOT generally outperforms non-adaptable TOOT, and that the utility of adaptation depends on TOOT's initial dialogue strategies.",
'related_work': "In the area of spoken dialogue, @cite_2 has proposed a method for adapting initiative in form-filling dialogues. Whenever the system rejects a user's utterance, the system takes more initiative; whenever the user gives an over-informative answer, the system yields some initiative. While this method has the potential of being automated, the method has been neither fully implemented nor empirically evaluated. @cite_3 has evaluated strategies for dynamically deciding whether to confirm each user utterance during a task-oriented dialogue. Simulation results suggest that context-dependent adaptation strategies can improve performance, especially when the system has greater initiative. @cite_1 and @cite_0 have used reinforcement learning to adapt dialogue behavior over time such that system performance improves. We have instead focused on optimizing performance during a single dialogue.",
'ref_abstract': {
'@cite_0': {'mid': '200223693', 'abstract': ''},
'@cite_1': {'mid': '2141839844', 'abstract': "This paper describes a novel method by which a dialogue agent can learn to choose an optimal dialogue strategy. While it is widely agreed that dialogue strategies should be formulated in terms of communicative intentions, there has been little work on automatically optimizing an agent's choices when there are multiple ways to realize a communicative intention. Our method is based on a combination of learning algorithms and empirical evaluation techniques. The learning component of our method is based on algorithms for reinforcement learning, such as dynamic programming and Q-learning. The empirical component uses the PARADISE evaluation framework (, 1997) to identify the important performance factors and to provide the performance function needed by the learning algorithm. We illustrate our method with a dialogue agent named ELVIS (EmaiL Voice Interactive System), that supports access to email over the phone. We show how ELVIS can learn to choose among alternate strategies for agent initiative, for reading messages, and for summarizing email folders."},
'@cite_3': {'mid': '2063157598', 'abstract': 'As with human?human interaction, spoken human?computer dialog will contain situations where there is miscommunication. One natural strategy for reducing the impact of miscommunication is selective verification of the user utterance meanings. This paper reports on both context-independent and context-dependent strategies for utterance verification that show that the use of dialog context can be very helpful in selecting which utterances to verify. Simulations with data collected during experimental trials with the Circuit Fix-It Shop spoken natural language dialog system are used in the analysis. In addition, the performance of various selection strategies is measured separately for computer-controlled and user-controlled dialogs and general guidelines for selecting an appropriate strategy are presented.'},
'@cite_2': {'mid': '1882353391', 'abstract': 'While user modelling has become a mature field with demonstrable research systems of great power, comparatively little progress has been made in the development of user modelling components for commercial software systems. The development of minimalist user modelling components, simplified to provide just enough assistance to a user through a pragmatic adaptive user interface, is seen by many as an important step toward this goal. This paper describes the development, implementation, and empirical evaluation of a minimalist user modelling component for TIMS, a complex commercial software system for financial management. The experimental results demonstrate that a minimalist user modelling component does improve the subjective measure of user satisfaction. Important issues and considerations for the development of user modelling components for commercial software systems are also discussed.'}
}
}
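For reference, here is a minimal sketch of how the released files can be scanned for empty reference abstracts, assuming each split is stored as JSON Lines with one record per line; the file name `train.jsonl` is only a placeholder.

```python
import json

# Hypothetical path; substitute the actual dataset split file.
DATASET_PATH = "train.jsonl"

missing = 0
total = 0
with open(DATASET_PATH, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # 'ref_abstract' maps citation markers (e.g. '@cite_0') to {'mid', 'abstract'}.
        for marker, ref in record.get("ref_abstract", {}).items():
            total += 1
            if not ref.get("abstract", "").strip():
                missing += 1
                print(f"{record.get('aid')}: {marker} has an empty abstract")

print(f"{missing} of {total} reference abstracts are empty")
```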