Spoken dialogue is the most natural means of human communication. It has also become part of the human-machine interface in smartphones, personal assistants, intelligent agents, robot companions, and in-car telematics, among others, where it offers invaluable services. In practice, a dialogue system can be goal-driven, such as an automated teller machine that lets people complete a transaction; it can be a chatting system, such as the ELIZA chatbot, built for entertainment without a specific goal; or it can be something in between that provides humans with information. Most of today's dialogue systems are built on an existing knowledge database and perform pattern classification in one of these three operating modes. They work in two separate phases: learning and run-time execution. At run-time, machines execute what they have learnt during training.
In this project, we develop novel methods for natural language dialogue with humans that allow a robotic system to proactively elicit task details at run-time from a human co-worker, and to coordinate with the co-worker on sub-task allocation during planning. In particular, the system may use dialogue interaction to understand the co-worker's goals, intentions, and other aspects of the collaborative task that cannot be perceived through other means (e.g. vision). The project also studies methods for the system to explain its responses using general and domain knowledge as well as contextual information. Specifically, by combining low-level learning and high-level reasoning, we aim to enable machines to provide context-aware, user-centric explanations in response to human inquiries and to converse with humans more naturally, forming a peer-like relationship. In this way, machines will perform services (e.g. inspection, repair) more accurately, while humans can be more confident in machines and make better-informed decisions.
The project will deliver a dialogue system that initially learns from sample generic conversations and then learns to generate domain-specific conversations from a limited number of in-domain samples. The system also consolidates contextual information, such as the working environment and user preferences, integrates it with commonsense knowledge, and reasons during conversations to produce only relevant, concise responses, together with explanations for certain types of inquiries.
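To make the intended combination of learned responses with contextual and commonsense reasoning concrete, the toy sketch below illustrates one possible shape of such a system. All names (`DialogueContext`, `ContextAwareResponder`) and the trivial keyword-based "generator" are hypothetical placeholders, not the project's actual implementation; a real system would use a trained neural generator and a proper knowledge base.

```python
from dataclasses import dataclass, field

@dataclass
class DialogueContext:
    # Consolidated contextual information (illustrative fields only)
    environment: dict = field(default_factory=dict)
    preferences: dict = field(default_factory=dict)

class ContextAwareResponder:
    """Toy sketch: combine a (placeholder) learned response with symbolic
    context and commonsense knowledge to yield a concise answer, plus an
    explanation for inquiries that request one."""

    def __init__(self, commonsense):
        # e.g. {"drill": "requires safety goggles"}
        self.commonsense = commonsense

    def respond(self, inquiry, context, explain=False):
        # 1) "Low-level" step: stand-in for a learned response generator;
        #    here we just extract the final word of the inquiry as the topic.
        topic = inquiry.strip("?").split()[-1].lower()
        answer = f"Task involving '{topic}' acknowledged."
        # 2) "High-level" reasoning: augment with commonsense knowledge.
        if topic in self.commonsense:
            answer += f" Note: {topic} {self.commonsense[topic]}."
        # 3) Optional user-centric explanation grounded in stored context.
        if explain:
            pref = context.preferences.get(topic)
            reason = (f"based on your preference '{pref}'"
                      if pref else "based on general knowledge")
            answer += f" (Explanation: response generated {reason}.)"
        return answer

ctx = DialogueContext(preferences={"drill": "low-noise mode"})
bot = ContextAwareResponder(commonsense={"drill": "requires safety goggles"})
print(bot.respond("Can you inspect the drill?", ctx, explain=True))
```

The separation between step 1 (learned generation) and steps 2–3 (symbolic reasoning over context and commonsense) mirrors the low-level-learning/high-level-reasoning combination described above.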
Project Duration: 26 November 2018 – 25 November 2023
Funding Source: RIE2020 Advanced Manufacturing and Engineering Programmatic Grant A18A2b0046
Acknowledgement: This research work is supported by Programmatic Grant No. A18A2b0046 from the Singapore Government’s Research, Innovation and Enterprise 2020 plan (Advanced Manufacturing and Engineering domain). Project Title: Human Robot Collaborative AI for AME.