Show simple item record

dc.contributor.author: Panesar, Kulvinder
dc.date.accessioned: 2020-10-07T15:54:12Z
dc.date.accessioned: 2020-10-27T07:20:27Z
dc.date.available: 2020-10-07T15:54:12Z
dc.date.available: 2020-10-27T07:20:27Z
dc.date.issued: 2018-07-06
dc.identifier.citation: Panesar K (2018) ‘How can one evaluate a conversational software agent framework?’ 7th International Conference on Meaning and Knowledge Representation, 4-6 Jul 2018, Institute of Technology Blanchardstown, Dublin, RoI.
dc.identifier.uri: http://hdl.handle.net/10454/18136
dc.description: Yes
dc.description.abstract: This paper presents a critical evaluation framework for a linguistically orientated conversational software agent (CSA) (Panesar, 2017). The CSA prototype investigates the integration, intersection and interface of language, knowledge, speech act constructions (SAC) based on a grammatical object (Nolan, 2014), a sub-model of beliefs, desires and intentions (BDI) (Rao and Georgeff, 1995), and dialogue management (DM) for natural language processing (NLP). A long-standing issue within NLP CSA systems is refining the accuracy of interpretation so as to provide realistic dialogue that supports human-to-computer communication. The prototype comprises three phase models: (1) a linguistic model based on a functional linguistic theory, Role and Reference Grammar (RRG) (Van Valin Jr, 2005); (2) an agent cognitive model with two inner models: (a) a knowledge representation model employing conceptual graphs serialised to the Resource Description Framework (RDF), and (b) a planning model underpinned by BDI concepts (Wooldridge, 2013), intentionality (Searle, 1983) and rational interaction (Cohen and Levesque, 1990); and (3) a dialogue model employing common ground (Stalnaker, 2002). The evaluation approach for this Java-based prototype and its phase models is multi-faceted, drawing on grammatical testing (English-language utterances), software engineering practice and agent practice. A set of evaluation criteria is grouped per phase model, and the testing framework aims to test the interface, intersection and integration of all phase models and their inner models. This multi-faceted approach checks performance both at the internal processing stages of each model and in post-implementation assessments against the goals of RRG, together with RRG-specific tests. The empirical evaluations demonstrate that the CSA is a viable proof of concept and that RRG is fit for purpose for describing and explaining phenomena, language processing and knowledge, and for computational adequacy. Conversely, the evaluations identify the complexity of the lower-level computational mappings from natural language to the agent and to the ontology, with semantic gaps that are further addressed by a lexical bridging consideration (Panesar, 2017).
dc.language.iso: en
dc.rights: (c) 2018 The Author. Full-text reproduced with author permission.
dc.subject: Conversational software agents
dc.subject: Natural language processing
dc.title: ‘How can one evaluate a conversational software agent framework?’
dc.status.refereed: No
dc.type: Conference paper
dc.type.version: Published version
dc.date.updated: 2020-10-07T14:54:14Z
refterms.dateFOA: 2020-10-27T07:21:20Z
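
The abstract above refers to an agent cognitive model built on belief-desire-intention (BDI) concepts (Rao and Georgeff, 1995; Wooldridge, 2013). For orientation only, the following minimal plain-Java sketch shows one way such a deliberation cycle can be organised; the class and method names (AgentCognitiveModel, Belief, Desire, Intention, perceive, deliberate, act) are illustrative assumptions and are not taken from the Panesar (2017) prototype.

// Illustrative sketch only: a minimal BDI-style deliberation step in plain Java.
// All names here are hypothetical; they only illustrate the kind of
// belief-desire-intention cycle the abstract refers to.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

record Belief(String proposition) {}
record Desire(String goal) {}
record Intention(String goal, String plan) {}

class AgentCognitiveModel {
    private final Set<Belief> beliefs = new HashSet<>();
    private final Set<Desire> desires = new HashSet<>();
    private final Deque<Intention> intentions = new ArrayDeque<>();

    void perceive(String utteranceMeaning) {
        // Update beliefs from the semantic representation of the user's utterance.
        beliefs.add(new Belief(utteranceMeaning));
    }

    void adoptDesire(String goal) {
        desires.add(new Desire(goal));
    }

    void deliberate() {
        // Commit to a desire as an intention only if the beliefs do not already satisfy it.
        for (Desire d : desires) {
            boolean alreadySatisfied = beliefs.contains(new Belief(d.goal()));
            if (!alreadySatisfied) {
                intentions.push(new Intention(d.goal(), "plan-for:" + d.goal()));
            }
        }
    }

    String act() {
        // Pop the most recent intention and turn it into a dialogue move (speech act).
        Intention next = intentions.poll();
        return next == null ? "WAIT" : "ASSERT(" + next.goal() + ")";
    }
}

In the prototype described above, the perceived meaning would come from the RRG-based linguistic model and the speech act construction, and the resulting dialogue move would feed the dialogue model; this sketch only illustrates the control flow, under the stated assumptions.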


Item file(s)

Name: EvaluationOfCSAPaperFINAL4-7-2 ...
Size: 957.8 KB
Format: PDF
Description: panesar_2018
