Detecting conflicting, non-cooperative Smart Watch Assistants.
This scenario involves smart assistants for personal health. An artificial agent answers users' questions about their daily diet and exercise schedule and recommends behaviors. The aim is to detect and explain deceptive behaviors, such as lies about one's own activity, and to distinguish them from data errors due to external conditions (e.g., limited resources, varying environmental conditions). Argumentation Theory is used to detect errors and conflicts by reasoning over the user's prior knowledge, represented as a knowledge graph.
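Since Argumentation Theory carries the conflict-detection work here, a minimal sketch may help. The example below is hypothetical (argument names and the fixed-point routine are assumptions, not the scenario's actual implementation): it builds a Dung-style abstract argumentation framework in which sensor evidence attacks a user's activity claim and a device fault in turn attacks the sensor evidence, then computes the grounded extension to see which arguments survive.

```python
def grounded_extension(args, attacks):
    """Grounded extension of an abstract argumentation framework
    (args, attacks), computed by iterating the characteristic
    function from the empty set until a fixed point is reached."""
    ext = set()
    while True:
        # an argument is acceptable w.r.t. ext if every one of its
        # attackers is itself attacked by some argument in ext
        new = {a for a in args
               if all(any((d, b) in attacks for d in ext)
                      for b in args if (b, a) in attacks)}
        if new == ext:
            return ext
        ext = new

# hypothetical arguments: the user's claim, conflicting sensor data,
# and an external condition that undermines the sensor reading
args = {"claim_60min", "sensor_20min", "low_battery"}
attacks = {("sensor_20min", "claim_60min"),   # sensor contradicts claim
           ("low_battery", "sensor_20min")}   # fault undermines sensor

surviving = grounded_extension(args, attacks)
# -> {"low_battery", "claim_60min"}: the claim survives, so the
#    conflict is explained as a data error rather than deception
```

Without the `low_battery` argument, only `sensor_20min` would survive and the user's claim would be rejected, which is the deception reading; the same machinery thus separates the two explanations.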
(dynamic) user questions, behaviors, errors, causes, historical user data; (static) societal health values, external background knowledge.
1. receive a question from a user;
2. classify the user behavior based on the question, historical data, and societal values;
3. assess the user behaviors;
4. classify a deceptive behavior as an error using Argumentation Theory;
5. induce the causes of the error;
6. rank the classified behaviors;
7. provide the recommended behavior based on the detected cause.
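The steps above can be sketched as a simple pipeline. Everything below is a hypothetical stand-in (function names, data fields, and the rule-based decisions are assumptions): real classifiers and the argumentation machinery would replace these stubs.

```python
def classify_behavior(question, history):
    """Steps 1-2: label the reported behavior against historical data."""
    reported = question["reported_minutes"]
    measured = history["measured_minutes"]
    return "consistent" if reported <= measured else "inconsistent"

def detect_error(behavior, context):
    """Steps 3-4: an inconsistent behavior counts as a data error when
    the context offers a conflicting explanation (an argumentation-style
    attack on the sensor evidence); otherwise it is flagged as deception."""
    if behavior == "inconsistent" and context.get("sensor_fault"):
        return "data_error"
    if behavior == "inconsistent":
        return "deception"
    return None

def induce_cause(error, context):
    """Step 5: pick a plausible cause for the detected error."""
    if error == "data_error":
        return context.get("fault_reason", "unknown sensor fault")
    if error == "deception":
        return "over-reported activity"
    return None

def recommend(question, history, context):
    """Steps 6-7: choose the recommendation matching the detected cause."""
    behavior = classify_behavior(question, history)
    error = detect_error(behavior, context)
    cause = induce_cause(error, context)
    if error == "data_error":
        return f"Check your device ({cause}) and re-sync before adjusting your plan."
    if error == "deception":
        return "Recorded activity is lower than reported; keep today's exercise goal."
    return "Activity on track; keep your current diet and exercise schedule."
```

For example, a user who reports 60 minutes of exercise while the watch measured 20 is routed to the data-error branch if a sensor fault (e.g., low battery) is known, and to the deception branch otherwise.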
Task mapping: Recognition (steps 1-2), Monitoring (step 3), Explaining (step 5), Recommendation (steps 4-7).