A robot that is a member of a human-robot team needs to perform its assigned tasks not only efficiently but also in a manner that human teammates find trustworthy. By maintaining adequate trust, the robot can prevent underutilization, disuse, and excessive supervision. We have previously investigated an agent that can learn behaviors that human operators find trustworthy, assess its own trustworthiness, and adapt its behavior accordingly. In this article, we add a transparency layer that allows the robot to provide simple, concise, and understandable explanations for why it adapted its behavior. Our approach uses case-based reasoning and reuses information stored in existing behavior adaptation cases, and therefore requires no additional knowledge to be collected or learned. We evaluate the system on scenarios from a simulated robotics domain. Our results demonstrate that the agent can provide explanations that closely align with an operator’s assessment of the robot’s behavior.
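The abstract only sketches the mechanism, so as a loose illustration (not the authors' implementation), a minimal case-based-reasoning loop for explanation generation might look like the sketch below. The `AdaptationCase` structure, its fields, and the feature-overlap similarity metric are all hypothetical; the only grounded idea is that the explanation is assembled from information already stored in a retrieved behavior adaptation case.

```python
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class AdaptationCase:
    """A stored behavior adaptation case (hypothetical structure)."""
    situation: dict       # feature -> value describing the task context
    trust_estimate: float # robot's self-assessed trustworthiness in [0, 1]
    adaptation: str       # behavior change that was applied
    rationale: str        # reason recorded when the adaptation was made


def similarity(query: dict, case: AdaptationCase) -> float:
    """Fraction of shared feature-value pairs (a simple stand-in metric)."""
    shared = sum(1 for k, v in query.items() if case.situation.get(k) == v)
    return shared / max(len(query), 1)


def explain(query: dict, case_base: list[AdaptationCase]) -> str:
    """Retrieve the most similar stored adaptation case and phrase its
    contents as a short natural-language explanation."""
    best = max(case_base, key=lambda c: similarity(query, c))
    return (f"I switched to '{best.adaptation}' because {best.rationale} "
            f"(self-assessed trustworthiness: {best.trust_estimate:.2f}).")


# Usage: one stored case, one query about the current situation.
case_base = [
    AdaptationCase(
        situation={"task": "patrol", "visibility": "low"},
        trust_estimate=0.42,
        adaptation="slower, more predictable paths",
        rationale="my recent behavior scored low on operator trust",
    )
]
print(explain({"task": "patrol", "visibility": "low"}, case_base))
```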