Code underlying the publication: Integrity-based Explanations for Fostering Appropriate Trust in AI Agents
This repository contains the software code developed for the publication "Integrity-based Explanations for Fostering Appropriate Trust in AI Agents".
Research Objective: How does the expression of different principles of integrity through explanations affect the appropriateness of a human's trust in the AI agent?
Type of research: Empirical
Code Environment: Microsoft Power Platform - Power Apps
Type of Code: Power Apps Solution. Solutions are the mechanism for implementing application lifecycle management (ALM) in Power Apps.
Method of data collection: Participants were given guest credentials to log in to the Power Apps platform and interact with an AI agent to estimate the calories of a food plate. The AI agent provided different types of integrity-laden explanations to support the participant's estimate. For each interaction, the participant's reliance on the AI agent was recorded in binary form: 0 (did not rely) or 1 (relied).
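
As a minimal illustration (not part of the Power Apps solution itself), the sketch below shows how the collected binary reliance records might be aggregated into a per-explanation-type reliance rate. The file name reliance_data.csv and the column names participant_id, explanation_type, and relied are hypothetical assumptions, not the repository's actual data format.

    # Hypothetical sketch: aggregate binary reliance records (0 = did not rely,
    # 1 = relied) into a reliance rate per explanation type. File and column
    # names are assumptions, not taken from the actual Power Apps solution.
    import csv
    from collections import defaultdict

    def reliance_rates(path):
        """Return the fraction of trials with relied == 1 for each explanation type."""
        totals = defaultdict(int)   # number of trials per explanation type
        relied = defaultdict(int)   # trials in which the participant relied on the agent
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                condition = row["explanation_type"]
                totals[condition] += 1
                relied[condition] += int(row["relied"])
        return {condition: relied[condition] / totals[condition] for condition in totals}

    if __name__ == "__main__":
        for condition, rate in reliance_rates("reliance_data.csv").items():
            print(f"{condition}: {rate:.2f}")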