This comprehensive review synthesizes over two decades of empirical research on trust formation in human-AI avatar and robot interactions. Drawing from meta-analyses, experimental studies, and theoretical frameworks across human-robot interaction (HRI), social robotics, and cognitive psychology, this document identifies the key design features that drive trust and presents an integrated framework for maximizing trust in AI systems.
The landmark meta-analysis by Hancock et al. (2011), analyzing 29 empirical studies, established that robot-related factors—particularly performance—have the strongest association with trust (r = +0.26), while environmental and human factors showed moderate to small effects.
Key Finding: Trust in AI avatars and robots is primarily determined by three interconnected domains: (1) the AI system's demonstrated competence and reliability, (2) its anthropomorphic design features that signal warmth and social presence, and (3) the transparency and predictability of its behavior.
This enhanced edition includes: expanded visual design guidelines with scientific references, comprehensive coverage of Duchenne smile research for avatars and robots, and detailed pre-testing methodologies for validating avatar/robot designs before deployment.
Visual Design Elements: Science-Based Guidelines
Research consistently demonstrates that visual design choices profoundly impact trust formation in human-AI interaction. This section synthesizes empirical findings to provide evidence-based guidelines for avatar and robot visual design.
Facial Feature Design and Trust Perception
Facial features are primary drivers of trustworthiness judgments, with effects emerging within milliseconds of exposure (Willis & Todorov, 2006). Research on anthropomorphic trustworthiness reveals specific design principles:
Eye Design
Studies show that large eyes significantly increase perceived trustworthiness in social robots (Reeves et al., 2020). The 'baby schema' effect—large eyes relative to face size—activates caregiving responses and increases perceived approachability and warmth. Research by Kalegina et al. (2018) found that the presence of lifelike eyes, rather than abstract or absent eyes, is perceived as more personable and suitable for domestic contexts.
• Optimal eye position: Medium vertical and horizontal positioning increases perceived trustworthiness
• Eye contact patterns: Small changes to eye contact frequency significantly affect perceived trustworthiness
• Gaze responsiveness: Embodied agents with appropriate eye gaze are trusted more than voice-only systems
Mouth and Expression Capability
Research by Pollmann et al. (2019) suggests that the absence of a visible mouth may actually aid emotional expressions in some robot designs by avoiding uncanny valley effects. However, when mouths are present, medium horizontal positioning increases perceived trustworthiness. The android robot Nikola demonstrated successful recognition of 17 out of 22 complex emotional expressions across international samples (Sato et al., 2025).
The Duchenne Smile: Critical Design Element for Trust
Named after the 19th-century French neurologist Guillaume Duchenne, the Duchenne smile has emerged as a critical factor in avatar and robot design for trust building.
What is a Duchenne Smile?
The Duchenne smile involves the simultaneous activation of two facial muscles: the zygomaticus major (lifting the corners of the mouth) and the orbicularis oculi (causing crow's feet wrinkles around the eyes). This distinguishes it from non-Duchenne 'social' smiles that only involve mouth movement.
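In the Facial Action Coding System (FACS), these two muscle actions correspond to AU12 (lip corner puller, zygomaticus major) and AU6 (cheek raiser, orbicularis oculi). A minimal sketch of how an expression pipeline might label a smile as Duchenne given AU intensity estimates (the function name and the 0.5 activation threshold are illustrative assumptions, not part of any validated coder):

```python
def is_duchenne_smile(au_intensities: dict[str, float], threshold: float = 0.5) -> bool:
    """Label a smile as Duchenne only if both AU12 (zygomaticus major)
    and AU6 (orbicularis oculi) exceed the activation threshold."""
    return (au_intensities.get("AU12", 0.0) >= threshold
            and au_intensities.get("AU6", 0.0) >= threshold)

# A non-Duchenne 'social' smile activates the mouth (AU12) without the eyes (AU6):
social = {"AU12": 0.8, "AU6": 0.1}
genuine = {"AU12": 0.8, "AU6": 0.7}
print(is_duchenne_smile(social))   # False
print(is_duchenne_smile(genuine))  # True
```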
Research shows individuals displaying Duchenne smiles are perceived as more trustworthy, likable, and socially competent (Schindler & Trede, 2021)
Duchenne Smiles in Virtual Agents and Avatars
Key Research Finding (Baylor & Kim, 2011): In studies with virtual peer counselors, digitally rendered smiles animated to approximate a Duchenne smile led to significantly more positive evaluations on perceived empathy and positivity. Frank, Ekman, and Friesen (1993) demonstrated that individuals can reliably distinguish between Duchenne and non-Duchenne smiles, with Duchenne smiles associated with more positive impressions.
Avatar Smile Enhancement Research (Oh et al., 2016): A study published in PLOS One found that participants who communicated via avatars exhibiting enhanced smiles used more positive words to describe their interaction experience. Targets with genuine smiles received the most positive ratings across all conditions.
Anticipatory Smiling: The Emo Robot Study
Breakthrough Research (Hu et al., 2024, Science Robotics): Researchers trained the anthropomorphic robot 'Emo' to display anticipatory facial expressions. The robot was equipped with 26 motors and flexible silicone skin for precise facial control. By observing subtle changes in a human's face, Emo could predict an approaching smile 839 milliseconds before the human smiled and adjust its face to smile simultaneously.
Design Implication: A mimicry smile occurring after the human has finished their expression feels artificial and disingenuous. Anticipatory smiles, generated with understanding and prediction of the other party's emotional state, are vital for creating genuine human-robot rapport. This is particularly critical as smiles play an outsized role in social bonding.
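The timing logic above can be sketched as a simple scheduling rule: given a predicted onset time for the human's smile (Emo's predictor forecast it roughly 839 ms ahead), the robot should begin actuating early enough that its own smile lands simultaneously rather than reactively. The 0.3 s actuation latency below is an assumed value for illustration:

```python
ACTUATION_LATENCY_S = 0.3   # assumed time for motors to reach the smile pose

def should_start_smiling(predicted_onset_s: float, now_s: float,
                         actuation_latency_s: float = ACTUATION_LATENCY_S) -> bool:
    """Start the robot's smile once the predicted human smile onset falls
    within the actuation window, so both smiles peak simultaneously."""
    return (predicted_onset_s - now_s) <= actuation_latency_s

# A predictor forecasts a human smile ~0.839 s ahead of the current moment:
now = 0.0
predicted = now + 0.839
print(should_start_smiling(predicted, now))        # False: too early to move
print(should_start_smiling(predicted, now + 0.6))  # True: within actuation window
```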
Implementation Guidelines for Duchenne Smiles
1. Eye region activation: Implement orbicularis oculi contraction causing cheek raise and eye narrowing
2. Temporal dynamics: Genuine smiles have characteristic onset (0.5s), apex (variable), and offset (slower than onset) timing
3. Symmetry: Duchenne smiles are typically more symmetrical than fake smiles
4. Anticipatory timing: For maximum rapport, implement predictive models to enable simultaneous (not reactive) smiling
5. Context congruence: Facial expressions deviating from context negatively impact trust (Paradeda et al., 2016; Appel et al., 2021)
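The temporal dynamics in guideline 2 can be sketched as an intensity envelope: a fast onset (~0.5 s), a held apex, and an offset that decays more slowly than the onset rose. The piecewise-linear shape below is a simplifying assumption for illustration; real smile trajectories are smoother:

```python
def smile_intensity(t: float, onset: float = 0.5, apex: float = 1.0,
                    offset: float = 1.0) -> float:
    """Piecewise-linear envelope for a Duchenne-like smile: fast onset,
    held apex, and an offset phase slower than the onset (offset > onset)."""
    if t < 0:
        return 0.0
    if t < onset:                       # rising phase
        return t / onset
    if t < onset + apex:                # held at full intensity
        return 1.0
    t_off = t - onset - apex
    return max(0.0, 1.0 - t_off / offset)  # slower decay back to neutral

# Sample the curve during onset, apex, and offset (0.5 s / 1.0 s / 1.0 s):
for t in (0.25, 0.75, 1.75):
    print(round(smile_intensity(t), 2))  # 0.5, 1.0, 0.75
```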
Anthropomorphism Strategy: Navigating the Uncanny Valley
Research reveals that facial expressions can enhance warmth and attractiveness of robots and affect trust impressions (Ghazali et al., 2018; Mishra et al., 2023; Dong et al., 2023). However, design choices must navigate the uncanny valley phenomenon.
Clearly Robotic: Industrial HRI research shows that robots presented as tools are perceived as more trustworthy in work settings (Onnasch & Hildebrandt, 2021). This approach avoids the uncanny valley, sets appropriate expectations, and is best suited to task-focused contexts.
Stylized/Cartoon: Dong et al. (2023) found that facial expressions increase warmth and attractiveness on humanoid robots without introducing uncanny valley effects. This style yields high likability and positive affect, and is effective for social and educational contexts.
Hyper-Realistic: Avatar realism research finds no uncanny valley dip once photorealistic avatars reach sufficient quality, so maximizing realism maximizes trust (2024 avatar quality study). This approach offers maximum emotional engagement if executed well, but demands excellent animation quality; LLM-driven conversational capabilities can further reduce eeriness.
Testing Methodologies
1. Static Image Pre-Testing
Purpose: Rapid assessment of visual design elements before investing in animation/interaction development.
• Present static images of avatar/robot designs to target user sample (minimum N=50)
• Administer RoSAS-SF (6 items) for rapid warmth/competence/discomfort assessment
• Include trustworthiness rating items: 'How trustworthy does this avatar appear?' (1-7 scale)
Research Basis: Willis & Todorov (2006) showed that 100ms exposure is sufficient for initial trustworthiness evaluations. Carpinella et al. (2017) validated RoSAS using robot images alone.
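The static pre-test above yields per-item ratings that are aggregated into subscale scores. A minimal scoring sketch, assuming two items per RoSAS-SF subscale on a 1-7 scale (the item-to-subscale assignment shown here is illustrative, not the validated instrument):

```python
from statistics import mean

# Hypothetical responses from one participant, two items per subscale (1-7):
responses = {
    "warmth":     [6, 5],
    "competence": [7, 6],
    "discomfort": [2, 1],
}

# Each subscale score is the mean of its item ratings:
subscale_scores = {name: mean(items) for name, items in responses.items()}
print(subscale_scores)  # {'warmth': 5.5, 'competence': 6.5, 'discomfort': 1.5}
```

Averaging these scores across the participant sample (N ≥ 50, per the guideline above) gives the design-level warmth/competence/discomfort profile.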
2. Video-Based Expression Testing
Purpose: Validate facial expression recognition and emotional appropriateness.
• Create short video clips (3-5 seconds) of each target facial expression
• Ask participants to identify the emotion expressed (forced choice from 6 basic emotions)
• Measure 'appropriateness' rating: 'How appropriate is this expression?' (1-7 scale)
• Target: 80%+ correct recognition rate (for reference, the Nikola android validation reported an 82.5% average appropriateness rating)
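The forced-choice protocol above reduces to a simple pass/fail computation per expression clip. A minimal sketch with hypothetical response data (the counts below are invented for illustration):

```python
def recognition_rate(responses: list[str], target: str) -> float:
    """Fraction of forced-choice responses matching the target emotion."""
    return sum(r == target for r in responses) / len(responses)

# Hypothetical forced-choice responses for one 'happiness' clip (N=50):
choices = ["happiness"] * 42 + ["surprise"] * 5 + ["fear"] * 3
rate = recognition_rate(choices, "happiness")
print(f"{rate:.0%}")                      # 84%
print("meets 80% target:", rate >= 0.80)  # meets 80% target: True
```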
3. Interactive Trust Assessment
Purpose: Measure trust development during actual interaction.
• Administer Trust Perception Scale-HRI before and after interaction
• Include behavioral measures: advice-following rate, task delegation decisions
• Consider implicit measures: Implicit Association Test for trustworthiness (Van der Biest et al., 2020)
• Track reaction times in decision-making tasks (faster decisions indicate higher trust)
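The pre/post design above calls for a within-subject comparison of trust scores. A minimal analysis sketch using invented scores (for a publishable result you would run a paired significance test, e.g. with scipy; only the descriptive step is shown here):

```python
from statistics import mean

# Hypothetical Trust Perception Scale-HRI scores (0-100) for 8 participants,
# measured before and after the interaction session:
pre  = [55, 60, 48, 70, 62, 58, 65, 52]
post = [68, 72, 55, 78, 70, 66, 73, 61]

# Within-subject change scores are the unit of analysis:
diffs = [after - before for before, after in zip(pre, post)]
print("mean trust change:", mean(diffs))                        # 9.125
print("participants increasing:", sum(d > 0 for d in diffs))    # 8
```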
Key Scientific References
Hancock, P. A., et al. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517-527.
Hu, Y., et al. (2024). Human-robot facial coexpression. Science Robotics. (Emo robot anticipatory smile study)
Dong, J., et al. (2023). Facial Expressions Increase Emotion Recognition Clarity and Improve Warmth and Attractiveness on a Humanoid Robot. HFES.
Reeves, S., et al. (2020). Effect of facial features on facial anthropomorphic trustworthiness in social robots. Applied Ergonomics.
Onnasch, L., & Hildebrandt, C. L. (2021). Impact of Anthropomorphic Robot Design on Trust and Attention in Industrial HRI. ACM THRI.