Introduction

This paper explores AI-mediated communication (AI-MC): interpersonal communication in which an AI system modifies, augments, or generates content on behalf of the communicators. The research investigates how using Google's Smart Reply in a text-based referential communication task affects language use, interpersonal cognition, and task performance. Its main contributions are replicating the positivity bias of AI-generated language and introducing the adjacency-pair framework to AI-MC research.

  • For many people, Smart Reply is an interesting example of AI-mediated communication (blu3mo).

Language Patterns in AI-MC

2.1 Positivity Bias

AI-generated suggestions, such as those produced by Google's Smart Reply, tend to skew toward positive emotional content. The study hypothesizes that AI-generated language is more positive than human-generated language and that this positivity bias carries over into the language of both the sender and the receiver.

2.2 Adjacency Pairs and AI-MC

This study employs the concept of adjacency pairs (pairs of consecutive utterances in which the second utterance is conditioned on the first, as in question-answer) to examine how Smart Reply suggestions are incorporated into the dialogue and used as conversational turns.
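
As a rough sketch of how this framing can be operationalized, the snippet below pairs consecutive chat turns; the message format, example turns, and source labels are hypothetical illustrations, not the paper's coding scheme.

```python
# Minimal sketch: pairing consecutive chat turns into adjacency pairs.
# The message format, example turns, and labels are hypothetical
# illustrations, not the coding scheme used in the paper.

from typing import Dict, List, Tuple

def to_adjacency_pairs(messages: List[Dict]) -> List[Tuple[Dict, Dict]]:
    """Pair each utterance with the utterance that immediately follows it."""
    return [(messages[i], messages[i + 1]) for i in range(len(messages) - 1)]

log = [
    {"speaker": "director", "source": "human",       "text": "Does yours look like a dancer?"},
    {"speaker": "matcher",  "source": "human",       "text": "Yes, I think so."},
    {"speaker": "director", "source": "smart_reply", "text": "Sounds good!"},
]

for first, second in to_adjacency_pairs(log):
    print(f"{first['speaker']} -> {second['speaker']}: "
          f"{second['text']!r} (source: {second['source']})")
```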

Interpersonal Cognition

Language is closely linked to interpersonal cognition, particularly the dimensions of warmth and competence. The research investigates whether receivers perceive senders as warmer when AI suggestions inject positive language into their messages. It also examines the effect of AI language on social attractiveness and task attractiveness.

Task Performance

The study examines how AI intervention affects task accuracy, conversation length, and words per message. It also considers the influence of AI language’s positivity bias on task performance.
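
As a rough illustration, here is a minimal sketch of how these three measures could be computed from a conversation log; the field names and example data are hypothetical, not the study's actual log format.

```python
# Minimal sketch: computing the task-performance measures named above.
# Field names and example data are hypothetical illustrations.

def performance_metrics(messages, correct_matches, total_figures):
    n_messages = len(messages)
    words_per_message = (
        sum(len(m["text"].split()) for m in messages) / n_messages if n_messages else 0.0
    )
    return {
        "accuracy": correct_matches / total_figures,  # share of figures matched correctly
        "conversation_length": n_messages,            # number of messages exchanged
        "words_per_message": words_per_message,
    }

log = [
    {"speaker": "director", "text": "The next one looks like a person kneeling."},
    {"speaker": "matcher", "text": "Got it."},
]
print(performance_metrics(log, correct_matches=10, total_figures=12))
```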

Current Research

This research uses a referential communication task (the tangram task, in which a "director" describes abstract tangram figures for a "matcher" to identify), conducted in a laboratory, to measure the influence of AI language. It compares dyads randomly assigned to a condition using Smart Reply with dyads assigned to a control condition without it.

  • What is that?

Method

6.1 Participants

68 individuals (34 dyads) participated in this study, with a smaller sample size than planned due to the impact of COVID-19.

6.2 Procedure

Participants received instructions in separate rooms and performed tasks using Google Hangouts Chat. Directors in the AI-MC condition were instructed to use Smart Reply whenever available.

6.3 Measurement Items

Measurement items included warmth and competence, social attractiveness and task attractiveness, familiarity with AI and Smart Reply, and a funnel debriefing (a staged series of questions probing whether participants had guessed the study's purpose).

6.4 Data Analysis Approach

Conversation logs were exported with Google Takeout, and AI-generated language was distinguished from human-generated language. The logs were then analyzed using both human content analysis and a dictionary-based computational approach.
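
A minimal sketch of what a dictionary-based positivity measure could look like, assuming each logged message is tagged with whether its text came from Smart Reply or was typed by the participant; the word lists below are tiny illustrative stand-ins, not the sentiment dictionary used in the study.

```python
# Minimal sketch of a dictionary-based positivity score.
# POSITIVE/NEGATIVE word lists and the example messages are
# hypothetical stand-ins, not the study's actual dictionary or data.

import re

POSITIVE = {"great", "good", "yes", "thanks", "awesome", "perfect", "sounds"}
NEGATIVE = {"no", "wrong", "bad", "sorry", "confused"}

def positivity(text: str) -> float:
    """Fraction of positive words minus fraction of negative words."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

messages = [
    {"source": "smart_reply", "text": "Sounds good!"},
    {"source": "human", "text": "The third piece looks like a bird, no tail."},
]

for source in ("smart_reply", "human"):
    scores = [positivity(m["text"]) for m in messages if m["source"] == source]
    print(f"{source}: mean positivity = {sum(scores) / len(scores):.2f}")
```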

Results

7.1 Positivity Bias of Smart Reply

Both human content analysis and computational analysis confirmed that Smart Reply suggestions are overwhelmingly positive.

7.2 Human-AI Composite Messages and Human Language

Messages from directors in the AI-MC condition were more positive than messages from directors in the control condition.

7.3 Pragmatics of AI-MC

The study examined how directors incorporated Smart Reply into their messages, identifying six types of adjacency pairs and three speech types.

7.4 Interpersonal Cognition and Task Performance

No evidence was found that AI-generated language influenced perceived warmth or competence, but it did affect social attractiveness. There was no significant effect on task accuracy or conversation length.

Discussion

This study confirmed that AI language tends to carry a positivity bias and showed that the adjacency-pair concept is useful for analyzing AI language in conversation. Introducing AI language may affect language patterns in conversation and interpersonal cognition.

8.1 Limitations

Limitations of this study include the small sample size caused by COVID-19 and a sample restricted to American university students. Future research should use more diverse samples.

8.2 Implications

The results provide insights into language patterns, interpersonal cognition, and task performance under AI-mediated communication, which can inform the design of AI systems.

8.3 Conclusion

AI language was shown to carry a positivity bias and may reduce the sender's social attractiveness. The adjacency-pair framework proved useful for analyzing AI language in conversation.