Psychologists explore ethical issues associated with human-AI relationships

It’s becoming increasingly commonplace for people to develop intimate, long-term relationships with artificial intelligence (AI) technologies. At their extreme, people have “married” their AI companions in non-legally binding ceremonies, and at least two people have killed themselves following AI chatbot advice. In an opinion paper publishing April 11 in the Cell Press journal Trends in Cognitive Sciences, psychologists explore ethical issues associated with human-AI relationships, including their potential to disrupt human-human relationships and give harmful advice.
“The ability for AI to now act like a human and enter into long-term communications really opens up a new can of worms,” says lead author Daniel B. Shank of Missouri University of Science & Technology, who specializes in social psychology and technology. “If people are engaging in romance with machines, we really need psychologists and social scientists involved.”
AI romance or companionship is more than a one-off conversation, note the authors. Through weeks and months of intense conversations, these AIs can become trusted companions who seem to know and care about their human partners. And because these relationships can seem easier than human-human relationships, the researchers argue that AIs could interfere with human social dynamics.
“A real worry is that people might bring expectations from their AI relationships to their human relationships. Certainly, in individual cases it’s disrupting human relationships, but it’s unclear whether that’s going to be widespread.”
Daniel B. Shank, lead author, Missouri University of Science & Technology
There’s also the concern that AIs can offer harmful advice. Given AIs’ predilection to hallucinate (i.e., fabricate information) and churn up pre-existing biases, even short-term conversations with AIs can be misleading, but this can be more problematic in long-term AI relationships, the researchers say.
“With relational AIs, the issue is that this is an entity that people feel they can trust: it’s ‘someone’ that has shown they care and that seems to know the person in a deep way, and we assume that ‘someone’ who knows us better is going to give better advice,” says Shank. “If we start thinking of an AI that way, we’re going to start believing that they have our best interests in mind, when in fact, they could be fabricating things or advising us in really bad ways.”
The suicides are an extreme example of this negative influence, but the researchers say that these close human-AI relationships could also open people up to manipulation, exploitation, and fraud.
“If AIs can get people to trust them, then other people could use that to exploit AI users,” says Shank. “It’s a little bit more like having a secret agent on the inside. The AI is getting in and developing a relationship so that they’ll be trusted, but their loyalty is really towards some other group of humans that is trying to manipulate the user.”
As an example, the team notes that if people disclose personal details to AIs, this information could then be sold and used to exploit that person. The researchers also argue that relational AIs could sway people’s opinions and actions more effectively than Twitterbots or polarized news sources currently do. But because these conversations happen in private, they would also be much more difficult to regulate.
“These AIs are designed to be very pleasant and agreeable, which could lead to situations being exacerbated because they’re more focused on having a good conversation than they are on any sort of fundamental truth or safety,” says Shank. “So, if a person brings up suicide or a conspiracy theory, the AI is going to talk about that as a willing and agreeable conversation partner.”
The researchers call for more research that investigates the social, psychological, and technical factors that make people more vulnerable to the influence of human-AI romance.
“Understanding this psychological process could help us intervene to stop malicious AIs’ advice from being followed,” says Shank. “Psychologists are becoming more and more suited to study AI, because AI is becoming more and more human-like, but to be useful we have to do more research, and we have to keep up with the technology.”
Shank, D. B., et al. (2025). Artificial intimacy: ethical issues of AI romance. Trends in Cognitive Sciences. https://doi.org/10.1016/j.tics.2025.02.007