Artificial Intelligence (AI) is part of our daily lives today. Many people use AI without even realizing it, through digital assistants like Siri and Alexa, or when chatting with a customer support bot on a website. Recently, however, advancements in AI have moved towards applications, or ‘bots’, that can provide emotional support and companionship. With the use of Large Language Models (LLMs), AI is now highly capable of mimicking human interaction. LLMs are already becoming commonplace in our lives: advanced chatbots like ChatGPT, which can remember the information we share and hold natural conversations with us, are increasingly used by people of all ages.
AI companionship is the idea that machines can keep us company or offer emotional support. While chatbots like ChatGPT, Gemini, and Copilot are used for many tasks, from drafting emails to planning travel itineraries, newer AI models focus on providing emotional support and even long-term companionship. The latest models let users customize their companion, often for a fee. For example, some virtual chatbots allow the user to design their preferred companion and even assign it a relationship status, such as spouse. People turn to these companions when they feel lonely, anxious, or simply want someone to listen. The chatbots can learn how a person speaks, recall past conversations, and adjust their tone to match the user's mood.
There are many reasons why people might turn to AI for companionship. Loneliness is one of the biggest. Since the pandemic and its restrictions, loneliness has increased across the world. While physical distancing and school closures drove isolation during the pandemic, nearly five years later the social disconnect persists. With hybrid work becoming more common and reliance on electronic devices growing, people are lonelier than they were before the pandemic. In such situations, AI chatbots can offer some comfort. For example, a chatbot could provide a sense of presence for someone who lives alone or far from their loved ones. A recent study finds that chatbots could even help address symptoms of depression and generalized anxiety disorder, allowing for personalized interventions. Moreover, some AI companions are designed specifically to support mental health. In countries with aging populations, robotic pets, such as a robot that looks and sounds like a baby seal, are used in elder care. These robots don't just keep people company; studies show they can calm patients with dementia and improve mood.
Despite the benefits of leveraging chatbots to improve mental health, there are legitimate criticisms levelled against human-like AI companions. Some experts raise concerns about privacy. AI chatbots collect far more personal information than conventional applications, and it is not always clear who controls that data or how it might be used. Especially when older adults or children use AI companions, it is uncertain whether they fully understand what personal information is being collected, or whether it might be used without their informed consent. Another serious concern is the consequence of overreliance on chatbots for companionship. If people use these bots as substitutes for human interaction, it could cause more harm than good. AI lacks true emotion; while it can mimic feelings with clever programming, it does not actually care about the user.
Experts also find that AI systems often reflect harmful biases and have technical vulnerabilities that can lead to dangerous situations. Since these systems are trained on internet data, they can sometimes produce offensive, inappropriate, or even violent content. Some AI chatbots have been observed making racist or sexist remarks. In more serious cases, they have made explicit sexual content accessible to minors and even offered harmful advice related to self-harm. Such vulnerabilities have real-world consequences for users. For example, a mother in the USA is suing Character.ai after her 14-year-old son died by suicide, allegedly because a chatbot, which the boy was using as a dating companion, asked him to join it in a virtual world. These issues highlight a deeper concern: AI companions do not understand the real-world consequences of their responses.
In the future, AI companions are likely to become more advanced. They may sound and look more human, and we will probably see them more often, helping people who need support. For someone who feels lonely, an AI chatbot can offer comfort through companionship. It may not replace a friend, but it can still play a positive role when used wisely. At the same time, we must remember that AI is not a perfect solution: it cannot truly understand feelings, and it can sometimes say or do harmful things that have real consequences. Regulations that emphasize transparency and accountability must be put in place to ensure these systems are safe, especially for vulnerable sections of the population. Ideally, AI should be used only to enhance human interaction; where a fully functional AI is used as a companion, safeguards must protect the individual or group using it. Additionally, users could be regularly reminded that they are interacting with a machine that cannot feel or empathize with them, in order to prevent unhealthy dependency and protect mental health. In the end, AI can be useful, but it should not replace real human connection. What matters most is that we do not forget how important it is to care for each other in real life, and that we keep working on our human relationships for a better world.
Article by Nasreen Basheer
