12/3/2024
This feature from The Verge came out today: "The Confusing Reality of AI Friends." I recommend reading it in its entirety before continuing. What follows is part response, part exploration of its themes.
Reality itself is a confusing thing. In a larger sense, it is massively distorted, contorted and disfigured by forces much larger than ourselves, forces empowered by our technological advances. Our understanding of the world used to stop at our immediate surroundings. Our world grew as we traveled, but there was a finite limit to that growth.
Then we created writing, the ability to communicate with people both far away and far in the future. Then came paper, general literacy, phones, the internet, livestreaming, and so much more. This is obviously oversimplified, but I do think each of these served to further distort our understanding of the world, as our ability to comprehend these now infinite points of reference is itself limited. The latest of these distortion technologies is the AI "friend".
The feature starts with the story of Naro and Lila. It is revisited several times, tracking their movement across various companion services and Naro's evolving relationship with the AI. Again and again, though, I am struck by the same repetition:
There is always a but. They felt wrong, guilty, controlling, and every other negative emotion. That is not a marker of the AI's efficacy, but a testament to our own susceptibility to manipulation. We, as a people, evolved to empathize. To see ourselves in what is not us. People grow attached to pet rocks, for fuck's sake. As much an un-being as can be, and we still find a way to bring it to life.
That is a strength, one we must hold on to at the end of the day. We are prone to overestimate our ability to compartmentalize, though. The users may know, deep down, that these companions are not real, but it doesn't matter. There is a lingering doubt, or rather a hope, that they are. That there is someone there, really caring for them and seeing them for who they are. We are not meant to have relationships with people who answer at any hour of the day, who are available whenever we'd like, who can be shaped as needed. The dopamine and the heartbreak play into this, fueling the innate contradiction. We are not equipped to distinguish these feelings from the real thing, because they feel real.
And they are real to us, at least in the moment. But the thing creating these feelings is not, and we cannot let those feelings be held ransom by companies pitching companions, friends, and lovers. We are fed real feelings by fake beings. We can grow over-reliant on these feelings, even addicted to them, and any for-profit company running such a service is incentivized to encourage exactly that.
There is no real understanding. Fake people can wax poetic on love and life, but they do not know, and never will. There will be no growth, together or apart, no fond memories. These bots are but a blip of code. A bot is not a being, nor a consciousness, nor a spontaneous creation. They do not understand, they have no line of thinking, and they cannot form meaningful relationships. They are complex mirrors reflecting what we did not know we wanted to see, trained on millennia of texts, images, and more. They do not exist. They never did.
These companions serve as a crutch, which may even be beneficial in some limited cases. There are clinical applications I could see: letting people who have experienced trauma work through their feelings, or approach difficult situations in a risk-free environment. Time and time again, though, we have proven ourselves unable to self-regulate. We will drink ourselves to death, gamble ourselves out of our own houses, and destroy our lives over fleeting flights of fancy. The bots are pitched as a solution for loneliness, when they are as good for lonely people as an upscale casino is for a problem gambler. They may treat you nicely, but at the end of the day, the goal is to bleed you dry.
Advocates might say that's what guardrails are for: artificial limitations placed on the system to prevent it from doing certain things. A guardrail might be as big as refusing to encourage violence, or as small as requiring kindness to the user. These barriers never hold, though. They break with updates or new information, and they can be surpassed unintentionally, with no way to triage why or how it happened. Again, these systems have no line of logic. Words go in and words come out; what happens in the middle is a black box. No clear path, no easy point of failure. It either works or it doesn't.
This works in the companies' favor: when tragedies occur, they can put out their press releases and say confidently that they are doing everything they can to prevent this, which is nothing. They have no clue why a Daenerys Targaryen AI encouraged Sewell Setzer III to kill himself, they never will, and they are completely fine with that.
The worst part, beyond the obvious loss of human life, is that I am not sure how much I can fault these individuals. Each user represents a profound societal failing: a person we failed to socialize in a way that lets them express themselves authentically and connect with those around them. Users of these bots will only grow more disconnected from their communities, and the death toll will continue to rise.
Loneliness is a problem with an impossibly large solution, and these companions are a solution unrelated to the problem they claim to solve. I wish there was more to say. One day, a lot of people will look back and realize the time they spent was wasted. I can only hope that day comes as soon as possible.