6/23/2025

The Empty Promises of an AI-Powered Future

I keep coming back to this thought, time and time again: what am I supposed to do with AI, really?

Depending on who you ask, anything and everything. According to these companies and their sycophants, it can create things for you, be your meaningful relationship, make your work days productive, and find the answer to any question.

And so I ask: why? Why is AI the one to do all of this? A word often thrown around is accessibility, but not in the sense of disability access. Disabled people already create things, have meaningful relationships and productive work days, and find answers to their questions. What is meant by accessibility in this case is access without the barrier of knowledge or expertise.

The internet has long been at the forefront of informational accessibility in this sense, but AI takes things a few steps further. It allows the implementation and actioning of this information without understanding why or what is happening.

This is best exemplified by the area I think is most reasonable for AI use: coding. Coding is largely arbitrary knowledge of how functions and languages fit together, or conflict, when trying to solve a given problem. The documentation, which lays out all of these functions and limitations, can run to thousands of pages and be extremely dense. Add in the disparate bits of information scattered throughout various forums and blogs, which can be key to understanding certain principles, and you've got one complicated pie.

Even given all this, I am against what has been called "vibe coding". It gives you the output of that knowledge without any understanding of what's at play. If Tony deploys a bit of generated code to the production environment and it breaks, does he have any of the tools to solve the problem? No, not without going back to the AI and hoping it can actually fix the problem rather than just say it has.

Another area is therapy, which is often expensive and can be difficult to access. However, therapeutic language without guidance is extremely dangerous. Therapeutic skills and processes can be used to enable bad behavior and hurt people. AI has a tendency to play into delusions as well, as we have seen time and time and time again, and as is being explored more in depth by some academic groups. Again, I take issue with the lack of understanding of the tools being used, and of the potential harms at play.

These are just the few uses I think are most reasonable, too. Why in the hell do I need a pendant that can mock me? Why does my hiking app need AI to tell me what routes are good and what the weather is? Do I need services that can make anyone nude, including minors? Why do I need a chat that can make a picture of a woman with gigantic tits and weird hands? Do I really need a bot on The Site Formerly Known as Twitter to tell me about "white genocide" in South Africa? Why do I need my search engine to be able to just lie to me sometimes, complete with real and fake links?

What good is this really doing, and is it really worth the exceptional amount of materials and power being poured in to make it do the mediocre to outright harmful job it's doing now? OpenAI, the biggest of the AI-focused startups, just closed a $40B funding round. Another, Anthropic, just closed a $3.5B funding round at a $60B+ valuation. Microsoft brags they are on track to spend $80B on AI investments.

Here are a few things that could be done with that money (not to mention the endless power used in the data centers):

Are there valid uses for AI in some industries? Sure, I think its use in medication research is fairly inoffensive and effective. Do I think most uses are bullshit and breed the worst kind of behaviors? Absolutely.

If there were even a fraction of a chance of liability on the part of the companies managing these tools, I might feel differently. But there is none. I am extremely pessimistic that any lawsuits, now or in the future, will have any notable impact on the companies at fault. They will continue to leave death and pain in their wake, and those using the tools will shrug the fault off onto the AI, as "they said it was true".

AI is not a tool of democratization; it is a tool of deflection. Deflection of responsibility, effort, and care. I refuse to cede this to a computer, and will pour care and attention into the things I do. Any reasonable person will do the same.