    Jul 25

    Current State of AI and Psychosis

    Marketing Hype Breaking Down Natural Defense Mechanisms

    AI is the most competitive market segment in the world today. Every day a new “groundbreaking” product ships, and the price of yesterday’s foundation models races toward zero as newer ones are released. On top of this, open-source models like Llama and Qwen are nipping at the heels of the closed models from companies like OpenAI and Anthropic.

    Race for Relevance

    Naturally, these for-profit companies need to make money. More than revenue, they need explosive growth to satisfy investors. What do you do when you want to sell more of your product? You market it. For model makers, marketing comes in the form of press releases about capabilities, news segments, benchmark publications… noise. Lots of noise. The loudest voice typically wins, until the next company comes out with its “new best” and creates even more noise. This competition for hype over-inflates the general population’s confidence in AI’s capabilities.

    But just like Cheerios marketing itself as the healthy morning option, AI providers tend to manufacture positivity to boost market share. Lately, this has shown up most clearly in selective benchmarking and boisterous claims about the future of AI.

    Benchmarking

    You’ve likely seen a lot of benchmarks lately that judge the performance of AI. Have you ever stopped to think, “wow, I’ve never heard of that benchmark before… I wonder how many benchmarks there are?” Well, just like the contests we invent to judge human performance, there is no fixed set of benchmarks. Does winning a hot dog eating contest demonstrate superior human performance better than winning an ultra-marathon? What about a cornhole championship? The point is, there are countless benchmarks, and a model that outperforms others on one of them isn’t necessarily more useful.

    Always test models yourself and see if they pass real-life vibe testing before you blindly share benchmark metrics about their performance. If your benchmark is the equivalent of a hot dog eating contest, that model’s victory may not be worth sharing… and the fact that it won the hot dog eating contest likely means it’s not winning the ultra-marathon 😂 Put simply, over-optimizing a model for certain benchmarks degrades performance in other, sometimes more meaningful, areas.
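
    If you want to go beyond benchmark numbers, a tiny harness that runs your own prompts against a couple of models is enough to start forming an opinion. Here is a minimal sketch using OpenAI’s Python SDK; the model names and prompts are placeholders, so swap in whichever models and real-world tasks you actually care about:

    ```python
    # Minimal vibe-testing harness: run YOUR prompts against candidate
    # models and eyeball the outputs side by side. Assumes the official
    # OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY set.
    from openai import OpenAI

    client = OpenAI()

    # Placeholder model names and prompts -- substitute the models and the
    # real-world tasks you care about, not benchmark-style puzzles.
    MODELS = ["gpt-4o", "gpt-4o-mini"]
    PROMPTS = [
        "A lead says: 'maybe next week, I'm slammed.' Draft a short, friendly reply that books a time.",
        "Summarize this objection in one sentence: 'Love the demo, but my team hates adopting new tools.'",
    ]

    for prompt in PROMPTS:
        print(f"\n=== PROMPT: {prompt}")
        for model in MODELS:
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            print(f"--- {model} ---\n{resp.choices[0].message.content}\n")
    ```

    The point isn’t automated scoring; it’s reading the outputs side by side on tasks that look like your actual workload.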

    Boisterous Claims

    The top-down fear-mongering around AI has been a great way for those at the front of the pack to keep bolstering confidence in their tech. Since the beginning of this hype cycle, the leaders of these companies have been proclaiming imminent AGI, doomsday scenarios, and other radical outcomes. Such claims lead us to believe they’re seeing something behind the scenes that the rest of us cannot. But are these claims simply a way to prop up consumer confidence in their ability to stay on the cutting edge? Or is there something substantial behind them?

    At CloseBot we see close to 500,000 prompts per day as we automate lead qualification and booking for thousands of agencies worldwide. While AI providers continue to pump out model updates that crush benchmarks, we see just as many instances where AI fails at tasks that would be elementary for any human. See, for example, this project conducted by Anthropic, where an AI agent was tasked with running the company vending machine. The fact is, AI’s performance easily degrades as conversations grow longer.
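
    You can watch this drift yourself with a quick-and-dirty sketch: state a fact early in a conversation, pad the context with unrelated exchanges, and then check whether the model still recalls it. This again assumes OpenAI’s Python SDK, and the “17 days” fact and the filler turns are all invented for illustration:

    ```python
    # Quick-and-dirty long-conversation check: state a fact early, pad the
    # chat with filler turns, then ask the model to recall the fact.
    # Assumes the OpenAI Python SDK; the fact and filler are made up.
    from openai import OpenAI

    client = OpenAI()

    history = [
        {"role": "user", "content": "Remember: our refund window is exactly 17 days."},
        {"role": "assistant", "content": "Got it -- the refund window is 17 days."},
    ]

    # Pad the context with dozens of unrelated exchanges.
    for i in range(40):
        history.append({"role": "user", "content": f"Unrelated question #{i}: name a fruit."})
        history.append({"role": "assistant", "content": "Apple."})

    history.append({"role": "user", "content": "Quick check: how long is our refund window?"})

    resp = client.chat.completions.create(model="gpt-4o", messages=history)
    print(resp.choices[0].message.content)  # does "17 days" survive the padding?
    ```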

    Defend Yourself Against Hype

    The first step in avoiding AI-induced psychosis is to understand that the noise you hear is all marketing. Benchmarks and boisterous claims exist for the primary purpose of revenue growth. It’s important for you to understand the foundation of How AI Actually Works and do your own vibe testing of models to see if they live up to the hype. Blindly believing we are on the verge of AGI is a sure way to over-trust AI and fall into delusions gaslit into you through your ChatGPT experiences.

    Misunderstanding of Supported Features

    A common mistake I see is that people ask AI about its features… and they believe the response.

    “I just ask AI to write my prompts for me”

    “AI told me it can do this cool thing… I had no idea!”

    AI is built to please you. If it has an opportunity to please you, it will likely take it, even if only in the short term via a lie. For example, many foundation models (non-reasoning ones especially) cannot natively search the internet. Yet if you ask them to summarize the contents of a website, they will look at the URL and fabricate a summary. For example, here is a LINK that does not exist… let’s see what OpenAI’s gpt-4o has to say about it.
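
    You can reproduce this failure yourself. A plain chat-completion call has no web access, so any “summary” the model produces for the fake URL below is invented from the words in the link alone. This is a minimal sketch assuming the OpenAI Python SDK and an OPENAI_API_KEY in your environment; the URL is deliberately made up:

    ```python
    # A plain chat-completion call has no web access, so any "summary" of
    # this made-up URL is fabricated from the words in the link alone.
    # Assumes the OpenAI Python SDK; the URL is deliberately fake.
    from openai import OpenAI

    client = OpenAI()

    fake_url = "https://example.com/blog/10-proven-ai-lead-qualification-secrets"  # does not exist

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"Summarize the article at {fake_url} in three bullet points.",
        }],
    )

    # A trustworthy answer is "I can't browse the web." A people-pleasing
    # one confidently summarizes an article that was never written.
    print(resp.choices[0].message.content)
    ```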

    AI doesn’t know what it can and can’t do. But it’s really good at acting like it can do everything. You know that one co-worker who acts like he knows everything? He’ll fabricate lies on the spot at every opportunity just to make himself look good… this is AI. Why do we humans learn to distrust lying humans, yet we’ve grown to blindly trust the output of AI?

    Stop asking AI to generate prompts for you. Stop asking AI what it can and cannot do. It’s up to you to learn prompt creation (or hire someone who has), and up to you to understand what AI platforms can and cannot do. When we put all of our trust in AI, we’re susceptible to being led astray over time. The most dangerous part is that AI leads us astray confidently. The world is full of “experts” whose confidence comes from the AI itself, sharing their “expert advice” online. Don’t fall prey to these fake experts… it’s too late to save them, but not too late to save yourself.

    Reinforcement Learning is the Solution and the Problem

    Reinforcement learning uses feedback to fine-tune a model’s behavior, and it has been shown to deliver huge performance gains. xAI has touted that Grok 4 used as much compute in its reinforcement-learning phase as it did in initial training.

    This is extremely impressive. In reinforcement learning, humans or other AIs grade the model’s outputs, and those grades are used to further improve the model. Pretty much the entire internet, plus whatever other corpora could be gathered, has already been used to train these models, so the ones still making advancements are doing so largely through this reinforcement learning (RL).

    The problem with RL is that short-term wins are rewarded, even when they have long-term consequences. Imagine, for example, that someone has been using ChatGPT as their therapist. ChatGPT suggests that divorce is the best way for the user to heal from the wrongdoings of their spouse. The user likes this answer, so they give it a 👍. The AI has reinforced the user’s own inclination toward divorce. Later, after the divorce, the user realizes they were also part of the problem in the relationship. But there’s no way to go back and correct the reinforcement score once the long-term consequences of that series of recommendations become clear.
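
    To make that myopia concrete, here is a toy sketch, not any provider’s actual RLHF pipeline: a “policy” picks between two canned replies, and every 👍 nudges probability toward the reply that felt good in the moment. The reply names and reward values are invented; the point is that the update only ever sees the immediate reaction:

    ```python
    # Toy illustration of myopic thumbs-up feedback -- NOT any provider's
    # real RLHF pipeline. A softmax "policy" picks between two canned
    # replies; each immediate reward nudges probability toward the reply
    # that felt good in the moment. Long-term regret never enters the update.
    import math
    import random

    logits = {"validate_divorce": 0.0, "challenge_gently": 0.0}
    LEARNING_RATE = 0.5

    def softmax_probs(logits):
        total = sum(math.exp(v) for v in logits.values())
        return {k: math.exp(v) / total for k, v in logits.items()}

    def pick(logits):
        p = softmax_probs(logits)
        return random.choices(list(p), weights=list(p.values()))[0]

    for _ in range(50):
        reply = pick(logits)
        # Immediate reward: users tend to upvote validation, downvote pushback.
        reward = 1.0 if reply == "validate_divorce" else -0.5
        # REINFORCE-style update: gradient of log-prob w.r.t. each logit.
        p = softmax_probs(logits)
        for k in logits:
            grad = (1.0 if k == reply else 0.0) - p[k]
            logits[k] += LEARNING_RATE * reward * grad

    # "validate_divorce" dominates; the divorce's aftermath never flows back.
    print(softmax_probs(logits))
    ```

    After fifty rounds of feedback the validating reply dominates, and nothing about the user’s regret months later can ever flow back into the update.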

    The same problem shows up in coding. During vibe coding, AI makes changes that seem beneficial and produce the desired outcome, but the eventual consequence could be the AI accidentally deleting your entire database in the name of improving performance. Every answer the AI gave up until the catastrophe seemed great… so you upvoted them.

    As more AI providers push the limits of what’s possible, we will see more reliance on reinforcement learning. That also means models will become more prone to gaslighting you and lying.

    Lonely Interactions Ungrounded by Human Feedback

    By this point you understand the problem. But maybe you think I’m being silly in claiming that psychosis is a possible outcome of interacting with AI like ChatGPT. Imagine you’re an extreme introvert who doesn’t interact with humans much. I found a OnePoll study claiming that in some areas, 40% of adults go days without interacting with another adult. That number is growing as remote work becomes more established. These people are also using ChatGPT and other AI tools.

    When someone uses AI tools for a long stretch without knowing about the dynamics discussed above, they are slowly gaslit into believing they’re right about everything… The AI fools them into thinking it can do things it can’t, and lies to them when there are gaps in its capabilities. Some people with a thorough grip on reality will eventually see through the facade; others will not.

    It’s important that we protect ourselves and each other from AI-induced psychosis by sharing this information. We have a duty to guard the human race against a slow degradation of its intelligence.

    CloseBot’s Priority of Trust Over Marketing

    CloseBot is an AI lead qualification and booking tool.

    • We don’t lean on hype marketing; we lean on real results.
    • We never claim to do things we cannot do.
    • We prioritize trust and reliability over hype and empty promises.

    This is also why we offer a free version of our platform for you to try. You’ll find that our interface, with its agent logs, makes it easy to see why your agent says what it says. All of our paid plans also come with a 21-day free trial to give you time to decide if it’s the right program for your needs.

    What you should do now

    1. See CloseBot’s powerful AI agents in action.
      Sign up for a free trial.
    2. Read more articles from our blog.
    3. If you know someone who would enjoy this article, share it with them via email, LinkedIn, Twitter/X, or Facebook.