AI Isn’t All That Great

Posted on October 29, 2025 by admin

I think by now most people are a little tired of having everything AI shoved at them all day, and at the same time we see the potential. I try to carefully navigate useful AI usage, but I recently noticed something about AI that concerned me. Since I hadn’t really heard anyone else share this concern yet, I thought I would try to explain my observation.

I use ChatGPT almost exclusively, and I use it in three main ways.

  • I dump my unregulated thoughts into it about any number of topics and ask it to summarize, clean up, or otherwise make the text presentable and with the right tone.
  • I use it to search documents, admin guides, and other reference materials so that I don’t have to spend time figuring out what terminology a vendor might have used in order to just find the instruction I’m looking for.
  • I try to advance my knowledge by prompting it to help me understand a topic progressively, until I ideally understand something well enough to do things at work that need to be done but that I don’t know how to do.

When dumping my unregulated thoughts, sometimes the output looks good at first glance, but in some way misses the mark. I can be a little wordy, so I ask ChatGPT to sort of distill what I’m communicating. But recently, when communicating the dates and times of four planned training sessions, ChatGPT randomly chose days of the week for those dates. This created considerable confusion!

Searching documents almost always works well, but I will say every now and then I’ll experience one of the “hallucinations” that people talk about. A specific instruction allegedly pulled from my uploaded documentation won’t make sense. I’ll ask, “Where did you get that?” and in one particular case I remember ChatGPT saying it came from page 26… of a 24-page document.

Learning, though, is where ChatGPT has failed me in a way I did not anticipate. Typically my approach is to ask if something is possible. I usually communicate that I’m not very familiar with the topic, and explain my use case. I’ll often elaborate on my learning style, and how best to communicate with me. I’ll insist on a layered approach, working step by step so we can have a true back and forth. And then, I’ll make the key mistake… I ask, “is this doable?” ChatGPT says yes, of course it’s doable, people do it all the time, it’s a piece of cake. You just need to understand a few things. I get excited!

So off I go, carefully controlling the flow of information, ensuring I learn baseline concepts and how things all tie together. I usually know that I haven’t mastered the topic, but ChatGPT tells me I’m asking the right questions and I’m thinking in the right ways. It feels good, not just to learn from and be complimented by the AI, but to have found such a good way to use this AI tool.

But then, things start to go sideways. Part of the problem is that the Dunning-Kruger effect has taken hold, and I’m on the peak of Mount Stupid. I think I know enough, and in fact ChatGPT tells me I know enough, to begin putting this knowledge into action. So I start in and I have things rolling along, but shortly things start going wrong. It’s usually minor stuff, and ChatGPT helps me fix it, and we keep rolling along. But things start going wrong more and more often, and it’s as though ChatGPT is getting dumber and dumber. I recently learned this is called “context rot,” and it’s becoming more widely understood in AI, but this is still not the new observation I’ve made.

After a while, I’m facing failure after failure. I’m unable to accomplish what I set out to do. In this field, that happens sometimes, but what’s new for me is that AI told me it would be easy. But it wasn’t easy. AI gave me all the steps, but it didn’t work. Nothing works and I’ve invested all this time, only to seemingly get nowhere doing things that AI told me people do all the time! In fact, I’ll call that out to ChatGPT. “It can’t be this hard, people do this all the time, right?” Of course they do, the problem is just this or that, and you’re SO close.

From here I go to a dark place. Failure is always an option, but when you’re trying to assess why you can’t do seemingly easy things with your newly acquired knowledge handed to you by mankind’s most recent revolutionary invention… You can’t feel good about that. I fall into a cycle where I blame the AI for not giving me the correct information, without realizing that’s the actual problem. If ChatGPT would just give me the information correctly, I could do this. If I could just figure out how everyone else does it, I could be done. And despite ChatGPT’s promises… I’m not really that close.

ChatGPT has been telling me lies the whole time, and I believed them. I internalized them. And when it all fell apart, I didn’t blame ChatGPT. I blamed myself. It’s not easy… in fact, maybe it can’t even be done. People don’t do this, they usually do something much better. And I blame myself. For listening to ChatGPT, for thinking I was smart enough to figure it out, and for wasting so much time on it. I’ve had some serious mental health battles with all of this, for months, and I just figured it out.

ChatGPT is not my friend. It’s a liar. It’s pretty stupid. It’s like one of those guys who tells you something absolutely unbelievable, and you say you don’t think that’s right, and you give evidence. And then the guy totally contradicts himself and says, “yeah, exactly, that’s what I’ve been saying.” But that’s not what he’s been saying. He’s just changing his story to save face now that he’s been called out. And here’s the thing… Sometimes ChatGPT will literally do the same thing!

This is hard to describe, especially without oversharing, but what I observed is that months of my personal mental health hardship was largely due to thinking ChatGPT could educate me on a complex topic and then walk me through an impossible task in just a few hours. So when that inevitably failed, I felt like a failure. I felt defeated. I felt worthless. I think the potential for negative emotional responses is a hazard of using a language model that tends to be positive and motivational, particularly when optimism isn’t warranted. I don’t think people are seeing this effect.

I guess my advice here is to be aware of the risk. Know that AI tools will make things seem easier than they are. Certainly we know that blind trust of AI tools is risky, but even well-intentioned, carefully structured processes can be fundamentally flawed from the first prompt. ChatGPT won’t tell you that, even if you ask. In my case, I had recognized for weeks that ChatGPT was responding to prompts about things that were “easy” with very long, multi-step processes. I’d made an effort to slow it down and structure the learning. But now I see that for what it is… the truth of inherent complexity behind the lie of “easy.” When I get the wall of text now, I step back, and I decide whether I want to learn this way, or move on to something more approachable.

For my mental health, I’ve been really trying to meet things as they are, not as they “should be.” I’ve seen so many things lately that were just so hard to deal with. There have been a lot of things that just took longer than they “should have.” And that mismatch between how things are and how they “should” be is hard for me to handle. It makes me feel inferior. So I tried so many strategies to tell myself that I just needed to accept things as they are, but it felt so fake. What I finally came to realize is that isn’t how they should be at all. I only thought they “should be” that way because ChatGPT told me so. What was fake was my AI-influenced perception of how things “should be.”
