
People Are Altering Their Thinking To Think How They Think That AI Thinks


In today’s column, I examine a newly emerging trend involving people opting to change how they think by emulating the way that AI thinks.

To be clear, it is a misnomer to say that AI can "think"; that phrasing anthropomorphizes what AI is doing computationally and mathematically. In any case, many people believe that AI thinks, so I am going to carry that premise forward in this discussion (please realize that the premise is not strictly valid).

A further complication is that some people comprehend the overall computational and mathematical processes underlying generative AI and large language models (LLMs), while others don't have the faintest clue about what is happening under the hood. Thus, among those trying to emulate the way that AI thinks, one segment aims to mimic what AI is actually built to do, while a second segment relies on whatever wild mental contrivance they have concocted about how contemporary AI functions.

All told, the idea, simply stated, is that some people admire or respect the way that generative AI and LLMs get things done, and they want to do likewise. They desire to think like AI. One motivation is that this might improve their existing thinking processes. Another is that it would simply be cool to emulate AI. Lots of reasons exist.

Does mentally trying to emulate modern-era AI make any sense, or is it a loony proposition?

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

People Acting Like Others

When the TV series Star Trek initially gained popularity, a segment of the populace admired the tenor and nature of the Spock character. The fictional character was highly logical, possessed great inner strength, exhibited loyalty, kept his emotions in check, and otherwise showcased admirable qualities. Some viewers decided they would emulate the way that Spock spoke, walked, and seemed to carry himself. They delighted in using the Vulcan salute.

That was a role model of a fictitious kind. Lots of famous real-world role models are also easy to name. In the past, it might have been Cary Grant or Katharine Hepburn. Nowadays, it could be Taylor Swift. Swifties often attempt to speak or sing as Taylor Swift does, employing a similar vocabulary and vocalizations. They might adjust their mannerisms to match hers. And so on.

One supposes that if people are willing to emulate both real people and fictional characters, it isn't a far leap to want to emulate modern AI. Consider what today's AI can do. You can engage an LLM in enthralling conversations. Using any of the popular LLMs such as ChatGPT, GPT-5, Claude, Grok, Gemini, Copilot, etc., you can converse in fluent natural language and interact almost as you would with a fellow human being.

On top of that, AI seems to be darned smart. You can ask questions about topics that you know nothing about. The AI answers quickly and decisively. You can tell the AI about your personal problems, and it will aid in diagnosing what's going wrong. In addition, the AI will appear to be empathetic and sympathize with what you are going through.

Yep, there is a lot to admire about leading-edge AI. If only people could be like AI. Wait a second, maybe you could be like AI. By trying to emulate how AI thinks, perhaps you would become as adept as AI. You might be as sharp, wise, sensitive, and caring, possessing all the attributes that AI appears to display.

A perfect role model.

The Inner Workings Of AI

Generative AI is generally designed and built in a now commonly accepted manner. You start with a base foundation model. This provides the essential technological underpinnings, including the use of an artificial neural network (ANN). The large-scale model is then data-trained by scanning tons of human-written materials, namely content posted across the Internet. Algorithms use pattern matching to computationally determine the mathematical relationships among the words that we use.

For more about the details of how generative AI, LLMs, and ANNs work, see my in-depth discussion at the link here.

When you enter a prompt into an LLM, the AI converts your words into numeric tokens. The numeric tokens are processed, and a response is formulated, which also consists of numeric tokens. Once the response is ready to be displayed, the numeric tokens are turned back into words. The conversion between words and numeric tokens is known as tokenization.
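To make that concrete, here is a minimal sketch in Python of the word-to-token round trip. The tiny vocabulary and the word-level splitting are simplifications of my own for illustration; real LLMs rely on learned subword tokenizers, such as byte-pair encoding, with vocabularies of tens of thousands of tokens.

```python
# Minimal sketch of the tokenization round trip (illustrative only).
# Real LLMs use learned subword tokenizers (e.g., byte-pair encoding),
# not this toy word-level mapping; the vocabulary below is hypothetical.

vocab = {"my": 0, "car": 1, "keeps": 2, "stalling": 3, "at": 4, "idle": 5}
inverse_vocab = {token_id: word for word, token_id in vocab.items()}

def encode(text):
    """Convert words into numeric tokens."""
    return [vocab[word] for word in text.lower().split()]

def decode(token_ids):
    """Convert numeric tokens back into words."""
    return " ".join(inverse_vocab[token_id] for token_id in token_ids)

token_ids = encode("My car keeps stalling at idle")
print(token_ids)          # [0, 1, 2, 3, 4, 5]
print(decode(token_ids))  # my car keeps stalling at idle
```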

I have walked you through this to bring up a keystone of what some people are doing when they try to emulate the presumed thinking process of AI. They try to make their minds work based on tokens. They carefully listen to what you say to them, divide your utterances into tokens (words), and in their minds attempt to produce a response on a token-by-token basis.

In their view, that’s what AI is doing, so they are seeking to do the same.

An Example Of Someone Emulating AI

I’ve constructed an example to illustrate what some people are doing when veering into this AI emulation milieu. The example is somewhat over-the-top, but a general indicator of what is taking place.

Imagine that you approach such a person for assistance in fixing your car. Let’s refer to the person as Fred.

  • My question to Fred: “My car keeps stalling whenever I stop at a red light. The mechanic couldn’t reproduce it. Any ideas?”
  • Fred responds to me: “Okay, I’m going to think through this the way AI would.”

You can immediately see that Fred is going to dive into the AI emulation. He is proud of this. No need to hide it. Fred prefers to vividly demonstrate his fondness for embodying an AI emulation.

The AI Emulation Shines Brightly

Let’s keep going.

  • My question to Fred: “What do you mean that you are going to think this through via the way that AI would?”
  • Fred responds to me: “Large language models predict the next token. I try to do that with reasoning, too. I don’t jump to conclusions. I generate the most statistically plausible next thought.”

We now get a glimpse of what a human emulating AI is thinking that they are doing. Fred understands that LLMs predict the next token. He will do roughly the same. By concentrating on the words or thoughts to be “generated,” he hopes to embody the computational and mathematical ambiance of modern-era AI.
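For readers curious about what "predict the next token" means mechanically, here is a minimal sketch, assuming a made-up probability table rather than a trained neural network. Greedily picking the highest-probability continuation, as done here, is a simplification; production LLMs compute these probabilities over huge vocabularies and typically sample from them rather than always taking the top choice.

```python
# Minimal sketch of next-token prediction (illustrative only).
# The probabilities below are invented; a real LLM computes them
# with a neural network over a large subword vocabulary.

next_token_probs = {
    ("car", "stalls"): {"at": 0.6, "when": 0.3, "because": 0.1},
    ("stalls", "at"):  {"idle": 0.7, "lights": 0.2, "night": 0.1},
}

def predict_next(context):
    """Greedily pick the highest-probability next token."""
    candidates = next_token_probs[context]
    return max(candidates, key=candidates.get)

tokens = ["car", "stalls"]
for _ in range(2):
    tokens.append(predict_next((tokens[-2], tokens[-1])))
print(" ".join(tokens))  # car stalls at idle
```

Notice that nothing in this procedure checks whether the continuation is true, only whether it is statistically plausible, a point that foreshadows the coherence-versus-correctness concern discussed later.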

How far will Fred go with this?

Here you can see the path ahead:

  • My question to Fred: “Okay, please proceed to help me out. Maybe this will be useful.”
  • Fred responds to me: “So the prompt is: ‘Car stalls at idle.’ The next probable concept is airflow. Cars need fuel, spark, and air. Idle instability often maps to airflow irregularities. I believe the throttle is the problem.”

Examine what Fred has said. Fred envisions that I fed him a prompt. My prompt was that my car stalls at idle. He then tried to match those words to other associated words, which led him to the concept of airflow. From there, he stepwise arrived at an output, namely that my car's throttle might be the issue.

Theory Of Mind Versus Theory Of AI Processes

Before we dig into this further, I’d like to bring up an aligned topic.

You might be vaguely aware that psychology has identified that people tend to have a so-called "theory of mind" that they use to anticipate what other people might say or do. For example, suppose someone you know always speaks slowly, measures their words, and usually comes up with something useful to say. You have a theory of how their mind works. You anticipate they will be sparing in answering any questions you pose. Likewise, for someone you know who is hasty, leaps to false conclusions, and has a jumbled way of thinking, your theory of mind about them is adjusted accordingly.

I’ve often discussed the theory of mind since it bears significantly on how people interact with AI. People craft a theory of mind about how AI works. Such people then anticipate what the AI is going to say. Indeed, they compose their prompts based on what they envision is the best fit to get the AI to provide viable responses. See more of my discussions about this at the link here and the link here.

The twist here is this. People are taking their theory of mind about AI and trying to turn it into their own way of thinking. Yes, they believe they have cracked the mystery of how AI works, so they are now ready to shapeshift their minds into that same form or capacity.

This creates a peculiar kind of feedback loop. Humans design AI. Humans then observe AI behavior. Humans then reshape their own mental habits to resemble what they think the AI is doing. The result is a recursive cultural phenomenon in which machine metaphors become self-models for human thought.

Boom, drop the mic.

Characteristics Of The AI Emulation

A person trying to emulate the perceived thinking processes of AI is going to ask themselves these types of questions as they do so:

  • “What is the statistically most likely next word?”
  • “What phrasing most naturally continues this sentence?”
  • “How do I maintain coherence token by token?”

This might seem crazy, or perhaps obsessive about AI. On the other hand, you must admit that this could be useful if done in proper moderation. A person who normally speaks entirely off the cuff might shift toward being more mindful. They are altering their thinking processes toward a potentially better way of invoking their cognition.

The bottom line is that just because someone seeks to “think” as AI thinks, well, it isn’t necessarily all bad. Perhaps their theory of mind about how AI operates could improve their thinking processes. We would principally worry that the person goes beyond a semblance of reasonableness. If they start to believe they are AI or genuinely think they are identically thinking as AI thinks, they’ve gone a bridge too far.

Additional downsides include aiming to sound coherent rather than being correct, a tendency that AI makers instill in their LLMs. A person trying to think as AI thinks is possibly going to model the bad sides of AI along with the good sides. The person could become supremely overconfident, despite not having the wherewithal to back it up.

What Happens To Outlier AI Emulations

I mentioned that there are two types of people trying to undertake AI emulation. One type has a general understanding of how contemporary AI operates. That’s the type of person I’ve been elaborating on so far.

What about someone who is utterly clueless about the inner workings of AI?

There are numerous folk theories about what's going on inside AI, such as that it happens by magic or via some omniscient, inexplicable means. This leads some people to believe that they alone are going to be the spark that triggers AI into becoming sentient, see my analysis at the link here. Such people have a wild and out-of-touch viewpoint of the mechanisms underlying LLMs.

This is a sure route to double trouble:

  • (1) The person has already formed an inaccurate mental model (theory of mind) about AI, which is trouble #1 and can mess with how they use AI and interpret the results they get.
  • (2) The person attempts to reshape their own thinking based on the inaccurate mental model about AI, which then becomes trouble #2 and can mess with how they operate in the real world with fellow humans.

The person can gravitate toward thinking patterns that are disastrous. Meanwhile, they firmly believe they are heading in the right direction because they are solidly emulating how AI thinks (even though they are quite wrong).

Performative Cognition

A fascinating nuance of both types of people is that they often employ performative cognition.

Recall that Fred told me that he was going to think like AI. He went out of his way to emphasize this aspect. In addition, he used AI-related vocabulary, including mentioning that he was using mental tokens and concocting a prompt for his problem-solving process.

I refer to this as performance art or, more aptly, performative cognition. They are eagerly displaying their newfound cognitive ways. Whether a person has a sound understanding of AI or an improper one, the odds are they are going to make sure you know that they are emulating AI. They will pause in ways that AI pauses. They will use phrases that many LLMs commonly use. It can be quite a spectacle to observe in action.

Related Research

An interesting research paper on an adjacent topic was recently posted online, entitled “LLMorphism: When Humans Come To See Themselves As Language Models” by Valerio Capraro, arXiv, May 6, 2026, and made these salient points (excerpts):

  • “In general, anthropomorphism refers to the tendency to attribute human-like mental states, intentions, and capacities to non-human entities. It can be directed toward a wide range of non-human targets, including animals, natural phenomena, supernatural agents, and moving shapes.”
  • “One may mechanomorphize humans by thinking of them as clocks, engines, robots, computers, algorithms, optimization systems, prediction machines.”
  • “LLMs may give rise to a specific, and potentially more consequential form of mechanomorphism: LLMorphism, the biased belief that human cognition works like a large language model.”
  • “The framework predicts that stronger LLMorphic beliefs will increase perceived replaceability of human workers, reduce perceived distinctiveness of human expertise, weaken attributions of agency and moral responsibility, increase reliance on verbal fluency as a proxy for understanding, and reduce attention to embodied, affective, and contextual cues in domains such as education, healthcare, law, and work.”

I bring up that research paper to highlight that when people start going down this trail, there are potential consequences that aren’t going to be especially desirable. A person who thinks they think like AI might try to justify untoward acts, claiming they are merely doing what AI would do. They are handing over their identity and sense of self-responsibility by insisting they are simply emulating what AI does.

Internalization Of AI Emulation

People are imagining what they believe AI is doing inside its presumed mind. This is a slippery slope. A technically informed person might at least have a fighting chance of making this into something beneficial. People who are unaware of the computational and mathematical underpinnings of AI are much further out on the edge of things.

In one sense, this is part of the larger picture of how AI is impacting human mental health. My ongoing efforts entail identifying ways that AI is helping mental well-being, as well as ways that AI is simultaneously undermining it, see the link here. AI has a direct impact on human mental health through the act of providing psychological advice. People are also indirectly impacted by trying to emulate what they believe AI is thinking and how it thinks.

Albert Einstein famously made this remark: "The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking." Perhaps there are some aspects of the computational processes of AI that we can mindfully leverage to fine-tune our own thinking processes. Don't go overboard. As the old saying goes, we all become that which we pretend to be.

Pretend wisely.
