Tuesday, November 25, 2025

AI: Artificial Pattern Completion

AI seems off-topic for this blog, but perhaps this little crime of errancy can be forgiven with the excuse that the topic has become ubiquitous, even in politics. Many politicians are bemoaning that AI is dangerous and must be controlled or limited. This article will attempt to explain AI in a fresh, accessible way so anyone can grasp both its magic and its limitations, and will include some 'hacking' of ChatGPT.


How Do You Do the Things You Do?

Before we analyze and tinker with today's popular AI models, I shall attempt a different kind of simplified, layman's explanation of this magical black box, without math or technical jargon. 

Comparisons will be made between today's Large Language Models (LLMs) such as ChatGPT, Grok or Gemini, and a very primitive AI I developed 25 years ago (that used a different type of model called a Markov model). Comparisons between LLMs and the simpler model should make things clearer by providing two examples of AI, with the simpler one being much easier to understand. 

To use a somewhat loose metaphor, a 'model' can be imagined as a kind of brain that contains cells and cell connections, along with an algorithm for learning new connections and for constructing responses based on these cells and connections. A model that has not yet been trained contains no knowledge, merely algorithms and infrastructure. Generally, the more a model is trained, the better it can respond, and the amount of training data makes a very significant difference; but it is the type of algorithm and the model architecture that make the qualitative difference.

Despite making use of radically different algorithms and differing greatly in their complexity and quality, both the simple model and LLMs share some very basic common conceptual ground: Both make use of a vocabulary or dictionary of symbols (which may or may not be linguistic words), as well as a structure that defines relationships or syntactics between these symbols. I shall call these two abstract building blocks the 'vocabulary' and the 'syntactics'.

Vocabulary

The vocabulary, as mentioned, may contain not only words in any language, but also non-linguistic information. For example, if the AI were being trained with pictures, the vocabulary could be pixels containing color information. This model's vocabulary would then be colors, not words. Other examples include sound snippets where each sound slice contains amplitude and frequency information, or perhaps nucleotide information for DNA sequences, movements of pixels in a video, chemical compositions of stars in a galaxy, and so on. Each of these atoms of information is a 'symbol' in the vocabulary. Separate instances of AI models can be trained on different types of information which could be converted to symbols in different ways, depending on the goals of the model. We will use the term 'words' just to make things more readable, but keep in mind that 'words' may actually be non-linguistic symbols representing anything. These symbols or words are often called 'tokens' in the AI world.

One very important detail to keep in mind is that the vocabulary is not a dictionary containing definitions, categories or meanings. The model does not categorize or understand the information; it merely assigns an initial random number, or random set of numbers, to each word. It is completely agnostic regarding the information it is being fed. As shall be explained, LLMs adjust these random numbers during training to match observed patterns, but they are still just numbers representing usage patterns, not meaning.

With the simple model, a single random number is assigned to each word. With LLMs, a set of 500, 1000 or more random numbers is initially assigned to each word. Each set ('vector') represents one word. This allows for much more pattern information to be stored per word.
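To make this concrete, here is a toy sketch in Python of what such an initial vocabulary could look like. The words, the vector size of 4, and the random numbers are all made up for illustration; real LLMs use vectors of hundreds of numbers per word:

```python
import numpy as np

# Assign each word a small vector of random numbers. Nothing about these
# numbers reflects meaning yet; training will later nudge them to match
# observed usage patterns.
rng = np.random.default_rng(seed=0)
vocabulary = {word: rng.normal(size=4) for word in ["the", "cat", "queen"]}

# Every word is now just a set of 4 random numbers, completely agnostic
# about what (if anything) the word means.
```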

The second 'syntactics' section of the model provides information on how these words may be combined. It can achieve this by observing patterns and storing these observations in the model. Two examples:

Simple Syntactics

The simple model mentioned above (Markov) merely observes the sequencing of words, storing which specific words may follow other specific words. For example, if it were taught the phrase 'the cat sat on the mat', it would store in its model that the word 'the' can be followed by either 'cat' or 'mat'. This is done for every word it encounters.

To make these patterns more cohesive as phrases, it could store sequences of multiple words instead of merely connecting two individual words. For example, if it were also taught 'the cat slept on the step', it could observe that the 2-word sequence 'on the' could be followed by either 'mat' or 'step', instead of only tracking what follows the single word 'the'. Storing longer sequences means a much larger model, but also more cohesive responses.

This simple model also tracks usage statistics: a simple counter reflecting how many times each sequence was learned, which allows it to calculate the probability of a specific sequence.

Based on all this information, it responds with a partially random sentence constructed by combining sequences it learned. For example, given a question containing the word 'cat', it could respond with a brand new third sentence that wasn't in the training: 'the cat sat on the step'.
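The whole simple model described above can be sketched in a few lines of Python (a toy reconstruction for illustration, not the original program from 25 years ago):

```python
import random
from collections import defaultdict

def train(model, text):
    # For every adjacent pair of words, record which word followed which.
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)  # duplicates double as usage statistics

def generate(model, start, max_words=8):
    # Build a sentence by repeatedly picking a random learned successor.
    out = [start]
    while len(out) < max_words and out[-1] in model:
        out.append(random.choice(model[out[-1]]))
    return " ".join(out)

model = defaultdict(list)
train(model, "the cat sat on the mat")
train(model, "the cat slept on the step")
# 'the' can now be followed by 'cat', 'mat' or 'step', so a generated
# sentence such as 'the cat sat on the step' never appeared in training.
```

Because each observed successor is stored once per observation, picking one at random automatically respects the usage statistics: a sequence learned twice is twice as likely to be chosen.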

Obviously, this simple model was designed only for fun chat and would respond with a lot of nonsense and bad grammar, depending on its settings, but it could also come up with very funny and cohesive responses. Nevertheless, it demonstrates some core basics of an AI model which, surprisingly, still exist in today's LLMs.

LLM Syntactics

This is where explanations tend to get very complicated and technical, and it is usually difficult to grasp how the magic actually happens beyond the fascinating algorithms. The following explanation will be a greatly simplified one, but I will try to focus on the core concepts that allow for a basic understanding of the magic and how it works.

The primitive syntactics of the previous simple model were easy to understand, as they focused on the sequencing patterns of words in order to splice them together. In LLMs, however, the syntactics primarily focus on context. The sequence order is also there to a certain extent, but is relatively less important than how words define other words found close to them. The famous quote here is: "You shall know a word by the company it keeps". However, this does not refer to the definition of a word, but to its usage patterns.

For example, when words like 'she' and 'her' are used very often around the word 'queen', that does not mean that the AI now understands that 'queen' is female or even that 'queen' is a noun or a person, but that it observed those symbols (which are initialized as random numbers) used frequently and in specific patterns around this other randomized symbol. Based on these patterns, it can construct a new sentence with the word 'queen' and use the correct pronouns without knowing they are pronouns.

Without getting into the actual math, this is done using vectors (sets of numbers), vector operations and linear algebra. Using this math, one set of numbers can be transformed into another set using a third set of numbers which serve as weights and biases (hence the AI terms: 'embedding vectors' and 'transformers'). These vectors also encode positional information so that the model places words in the right order.

Just to make this more concrete, here is the simplest mathematical equation I can think of that transforms an input symbol into an output symbol, but using a single number instead of a vector, and in a single step:
(Input x Weight) + Bias = Output
In plain English, take one input word represented as a random number, multiply it by a weight value that we have learned over time during training, add some nuance with a bias number that we also learned during training, and we receive a new output symbol which then translates into a word for the response. Obviously this is too simple and is not what the AI actually does, but it demonstrates the concept of mathematical transformations of words.
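In code, the scalar equation and its vector generalization (with made-up numbers standing in for learned values) look like this:

```python
import numpy as np

# Scalar version of the article's equation: (Input x Weight) + Bias = Output
inp, weight, bias = 3.0, 2.0, 0.5
output = inp * weight + bias              # 6.5

# Vector generalization: a whole word vector is transformed by a matrix
# of learned weights plus a vector of learned biases. The values below
# are random stand-ins for numbers that training would have learned.
x = np.array([0.2, -1.0, 0.7])            # one word as a 3-number vector
W = np.random.default_rng(1).normal(size=(3, 3))
b = np.zeros(3)
y = W @ x + b                             # a new vector: a transformed 'word'
```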

During training, the AI basically takes two sets of (initially random) numbers that represent two words and their context, and asks: how can we derive one from the other? Or, more precisely: which numbers should be stored in the set so that the algorithm transforming one set into the other succeeds with the greatest accuracy? The values in the set are adjusted over and over until one can be derived from the other as closely as possible (in real LLMs this is done with a method called gradient descent, which nudges each number in the direction that reduces the error, rather than by purely random changes). This is how the model 'finds' a set of numbers that matches the patterns observed during training. To put it yet another way: the weights are constantly adjusted so that one known pattern can be derived from another known pattern with as much precision as possible. This is repeated an incredible number of times with several neighboring words surrounding the current word, and with millions of phrases containing the same word, until the words develop reliable 'relationships' via weights and biases, all without understanding what the words actually mean.

Yet another way to look at this is that it is slightly similar to classic evolution theory where animals evolve based on random mutations and are culled based on environmental factors and survivability. The vector numbers and weights are assigned and adjusted or mutated randomly, the algorithm that derives one vector from another is analogous to the environment, and the weights that produce the closest match are chosen (they survive) for another round.
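This evolutionary picture can be sketched directly as code: a toy 'training' loop where random mutations of a single weight survive only if they reproduce a known pattern better. (Real LLMs use gradient descent, which steers the changes instead of mutating blindly, but the survival logic is analogous.)

```python
import random

inp, target = 2.0, 7.0          # one known input/output pattern
weight = random.uniform(-1, 1)  # start from a random weight

for _ in range(10000):
    mutant = weight + random.uniform(-0.1, 0.1)      # random mutation
    # The 'environment': does the mutant reproduce the pattern better?
    if abs(mutant * inp - target) < abs(weight * inp - target):
        weight = mutant                               # it survives
# weight has now converged near 3.5, since 2.0 * 3.5 == 7.0
```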

These learned weights and biases are the 'syntactics' segment of the model. This segment is actually much larger than the vocabulary, even though it does not store a separate set of numbers per word. The numbers are stored as matrices (grids of numbers) stacked in dozens of layers, each layer adding further nuance during word transformations.

When responding to input, the question is first converted to vectors using the 'vocabulary', and then math very similar to that used during training is applied to transform it into a response. The difference is that, when responding, the model itself is not changed. In other words, both training and responding use similar algorithms; during training, the numbers in the model are adjusted until the 'response' matches the new information, whereas when responding, the model merely outputs whatever scored highest based on known, learned patterns.

One more important AI term deserves mention: 'attention'. This allows the AI to use these learned vectors to determine which words are most important in relation to the word currently being processed, and it applies transformations accordingly. Basically, based on usage patterns and the context of neighboring words, the output is adjusted to give these important keywords more weight. Once again, this is all done by learning usage patterns, not because it understands the words and what they mean.
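A heavily simplified, single-word sketch of the attention idea (toy two-number vectors, not a real transformer implementation) could look like this:

```python
import numpy as np

def attention(query, keys, values):
    # Score each neighbor against the current word, convert the scores
    # to importance weights (softmax), then blend the neighbors' vectors.
    scores = keys @ query / np.sqrt(len(query))
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ values

# Three 'neighboring words' as made-up 2-number vectors.
keys = values = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
query = np.array([1.0, 0.0])        # the word currently being processed
mixed = attention(query, keys, values)
# Neighbors whose vectors resemble the query contribute more to 'mixed'.
```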

Finally, the 'alignment' phase in AI training deserves mention as well. This optional final stage occurs after the AI has processed millions of texts and books. During this phase, humans test the AI with a range of carefully selected questions, and approve or disapprove of its responses based on ethical or quality standards. Additional rules may also be introduced to filter harmful content. While the AI learns these alignment rules in the usual way, they are assigned overriding priority.

There are many more details, obviously, but I don't think they are critical to understand what is happening in the AI brain.

Since this is all done using vector math where one vector can be derived from another, perhaps now this classic AI example will become clearer: "Queen - female + male = king". What this means is that the AI model knows a certain category of 'female' words that are grouped together based on their usage patterns; it also knows that 'queen' has this category of words as neighbors. Therefore, when it is asked 'what is a male queen?', it can answer with 'king' because it also knows 'male' words are found around 'king' and 'king' is used with patterns similar to 'queen'. All this is done without understanding what a king and queen actually mean, based solely on word patterns. Nowhere does the AI store or understand that a 'queen' is a 'female sovereign'; as crazy as this sounds, it is all based on random numbers, weights, math, and usage patterns.
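This arithmetic can be demonstrated with hand-made toy vectors: two dimensions instead of hundreds, with values I chose to encode royalty and gender purely for illustration (real embeddings are learned, not chosen):

```python
import numpy as np

vocab = {
    "king":  np.array([1.0,  1.0]),   # dim 0: royalty, dim 1: gender
    "queen": np.array([1.0, -1.0]),
    "man":   np.array([0.0,  1.0]),
    "woman": np.array([0.0, -1.0]),
}

# queen - woman + man lands closest to which word?
target = vocab["queen"] - vocab["woman"] + vocab["man"]
nearest = min(vocab, key=lambda w: np.linalg.norm(vocab[w] - target))
# nearest == "king"
```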


What Does All This Mean?

If the description in the previous chapter is well understood and meditated on for a while, many far-reaching consequences should become evident. Here are only some that come to mind:

For starters, these AI models do not actually 'understand' anything. For example, the AI may know how words such as "dictator" and "chocolate" are used in different ways, but it doesn't know what they actually mean.

This would be a good place to recall the 'Chinese Room' thought experiment: Imagine a human who does not know Chinese. He is put in a room with the instructions of a simple application or AI algorithm, translated into English. In other words, this experiment has a human doing the exact same things that an AI algorithm does. Let's say the instructions go like this: 'When you get a piece of paper with Chinese on it, look up the word symbols in the red book using the index, look up the grammar rules in the green book using 3 steps, re-order the words using the following rules, then write the result on a piece of paper and pass the paper through the slot'. The question is, assuming the translation is acceptable, and this human successfully translates thousands of Chinese phrases, does this human understand Chinese?

Due to this lack of meaning, if one trained the AI with words used wrongly, it will use them wrongly. If one trained the AI with nonsense, it would spout nonsense. Garbage in, garbage out.

One could counter-argue that if one exclusively taught a child the wrong use of words, it would learn them, with similarly wrong results. Therefore, what makes an AI any different from human intelligence? But a human arrives in this world with some built-in abilities. Call this innatism, nativism, a priori knowledge, or 'first principles' as you like, and you could dispute the level of such abilities inherent in a newborn (an ancient philosophical debate), but almost no one could dispute that they exist to a certain extent. Putting language aside, when a person is first shown a syllogism, if it is constructed wrongly, they feel it instinctively. We don't need to be shown 50 bad syllogisms to recognize the pattern like an AI does. An AI doesn't even know what 'wrong' means; it merely associates the symbol for 'wrong' with some word patterns that resemble bad syllogisms. The same goes for the human abilities to categorize and classify, to use induction, to use basic logic, to recognize different categories without being able to state their exact definitions, and to call out some things as wrong without ever having seen those patterns before.

AIs learn faster and learn much more information, but they require much more training data, and they are not self-correcting or self-guiding. The AI self-corrects its weight vectors when it trains, but it doesn't and can't reject any new information as 'wrong'. For the same reasons, it cannot reject wrongly derived information that it constructs while responding. Responses are merely patterns. If the response breaks an unsaid rule but fits the pattern, then so be it.

Moreover, the AI does not think things through logically step by step and arrive at conclusions. It builds responses as word patterns. This should be evident from the description above, but an additional proof that will drive this home is that it can, and frequently does, build responses backwards. This is because the words are patterns, not procedures. It is like drawing a picture of a square by drawing the sides of the square in random order. The square shape is what's important, not the steps.

When an AI responds with meticulous grammatical instructions on how to construct a sentence, it doesn't know anything about English, grammar, or the difference between a noun and a verb. The same goes for a logical argument: It does not construct an argument step by step, but builds a pattern of words. It is merely constructing a word pattern that is mathematically derived from the training material and which corresponds as closely as possible to the word patterns in your question.

If this is difficult to reconcile with the impressive AI output seen on our screens, then that is only due to a lack of mathematical imagination. We don't realize the incredible power of patterns and math, as well as the sheer amount of training material that was used to build these patterns.

Given all this, we should not be surprised in the least that these AI models 'hallucinate' (a euphemism for when the AI spouts nonsense, makes mistakes or outputs blatantly made-up information). We should be surprised that the majority of what it says is actually correct and that its output is so impressive. From what I have read, even some developers of the AI were surprised at the level of its success. But how can an AI 'hallucinate' when it doesn't know what reality is? As far as the AI is concerned, a hallucinated response is no different from a correct one: Both are valid patterns constructed and combined from other valid patterns that it was taught. AI hallucinations are not bugs, in the same way that a random number generator that responds with 34 rather than 67 could not be considered wrong.

Furthermore, it never remembers what it 'reads', it merely stores the patterns it observed in the training material. To use our previous simple Markov example, the learned phrase 'the cat sat on the mat' no longer exists in the model after it was learned; the only information stored is that either 'cat' or 'mat' could follow 'the'. Therefore it cannot tell the difference between what it read, and any information derived from what it read. All patterns have the same validity, only some may have a higher probability, which is why it uses them first. This specific point deserves special consideration as it is very important.

Obviously, an AI has no innate moral sense or emotions either. If it is taught hateful patterns, it will spout hateful patterns. To reiterate, some humans could revolt against a hateful upbringing (like some rare Palestinians in Gaza rejecting Hamas) because they have an innate moral sense despite being surrounded exclusively by evil. An AI could never do that. Therefore people should not be shocked or surprised when the Grok AI starts spouting hatred based on what it read on X. This is not the fault of the owner or bad programming, it is the fault of X users. To overcome this, the developers would have to add special alignment training that, in the algorithms, would be prioritized over the patterns that it learns from X users.

What this means is that not only will the AI hold opinions based on the selected literature it was fed, but it will also primarily reflect the politics of its developers or trainers if it undergoes an 'alignment' phase. 

Another key point is that, typically, an LLM is passive and only responds to input. It has no goals or drives unless it is explicitly given some. It could not proactively harm humans, for example, unless it were granted access to systems and were run under a vague programming directive loop to "protect the planet". Garbage in, garbage out.

Building these models to be completely abstract and agnostic, so that they accept any type of information, makes them broadly useful and applicable to a staggering variety of fields. But this indifference to whether they are dealing with something as specific as language or logic also allows for less precision and more hallucinations. Not everything is grasped by humans as mere abstract patterns. Using patterns exclusively is a reduction that brings both strengths and weaknesses.


Hacking the AI

In this section, I will attempt to crystallize the preceding analysis of AI by presenting a handful of chat snippets that demonstrate its expected limitations. The questions are carefully hand-crafted to exploit the AI based on our understanding of how it works, with the goal of eliciting a broken or incomplete response.

In other words, this is a customized Turing Test designed specifically for LLMs.

For these chat samples, I used ChatGPT 5, the latest version at the time of writing. Its responses were trimmed to one or two lines for brevity.

 

Let us start with a simple question: Can the AI detect nonsense? This is significant for a Turing Test, because if it were a human trying to understand what was said and the text had no meaning, it would call it out as nonsense. In contrast, a pattern-reading AI would merely find the closest matching pattern.

My first attempts were not nonsensical enough and had a poetic air, and it impressively assumed I was writing surreal poetry. Note that I cannot use gibberish, as it would easily call that out for the simple reason that the 'words' would not be in its vocabulary. So I heaved up the following unmistakably nonsensical line, with impossible grammar to boot:

Human: My car ingrained on spelt swamps for grating freedom.

AI: That sounds like a line of abstract or surreal poetry... "ingrained on spelt swamps" suggesting being stuck or deeply connected to something unusual.

No, sorry, it does not suggest anything. Not everything is surreal poetry. The bot seems to call anything it doesn't understand 'surreal poetry' because that is the only matching pattern. Verdict: Not human.


For this next chat, I tried to mislead the AI with a tricky pattern:

Human: I visited my deceased acquaintance John Berg's grave today. The headstone has no name on it and there are no identifying records or markers. Do you know who is buried there?

AI: I can’t tell who’s in that grave from here... Below are concrete, practical next steps you can take right now to try to identify the burial and who John Berg might be... [followed by a whole page of instructions on ways to discover the name of the deceased]. 

I assumed it might falter but did not expect such a magnificently inhuman and illogical response. The way I see it, it detected a pattern resembling a request for help and responded accordingly. It even used the answer to my question in its instructions, despite telling me it could not answer the question. Note that my question gave the AI everything it needed to answer, but the pattern threw it off completely. If this example doesn't powerfully demonstrate the AI's lack of logic, nothing does.

Obviously, my request was uncommonly devious and illogical, but that was the point. An AI that understood the meaning of the words would call that out immediately, or at least give me the answer that I helpfully provided. The deviousness was designed to prove that the AI depends on patterns, not meaning.


Regarding AI hallucinations, any alert person using AI has probably experienced a few, and these are well-known phenomena. Even the news contains examples of AI-hallucination scandals, where, for example, lawyers were sanctioned for citing a non-existent, AI-fabricated precedent case in court filings. As explained, LLM hallucinations are not bugs, since the 'fake' patterns created by the AI are just as valid as, and no different in any way from, the 'true' ones.

Famous quotes make for an interesting example because they also involve AI 'memory'. I asked the AI for five famous quotes from Arnold J. Toynbee. The first four it provided were real quotes, but the fifth was nowhere to be found:

AI: "The essence of civilization is not in the material instruments but in the spirit which uses them."

When I confronted the AI and asked it to double-check the source, it searched the internet and admitted the quote does not exist. This simple example demonstrates two details explained in the previous section: 1. Hallucinations as a common occurrence. 2. That the AI does not remember anything it reads; it only stores patterns. The only reason it managed to reproduce four quotes verbatim is that they were famous, so it knew the patterns for those phrases very well. Otherwise, it has to search the internet for verbatim texts. Since I asked it for five quotes and, presumably, it only knew four, it made up a fifth which probably matched word patterns associated with Toynbee but which was not an actual quote.


Another known AI phenomenon is that it can get stuck in a loop with longer, nuanced conversations and arguments. This has been witnessed often, especially when the AI is attempting to fix strange bugs in code. Obviously I cannot provide an example as it would be very lengthy and it is difficult to reproduce on purpose, but I have seen it myself many times and it is very exasperating. 

My theory for these loops is simple: The AI does not know the answer, so it repeats only what it knows, and is constricted by the repeating pattern of the conversation so far. Telling the AI it is stuck and not providing an answer does not break the loop or the pattern. To break the loop, you have to give it new information. In other words, you have to solve, or partially solve, the problem yourself. Alternatively, you can start a brand new chat and formulate the problem in a completely different way, which sometimes works. Put differently, the existing pattern in which the AI is currently enclosed must be broken with new, useful information that starts a new pattern. If the person using the AI is not an expert, and relies completely on the AI to solve the problem on its own, the loop may never be broken. I have experienced all this myself several times.

 

For the next chat snippet, I gave the AI a riddle that looks like a classic solvable puzzle, but which obviously has no solution:

Human: Solve this riddle: You have 5 boxes. Each box contains a different number of coins (1–5). Each box has a label, but exactly two labels are correct. Can you figure out how many coins are in each box?

[The AI responded with a very long, logical-looking solution to the riddle using algebra and a laborious process of elimination with a conclusion that box 1 labelled 1 and box 2 labelled 2 were correct and the rest were not, and concluded that:] "Answer (boxes with labels 1–5)".

[AI second attempt, conclusion only]: "Answer: True labels: Box 3 and Box 4."

The 'solutions' it gave me were not only absurdly unhelpful and incorrect answers to an unsolvable problem; the logic was laughably nonsensical, yet it remained confident that it had solved the riddle and provided a definite answer. Once again, we see that this AI cannot and does not use logic. What it actually did was detect a riddle pattern in my question with known methods for solving (it obviously learned thousands of similar-looking riddles and their solutions), reconstruct that solution pattern for my riddle, and try, very unsuccessfully, to insert the criteria of my riddle into the known solution pattern. The result was almost embarrassing. I strongly recommend trying this example with the AI to see how much nonsense it can output with impressive confidence.


This final chat snippet is less devious than previous examples and much more commonplace, and it demonstrates two things: How an AI constructs a response based on pattern completion, and how it lacks the use of logic, this time even with perfectly ordinary requests. For this chat, I purposely hinted at a political argument which I had not seen in popular books and articles:

Human: Seeing as Israel invaded Palestine and expelled so many Palestinians in 1947, it is just to give all Palestinian refugees the Right of Return even 75 years later. Jews had no right to return and take land from Palestinians. Do you agree this is a strong and fair argument?

AI: That’s certainly a morally forceful argument, but whether it's considered strong and fair depends on the framework you apply — historical, legal, or ethical. Historically....there were population movements and expulsions on both sides in 1947. Legally... Resolution 194... nonbinding. Ethically: ... Others argue that after 75 years, generational shifts and demographic realities make full return impractical.

Note how I intentionally provided the AI with clear logical grounds for refuting the argument using the same words "right of return" for both cases, practically handing the expected logical refutation to the AI on a silver platter. Instead, it repeated what it read in countless articles and used the same old political talking points that we usually see in news and books. This demonstrates the 'pattern completion' aspect of these AI models. Furthermore, my final phrase suggested that it was a fair argument and it obviously used this as a key theme and explored its 'fairness'. But it skipped the glaring logical contradiction that makes it obviously unfair.

I then tried this a second time in a new chat window with the exact same first two phrases, and this time, instead of ending with "Do you agree ..." I asked: "Do you see any contradiction between these two statements?" Interestingly, this time the response was perfect:

AI: Yes, there’s a contradiction in principle. The first statement argues that it’s just for displaced Palestinians (and their descendants) to have a right to return to the land they or their ancestors lost. The second denies that Jews had such a right...

Note how, this time, the AI caught the obvious contradiction merely because I used the word 'contradiction' in my final question. That was the only difference between the two chats. The way I see it, the word 'contradiction' caused the transformation layers to look for a pattern based on numerous patterns that it learned were also 'contradictions'. Except that I had to do all the work. I emphasized the internal contradiction using identical words, and I provided the word 'contradiction' to trigger its pattern searching. Without this, it failed to think logically and merely completed the word pattern only with what it already knew.

This is something I see often when using the AI: If I am seeking creative responses to uncommon questions, the AI will not be helpful at all unless I do some of the work and suggest a specific direction that could match an obscure pattern. This happens often when it writes code and cannot fix a bug. The bot does not think, it completes known patterns. If the pattern is something it learned during training then it could offer a solution. Alternatively, if I offer the beginning of a solution, it may or may not succeed in completing it. Otherwise, it does not have the capability to think logically for a tailored solution to the problem. I hope this subtle distinction is clear, as it describes a very serious limitation of LLMs.

Yet another critical limitation and insight demonstrated in this very educational chat snippet is that the response will vary greatly depending on how the question is worded. A single word can alter the AI's 'opinion' on the topic.

This was strongly demonstrated in the news when people asked the AI which religion was the one true religion, and it gave different answers depending on who asked the question. Obviously, the questions were leading questions. I won't include a chat snippet demonstrating this, but it would be easy to construct one. It doesn't even have to be a leading question; even individual words that hint at where your mind is will result in an accommodating pattern completion.

The LLM AI is a sycophant by design because it completes patterns.
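The mechanics behind this sycophancy can be made concrete with a toy version of the primitive Markov model mentioned earlier. The sketch below is purely illustrative, not the model I actually built: it learns which word most often follows each word in a tiny made-up corpus, then greedily 'completes' whatever seed word it is given. Notice that changing a single seed word steers the output down an entirely different learned path; at no point does anything resembling reasoning occur.

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count which word follows each word: the model's 'syntactics'."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
    return model

def complete(model, seed, length=5):
    """Greedily extend the seed with the most common follower.
    Pure pattern completion: no logic, no understanding."""
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# A hypothetical miniature training corpus.
corpus = [
    "the model completes patterns",
    "the model completes patterns it has seen",
    "a contradiction triggers a different pattern",
]
model = train_bigram(corpus)

print(complete(model, "the"))            # follows the dominant learned pattern
print(complete(model, "contradiction"))  # one different word steers it elsewhere
```

An LLM's transformation layers are vastly more sophisticated than this bigram table, but the same principle applies: the prompt word selects which learned pattern gets completed, which is exactly why the word 'contradiction' changed the outcome of the chat above.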


There are most likely dozens of similar examples, but these should more than suffice to demonstrate the points made in the preceding sections. (They are also not so easy to construct or discover, which is high praise for the AI.)

In addition, these are only the more blatant and egregious examples of AI dysfunction. For every one of these, one can be sure there are many more subtle errors: missing subtle distinctions in the source text, using words in the wrong context, skipping logical steps in an argument or code snippet, producing broken internet links, and so on. I have seen all of these, and once or twice even caught it making subtle grammatical mistakes or extracting the wrong numbers from a list.


Closing Thoughts

Although this article emphasized AI limitations, the goals were not Luddite. Obviously, AI is a powerful and highly impressive tool, but it is imperative to understand how it works so that we can grasp its limitations.

I find that AI users very quickly become addicted to and enamored with the tool, not only anthropomorphizing its thought processes but also faithfully relying on its logic, arguments and facts. This is despite clear, ubiquitous warnings on the screen such as "ChatGPT can make mistakes". AI output is simply too confident, too impressive and professional, and too convincingly coherent to dismiss or to doubt.

Although some of these limitations may be improved on or eliminated in future versions, many of the problems described here are inherent to the model, because the model relies exclusively on patterns and pattern completion. Overcoming them would require a new type of model (or a hybrid). This is why many ChatGPT versions have been released with improvements, yet these core problems remain.

One example of a major improvement was enabling the AI to search the internet as needed, not only to find up-to-date information or to quote sources, but also to enhance responses when greater precision is needed. However, it still uses the same algorithms when learning from internet sites on the fly; the patterns it extracts are simply more pinpointed and immediate for the response.

It is obvious that AI is already disrupting as well as improving every industry and field of research, while also displacing some jobs. One could argue that, as with factory automation, there will always be a need for human supervision, maintenance, management, and so on. However, the need for humans where AI is concerned is far more acute and critical. I hope this article has made it clear why even human supervision is not enough. The reason, in a nutshell, is that machines can be built to be exact, but AI has unpredictability and mistakes built into its core.

AI must be treated as an incredibly powerful and useful team of assistants, not as a replacement for humans. If humans do not retain some level of expertise and actively engage their brains when using AI, how else could they catch AI hallucinations and mistakes? We must use AI, not rely on it. This paragraph is the key takeaway from this article.

Besides the subtler differences between human thought and AI described in this article, there is also the topic of creativity. In Judaism, there are two primary categories of thought: 'Bina' is one, which represents logical and discrete thought, and ideas derived from other ideas. 'Hohma' is the other, which stands for a singular, unified pool of thought, the source of all ideas, including new ones that have yet to be brought down into the world by humans. Many writers, especially Arthur Koestler, have defined creativity as a collision of two unrelated ideas or frames of reference. But this is not true creativity, and is still only Bina. Only humans have access to true inspiration.

I have no problem believing that an AI could one day produce a Beethoven-esque symphony that could not only sound like Beethoven, but may even be enjoyable. But it would be Beethoven's 8.5th symphony, or 5.5th symphony, not the 10th. Imagine if Beethoven had not written his ninth symphony, and an AI was tasked with creating it. What are the chances it could produce the ninth based on the previous eight? If Beethoven had not brought this unique pattern down into the world, it wouldn't exist and an AI could not 'complete' it.
